From: Allen, B. S <bs...@la...> - 2011-08-31 16:32:32
Florent,

No idea how difficult this would be to implement in MFS (I believe this may have been discussed on the mailing list in the past), but today you could use ZFS or LessFS for the chunk server drives. Using either of these would let you enable dedup behind the scenes of MFS, per chunk server. Obviously some efficiency is lost versus an MFS-integrated solution, since each filesystem doesn't know about the rest of the MFS environment.

One caveat with this approach: MFS pays attention to the actual blocks used on the drive, so it balances new chunks onto chunk servers based on actual available space. If one chunk server holds more dedup'able data, this can lead to an uneven chunk count across the chunk servers. That isn't necessarily a bad thing, but it is something to watch out for.

Ben

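A minimal sketch of the ZFS route Ben describes, assuming the chunk directory of a chunk server lives on a ZFS dataset (the pool and dataset names tank/mfschunks are hypothetical):

    # enable block-level deduplication on the dataset that holds the chunk files
    zfs set dedup=on tank/mfschunks

    # optional: compression of chunk data often helps as well
    zfs set compression=on tank/mfschunks

    # observe how much space deduplication actually saves
    zpool get dedupratio tank

Keep in mind that ZFS holds its dedup table in RAM, so this trades chunk server memory for disk space, on top of the uneven-balancing caveat above.
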
From: Florent B. <fl...@co...> - 2011-08-31 15:18:58
Hi all,

I would like to know whether (and when) MooseFS will handle data deduplication. I know that MooseFS computes a checksum for every chunk, so would it be possible to deduplicate at that level? If two (or more) files, each with goal=3, have chunks in common, only three copies of those chunks would be stored instead of six (or more) as today. For files with different goals, the higher goal would be used.

I think this does not change the architecture of MooseFS. Maybe the problem is that the MFS master does not know about checksums, since they are computed on the chunk servers, but perhaps there is a way around that.

Is it difficult to add that feature? What do you think about it?

Thank you!

--
Florent Bautista

From: Davies L. <dav...@gm...> - 2011-08-31 06:19:38
Not yet, but we could export parts of mfsmount and then create a Python or Go binding for it.

Davies

On Wed, Aug 31, 2011 at 11:18 AM, Robert Sandilands <rsa...@ne...> wrote:
> There is a native API? Where can I find information about it? Or do you
> have to reverse-engineer it from the code?

From: Robert S. <rsa...@ne...> - 2011-08-31 03:18:49
There is a native API? Where can I find information about it? Or do you have to reverse-engineer it from the code?

Robert

On 8/30/11 10:42 PM, Davies Liu wrote:
> The bottleneck is FUSE and mfsmount. You should try to use the native API
> of MFS (borrowed from mfsmount) to re-implement the HTTP server, with one
> socket per thread or a socket pool.

From: Davies L. <dav...@gm...> - 2011-08-31 02:42:54
The bottleneck is FUSE and mfsmount. You should try to use the native API of MFS (borrowed from mfsmount) to re-implement the HTTP server, with one socket per thread or a socket pool.

I want to do it in Go; maybe Python would be easier.

Davies

On Wed, Aug 31, 2011 at 8:54 AM, Robert Sandilands <rsa...@ne...> wrote:
> I wrote a dedicated http server to serve the files instead of using Apache.
> [...]

From: Robert S. <rsa...@ne...> - 2011-08-31 00:54:39
Further on this subject.

I wrote a dedicated HTTP server to serve the files instead of using Apache. It allowed me to gain a few extra percent of performance and decreased the memory usage of the web servers. The web server also gave me some interesting timings:

File open average            405.3732 ms
File read average            238.7784 ms
File close average           286.8376 ms
File size average              0.0026 ms
Net read average               2.536  ms
Net write average              2.2148 ms
Log to access log average      0.2526 ms
Log to error log average       0.2234 ms

Average time to process a file   936.2186 ms
Total files processed            1,503,610

What I really find scary is that opening a file takes nearly half a second, and closing a file a quarter of a second. The time spent in open() and close() is nearly 3 times the time spent reading the data. The server always reads in multiples of 64 kB unless less data is available, and it uses posix_fadvise() to try to do some read-ahead. This is the average over 5 machines running mfsmount and my custom web server, over about 18 hours.

On a machine that serves only a low number of clients, the times for open and close are negligible. open() and close() seem to scale very badly as the number of clients using mfsmount increases.

From looking at the code for mfsmount, it seems that all communication to the master happens over a single TCP socket, with a global handle and mutex to protect it. Could this be the bottleneck? If there are multiple open()s at the same time, they may end up waiting on the mutex for an opportunity to communicate with the master. The same handle and mutex are also used to read replies, which may not help the situation either.

What prevents multiple sockets to the master?

It also seems to indicate that the only way to get the open() average down is to introduce more web servers, and that a single web server can only serve a very low number of clients. Is that a correct assumption?

Robert

On 8/26/11 3:25 AM, Davies Liu wrote:
> Another hint to make mfsmaster more responsive is to locate metadata.mfs
> on a disk separate from the change logs, such as a SAS array; you would
> have to modify the source code of mfsmaster to do this.
> [...]

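One way to spot-check these numbers on any client running mfsmount, without a dedicated web server, is to time the individual syscalls with strace; the path below is a placeholder, and on newer systems open() may show up as openat():

    strace -T -e trace=open,read,close \
        dd if=/mnt/mfs/some/file of=/dev/null bs=64k 2>&1 | tail -40
    # -T prints the time spent inside each syscall, so the cost of open(),
    # the 64 kB read()s and close() on the MooseFS mount can be compared
    # directly against the averages quoted above
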
From: Robert D. <ro...@in...> - 2011-08-29 22:48:44
Any word on the ETA?

From: Allen, B. S <bs...@la...> - 2011-08-29 22:18:31
The fstab entry looks correct. Make sure you have the mount.fuse command. On RHEL it is installed with the "fuse" package; on Debian or Ubuntu I believe it is in the fuse-utils package.

Ben

On Aug 29, 2011, at 3:38 PM, Tom De Vylder wrote:
> We're having some troubles getting the fstab entry to work:
> [...]

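A quick way to verify the helper Ben mentions is present (the package names are the usual ones for these distributions and may differ by release):

    # is the FUSE mount helper installed?
    which mount.fuse || ls -l /sbin/mount.fuse

    # Debian/Ubuntu: older releases ship the helper in fuse-utils,
    # newer ones in the plain fuse package
    apt-get install fuse-utils

    # RHEL/CentOS
    yum install fuse

    # then retry the fstab entry
    mount /mnt/moose
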
From: Tom De V. <to...@pe...> - 2011-08-29 22:02:01
Hi all,

We're having some troubles getting the fstab entry to work:

~# mount /mnt/moose
mount: wrong fs type, bad option, bad superblock on mfsmount,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail or so

~# tail -1 /etc/fstab
mfsmount /mnt/moose fuse mfsmaster=10.0.0.1,mfsport=9421,_netdev 0 0
~#

It's the same format as listed on http://www.moosefs.org/reference-guide.html but it seems "mfsmount" isn't accepted.

# mfsmount --version
MFS version 1.6.20
FUSE library version: 2.8.4
# cat /etc/debian_version
6.0.1

If anyone needs more information I'd be glad to provide it.

Kind regards,
Tom De Vylder

From: Elliot F. <efi...@gm...> - 2011-08-29 17:42:20
On Sun, Aug 28, 2011 at 1:31 PM, John <wil...@gm...> wrote:
> I've recently tried to setup moosefs as a backend distributed filesystem for
> open nebula but have run into a sizable roadblock where clients cannot for
> whatever reason write to the filesystem.
> [...]

Without more information it is hard to say what's happening here. I ran into the exact same symptom last Friday and it turned out to be an MTU problem. I bumped the MTU of the client back down to 1500 (from 9000) and everything worked as expected; it turned out the switch I was using didn't support the larger MTU.

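A quick way to check whether the same MTU mismatch applies to a given client (the interface name eth0 and the chunkserver address 10.0.0.2 are placeholders):

    # current MTU of the client interface
    ip link show eth0

    # test whether jumbo frames actually make it through unfragmented
    # (8972 = 9000 bytes minus the 28-byte IP/ICMP headers)
    ping -M do -s 8972 -c 3 10.0.0.2

    # if those pings fail, drop back to the standard MTU and retry the copy
    ip link set eth0 mtu 1500
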
From: John <wil...@gm...> - 2011-08-28 19:31:51
I've recently tried to set up MooseFS as a backend distributed filesystem for OpenNebula, but have run into a sizable roadblock: clients cannot, for whatever reason, write to the filesystem.

Everything went fine with installation. I followed the step-by-step guide to the letter and have 1 master, 1 metalogger, and 14 chunk servers, with the client running on each node. All nodes are able to mount the filesystem correctly, and listing folders created with mfssetgoal works as well. However, when I try to copy any file to the filesystem, the operation hangs the SSH session and has to be killed by rebooting the system from which the copy was attempted.

The platform for all nodes is Fedora 15 with SELinux and iptables disabled. FUSE is the current version, 2.8.5, and the MooseFS version is 1.6.2. The filesystem for both the local disks and the blocks being served on each machine is ext3.

strace of the cp is here: https://gist.github.com/1176892

A screenshot of the CGI panel showing operations for a cp command transferring the MFS tarball from local disk to the MFS filesystem: http://imgur.com/8QcAP

From: John <jb...@pu...> - 2011-08-28 16:49:45
I've recently tried to set up MooseFS as a backend distributed filesystem for OpenNebula, but have run into a sizable roadblock: clients cannot, for whatever reason, write to the filesystem.

Everything went fine with installation. I followed the step-by-step guide to the letter and have 1 master, 1 metalogger, and 14 chunk servers, with the client running on each node. All nodes are able to mount the filesystem correctly, and listing folders created with mfssetgoal works as well. However, when I try to copy any file to the filesystem, the operation hangs the SSH session and has to be killed by rebooting the system from which the copy was attempted.

The platform for all nodes is Fedora 15 with SELinux and iptables disabled. FUSE is the current version, 2.8.5, and the MooseFS version is 1.6.2.

strace of the cp is here: https://gist.github.com/1176892

Thanks,
John

From: Robert S. <rsa...@ne...> - 2011-08-26 12:46:46
Hi Davies,

Our average file size is around 560 kB, and it grows by approximately 100 kB per year. Our hot set is around 14 million files taking slightly less than 8 TB of space. Around 1 million files are added and removed per week. There is also some growth in the number of hot files, doubling every 2 years.

Ideally I would have a two-level storage arrangement with faster storage for the hot files, but that is not a choice available to me. I have experimented with storing the files in a database and it has not been a great success: databases are generally not optimized for storing large blobs, and a lot of databases simply won't store blobs bigger than a certain size.

Beansdb looks like something I have been looking for, but the lack of English documentation is a bit scary. I did look at it through Google Translate, and even then the documentation is a bit on the scarce side.

Robert

On 8/26/11 3:25 AM, Davies Liu wrote:
> PS: what is the average size of your files? MooseFS (like GFS) is designed
> for large files (100M+); it does not serve large numbers of small files well.
> [...]

From: Alexander A. <akh...@ri...> - 2011-08-26 11:07:15
Hi all!

I wonder if it is a good idea to put something like the following into the init/rc script that starts mfsmaster:

    mfs_data_path=`grep DATA_PATH /etc/mfsmaster.cfg | grep -v ^# | awk '{print $NF}'`
    if [ ! -f "$mfs_data_path/metadata.mfs" ]
    then
        # no current metadata file - rebuild it from the backup and the changelogs
        mfsmetarestore -a -d "$mfs_data_path"
    fi

In my opinion it is a bare necessity to automatically bring MooseFS back to a healthy state after a power failure and restore.

wbr
Alexander

From: Davies L. <dav...@gm...> - 2011-08-26 07:25:36
Hi Robert,

Another hint to make mfsmaster more responsive is to locate metadata.mfs on a disk separate from the change logs, such as a SAS array; you would have to modify the source code of mfsmaster to do this.

PS: What is the average size of your files? MooseFS (like GFS) is designed for large files (100M+); it does not serve large numbers of small files well. Haystack from Facebook may be the better choice. We (douban.com) use MooseFS to serve 200+ TB (1M files) of offline data and beansdb [1] to serve 500 million small online files, and it performs very well.

[1]: http://code.google.com/p/beansdb/

Davies

On Fri, Aug 26, 2011 at 9:08 AM, Robert Sandilands <rsa...@ne...> wrote:
> To try to optimize the performance I am experimenting with accessing the
> data using different APIs and block sizes, but this has been inconclusive.
> [...]

From: Robert S. <rsa...@ne...> - 2011-08-26 01:08:50
Hi Elliot,

There is nothing in the code to change the priority.

Taking virtually all other load off the chunk and master servers seems to have improved this significantly. I still see timeouts from mfsmount, but not enough to be problematic.

To try to optimize the performance I am experimenting with accessing the data using different APIs and block sizes, but this has been inconclusive. I have tried the effect of posix_fadvise(), sendfile() and different-sized buffers for read(). I still want to try mmap(). sendfile() did seem to be slightly slower than read().

Robert

On 8/24/11 11:05 AM, Elliot Finley wrote:
> The process is niced to -19 (very high priority) so that it has good
> performance. It forks once per hour to write out the metadata. I haven't
> checked the code for this, but is the forked process lowering its priority
> so it doesn't compete with the original process?

From: Jesus <jes...@gm...> - 2011-08-25 13:59:23
Hi,

I have a chunkserver with 4 million chunks. The mfschunkserver process is consuming almost 1 GB of RAM; is that normal? I am using MFS 1.6.20-2 on Debian squeeze servers (64-bit, 2.6.32-5 kernel).

I assumed that RAM on the chunkserver was not very important (i.e., not dependent on the number of chunks), as opposed to the MFS master, which caches all the metadata in RAM.

Thanks and kind regards,

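A rough sanity check of the per-chunk overhead, assuming a single mfschunkserver process and the 4 million chunks mentioned above:

    # resident memory of the chunkserver process, in kB
    rss_kb=$(ps -o rss= -C mfschunkserver | awk '{s+=$1} END {print s}')

    # approximate bytes of RAM per chunk
    echo $(( rss_kb * 1024 / 4000000 ))
    # ~1 GB over 4 million chunks works out to roughly 250 bytes per chunk
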
From: Elliot F. <efi...@gm...> - 2011-08-24 15:06:05
On Tue, Aug 9, 2011 at 6:46 PM, Robert Sandilands <rsa...@ne...> wrote:
> Increasing the swap space fixed the fork() issue. It seems that you have to
> ensure that memory available is always double the memory needed by
> mfsmaster. None of the swap space was used over the last 24 hours.
>
> This did solve the extreme comb-like behavior of mfsmaster. It still does
> not resolve its sensitivity to load on the server. I am still seeing
> timeouts on the chunkservers and mounts on the hour due to the high CPU and
> I/O load when the metadata is dumped to disk. It did however decrease
> significantly.

Here is another thought on this...

The process is niced to -19 (very high priority) so that it has good performance. It forks once per hour to write out the metadata. I haven't checked the code for this, but is the forked process lowering its priority so it doesn't compete with the original process?

If it's not, it should be an easy code change to lower the priority in the child process (the metadata writer) so that it doesn't compete with the original process at the same priority.

If you check into this, I'm sure the list would appreciate an update. :)

Elliot

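Until such a change exists in the code, the same idea can be approximated from the shell by renicing the hourly dump child whenever it appears (for example from a cron job around the top of the hour); this assumes the forked child keeps the mfsmaster process name:

    # the oldest mfsmaster process is the serving daemon; its children are dump workers
    parent=$(pgrep -o -x mfsmaster)
    for child in $(pgrep -P "$parent" -x mfsmaster); do
        renice 19 -p "$child"       # lowest CPU priority for the metadata writer
        ionice -c3 -p "$child"      # idle I/O class, where ionice is available
    done
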
From: Fyodor U. <uf...@uf...> - 2011-08-19 12:28:48
Hi!

I'm still struggling with "reserved files". Studying the sources, I have come to the conclusion that the only way to delete such a file is to add a "PURGE" line to the changelog file and run mfsmetarestore. Has anyone done this?

WBR,
Fyodor.

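For anyone considering the same approach, the rough procedure would look like the sketch below. The inode number 123456 is a placeholder, and the exact layout of the PURGE line is an assumption that should be copied from the entries mfsmaster itself writes to the changelog; the usual warnings about hand-editing metadata (and keeping backups) apply.

    # stop the master so the changelog is not being written concurrently
    mfsmaster stop

    cd /var/lib/mfs                      # DATA_PATH from mfsmaster.cfg

    # inspect the existing entry format, then append a matching PURGE entry
    tail -3 changelog.0.mfs
    echo '<next-id>: <timestamp>|PURGE(123456)' >> changelog.0.mfs

    # rebuild metadata.mfs from the backup plus the changelogs, then restart
    mfsmetarestore -a -d /var/lib/mfs
    mfsmaster start
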
From: Fyodor U. <uf...@uf...> - 2011-08-18 20:10:57
On 08/18/2011 07:42 AM, Fyodor Ustinov wrote:
> Unfortunately, restarting mfsmaster has not helped me. :(

Hm. If I understand the source correctly: if for any reason the client did not send (or the master did not handle) CUTOMA_FUSE_PURGE, the file will remain in reserved forever.

Question to the developers: am I right, or am I not reading the source closely enough?

WBR,
Fyodor.

From: Fyodor U. <uf...@uf...> - 2011-08-18 04:42:20
On 08/17/2011 08:43 PM, Fabien Germain wrote:
> I already had the same issue: restarting mfsmaster solved the problem.

Unfortunately, restarting mfsmaster has not helped me. :(

WBR,
Fyodor.

From: Fabien G. <fab...@gm...> - 2011-08-17 17:43:36
Hi Fyodor,

On Wed, Aug 17, 2011 at 1:46 PM, Fyodor Ustinov <uf...@uf...> wrote:
> I have one file in "reserved". [...] How to clean "reserved files"?

I already had the same issue: restarting mfsmaster solved the problem.

Fabien

From: Fyodor U. <uf...@uf...> - 2011-08-17 11:46:13
Hi!

I have one file in "reserved". The documentation says: "Beside the trash and trash/undel directories MFSMETA holds a third directory reserved with files intended for final removal, but still open. These files will be erased and their data will be deleted immediately after the last user closes them."

But no user has this file open. Moreover, all users have unmounted the MFS share.

How can I clean up "reserved files"?

WBR,
Fyodor.

From: Fyodor U. <uf...@uf...> - 2011-08-14 06:50:29
Hi!

I have a lot of sockets in the TIME_WAIT state, and there are messages like these in the log:

Aug 14 09:17:02 gate0 mfsmount[1473]: can't bind socket to given ip: EADDRINUSE (Address already in use)
Aug 14 09:17:58 gate0 mfsmount[1473]: last message repeated 21 times
Aug 14 09:17:58 gate0 mfsmount[1473]: can't bind to given ip: EADDRINUSE (Address already in use)
Aug 14 09:17:58 gate0 mfsmount[1473]: file: 9145742, index: 0 - can't connect to proper chunkserver (try counter: 1)
Aug 14 09:17:58 gate0 mfsmount[1473]: can't bind to given ip: EADDRINUSE (Address already in use)
Aug 14 09:17:58 gate0 mfsmount[1473]: file: 9145742, index: 0 - can't connect to proper chunkserver (try counter: 2)
Aug 14 09:17:58 gate0 mfsmount[1473]: can't bind to given ip: EADDRINUSE (Address already in use)
Aug 14 09:17:58 gate0 mfsmount[1473]: file: 9145742, index: 0 - can't connect to proper chunkserver (try counter: 3)
Aug 14 09:17:59 gate0 mfsmount[1473]: can't bind to given ip: EADDRINUSE (Address already in use)
Aug 14 09:17:59 gate0 mfsmount[1473]: file: 9145742, index: 0 - can't connect to proper chunkserver (try counter: 4)
Aug 14 09:18:00 gate0 mfsmount[1473]: can't bind to given ip: EADDRINUSE (Address already in use)
Aug 14 09:18:00 gate0 mfsmount[1473]: file: 9145742, index: 0 - can't connect to proper chunkserver (try counter: 5)
Aug 14 09:18:01 gate0 mfsmount[1473]: can't bind to given ip: EADDRINUSE (Address already in use)
Aug 14 09:18:01 gate0 mfsmount[1473]: file: 9145742, index: 0 - can't connect to proper chunkserver (try counter: 6)
Aug 14 09:18:03 gate0 mfsmount[1473]: can't bind to given ip: EADDRINUSE (Address already in use)
Aug 14 09:18:03 gate0 mfsmount[1473]: file: 9145742, index: 0 - can't connect to proper chunkserver (try counter: 7)
Aug 14 09:18:05 gate0 mfsmount[1473]: can't bind to given ip: EADDRINUSE (Address already in use)
Aug 14 09:18:05 gate0 mfsmount[1473]: file: 9145742, index: 0 - can't connect to proper chunkserver (try counter: 8)
Aug 14 09:18:12 gate0 mfsmount[1473]: can't bind socket to given ip: EADDRINUSE (Address already in use)
Aug 14 09:19:14 gate0 mfsmount[1473]: last message repeated 25 times
Aug 14 09:20:15 gate0 mfsmount[1473]: last message repeated 28 times
Aug 14 09:21:16 gate0 mfsmount[1473]: last message repeated 29 times
Aug 14 09:22:17 gate0 mfsmount[1473]: last message repeated 20 times
Aug 14 09:23:18 gate0 mfsmount[1473]: last message repeated 30 times

This server works as a gateway between MooseFS and NFS.

If I understand correctly, mfsmount opens a new connection for every write/read? Or did I make a big mistake when configuring it?

WBR,
Fyodor.

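If the underlying problem is simply ephemeral-port exhaustion from the many short-lived chunkserver connections, the usual kernel-side mitigations look like this (the values are examples, not tuned recommendations):

    # how many sockets are currently stuck in TIME_WAIT
    ss -tan state time-wait | wc -l      # or: netstat -tan | grep -c TIME_WAIT

    # widen the ephemeral port range and allow reuse of TIME_WAIT sockets
    # for new outgoing connections
    sysctl -w net.ipv4.ip_local_port_range="10000 65000"
    sysctl -w net.ipv4.tcp_tw_reuse=1

    # persist the settings across reboots
    echo 'net.ipv4.ip_local_port_range = 10000 65000' >> /etc/sysctl.conf
    echo 'net.ipv4.tcp_tw_reuse = 1' >> /etc/sysctl.conf
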
From: Stas O. <sta...@gm...> - 2011-08-13 21:18:20
Thanks :). Anything that can be done about the large number of logging messages?

Regards.

On Sat, Aug 13, 2011 at 11:53 PM, Fabien Germain <fab...@gm...> wrote:
> On Sat, Aug 13, 2011 at 10:37 PM, Stas Oskin <sta...@gm...> wrote:
>>> The damaged disk is served by one of your chunkservers. You should remove
>>> this disk from the chunkserver's mfshdd config file. The mfsmaster is not
>>> in charge of removing disks.
>>
>> This is clear, but I replaced the disk with a new one - will the chunkserver
>> see this after a restart?
>
> If the disk is mounted and the mountpoint is declared in mfshdd.cfg: yes.
>
>>> MFS starts replication, but the replication process is done slowly (it's
>>> configurable). When your disk is marked damaged you should see many files
>>> with fewer valid copies than their goal (you can see them in the CGI server);
>>> during replication these files slowly come back to their goal.
>>
>> So if I don't see any files below the goal, does this mean the replication
>> completed successfully?
>
> Yes.
>
> Fabien

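For reference, the disk-replacement flow discussed in this exchange comes down to something like the following on the affected chunkserver (the device name, mount point, and the /etc path of mfshdd.cfg are assumptions that depend on the installation):

    # prepare and mount the replacement disk where the failed one lived
    mkfs.ext3 /dev/sdX1
    mount /dev/sdX1 /mnt/mfschunks3
    chown mfs:mfs /mnt/mfschunks3        # the chunkserver usually runs as the mfs user

    # make sure the mount point is (still) listed in the chunkserver's disk list
    grep -q mfschunks3 /etc/mfshdd.cfg || echo "/mnt/mfschunks3" >> /etc/mfshdd.cfg

    # restart the chunkserver so it rescans its disks and starts taking chunks
    mfschunkserver restart
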