Messages per month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2009 |  |  |  |  |  |  |  |  |  |  |  | 4 |
| 2010 | 20 | 11 | 11 | 9 | 22 | 85 | 94 | 80 | 72 | 64 | 69 | 89 |
| 2011 | 72 | 109 | 116 | 117 | 117 | 102 | 91 | 72 | 51 | 41 | 55 | 74 |
| 2012 | 45 | 77 | 99 | 113 | 132 | 75 | 70 | 58 | 58 | 37 | 51 | 15 |
| 2013 | 28 | 16 | 25 | 38 | 23 | 39 | 42 | 19 | 41 | 31 | 18 | 18 |
| 2014 | 17 | 19 | 39 | 16 | 10 | 13 | 17 | 13 | 8 | 53 | 23 | 7 |
| 2015 | 35 | 13 | 14 | 56 | 8 | 18 | 26 | 33 | 40 | 37 | 24 | 20 |
| 2016 | 38 | 20 | 25 | 14 | 6 | 36 | 27 | 19 | 36 | 24 | 15 | 16 |
| 2017 | 8 | 13 | 17 | 20 | 28 | 10 | 20 | 3 | 18 | 8 |  | 5 |
| 2018 | 15 | 9 | 12 | 7 | 123 | 41 |  | 14 |  | 15 |  | 7 |
| 2019 | 2 | 9 | 2 | 9 |  |  | 2 |  | 6 | 1 | 12 | 2 |
| 2020 | 2 |  |  | 3 |  | 4 | 4 | 1 | 18 | 2 |  |  |
| 2021 |  | 3 |  |  |  |  | 6 |  | 5 | 5 | 3 |  |
| 2022 |  |  | 3 |  |  |  |  |  |  |  |  |  |
From: hussein i. <hus...@gm...> - 2011-11-17 19:02:58
Hi, I would like to know: is it possible to set up a global MooseFS system in which the chunkservers are spread around the world (for example, some in Europe and some in China or elsewhere)? A related question: when a client requests data, will it be served from the closest chunkserver (for example, if the client is in China and there is a chunkserver in China, will it receive the data from that server)? Thank you very much. Br, Hussein
From: Yezhou F. <fan...@gm...> - 2011-11-16 19:57:01
I found that if I create about twenty mount points on a server, each handling about 50 concurrent read connections, I get much better performance than with a single mount point handling about 500 connections. Is this because FUSE is single-process and cannot handle that many concurrent connections?
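A minimal sketch of the workaround described above: spread concurrent reads across several mfsmount mount points instead of funneling them all through one FUSE process. The mount-point paths and worker counts are illustrative assumptions, not part of the original post:

```python
# Sketch: distribute concurrent reads across several MooseFS mount points.
# Assumes the same filesystem is mounted at /mnt/mfs0 .. /mnt/mfs19 (hypothetical paths).
import hashlib
from concurrent.futures import ThreadPoolExecutor

MOUNTS = [f"/mnt/mfs{i}" for i in range(20)]  # twenty mfsmount instances, as in the post

def read_file(rel_path: str) -> bytes:
    # Pick a mount point deterministically so a given file always uses the same FUSE process.
    idx = int(hashlib.md5(rel_path.encode()).hexdigest(), 16) % len(MOUNTS)
    with open(f"{MOUNTS[idx]}/{rel_path}", "rb") as f:
        return f.read()

def read_many(rel_paths):
    # Roughly 50 concurrent reads per mount point, spread over all mounts.
    with ThreadPoolExecutor(max_workers=50 * len(MOUNTS)) as pool:
        return list(pool.map(read_file, rel_paths))
```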
From: Steffen Z. <me...@sa...> - 2011-11-15 13:18:57
Hello,

Any news on my patch? Do I have to do something else?

Regards, Saz
From: Steve <st...@bo...> - 2011-11-12 09:30:19
Have you looked at the superb MooseFS documentation on the website? After that, could you give some more detail on your specific issue?

-------Original Message-------
From: avi onic
Date: 12/11/2011 08:50:00
To: moo...@li...
Subject: [Moosefs-users] no mfscgiserv

Dear all, my MooseFS installation doesn't have mfscgiserv on the server. How do I get it? For information, my spec is the same for client, master, and chunkserver: 1. OS: Slax 6.1.2 2. MooseFS: 1.6.20-2 3. using the ext3 filesystem. Thanks for helping me.
From: Allen L. <lan...@gm...> - 2011-11-11 13:43:04
Sure, that's the "right way" to do it. You can run the metalogger anywhere you want, but it shouldn't be "integrated" with the chunkserver, as the OP seemed to be asking for.

On 11/10/2011 7:01 PM, wkmail wrote:
> On 11/10/2011 12:42 PM, Allen Landsidel wrote:
>> That would really bone a lot of configs, hardware-wise. My chunk servers all run with 512M of RAM, not the 16G or whatever I have in the master and the metalogger. It's "proper" that the chunkserver does the chunk-serving job and the metalogger does the metalogging job, and that the two aren't mixed up into a single process/service.
>
> We run the metalogger process on one of our stronger chunkservers IN ADDITION to a defined metalogger machine that's set up as a backup. We figure having an extra copy of the metadata can't hurt, as long as it's not impacting the chunkserver...
>
> -bill
From: wkmail <wk...@bn...> - 2011-11-11 00:01:40
On 11/10/2011 12:42 PM, Allen Landsidel wrote:
> That would really bone a lot of configs, hardware-wise. My chunk servers all run with 512M of RAM, not the 16G or whatever I have in the master and the metalogger. It's "proper" that the chunkserver does the chunk-serving job and the metalogger does the metalogging job, and that the two aren't mixed up into a single process/service.

We run the metalogger process on one of our stronger chunkservers IN ADDITION to a defined metalogger machine that's set up as a backup. We figure having an extra copy of the metadata can't hurt, as long as it's not impacting the chunkserver...

-bill
From: Steve <st...@bo...> - 2011-11-10 23:09:54
Ahhhhh yes, I wasn't thinking of the conversion to master, more just the security of the data, utilising existing services and hardware.

-------Original Message-------
From: Travis Hein
Date: 10/11/2011 21:00:48
To: moo...@li...
Subject: Re: [Moosefs-users] Lost all metadata on master w/ full disk

I think the metalogger was added in 1.5.x; before that it did not exist, so there would be some backwards-compatibility considerations to just embedding it. But I think the main motivation of the metalogger is to have metadata files similar to those of the master server, so that in theory the metalogger node could be converted into a master server. For example, if all chunkservers and mfsmount clients used a DNS record to reach the master server, that DNS record could be updated to point to the metalogger instance that is to be promoted to become the new master. So when losing the master: rename the changelog_ml.* files to just changelog.* on the metalogger instance, run mfsmetarestore, boot up mfsmaster, and continue. In this sense, the metalogger machine should have a similar amount of RAM available as the master machine, so that it can run the mfsmaster process. And the assumption for larger filesystems is that not every chunkserver in the cluster would have that, so dedicated planning of metalogger server(s) was likely assumed.

On 11-11-10 2:28 PM, Steve wrote:
> Sure it's possible to start a metalogger yourself. It just seems as though it could have been part of the chunkservers.
From: Travis H. <tra...@tr...> - 2011-11-10 20:59:21
I think the metalogger was added in 1.5.x; before that it did not exist, so there would be some backwards-compatibility considerations to just embedding it.

But I think the main motivation of the metalogger is to have metadata files similar to those of the master server, so that in theory the metalogger node could be converted into a master server. For example, if all chunkservers and mfsmount clients used a DNS record to reach the master server, that DNS record could be updated to point to the metalogger instance that is to be promoted to become the new master. So when losing the master: rename the changelog_ml.* files to just changelog.* on the metalogger instance, run mfsmetarestore, boot up mfsmaster, and continue.

In this sense, the metalogger machine should have a similar amount of RAM available as the master machine would have had, so that it is able to run the mfsmaster process. And the assumption for larger filesystems is that not every chunkserver in the cluster would have that much RAM, so dedicated planning of metalogger server(s) was likely assumed.

On 11-11-10 2:28 PM, Steve wrote:
> Sure it's possible to start a metalogger yourself. It just seems as though it could have been part of the chunkservers.
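A minimal sketch of the promotion procedure Travis describes (rename the metalogger's changelog_ml.* copies, replay them with mfsmetarestore, then start mfsmaster and repoint the DNS record). The data directory, the metadata_ml.mfs.back file name, and the exact command flags are assumptions based on a stock MooseFS 1.6 layout; treat this as an outline, not a tested runbook:

```python
# Sketch: promote a metalogger to master, following the steps described above.
# Assumes a MooseFS 1.6-style data directory; paths and flags are illustrative assumptions.
import glob
import os
import subprocess

DATA_DIR = "/var/lib/mfs"

def promote_metalogger():
    os.chdir(DATA_DIR)
    # 1. Rename the metalogger's copies so the restore tool picks them up.
    for src in glob.glob("changelog_ml.*.mfs"):
        os.rename(src, src.replace("changelog_ml.", "changelog."))
    if os.path.exists("metadata_ml.mfs.back"):          # hypothetical metalogger backup name
        os.rename("metadata_ml.mfs.back", "metadata.mfs.back")
    # 2. Replay the changelogs into a fresh metadata.mfs ("-a" = automatic mode).
    subprocess.check_call(["mfsmetarestore", "-a", "-d", DATA_DIR])
    # 3. Start the master; clients and chunkservers then need the mfsmaster
    #    DNS record repointed at this host, as described in the message above.
    subprocess.check_call(["mfsmaster", "start"])

if __name__ == "__main__":
    promote_metalogger()
```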
From: Allen L. <lan...@gm...> - 2011-11-10 20:42:29
That would really bone a lot of configs, hardware config wise. My chunk servers all run with 512M of RAM, not the 16G or whatever I have in the master and the metalogger. It's "proper" that the chunkserver does the chunk serving job, and the metalogger does the metalogging job, and that the two aren't all mixed up into a single process/service. On 11/10/2011 2:28 PM, Steve wrote: > Sure its possible to start a metalogger your self. It just seems as though > it could have been part of the chunkservers > > > > > > > > > > > > -------Original Message------- > > > > From: Allen Landsidel > > Date: 10/11/2011 18:10:23 > > To: moo...@li... > > Subject: Re: [Moosefs-users] Lost all metadata on master w/ full disk > > > > What makes you think it can't? I don't think there's any reason you > > can't run mfsmetalogger on a chunk server, though I haven't tried. > > > > On 11/9/2011 4:28 PM, Steve wrote: > >> and why cant each chunkserver be a logger also by default >> -------Original Message------- >> From: Davies Liu >> Date: 09/11/2011 19:53:33 >> To: Nolan Eakins >> Cc: moo...@li... >> Subject: Re: [Moosefs-users] Lost all metadata on master w/ full disk >> In the mfsmaster, when dumping data to disk, it rename metadata.mfs.back >> to metadata.mfs.back.tmp first, then write to metadata.mfs.back, finally, > it > >> unlink metadata.mfs.back.tmp and metadata.mfs. >> If writing to metadata.mfs.back failed, we will get part of metadata, or >> damaged >> one, and previous metadata will be lost together. >> Why not writing to metadata.mfs.back.tmp first, if successfully, then > rename > >> it >> to metadata.mfs.back, and delete metadata.mfs ? I think this way is much >> safer. >> In the metalogger, it does not check the downloaded metadata, if it > download > >> a >> damaged one, then all the metadata will be lost. let metalogger keep > sereral > >> recently metadata may be safer. >> I will try to fix the above in my deployments, losing metadata is >> TOO UNACCEPTABLE. >> On Tue, Nov 8, 2011 at 7:24 AM, Nolan Eakins<no...@ea...> wrote: >>> I have a little server setup to give MooseFS a try. This server ran >>> out of disk space on its root partition a day or two ago. This >>> partition is where mfsmaster stored all of its data. Needless to say, >>> mfsmaster did something dumb and zeroed out all the metadata files >>> under /var/lib/mfs. It even looks like the metadata logger on another >>> machine was affected since it didn't store anything recoverable. >>> So my entire moosefs file tree is completely hosed now, all because >>> mfsmaster can't handle a full disk. >>> Other than this, I've been happy with MooseFS. >>> Regards, >>> Nolan > ----------------------------------------------------------------------------- > > >>> RSA(R) Conference 2012 >>> Save $700 by Nov 18 >>> Register now >>> http://p.sf.net/sfu/rsa-sfdev2dev1 >>> _______________________________________________ >>> moosefs-users mailing list >>> moo...@li... >>> https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > ----------------------------------------------------------------------------- > > > RSA(R) Conference 2012 > > Save $700 by Nov 18 > > Register now > > http://p.sf.net/sfu/rsa-sfdev2dev1 > > _______________________________________________ > > moosefs-users mailing list > > moo...@li... > > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > |
From: Steve <st...@bo...> - 2011-11-10 19:29:03
Sure its possible to start a metalogger your self. It just seems as though it could have been part of the chunkservers -------Original Message------- From: Allen Landsidel Date: 10/11/2011 18:10:23 To: moo...@li... Subject: Re: [Moosefs-users] Lost all metadata on master w/ full disk What makes you think it can't? I don't think there's any reason you can't run mfsmetalogger on a chunk server, though I haven't tried. On 11/9/2011 4:28 PM, Steve wrote: > and why cant each chunkserver be a logger also by default > > > > > > > > > > -------Original Message------- > > > > From: Davies Liu > > Date: 09/11/2011 19:53:33 > > To: Nolan Eakins > > Cc: moo...@li... > > Subject: Re: [Moosefs-users] Lost all metadata on master w/ full disk > > > > In the mfsmaster, when dumping data to disk, it rename metadata.mfs.back > > to metadata.mfs.back.tmp first, then write to metadata.mfs.back, finally, it > > > unlink metadata.mfs.back.tmp and metadata.mfs. > > > > If writing to metadata.mfs.back failed, we will get part of metadata, or > damaged > > one, and previous metadata will be lost together. > > > > Why not writing to metadata.mfs.back.tmp first, if successfully, then rename > it > > to metadata.mfs.back, and delete metadata.mfs ? I think this way is much > safer. > > > > In the metalogger, it does not check the downloaded metadata, if it download > a > > damaged one, then all the metadata will be lost. let metalogger keep sereral > > > recently metadata may be safer. > > > > I will try to fix the above in my deployments, losing metadata is > > TOO UNACCEPTABLE. > > > > On Tue, Nov 8, 2011 at 7:24 AM, Nolan Eakins<no...@ea...> wrote: > >> I have a little server setup to give MooseFS a try. This server ran >> out of disk space on its root partition a day or two ago. This >> partition is where mfsmaster stored all of its data. Needless to say, >> mfsmaster did something dumb and zeroed out all the metadata files >> under /var/lib/mfs. It even looks like the metadata logger on another >> machine was affected since it didn't store anything recoverable. >> So my entire moosefs file tree is completely hosed now, all because >> mfsmaster can't handle a full disk. >> Other than this, I've been happy with MooseFS. >> Regards, >> Nolan > ----------------------------------------------------------------------------- > > >> RSA(R) Conference 2012 >> Save $700 by Nov 18 >> Register now >> http://p.sf.net/sfu/rsa-sfdev2dev1 >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > > > > ----------------------------------------------------------------------------- RSA(R) Conference 2012 Save $700 by Nov 18 Register now http://p.sf.net/sfu/rsa-sfdev2dev1 _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Allen L. <lan...@gm...> - 2011-11-10 18:09:19
What makes you think it can't? I don't think there's any reason you can't run mfsmetalogger on a chunk server, though I haven't tried. On 11/9/2011 4:28 PM, Steve wrote: > and why cant each chunkserver be a logger also by default > > > > > > > > > > -------Original Message------- > > > > From: Davies Liu > > Date: 09/11/2011 19:53:33 > > To: Nolan Eakins > > Cc: moo...@li... > > Subject: Re: [Moosefs-users] Lost all metadata on master w/ full disk > > > > In the mfsmaster, when dumping data to disk, it rename metadata.mfs.back > > to metadata.mfs.back.tmp first, then write to metadata.mfs.back, finally, it > > > unlink metadata.mfs.back.tmp and metadata.mfs. > > > > If writing to metadata.mfs.back failed, we will get part of metadata, or > damaged > > one, and previous metadata will be lost together. > > > > Why not writing to metadata.mfs.back.tmp first, if successfully, then rename > it > > to metadata.mfs.back, and delete metadata.mfs ? I think this way is much > safer. > > > > In the metalogger, it does not check the downloaded metadata, if it download > a > > damaged one, then all the metadata will be lost. let metalogger keep sereral > > > recently metadata may be safer. > > > > I will try to fix the above in my deployments, losing metadata is > > TOO UNACCEPTABLE. > > > > On Tue, Nov 8, 2011 at 7:24 AM, Nolan Eakins<no...@ea...> wrote: > >> I have a little server setup to give MooseFS a try. This server ran >> out of disk space on its root partition a day or two ago. This >> partition is where mfsmaster stored all of its data. Needless to say, >> mfsmaster did something dumb and zeroed out all the metadata files >> under /var/lib/mfs. It even looks like the metadata logger on another >> machine was affected since it didn't store anything recoverable. >> So my entire moosefs file tree is completely hosed now, all because >> mfsmaster can't handle a full disk. >> Other than this, I've been happy with MooseFS. >> Regards, >> Nolan > ----------------------------------------------------------------------------- > > >> RSA(R) Conference 2012 >> Save $700 by Nov 18 >> Register now >> http://p.sf.net/sfu/rsa-sfdev2dev1 >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > > > > |
From: Steve <st...@bo...> - 2011-11-09 21:29:10
and why cant each chunkserver be a logger also by default -------Original Message------- From: Davies Liu Date: 09/11/2011 19:53:33 To: Nolan Eakins Cc: moo...@li... Subject: Re: [Moosefs-users] Lost all metadata on master w/ full disk In the mfsmaster, when dumping data to disk, it rename metadata.mfs.back to metadata.mfs.back.tmp first, then write to metadata.mfs.back, finally, it unlink metadata.mfs.back.tmp and metadata.mfs. If writing to metadata.mfs.back failed, we will get part of metadata, or damaged one, and previous metadata will be lost together. Why not writing to metadata.mfs.back.tmp first, if successfully, then rename it to metadata.mfs.back, and delete metadata.mfs ? I think this way is much safer. In the metalogger, it does not check the downloaded metadata, if it download a damaged one, then all the metadata will be lost. let metalogger keep sereral recently metadata may be safer. I will try to fix the above in my deployments, losing metadata is TOO UNACCEPTABLE. On Tue, Nov 8, 2011 at 7:24 AM, Nolan Eakins <no...@ea...> wrote: > I have a little server setup to give MooseFS a try. This server ran > out of disk space on its root partition a day or two ago. This > partition is where mfsmaster stored all of its data. Needless to say, > mfsmaster did something dumb and zeroed out all the metadata files > under /var/lib/mfs. It even looks like the metadata logger on another > machine was affected since it didn't store anything recoverable. > > So my entire moosefs file tree is completely hosed now, all because > mfsmaster can't handle a full disk. > > Other than this, I've been happy with MooseFS. > > Regards, > Nolan > > ----------------------------------------------------------------------------- > RSA(R) Conference 2012 > Save $700 by Nov 18 > Register now > http://p.sf.net/sfu/rsa-sfdev2dev1 > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > -- - Davies ----------------------------------------------------------------------------- RSA(R) Conference 2012 Save $700 by Nov 18 Register now http://p.sf.net/sfu/rsa-sfdev2dev1 _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Davies L. <dav...@gm...> - 2011-11-09 05:32:06
new patch, with bug fixed On Tue, Nov 8, 2011 at 6:14 PM, Davies Liu <dav...@gm...> wrote: > I have made a patch to solve this, at attached. > > On Tue, Nov 8, 2011 at 5:42 PM, Davies Liu <dav...@gm...> wrote: >> In the mfsmaster, when dumping data to disk, it rename metadata.mfs.back >> to metadata.mfs.back.tmp first, then write to metadata.mfs.back, finally, it >> unlink metadata.mfs.back.tmp and metadata.mfs. >> >> If writing to metadata.mfs.back failed, we will get part of metadata, or damaged >> one, and previous metadata will be lost together. >> >> Why not writing to metadata.mfs.back.tmp first, if successfully, then rename it >> to metadata.mfs.back, and delete metadata.mfs ? I think this way is much safer. >> >> In the metalogger, it does not check the downloaded metadata, if it download a >> damaged one, then all the metadata will be lost. let metalogger keep sereral >> recently metadata may be safer. >> >> I will try to fix the above in my deployments, losing metadata is >> TOO UNACCEPTABLE. >> >> On Tue, Nov 8, 2011 at 7:24 AM, Nolan Eakins <no...@ea...> wrote: >>> I have a little server setup to give MooseFS a try. This server ran >>> out of disk space on its root partition a day or two ago. This >>> partition is where mfsmaster stored all of its data. Needless to say, >>> mfsmaster did something dumb and zeroed out all the metadata files >>> under /var/lib/mfs. It even looks like the metadata logger on another >>> machine was affected since it didn't store anything recoverable. >>> >>> So my entire moosefs file tree is completely hosed now, all because >>> mfsmaster can't handle a full disk. >>> >>> Other than this, I've been happy with MooseFS. >>> >>> Regards, >>> Nolan >>> >>> ------------------------------------------------------------------------------ >>> RSA(R) Conference 2012 >>> Save $700 by Nov 18 >>> Register now >>> http://p.sf.net/sfu/rsa-sfdev2dev1 >>> _______________________________________________ >>> moosefs-users mailing list >>> moo...@li... >>> https://lists.sourceforge.net/lists/listinfo/moosefs-users >>> >> >> >> >> -- >> - Davies >> > > > > -- > - Davies > -- - Davies |
From: Davies L. <dav...@gm...> - 2011-11-08 10:17:16
On Tue, Nov 8, 2011 at 7:24 AM, Nolan Eakins <no...@ea...> wrote:
> I have a little server set up to give MooseFS a try. This server ran out of disk space on its root partition a day or two ago. This partition is where mfsmaster stored all of its data. Needless to say, mfsmaster did something dumb and zeroed out all the metadata files under /var/lib/mfs. It even looks like the metadata logger on another machine was affected, since it didn't store anything recoverable.

You should search for metadata.mfs.emergency in these directories:

/
/tmp
/var/
/usr/
/usr/share
/usr/local/
/usr/local/var/
/usr/local/var/share

> So my entire moosefs file tree is completely hosed now, all because mfsmaster can't handle a full disk.
>
> Other than this, I've been happy with MooseFS.
>
> Regards,
> Nolan

--
Davies
From: Davies L. <dav...@gm...> - 2011-11-08 10:14:34
I have made a patch to solve this; it is attached.

On Tue, Nov 8, 2011 at 5:42 PM, Davies Liu <dav...@gm...> wrote:
> In the mfsmaster, when dumping metadata to disk, it renames metadata.mfs.back to metadata.mfs.back.tmp first, then writes to metadata.mfs.back, and finally unlinks metadata.mfs.back.tmp and metadata.mfs.
>
> If writing to metadata.mfs.back fails, we are left with partial or damaged metadata, and the previous metadata is lost as well.
>
> Why not write to metadata.mfs.back.tmp first and, only if that succeeds, rename it to metadata.mfs.back and delete metadata.mfs? I think this way is much safer.
>
> The metalogger also does not check the metadata it downloads; if it downloads a damaged copy, all of the metadata is lost. Letting the metalogger keep several recent metadata copies may be safer.
>
> I will try to fix the above in my deployments; losing metadata is absolutely unacceptable.
>
> On Tue, Nov 8, 2011 at 7:24 AM, Nolan Eakins <no...@ea...> wrote:
>> I have a little server set up to give MooseFS a try. This server ran out of disk space on its root partition a day or two ago. This partition is where mfsmaster stored all of its data. Needless to say, mfsmaster did something dumb and zeroed out all the metadata files under /var/lib/mfs. It even looks like the metadata logger on another machine was affected, since it didn't store anything recoverable.
>>
>> So my entire moosefs file tree is completely hosed now, all because mfsmaster can't handle a full disk.
>>
>> Other than this, I've been happy with MooseFS.
>>
>> Regards,
>> Nolan

--
Davies
From: Davies L. <dav...@gm...> - 2011-11-08 09:43:04
In the mfsmaster, when dumping metadata to disk, it renames metadata.mfs.back to metadata.mfs.back.tmp first, then writes to metadata.mfs.back, and finally unlinks metadata.mfs.back.tmp and metadata.mfs.

If writing to metadata.mfs.back fails, we are left with partial or damaged metadata, and the previous metadata is lost as well.

Why not write to metadata.mfs.back.tmp first and, only if that succeeds, rename it to metadata.mfs.back and delete metadata.mfs? I think this way is much safer.

The metalogger also does not check the metadata it downloads; if it downloads a damaged copy, all of the metadata is lost. Letting the metalogger keep several recent metadata copies may be safer.

I will try to fix the above in my deployments; losing metadata is absolutely unacceptable.

On Tue, Nov 8, 2011 at 7:24 AM, Nolan Eakins <no...@ea...> wrote:
> I have a little server set up to give MooseFS a try. This server ran out of disk space on its root partition a day or two ago. This partition is where mfsmaster stored all of its data. Needless to say, mfsmaster did something dumb and zeroed out all the metadata files under /var/lib/mfs. It even looks like the metadata logger on another machine was affected, since it didn't store anything recoverable.
>
> So my entire moosefs file tree is completely hosed now, all because mfsmaster can't handle a full disk.
>
> Other than this, I've been happy with MooseFS.
>
> Regards,
> Nolan

--
Davies
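Davies' suggestion is the classic write-to-temp-then-rename pattern: the new file only replaces the old one after it has been written and flushed successfully, so a failed or truncated write can never destroy the previous copy. A minimal sketch of that pattern (file names follow the thread; the function and its serialize callback are illustrative, not the actual mfsmaster code):

```python
# Sketch: atomic metadata dump via write-to-temp-then-rename.
# File names follow the thread (metadata.mfs.back); this is illustrative, not mfsmaster's code.
import os

def dump_metadata(serialize, data_dir="/var/lib/mfs"):
    final = os.path.join(data_dir, "metadata.mfs.back")
    tmp = final + ".tmp"
    with open(tmp, "wb") as f:
        f.write(serialize())        # may raise ENOSPC on a full disk
        f.flush()
        os.fsync(f.fileno())        # make sure the bytes are actually on disk
    # Only after a successful, durable write does the new file replace the old one.
    os.replace(tmp, final)          # atomic on POSIX; the old dump survives any earlier failure
```

If the write or fsync fails, the exception propagates before os.replace runs, so metadata.mfs.back still holds the previous complete dump.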
From: avi o. <oni...@gm...> - 2011-11-08 08:58:21
Dear all, my MooseFS installation doesn't have mfscgiserv on the server. How do I get it?

For information, my spec is the same for client, master, and chunkserver:
1. OS: Slax 6.1.2
2. MooseFS: 1.6.20-2
3. using the ext3 filesystem

Thanks for helping me.
From: Nolan E. <no...@ea...> - 2011-11-07 23:24:37
I have a little server set up to give MooseFS a try. This server ran out of disk space on its root partition a day or two ago. This partition is where mfsmaster stored all of its data. Needless to say, mfsmaster did something dumb and zeroed out all the metadata files under /var/lib/mfs. It even looks like the metadata logger on another machine was affected, since it didn't store anything recoverable.

So my entire MooseFS file tree is completely hosed now, all because mfsmaster can't handle a full disk.

Other than this, I've been happy with MooseFS.

Regards,
Nolan
From: Steffen Z. <me...@sa...> - 2011-11-07 16:25:12
Hello,

I've changed the init scripts to be a little bit cleaner. I've also implemented status and reload commands for all services (chunkserver, master, metalogger). To reduce the differences between the init scripts, I've also changed the defaults files.

As I don't like to spam this list with a lot of patches or big emails, you can find all the patches (including the one for the control file from my last message) here: http://saz.sh/src/moosefs/patches/2011-11-07_16:27:30/

Regards, Saz
From: Steffen Z. <me...@sa...> - 2011-11-07 15:37:10
Hello,

I've tried to build Debian packages from source. mfscgiserv wouldn't be built, as a valid Python interpreter was not available (I'm building the packages with pbuilder). Here is a patch to fix this problem:

--- control.orig 2011-11-07 16:08:26.656755998 +0100
+++ control 2011-11-05 17:21:14.946703126 +0100
@@ -2,7 +2,7 @@
 Section: admin
 Priority: extra
 Maintainer: Jakub Bogusz <co...@mo...>
-Build-Depends: debhelper (>= 5), autotools-dev, libc6-dev, libfuse-dev, pkg-config, zlib1g-dev
+Build-Depends: debhelper (>= 5), autotools-dev, libc6-dev, libfuse-dev, pkg-config, zlib1g-dev, python
 Standards-Version: 3.7.3
 Homepage: http://moosefs.com/

Regards, Saz
From: Michał B. <mic...@ge...> - 2011-11-05 07:03:34
Hi!

Please check these locations for the metadata file:

/metadata.mfs.emergency
/tmp/metadata.mfs.emergency
/var/metadata.mfs.emergency
/usr/metadata.mfs.emergency
/usr/share/metadata.mfs.emergency
/usr/local/metadata.mfs.emergency
/usr/local/var/metadata.mfs.emergency
/usr/local/share/metadata.mfs.emergency

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Allen Landsidel [mailto:lan...@gm...]
Sent: Friday, November 04, 2011 3:13 PM
To: moo...@li...
Subject: [Moosefs-users] Out of disk space on master / recovery failed

So I didn't plan ahead well and ended up with /var filling up on my master overnight, causing the master to crash.

mfsmetarestore refused to recover the system, I think because it didn't get a chance to write out the metadata file. It seems there's something wrong with the way it's doing writes. After the crash both metadata.mfs and metadata.mfs.back were 0 bytes, and mfsmetarestore (obviously) refused to read from them.

Some but not all of the changelog files were 0 bytes as well. Same story on the backup (metalogger) server.

Just a heads up, I think a little more checking would be in order here to make sure there is space available for the metadata, and at least to prevent the master from crashing when/if it can't write the metadata. If it had stayed up with all the metadata in memory I could've seen the disk issue and brought up another metalogger with more disk space to catch up and take over.
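A quick way to run the check Michał suggests is to scan each candidate location and report any file found there. A small sketch (the path list is copied from the message above; the script itself is an illustration, not a MooseFS tool):

```python
# Sketch: look for a recoverable metadata.mfs.emergency dump in the locations listed above.
import os

CANDIDATES = [
    "/metadata.mfs.emergency",
    "/tmp/metadata.mfs.emergency",
    "/var/metadata.mfs.emergency",
    "/usr/metadata.mfs.emergency",
    "/usr/share/metadata.mfs.emergency",
    "/usr/local/metadata.mfs.emergency",
    "/usr/local/var/metadata.mfs.emergency",
    "/usr/local/share/metadata.mfs.emergency",
]

for path in CANDIDATES:
    if os.path.isfile(path):
        size = os.path.getsize(path)
        note = "" if size else " (empty, probably not usable)"
        print(f"found {path}, {size} bytes{note}")
```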
From: Allen L. <lan...@gm...> - 2011-11-04 15:37:55
My chunk servers are on different machines; I actually have a setup that's supposed to be pretty resilient: 4 chunkservers, 1 master, 1 metalogger, and 1 app server running mfsmount, all on separate machines (VMs). I too lost all the data. The chunks survived, but since the metadata was corrupt, all the files were lost and the chunks were marked for recycling (goal 0).

I should have paid more attention to the default drive the master was using for metadata, but a full-on crash just due to disk space exhaustion on the master seems excessive.

On 11/4/2011 10:52 AM, Steve wrote:
> This happened to me too, on an mfsmaster running on a small silicon drive. Lost the lot. A power cut caused an mfschunkserver failure, which created extra logging.
>
> Looks like a repeatable problem that needs addressing.
>
> -------Original Message-------
> From: Allen Landsidel
> Date: 04/11/2011 14:37:09
> To: moo...@li...
> Subject: [Moosefs-users] Out of disk space on master / recovery failed
>
> So I didn't plan ahead well and ended up with /var filling up on my master overnight, causing the master to crash.
>
> mfsmetarestore refused to recover the system, I think because it didn't get a chance to write out the metadata file. It seems there's something wrong with the way it's doing writes. After the crash both metadata.mfs and metadata.mfs.back were 0 bytes, and mfsmetarestore (obviously) refused to read from them.
>
> Some but not all of the changelog files were 0 bytes as well. Same story on the backup (metalogger) server.
>
> Just a heads up, I think a little more checking would be in order here to make sure there is space available for the metadata, and at least to prevent the master from crashing when/if it can't write the metadata. If it had stayed up with all the metadata in memory I could've seen the disk issue and brought up another metalogger with more disk space to catch up and take over.
From: Steve <st...@bo...> - 2011-11-04 14:53:06
This happened to me too, on an mfsmaster running on a small silicon drive. Lost the lot. A power cut caused an mfschunkserver failure, which created extra logging.

Looks like a repeatable problem that needs addressing.

-------Original Message-------
From: Allen Landsidel
Date: 04/11/2011 14:37:09
To: moo...@li...
Subject: [Moosefs-users] Out of disk space on master / recovery failed

So I didn't plan ahead well and ended up with /var filling up on my master overnight, causing the master to crash.

mfsmetarestore refused to recover the system, I think because it didn't get a chance to write out the metadata file. It seems there's something wrong with the way it's doing writes. After the crash both metadata.mfs and metadata.mfs.back were 0 bytes, and mfsmetarestore (obviously) refused to read from them.

Some but not all of the changelog files were 0 bytes as well. Same story on the backup (metalogger) server.

Just a heads up, I think a little more checking would be in order here to make sure there is space available for the metadata, and at least to prevent the master from crashing when/if it can't write the metadata. If it had stayed up with all the metadata in memory I could've seen the disk issue and brought up another metalogger with more disk space to catch up and take over.
From: Allen L. <lan...@gm...> - 2011-11-04 14:13:24
So I didn't plan ahead well and ended up with /var filling up on my master overnight, causing the master to crash.

mfsmetarestore refused to recover the system, I think because it didn't get a chance to write out the metadata file. It seems there's something wrong with the way it's doing writes. After the crash both metadata.mfs and metadata.mfs.back were 0 bytes, and mfsmetarestore (obviously) refused to read from them.

Some but not all of the changelog files were 0 bytes as well. Same story on the backup (metalogger) server.

Just a heads up, I think a little more checking would be in order here to make sure there is space available for the metadata, and at least to prevent the master from crashing when/if it can't write the metadata. If it had stayed up with all the metadata in memory I could've seen the disk issue and brought up another metalogger with more disk space to catch up and take over.
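Allen's suggestion, that the master verify it has room before dumping metadata rather than crashing mid-write, can be approximated with a pre-flight free-space check. A minimal sketch (the function name, 10% margin and data directory are assumptions, and real code would still want the atomic-rename safeguard discussed earlier in the thread):

```python
# Sketch: refuse to start a metadata dump unless the filesystem has enough free space.
# The 10% safety margin and the /var/lib/mfs path are illustrative assumptions.
import os

def has_room_for_dump(estimated_size: int, data_dir: str = "/var/lib/mfs", margin: float = 1.10) -> bool:
    st = os.statvfs(data_dir)
    free_bytes = st.f_bavail * st.f_frsize   # space available to unprivileged writers
    return free_bytes >= estimated_size * margin

# Example: skip the dump (and log loudly) instead of writing a truncated file.
if not has_room_for_dump(estimated_size=2 * 1024**3):
    print("not enough free space for metadata dump; keeping previous dump and metadata in memory")
```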
From: Michał B. <mic...@ge...> - 2011-11-04 12:31:07
We are afraid it could rather have been some hardware problem than a bug in software itself. If you encounter it again, please check your hardware or if you could reproduce it, let us know. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 -----Original Message----- From: Davies Liu [mailto:dav...@gm...] Sent: Friday, November 04, 2011 6:51 AM To: Michał Borychowski Cc: moo...@li... Subject: Re: [Moosefs-users] mfschunkserver eats 100% CPU 2011/11/3 Michał Borychowski <mic...@ge...>: > Hi! > > Is this a repeatable problem? Do you know a scenario which causes this behaviour? No,I have not see it again. > > Kind regards > Michał Borychowski > MooseFS Support Manager > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. > ul. Wołoska 7, 02-672 Warszawa > Budynek MARS, klatka D > Tel.: +4822 874-41-00 > Fax : +4822 874-41-01 > > > -----Original Message----- > From: Davies Liu [mailto:dav...@gm...] > Sent: Thursday, November 03, 2011 2:47 AM > To: moo...@li... > Subject: [Moosefs-users] mfschunkserver eats 100% CPU > > Hi, > > Found one bug of mfschunkserver, it eats 100% CPU without any activities. > > strace shows: > > [pid 7754] gettimeofday({1320117856, 191716}, NULL) = 0 [pid 7754] read(222, 0x7f397c978d08, 408) = -1 EAGAIN (Resource temporarily unavailable) [pid 7754] read(29, 0x7f397cad2268, 29368) = -1 EAGAIN (Resource temporarily unavailable) [pid 7754] poll([{fd=12, events=POLLIN}, {fd=11, events=POLLIN}, {fd=6, events=POLLIN}, {fd=9, events=POLLIN}, {fd=224, events=POLLIN|POLLOUT}, {fd=222, events=POLLIN}, {fd=125, events=POLLIN|POLLOUT}, {fd=29, events=POLLIN}], 8, 50) = 2 ([{fd=224, revents=POLLOUT}, {fd=125, revents=POLLOUT}]) [pid 7754] gettimeofday({1320117856, 192052}, NULL) = 0 [pid 7754] read(222, 0x7f397c978d08, 408) = -1 EAGAIN (Resource temporarily unavailable) [pid 7754] read(29, 0x7f397cad2268, 29368) = -1 EAGAIN (Resource temporarily unavailable) [pid 7754] poll([{fd=12, events=POLLIN}, {fd=11, events=POLLIN}, {fd=6, events=POLLIN}, {fd=9, events=POLLIN}, {fd=224, events=POLLIN|POLLOUT}, {fd=222, events=POLLIN}, {fd=125, events=POLLIN|POLLOUT}, {fd=29, events=POLLIN}], 8, 50) = 2 ([{fd=224, revents=POLLOUT}, {fd=125, revents=POLLOUT}]) [pid 7754] gettimeofday({1320117856, 192404}, NULL) = 0 [pid 7754] read(222, 0x7f397c978d08, 408) = -1 EAGAIN (Resource temporarily unavailable) [pid 7754] read(29, 0x7f397cad2268, 29368) = -1 EAGAIN (Resource temporarily unavailable) [pid 7754] poll([{fd=12, events=POLLIN}, {fd=11, events=POLLIN}, {fd=6, events=POLLIN}, {fd=9, events=POLLIN}, {fd=224, events=POLLIN|POLLOUT}, {fd=222, events=POLLIN}, {fd=125, events=POLLIN|POLLOUT}, {fd=29, events=POLLIN}], 8, 50) = 2 ([{fd=224, revents=POLLOUT}, {fd=125, revents=POLLOUT}]) [pid 7754] gettimeofday({1320117856, 192740}, NULL) = 0 [pid 7754] read(222, 0x7f397c978d08, 408) = -1 EAGAIN (Resource temporarily unavailable) [pid 7754] read(29, 0x7f397cad2268, 29368) = -1 EAGAIN (Resource temporarily unavailable) [pid 7754] poll([{fd=12, events=POLLIN}, {fd=11, events=POLLIN}, {fd=6, events=POLLIN}, {fd=9, events=POLLIN}, {fd=224, events=POLLIN|POLLOUT}, {fd=222, events=POLLIN}, {fd=125, events=POLLIN|POLLOUT}, {fd=29, events=POLLIN}], 8, 50) = 2 ([{fd=224, revents=POLLOUT}, {fd=125, revents=POLLOUT}]) [pid 7754] gettimeofday({1320117856, 193063}, NULL) = 0 [pid 7754] read(222, 0x7f397c978d08, 408) = -1 EAGAIN (Resource temporarily 
unavailable) [pid 7754] read(29, 0x7f397cad2268, 29368) = -1 EAGAIN (Resource temporarily unavailable) [pid 7754] poll([{fd=12, events=POLLIN}, {fd=11, events=POLLIN}, {fd=6, events=POLLIN}, {fd=9, events=POLLIN}, {fd=224, events=POLLIN|POLLOUT}, {fd=222, events=POLLIN}, {fd=125, events=POLLIN|POLLOUT}, {fd=29, events=POLLIN}], 8, 50) = 2 ([{fd=224, revents=POLLOUT}, {fd=125, revents=POLLOUT}]) [pid 7754] gettimeofday({1320117856, 193386}, NULL) = 0 [pid 7754] read(222, 0x7f397c978d08, 408) = -1 EAGAIN (Resource temporarily unavailable) [pid 7754] read(29, 0x7f397cad2268, 29368) = -1 EAGAIN (Resource temporarily unavailable) [pid 7754] poll([{fd=12, events=POLLIN}, {fd=11, events=POLLIN}, {fd=6, events=POLLIN}, {fd=9, events=POLLIN}, {fd=224, events=POLLIN|POLLOUT}, {fd=222, events=POLLIN}, {fd=125, events=POLLIN|POLLOUT}, {fd=29, events=POLLIN}], 8, 50) = 2 ([{fd=224, revents=POLLOUT}, {fd=125, revents=POLLOUT}]) [pid 7754] gettimeofday({1320117856, 193710}, NULL) = 0 [pid 7754] read(222, 0x7f397c978d08, 408) = -1 EAGAIN (Resource temporarily unavailable) [pid 7754] read(29, 0x7f397cad2268, 29368) = -1 EAGAIN (Resource temporarily unavailable) [pid 7754] poll([{fd=12, events=POLLIN}, {fd=11, events=POLLIN}, {fd=6, events=POLLIN}, {fd=9, events=POLLIN}, {fd=224, events=POLLIN|POLLOUT}, {fd=222, events=POLLIN}, {fd=125, events=POLLIN|POLLOUT}, {fd=29, events=POLLIN}], 8, 50) = 2 ([{fd=224, revents=POLLOUT}, {fd=125, revents=POLLOUT}]) [pid 7754] gettimeofday({1320117856, 194033}, NULL) = 0 [pid 7754] read(222, 0x7f397c978d08, 408) = -1 EAGAIN (Resource temporarily unavailable) [pid 7754] read(29, 0x7f397cad2268, 29368) = -1 EAGAIN (Resource temporarily unavailable) [pid 7754] poll([{fd=12, events=POLLIN}, {fd=11, events=POLLIN}, {fd=6, events=POLLIN}, {fd=9, events=POLLIN}, {fd=224, events=POLLIN|POLLOUT}, {fd=222, events=POLLIN}, {fd=125, events=POLLIN|POLLOUT}, {fd=29, events=POLLIN}], 8, 50) = 2 ([{fd=224, revents=POLLOUT}, {fd=125, revents=POLLOUT}]) [pid 7754] gettimeofday({1320117856, 194893}, NULL) = 0 [pid 7754] read(222, 0x7f397c978d08, 408) = -1 EAGAIN (Resource temporarily unavailable) [pid 7754] read(29, 0x7f397cad2268, 29368) = -1 EAGAIN (Resource temporarily unavailable) [pid 7754] poll([{fd=12, events=POLLIN}, {fd=11, events=POLLIN}, {fd=6, events=POLLIN}, {fd=9, events=POLLIN}, {fd=224, events=POLLIN|POLLOUT}, {fd=222, events=POLLIN}, {fd=125, events=POLLIN|POLLOUT}, {fd=29, events=POLLIN}], 8, 50) = 2 ([{fd=224, revents=POLLOUT}, {fd=125, revents=POLLOUT}]) > > It seems that fd 224 and 125 are ready , but it read fd 29 and 222, with -1, then fall into infinite loop. > > -- > - Davies > > ------------------------------------------------------------------------------ > RSA(R) Conference 2012 > Save $700 by Nov 18 > Register now > http://p.sf.net/sfu/rsa-sfdev2dev1 > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > -- - Davies ------------------------------------------------------------------------------ RSA(R) Conference 2012 Save $700 by Nov 18 Register now http://p.sf.net/sfu/rsa-sfdev2dev1 _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
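The strace above shows poll() returning immediately because two descriptors report POLLOUT, while the subsequent reads on other descriptors hit EAGAIN, so the loop spins without making progress. One common cause of that pattern in any event loop is registering POLLOUT for sockets that currently have nothing to write. A minimal sketch of the general fix, written with Python's select module purely for illustration (this is not the mfschunkserver source):

```python
# Sketch: only ask poll() for POLLOUT on sockets that actually have queued output.
# A socket with an empty send queue is almost always writable, so registering POLLOUT
# unconditionally makes poll() return instantly and the event loop busy-spins at 100% CPU.
import select
import socket

class Conn:
    def __init__(self, sock: socket.socket):
        self.sock = sock
        self.sock.setblocking(False)
        self.outbuf = b""                      # bytes waiting to be sent

    def wanted_events(self) -> int:
        events = select.POLLIN                 # always interested in incoming data
        if self.outbuf:                        # ...but in writability only while data is queued
            events |= select.POLLOUT
        return events

def poll_once(conns, timeout_ms=50):
    poller = select.poll()
    by_fd = {}
    for c in conns:
        poller.register(c.sock.fileno(), c.wanted_events())
        by_fd[c.sock.fileno()] = c
    for fd, revents in poller.poll(timeout_ms):
        c = by_fd[fd]
        if revents & select.POLLIN:
            c.sock.recv(65536)                 # read what is available
        if revents & select.POLLOUT:
            sent = c.sock.send(c.outbuf)       # flush queued output
            c.outbuf = c.outbuf[sent:]         # POLLOUT interest drops once outbuf is empty
```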