From: Nolan E. <no...@ea...> - 2011-11-07 23:24:37
|
I have a little server set up to give MooseFS a try. This server ran out of disk space on its root partition a day or two ago. This partition is where mfsmaster stored all of its data. Needless to say, mfsmaster did something dumb and zeroed out all the metadata files under /var/lib/mfs. It even looks like the metadata logger on another machine was affected, since it didn't store anything recoverable.

So my entire MooseFS file tree is completely hosed now, all because mfsmaster can't handle a full disk.

Other than this, I've been happy with MooseFS.

Regards,
Nolan |
From: Davies L. <dav...@gm...> - 2011-11-08 09:43:04
|
In mfsmaster, when dumping metadata to disk, it first renames metadata.mfs.back to metadata.mfs.back.tmp, then writes to metadata.mfs.back, and finally unlinks metadata.mfs.back.tmp and metadata.mfs.

If the write to metadata.mfs.back fails, we are left with a partial or damaged copy, and the previous metadata is lost along with it.

Why not write to metadata.mfs.back.tmp first and, only if that succeeds, rename it to metadata.mfs.back and delete metadata.mfs? I think this way is much safer.

In the metalogger, the downloaded metadata is not checked; if it downloads a damaged copy, then all the metadata will be lost. Letting the metalogger keep several recent metadata copies would be safer.

I will try to fix the above in my deployments; losing metadata is unacceptable.

On Tue, Nov 8, 2011 at 7:24 AM, Nolan Eakins <no...@ea...> wrote:
> [quoted message trimmed]

------------------------------------------------------------------------------
RSA(R) Conference 2012
Save $700 by Nov 18
Register now
http://p.sf.net/sfu/rsa-sfdev2dev1
_______________________________________________
moosefs-users mailing list
moo...@li...
https://lists.sourceforge.net/lists/listinfo/moosefs-users

--
- Davies |
From: Davies L. <dav...@gm...> - 2011-11-08 10:14:34
Attachments:
meta.patch
|
I have made a patch to solve this; it is attached.

On Tue, Nov 8, 2011 at 5:42 PM, Davies Liu <dav...@gm...> wrote:
> [quoted message trimmed]

--
- Davies |
From: Davies L. <dav...@gm...> - 2011-11-09 05:32:06
Attachments:
meta.patch
|
New patch, with a bug fixed.

On Tue, Nov 8, 2011 at 6:14 PM, Davies Liu <dav...@gm...> wrote:
> [quoted message trimmed]

--
- Davies |
From: Michał B. <mic...@ge...> - 2012-01-12 10:05:12
|
Hi Davies!

Sorry for the late reply, but only now have we had time to investigate your patch. The change will be implemented in one of the upcoming releases. We didn't use your patch in the exact shape you sent it, but we were highly inspired by it :) So again, thank you very much for your commitment!

Kind regards
Michał

-----Original Message-----
From: Davies Liu [mailto:dav...@gm...]
Sent: Wednesday, November 09, 2011 6:32 AM
To: Nolan Eakins
Cc: moo...@li...
Subject: Re: [Moosefs-users] Lost all metadata on master w/ full disk

[quoted message trimmed] |
From: Davies L. <dav...@gm...> - 2011-11-08 10:17:16
|
On Tue, Nov 8, 2011 at 7:24 AM, Nolan Eakins <no...@ea...> wrote:
> Needless to say, mfsmaster did something dumb and zeroed out all the
> metadata files under /var/lib/mfs.

You should search for metadata.mfs.emergency in these directories:

/
/tmp
/var/
/usr/
/usr/share
/usr/local/
/usr/local/var/
/usr/local/var/share

--
- Davies |
From: Steve <st...@bo...> - 2011-11-09 21:29:10
|
And why can't each chunkserver also be a logger by default?

-------Original Message-------
From: Davies Liu
Date: 09/11/2011 19:53:33
To: Nolan Eakins
Cc: moo...@li...
Subject: Re: [Moosefs-users] Lost all metadata on master w/ full disk

[quoted message trimmed] |
From: Allen L. <lan...@gm...> - 2011-11-10 18:09:19
|
What makes you think it can't? I don't think there's any reason you can't run mfsmetalogger on a chunk server, though I haven't tried.

On 11/9/2011 4:28 PM, Steve wrote:
> and why cant each chunkserver be a logger also by default |
From: Steve <st...@bo...> - 2011-11-10 19:29:03
|
Sure, it's possible to start a metalogger yourself. It just seems as though it could have been part of the chunkservers.

-------Original Message-------
From: Allen Landsidel
Date: 10/11/2011 18:10:23
To: moo...@li...
Subject: Re: [Moosefs-users] Lost all metadata on master w/ full disk

[quoted message trimmed] |
From: Travis H. <tra...@tr...> - 2011-11-10 20:59:21
|
I think the metalogger was added in 1.5.x; before that it did not exist, so there would be some backwards-compatibility considerations in just embedding it.

But I think the main motivation for the metalogger is to keep metadata files similar to those on the master server, so that in theory a metalogger node can be converted into a master server. E.g., if all chunkservers and mfsmount clients use a DNS record to locate the master server, that record can be updated to point at the metalogger instance that is to be promoted to become the new master. On losing the master, you rename the changelog_ml.* files to changelog.* on the metalogger instance, run mfsmetarestore, boot up mfsmaster, and continue.

In this sense, the metalogger machine should have a similar amount of RAM available as the master machine, so it can run the mfsmaster process. And for larger filesystems the assumption would be that not every chunkserver in the cluster has that much RAM, so dedicated metalogger server(s) were likely assumed.

On 11-11-10 2:28 PM, Steve wrote:
> Sure its possible to start a metalogger your self. It just seems as though
> it could have been part of the chunkservers |
From: Allen L. <lan...@gm...> - 2011-11-10 20:42:29
|
That would really bone a lot of configs, hardware-wise.

My chunk servers all run with 512M of RAM, not the 16G or whatever I have in the master and the metalogger. It's "proper" that the chunkserver does the chunk-serving job and the metalogger does the metalogging job, and that the two aren't mixed up into a single process/service.

On 11/10/2011 2:28 PM, Steve wrote:
> Sure its possible to start a metalogger your self. It just seems as though
> it could have been part of the chunkservers

[rest of quoted thread trimmed] |
From: Steve <st...@bo...> - 2011-11-10 23:09:54
|
Ahhhhh, yes. I wasn't thinking of the conversion to master, more just security of the data, utilising existing services and hardware.

-------Original Message-------
From: Travis Hein
Date: 10/11/2011 21:00:48
To: moo...@li...
Subject: Re: [Moosefs-users] Lost all metadata on master w/ full disk

[quoted message trimmed] |
From: wkmail <wk...@bn...> - 2011-11-11 00:01:40
|
On 11/10/2011 12:42 PM, Allen Landsidel wrote:
> My chunk servers all run with 512M of RAM, not the 16G or whatever I
> have in the master and the metalogger. It's "proper" that the
> chunkserver does the chunk serving job, and the metalogger does the
> metalogging job, and that the two aren't all mixed up into a single
> process/service.

We run the metalogger process on one of our stronger chunkservers IN ADDITION to a dedicated metalogger machine that's set up as a backup.

We figure having an extra copy of the metadata can't hurt, as long as it's not impacting the chunkserver.

-bill |
From: Allen L. <lan...@gm...> - 2011-11-11 13:43:04
|
Sure, that's the "right way" to do it. You can run the metalogger anywhere you want, but it shouldn't be "integrated" with the chunk server, as the OP seemed to be asking for.

On 11/10/2011 7:01 PM, wkmail wrote:
> We run the Metalogger process on one of our stronger chunkservers IN
> ADDITION to a defined Meta Logger machine thats setup for a backup.
>
> We figure, having an extra copy of the metadata can't hurt and if its
> not impacting the chunkerserver ..... |