From: Davies L. <dav...@gm...> - 2011-11-08 09:43:04
In the mfsmaster, when dumping metadata to disk, it first renames metadata.mfs.back to metadata.mfs.back.tmp, then writes a new metadata.mfs.back, and finally unlinks metadata.mfs.back.tmp and metadata.mfs. If the write to metadata.mfs.back fails, we are left with a partial or damaged file, and the previous metadata is lost along with it. Why not write to metadata.mfs.back.tmp first and, only on success, rename it to metadata.mfs.back and delete metadata.mfs? I think that would be much safer.

In the metalogger, the downloaded metadata is not verified; if it downloads a damaged copy, all the metadata is lost. Having the metalogger keep several recent metadata files would also be safer.

I will try to fix the above in my deployments. Losing metadata is completely unacceptable.

On Tue, Nov 8, 2011 at 7:24 AM, Nolan Eakins <no...@ea...> wrote:
> I have a little server setup to give MooseFS a try. This server ran
> out of disk space on its root partition a day or two ago. This
> partition is where mfsmaster stored all of its data. Needless to say,
> mfsmaster did something dumb and zeroed out all the metadata files
> under /var/lib/mfs. It even looks like the metadata logger on another
> machine was affected since it didn't store anything recoverable.
>
> So my entire moosefs file tree is completely hosed now, all because
> mfsmaster can't handle a full disk.
>
> Other than this, I've been happy with MooseFS.
>
> Regards,
> Nolan

--
- Davies
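The safer sequence proposed above (write the new data to a temporary file, flush it to disk, then atomically rename it over the old copy) can be sketched as follows. This is an illustrative Python sketch, not MooseFS's actual C implementation; the function name and default path are assumptions for the example:

```python
import os

def save_metadata_safely(data: bytes, path: str = "metadata.mfs.back") -> None:
    """Write new metadata to a temp file first; replace the old copy
    only after the write has fully succeeded, so a failed or partial
    write never destroys the previous good metadata."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # ensure the bytes are actually on disk
    # rename is atomic on POSIX filesystems: the old file survives
    # any crash or error that happens before this point
    os.replace(tmp, path)
```

If the write or fsync fails (for example with ENOSPC on a full root partition, as in the report quoted above), the exception propagates and the old metadata.mfs.back is left untouched; only the .tmp file is garbage.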