From: Stas O. <sta...@gm...> - 2010-07-08 19:03:51
Hi.

> Master server flushes metadata kept in RAM to the metadata.mfs.back binary
> file every hour on the hour (xx:00). So probably the best moment would be to
> copy the metadata file every hour on the half hour (30 minutes after the
> dump). Then you would maximally lose 1.5h of data. You can choose any method
> of copying – cp, scp, rsync, etc. Metalogger receives the metadata file
> every 24 hours so it is better to copy metadata from the master server.

Can I also save the change logs and replay them into the metadata? Up to how close to the point of the crash would that let me recover?

> Having metadata backed up you won’t see newly created files, files to
> which some data was appended would come back to the previous size, there
> would still exist deleted files and files you changed names of would come
> back to their previous names (and locations). But still you would have
> information and access to all the files created in the X past years.
>
> If you have not read this entry, we strongly recommend it:
> http://www.moosefs.org/news-reader/items/metadata-ins-and-outs.html

Thanks, now it is much clearer. Speaking of crashes, will the master detect that it has crashed and try to replay the logs itself? Or is it recommended to use the script described at the end of that entry to always check for replayable logs?

Regards.
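
[Editor's note] The hourly copy described in the quoted advice can be sketched as a small cron-driven script. The function name, paths, and naming scheme below are illustrative assumptions, not part of MooseFS itself; only metadata.mfs.back and the xx:00 dump schedule come from the thread.

```shell
# Sketch of the "copy at half past the hour" scheme from the quoted reply.
# backup_mfs_meta, the directory layout, and the timestamp suffix are all
# assumptions -- adjust to your installation.
backup_mfs_meta() {
    src_dir="$1"   # master's data directory containing metadata.mfs.back
    dst_dir="$2"   # directory where hourly copies accumulate
    mkdir -p "$dst_dir" || return 1
    # Timestamped destination name, so each hour's snapshot is kept
    # rather than overwritten.
    cp "$src_dir/metadata.mfs.back" \
       "$dst_dir/metadata.mfs.back.$(date +%Y%m%d%H%M)"
}

# Run from cron 30 minutes after each hourly dump, for example:
# 30 * * * * /usr/local/bin/backup-mfs-meta.sh /var/lib/mfs /backup/mfs-meta
```

Any copy method works in place of cp (scp or rsync for an off-host copy, as the reply notes); the important part is only the half-past-the-hour timing.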
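
[Editor's note] On replaying change logs into the metadata: MooseFS 1.x ships a mfsmetarestore tool for merging a binary dump with changelogs. The invocation below is a hedged sketch, assuming MooseFS 1.6-era flags and the default /var/lib/mfs data directory; check the linked "metadata ins and outs" entry and your version's man page before relying on it.

```shell
# Sketch only: assumes MooseFS 1.6's mfsmetarestore and /var/lib/mfs.
# Guarded so it is a no-op on machines without MooseFS installed.
if command -v mfsmetarestore >/dev/null 2>&1; then
    cd /var/lib/mfs &&
    # Merge the last binary dump with the changelogs into a fresh metadata.mfs:
    mfsmetarestore -m metadata.mfs.back -o metadata.mfs changelog.*.mfs
fi
```

Because changelogs record operations since the last hourly dump, replaying them is what narrows the recovery window from "up to 1.5h lost" toward the point of the crash.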