From: Michał B. <mic...@co...> - 2012-04-20 19:54:28
Hi Group!

As already mentioned, we'd just like to stress how important it is to make regular metadata backups. The upcoming 1.6.26 version will keep several metadata files back on the disk (as already done by Ben), but it would still be wise to copy them somewhere else.

Full rack awareness of the kind you'd expect would probably be introduced in the way suggested by Ken, with levelgoals. But remember that running such a MooseFS installation would require a really fast connection between the two sites, which may be difficult to accomplish. Synchronizing two MooseFS installations may well be better done with a simple rsync.

PS. @Ken, note that your solution (at least when we looked at it) didn't have support for levelgoals in mfsmetarestore (did you run a recovery test?) or in mfsmetadump.

Kind regards
Michał Borychowski
MooseFS Support Manager

-----Original Message-----
From: Steve Wilson [mailto:st...@pu...]
Sent: Wednesday, April 04, 2012 7:30 PM
To: moo...@li...
Subject: Re: [Moosefs-users] Backup strategies

On 04/03/2012 03:56 PM, Steve Thompson wrote:
> OK, so now you have a nice and shiny and absolutely massive MooseFS
> file system. How do you back it up?
>
> I am using Bacula and divide the MFS file system into separate areas
> (e.g. directories beginning with a, those beginning with b, and so on)
> and use several different chunkservers to run the backup jobs, on the
> theory that at least some of the data is local to the backup process.
> But this still leaves the vast majority of data to travel the network
> twice (a planned dedicated storage network has not yet been
> implemented). This results in pretty bad backup performance and high
> network load. Any clever ideas?
>
> Steve

We have four 22 TB and one 14 TB MooseFS volumes that we back up onto disk-based backup servers. We used to use rsnapshot, but now we use rsync in combination with ZFS snapshots. Each evening, before our backup run, we take a snapshot on the backup filesystem and label it with the date.
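That nightly dated-snapshot step might look something like this (the dataset name `backup/mfs` is an assumed example, not from the thread):

```shell
#!/bin/sh
# Take a dated ZFS snapshot of the backup dataset before the rsync run.
# NOTE: the dataset name "backup/mfs" is an assumed example.
DATASET="backup/mfs"
SNAPNAME="$DATASET@$(date +%Y-%m-%d)"

# The snapshot becomes the frozen "full backup" for that day.
zfs snapshot "$SNAPNAME"
```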
Then we run rsync on the volumes being backed up, and only what has been modified since the previous backup is transferred over the network. The result is the equivalent of taking a full backup each night, and it's very easy to recover data. I also use ZFS compression and dedup to help conserve space on our backup servers. The dedup option is especially helpful when a user decides to rename a large directory: rsync may have to bring it across the network and write it to the filesystem, but ZFS will recognize the data as duplicates of already stored data.

Steve

_______________________________________________
moosefs-users mailing list
moo...@li...
https://lists.sourceforge.net/lists/listinfo/moosefs-users