From: Filipe B. <fil...@gm...> - 2009-06-03 14:11:38
Hi Alex,

On Wed, Jun 3, 2009 at 02:24, Alex Harrington <al...@lo...> wrote:
>> I just bought some big disks to a spare server I have, I plan
>> to create a local RAID5 volume to use as the BackupPC main
>> repository and leave BackupPC working there on the local
>> disks. To do my off-sites, I plan to use rsync -aH to
>> duplicate that data into the external USB disks. If I need to
>
> I think you'll find that that won't work. Whenever I've tried to copy a
> relatively small backuppc install, the number of hardlinks cause tools
> like rsync to run out of RAM and fail.

Well, it will certainly take a long time and use a lot of RAM, but that
does not mean it will not work. I have 16GB of RAM on my backup server,
and I am presently using this strategy on another machine to clone a
volume to an external disk that is sent offsite. The volume has
18,281,269 inodes. A "rsync -aH --delete" run usually takes between 12h
and 14h to complete, and it certainly uses a lot of RAM, but it does
work.

> I've yet to see a good way of doing that for multi-terabyte pools
> documented - and at present we basically image the whole drive that
> backuppc is on and offsite that, but it's slow and a manual process.

Right now I would say ZFS remote replication is the best solution for
this problem, and it is what I would recommend for "multi-terabyte
pools" (more than 10TB). However, I think rsync is good enough for most
deployments out there, and it is certainly simple enough that any
sysadmin knows how to handle it.

Cheers,
Filipe
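P.S. For anyone following along, here is a minimal sketch (throwaway temp
directories, not paths from an actual pool) of why the -H flag matters for a
BackupPC repository: without it, the offsite copy silently loses the hard
links that BackupPC's pooling depends on, and the copy balloons in size.

```shell
# Hypothetical demo on Linux (GNU stat): show that rsync -H carries
# hard links over to the destination.
src=$(mktemp -d)
dst=$(mktemp -d)
echo data > "$src/a"
ln "$src/a" "$src/b"          # a and b now share a single inode
rsync -aH "$src/" "$dst/"     # -a archive mode, -H preserve hard links
stat -c %i "$dst/a" "$dst/b"  # prints the same inode number twice
```

Run the same thing without -H and the two files in $dst get distinct inodes,
which is exactly the duplication you do not want in a pool clone.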