From: Les M. <les...@gm...> - 2009-08-17 21:38:43
Jeremy Mann wrote:
>> What operations are you watching to see these numbers? The only one
>> where network bandwidth matters much is the initial copy of a new
>> host. The rest of the time you are mostly doing comparisons. BackupPC
>> will be slower than native rsync because it is in perl and because it
>> is working with a compressed copy for the comparison. And perhaps you
>> didn't use the --ignore-times option on the runs you are using for
>> comparison. BackupPC does this on fulls and it will slow things down
>> to the speed that the remote can read the whole disk for the checksum
>> comparisons - but it gives you an integrity check on your pooled copy.
>
> I'm watching a live output of Ganglia showing network usage while the
> backups are going. Also, simple math: I just finished one full backup,
> 16 GB in 143 minutes. That's simply unacceptable for a full backup.

You still haven't said whether this is the first run, where the files
are actually copied, or not - if not, you shouldn't expect much network
activity. How long would it take the target to read all its files?
Something like 'time tar -cf - / | cat >/dev/null' would be a
reasonable test. (Don't do -cf /dev/null with gnutar because it will
cheat and not read the files.) That would be the fastest an rsync run
could possibly complete with the --ignore-times option, even if you
don't transfer any data or create new files on the server.

> Now if you tell me my hardware isn't fast enough, the BackupPC server
> is a dual Opteron 2.2 GHz with 8 GB RAM and 24 300GB drives in a
> 3ware RAID5 array, it isn't.

Either end can limit the speed. How many concurrent runs do you do?
Also, you should expect much faster rates if you have a few large files
than if you have millions of tiny ones.
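In case it helps, here is that read-speed test as a small script, with
the math on your numbers worked in (the path is just an example - point
it at whatever tree the client actually backs up):

  #!/bin/sh
  # Time how long the client needs to read every file. This bounds how
  # fast a full with --ignore-times can finish, because the remote end
  # has to read and checksum everything even when nothing is sent.
  time tar -cf - / 2>/dev/null | cat >/dev/null

  # Note: with GNU tar, 'tar -cf /dev/null /' detects /dev/null and
  # skips reading the file contents, which is why we pipe through cat.

  # For scale: 16 GB in 143 minutes works out to roughly
  #   16384 MB / (143 * 60 s) = ~1.9 MB/s end to end.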
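And to answer the concurrency question from your end: I believe the
stock config caps simultaneous backups with $Conf{MaxBackups}, so
something like this should show your setting (the config path varies by
install, so adjust it):

  # Show how many backups BackupPC will run at once (the path here is
  # a guess - Debian-style installs keep it under /etc/backuppc/):
  grep MaxBackups /etc/BackupPC/config.pl

--
Les Mikesell
les...@gm...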