From: Les M. <les...@gm...> - 2010-07-07 12:39:39
Udo Rader wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Hi,
>
> we need to set up a "reasonable" backup solution for one of our
> customers, consisting of a HQ office and two branches. The most
> important requirement is off-site backup. Backup volume will be
> approximately 50 GB per office for a full backup, meaning 150 GB in total.
>
> So far, we have used BackupPC for almost every backup solution, but this
> time I am not 100% sure because of the "WAN" side of life.
>
> What I am wondering is whether it is possible to retrieve files from the
> clients more intelligently.
>
> If I understand BackupPC's pooling concept correctly, pooling takes
> place after the file has been transferred to the server, when it is
> compared to other files in the pool.
>
> Yet what I have in mind would be "don't transfer a file if we already
> have it in the pool", thus drastically reducing the amount of data
> transferred over the net.
>
> IIRC, the rsync protocol per se should allow just that, but I am unsure
> whether BackupPC utilizes it at that level.
>
> So is this "transfer minimization" doable with BackupPC?

Rsync compares only against the last full backup tree on the same host and transfers the differences. Once you get started it will work the way you want, but each new machine will transfer everything on its first run, and if the same file is added on many machines it will be copied separately from each, then merged in the pool.

One thing you might do to help get started would be to take the server on-site to each office for the first full run, then move it to its permanent location. Or, if you have the bandwidth, perhaps you can do the initial run over a weekend.

--
  Les Mikesell
  les...@gm...
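To make the pooling point concrete, here is a minimal sketch (in Python, with made-up names; BackupPC's real pool layout and hashing differ) of server-side pooling by hash-and-hardlink. Note that the file has already crossed the wire before the pool lookup happens, which is why pooling saves disk but not bandwidth:

```python
import hashlib
import os

def pool_file(transferred_path, pool_dir):
    """Hash an already-transferred file and hard-link it into the pool.

    Illustrative only: dedup happens *after* transfer, so identical
    files from many clients are each sent over the network once per
    client, then collapsed into a single pooled copy on disk.
    """
    with open(transferred_path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    pooled = os.path.join(pool_dir, digest)
    if os.path.exists(pooled):
        # Duplicate content: replace this copy with a link to the pool.
        os.remove(transferred_path)
        os.link(pooled, transferred_path)
        return "deduped"
    # First time we see this content: add it to the pool.
    os.link(transferred_path, pooled)
    return "pooled"
```

After two clients upload the same file, both backup trees end up as hard links to one pooled inode, but the bytes were still transferred twice.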
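And on the transfer-minimization side, a toy sketch of why rsync's per-host comparison keeps incremental runs cheap. This is not rsync's actual algorithm (real rsync uses a rolling checksum so matches can occur at any byte offset, plus a strong hash to confirm them); it only hashes aligned blocks of the old file, but it shows how unchanged regions cost almost nothing to send:

```python
def delta(old: bytes, new: bytes, block: int = 4):
    """Simplified rsync-style delta (illustrative sketch only).

    Hash fixed-size blocks of `old`; walk `new`, emitting a block
    reference when the next `block` bytes match a known block, or a
    single literal byte otherwise.
    """
    table = {old[i:i + block]: i for i in range(0, len(old), block)}
    ops, i = [], 0
    while i < len(new):
        chunk = new[i:i + block]
        if len(chunk) == block and chunk in table:
            ops.append(("copy", table[chunk]))     # reuse old data; ~free
            i += block
        else:
            ops.append(("literal", new[i:i + 1]))  # must cross the wire
            i += 1
    return ops

def literal_bytes(ops):
    # Count the bytes that would actually be transferred.
    return sum(1 for kind, _ in ops if kind == "literal")
```

For example, inserting two bytes into the middle of an 8-byte file yields a delta with only 2 literal bytes; the first full run, with an empty "old" tree, is all literals, which is exactly why the initial backup of each new machine transfers everything.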