From: dan <dan...@gm...> - 2008-01-11 16:26:00
Right, a FULL rsync will not transfer the data, but it will do the block checksums. If you do an incremental backup, rsync will just list files, check their timestamp and size, and be done. I find that on any filesystem that updates the timestamp on write, rsync incrementals are perfect. I actually do one full backup per month and then incrementals every other day of the month.

The previous post mentioned that the compression setting applies to the pool. In fact you can also compress the rsync stream with the -z flag, which works great on limited-bandwidth connections as long as you have a decent CPU. The ssh stream can also be compressed, but I would recommend not using ssh compression if you are already using rsync compression, because the gains are minimal for the CPU hit. It takes some fine tuning, but you can also specify the gzip compression level with --compress-level=#, where # is 0-9 and 9 is best; 3 is about the best compression per CPU cycle, and 4-9 bring only minor gains.

Just remember that if you store the pool compressed on disk, each file will get compressed on the client, decompressed on the server, then recompressed on the server, so performance can take a significant hit from adding compression. Luckily, the rsync stream and decompression run in a separate process in BackupPC, so a dual-core processor helps a lot.

On Jan 9, 2008 12:18 PM, Les Mikesell <le...@fu...> wrote:
> Jinshi wrote:
> > Thanks for the suggestion. Regarding the network transfer, why do I
> > still need to transfer all the files? The new full backup is following
> > a previous full backup.
>
> An rsync 'full' does a block-checksum comparison on all the files. It
> doesn't actually transfer the data, but it does read everything from the
> disk, which may be your bottleneck now.
>
> > So all the files are already in the "pool" and do not need to be
> > transferred again. It should only transfer the new files, of which I
> > have none. 8 hours is too long to just generate a file list and
> > transfer almost no files over the network. So, what is BackupPC doing?
>
> That's the difference between rsync fulls and incrementals. In an
> incremental run it will skip anything where the directory timestamp and
> length match the previous copy. In a full it does a check of the
> contents that doesn't take a lot of bandwidth but may take a long time,
> plus it rebuilds a complete directory tree in your archive.
>
> > I understand that 6MB/s network transfer may be close to "as good as I
> > can get". If I do compress with rsync (that is, if I can be convinced
> > that I need to transfer all 170GB every time I do a full backup), when
> > will the original pool files be deleted (to free up space)? If I later
> > decide not to do the compression, when will the cpool files be deleted?
>
> Compression has to do with the pooled file copy, not the rsync transfer.
> BackupPC's version of rsync is capable of comparing your compressed
> archive file against the uncompressed remote target, so if they match
> nothing is transferred again. If you have a real network bandwidth
> issue (like backing up over a WAN or the internet) you can enable ssh
> compression.
>
> --
> Les Mikesell
> les...@gm...
>
> _______________________________________________
> BackupPC-users mailing list
> Bac...@li...
> List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki: http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
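[Editor's note: for BackupPC itself, Les's suggestion to enable ssh compression can be expressed in config.pl. The fragment below is a hedged sketch assuming BackupPC 3.x-style configuration; the $Conf{RsyncClientCmd} value shown is the common default and may differ on your install.]

```perl
# Sketch only -- assumes BackupPC 3.x config.pl conventions.
# Adding -C to the ssh command turns on ssh-level compression for the
# rsync transport (useful over a WAN, per Les's note above):
$Conf{RsyncClientCmd} = '$sshPath -q -x -C -l root $host $rsyncPath $argList+';
```

If you instead compress the rsync stream itself (dan's -z / --compress-level flags, where your transfer method supports them), leave ssh compression off so the data is not compressed twice.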