From: <ha...@gm...> - 2011-12-10 23:54:34
On Sat, Dec 10, 2011 at 7:45 PM, member horvath <me...@th...> wrote:
> I have considered the archive function, however I wasn't aware that
> the changes would be rsync'd.
> I thought it would create a tar archive of the most recent backup then
> xfer that to the archive host.
> Am I wrong in thinking this?
> I also need to ensure data integrity by checksumming the remote copy
> against the onsite one.

The archive function just creates the tar snapshot of the hosts you specify. You can point that wherever you like, but making it local is faster. Once it's complete, your script should then transfer it, however and wherever you like. Rsync just happens to be what most people would use for this, and that protocol includes checksum verification. If you use a different transport mechanism, you may need to include a checksum routine in your script.

Regarding your scheduling requirements, the best procedure is to get to know the BackupPC way and work within that, rather than imposing rules that probably came from previous systems. Note that within the BPC data store there is actually no physical difference between incremental and full backup sets - the only difference is the algorithm used to determine whether or not files have been updated, and that depends on the transport protocol. You can configure things conservatively, for example "keep at least this many fulls".

Also note that the archive tar snapshots will not have the space savings inherent in BPC's deduplicated data storage, so using this method will likely make your offsite storage requirements many times larger than the onsite BPC server's. Not necessarily a problem, as disk space is cheap these days, but something to plan for.
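As for "keep at least this many" - that kind of conservative retention policy is expressed directly in config.pl. A fragment might look like the following; the variable names are BackupPC's, but the values are purely illustrative, not recommendations:

```perl
# Illustrative config.pl fragment - tune the numbers to your own needs:
$Conf{FullPeriod}  = 6.97;   # aim for a "full" roughly every 7 days
$Conf{IncrPeriod}  = 0.97;   # an "incremental" roughly daily
$Conf{FullKeepCnt} = 4;      # keep at least this many fulls
$Conf{IncrKeepCnt} = 6;      # and this many incrementals
```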
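If you do end up using a transport without built-in verification, the "checksum routine in your script" part might look something like the sketch below. Everything here is hypothetical - the paths, the file name, and the commented-out rsync destination are placeholders, not BackupPC defaults - but it shows the pattern: transfer the tarball, then compare a SHA-256 digest of each end.

```shell
#!/bin/sh
# Hypothetical post-archive transfer sketch. Assumes sha256sum is
# available (coreutils). Paths and hosts below are placeholders.

verify_copy() {
    # Compare a file's SHA-256 digest against an expected digest string.
    file=$1; expected=$2
    actual=$(sha256sum "$file" | awk '{print $1}')
    [ "$actual" = "$expected" ]
}

ARCHIVE=/tmp/archive_test.tar   # stand-in for the real tar snapshot
# The actual transfer would go here; note rsync already checksums
# each file in transit, so this explicit check is belt-and-braces:
# rsync -av "$ARCHIVE" offsite:/backups/
# remote_sum=$(ssh offsite sha256sum /backups/archive_test.tar | awk '{print $1}')

# Demonstrate the verification step locally:
printf 'payload\n' > "$ARCHIVE"
sum=$(sha256sum "$ARCHIVE" | awk '{print $1}')
if verify_copy "$ARCHIVE" "$sum"; then
    echo "checksum OK"
else
    echo "checksum MISMATCH" >&2
fi
```

In a real script you would compare the local digest against one computed on the remote host (e.g. over ssh) rather than against itself.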