From: John R. <rou...@re...> - 2008-01-28 18:58:57
Hello R: Since the Tao brings up a few things I do, I expanded on your post a bit. I am just getting our BackupPC install working before we switch over to it, so some of this is work in progress.

On Mon, Jan 28, 2008 at 11:46:03AM -0600, R. Quenett wrote:
> One of the specifics I need to learn a lot more about is the verification
> of the data I am backing up. Do existing versions of backuppc make it
> possible for a user to attack this area from within backuppc itself? Is
> anyone using an approach involving, say, md5sum that they would be
> willing to share?

I think you need integrity checks at a higher level (or the ability to tell the backup system that file X must always have a certain cryptographic checksum, size, ...). Something like tripwire fills that bill, or "rpm verify" if you use an rpm-based system and don't have non-rpm'ed files that need their integrity checked.

Although the history function of BackupPC can provide a warning of unexpected data change, it lives in the web interface and is not easily accessible to automated tools. IIRC the BackupPC roadmap does discuss using the history for a tripwire/integrity-like purpose in the future.

In the case mentioned in the Tao, with people intentionally damaging files here and there (been there, and it's a major pain), detection is the best you can do. However, if the damage is done inside, say, a database filled with records, tripwire-like tools that work at the filesystem level won't help, as the file is always changing.

Also, just because you are backing up consistent bits doesn't mean you can retrieve them. Monthly I generate tars of the last backups for different hosts and shares and use "tar -dzvf backup.tgz" to compare them against the bits on disk. Something like:

  sudo -u backup /tools/BackupPC/bin/BackupPC_tarCreate -h HOSTNAME \
    -n -1 -s /var/log . | sudo tar -dvf - -C /var/log

produces:

  ...
  ./wtmp: Size differs
  ./wtmp.1
  ./yum.log
  ./yum.log.1
  ./yum.log.2
  ./nagios/.bash_history
  ./nagios/.ssh/
  ./nagios/.viminfo
  ./nagios/archives/
  ...

You will have discrepancies (e.g. wtmp, lastlog, files that get updated/rotated after backups, etc.). But there is no way to get a full-system verify without some discrepancy unless you shut the system down (or take a full system snapshot, if your filesystem supports it, and back up the snapshot) and run a backup/verify cycle before you bring it up.

For offsite backups, I am testing the use of filled incrementals (note to the BackupPC author: I hope these don't go away) for hosts co-located with the backup server. Then I rsync the incremental directory to a remote system. We only have a handful of systems like that, so it works fine.

For addressing historical backups, we will be doing disk copies of the BackupPC spool directory for deeper archival storage, with 6 months in online storage (dailies for the most recent 2 weeks, weeklies for another two weeks, then monthlies out to 6 months).

We have discussed using an encrypted filesystem for the BackupPC pool, but normal physical security of the drives has served us well so far, and we are concerned about speed issues using an encrypted drive. Does anybody have any experience w/ encrypted drives? Plus there is the whole key-management issue: nothing like having an old backup tape that can't be restored because the encryption key is missing.

--
rouilj
John Rouillard
System Administrator
Renesys Corporation
603-643-9300 x 111