On 12/11/2012 7:44 PM, Dan Langille wrote:
> On Dec 11, 2012, at 7:11 PM, Phil Stracchino wrote:
>> On 12/11/12 16:47, ccspro wrote:
>>> And make sure your Catalog backup has the lowest priority (say 99) so
>>> it will be completed after all Jobs are done for the day.
>> Personally, I don't bother with a catalog backup job separate from my DB
>> backups, since my Bacula catalog is by far the largest schema in my DB
>> anyway (96% of both total application data volume and total row
>> count). I just back up the DB last and have redundant, replicated DB
>> servers.
> I dump my DB to a plain-text file daily. I then rsync that file and the
> *.conf files to three other servers: two offsite, one on-site.
> That file also gets backed up.
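That dump-and-rsync scheme might look roughly like the following sketch.
The host names, paths, and the choice of pg_dump are my assumptions;
substitute mysqldump and your own destinations as appropriate.

```shell
#!/bin/sh
# Sketch only -- hosts, paths, and pg_dump are assumptions, not Dan's
# actual setup.
DUMP=/var/backups/bacula/bacula-catalog-$(date +%F).sql

# Plain-text dump of the catalog database
pg_dump bacula > "$DUMP"

# Push the dump and the Bacula *.conf files to three mirror servers
# (two offsite, one on-site)
for host in mirror1.example.org mirror2.example.org mirror3.example.net; do
    rsync -az "$DUMP" /etc/bacula/*.conf "$host":/srv/bacula-mirror/
done
```

Because the dump is plain text, it also gets swept up by the regular file
backups, which gives one more layer of redundancy.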
I run the DB, as well as bacula-dir and bacula-sd, as KVM VMs in a
2-node Corosync/Pacemaker cluster, and so use a different approach.
Each VM is installed on a separate DRBD device and uses local LVM
volumes on each node for /tmp, spool area, etc. Periodically, the VMs
are taken down, the DRBD devices are dd'd to USB hard drives, and the
libvirt XML definitions of the VMs are copied alongside them; one copy
goes in a local fire safe, the other offsite.
These VM backups correspond to the offsite tapes and catalog as of the
date/time the offsite backups are written. So, for true disaster
recovery (i.e., fire, tornado, etc.), it is simply a matter of dd'ing
the VM backups back onto DRBD devices on the new hardware and
recreating the VMs from the backed-up libvirt XML files. The catalog
and everything else will then be current as of the offsite backups.
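A rough sketch of those backup and restore steps, for the curious. The
/dev/drbd* numbering, VM names, and USB mount point are all assumptions;
adapt to your own cluster.

```shell
#!/bin/sh
# Sketch only -- device paths, VM names, and /mnt/usb are assumptions.

# --- Backup: with the VMs shut down, image each DRBD device and save
#     the libvirt definition next to it.
for vm in bacula-dir bacula-sd db; do
    virsh shutdown "$vm"
done
dd if=/dev/drbd0 of=/mnt/usb/bacula-dir.img bs=1M
dd if=/dev/drbd1 of=/mnt/usb/bacula-sd.img  bs=1M
dd if=/dev/drbd2 of=/mnt/usb/db.img         bs=1M
for vm in bacula-dir bacula-sd db; do
    virsh dumpxml "$vm" > /mnt/usb/"$vm".xml
done

# --- Disaster recovery on new hardware: write the images back onto
#     fresh DRBD devices and re-register the VMs.
dd if=/mnt/usb/bacula-dir.img of=/dev/drbd0 bs=1M
virsh define /mnt/usb/bacula-dir.xml
# ...repeat for the other VMs, then start them:
virsh start bacula-dir
```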
For recovery from less disastrous problems, such as corruption of the
database, I copy a plain-text dump of the DB to another local server
daily over NFS.
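That daily copy can be as simple as a cron entry writing the dump onto
the NFS mount. The mount point, schedule, and use of pg_dump below are
assumptions (note that % must be escaped as \% inside a crontab):

```shell
# /etc/cron.d/bacula-catalog-dump -- sketch; paths and schedule are
# assumptions. Dump the catalog each morning after the backup window,
# onto an NFS-mounted directory on the other local server.
30 6 * * *  root  pg_dump bacula > /mnt/nfs/dbserver/bacula-catalog-$(date +\%F).sql
```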