Introduction

BackupPC_ovz is a script that adds OpenVZ integration to BackupPC.
BackupPC has no problems backing up an OpenVZ (ovz) Hardware Node (HN) or an ovz
Virtual Environment (VE), but by making BackupPC aware of ovz's internals, the
backup of VEs can be made far more efficient.

BackupPC_ovz adds the following capabilities to BackupPC:

 * VE backups are taken from a snapshot of the VE's filesystem after the VE
   has been shut down.  This guarantees that the filesystem data are in a
   consistent state without requiring application-specific backup or pre-backup
   processing activities (see the sketch after this list).

 * The VE is shut down only long enough to snapshot its filesystem, then is
   automatically restarted.  Typical VE downtime is around 30 seconds, depending
   upon how much application-dependent processing occurs at shutdown and
   startup.

 * Currently, only the rsync BackupPC XferMethod has been tested, but tar
   should probably work.  Because the backup and restore agent processes
   actually run on the HN hosting the VE, direct restore from BackupPC's web
   interface can be used to do a 'bare metal' recovery of the VE.

 * Any time the VE's /etc directory is backed up, the backup will add an
   /etc/vzdump directory containing the VE's configuration on the HN, notably
   the $VEID.conf file.

 * The VE is configured as if it were any other server to be backed up, with
   the notable addition of the BackupPC_ovz command to its client backup and
   restore commands, etc.

 * Although VE backups are actually performed by the HN, BackupPC_ovz determines
   the VE <-> HN mapping just before each backup run, eliminating any static
   mapping requirement in the BackupPC configuration.  It is acceptable to
   periodically rebalance VE's using the ovz vzmigrate utility, as BackupPC_ovz
   will correctly locate a moved VE at the next backup.
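
The snapshot-based backup cycle described above looks conceptually like the
following sketch.  VEID 123, the volume group name, and the mount point are
illustrative only; the real commands and error handling live inside
BackupPC_ovz itself:

    # stop the VE just long enough to take a consistent snapshot
    vzctl stop 123
    lvcreate --snapshot --size 1G --name ve123-snap /dev/vg0/vz
    vzctl start 123

    # the backup then runs against the snapshot while the VE is back online
    mkdir -p /mnt/ve123-snap
    mount -o ro /dev/vg0/ve123-snap /mnt/ve123-snap
    # ... rsync reads from /mnt/ve123-snap/private/123 ...
    umount /mnt/ve123-snap
    lvremove -f /dev/vg0/ve123-snap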

Requirements

BackupPC_ovz requires that the HN be set up as if it were itself a server to
be backed up by BackupPC.  Specifically, this means a recent version of
rsync (we currently use 3.0.0pre6) and an ssh public key installed into the HN
root user's .ssh/authorized_keys2 file.  The companion private key, as usual,
belongs to the backuppc user on the BackupPC server.
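
For example, the key can be generated and installed along these lines (run
as the backuppc user on the BackupPC server; paths, key type, and the HN
hostname are illustrative):

    ssh-keygen -t rsa
    cat ~/.ssh/id_rsa.pub | \
        ssh root@pe18001.mydomain.com 'cat >> /root/.ssh/authorized_keys2'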

Additionally, BackupPC_ovz requires that the private storage area, $VE_PRIVATE,
for the VE to be backed up exist on a filesystem hosted on an LVM logical
volume (LV).  BackupPC_ovz imposes no restrictions on the filesystem used, as
long as it is mountable by the HN, which by definition it must be.
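
A quick way to confirm that a VE's private area is LVM-backed (VEID 123 is
purely an example):

    df -P /var/lib/vz/private/123   # note the device in the first column
    lvs                             # that device should be listed as an LV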

Limitations

BackupPC_ovz imposes certain limitations.  The primary one is that only a
single VE backup may run on a given HN at any time.  A VE backup attempting
to run while another is in progress will error out, and BackupPC will fail
the backup, indicating an inability to retrieve the file list.  This is not a
catastrophic problem, as BackupPC will reschedule another backup attempt at a
later time.  The reason for this limitation is primarily to simplify the first
releases of BackupPC_ovz.  It would be possible to extend BackupPC_ovz to
remove this limitation.  For smaller environments, this limitation shouldn't
pose a problem.

BackupPC_ovz uses LVM2, which must be installed on each HN.  All VE private
storage areas must be on filesystem(s) hosted on LVM LVs.

Each HN must have perl installed, including the Proc::PID::File module, which
on Ubuntu is provided by the libproc-pid-file-perl apt package.
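
On Ubuntu HNs, both of these prerequisites can be installed in one step
(package names may differ on other distributions):

    apt-get install lvm2 libproc-pid-file-perl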

VE host names in BackupPC must match the exact hostname returned for the
VE's primary IP address by the DNS server that BackupPC and all HNs use.
In other words, "host <vename>" returns an IP address, and "host <IPaddr>"
returns <vename>; it is exactly that <vename> that must be used as the VE
host name in BackupPC.  In our environment, DNS returns fully qualified host
names, so the hosts in our BackupPC configuration are named with their fully
qualified domain names.
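
For example, for a hypothetical VE named ve1.mydomain.com with address
192.168.1.50, both lookups must agree:

    host ve1.mydomain.com    # should print the VE's primary IP address
    host 192.168.1.50        # must map back to ve1.mydomain.com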

Installation

To install BackupPC_ovz:

 * Install BackupPC as normal.

 * Install a recent version of rsync (we use 3.0.0pre6+) on each HN and in
   BackupPC's VE.  A recent version of rsync is also required inside any VE
   that may be online migrated via vzmigrate when shared storage is not in use.
   (In other words, install rsync 3.0.0pre6+ on all HNs and inside all VEs.)

   NOTE: BackupPC doesn't have to be installed in a VE; it could be installed
   on a separate server.  Do not install BackupPC or any other application
   directly onto an HN (see the BackupPC FAQs).
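
   To confirm the installed version on a given HN or VE:

	rsync --version | head -1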

 * Configure all HNs as Hosts (backup/restore targets) in BackupPC per standard
   BackupPC instructions and verify that backup and restore operations work
   correctly.  Until the HNs can be successfully backed up and restored,
   operations on VEs cannot be successfully completed.  We recommend the
   rsync or tar XferMethods, using ssh as a transport.  Only the rsync method
   has been tested at this time.

   - The BackupPC SSH FAQ has instructions for installing an SSH key on
     servers to be backed up by BackupPC.  Because VEs are actually backed
     up and restored through the context of the hardware node (HN), only the
     HNs need to have the keys.  Keys do NOT need to be installed in the VEs
     themselves.
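
     To verify a key works before the first backup (the hostname is from the
     example HN list below; backuppc often has no login shell, hence su -s):

	su -s /bin/sh backuppc -c 'ssh -l root pe18001.mydomain.com whoami'

     This should print 'root' without prompting for a password.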

 * Create the file /etc/backuppc/BackupPC_ovz.hnlist.  It should list the
   fully qualified hostname (or IP address) of each HN, one per line.  An
   example:

   ---- /etc/backuppc/BackupPC_ovz.hnlist ----
   # List the HNs by hostname or IP address in this file.
   pe18001.mydomain.com
   pe18002.mydomain.com
   pe18003.mydomain.com
   ---- end of file ----

 * Install BackupPC_ovz into /usr/bin of each HN.  The owner should be root,
   the group root and file permissions 0755.
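
   For example:

	install -o root -g root -m 0755 BackupPC_ovz /usr/bin/BackupPC_ovz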

 * Create the first VE Host in BackupPC.  Set its XferMethod to rsync (or tar).
   Three changes to the host specific configuration are required to use
   BackupPC_ovz:

   - On the Backup Settings page, set the DumpPreUserCommand and
     RestorePreUserCommand fields to contain:
	/usr/bin/BackupPC_ovz refresh

   - On the Xfer page, add:
	/usr/bin/BackupPC_ovz server
     to the beginning of the RsyncClientCmd field without altering the field's
     contents in any other way.  Our VE's have this data in the RsyncClientCmd
     field:
	/usr/bin/BackupPC_ovz server $host $sshPath -q -x -l root
	    $host $rsyncPath $argList+

   - On the Xfer page, add:
	/usr/bin/BackupPC_ovz server restore
     to the beginning of the RsyncClientRestoreCmd field without altering the
     field's contents in any other way.  Our VE's have this data in the
     RsyncClientRestoreCmd field:
	/usr/bin/BackupPC_ovz server restore $host $sshPath -q -x -l root
	    $host $rsyncPath $argList+
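
   If you prefer to edit the host's configuration file directly rather than
   use the web interface, the equivalent settings look roughly like this
   (the per-host config file location varies by installation):

   ---- host config file (excerpt) ----
   $Conf{DumpPreUserCommand}    = '/usr/bin/BackupPC_ovz refresh';
   $Conf{RestorePreUserCommand} = '/usr/bin/BackupPC_ovz refresh';
   $Conf{RsyncClientCmd}        = '/usr/bin/BackupPC_ovz server $host $sshPath'
                                . ' -q -x -l root $host $rsyncPath $argList+';
   $Conf{RsyncClientRestoreCmd} = '/usr/bin/BackupPC_ovz server restore $host'
                                . ' $sshPath -q -x -l root $host $rsyncPath'
                                . ' $argList+';
   ---- end of excerpt ----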

 * To add subsequent VE's to BackupPC, add each new VE into BackupPC using the
   NEWHOST=COPYHOST mechanism, as documented on the Edit Hosts page.  This will
   automatically copy the modifications made for an existing VE host into a
   new VE host.
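
   For example, if ve1.mydomain.com is already configured as a VE host,
   entering the following as the new host name copies its settings (names
   are hypothetical; see the Edit Hosts page for the exact syntax):

	ve2.mydomain.com=ve1.mydomain.com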

Using BackupPC with VEs

Once a VE has been added as a host to BackupPC, BackupPC will automatically
schedule the first and each subsequent backup according to the defined backup
schedule(s).  Backups of, and restores to, a running VE are no different in
terms of BackupPC usage than for any other host.

Special recovery features of VEs under BackupPC

Because BackupPC actually backs up and recovers VE data using its 'parent' HN,
additional recovery features are available.  For example, a VE can be
recovered in its entirety, analogous to a 'bare metal' recovery of a physical
server:

 * Stop the VE to be fully recovered using vzctl, if it is running.
 * Using BackupPC, select all files and directories of the appropriate VE
   backup and use Direct Restore to restore everything.
   - Restore NOT to the VE host, but to the HN host that will host the newly
     recovered VE.
   - In the Direct Restore dialog, select the appropriate HN filesystem
     location to restore the VE.  For example, if VE 123 has its private data
     at /var/lib/vz/private/123 on the HN, then the recovery directory would be
     /var/lib/vz/private/123.
 * After the restore is complete, recover the ovz-specific VE configuration
   files from the VE's /etc/vzdump directory into the appropriate locations
   of the HN's /etc/vz/conf directory.  This is only required if the config
   file(s) have changed.
 * Start the VE using ovz's vzctl utility (see the sketch below).
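
A minimal sketch of these post-restore steps on the HN, assuming the VE is
recovered as VEID 123 (volume and path names are illustrative):

    # on an HN that has never hosted the VE, first create its root mount point
    mkdir -p /var/lib/vz/root/123

    # recover the ovz config that the backup saved under /etc/vzdump
    cp /var/lib/vz/private/123/etc/vzdump/123.conf /etc/vz/conf/123.conf

    # bring the VE back online
    vzctl start 123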

The above strategy works well for restoring an existing VE to a prior state,
as the rsync xfer method will not overwrite files that are unchanged, reducing
I/O and therefore recovery time.

What happens if we need to recover a VE where no existing version of the VE
is running anywhere?  Consider a disaster recovery case where the HN hosting
the VE melted and is completely unrecoverable.  We then use the same
process as above to recover the VE to an HN -- even one that might never have
hosted the VE before.

 * Using BackupPC, select all files and directories of the appropriate VE
   backup and use Direct Restore to restore everything.
   - Restore NOT to the VE host, but to the HN host that will host the newly
     recovered VE.
   - In the Direct Restore dialog, select the appropriate HN filesystem
     location to restore the VE.  For example, let's assume we recover this VE
     to VEID 123 (it could have been different at the time it was backed up).
     In this case, the recovery directory might be /var/lib/vz/private/123.
 * Create an empty /var/lib/vz/root/123 directory on the HN.
 * After the restore is complete, recover the ovz-specific VE configuration
   files from the VE's /etc/vzdump directory into the appropriate locations
   of the HN's /etc/vz/conf directory.
 * Start the VE using ovz's vzctl utility, as in the sketch above.

Configurations Tested

 * Twin Dell PowerEdge 1800 servers (the HNs).
   Running a minimal Ubuntu Gutsy server OS.
   Raid disks running under LVM2.
   XFS filesystem for VE private areas.
 * BackupPC running as a VE on one of the PE1800's.
   Running version 3.0.0-ubuntu2
 * A number of other VE's distributed between the two PE1800's.
 * VEs running various OS's: Mandrake 2006, Ubuntu Feisty, Ubuntu Gutsy.
 * VE backups and restores using the rsync method.
 * All VEs and HNs have rsync 3.0.0pre6 or newer installed.