From: <af...@sb...> - 2013-09-18 13:01:37
|
Hi list, I'm new here, hoping to implement BackupPC from scratch. Here goes... I admit I'm not really sure where to start after combing the docs some. My first goal is to analyze my backup server's drive structure, capture and store THAT off site, and test the restore. Following are the initial noobish questions on how to start, in the hope someone might offer some suggestions and a little coaching.

I've started out leasing a backup machine from http://backupsy.com, finding them so far to be well resourced, well provisioned, and responsive to support questions; they have a fair amount of experience and several data center locations established since their entry to the industry in 2013. The backup server is a 256K RAM KVM VPS with 256 GB storage at Backupsy on a 1 Gig connection. Apache2 is up and I can add any supports needed, but currently it is a lean, vanilla console install. I installed Debian 7 AMD64 manually and configured it to use the entire disk but keep /home separate (shown below). /home was separated so I could image the backup server config itself efficiently, without other stored client data in the way. Client images eventually stored in /home will exist on a duplicate server as well.

The backup server itself is in a basic install state, so it's a good place to practice image capture / restore currently; it is easily reinstalled if I bork it. Backupsy provides a spinner with GParted Live and SystemRescueCD, which I assume can help (I'm not familiar with these tools but have poked around in them some). I'd like to learn how to use the running-snapshot method to do this, since my client machines will need snapshots working anyhow. Once configured, the backup machine won't change often, short of client backups saved in /home.

************ Questions... ************

I assume BackupPC runs on the backup server itself? (I'm unsure whether client servers require some part of it installed.) Can BackupPC snapshot and export its own server's image?
(See current drive config below.) If it can, where do I start? If it can't, what alternate process should I use to create a snapshot disaster recovery process for the backup machine? Thanks in advance for any pointers to get a noob off and going. 8-)

Mike

************ Current fdisk output... ************

Disk /dev/vda: 268.4 GB, 268435456000 bytes
16 heads, 63 sectors/track, 520126 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000b9b1

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048      499711      248832   83  Linux
/dev/vda2          501758   524285951   261892097    5  Extended
/dev/vda5          501760   524285951   261892096   8e  Linux LVM

Disk /dev/mapper/back1-root: 9999 MB, 9999220736 bytes
255 heads, 63 sectors/track, 1215 cylinders, total 19529728 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/back1-root doesn't contain a valid partition table

Disk /dev/mapper/back1-swap_1: 1069 MB, 1069547520 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2088960 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/back1-swap_1 doesn't contain a valid partition table

Disk /dev/mapper/back1-home: 257.1 GB, 257106640896 bytes
255 heads, 63 sectors/track, 31258 cylinders, total 502161408 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/back1-home doesn't contain a valid partition table
|
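[Editor's aside: the "running snapshot" Mike asks about is usually done with an LVM snapshot. On a layout like the one above, the mapper names back1-root / back1-home imply a volume group "back1" with LVs "root" and "home". A minimal sketch, assuming free extents exist in the VG; the snapshot size, mount point, and offsite host are made-up examples:]

```shell
# Create a short-lived copy-on-write snapshot of the running root LV
# (needs free space in the "back1" volume group)
lvcreate --size 2G --snapshot --name root-snap /dev/back1/root

# Mount the snapshot read-only and stream a compressed tarball off-machine
mkdir -p /mnt/root-snap
mount -o ro /dev/back1/root-snap /mnt/root-snap
tar -czf - -C /mnt/root-snap . | ssh user@offsite-host 'cat > back1-root.tar.gz'

# Clean up promptly: the snapshot fills up as the origin LV changes
umount /mnt/root-snap
lvremove -f /dev/back1/root-snap
```

All commands require root; /dev/vda1 (the /boot partition outside LVM) would need to be captured separately.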
From: Carl W. S. <ch...@re...> - 2013-09-18 13:28:28
|
On 09/18 09:00 , af...@sb... wrote:
> I've started out leasing a backup machine from http://backupsy.com finding
> them so far to be well resourced, well provisioned and responsive to support
> questions, having a fair amount of experience and several data center
> locations established across their entry to the industry in 2013.

Just curious, is there a reason you bought a virtual machine at a remote location rather than experimenting on a local machine or virtual machine?

> I assume BackupPC runs on the backup server itself? (unsure if client
> servers require some part of it installed).

BackupPC is a collection of Perl scripts with a nice web interface front end, which store data on the machine BackupPC is installed on. (You could make it more complicated than that, but it's pretty advanced and there's not much call for it.)

The clients are usually backed up with rsync over SSH, or rsyncd. There is also a method using tar (only good for local backups, or data mounted on a network filesystem of some sort), an FTP method (which I haven't used), and a tar-over-SMB method (which is now broken in Samba 3.6.3 due to some Samba changes). There was a project to make BackupPC client software which would run on the machine to be backed up, but I do not know its status.

> Can BackupPC snapshot and export its own server's image? (see current drive
> config below).

BackupPC doesn't really do 'images'; it backs up individual files, which you can assemble into a tarball when you want to restore.

I have my BackupPC servers back up at least their own /etc/ directory. FWIW, my best-practices advice is to put /var/lib/backuppc on its own filesystem, so that if the '/' filesystem becomes corrupt, or the disk dies, or the like, the backed-up data is still OK; and if the backup data becomes corrupt, the OS is still there to try to recover it.
Also, BackupPC can store some data in the compressed pool of files (/var/lib/backuppc/cpool) and some in the uncompressed pool (/var/lib/backuppc/pool). I make sure to put the BackupPC server's own backup in the uncompressed pool, so that in case of disaster, with only the simplest tools available to get at your data (i.e. BackupPC itself not functioning), you can still read your data.

Here's my configuration for backing up the local machine.

$ cat /etc/backuppc/localhost.pl
#
# Local server backup as user backuppc
#

# dunno why it needs to ping,
# but after the upgrade to 3.2.1 this became necessary
$Conf{PingCmd} = '/bin/true';

$Conf{XferMethod} = 'tar';

# let it back itself up anytime it wants to.
$Conf{BlackoutPeriods} = [];

$Conf{TarShareName} = ['/'];

$Conf{BackupFilesExclude} = ['/proc', '/sys', '/var/lib/backuppc',
    '/var/lib/vmware', '/var/log', '/tmp', '/var/tmp', '/mnt', '/media'];

$Conf{TarClientCmd} = '/usr/bin/env LC_ALL=C /usr/bin/sudo $tarPath -c -v -f - -C $shareName --totals';

# remove extra shell escapes ($fileList+ etc.) that are
# needed for remote backups but may break local ones
$Conf{TarFullArgs} = '$fileList';
$Conf{TarIncrArgs} = '--newer=$incrDate $fileList';

# turning off compression on these files, so they can be recovered without
# backuppc.
# wouldn't make sense to need your backup server,
# in order to recover your backup server, now would it?
$Conf{CompressLevel} = 0;

--
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com
|
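[Editor's aside: the $Conf{TarClientCmd} in Carl's config invokes sudo as the backuppc user, so that user needs passwordless sudo rights to tar. A minimal sudoers sketch; the tar path and user name are the Debian defaults and may differ on other systems:]

```
# /etc/sudoers.d/backuppc -- let the backuppc user run tar as root without
# a password, so TarClientCmd can read every file on the system
backuppc ALL=(root) NOPASSWD: /bin/tar
```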
From: <af...@sb...> - 2013-09-18 15:35:48
|
----- Original Message -----
From: "Carl Wilhelm Soderstrom" <ch...@re...>
To: <bac...@li...>
Sent: Wednesday, September 18, 2013 9:28 AM
Subject: Re: [BackupPC-users] New to town - where to begin...

[snip full quote of Carl's message; the relevant portions are quoted inline below]

Hi Carl,

Thank you for your reply ...
I may be putting more stock than I should in BackupPC adopting a "method", or grossly misunderstanding feasibility completely. I want to avoid file-based recovery as much as possible, hence my reference to saving and restoring an "image" of the OS snapshot, which I'm hoping a VM should lend itself to.

> BackupPC doesn't really do 'images', it backs up individual files, which
> you can assemble into a tarball when you want to restore.

A snapshot file-set is still "file based" functionality to BackupPC, I would hope. It may be an LVM snapshot tarballed, as a definition of "image", perhaps. I may have abused the term "image" (not meaning to imply ISO); compressing an entire OS from two partitions into tar-based, encrypted files transmitted off-machine is my meaning in this context, if that helps. I may also be giving up incremental backups in favor of an expedient "total restore" concept.

> Just curious, is there a reason you bought a virtual machine at a remote
> location rather than experimenting on a local machine or virtual machine?

Yes: proximity and bandwidth resources mostly, with sufficient redundancy to attain reliability. I handle a number of global operations reliant on combinations of both our own and 3rd-party VPS, which is one of the reasons I need a restoration solution that works fairly full scope across many VM configurations. We tend to "nest in" solutions at each data center we deal with globally, which are fairly small client machines in every instance, Windows and Linux both.

> BackupPC is a collection of perl scripts with a nice web interface front
> end, which store data on the machine BackupPC is installed on. (you could
> make it more complicated than that, but it's pretty advanced and there's
> not much call for it).

I was hoping to springboard off of this script for VMs on ...
http://www.redhat.com/archives/virt-tools-list/2009-October/msg00069.html

> I have my BackupPC servers back up at least their own /etc/ directory.
> FWIW, my best-practices advice is to put /var/lib/backuppc on its own
> filesystem, so that if the '/' filesystem becomes corrupt or the disk dies
> or the like, the backed-up data is still ok; and if the backup data becomes
> corrupt the OS is still there to try to recover it.

I would like to avoid depending on data stored on the same server for recovery, except that a "snapshot" process suggests doing this in real time as it is, to off-machine storage. Again, this is "disaster" minded; a disaster does not assume something simple to repair, but the need to quickly rebuild the entire package in as few steps as possible. Ultimately the backup(s) off-server need to be the gold standard. I may also need to consider the backup server as the one "different" approach from the rest, but having a "pair" of identical backup servers in separate locations lends itself to a cross-distributed process where each backup server backs up the other, kind of thing.

Restoring an "OS only" at a VPS vendor is quick and easy. Following this by overwriting the OS from a snapshot, to restore integrated client-specific applications and data, makes the most sense to me, if a VM's disk storage can aptly represent a "stateful machine". Many times, discerning the OS file sets from the application's, and again from the data, in Windows for example, is impossible; so again a "snapshot" represents a container-based recovery concept, if I'm not all wet in assuming this is attainable. When addressing client user VMs you can never be sure what the client did to the OS or the data, and in a disaster situation the VPS vendor is likely to be facing severe issues of its own at the time of a recovery.

Tarball by FTP was attractive for simplicity, compression, and encryption options, since we control the FTP and related security at all points. Using Duplicity and/or Duplicati at the client was a consideration.
In a Windows VM, this refers to the use of a compressed, shrunk-down MS Shadow Copy, versus what I hope a "snapshot" can do in Linux. Being the novice admin I am, doing file-level restoration seems fraught with issues of time stamps, mnt folders, and a whole myriad of "how to capture and restore Linux", not to mention the "running" state of a client VM. It looked to me like using a dynamic snapshot of the entire running VM might permit a restorable "set" which could be sent back into the server, fsck'ed, and be right back up running. I may be disillusioned, overstating simplicity here, but hopefully you can see my desire for a "package" restoration, versus file sets.

We deal with multiple flavors of several 3rd-party VPS dependencies. So the idea of having to restore 50 client VMs at one location remotely, and patch each one differently back to a running state from file restores, looked like I would be facing 3 hours or worse per VM in a disaster recovery. I need something where, at worst, the basic VM is rebuilt from a template, then the last working snapshot is blown back in place, and go. Hopefully the need to re-cook the template isn't even an issue, since an image restore should basically reconstruct the entire VM, providing the boot loader and container are still intact.

As for a lighter set of "client-centric" data, this might represent a scripted secondary file approach in the event a client simply blew away his essential DB, for example. That I can handle as a file-based process. For disaster recovery I'm fearful of facing file-level situations, for lack of my own competence if nothing else.

Again, please forgive me if I'm over-simplifying an idealistic approach. I know many well-accomplished admins may look at this as folly, but I liken it to a "Restoration As A Service" (RAAS) kind of thing, trying to do so outside the VPS hypervisor context. It may solve all of my woes at the price of larger backup copies taken less frequently, sacrificing incrementals if necessary.
Is the approach even viable, perhaps with another tool set if not with BackupPC?

Mike
|
From: Les M. <les...@gm...> - 2013-09-18 17:08:29
|
On Wed, Sep 18, 2013 at 10:35 AM, <af...@sb...> wrote:
> I may be putting more stock than I should in BackupPC adopting a "method" or
> grossly misunderstanding feasibility completely, in order to avoid file
> based recovery as much as possible, hence my reference to saving and
> restoring an "image" of the OS snapshot I'm hoping a VM should lend itself
> to.

BackupPC is just the wrong place to start if you want to back up only large files that change every run (like VM images or database dumps).

>> BackupPC doesn't really do 'images', it backs up individual files, which
>> you can assemble into a tarball when you want to restore.
>
> A snapshot file-set is still a "file based" functionality to BackupPC, I
> would hope. It may be an LVM tarballed as a definition of "image" perhaps.
> I may have abused the term "image", (not meaning to imply iso), but
> compressing an entire OS from two partitions into tar based, encrypted files
> transmitting off-machine is my meaning in this context, if that helps. I
> may also be giving up incremental backups in favor of an expedient "total
> restore" concept.

BackupPC just doesn't have any advantage in this process, and it adds a certain amount of unnecessary overhead. What BackupPC does very well is back up large sets of files where there is a lot of duplication, both between backup runs and across different target hosts. In that scenario, all files with exactly the same content are pooled and optionally compressed, so you can keep a much larger backup history online than you would expect. However, when file contents change between runs, BackupPC can use rsync to transfer only the differences, but the process of comparing against a compressed copy and reconstructing the full new file by merging the old contents with the differences is slow and inefficient, and not very practical for files the size of VM images. And if that is all you have, you'll end up with unique files and no pooling.
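[Editor's aside: for large, fast-changing files, a delta-storing tool such as the rdiff-backup Les recommends below is a better fit: the latest copy is kept as plain files, and older versions are kept as reverse deltas. A minimal sketch; the host name and paths are made-up examples:]

```shell
# Back up a VM image directory; prior versions are kept as reverse deltas,
# so each run only stores what changed
rdiff-backup /var/lib/libvirt/images/ backuphost::/srv/backups/vm-images/

# List the increments available for restore
rdiff-backup --list-increments backuphost::/srv/backups/vm-images/

# Restore the directory as it was 3 days ago
rdiff-backup -r 3D backuphost::/srv/backups/vm-images/ /tmp/restored-images/
```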
>> Just curious, is there a reason you bought a virtual machine at a remote
>> location rather than experimenting on a local machine or virtual machine?
>
> Yes, proximity and bandwidth resources mostly with sufficient redundancy to
> attain reliability. I handle a number of global operations reliant on
> combinations of our own AND 3rd party VPS both, one of the reasons I need a
> restoration solution that works fairly full scope across many VM
> configurations. We tend to "nest in" solutions at each data center we deal
> with globally, which are fairly small client machines in every instance,
> Windows and Linux both.
>
> I was hoping to springboard off of this script for VMs on ...
> http://www.redhat.com/archives/virt-tools-list/2009-October/msg00069.html

If you really want to save images, you probably want something more like rdiff-backup, which can store deltas instead of reconstructing whole files. Or do infrequent full-image saves that you would use to bring up a base machine to a point where you could restore from a file-based BackupPC backup. This approach gives you easier access to single-file or directory restores (which in my experience are needed a lot more often than full machine restores).

> I may need to also consider the backup server as the one "different"
> approach to the rest possibly, but having a "pair" of identical backup
> servers in separate locations lends itself to cross-distributed process for
> each backup server to backup the other, kind of thing.

Backing up a BackupPC server presents its own problems.
Normally, the pool directory has millions of hardlinks, which are hard for any file-based backup scheme to reconstruct, and the archive filesystem is usually too large to copy as an image; both approaches require the filesystem to be idle during the copy. Depending on the rate of change and your backup windows, it may be practical to just run independent servers in different locations hitting the same targets, which will also eliminate any single point of failure.

> Restoring an "OS only" at a VPS vendor is quick and easy. Following this by
> over-writing the OS from snapshot, to restore integrated client specific
> application and data makes the most sense to me, if a VM disk storage can
> aptly represent a "stateful machine". Many times, discerning the OS file
> sets from Application and again from data in Windows for example is
> impossible, so again a "snapshot" represents a container based recovery
> concept, if I'm not all wet in assuming this is attainable. When addressing
> client user VM's you can never be sure what the client did to the OS or the
> data and in a disaster situation the VPS vendor is likely to be also facing
> severe issues at the time of a recovery.

Assuming you never work with physical hardware, you might be on the right track, but I don't think BackupPC is going to be the best tool. On the other hand, if you can think in terms of making a 'base' image to get to a point where you can drop the files back, BackupPC would be good for keeping a long history of files online with easy access to any individual file or directory.

> Being a novice admin I am, doing file level restoration seems fraught with
> issues of time stamps, mnt folders and a whole myriad of "how to capture and
> restore Linux", not to mention the "running" state of a client VM.

Linux is pretty straightforward. Pretty much everything is a file, without a lot of magic.
You might find the 'ReaR' package interesting: it builds a bootable ISO with a system's own tools, which comes up with a script to reconstruct the partitioning and filesystem layout and then restores from a tar (or similar) backup. So it basically distills all of the magic into a few shell scripts, and it works on either hardware or VMs. I don't think anyone has built in a BackupPC restore yet, but it should not be hard at all. You can also use Clonezilla to do image backups, and it works on Windows too, but you have to shut down and reboot to use it.

> It looked to me like using a dynamic snapshot of the entire running VM, might
> permit a restorable "set" which could be sent back into the server, fsked
> and be right back up running. I may be dissolutioned, over stating
> simplicity here, but hopefully you can see my desire for a "package"
> restoration, versus file sets.

Except that, in practice, it seems much more common to accidentally delete or corrupt a few files and not notice for a week than to have the host melt completely.

> We deal with multiple flavors of several 3rd party VPS dependencies. So,
> the idea of having to restore 50 client VM's at one location remotely and
> patch each one differently back to a running state from file restores,
> LOOKED like I would be facing 3 hours or worse per VM in a disaster
> recovery. I need something where at worst, the basic VM is rebuilt from
> template, then blow the last working snapshot back in place and go.
> Hopefully the need to re-cook the template isn't even an issue since an
> image restore should basically reconstruct the entire VM, providing the boot
> load and container are still intact.

For a lot of things, that's the best approach: use a base VM image, spin it up, then restore files on top of it.

> Again, please forgive if I'm over-simplifying an idealistic approach.
> I know many well accomplished Admins may look at this as folly, but I liken
> it to "Restoration As A Service" (RAAS) kind of thing, trying to do so
> outside the VPS hypervisor context. It may solve all of my woes at the price
> of larger backup copies less frequently, sacrificing incremental if
> necessary.

You are over-simplifying in the sense that different data and services have different backup requirements. Sometimes you need 'live' replication across locations. Sometimes you need long, long histories of changes. Sometimes you need a 'warm' spare that can spin up quickly but might not be up to the minute with data. And sometimes, especially in the non-VM world, you need to restore the data onto different hardware or under a different OS than where the backup was made.

> Is the approach even viable, perhaps if not with BackupPC, another tool set
> even?

BackupPC is great for the long-history, easy-file-access sort of thing; not so good for VM images, database dumps, and other huge fast-changing items.

--
Les Mikesell
les...@gm...
|
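[Editor's aside: a minimal sketch of the ReaR setup Les mentions, using the common OUTPUT=ISO / BACKUP=NETFS combination; the backup host name and NFS path are made-up examples:]

```shell
# /etc/rear/local.conf -- build a bootable rescue ISO and tar the
# filesystems to a network share in one run
cat > /etc/rear/local.conf <<'EOF'
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL=nfs://backuphost/srv/rear
EOF

rear mkbackup     # creates the rescue ISO and the tar backup
# To restore: boot the replacement machine/VM from the ISO and run:
#   rear recover
```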