From: rbastedo <bac...@ba...> - 2011-10-12 20:08:25
|
I've recently been given the task of setting up BackupPC to back up some of our servers running RHEL and PostgreSQL. Management wants me to back up data to an NFS share where it can then be saved to tape via DPM for offsite storage.

I have set up a pair of test machines: the BackupPC server is on a RHEL 6 box, and the target I am backing up is a little CentOS 5 desktop. BackupPC is successfully backing up the CentOS machine to its /data/BackupPC TopDir. I've used rsync with ssh2 to perform 2 small backups. How do I get it to save the backup to the NFS storage out on the network instead? I will not have console access to the RHEL server that will be the eventual real backup target; I will have to tell their admin what to do to get it set up on that end. Also, if there's a simpler way of doing all this (using our existing infrastructure) I'd like to hear about it.

Just a little background so you will understand the nature of my question: I'm a DBA and have been spending all my time in happy database land, so this is a new area for me, not only in terms of running Red Hat, but also because I'm not adeptly skilled in Linux and this is my first attempt with BackupPC. That I have gotten this far on my own is just a little amazing to me.

Thanks.
Rick Bastedo |
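For context, the usual way to put the pool on NFS storage is to mount the export at the TopDir path before BackupPC starts. A minimal sketch, where "nfsserver" and "/export/backuppc" are invented placeholders, not names from this thread:

```shell
# Hypothetical example: mount the NFS export where BackupPC already
# keeps its pool ("nfsserver" and the export path are placeholders).
mount -t nfs nfsserver:/export/backuppc /data/BackupPC

# Or make it permanent across reboots with an /etc/fstab entry:
echo 'nfsserver:/export/backuppc /data/BackupPC nfs defaults 0 0' >> /etc/fstab
```

Because the pool relies on hard links, the whole TopDir has to live on a single filesystem, and BackupPC should be stopped while the mount is changed.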
From: Timothy J M. <tm...@ob...> - 2011-10-12 20:32:47
|
rbastedo <bac...@ba...> wrote on 10/12/2011 03:48:45 PM:

> How do I get it to save the backup to the NFS storage out on the
> network instead?

Why not mount an NFS share at TopDir? If BackupPC is already working for you correctly, you should be done!

The thing is, though: the BackupPC pool on tape by itself is probably *not* what management had in mind when they asked you to store the backups on a remote server (and remote tape). I have a feeling you could do exactly what you're describing, yet it would be a failure in the end--and you might not realize this until you try to restore all of it...

Another, likely better, way to get your backed-up data onto the NFS server might be to use the Archive feature. Read the docs: basically, it automatically creates a tar file of your backups for you. I (and others) use it to archive to a removable drive for swapping. It might fit your needs better.

> Also if there's a simpler way of doing all this (using our existing
> infrastructure) I'd like to hear about it.

I would question whether BackupPC is the tool for achieving this (or what I think you want). Are you going to be using the pooling and history capabilities of BackupPC? Or is that what your DPM system is for? If not, it's probably overkill as a substitute for tar or rsync directly. Scripting a simple rsync would probably be substantially simpler and more reliable, with a lot fewer moving parts.

Timothy J. Massey
Out of the Box Solutions, Inc.
Creative IT Solutions Made Simple!
http://www.OutOfTheBoxSolutions.com
tm...@ob...
22108 Harper Ave.
St. Clair Shores, MI 48080
Office: (800)750-4OBS (4627)
Cell: (586)945-8796 |
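The "simple rsync" alternative mentioned above can be sketched in a few lines; the hostnames and paths are examples only, not taken from the thread:

```shell
#!/bin/sh
# Hypothetical sketch: mirror the on-disk backup directory to the NFS box.
set -e
SRC=/data/vendor-backups/          # where the existing backups land (placeholder)
DEST=nfsserver:/export/offsite/    # the NFS server's export (placeholder)

# -a preserves permissions/ownership/times, -H preserves hard links,
# --delete makes the destination an exact mirror of the source.
rsync -aH --delete "$SRC" "$DEST"
```

Run nightly from cron, this gives DPM a plain directory of files to sweep to tape, with no BackupPC pool format in the way.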
From: Arnold K. <ar...@ar...> - 2011-10-12 22:18:40
Attachments:
signature.asc
|
On Wednesday 12 October 2011 21:48:45 rbastedo wrote:

> I've recently been given the task of setting up BackupPC to back up some of
> our servers running RHEL and PgSQL. Management wants me to back up data to
> an NFS where it can then be saved to tape via DPM for offsite storage.

There are three possibilities here, I think:

1. Use the nfs-share as the topdir for backuppc. This works, but saving this to tape is a) impractical and b) will not help in any way with restore.

2.1 Set backuppc up with a local topdir and use a tapearchive-"host" in backuppc to write out complete tars every night to the nfs-share, before the remote tape-machine picks these up to write to tape. Easy to set up, good for restore. And you only need the tapes in the case that backuppc fails.

2.2 Set up backuppc on the remote machine with the tape, let it get the data from the db over the network, and make it write the (additional) tapes locally.

3. If you don't really need the built-in backups of backuppc and just want to collect full and/or incremental data to save to tape, use amanda...

In our business we are currently switching our (and our clients') backups from amanda to backuppc with less-than-daily write-out to nfs-/iscsi-shares or tapes.

Have fun,
Arnold |
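Option 2.1 can be sketched as a nightly job on the BackupPC server. BackupPC_tarCreate ships with BackupPC, but the host name, share, install path, NFS mount point, and the assumption that "-n -1" selects the most recent backup should all be checked against your own install:

```shell
# Hypothetical /etc/cron.d entry: at 03:00 write the latest backup of host
# "dbserver" (share "/") as a tar file on the NFS mount for DPM to pick up.
# -h host, -n backup number (-1 = most recent), -s share, "." = everything.
0 3 * * * backuppc /usr/share/BackupPC/bin/BackupPC_tarCreate -h dbserver -n -1 -s / . > /mnt/nfs/dbserver-latest.tar
```

The built-in archive host feature does essentially this for you, with compression and splitting options, so it is usually the cleaner route.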
From: John R. <rou...@re...> - 2011-10-19 19:27:46
|
On Wed, Oct 12, 2011 at 12:48:45PM -0700, rbastedo wrote:

> I've recently been given the task of setting up BackupPC to back up
> some of our servers running RHEL and PgSQL.
> Management wants me to back up data to an NFS where it can then be
> saved to tape via DPM for offsite storage.
>
> I have set up a pair of test machines, the BackupPC server is on a
> RHEL 6 box and the target I am backing up is a little CentOS 5 desktop.
> BackupPC is successfully backing up the CentOS machine to its
> /data/BackupPC TopDir. I've used rsync with ssh2 to perform 2 small backups.

How are you backing up postgres?

Are you dumping the database (pg_dumpall) and then backing up the resulting file with backuppc? Are you shutting down postgres before doing the backups of the data area using backuppc and then restarting after the backup is complete? Do you have backuppc customized to perform a PITR backup?

If you aren't doing one of these three things, your postgres backups are pretty much worthless and won't be able to be restored. Given this is your first foray into backups, I thought these might be important things to check.

--
-- rouilj
John Rouillard
System Administrator
Renesys Corporation
603-244-9084 (cell)
603-643-9300 x 111 |
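The first of those three options (dump first, then back up the dump) is usually the simplest. A sketch for the database host, with the output path invented for illustration:

```shell
#!/bin/sh
# Hypothetical pre-backup script: dump every database to one compressed
# file that the regular file-level backup will then capture.
set -e
OUT=/var/backups/pgsql        # placeholder path; back this directory up
mkdir -p "$OUT"
pg_dumpall -U postgres | gzip > "$OUT/all-databases.sql.gz"
```

BackupPC can run a script like this automatically before each backup via its $Conf{DumpPreUserCmd} hook.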
From: Rick B. <rba...@gm...> - 2011-10-19 21:42:19
|
On Wed, Oct 19, 2011 at 12:27 PM, John Rouillard <rou...@re...> wrote:

> How are you backing up postgres?
>
> Are you dumping the database (pg_dumpall) and then backing up the
> resulting file with backuppc?
> Are you shutting down postgres before doing the backups of the
> data area using backuppc and then restarting after the backup
> is complete?
> Do you have backuppc customized to perform a PITR backup?
>
> If you aren't doing one of these three things your postgres backups
> are pretty much worthless and won't be able to be restored.
>
> Given this is your first foray into backups I thought these may be
> important things to check.

The vendor who maintains that system does the primary backup to disk, so the PostgreSQL backups are done for us and stored on a 'backup' box. Now the task is to get that backup stored to our tape system, which is controlled by DPM.

Today I was given the machine to install RHEL 6.1 on so I can configure our "production" BackupPC installation. I am just now at the installation phase, and have the screens up asking what kind of installation I want to do (Basic Server, Database Server, Web Server...). Is there a general walkthrough for setting this up so that maybe it will work a bit better than my first install? The first one was on a test machine and I didn't know I wanted Apache or Perl or rsync or any of that stuff. So I know that I know more now than I did then, but I am also aware that I'm not really sure of everything. I'm just hoping this goes better than the test install.

Once again - the backup is done to disk already; the vendor is liable for the quality of that backup, so all I'm tasked with doing is getting a copy of that onto tape for offsite storage. I can't move it directly to our tape system because DPM is in control of it and apparently doesn't speak Linux.

Rick Bastedo |
From: Les M. <les...@gm...> - 2011-10-19 22:40:17
|
On Wed, Oct 19, 2011 at 4:42 PM, Rick Bastedo <rba...@gm...> wrote:

> The vendor who maintains that system does the primary backup to disk so the
> PostgreSQL backups are done for us and stored on a 'backup' box.
> Now the task is to get that backup stored to our tape system that is
> controlled by DPM.

I don't really think backuppc is a good fit for this unless you also want a history of several days'/weeks' copies of these files saved as a side effect.

> Today I was given the machine to install RHEL 6.1 on so I can configure our
> "production" BackupPC installation.
> I am just now at the installation phase, and have the screens up asking what
> kind of installation I want to do (Basic Server, Database Server, Web
> Server...).
> Is there a general walkthrough for setting this up so that maybe it will
> work a bit better than my first install?
> The first one was on a test machine and I didn't know I wanted Apache or
> Perl or rsync or any of that stuff.
> So I know that I know more now than I did then but I also am aware that I'm
> not really sure of everything.

You should probably configure the EPEL yum repository and use 'yum install backuppc' to do the install. The way yum works, it will install any needed dependencies, so it won't matter much what you installed initially. https://fedoraproject.org/wiki/EPEL6-FAQ#How_do_I_use_it.3F

> I'm just hoping this goes better than the test install.

If you install from source, you have to get all the pieces right yourself.

> Once again - the backup is done to disk already, the vendor is liable for
> the quality of that backup so all I'm tasked with doing is getting a copy of
> that onto tape for offsite storage.
> I can't move it directly to our tape system because DPM is in control of it
> and apparently doesn't speak Linux.

I don't see how you expect backuppc to help with this process. If DPM or whatever process controls it can see a Windows share, why not use samba to share the directory where the file is stored? Backuppc can get a copy daily from some specified location and store it in its own archive format, but you'll have to do something else to get it back out again.

--
Les Mikesell
les...@gm... |
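On a box that does not have the repository configured yet, enabling EPEL is a one-package install. A sketch for RHEL 6 x86_64; the epel-release version in the URL changes over time, so check the EPEL wiki page above for the current one:

```shell
# Install the epel-release package, which drops the repo definition into
# /etc/yum.repos.d/, then let yum pull in BackupPC with its dependencies.
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-5.noarch.rpm
yum install BackupPC
```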
From: Rick B. <rba...@gm...> - 2011-10-21 15:54:18
|
Well, I've been told we are using backuppc, so that's why I am using it. That's the way it works where I work.

I tried installing from EPEL. You said:

"You should probably configure the EPEL yum repository and use 'yum install backuppc' to do the install. The way yum works, it will install any needed dependencies so it won't matter much what you installed initially. https://fedoraproject.org/wiki/EPEL6-FAQ#How_do_I_use_it.3F"

Here's the error I got when trying:

yum install http://mirror.pnl.gov/epel/6/x86_64/BackupPC-3.2.1-6.el6.x86_64.rpm
(blah blah blah --> Processing Dependencies... etc)
... ... ... ...
--> Finished Dependency Resolution
Error: Package: BackupPC-3.2.1-6.el6.x86_64 (/BackupPC-3.2.1-6.el6.x86_64)
           Requires: perl(Net::FTP::RetrHandle)
Error: Package: BackupPC-3.2.1-6.el6.x86_64 (/BackupPC-3.2.1-6.el6.x86_64)
           Requires: perl(XML::RSS)
Error: Package: BackupPC-3.2.1-6.el6.x86_64 (/BackupPC-3.2.1-6.el6.x86_64)
           Requires: perl(Time::ParseDate)
Error: Package: BackupPC-3.2.1-6.el6.x86_64 (/BackupPC-3.2.1-6.el6.x86_64)
           Requires: perl(Archive::Zip)
Error: Package: BackupPC-3.2.1-6.el6.x86_64 (/BackupPC-3.2.1-6.el6.x86_64)
           Requires: perl-Time-modules
Error: Package: BackupPC-3.2.1-6.el6.x86_64 (/BackupPC-3.2.1-6.el6.x86_64)
           Requires: perl(File::RsyncP)
Error: Package: BackupPC-3.2.1-6.el6.x86_64 (/BackupPC-3.2.1-6.el6.x86_64)
           Requires: perl(Net::FTP::AutoReconnect)

I added EPEL as a repository when I installed, and yes, this is a registered RHEL 6.1 install. I make no claim at being adept in Linux, so don't fear offending me by pointing out obvious idiocy on my part.

Rick Bastedo

On Wed, Oct 19, 2011 at 3:40 PM, Les Mikesell <les...@gm...> wrote:
> I don't really think backuppc is a good fit for this unless you also
> want a history of several days/weeks copies of these files saved as a
> side effect.
> [rest of previous message snipped]
|
From: Les M. <les...@gm...> - 2011-10-21 16:23:04
|
On Fri, Oct 21, 2011 at 10:54 AM, Rick Bastedo <rba...@gm...> wrote:

> Well I've been told we are using backuppc, so that's why I am using it.
> That's the way it works where I work.

Perhaps if you explained the details of what you have to do to get something on to the DPM system, someone could help with making backuppc do it for you.

> Here's the error I got when trying:
>
> yum install http://mirror.pnl.gov/epel/6/x86_64/BackupPC-3.2.1-6.el6.x86_64.rpm
> ...
> Error: Package: BackupPC-3.2.1-6.el6.x86_64 (/BackupPC-3.2.1-6.el6.x86_64)
>            Requires: perl(Net::FTP::RetrHandle)
> [further missing perl(...) dependency errors snipped]
>
> I added EPEL as a repository when I installed and yes this is a registered
> RHEL 6.1 install.
> I make no claim at being adept in Linux, so don't fear offending me by
> pointing out obvious idiocy on my part.

I've only done it on CentOS, where yum is the native update tool and is configured to resolve/install dependencies automatically, so it 'just works'. Maybe the RHEL yum setup is different. Can you 'yum install' each of those required packages (enclose the names in single quotes, or change the style to look like perl-File-RsyncP)?

--
Les Mikesell
les...@gm... |
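As a concrete form of that suggestion, yum can also install by the virtual perl(...) names straight from the error output, quoted so the shell leaves the parentheses alone (a sketch, assuming the EPEL repository is reachable):

```shell
# Install the missing perl modules by their rpm "provides" names,
# exactly as they appeared in the dependency errors.
yum install 'perl(Net::FTP::RetrHandle)' 'perl(XML::RSS)' \
    'perl(Time::ParseDate)' 'perl(Archive::Zip)' perl-Time-modules \
    'perl(File::RsyncP)' 'perl(Net::FTP::AutoReconnect)'
```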
From: Rick B. <rba...@gm...> - 2011-10-25 22:16:26
|
I got back to this today. I have been finding and installing each of the dependencies and then came to:

"perl-XML-RSS"

It had a dependency of "perl-DateTime-Format-Mail". I found that and tried installing it and got:

"  Requires: perl(DateTime) >= 0.1705
   Installed: 1:perl-DateTime-0.5300-1.el6.x86_64 (@rhel-6-server-rpms)
       perl(DateTime) = 0.53
"

I looked this up and it seems to be a dilemma: I've got a newer version than what it seems to require, yet it won't install. This is the last thing BackupPC seems to be complaining about; if I can figure this out I can proceed with the installation. I don't remember this happening on my test system, but even though it was the same OS it was a different environment. Ideas about getting around this, please?

Rick Bastedo

On Fri, Oct 21, 2011 at 9:22 AM, Les Mikesell <les...@gm...> wrote:
> I've only done it on CentOS where yum is the native update tool and is
> configured to resolve/install dependencies automatically, so it 'just
> works'. Maybe the RHEL yum setup is different. Can you 'yum
> install' each of those required packages (enclose the names in single
> quotes, or change the style to look like perl-File-RsyncP)?
> [rest of previous message snipped]
|
From: Richard S. <hob...@gm...> - 2011-10-26 01:21:07
|
On Tue, Oct 25, 2011 at 5:16 PM, Rick Bastedo <rba...@gm...> wrote:

> It had a dependency of: "perl-DateTime-Format-Mail"
> I found that and tried installing it and got:
> "  Requires: perl(DateTime) >= 0.1705
>    Installed: 1:perl-DateTime-0.5300-1.el6.x86_64 (@rhel-6-server-rpms)
>        perl(DateTime) = 0.53
> "
> I looked this up and it seems to be a dilemma, I've got a newer version than
> what it seems to require however it won't install.
> [...]
> Ideas about getting around this please?

Well, in theory the version you have installed is newer than the requirement, EXCEPT that there's an epoch of 1 on yours which trumps the version (the "1:" in front of perl-DateTime). Since BackupPC is provided by EPEL, I would file a bug against it on Fedora's bugzilla. They may reassign it to the correct perl component, but you may eventually get things fixed.

Richard |
From: Richard S. <hob...@gm...> - 2011-10-26 01:22:19
|
Actually, I got that turned around... I don't know why it's not installing.

On Tue, Oct 25, 2011 at 8:21 PM, Richard Shaw <hob...@gm...> wrote:
> Well, in theory the version you have installed is newer than the
> requirement, EXCEPT, that there's an epoch of 1 on yours which trumps
> the version (the 1 in front of perl-DateTime).
> [rest of previous message snipped]
|
From: Les M. <les...@gm...> - 2011-10-26 19:40:59
|
On Tue, Oct 25, 2011 at 8:22 PM, Richard Shaw <hob...@gm...> wrote:

> Actually I got that turned around... I don't know why it's not installing.

Any idea why this works with the same packages on CentOS 6.x but not RHEL? Or why the RHEL yum wasn't resolving the dependencies itself in the first place? If it is just a yum issue, downloading the rpm and installing with rpm -i or -U should work now that the dependent packages are there.

--
Les Mikesell
les...@gm... |
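If it does turn out to be only a yum quirk, that rpm fallback could look like this, reusing the same mirror URL quoted earlier in the thread:

```shell
# Download the BackupPC rpm and install it directly, now that the
# perl dependencies are already on the system.
cd /tmp
wget http://mirror.pnl.gov/epel/6/x86_64/BackupPC-3.2.1-6.el6.x86_64.rpm
rpm -Uvh BackupPC-3.2.1-6.el6.x86_64.rpm
```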
From: Richard S. <hob...@gm...> - 2011-10-26 20:12:49
|
Just to verify it works, I tried installing to a chroot environment[1], as I don't have access to an EL system. My only guess is that somehow your yum data was out of date or something like that. The one thing I can think of to try is:

# yum clean all

It's pretty radical but non-destructive. It just means yum has to re-download all the yum metadata from all of your repositories.

Richard
---
[1] Gory details here:

$ mock -r epel-6-x86_64 --init
INFO: mock.py version 1.1.16 starting...
State Changed: init plugins
INFO: selinux enabled
State Changed: start
State Changed: lock buildroot
State Changed: clean
State Changed: unlock buildroot
State Changed: init
State Changed: lock buildroot
Mock Version: 1.1.16
INFO: Mock Version: 1.1.16
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled yum cache
State Changed: cleaning yum metadata
INFO: enabled ccache
State Changed: running yum
State Changed: creating cache
State Changed: unlock buildroot
State Changed: end

$ mock -r epel-6-x86_64 --install BackupPC
INFO: mock.py version 1.1.16 starting...
State Changed: init plugins
INFO: selinux enabled
State Changed: start
Mock Version: 1.1.16
INFO: Mock Version: 1.1.16
State Changed: lock buildroot
INFO: installing package(s): BackupPC
INFO: Ignored option -c (probably due to merging -yc != -y -c)

================================================================================
 Package                      Arch    Version              Repository     Size
================================================================================
Installing:
 BackupPC                     x86_64  3.2.1-6.el6          epel          461 k
Installing for dependencies:
 apr                          x86_64  1.3.9-3.el6_0.1      updates       124 k
 apr-util                     x86_64  1.3.9-3.el6_0.1      updates        87 k
 apr-util-ldap                x86_64  1.3.9-3.el6_0.1      updates        15 k
 checkpolicy                  x86_64  2.0.22-1.el6         base          190 k
 dbus-glib                    x86_64  0.86-5.el6           base          170 k
 expat                        x86_64  2.0.1-9.1.el6        base           76 k
 httpd                        x86_64  2.2.15-5.el6.centos  base          811 k
 httpd-tools                  x86_64  2.2.15-5.el6.centos  base           68 k
 libselinux-utils             x86_64  2.0.94-2.el6         base           80 k
 libsemanage                  x86_64  2.0.43-4.el6         base          104 k
 libtalloc                    x86_64  2.0.1-1.1.el6        base           19 k
 libtdb                       x86_64  1.2.1-2.el6          base           28 k
 mailcap                      noarch  2.1.31-1.1.el6       base           27 k
 perl-Archive-Zip             noarch  1.30-2.el6           base          107 k
 perl-CGI                     x86_64  3.49-115.el6         base          191 k
 perl-Class-Singleton         noarch  1.4-6.el6            base           17 k
 perl-Compress-Raw-Zlib       x86_64  2.023-115.el6        base           66 k
 perl-Compress-Zlib           x86_64  2.020-115.el6        base           42 k
 perl-DateTime                x86_64  1:0.5300-1.el6       base          2.0 M
 perl-DateTime-Format-Mail    noarch  0.3001-6.el6         base          159 k
 perl-DateTime-Format-W3CDTF  noarch  0.04-8.el6           base           17 k
 perl-File-RsyncP             x86_64  0.68-6.el6           epel          100 k
 perl-HTML-Parser             x86_64  3.64-2.el6           base          109 k
 perl-HTML-Tagset             noarch  3.20-4.el6           base           17 k
 perl-IO-Compress-Base        x86_64  2.020-115.el6        base           65 k
 perl-IO-Compress-Zlib        x86_64  2.020-115.el6        base          132 k
 perl-List-MoreUtils          x86_64  0.22-10.el6          base           53 k
 perl-Net-FTP-AutoReconnect   noarch  0.3-3.el6            epel           11 k
 perl-Net-FTP-RetrHandle      noarch  0.2-3.el6            epel           16 k
 perl-Params-Validate         x86_64  0.92-3.el6           base           75 k
 perl-Time-modules            noarch  2006.0814-5.el6      base           38 k
 perl-URI                     noarch  1.40-2.el6           base          117 k
 perl-XML-Parser              x86_64  2.36-7.el6           base          224 k
 perl-XML-RSS                 noarch  1.45-2.el6           base           59 k
 perl-libwww-perl             noarch  5.833-2.el6          base          387 k
 policycoreutils              x86_64  2.0.83-19.8.el6_0    updates       675 k
 redhat-logos                 noarch  60.0.14-10.el6       base           10 M
 rsync                        x86_64  3.0.6-5.el6_0.1      updates       335 k
 samba-client                 x86_64  3.5.4-68.el6_0.2     updates        11 M
 samba-common                 x86_64  3.5.4-68.el6_0.2     updates        13 M
 samba-winbind-clients        x86_64  3.5.4-68.el6_0.2     updates       1.1 M
 ustr                         x86_64  1.0.4-9.1.el6        base           86 k

Transaction Summary
================================================================================
Install      43 Package(s)

Total download size: 43 M
Installed size: 129 M

Installed:
  BackupPC.x86_64 0:3.2.1-6.el6

Dependency Installed:
  apr.x86_64 0:1.3.9-3.el6_0.1
  apr-util.x86_64 0:1.3.9-3.el6_0.1
  apr-util-ldap.x86_64 0:1.3.9-3.el6_0.1
  checkpolicy.x86_64 0:2.0.22-1.el6
  dbus-glib.x86_64 0:0.86-5.el6
  expat.x86_64 0:2.0.1-9.1.el6
  httpd.x86_64 0:2.2.15-5.el6.centos
  httpd-tools.x86_64 0:2.2.15-5.el6.centos
  libselinux-utils.x86_64 0:2.0.94-2.el6
  libsemanage.x86_64 0:2.0.43-4.el6
  libtalloc.x86_64 0:2.0.1-1.1.el6
  libtdb.x86_64 0:1.2.1-2.el6
  mailcap.noarch 0:2.1.31-1.1.el6
  perl-Archive-Zip.noarch 0:1.30-2.el6
  perl-CGI.x86_64 0:3.49-115.el6
  perl-Class-Singleton.noarch 0:1.4-6.el6
  perl-Compress-Raw-Zlib.x86_64 0:2.023-115.el6
  perl-Compress-Zlib.x86_64 0:2.020-115.el6
  perl-DateTime.x86_64 1:0.5300-1.el6
  perl-DateTime-Format-Mail.noarch 0:0.3001-6.el6
  perl-DateTime-Format-W3CDTF.noarch 0:0.04-8.el6
  perl-File-RsyncP.x86_64 0:0.68-6.el6
  perl-HTML-Parser.x86_64 0:3.64-2.el6
  perl-HTML-Tagset.noarch 0:3.20-4.el6
  perl-IO-Compress-Base.x86_64 0:2.020-115.el6
  perl-IO-Compress-Zlib.x86_64 0:2.020-115.el6
  perl-List-MoreUtils.x86_64 0:0.22-10.el6
  perl-Net-FTP-AutoReconnect.noarch 0:0.3-3.el6
  perl-Net-FTP-RetrHandle.noarch 0:0.2-3.el6
  perl-Params-Validate.x86_64 0:0.92-3.el6
  perl-Time-modules.noarch 0:2006.0814-5.el6
  perl-URI.noarch 0:1.40-2.el6
  perl-XML-Parser.x86_64 0:2.36-7.el6
  perl-XML-RSS.noarch 0:1.45-2.el6
  perl-libwww-perl.noarch 0:5.833-2.el6
  policycoreutils.x86_64 0:2.0.83-19.8.el6_0
  redhat-logos.noarch 0:60.0.14-10.el6
  rsync.x86_64 0:3.0.6-5.el6_0.1
  samba-client.x86_64 0:3.5.4-68.el6_0.2
  samba-common.x86_64 0:3.5.4-68.el6_0.2
  samba-winbind-clients.x86_64 0:3.5.4-68.el6_0.2
  ustr.x86_64 0:1.0.4-9.1.el6

State Changed: unlock buildroot
State Changed: end |
From: Rick B. <rba...@gm...> - 2011-10-26 21:29:06
|
Thanks everyone, I assumed something that was not true. This problem is behind me, I truly appreciate the help here - even though it pointed out something that I failed to verify as working. Once I added EPEL and then installed BackupPC it went as smoothly as anyone would expect. I added EPEL during RHEL installation, so I thought it was done. Your messages prompted me to look and see if my assumption was correct and finding it wasn't led me to success. (humbly) Rick Bastedo On Wed, Oct 26, 2011 at 1:12 PM, Richard Shaw <hob...@gm...> wrote: > Just to verify it works I tried installing to a chroot environment[1] > as I don't have access to an EL system. The only thing I can think of > is somehow your yum data was out of date or something like that. > > The only thing I can think of is to try: > > # yum clean all > > It's pretty radical but non-destructive. It just means yum has to > re-download all the yum metadata from all of your repositories. > > Richard > --- > [1] Gory details here: > > $ mock -r epel-6-x86_64 --init > INFO: mock.py version 1.1.16 starting... > State Changed: init plugins > INFO: selinux enabled > State Changed: start > State Changed: lock buildroot > State Changed: clean > State Changed: unlock buildroot > State Changed: init > State Changed: lock buildroot > Mock Version: 1.1.16 > INFO: Mock Version: 1.1.16 > INFO: calling preinit hooks > INFO: enabled root cache > INFO: enabled yum cache > State Changed: cleaning yum metadata > INFO: enabled ccache > State Changed: running yum > State Changed: creating cache > State Changed: unlock buildroot > State Changed: end > > $ mock -r epel-6-x86_64 --install BackupPC > INFO: mock.py version 1.1.16 starting... 
> State Changed: init plugins
> INFO: selinux enabled
> State Changed: start
> Mock Version: 1.1.16
> INFO: Mock Version: 1.1.16
> State Changed: lock buildroot
> INFO: installing package(s): BackupPC
> INFO: Ignored option -c (probably due to merging -yc != -y -c)
>
> ================================================================================
>  Package                      Arch     Version              Repository   Size
> ================================================================================
> Installing:
>  BackupPC                     x86_64   3.2.1-6.el6          epel        461 k
> Installing for dependencies:
>  apr                          x86_64   1.3.9-3.el6_0.1      updates     124 k
>  apr-util                     x86_64   1.3.9-3.el6_0.1      updates      87 k
>  apr-util-ldap                x86_64   1.3.9-3.el6_0.1      updates      15 k
>  checkpolicy                  x86_64   2.0.22-1.el6         base        190 k
>  dbus-glib                    x86_64   0.86-5.el6           base        170 k
>  expat                        x86_64   2.0.1-9.1.el6        base         76 k
>  httpd                        x86_64   2.2.15-5.el6.centos  base        811 k
>  httpd-tools                  x86_64   2.2.15-5.el6.centos  base         68 k
>  libselinux-utils             x86_64   2.0.94-2.el6         base         80 k
>  libsemanage                  x86_64   2.0.43-4.el6         base        104 k
>  libtalloc                    x86_64   2.0.1-1.1.el6        base         19 k
>  libtdb                       x86_64   1.2.1-2.el6          base         28 k
>  mailcap                      noarch   2.1.31-1.1.el6       base         27 k
>  perl-Archive-Zip             noarch   1.30-2.el6           base        107 k
>  perl-CGI                     x86_64   3.49-115.el6         base        191 k
>  perl-Class-Singleton         noarch   1.4-6.el6            base         17 k
>  perl-Compress-Raw-Zlib       x86_64   2.023-115.el6        base         66 k
>  perl-Compress-Zlib           x86_64   2.020-115.el6        base         42 k
>  perl-DateTime                x86_64   1:0.5300-1.el6       base        2.0 M
>  perl-DateTime-Format-Mail    noarch   0.3001-6.el6         base        159 k
>  perl-DateTime-Format-W3CDTF  noarch   0.04-8.el6           base         17 k
>  perl-File-RsyncP             x86_64   0.68-6.el6           epel        100 k
>  perl-HTML-Parser             x86_64   3.64-2.el6           base        109 k
>  perl-HTML-Tagset             noarch   3.20-4.el6           base         17 k
>  perl-IO-Compress-Base        x86_64   2.020-115.el6        base         65 k
>  perl-IO-Compress-Zlib        x86_64   2.020-115.el6        base        132 k
>  perl-List-MoreUtils          x86_64   0.22-10.el6          base         53 k
>  perl-Net-FTP-AutoReconnect   noarch   0.3-3.el6            epel         11 k
>  perl-Net-FTP-RetrHandle      noarch   0.2-3.el6            epel         16 k
>  perl-Params-Validate         x86_64   0.92-3.el6           base         75 k
>  perl-Time-modules            noarch   2006.0814-5.el6      base         38 k
>  perl-URI                     noarch   1.40-2.el6           base        117 k
>  perl-XML-Parser              x86_64   2.36-7.el6           base        224 k
>  perl-XML-RSS                 noarch   1.45-2.el6           base         59 k
>  perl-libwww-perl             noarch   5.833-2.el6          base        387 k
>  policycoreutils              x86_64   2.0.83-19.8.el6_0    updates     675 k
>  redhat-logos                 noarch   60.0.14-10.el6       base         10 M
>  rsync                        x86_64   3.0.6-5.el6_0.1      updates     335 k
>  samba-client                 x86_64   3.5.4-68.el6_0.2     updates      11 M
>  samba-common                 x86_64   3.5.4-68.el6_0.2     updates      13 M
>  samba-winbind-clients        x86_64   3.5.4-68.el6_0.2     updates     1.1 M
>  ustr                         x86_64   1.0.4-9.1.el6        base         86 k
>
> Transaction Summary
> ================================================================================
> Install      43 Package(s)
>
> Total download size: 43 M
> Installed size: 129 M
>
> Installed:
>   BackupPC.x86_64 0:3.2.1-6.el6
>
> Dependency Installed:
>   apr.x86_64 0:1.3.9-3.el6_0.1
>   apr-util.x86_64 0:1.3.9-3.el6_0.1
>   apr-util-ldap.x86_64 0:1.3.9-3.el6_0.1
>   checkpolicy.x86_64 0:2.0.22-1.el6
>   dbus-glib.x86_64 0:0.86-5.el6
>   expat.x86_64 0:2.0.1-9.1.el6
>   httpd.x86_64 0:2.2.15-5.el6.centos
>   httpd-tools.x86_64 0:2.2.15-5.el6.centos
>   libselinux-utils.x86_64 0:2.0.94-2.el6
>   libsemanage.x86_64 0:2.0.43-4.el6
>   libtalloc.x86_64 0:2.0.1-1.1.el6
>   libtdb.x86_64 0:1.2.1-2.el6
>   mailcap.noarch 0:2.1.31-1.1.el6
>   perl-Archive-Zip.noarch 0:1.30-2.el6
>   perl-CGI.x86_64 0:3.49-115.el6
>   perl-Class-Singleton.noarch 0:1.4-6.el6
>   perl-Compress-Raw-Zlib.x86_64 0:2.023-115.el6
>   perl-Compress-Zlib.x86_64 0:2.020-115.el6
>   perl-DateTime.x86_64 1:0.5300-1.el6
>   perl-DateTime-Format-Mail.noarch 0:0.3001-6.el6
>   perl-DateTime-Format-W3CDTF.noarch 0:0.04-8.el6
>   perl-File-RsyncP.x86_64 0:0.68-6.el6
>   perl-HTML-Parser.x86_64 0:3.64-2.el6
>   perl-HTML-Tagset.noarch 0:3.20-4.el6
>   perl-IO-Compress-Base.x86_64 0:2.020-115.el6
>   perl-IO-Compress-Zlib.x86_64 0:2.020-115.el6
>   perl-List-MoreUtils.x86_64 0:0.22-10.el6
>   perl-Net-FTP-AutoReconnect.noarch 0:0.3-3.el6
>   perl-Net-FTP-RetrHandle.noarch 0:0.2-3.el6
>   perl-Params-Validate.x86_64 0:0.92-3.el6
>   perl-Time-modules.noarch 0:2006.0814-5.el6
>   perl-URI.noarch 0:1.40-2.el6
>   perl-XML-Parser.x86_64 0:2.36-7.el6
>   perl-XML-RSS.noarch 0:1.45-2.el6
>   perl-libwww-perl.noarch 0:5.833-2.el6
>   policycoreutils.x86_64 0:2.0.83-19.8.el6_0
>   redhat-logos.noarch 0:60.0.14-10.el6
>   rsync.x86_64 0:3.0.6-5.el6_0.1
>   samba-client.x86_64 0:3.5.4-68.el6_0.2
>   samba-common.x86_64 0:3.5.4-68.el6_0.2
>   samba-winbind-clients.x86_64 0:3.5.4-68.el6_0.2
>   ustr.x86_64 0:1.0.4-9.1.el6
>
> State Changed: unlock buildroot
> State Changed: end
>
> ------------------------------------------------------------------------------
> The demand for IT networking professionals continues to grow, and the
> demand for specialized networking skills is growing even more rapidly.
> Take a complimentary Learning@Cisco Self-Assessment and learn
> about Cisco certifications, training, and career opportunities.
> http://p.sf.net/sfu/cisco-dev2dev
> _______________________________________________
> BackupPC-users mailing list
> Bac...@li...
> List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki: http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
|
From: Richard S. <hob...@gm...> - 2011-10-26 23:46:52
|
On Wed, Oct 26, 2011 at 4:28 PM, Rick Bastedo <rba...@gm...> wrote:
> Thanks everyone, I assumed something that was not true.
> This problem is behind me, I truly appreciate the help here - even though it
> pointed out something that I failed to verify as working.
> Once I added EPEL and then installed BackupPC it went as smoothly as anyone
> would expect.
> I added EPEL during RHEL installation, so I thought it was done.
> Your messages prompted me to look and see if my assumption was correct and
> finding it wasn't led me to success.

I'm just glad this was an easy one! (the fix anyway!)

Richard
|
From: Richard S. <hob...@gm...> - 2011-10-27 13:58:03
|
Just an FYI, you probably avoided this problem (the hard way) by manually installing some of the perl packages. They are available in an optional RHEL (not EPEL) repository that is disabled by default. See the following bug for details: https://bugzilla.redhat.com/show_bug.cgi?id=740276 Richard |
From: Rick B. <rba...@gm...> - 2011-10-27 14:19:21
|
Actually, I read about enabling the "optional" RHEL repo while researching this problem, so I had done that as well. Manually installing things meant the system only had to fetch that last dependency, and then it went ahead and installed BackupPC. Reading the docs - it's the final frontier...

Now the real fun starts: getting BackupPC configured and working. Sometimes I wonder how I get myself into these things.

Rick Bastedo

On Thu, Oct 27, 2011 at 6:57 AM, Richard Shaw <hob...@gm...> wrote:
> Just an FYI, you probably avoided this problem (the hard way) by
> manually installing some of the perl packages. They are available in
> an optional RHEL (not EPEL) repository that is disabled by default.
> See the following bug for details:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=740276
>
> Richard
|
From: Les M. <les...@gm...> - 2011-10-27 15:47:21
|
On Thu, Oct 27, 2011 at 9:19 AM, Rick Bastedo <rba...@gm...> wrote:
> Actually I read about enabling the "optional" RHEL repo while researching
> this problem so I had done that as well.
> Manually installing things made it so the system only had to get that last
> dependency and then it went ahead and installed BackupPC.
> Reading the docs, it's the final frontier...
>
> Now the real fun starts, getting BackupPC configured and working.
> Sometimes I wonder how I get myself into these things.

From the RPM install it should just come up working, with the usual 'service' and 'chkconfig' operations to activate it, although iptables and/or SELinux might prevent access. See the comment in /etc/httpd/conf.d/BackupPC.conf about adding a web user password. It should be easy enough to get your data into backuppc at that point. I'm still curious about how you plan to get it to your DPM.

--
Les Mikesell
les...@gm...
|
From: Rick B. <rba...@gm...> - 2011-10-27 16:14:52
|
"I'm still curious about how you plan to get it to your DPM."

As am I.

I'm taking a 'one step at a time' approach here. My network admin hasn't given me the storage share address yet, nor do I have permission on the system that is to be backed up. I've got BackupPC installed, and am waiting until I get the information I need from him before I do configuration. Meanwhile I suppose getting the rest of the system ready would be a good idea at this point.

Rick Bastedo

On Thu, Oct 27, 2011 at 8:47 AM, Les Mikesell <les...@gm...> wrote:
> On Thu, Oct 27, 2011 at 9:19 AM, Rick Bastedo <rba...@gm...> wrote:
> > Actually I read about enabling the "optional" RHEL repo while researching
> > this problem so I had done that as well.
> > Manually installing things made it so the system only had to get that last
> > dependency and then it went ahead and installed BackupPC.
> > Reading the docs, it's the final frontier...
> >
> > Now the real fun starts, getting BackupPC configured and working.
> > Sometimes I wonder how I get myself into these things.
>
> From the RPM install it should just come up working with the usual
> 'service' and 'chkconfig' operations to activate it, although iptables
> and/or SELinux might prevent access. See the comment in
> /etc/httpd/conf.d/BackupPC.conf about adding a web user password. It
> should be easy enough to get your data into backuppc at that point.
> I'm still curious about how you plan to get it to your DPM.
>
> --
> Les Mikesell
> les...@gm...
|
From: Rick B. <rba...@gm...> - 2011-11-07 18:22:39
|
The Systems Admin told me they are setting up a 5TB NFS share for me to use. Any gotchas anyone can think of before I go ahead with configuration? There's nothing like diving right into the deep end.

They will have me back up more Linux systems after I successfully take care of the big sore point they currently have, which is (per the current plan) backing up the database backups: four backups on a client machine, dated from most recent to oldest, currently about 36GB each. So (though I haven't seen it) I'm thinking there will be four backup directories that look a lot like each other, with minor differences. I've been asked if there's a way to get only the things that are newer than NN hours, which I don't know - I said I'd get back to them on that.

So the current thinking is that the databases are backed up using whatever our vendor uses for backups; this is their responsibility according to their SLA. To provide disaster recovery (partially our responsibility), we will move the resulting database backup files to our NFS share via BackupPC, then back up that NFS share to our DPM tape repository, which gets sent out to Iron Mountain weekly. The restore process for disaster recovery should be: let the vendor rebuild the system according to their SLA, then we supply the database backup file set they request, and they perform the database restore.

After we demonstrate success with this, we will identify other clients and add more to our backups - but first things first. ANY advice to this BackupPC noob is appreciated; I am learning things as I read everyone's messages.

Rick Bastedo

On Thu, Oct 27, 2011 at 9:14 AM, Rick Bastedo <rba...@gm...> wrote:
> "I'm still curious about how you plan to get it to your DPM."
>
> As am I.
>
> I'm taking a 'one step at a time' approach here.
> My network admin hasn't given me the storage share address yet nor do I
> have permission on the system that is to be backed up.
> I've got BackupPC installed, and am waiting until I get the information I
> need from him before I do configuration.
> Meanwhile I suppose getting the rest of the system ready would be a good
> idea at this point.
>
> Rick Bastedo
>
> On Thu, Oct 27, 2011 at 8:47 AM, Les Mikesell <les...@gm...> wrote:
>> On Thu, Oct 27, 2011 at 9:19 AM, Rick Bastedo <rba...@gm...> wrote:
>> > Actually I read about enabling the "optional" RHEL repo while researching
>> > this problem so I had done that as well.
>> > Manually installing things made it so the system only had to get that last
>> > dependency and then it went ahead and installed BackupPC.
>> > Reading the docs, it's the final frontier...
>> >
>> > Now the real fun starts, getting BackupPC configured and working.
>> > Sometimes I wonder how I get myself into these things.
>>
>> From the RPM install it should just come up working with the usual
>> 'service' and 'chkconfig' operations to activate it, although iptables
>> and/or SELinux might prevent access. See the comment in
>> /etc/httpd/conf.d/BackupPC.conf about adding a web user password. It
>> should be easy enough to get your data into backuppc at that point.
>> I'm still curious about how you plan to get it to your DPM.
>>
>> --
>> Les Mikesell
>> les...@gm...
|
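One early gotcha worth checking before configuration: if the 5TB share ends up mounted as BackupPC's TopDir, the mount itself is a one-liner, but options and ownership matter. A sketch of a client-side fstab entry (the server name, export path, and options here are placeholders, not from the thread; /var/lib/BackupPC is the EPEL package's default TopDir):

```
# Hypothetical /etc/fstab entry -- adjust server, export, and options to the site.
# A 'hard' mount is generally preferred for backup data so writes block rather
# than silently fail if the NFS server goes away mid-backup.
nfsserver:/export/backuppc  /var/lib/BackupPC  nfs  rw,hard,intr,rsize=32768,wsize=32768  0 0
```

After adding the entry, mount it with the backuppc service stopped and verify that the directory is writable by the backuppc user - which also means the NFS export must not squash that UID to an unprivileged one.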
From: Les M. <les...@gm...> - 2011-11-07 19:48:45
|
On Mon, Nov 7, 2011 at 12:22 PM, Rick Bastedo <rba...@gm...> wrote:
> The Systems Admin told me they are setting up a 5TB NFS share for me to use.
> Any gotcha's anyone can think of before I go ahead with configuration?

Are you planning to mount the NFS share as the top of the backuppc archive directory, or use it as a target to export tar archives, either with scripted BackupPC_tarCreate commands or 'archive host' configurations?

> They will be having me back up more Linux systems after I successfully take
> care of the big sore point they have currently.

If the system ends up doing some type of file-oriented copy from the NFS share to tape, it will likely fail at some point if that is your whole backuppc archive, due to the large number of hardlinked files it will accumulate.

> I've been asked if there's a way to just get things that are newer than NN
> hours only, which I don't know - I said I'd get back to them on that.

Maybe - if you modify the command issued to collect the copies.

> So the current thinking is that the databases are backed up using whatever
> our vendor uses to do backups, this is their responsibility according to
> their SLA.
> In order to provide disaster recovery (partially our responsibility) we will
> move those resulting database backup files to our NFS share via BackupPC and
> then backup that NFS to our DPM tape repository which gets sent out to Iron
> Mountain weekly.
> The restore process for disaster recovery purposes should be that of letting
> the vendor rebuild the system according to their SLA, then we will supply
> the database backup file set they request and they will perform the database
> restore.

I think you can make this work, but it isn't the sort of thing backuppc does best. On the other hand, if you don't actually have a site disaster, it may be handy to be able to grab the copy that backuppc still has online.

> After we demonstrate success with this then we will identify other clients
> and add more to our backups, but first things first.

You'll see much more benefit from backuppc's features on targets where you have duplication across machines, or directories where only a few files change per run.

--
Les Mikesell
les...@gm...
|
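On the "newer than NN hours" question: outside of BackupPC itself, modifying the collection step can be as simple as a `find -mmin` pass that selects only recent dump files. A minimal sketch (the directory names and the 24-hour default are illustrative assumptions; GNU find is assumed):

```shell
#!/bin/sh
# Copy files modified within the last N hours (default 24) from one
# directory to another -- e.g. to stage only the newest database dumps.
copy_recent() {
    src=$1
    dest=$2
    hours=${3:-24}
    mkdir -p "$dest"
    # -mmin takes minutes; a leading '-' means "modified less than N minutes ago"
    find "$src" -type f -mmin "-$((hours * 60))" -exec cp -p {} "$dest"/ \;
}
```

Run from cron as, say, `copy_recent /var/backups/db /mnt/nfs/staging 24`, this stages only the last day's dumps; an rsync driven by a `--files-from` list built the same way is another option.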
From: Rick B. <rba...@gm...> - 2011-11-07 20:38:05
|
I am planning on mounting the share as the TopDir. I'm going to start small and see where it fails, then deal with it.

This part sounds like it would be a lot simpler and way more effective to tar-archive everything we want to store and then copy the results to tape. Or - there's a way to tar-archive and stream it to the NFS share using BackupPC - if I can manage to do that, it might just solve problems. Whatever happens, it has to go straight to the NFS, as that's the only storage I'll have that's big enough to take anything.

Rick

On Mon, Nov 7, 2011 at 11:48 AM, Les Mikesell <les...@gm...> wrote:
> On Mon, Nov 7, 2011 at 12:22 PM, Rick Bastedo <rba...@gm...> wrote:
> > The Systems Admin told me they are setting up a 5TB NFS share for me to use.
> > Any gotcha's anyone can think of before I go ahead with configuration?
>
> Are you planning to mount the NFS share as the top of the backuppc
> archive directory, or use it as a target to export tar archives either
> with scripted Backuppc_tarCreate commands or 'archive host'
> configurations?
>
> > They will be having me back up more Linux systems after I successfully take
> > care of the big sore point they have currently.
>
> If the system ends up doing some type file-oriented copy from the NFS
> share to tape, it will likely fail at some point if that is your whole
> backuppc archive due to the large number of hardlinked files it will
> accumulate.
>
> > I've been asked if there's a way to just get things that are newer than NN
> > hours only, which I don't know - I said I'd get back to them on that.
>
> Maybe - if you modify the command issued to collect the copies.
>
> > So the current thinking is that the databases are backed up using whatever
> > our vendor uses to do backups, this is their responsibility according to
> > their SLA.
> > In order to provide disaster recovery (partially our responsibility) we will
> > move those resulting database backup files to our NFS share via BackupPC and
> > then backup that NFS to our DPM tape repository which gets sent out to Iron
> > Mountain weekly.
> > The restore process for disaster recovery purposes should be that of letting
> > the vendor rebuild the system according to their SLA, then we will supply
> > the database backup file set they request and they will perform the database
> > restore.
>
> I think you can make this work, but it isn't the sort of thing
> backuppc does best. On the other hand if you don't actually have a
> site disaster, it may be handy to be able to grab the copy that
> backuppc still has online.
>
> > After we demonstrate success with this then we will identify other clients
> > and add more to our backups, but first things first.
>
> You'll see much more benefit from backuppc's features from targets
> where you have duplication across machines, or directories where only
> a few files change per run.
>
> --
> Les Mikesell
> les...@gm...
|
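Streaming a tar archive of a host's latest backup onto the NFS share is what BackupPC_tarCreate does. A hedged sketch of a wrapper (the binary path is the usual EPEL-packaged location and may differ; `-n -1` selects the most recent backup and `-s` names the share as configured for that host; the TAR_CREATE override is purely a testing convenience, not a BackupPC feature):

```shell
#!/bin/sh
# Write the most recent BackupPC backup of a host out as a plain tar file
# that a tape job can pick up and that restores with nothing but tar.
archive_host() {
    host=$1
    destdir=$2
    cmd=${TAR_CREATE:-/usr/share/BackupPC/bin/BackupPC_tarCreate}
    mkdir -p "$destdir"
    # -h host, -n backup number (-1 = latest), -s share name, '.' = everything
    "$cmd" -h "$host" -n -1 -s "/" . > "$destdir/$host-latest.tar"
}
```

A disaster restore then needs only `tar -x`, not a reconstructed BackupPC instance. The built-in 'archive host' configuration automates much the same thing from the web interface.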
From: Les M. <les...@gm...> - 2011-11-07 20:55:46
|
On Mon, Nov 7, 2011 at 2:37 PM, Rick Bastedo <rba...@gm...> wrote:
> I am planning on mounting the share as the TopDir.
> I'm going to start small and see where it fails, then deal with it.

Umm, is it wise to plan something that you expect to fail under typical conditions?

> This part sounds like it would be a lot simpler and way more effective to
> tar archive everything we want to store and then copy the results to tape.

Except that (a) restoring it isn't likely to work completely correctly even if you can copy it, (b) you'll have to restore it to an identically-configured backuppc instance to access the files, and (c) you'll probably be in a hurry to access the last-saved db copy, but you'll have to restore the entire backuppc archive first before you can get it.

> Or - there's a way to tar archive and stream it to the NFS share using
> BackupPC - if I can manage to do that it might just solve problems.

A simple cron job could copy the file in its usable form to the place the tape archiver will pick it up. And backuppc could make its own copy separately, so you'd only need the tape in case of a disaster.

> Whatever happens it has to go straight to the NFS as that's the only storage
> I'll have that's big enough to take anything.

Big disks are cheap these days. Or use one part of the NFS share for backuppc, another for what you send to tape.

--
Les Mikesell
les...@gm...
|
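The cron-job idea could look like this sketch: copy the newest dump, in its directly usable form, to a pickup directory for the tape archiver, pruning the previously staged copy. The paths and the *.dump naming pattern are assumptions, not from the thread:

```shell
#!/bin/sh
# Stage the single newest *.dump file for the tape job, replacing the old one.
stage_latest_dump() {
    src=$1
    dest=$2
    mkdir -p "$dest"
    latest=$(ls -1t "$src"/*.dump 2>/dev/null | head -n 1)
    [ -n "$latest" ] || return 1          # nothing to stage
    rm -f "$dest"/*.dump                  # prune the previously staged copy
    cp -p "$latest" "$dest"/
}
```

A crontab entry (again hypothetical) would then be something like: `30 1 * * * /usr/local/bin/stage_latest_dump.sh /var/backups/db /mnt/nfs/tape-pickup`.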
From: <ha...@gm...> - 2011-11-10 01:32:30
|
>> Whatever happens it has to go straight to the NFS as that's the only storage
>> I'll have that's big enough to take anything.
>
> Big disks are cheap these days. Or use one part of the NFS share for
> backuppc, another for what you send to tape.

Echo Les - what he said. You will find your BackupPC instance comes in very handy for ad-hoc restores, and its deduping will allow you to keep versioned archives long-term. But IMO don't bother having them back up the whole TopDir; or if you do try that, treat it as a completely separate run of the tape system, with separate physical tape sets etc. - it's just "would be nice if it works", i.e. "we might be able to reconstruct the BackupPC instance from tape, but won't count on it."

Set up a secondary mount point (let's call it DumpDir) for scheduled "dump to tape" jobs per client/target host - those are the targets for your tape jobs. This means the tape archive process won't benefit from BPC's de-duplication, but speed and ease of recovery in a true disaster is the goal there, not saving media space. These are temporary dumps; have your process delete them when the next dump runs.

Note that the TopDir filesystem, although it will in effect contain the contents of hundreds of DumpDir instances down the road, will actually take up a very small fraction of your allocated disk space, depending on the number of hosts and the degree of duplication between them. Be very careful you don't allow your total space to fill up - ideally you want TopDir on its own dedicated filesystem. Otherwise, perhaps keep more than one instance per host in your DumpDir, so you can quickly wipe them out when your monitoring system lets you know you're hitting, say, an 80% threshold, giving your team time to fulfill your request for more disk space.

As you seem to be aware, the lack of an overall plan centrally coordinated by a BackupPC expert means you'll end up with a bit of a square-peg-round-hole result, but if the resources are there to accommodate the resulting hodgepodge, it can still be reliable; in fact it will give you resource-inefficient redundancy that might just come in handy down the road 8-)

If it turns out the filesystem you're allocated is too precious for them to accommodate the above, then make sure you put the BackupPC TopDir on the most reliable and expandable filesystem (probably the expensive one) and just use a big cheap drive for the "scratch" DumpDir filesystem targeted by their tape system.

Disclaimer: the above is based on background knowledge from general experience; if it conflicts with what anyone else tells you here regarding BackupPC, best to follow them rather than me.
|
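The 80%-threshold warning can be automated with a few lines of shell, run from cron against both TopDir and DumpDir. A sketch (the threshold and the example paths are assumptions; `df -P` is used for portable, single-line-per-filesystem output):

```shell
#!/bin/sh
# Return non-zero (and print a warning) when the filesystem holding the
# given path is at or above the usage threshold, given in percent.
check_fs_usage() {
    path=$1
    limit=${2:-80}
    # Column 5 of 'df -P' is the capacity, e.g. "73%"; strip the percent sign.
    used=$(df -P "$path" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
    if [ "$used" -ge "$limit" ]; then
        echo "WARNING: $path is ${used}% full (threshold ${limit}%)" >&2
        return 1
    fi
}
```

Invoked from cron as, say, `check_fs_usage /var/lib/BackupPC 80 && check_fs_usage /mnt/nfs/dumpdir 80`, any output lands in the cron mail; or wire it into whatever monitoring is already watching these boxes.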