From: Jonathan B. <lin...@gm...> - 2014-08-19 12:59:18
We have a small cluster running Alfresco. The web software is on server1
and the database is on server2.

We have Bacula community version 7.0.5. The Bacula client is installed on
server1 for now. Obviously, server1 can access server2's database.

I'm wondering how others have solved this problem. I have a couple of
options here:

1. Do a pg_dump on server1 (it has the space) and then back up the dump file
2. Do the same on server2 instead
3. Set up a PITR backup on server2
4. Do a simple file system backup. This requires shutting down the
   database while the backup is in progress.

So, I'm curious. Any responses?

Thanks in advance.

JBB
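A minimal sketch of option 4 (a cold file system backup), assuming a stock
PostgreSQL layout with the data directory in /var/lib/pgsql/data and pg_ctl
on the PATH; the paths and the backup target are hypothetical and will
differ per installation:

#!/bin/sh
# Cold backup: stop PostgreSQL, archive the data directory, restart.
# Run as the postgres user; PGDATA and TARGET are assumptions.
PGDATA=/var/lib/pgsql/data
TARGET=/backup/pgdata-$(date +%Y%m%d).tar.gz

pg_ctl -D "$PGDATA" stop -m fast || exit 1
tar czf "$TARGET" -C "$(dirname "$PGDATA")" "$(basename "$PGDATA")"
pg_ctl -D "$PGDATA" start

Bacula would then back up the resulting tarball (or the stopped data
directory itself) on server2.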
From: Heitor F. <he...@ba...> - 2014-08-19 13:20:42
> 1. Do a pg_dump on server1 (it has the space) and then back up the dump file
> 2. Do the same on server2 instead
> 3. Set up a PITR backup on server2
> 4. Do a simple file system backup. This requires shutting down the
>    database while the backup is in progress.

These are all valid, commonly used solutions for backing up PostgreSQL
databases. For option 3 I think you mean the hot PostgreSQL backup, which
requires enabling WAL archiving, a before-job script to put the database
into backup mode, and an after-job script to take it out again (a sketch
follows below).

This method is good because it needs no additional space for a dump, keeps
the database available while the backup runs, and allows PITR. But I'm not
a DBA; maybe you can ask yours. =)

> So, I'm curious. Any responses?
>
> Thanks in advance.
>
> JBB
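A minimal sketch of how option 3 could be wired into a Bacula job, assuming
a PostgreSQL 9.x-era server (where pg_start_backup()/pg_stop_backup() are
the relevant calls; later releases renamed them) and hypothetical script,
Job, and FileSet names. The FD would run on server2, and the FileSet would
need to cover both the data directory and the WAL archive location:

# bacula-dir.conf fragment (names are assumptions)
Job {
  Name = "server2-postgres-hot"
  Client = server2-fd
  FileSet = "postgres-data-and-wal"
  ClientRunBeforeJob = "/usr/local/sbin/pg_backup_begin.sh"
  ClientRunAfterJob  = "/usr/local/sbin/pg_backup_end.sh"
  # ... plus the usual Type, Schedule, Storage, Pool and Messages
}

#!/bin/sh
# /usr/local/sbin/pg_backup_begin.sh -- put the cluster into backup mode;
# the second argument 'true' requests an immediate checkpoint.
psql -U postgres -c "SELECT pg_start_backup('bacula', true);"

#!/bin/sh
# /usr/local/sbin/pg_backup_end.sh -- leave backup mode.
psql -U postgres -c "SELECT pg_stop_backup();"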
From: Radosław K. <rad...@ko...> - 2014-08-22 20:49:24
Hello,

2014-08-19 15:20 GMT+02:00 Heitor Faria <he...@ba...>:

> These are all valid, commonly used solutions for backing up PostgreSQL
> databases. For option 3 I think you mean the hot PostgreSQL backup, which
> requires enabling WAL archiving, a before-job script to put the database
> into backup mode, and an after-job script to take it out again.
>
> This method is good because it needs no additional space for a dump,
> keeps the database available while the backup runs, and allows PITR.

There is a PostgreSQL plugin for Bacula which does PITR backup and
recovery. It automatically handles WAL archiving and the online data file
backup. The plugin is available under the AGPLv3 license. It is especially
useful for large databases, where a data import would take a long time, or
when Point-In-Time Recovery is required.

best regards
--
Radosław Korzeniewski
rad...@ko...
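For context, the continuous archiving such a setup relies on is configured
in postgresql.conf; a minimal sketch, assuming /backup/wal as the archive
directory (parameter values are version dependent):

# postgresql.conf fragment
wal_level = archive        # 'replica' on newer PostgreSQL releases
archive_mode = on
archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'

A PITR restore then replays the archived WAL on top of a base copy of the
data directory.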
From: Dan L. <da...@la...> - 2014-08-20 20:35:05
Attachments:
signature.asc
On Aug 19, 2014, at 8:59 AM, Jonathan Bayer <lin...@gm...> wrote:

> We have a small cluster running Alfresco. The web software is on server1
> and the database is on server2.
>
> We have Bacula community version 7.0.5. The Bacula client is installed
> on server1 for now. Obviously, server1 can access server2's database.
>
> I'm wondering how others have solved this problem. I have a couple of
> options here:
>
> 1. Do a pg_dump on server1 (it has the space) and then back up the dump file
> 2. Do the same on server2 instead
> 3. Set up a PITR backup on server2
> 4. Do a simple file system backup. This requires shutting down the
>    database while the backup is in progress.
>
> So, I'm curious. Any responses?

I would run the pg_dump on server1. That way, your copy is not on server2
should server2 die.

Back up that dump file. Do not delete the file after the backup; keep it
available in case it's needed.

On a regular basis, rsync that file, and your *.conf files, to a few other
safe places. Document those locations (a sketch follows below).

The goal: have the configuration and the database dump available should
server1 die.

— Dan Langille
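A minimal sketch of that approach as a script run on server1 before (or as
part of) the Bacula job; the host, database, user, and path names are
assumptions:

#!/bin/sh
# Dump the Alfresco database from server2 onto server1, keep the file,
# and push a copy (plus the Bacula configs) somewhere safe as well.
# Assumes a ~/.pgpass entry (or equivalent) for the connection.
DUMPFILE=/backup/alfresco-$(date +%Y%m%d).dump

pg_dump -h server2 -U alfresco -Fc -f "$DUMPFILE" alfresco || exit 1
rsync -a "$DUMPFILE" /etc/bacula/*.conf offsite-host:/srv/backups/

The Bacula FileSet on server1 would then simply include /backup.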
From: Dimitri M. <dm...@bm...> - 2014-08-20 21:23:01
Attachments:
signature.asc
On 08/20/2014 03:34 PM, Dan Langille wrote:
> On Aug 19, 2014, at 8:59 AM, Jonathan Bayer <lin...@gm...> wrote: [...]
>
> I would run the pg_dump on server1. That way, your copy is not on server2
> should server2 die.
>
> Back up that dump file. Do not delete the file after the backup; keep it
> available in case it's needed.
>
> On a regular basis, rsync that file, and your *.conf files, to a few
> other safe places. Document those locations.
>
> The goal: have the configuration and the database dump available should
> server1 die.

FWIW, I usually pg_dump the schema to a text file and run a script that
does '\copy ... to csv' for each table, then commit them to a git or RCS
repository right there and rsync the repository to a couple of other
servers. No Bacula needed.

I don't bother with the .conf because so far I didn't need to make any
mods I'd want saved -- changing the listen address and ACL doesn't count
as worth saving. (For other conf files there's etckeeper.)

--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
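A sketch of that kind of export script, with hypothetical database, table,
and host names; it assumes the git repository already exists in the output
directory:

#!/bin/sh
# Dump the schema plus one CSV per table, commit, and sync elsewhere.
DB=mydb
OUTDIR=/srv/dbexport
cd "$OUTDIR" || exit 1

pg_dump --schema-only "$DB" > schema.sql

for t in entries authors citations; do    # table names are assumptions
    psql -d "$DB" -c "\\copy $t to '$OUTDIR/$t.csv' csv header"
done

git add -A && git commit -m "db export $(date +%F)"
rsync -a "$OUTDIR"/ otherserver:/srv/dbexport/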
From: Dan L. <da...@la...> - 2014-08-22 02:38:04
Attachments:
signature.asc
On Aug 20, 2014, at 5:22 PM, Dimitri Maziuk <dm...@bm...> wrote:
> On 08/20/2014 03:34 PM, Dan Langille wrote: [...]
>
> FWIW, I usually pg_dump the schema to a text file and run a script that
> does '\copy ... to csv' for each table, then commit them to a git or RCS
> repository right there and rsync the repository to a couple of other
> servers. No Bacula needed.
>
> I don't bother with the .conf because so far I didn't need to make any
> mods I'd want saved -- changing the listen address and ACL doesn't count
> as worth saving.

When I said .conf, I meant bacula-dir.conf, etc. Keep those files handy,
outside the Bacula backups.

— Dan Langille
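A minimal sketch of keeping those files copied off-host on a schedule,
assuming a standard /etc/bacula layout and a hypothetical destination host:

# /etc/cron.d/bacula-conf-copy (hypothetical)
# Push the Bacula configuration somewhere that survives losing server1.
0 2 * * *  root  rsync -a /etc/bacula/ offsite-host:/srv/backups/bacula-conf/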
From: Dmitri M. <dm...@bm...> - 2014-08-22 14:43:54
On 8/21/2014 9:37 PM, Dan Langille wrote:
> When I said .conf, I meant bacula-dir.conf, etc. Keep those files handy,
> outside the Bacula backups.

Ah. There's etckeeper for backing those up, and any number of ways to copy
/etc/.git elsewhere. I've been eyeballing btsync lately; pity it'll
probably never make it into any of the Linux distros...

Dima
From: Jesper K. <je...@kr...> - 2014-08-29 18:04:09
On 20/08/2014, at 23.22, Dimitri Maziuk <dm...@bm...> wrote:

> FWIW, I usually pg_dump the schema to a text file and run a script that
> does '\copy ... to csv' for each table, then commit them to a git or RCS
> repository right there and rsync the repository to a couple of other
> servers. No Bacula needed.

That is not going to give you a backup that is guaranteed to be consistent.

Jesper
From: Dimitri M. <dm...@bm...> - 2014-08-29 18:12:32
Attachments:
signature.asc
On 08/29/2014 12:48 PM, Jesper Krogh wrote:
> On 20/08/2014, at 23.22, Dimitri Maziuk <dm...@bm...> wrote:
>
>> FWIW, I usually pg_dump the schema to a text file and run a script that
>> does '\copy ... to csv' for each table, then commit them to a git or RCS
>> repository right there and rsync the repository to a couple of other
>> servers. No Bacula needed.
>
> That is not going to give you a backup that is guaranteed to be
> consistent.

It will give *me* a consistent backup, but in general you're right: it
won't. (Wrapping it in a single transaction can be problematic.)

The other problem is that, like pg_dump, it only works up to a point. Once
your .csv files grow to a couple of gigabytes you'll have the same problems
as with one huge pg_dump file. Depending on the data and update frequency,
the deltas can grow into gigabytes even faster and kill your I/O on both
the VCS commits and the repository sync.

Again, *my* CSVs aren't expected to get that big any time soon; YMMV. So
buyer beware, when it breaks you get to keep the pieces, and all that.

--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
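For reference, one way to make the per-table export consistent is to run
all the \copy commands in a single REPEATABLE READ transaction, so every
table is read from the same snapshot. A minimal sketch with hypothetical
database and table names (it does not address the size and I/O concerns
above):

#!/bin/sh
# One psql session, one transaction, one snapshot for all tables.
psql -d mydb <<'EOF'
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
\copy entries   to 'entries.csv'   csv header
\copy authors   to 'authors.csv'   csv header
\copy citations to 'citations.csv' csv header
COMMIT;
EOF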