From: Lord S. <lor...@gm...> - 2013-04-25 21:45:59
|
I'm currently backing up MySQL by dumping the DB to a flat file and then backing up the flat file. That works well in most cases, except when someone has a database that is bigger than 50% of the disk, or really bigger than around 35% of the disk once you account for system files and a reasonable amount of free space.

I started thinking: mysqldump streams data into a file, and then BackupPC streams that file for backup. So why not cut out the middleman file and stream straight into BackupPC? I've been playing with the backup commands, but I'm getting some unexpected results, which I believe is due to my incomplete understanding of how BackupPC actually streams things.

So my overall question is: does anyone have any clues, hints, or ideas on how to modify the backup command so that it calls either mysqldump or a script that generates a file stream at backup time into the BackupPC tunnel?

So far, using the tar backup method seems to be the best idea. I tried replacing the tar command, but that seemed to confuse the BackupPC server side, so it looks like I'm going to have to add something to the tar command so it streams the file into the BackupPC tar command (as opposed to replacing the tar command).

Lawrence |
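A minimal sketch of the two approaches being contrasted here; the paths and database names are illustrative, and the second command is only the idea being asked about, not a working invocation:

    # current approach: dump to a flat file, then let BackupPC back up the file
    mysqldump --single-transaction --all-databases > /var/backups/all-databases.sql

    # desired approach: skip the intermediate file and feed the dump straight
    # into whatever transport BackupPC uses for this host (the open question)
    mysqldump --single-transaction --all-databases | <BackupPC transfer command>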
From: Sabuj P. <sa...@gm...> - 2013-04-25 21:53:08
|
Does your mysql db live on a unix system? If so, why not use automysqlbackup and just have it dump to your backup system over NFS? |
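A rough sketch of that setup, assuming the backup server exports an NFS share; the hostnames and paths are illustrative, and the config variable name depends on the automysqlbackup release (older versions use BACKUPDIR, newer ones use CONFIG_backup_dir):

    # on the database host: mount the share exported by the backup server
    mount -t nfs backuppc.example.com:/export/dbdumps /mnt/dbdumps

    # point automysqlbackup's backup directory at it in its config file, e.g.
    #   BACKUPDIR="/mnt/dbdumps/dbhost1"
    # then run it from cron as usual
    automysqlbackup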
From: Lord S. <lor...@gm...> - 2013-04-25 22:35:29
|
It is on Linux, yes. That is not out of the question, but for management purposes it would be preferable to do it through the command in BackupPC. It's not just one server; it's dozens and growing, across multiple customers and departments. |
From: Mark R. <mro...@vi...> - 2013-04-25 22:44:27
|
From the sound of things, BackupPC probably isn't the best bet for you. It works at the file level, which won't work for MySQL. That really leaves you with one option for backing up DBs, and that is writing to a flat file. What you may need is some sort of block-level backup, so you can back up directly from the DB without running a dump. I think Zmanda might do what you are looking for. |
From: Lord S. <lor...@gm...> - 2013-04-25 23:09:51
|
I'm aware of Zmanda and several other backup options, but at this time this is what we have and what we are trying to leverage. Perhaps it will turn out that writing to a flat file is the only option. But the nature of the BackupPC commands leads me to believe there is some possibility of linking streams: mysqldump streams into a file, and BackupPC "appears" to stream from the local file across the tunnel. That leads me to believe (and I could of course be wrong due to some caveat I'm simply not aware of yet) that if I could redirect BackupPC to read from a stream rather than a file, I could get it to work. Tar is of course capable of accepting either a stream or a file as input, and mysqldump is capable of outputting to either a stream or a file. I suppose I will just have to play around with it more. |
From: Sabuj P. <sa...@gm...> - 2013-04-26 01:30:23
|
> Tar is of course capable of accepting either a stream or a file as input, and mysqldump is capable of outputting to either. Please show an example of where you can stream data directly into tar. |
From: Les M. <les...@gm...> - 2013-04-26 04:39:53
|
I don't think you are going to find a way to get BackupPC to collect the output stream directly. However, you could use disk space on another machine for the intermediate file copy, using either a pipe over ssh or an NFS mount, and let BackupPC pick it up from there. -- Les Mikesell les...@gm... |
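A sketch of that intermediate-copy idea over ssh; the host name, paths, database name, and use of gzip are illustrative:

    # dump straight over the network into holding space on another box, then
    # let BackupPC back up /holding on that box as an ordinary share
    mysqldump --single-transaction mydb | gzip | ssh holdinghost 'cat > /holding/mydb.sql.gz'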
From: Adam G. <mai...@we...> - 2013-04-26 05:15:46
|
I think the issue is that BackupPC expects tar to provide not just a stream of data, but a list of filenames with the data for each file included. If you pipe the data into tar, I'm not sure tar will be able to know what the filename is.

I was wondering if you could instruct BackupPC to use a custom tar command to back up a named pipe, something like this:

mysqlbackup > somepipe

BackupPC is instructed to back up somepipe using the tar protocol, and tar is instructed to handle the file like a normal file instead of just backing it up as a named pipe. The theory is that BackupPC will see it is backing up a file called "somepipe", but in reality the file contents never exist on disk.

The challenge here is that the backup script "mysqlbackup > somepipe" will not actually complete (because it can't write to the pipe) until the backup is started. Probably you need BackupPC to run a pre-backup script which triggers the mysqlbackup to start running (disconnected/backgrounded so that BackupPC will continue); then BackupPC reads the backup contents and finishes. You would need one "share" per DB, so 50 DBs means 50 shares, or you might be able to have a directory of pipes and have the backup script start the 50 backups in parallel (each one backgrounded, but each one not really starting until the previous one is finished). The other challenge is to make sure tar quits reading from the pipe at the end of the backup output, and also that it doesn't start reading from the next pipe before the backup data starts to be sent there (possibly, depending on when/how tar decides it has finished reading the file/pipe).

IMHO, this *might* work, but it could also be fairly fragile and may have many unintended side effects.

Potentially, a second option would be to use MySQL replication to keep a current copy of all live databases on a 'backup' machine. Then you can simply stop the MySQL server, back up the raw DB files with BackupPC, and then restart the MySQL server. The MySQL server will then catch up from all the remote replication partners and continue. This also gives you a possible source of more up-to-date backup data if some (not all) problem happens on the live DB server.

Regards, Adam -- Adam Goryachev Website Managers www.websitemanagers.com.au |
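A rough sketch of that second option, assuming a dedicated replication slave and that BackupPC's $Conf{DumpPreUserCmd} and $Conf{DumpPostUserCmd} hooks are used to run these commands on it; the service name and data directory vary by distro and are assumptions here:

    # run by the pre-dump hook, just before BackupPC starts the backup
    service mysql stop

    # BackupPC then backs up the raw data directory (e.g. /var/lib/mysql)
    # of the stopped slave as an ordinary file-level share

    # run by the post-dump hook; the slave catches up from the master again
    service mysql start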
From: Lord S. <lor...@gm...> - 2013-04-26 06:24:31
|
At this point I have realized two things: 1) tar accepts a stream of filenames as input, not a data stream; 2) BackupPC expects specifically a tar stream, not just a file stream (with a list of files preceding the data). To that end I have thrown out the idea of using stock tar and have resorted to writing my own Perl script that uses Archive::Tar. So far testing is going well with dummy data, but I don't have it fully working yet. |
From: Arnold K. <ar...@ar...> - 2013-04-26 20:28:00
|
I don't know about your rates, but here in Europe a new 2TB disk costs less than me thinking about and trying to implement anything like this.

However, the idea seems interesting (a hobby isn't always about hourly rates :). Basically you have to send a stream from the client to the server that is a valid tar file. rsync is more for transferring parts, but you want to dump the whole DB every time, and SMB is out for obvious reasons. But as for how to trick BackupPC into thinking the other side is sending a tar file, I have no clue...

So unless you have an academic interest in understanding how things work, it's much cheaper and easier to just push more disk space into the servers concerned. Probably it's just a matter of putting two more disks in the host and then increasing several machines' disks?

Have fun, Arnold |
From: Timothy J M. <tm...@ob...> - 2013-04-26 20:45:23
|
I second this. I usually have a Samba share and an NFS share on my BackupPC box for just this situation (I'm looking at *you*, Microsoft Exchange). I would have the SQL server dump its data via SMB/NFS to BackupPC, and then back up that data using the localhost host on BackupPC. Works fine. The only annoying part is having to copy the data twice (once across the network and once when BackupPC backs it up).

Ideally, I would **LOVE** a mode where BackupPC simply hardlinks the source data right into the pool (obviously, my pool and the shares are on the same partition). If I could do that, there would be *ZERO* downsides to my method! :) In fact, I'd pay for that feature.... Jeff or Holger, any interest? :)

Tim Massey Out of the Box Solutions, Inc. http://www.OutOfTheBoxSolutions.com |
From: <bac...@ko...> - 2013-04-26 21:04:20
|
I don't think BackupPC is the right solution for backing up regularly changing files (like databases) that are 200GB. First, you will likely get very little pooling at the file level, since even a 1-bit change in the DB will require a new pool entry. Second, you will "waste" significant I/O and computation resources writing the DB to a flat file, (optionally) compressing it into the cpool, and then one or more times either comparing it to an existing entry and/or computing checksums.

If anything, you want to be doing block-level pooling, either at the filesystem layer (e.g., ZFS) or by dividing each database manually into smaller, less frequently changing chunks.

If you are indeed talking about files in the 50-200GB range, you are not going to fit more than a handful of files per TB disk... even if you have a RAID array of multiple disks, you are still probably talking about only a small number of files. So you are probably better off writing a simple script that just backs up those few DB files and rotates them if you want to retain several older copies. |
From: Timothy J M. <tm...@ob...> - 2013-04-26 21:25:51
|
Which is something *else* I do (for, say, NTBackup or Windows Server Backup), again by using that NFS or Samba share on my BackupPC server! :) Basically, I often use my BackupPC systems as a combination NAS/backup server. Again, it works very well.

If you're dealing with a bunch of SCSI (really? not SAS?) servers, I assume you have *some* budget. I just spent less than $6,000 for a server with 12 3TB SATA hot-swap hard drives (rated for 24/7 operation, not desktop drives), 2 x Intel Xeon E5 2.4GHz quad-core processors, 16GB RAM, an LSI RAID controller, 4 x GbE, and 2 x 10GbE. Configured for RAID-6 with a hot spare, I still got >24TB of space. That puppy moves >800MB/s to the drives... All for $6k. If I had cut some corners on RAM and CPU (and skipped the extra GbE ports) it would have been a little under $4k.

That's enough space for more than 100 copies of a 200GB database.... And you could still use BackupPC for keeping older copies around over time, even if you don't get much help with pooling...

Tim Massey Out of the Box Solutions, Inc. http://www.OutOfTheBoxSolutions.com |
From: <bac...@ko...> - 2013-04-26 22:28:06
|
My point is that even with O(100) files/copies (which, assuming you are backing up multiple versions, means far fewer distinct files), you may be better off just writing a script... BackupPC is really targeted at backing up large trees of thousands of files. If you just have ~100 large databases, why not just use a cron script that copies each one and appends a datestamp, or stores it in a different folder? It will probably be much faster too... |
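A minimal sketch of that kind of cron job; the database name, paths, schedule, and 14-day retention are illustrative:

    # crontab entry: dump nightly with a datestamp, then prune old copies
    0 2 * * * mysqldump --single-transaction mydb | gzip > /dumps/mydb-$(date +\%F).sql.gz && find /dumps -name 'mydb-*.sql.gz' -mtime +14 -delete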
From: Timothy J M. <tm...@ob...> - 2013-04-27 01:10:56
|
I get your point, though I would ask you to define "better"...

Why not cron? There's no web GUI, it doesn't expire old versions easily over an increasingly long period of time like BPC does ([2,2,2,2], for example), I can't easily start/stop/manage the backup jobs while they're running (short of kill -9), there's no built-in way of archiving this data to removable storage, and I'm already keeping a close eye on my BackupPC server for all of the other backups it's doing. So why build a kludged-together script when I can use the tool I've already GOT?

I agree that it's an environment where many of BackupPC's unique strengths are not used to their advantage. But I already have it for the areas where it *does* excel, and other than being a little slower than a straight-up native rsync, I get *all* those other features for free. And as for speed: I do the localhost backup during the day, when the BackupPC server would be idle anyway, so who cares about the time as long as it completes before my backup window starts in the evening?

Why *would* I use cron instead? Perfect is the enemy of good enough. And BackupPC is *plenty* good enough.

Tim Massey Out of the Box Solutions, Inc. http://www.OutOfTheBoxSolutions.com |
From: Lord S. <lor...@gm...> - 2013-04-26 20:46:33
|
As mentioned, we have multiple customers and departments; it's not just one server. Also, 50GB databases aren't the largest: we have individual DBs of up to 200GB. And we're using SCSI drives, which cost a pretty penny. |
From: Les M. <les...@gm...> - 2013-04-26 21:29:21
|
I think you are missing what people are saying. Throw some big, cheap SATA drives somewhere, on any convenient box (possibly the BackupPC server itself), and write your DB dumps from other hosts there over the network, using a pipe over ssh (e.g. | ssh host 'cat > file'), NFS, or SMB. Then, if you want a history, let BackupPC back up this holding space. However, BackupPC isn't great for storing lots of copies of large files with small differences. Depending on the database contents you may get pretty good compression, but it is still going to store a complete copy of the data for every version, unlike, for example, a large directory of files where only a few small files change between copies and BackupPC can hardlink the duplicates. -- Les Mikesell les...@gm... |
From: <bac...@ko...> - 2013-04-26 22:31:55
|
Precisely... if you just have a few (as in O(100)) large database files, then you don't need all the complexity of BackupPC, which comes at the cost of speed, etc. And the pooling part, which is perhaps the main differentiator of BackupPC, probably won't even come into play at all, since even a 1-bit change to a 50-200GB database will require an entire new copy. I would do what Les suggests in conjunction with compression (if you truly are using mysqldump to output an ASCII flat file) plus a filesystem like ZFS with block de-duplication. BackupPC seems like the wrong nail for the hammer... |
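A rough sketch of that combination; the pool and dataset names are made up, and ZFS deduplication needs a lot of RAM, so treat this as illustrative only:

    # dedicated dataset for the dump area, with compression and block-level dedup
    zfs create -o compression=on -o dedup=on tank/dbdumps

    # nightly dumps land there as plain text; unchanged blocks across days are deduplicated
    mysqldump --single-transaction mydb > /tank/dbdumps/mydb-$(date +%F).sql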
From: Les M. <les...@gm...> - 2013-04-26 22:47:46
|
An interesting (and simple) approach that might work would be to feed the uncompressed DB dump to split (in a unique directory per run) and then look at how much pooling BackupPC can do with the resulting chunks over time. I don't have any idea what the ideal output file size might be, though. -- Les Mikesell les...@gm... |
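A minimal sketch of that idea; the paths are illustrative and the 64MB chunk size is an arbitrary guess, since the thread leaves the ideal size open:

    # one directory per run, so BackupPC can pool chunks that are identical across runs
    dir=/holding/mydb/$(date +%F)
    mkdir -p "$dir"
    mysqldump --single-transaction mydb | split -b 64m - "$dir/mydb.sql.part-"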