From: <swp...@sn...> - 2006-07-28 12:23:46
I'm searching for a Linux backup solution, and flexbackup is almost it!
There are three things that make me continue looking at other solutions:

1) Crashev has asked these questions (twice):

> My question is - how do I fully restore the latest backup? I mean, if my
> system fails today and I want to have the freshest backup restored -
> how do I do it with flexbackup? Should I extract the full backup with
> *.0.* and then overwrite files with the other days' backups, or how does
> it work? The other thing is how to tell apart differential from
> incremental in such a scenario?

These are *very* reasonable questions to be asking of my backup system.
How do I reliably recreate the "last good state"? Any backup system that
can't answer these questions is not up to the job, IMHO.

2) Pablo Godel asked about incremental backups and deleted files.

Basically, if I delete files and make incremental backups, the deletion
of these files is not recorded in the incremental backups. So no, it is
in fact impossible to reliably get back to the last good state.

Also, the FAQ "How do I find out which archive(s) contain a certain file?"
on http://www.edwinh.org/flexbackup/faq.html mentions:

> If you don't know in which archive to find a certain file, look at
> the log files. As long as you have verbose turned on (default), you
> can just 'zgrep filename /var/log/flexbackup/*.gz', and that works
> pretty well.

No, that doesn't work very well. It *sucks*! (Especially if you have to
restore 12342345 files and you don't know what they are.) There seems
to be no way to automatically and reliably restore/extract a backup set.

Adding just a little metadata to the created backups could enable this
functionality: I'm thinking a -search option to find the relevant
archives and a -restore option to extract from the relevant archives.

3) There are three project admins: edwinh, jjreynold and pholcomb.
EdwinH, the only one ever to post to the mailing list, last did so on
2004/01/30 - over two years ago. Basically the project is without
contributors and maintainers.

I'm tempted to take a stab at adding the metadata to the created and
backed-up data and implementing -restore and -search. I've just never
used a tape backup system, only to-disk. Would anybody be willing to
test any such enhancements on a tape drive?

Peter

---------------------------------------------
#!/bin/bash
# I'd like to be able to restore src with the newest file2,
# file3 and file4 in it (and no file1).
# Impossible with current flexbackup, though.

mkdir src

echo line > src/file1
echo line > src/file2
echo line > src/file3
echo line > src/file4
flexbackup -c flexbackup.conf -set test -level 0

rm src/file1
echo anotherline > src/file2
chmod go-rwx src/file3
ls -l src

sleep 60
flexbackup -c flexbackup.conf -set test -level 1

--
Peter Valdemar Mørch
http://www.morch.com
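The restore side of that demo would look something like the sketch below -
assuming, purely for illustration, gzip'd tar archives matching the *.0.*
and *.1.* patterns in a /backups directory; the real archive names and
formats depend on flexbackup.conf.

#!/bin/bash
# Restore-side follow-up to the demo script above. Illustrative only:
# archive names, compression and paths are assumptions, not flexbackup's
# documented output - adjust to whatever your flexbackup.conf produces.
BACKUP_DIR=/backups          # wherever flexbackup.conf sends its archives
mkdir -p restore && cd restore

# Extract the full backup first, then layer the incremental on top.
for archive in "$BACKUP_DIR"/*.0.* "$BACKUP_DIR"/*.1.*; do
    tar -xzf "$archive"
done

# Nothing in either archive records that file1 was deleted before the
# level-1 run, so the restored tree still contains it:
ls -l src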
From: <fle...@wo...> - 2006-08-03 19:05:32
Hi

A couple of comments on your *concerns*:

1. Have you the slightest idea how it's done in reality? The standard
procedure is to restore the full backup plus all the incrementals you
have; if you've invented some other scheme, please let me know (at work
we're currently backing up between 15-20 TB per night).

2. What's worse? Data loss or too much data!!! ;-) It's up to you to
decide that; we prefer to have more data rather than less. The duplicates
can always be weeded out afterwards - the files that no longer exist
anywhere are a lot more difficult to get back.

Regarding finding files: flexbackup is similar here to how EMC Networker,
Veritas NetBackup and HP Data Protector work - they all depend on an
index/internal database over all files to get to them quickly.

3. All help is welcome; the open-source community needs people who do
useful work.

Regarding tape management: privately I just back up to disk, which is a
lot easier and cheaper than buying a tape library for home. (I do have
several DAT exchangers at home, but I'm just not using them.)

--Robert

On Fri 28 July 2006 14:23, Peter Valdemar Mørch wrote:
> I'm searching for a Linux backup solution, and flexbackup is almost it!
> [...]
--
--Robert

Robert Worreby
Birkenweg 82
CH-3123 Belp
http://counter.li.org
From: <swp...@sn...> - 2006-08-04 07:50:20
flexbackup-at-worreby.ch |Lists| wrote:
> 1. Have you the slightest idea how it's done in reality? The standard
> procedure is to restore the full backup plus all the incrementals you
> have; if you've invented some other scheme, please let me know (at work
> we're currently backing up between 15-20 TB per night).

and

> 2. What's worse? Data loss or too much data!!!

Let us first agree that data loss is totally unacceptable.
I'm not arguing that data loss is a good thing. (Am I? Where?)

But for me, "too much data" is *ALSO* unacceptable, because it does not
represent reality. Let me illustrate with a scenario:

* I start with a directory, e.g. under Apache's configuration, that
  has a single file a.conf
* I make a full backup.
* rm a.conf
* Add a file b.conf
* (Notice that at no point in time was there ever more
  than one file in the directory)
* I make an incremental backup.
* Hard disk crash
* Restore the full backup
* Restore the incremental backup

Result with current flexbackup: a directory containing both a.conf AND
b.conf.

Wouldn't you rather end up with a directory containing *exactly* b.conf
- the exact contents of the directory when the incremental backup was
made?

OK, so in a directory where there is supposed to be one single file, you
may be able to remember to delete a.conf (because it is *not* supposed
to be there). Let's just hope I slept well and didn't remove b.conf
instead by accident because I was tired... ;)

But with "between 15-20 TB" of data, as you put it? Who can remember
which of the gazillion files to delete?

No, I don't want that to be a guessing game.

In some (most?) cases, "too much data" is just annoying - not really
catastrophic. But in some cases (e.g. conf.d, cron.d, *.d directories,
or a file containing sensitive data that was deleted) this *is*
catastrophic. I could end up with a system that won't boot, misbehaves
or is dangerous after a full+incremental restore.

I will not trust my system+data to such a backup system. Are *you*
comfortable with that? What is the downside of being able to restore
reliably?

You are right, I don't know "EMC Networker, Veritas Netbackup, HP Data
Protector", but I'm willing to bet they can restore a directory
containing *exactly* b.conf. (Am I right?)

Peter

--
Peter Valdemar Mørch
http://www.morch.com
From: Charlie B. <cha...@e-...> - 2006-08-04 13:51:05
On Fri, 4 Aug 2006, Peter Valdemar Mørch wrote:

> flexbackup-at-worreby.ch |Lists| wrote:
> > 1. Have you the slightest idea how it's done in reality? The standard
> > procedure is to restore the full backup plus all the incrementals you
> > have...

That is standard procedure, unless you have fancy expensive tools.

> Wouldn't you rather end up with a directory containing *exactly* b.conf
> - the exact contents of the directory when the incremental backup was
> made?

Sure, but to create such a thing, you need a considerably more
sophisticated process for creating incremental backups. The 'standard
procedure' backs up all data created or modified since the last full
backup. You only need the metadata of all the current data to do that,
and that's all in the file system.

You want deletion records for all data deleted since the last backup. You
need both old and new metadata to construct that. Only the new metadata
is contained in the file system.

You also need an archive structure which contains deletion records. AFAIK
tar, cpio etc. don't handle that.

> You are right, I don't know "EMC Networker, Veritas Netbackup, HP Data
> Protector", but I'm willing to bet they can restore a directory
> containing *exactly* b.conf. (Am I right?)

I don't know. Is that question relevant to this list? Are you offering to
sponsor the development of the features you find missing?

Or are you complaining because the gift you received isn't exactly what
you want?

--
Charlie
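To make the "old plus new metadata" point concrete, here is a rough
sketch - outside anything flexbackup itself does today - of deriving a
deletion record at backup time from the previous run's file list. The
file names are made up for illustration.

#!/bin/bash
# Sketch only: building the deletion record that plain archives lack.
# Assumes prev.filelist was written by the previous backup run (the old
# metadata) and ./src is the directory being backed up.
set -eu

find src | sort > new.filelist                        # new metadata from the FS
comm -23 prev.filelist new.filelist > deleted.record  # in old list, gone now

# deleted.record lists paths that existed at the previous backup but are
# gone now; a restore tool could replay it after extracting the full
# backup plus incrementals.
cp new.filelist prev.filelist                         # becomes "old" next run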
From: <swp...@sn...> - 2006-08-05 12:21:29
Charlie Brady charlieb-flexbackup-at-e-smith.com |Lists| wrote:
> I don't know. Is that question relevant to this list? Are you offering
> to sponsor the development of the features you find missing?

No, in fact, I'm strongly considering implementing them directly myself! :D
As I wrote in my original post:

> I'm tempted to take a stab at adding the metadata to the created and
> backed-up data and implementing -restore and -search. I've just never
> used a tape backup system, only to-disk. Would anybody be willing to
> test any such enhancements on a tape drive?

To business:

Charlie Brady charlieb-flexbackup-at-e-smith.com |Lists| wrote:
> to create such a thing, you need a considerably more
> sophisticated process for creating incremental backups.

I'm thinking that when creating a backup - incremental, differential or
otherwise - all that needs to be stored is the full current "list of
files" that exist on disk within the backup set's directories.

From that, after restoring either a full backup set, a
full+incremental+incremental, or whatever, the "list of files" from the
last restored set (incremental or full) will contain exactly which files
should be present. Then one can (offer to) remove the files that are not
in the "list of files", or just present a list of files suggested for
deletion.

It should be really easy! (Well, we'll see... :D)

From the posts in this thread I'm aware that this new functionality maybe
shouldn't be the default. But I need it, so I think I'll introduce it as
optional behavior.

Peter

--
Peter Valdemar Mørch
http://www.morch.com
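A rough sketch of that idea - the manifest names and paths below are
illustrative assumptions, not anything flexbackup writes today:

#!/bin/bash
# Sketch of the proposed optional behaviour; names and paths are
# illustrative assumptions, not existing flexbackup output.
set -eu

# At backup time (any level): save the complete "list of files" for the
# set's directories next to the archive.
find src | sort > backups/test.1.filelist

# At restore time, after extracting the full backup plus all incrementals
# in order: compare what is now on disk with the newest recorded list.
find src | sort > restored.filelist
comm -23 restored.filelist backups/test.1.filelist > suggested-deletions.txt

# suggested-deletions.txt holds files that came back from older backups
# but did not exist when the newest backup ran; offer them for deletion
# rather than removing them silently.
cat suggested-deletions.txt

A -search option could then simply grep these stored lists to find which
backup levels last contained a given file, and -restore could run the
comparison itself and prompt before deleting anything.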
From: <fle...@wo...> - 2006-08-05 18:12:37
Hi

I didn't mean to be rude either; it's just that a backup system that
really works the way it "should" just doesn't exist currently (to my
knowledge).

For your example (a.conf, b.conf) you need some file system support to be
able to replicate your changes to another system/backup. Currently
there's no method/software that I know of that supports
Windows/Unix/Linux and is in common practice (otherwise it's difficult to
use in an enterprise environment, due to the sheer number of servers and
different OSes).

What you are searching for is maybe FAM (File Alteration Monitor)?
http://oss.sgi.com/projects/fam/faq.html
and the use of a FAM mirror:
http://www.linuxfocus.org/common/src/article199/fam_mirror

Unless you set up something like that, you're not getting any solution to
your a.conf/b.conf problem; the "old" methods of
full/incremental/differential backups just don't provide that
functionality. Nor is a tape backup system flexible enough, or fast
enough, to support it.

> I will not trust my system+data to such a backup system. Are *you*
> comfortable with that? What is the downside of being able to restore
> reliably?

Rather too much than too little data protection is what keeps an
enterprise data center running...

> You are right, I don't know "EMC Networker, Veritas Netbackup, HP Data
> Protector", but I'm willing to bet they can restore a directory
> containing *exactly* b.conf. (Am I right?)

They can, if you really have a full backup from just before the
crash/logical accident happens. For incrementals it's the same: restore
full+incrementals and get all those a.conf's as well ;-(

--Robert

On Fri 4 August 2006 09:50, Peter Valdemar Mørch wrote:
> flexbackup-at-worreby.ch |Lists| wrote:
> > 1. Have you the slightest idea how it's done in reality?
> [...]
--
--Robert

Robert Worreby
Birkenweg 82
CH-3123 Belp
http://counter.li.org
From: <swp...@sn...> - 2006-08-05 12:03:28
I was wondering why this thread had developed such a harsh tone. And lo
and behold, as so often when one searches for the source of hostility,
the finger points at - well, myself!

I was so happy when I found flexbackup, and so sad when I discovered that
it had what I characterize as shortcomings.

I apologize for my original post's subject, "..but I have to reject it",
and further down I wrote:

> No, that doesn't work very well. It *sucks*!

There was no need for me to use language like that.

In truth, I'm not really hostile towards this project. It looks great,
but has a single shortcoming that I want to fix.

I still want to be constructive and contribute to this great project.

I hope you will accept my apology.

Regards,

Peter

--
Peter Valdemar Mørch
http://www.morch.com