mbackup-devel Mailing List for Midnight Backup (Page 3)
Status: Alpha
Brought to you by:
jo2y
| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2000 | | | (1) | | (16) | (6) | (5) | (19) | (1) | (1) | (2) | (1) |
| 2001 | (3) | | | (7) | | | | | | (1) | | (2) |
| 2002 | (4) | (4) | | | | | | | | | | |
| 2009 | | | | | | | | | | | (1) | |
|
From: <ra...@Te...> - 2000-07-06 07:44:35
|
Hi! It's nice to see some movement in free backup software. One thing I have always missed in every backup program I've seen is some kind of host/subpath-independent backup. Think of users' home directories in a larger networked environment: home dirs may be placed on different hosts, and on different disks within a single host. As requirements change, $HOMEs may be moved between disks or hosts. With a conventional backup system, a complete full dump of a moved home directory is required, and in case one wants to restore some older data, one has to remember the old location of that directory. This can be solved by using virtual locations. Other valuable uses might be ftp or web data areas, or data locations in general.

Another point that directly affects the above idea (in the case of a file system backup, as opposed to a database backup) is: what is the file? Is it the data behind the inode, or the file name? If it's the inode, what should be done in case the backup set has been moved to a new location (or restored on a new disk)? And should the data be backed up when the file name or permissions/ownership have changed?

m.
--
Matthias Rabe ra...@te... +49-521/5251-123
OWL-Online GmbH & Co KG Feilenstr. 31 D-33602 Bielefeld |
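One way to picture the virtual-location idea is a small table mapping a stable logical name to wherever the data currently lives. A minimal sketch in C follows; all identifiers and paths here are hypothetical, not part of mbackup:

    #include <string.h>

    /* Sketch of a virtual-location table: the backup index records only
     * the stable logical name; this table knows the current physical
     * path. Moving a home directory then means updating the table, not
     * taking a new full dump. */
    struct vloc {
        const char *logical;    /* name recorded in the backup index */
        const char *physical;   /* host:/path where the data lives now */
    };

    static struct vloc vtab[] = {
        { "home/rabe", "fs1:/disk2/home/rabe" },
        { "www/intra", "web0:/var/www/intra"  },
        { NULL, NULL }
    };

    /* Resolve a logical location to its current physical path. */
    static const char *vloc_resolve(const char *logical)
    {
        struct vloc *v;
        for (v = vtab; v->logical != NULL; v++)
            if (strcmp(v->logical, logical) == 0)
                return v->physical;
        return NULL;    /* unknown logical name */
    }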
|
From: James O'K. <jo...@mi...> - 2000-06-30 20:54:29
|
Hi, I'm hoping to make a snapshot release today or tomorrow, and I hope to have some survey questions ready for people who visit the site. I'd appreciate it if everyone who gets this mail could take 5 minutes right now and send me 5 questions that come to mind that we might ask someone. The basic thrust of the questioning will be: why are you using what you're using, and what makes it so great? I'm trying to find out what it will take to convert people to our side. You can email me directly. thanks -james |
|
From: James O'K. <jo...@mi...> - 2000-06-25 02:03:23
|
I did some more hacking on the tar module last night. I now have it so
that it merges the input files into a single file. GNU tar will untar it
but it seems that GNU tar assumes something about the ordering of the
files and directories inside the archive. Given a directory like this:
dir1/
    file1
    file2
    dir2/
        file3
        file4
dir3/
    file5
    file6
file7
file8
GNU tar expects the files to be in this order: file7, file8, dir1, file1,
file2, dir2, file3, file4, dir3, file5, file6 (files then directories)
Because of the way I walk the directory tree, I get them in this order:
dir1, dir2, file3, file4, file1, file2, dir3, file5, file6, file7, file8
(directories, then files)
When I GNU un-tar something I created with my tar, it only extracts:
dir1/
    dir2/
dir3/
    file5
    file6
What I'd really like is for someone else to take another look at the
plain_reader.c module and rework its logic so that it returns the
directories in an order better suited to tar. If anyone
volunteers I can give them some help. Tell your friends. Tell your family.
Use it as an excuse to talk to that cute redhead in your calculus class.
-james
|
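The essential constraint seems to be that a directory's entry appear in the archive before the files inside it, which a pre-order walk guarantees. A sketch of that traversal, written against a hypothetical emit() callback rather than the actual plain_reader.c interface:

    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>

    /* Hypothetical callback: hand one path to the filter chain. */
    extern void emit(const char *path);

    /* Pre-order walk: emit the directory itself first, then its
     * contents, recursing into subdirectories as they are met, so
     * every directory precedes the files inside it. */
    static void walk(const char *path)
    {
        DIR *d;
        struct dirent *e;
        struct stat st;
        char child[1024];

        emit(path);                     /* directory before its children */
        if ((d = opendir(path)) == NULL)
            return;
        while ((e = readdir(d)) != NULL) {
            if (!strcmp(e->d_name, ".") || !strcmp(e->d_name, ".."))
                continue;
            snprintf(child, sizeof child, "%s/%s", path, e->d_name);
            if (stat(child, &st) == 0) {
                if (S_ISDIR(st.st_mode))
                    walk(child);        /* recurse: dir before its files */
                else
                    emit(child);
            }
        }
        closedir(d);
    }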
|
From: James O'K. <jo...@mi...> - 2000-06-25 01:42:21
|
The example that I saw was of an NNTP server. This makes me think it would be a good thing for the network module. The other use would be to solve my problem with how to keep mbclient, mbserver, and mbrestore all links to the same program. It has worked well with mbclient and mbserver because they do the exact same thing, just with different config files. mbrestore is slightly different in that it does the same thing, but in reverse.

I still haven't decided how to keep track of the filters that a file passes through. One way is to generate a file that lists all the filters and add that as the first file to be backed up. Call it .mbackup.filters. This still causes problems in how to get that file off of the backup media. I think the answer might be that we just read the mbclient config and reverse it. This leads to the problem of how to backup and restore _that_ information. This all just leads to circular dependencies.

But. Back to state machines. In the normal process of backups, a filter can return a few different values: FILTER_OK, NO_BACKUP, and KEEPING. FILTER_OK means that everything went well and the file has been filtered. NO_BACKUP means that the filter decided that this file does not need to be backed up, usually because there is already a current copy on the server. KEEPING means that the filter is going to keep this file and merge it with the next file. If we get FILTER_OK we go on to the next filter; with the other two we start over from the beginning of the filter list. With that description it seems to me that we have different states. -james |
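That return-code handling reduces to a small dispatch loop. A minimal sketch, assuming a function-pointer chain and one reading of the restart rule (NO_BACKUP/KEEPING end the current file's trip, so the next file starts over at the head); the FILTER_* names are from the message above, everything else is hypothetical:

    struct file_tag;    /* opaque here; defined elsewhere in mbackup */

    enum filter_status { FILTER_OK, NO_BACKUP, KEEPING };

    /* Hypothetical filter entry point: examine/transform one file. */
    typedef enum filter_status (*filter_fn)(struct file_tag *tag);

    /* Send one file down the chain. FILTER_OK advances to the next
     * filter; NO_BACKUP and KEEPING finish this file, so the driver
     * starts the next file over from the head of the list. */
    static void run_filters(struct file_tag *tag, filter_fn *chain, int n)
    {
        int i;
        for (i = 0; i < n; i++) {
            enum filter_status s = chain[i](tag);
            if (s == NO_BACKUP || s == KEEPING)
                return;     /* done; next file restarts at chain[0] */
            /* FILTER_OK: fall through to the next filter */
        }
    }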
|
From: James O'K. <jo...@mi...> - 2000-06-23 15:54:47
|
I just checked in some code for a working tar filter module. I ran a backup cycle and then untarred the created files with GNU tar, and things worked. :) I haven't tried it yet, but if you set up the tar and then the bzip modules, you should get a mirror of the start path with all the regular files changed to .tar.bz2. Please give this a try; I need some feedback on whether my directions are clear. My next step is to work out the logic regarding splitting and joining files and passing that information between modules. -james |
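The tar-then-bzip chaining might look something like this in a client config. The syntax below is purely illustrative and not the actual mbclient config format; plain_reader, tar, and bzip are the modules mentioned on this list, file_writer is a made-up name:

    # Hypothetical mbclient config: filters run top to bottom, so each
    # regular file under the start path becomes foo.tar, then foo.tar.bz2.
    filter plain_reader   start_path=/home
    filter tar
    filter bzip
    filter file_writer    dest=/backup/mirror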
|
From: Known H. N. R. <ni...@in...> - 2000-06-22 01:24:02
|
>Has anyone worked with or written a state machine? I've just learned they
>exist and it seems like they would make things easier to implement and
>extend. I'm looking for suggestions or guidance before I start putting
>anything into code.
>
>BTW: I'm at YAPC (www.yapc.org/America) if anyone else is.

State machines are coolio; it's also what OpenGL uses. So yes, I've worked with state machines, although I'm not sure how they would apply to mbackup.

as always,
nick
ni...@gr... * http://www.fargus.net/nick
Developer - Systems Engineer - Mad System Guru - MOO Sales
Keep on GRAWK'n! |
|
From: James O'K. <jo...@mi...> - 2000-06-21 23:47:16
|
Has anyone worked with or written a state machine? I've just learned they exist and it seems like they would make things easier to implement and extend. I'm looking for suggestions or guidance before I start putting anything into code. BTW: I'm at YAPC (www.yapc.org/America) if anyone else is. -james ps. I think I'm almost done with a tar_filter.so module. |
|
From: James O'K. <jo...@mi...> - 2000-05-29 16:51:51
|
Clean up anything you're working on; I'm going to try to make a snapshot release on June 1st sometime. I'm also going to try to finish the tar module and start working on a restore path. -james |
|
From: James O'K. <jo...@mi...> - 2000-05-23 18:21:04
|
I think I've finished the merger between the reading, writing, and filter modules. They all now work as filters. I'm still thinking about the dialog between filters and the main program to express that each filter has finished everything. Right now, a file_tag object is passed to each filter in the list. It makes sense that the first one would be doing reading, but that's not always true. It could be that the first 3 filters are actually reading or adding files to be filtered.

I'm thinking that we should add a FILTER_DONE return code that a filter can return when it is finished with everything it has pending. A filter that is reading files from disk would only return this after it has read all the files. The bzip filter would return this with each file, because it only works on one file at a time. Then, when all filters return FILTER_DONE in one pass, we can assume that things are done and exit the loop. If no one sees a flaw in that, I'll implement it. -james |
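The termination rule sketched above -- loop until every filter reports FILTER_DONE in the same pass -- could look like this. FILTER_DONE and the other return codes are from this list; the flush-with-NULL convention and the rest of the names are assumptions:

    struct file_tag;    /* opaque; defined elsewhere in mbackup */

    enum filter_status { FILTER_OK, NO_BACKUP, KEEPING, FILTER_DONE };

    typedef enum filter_status (*filter_fn)(struct file_tag *tag);

    /* Drain the chain at end of run: keep cycling until every filter
     * reports FILTER_DONE in one pass, meaning no one has pending work
     * (the reader is out of files, bzip has nothing buffered, etc.). */
    static void drain(filter_fn *chain, int n)
    {
        int i, done;
        do {
            done = 0;
            for (i = 0; i < n; i++)
                if (chain[i](NULL) == FILTER_DONE)  /* NULL: no new file, just flush */
                    done++;
        } while (done < n);
    }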
|
From: Known H. N. R. <ni...@in...> - 2000-05-18 02:32:23
|
>I'm looking at the source to gnu tar to see if there is a libtar. It seems
>there is one, but it's not well documented on how to use it. I also looked
>at the perl module and it seems that they do everything within perl. This
>would suggest that it's common for people to just reimplement tar by
>following the file format (found in tar.h).
>The way I envision the module working is this.
>
>When it gets the first file to filter, it creates an archive and adds that
>file to it, and passes the work so far to the next module.
>When it gets the next file it acts as if it's appending it to the previous
>file and passes that along to the next filter.
>At some point it needs to end an archive and start a new one. The best
>time would be at the 2gig limit which is usually the filesize limit, but
>this should be configurable.
>
>We'll need to modify file_tag to keep track of when a file is inside some
>encoding. The current filtered_name won't do for very long. As a
>reminder, the way the bzip filter works is it bzips the file and
>changes filtered_name to be foo.bz2. That works fine if everything
>is kept as small file size chunks. If we start globbing files into bigger
>units, we need a new way to encode this so that the indexing module can
>label the data.
Makes sense.
My thought was to use GNU tar or whatever directly as a pipe from
a file backend, since it knows how to do tar in all its complexities
and thus relieves us from having to reinvent the wheel.
as always,
nick
ni...@gr... * http://www.fargus.net/nick
Developer - Systems Engineer - Mad System Guru - MOO Sales
Keep on GRAWK'n!
|
|
From: James O'K. <jo...@mi...> - 2000-05-18 01:44:55
|
I'm looking at the source to GNU tar to see if there is a libtar. It seems there is one, but it's not well documented how to use it. I also looked at the perl module, and it seems that they do everything within perl. This would suggest that it's common for people to just reimplement tar by following the file format (found in tar.h).

The way I envision the module working is this. When it gets the first file to filter, it creates an archive, adds that file to it, and passes the work so far to the next module. When it gets the next file, it acts as if it's appending it to the previous file and passes that along to the next filter. At some point it needs to end an archive and start a new one. The best time would be at the 2 gig limit, which is usually the filesize limit, but this should be configurable.

We'll need to modify file_tag to keep track of when a file is inside some encoding. The current filtered_name won't do for very long. As a reminder, the way the bzip filter works is it bzips the file and changes filtered_name to be foo.bz2. That works fine if everything is kept as small file-size chunks. If we start globbing files into bigger units, we need a new way to encode this so that the indexing module can label the data. Does that make sense? -james |
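Following the file format from tar.h as described, the per-file header and the roll-over at the size limit might look like this. A sketch only: the data blocks, the closing zero blocks, and error handling are omitted, and the function names and archive naming are made up:

    #include <stdio.h>
    #include <string.h>

    #define BLOCK    512
    #define SPLIT_AT 2147483647L   /* ~2 gig, the usual filesize limit */

    /* POSIX ustar header: one 512-byte block written before each
     * file's data (field layout as in tar.h). */
    struct ustar {
        char name[100], mode[8], uid[8], gid[8];
        char size[12], mtime[12], chksum[8], typeflag;
        char linkname[100], magic[6], version[2];
        char uname[32], gname[32], devmajor[8], devminor[8];
        char prefix[155], pad[12];  /* pad to exactly 512 bytes */
    };

    /* Header checksum: sum of all 512 bytes with the chksum field
     * itself counted as spaces, stored as six octal digits. */
    static void ustar_checksum(struct ustar *h)
    {
        unsigned char *p = (unsigned char *)h;
        unsigned sum = 0;
        int i;
        memset(h->chksum, ' ', sizeof h->chksum);
        for (i = 0; i < BLOCK; i++)
            sum += p[i];
        snprintf(h->chksum, sizeof h->chksum, "%06o", sum);
    }

    /* Before appending the next file, end the current archive and
     * start a new one if the entry would pass the split limit. */
    static FILE *maybe_roll(FILE *out, long *written, long next_size)
    {
        if (*written + BLOCK + next_size > SPLIT_AT) {
            fclose(out);                        /* trailer omitted */
            out = fopen("backup.2.tar", "wb");  /* hypothetical name */
            *written = 0;
        }
        return out;
    }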
|
From: Known H. N. R. <ni...@in...> - 2000-05-09 21:35:25
|
>On Tue, 9 May 2000, Known Human Nick Rusnov wrote:
>>
>> Storageexchange? Maybe .. is mbackup not appealing enough on its own?
>
>It could be, I was starting to brainstorm what could be the next step.
At this point I would recommend keeping storageexchange as one ultimate
backend (much like "to CD" and "to tape", "to ddv").
as always,
nick
ni...@gr... * http://www.fargus.net/nick
Developer - Systems Engineer - Mad System Guru - MOO Sales
Keep on GRAWK'n!
|
|
From: James O'K. <jo...@mi...> - 2000-05-09 21:33:46
|
On Tue, 9 May 2000, Known Human Nick Rusnov wrote: > > Storageexchange? Maybe .. is mbackup not appealing enough on its own? It could be, I was starting to brainstorm what could be the next step. |
|
From: Known H. N. R. <ni...@in...> - 2000-05-09 21:29:42
|
>Has anyone looked at this?
>http://www.redhat.com/about/ventures.html
>
>They are looking for open source projects to fund and help make into
>businesses. Anyone interested in working on a proposal for this? Nick,
>maybe we could integrate that idea from the webpage whose name I can't remember?

Storageexchange? Maybe .. is mbackup not appealing enough on its own?

as always,
nick
ni...@gr... * http://www.fargus.net/nick
Developer - Systems Engineer - Mad System Guru - MOO Sales
Keep on GRAWK'n! |
|
From: James O'K. <jo...@mi...> - 2000-05-09 20:20:18
|
Has anyone looked at this? http://www.redhat.com/about/ventures.html They are looking for open source projects to fund and help make into businesses. Anyone interested in working on a proposal for this? Nick, maybe we could integrate that idea from the webpage whose name I can't remember? -james |
|
From: James O'K. <jo...@mi...> - 2000-05-08 17:32:33
|
I was playing around with Cygnus' tools, and it seems that they provide everything needed to port mbackup to win32 without changing the .c files much. There is some Makefile work that I need to do, and I needed to compile a few libs that didn't come by default, but it looks like it will be possible without having to learn MFC or many Microsoftisms. :) If anyone is interested, I could write up some directions. -james |
|
From: Known H. N. R. <ni...@in...> - 2000-05-05 07:13:46
|
>
>filter1 -- Data is read from some source (disk, network, etc)
>filter2 -- Add a metadata entry to the local index
>filter3 -- write the data to tape, with a copy of the metadata as well.
>
>So each filter gets a chance to look at the file and the metadata and can
>do with it as it pleases.
>
>However,
>
>This makes me think that maybe we need a facility for modules to schedule
>functions to be called when certain events happen, such as the last file
>has been processed. This would be where filter2 above would request that
>its index be written to tape as well.
>
>Keep in mind that sometimes we might be writing to CDR or some other
>random access media where knowing the permissions and pathnames for a file
>would be needed, which is why the filter that writes the data needs the
>metadata as well.
I would highly suggest a tar output. Tar is an industry standard, and blah
blah blah. You know all the good reasons -- chief among them the ability
to restore (reasonably) /without/ mbackup.
Seems like tar would be a good intermediate format before the final output
filter, like writing a tar file to cdr or tape.
as always,
nick
ni...@gr... * http://www.fargus.net/nick
Developer - Systems Engineer - Mad System Guru - MOO Sales
Keep on GRAWK'n!
|
|
From: James O'K. <jo...@mi...> - 2000-05-05 01:48:29
|
On Thu, 4 May 2000, Known Human Nick Rusnov wrote:
> Oh, another feature I would suggest would be the ability to backup the meta
> data as part of the backup in addition to keeping it locally (for future
> backups).

Since everyone else is doing this, I don't think we can get away without doing it. :) However, it might be done as a matter of suggestion and policy to module writers. The way I envision things like this happening (data flows from top to bottom in this list; non-related things deleted):

filter1 -- Data is read from some source (disk, network, etc)
filter2 -- Add a metadata entry to the local index
filter3 -- Write the data to tape, with a copy of the metadata as well.

So each filter gets a chance to look at the file and the metadata and can do with it as it pleases. However, this makes me think that maybe we need a facility for modules to schedule functions to be called when certain events happen, such as when the last file has been processed. This would be where filter2 above would request that its index be written to tape as well.

Keep in mind that sometimes we might be writing to CDR or some other random-access media where knowing the permissions and pathnames for a file would be needed, which is why the filter that writes the data needs the metadata as well. -james

Also, don't cc me and the list if you are mailing me. :) |
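The event facility suggested above could start as a small hook table. A sketch, with the event names and the registration API entirely made up:

    /* Hypothetical event hooks: a module registers a callback during
     * its init; the core fires every registered callback when the
     * event happens (e.g. the index filter flushing its index to tape
     * once the last file has gone by). */
    enum mb_event { EV_LAST_FILE, EV_MEDIA_FULL, EV_NEVENTS };

    typedef void (*mb_handler)(void *ctx);

    #define MAX_HOOKS 16
    static struct { mb_handler fn; void *ctx; } hooks[EV_NEVENTS][MAX_HOOKS];
    static int nhooks[EV_NEVENTS];

    /* Register: returns 0 on success, -1 if the table is full. */
    static int mb_on(enum mb_event ev, mb_handler fn, void *ctx)
    {
        if (nhooks[ev] >= MAX_HOOKS)
            return -1;
        hooks[ev][nhooks[ev]].fn = fn;
        hooks[ev][nhooks[ev]].ctx = ctx;
        nhooks[ev]++;
        return 0;
    }

    /* Fire: called by the core when the event occurs. */
    static void mb_fire(enum mb_event ev)
    {
        int i;
        for (i = 0; i < nhooks[ev]; i++)
            hooks[ev][i].fn(hooks[ev][i].ctx);
    }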
|
From: James O'K. <jo...@mi...> - 2000-05-05 01:33:17
|
I'm going to merge the reader/writer/filter modules into one with an interface that looks like the current 'filter' module. I can imagine a scenario where you would want to write things to several places, so the filter model is better suited. This will also force me to work out the bugs in the multiple-filter code. This should leave the following as planned/implemented modules:

config /* I'm rethinking this one */
logging
filter
userinterface

-james |
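A unified interface like that might reduce to a single table of function pointers that every module fills in. A sketch, not the actual mbackup definitions:

    struct file_tag;    /* opaque; defined elsewhere in mbackup */

    /* Hypothetical unified module interface: readers, writers, and
     * transforms all present the same face, so any module can sit at
     * any position in the chain, and several writers can run at once. */
    struct mb_filter {
        const char *name;
        int  (*init)(const char *config);       /* parse module config   */
        int  (*filter)(struct file_tag *tag);   /* handle/produce a file */
        void (*cleanup)(void);                  /* flush pending state   */
    };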
|
From: Known H. N. R. <ni...@in...> - 2000-05-05 01:25:59
|
Oh, another feature I would suggest would be the ability to backup the meta
data as part of the backup in addition to keeping it locally (for future
backups).
In addition to that, it'd be nice to have multiple sets of metadata on one
backup server, so that the same server could be used to back up several
disparate systems.
as always,
nick
ni...@gr... * http://www.fargus.net/nick
Developer - Systems Engineer - Mad System Guru - MOO Sales
Keep on GRAWK'n!
|
|
From: Known H. N. R. <ni...@in...> - 2000-05-05 01:22:51
|
>As things start settling down at my new job, I hope to spend at least a
>few hours a day working on mbackup code. In that light, I'd like to
>propose that we brainstorm the features that we individually are looking
>for, and then we can prioritize them and make a plan on implementing them.
>As a first goal, I'd like to start working on getting a restore working to
>the level that backups currently (don't) work, and then making another
>release and a first announcement on freshmeat in hope of bringing in a
>few more developers.
If this is what you want, the first features I would suggest are
getting a tar backend, a gz filter, and some metadata (tables of
checksums and dates to keep track of what versions of files have
been saved).
These are my contributions to the brainstorming.
enjoy
as always,
nick
ni...@gr... * http://www.fargus.net/nick
Developer - Systems Engineer - Mad System Guru - MOO Sales
Keep on GRAWK'n!
|
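The metadata tables suggested above might start as little more than one record per file. A sketch; the field choices are assumptions:

    #include <time.h>

    /* One row of a hypothetical metadata table: enough to decide
     * whether a file has changed since it was last saved. */
    struct mb_meta {
        char          path[1024];   /* file name as seen at backup time */
        unsigned char md5[16];      /* checksum of the file contents    */
        time_t        mtime;        /* modification time when saved     */
        unsigned long size;         /* size in bytes when saved         */
        int           version;      /* how many versions we have kept   */
    };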
|
From: James O'K. <jo...@mi...> - 2000-05-05 01:15:28
|
As things start settling down at my new job, I hope to spend at least a few hours a day working on mbackup code. In that light, I'd like to propose that we brainstorm the features that we individually are looking for; then we can prioritize them and make a plan for implementing them. As a first goal, I'd like to start working on getting restore working to the level that backups currently (don't) work, and then make another release and a first announcement on freshmeat in the hope of bringing in a few more developers. -james |
|
From: James O'K. <jo...@mi...> - 2000-05-05 01:07:03
|
I just built and have almost finished configuring a machine for use as a testbed for development. In some areas it might not be the best machine, but it certainly has enough for what we need.

PPro 200MHz
128M of RAM
2 gig disk for the system
2 gig disk for a holding device and temp area
6-tape internal DDS-2 tape changer
6-tape external DDS-3 tape changer
8x CDROM
2x/4x CDR

Right now, those are all on one 2940UW SCSI card. As needed I'll add more disk and a second or third SCSI card, but for the next few months I expect that will be enough. The board is also dual-ready, so we *could* add a second CPU if anyone could find a PPro. When you need an account on the machine for testing, let me know. More messages to follow... -james |
|
From: James O'K. <jo...@mi...> - 2000-03-15 22:30:15
|
I'm in the midst of ending one job and starting another, so I haven't had time to do any coding on this for the past 10 days or so. Am I holding anyone back from anything? -james |