From: David B. <db...@si...> - 2006-02-28 14:26:08
> > Suppose the normal FD established a known Unix-domain socket/named
> > pipe as part of its normal operation. The bpipe gadget I suggested
> > could connect to that Unix-domain socket, pass it the metadata from
> > the command line as the first line, then connect as a console and
> > submit a type=FIFO job specifying the named pipe as the
> > source/destination. If Bacula saw a backup job with the type FIFO,
> > it would look for the metadata and use that as the file entry in
> > the database, with the "file" data coming over the socket until EOF.

Slight refinement (after about 5 more cups of coffee):

If the FD provided a "well-known" Unix-domain socket, then we define a
type=Application job that always reads/writes the well-known socket only
(ignoring the file tree walk code in the current FD). Bpipe could take the
metadata supplied on the command line, connect to the FD socket, write the
metadata into the socket as the first line, and then connect to the director
and submit a job template with type=Application. Bpipe would then copy stdin
to the socket until EOF on bpipe's stdin.

Bacula would see the type=Application job type and know to read its data from
the well-known socket. It would pick up the metadata record from the socket,
construct the correct SD and database transactions using that metadata, and
then data transfer would occur normally.

Restores of a type=Application job would operate similarly. "bpipe -r" would
connect to the FD's well-known socket, connect to the director, and then
submit a restore job specifying the metadata of the "file" to be restored.
Bacula would know that type=Application jobs only do I/O to the socket, so
the data would be retrieved and written to the socket. Bpipe would copy the
data to its stdout until EOF of the stored file.
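The backup half of the bpipe idea could be sketched roughly as follows. This
is only an illustration of the proposal, not existing Bacula code; the socket
path, the "metadata as first line" framing, and the function name are all
assumptions, and the console/director submission step is omitted.

```python
import socket

# Assumed well-known FD socket path -- not an existing Bacula interface.
FD_SOCKET = "/var/run/bacula-fd.sock"

def bpipe_backup(metadata, source, fd_socket=FD_SOCKET):
    """Send the metadata as the first line, then copy `source` (normally
    sys.stdin.buffer) into the FD's Unix-domain socket until EOF."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(fd_socket)
        s.sendall(metadata.encode() + b"\n")   # metadata record first
        while True:
            chunk = source.read(65536)
            if not chunk:
                break                          # EOF on bpipe's stdin
            s.sendall(chunk)
        s.shutdown(socket.SHUT_WR)             # signal end of job data
```

A real bpipe would, between connecting and streaming, also contact the
director as a console and submit the type=Application job template; that part
depends on details this thread has not settled.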
The application attached to the stdout of "bpipe -r" would be responsible for
understanding the contents of the blob and doing something appropriate with
it.

> This is a good idea, and perhaps I could extend the current
> FIFO kludge to include Unix sockets, which in some respects
> are easier to work with.

It also allows pretty much arbitrary support for storing specialized files or
filesystems without hardcoding support for them into Bacula, as long as there
is an application that understands how to deal with them and can produce a
stream-friendly file as output. For example, this would make my VM/CMS backup
client trivial to implement -- I could use existing VM utilities to package
CMS file structures, with all their complex attributes, into VMARC archives,
and then just let Bacula store the VMARC blobs via a simple socket app and a
copy of netcat into bpipe.

> Given my current list of projects (mainly Migration), I
> wasn't thinking of anything as sexy as what you have
> described.

I agree that Migration comes first. 8-) I'm trying to organize some paying
support for copypools, so we can accelerate that goodness.

> My idea was more that a user could write a script
> that interfaces with the console and the Director and by
> using appropriate FileSet inclusion, the user could feed a
> predefined backup job an appropriate filename (currently for
> a FIFO) to be backed up. The script would then start a
> Bacula job (perhaps slightly delayed) and begin writing into
> the FIFO. The job would run, read the data from the FIFO, and
> write it to the Volume. The inverse would happen for the restore.
>
> Now that I have a patch for the FIFO bug, the above is
> immediately a possibility without changing any Bacula code,
> and I think it accomplishes what you want with just a small
> difference of details ...

OK, I'll tinker with this a bit.
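The FIFO-based approach described above (script starts a predefined job, then
writes into the FIFO named in the FileSet) could be sketched like this. The
FIFO path, the dump command, and the console invocation are placeholders, and
the exact job/FileSet configuration is left to the reader; this is a sketch of
the mechanism, not a tested Bacula setup.

```python
import os
import subprocess

def backup_via_fifo(fifo_path, source_cmd, start_job_cmd):
    """Ensure the FIFO named in the FileSet exists, kick off the predefined
    Bacula job, then stream the application's output into the FIFO.  The FD
    reads the FIFO as the "file" being backed up."""
    if not os.path.exists(fifo_path):
        os.mkfifo(fifo_path)
    # Start the backup job (e.g. via bconsole); it may begin slightly delayed.
    job = subprocess.Popen(start_job_cmd, shell=True)
    # open() blocks here until the FD opens the FIFO for reading.
    with open(fifo_path, "wb") as fifo:
        dump = subprocess.Popen(source_cmd, shell=True,
                                stdout=subprocess.PIPE)
        while True:
            chunk = dump.stdout.read(65536)
            if not chunk:
                break          # EOF from the application ends the backup data
            fifo.write(chunk)
    dump.wait()
    job.wait()
```

Usage might look like
`backup_via_fifo("/var/bacula/app.fifo", "mysqldump mydb", 'echo "run job=AppBackup yes" | bconsole')`,
where the job name and bconsole invocation are purely illustrative. The
restore direction would be the mirror image: the script reads the FIFO and
pipes it into whatever application reconstructs the original data.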