Here is the latest release, with new features such as listing the DB contents and a more user-friendly restoration process.
Your old lists will be converted to a slightly different format the first time you back up with this version. Do NOT run an older version of HBackup on the converted list!
The -l option now lists the DB contents, and can be combined with the -C <client> and -P <path> options.
Also, the list format has changed, allowing clients to be referred to by name without a confusing 'file://', 'smb://' or 'nfs://' prefix. Dual-boot DOS/UNIX clients are dealt with transparently.
A conversion function will convert your DB list to the new format automatically.
This one includes support for expiring long-removed data, and better command-line handling thanks to TCLAP.
Also, Valgrind helped remove a few memory leaks.
Enjoy!
Expiration of data removed long ago is now enabled.
Preliminary tests show correct behaviour, but don't try it on data you care a lot about just yet.
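The idea behind the expiry can be pictured roughly as follows. This is only an illustrative sketch, not hbackup's actual data structures or function names: entries record when they were removed from the client, and anything removed before a cutoff is dropped from the DB.

```cpp
#include <map>
#include <string>

// Illustrative sketch only: hbackup's real DB layout differs.
// Each entry maps a path to the time it was removed from the client.
// expire() drops every entry removed before the given cutoff time.
void expire(std::map<std::string, long>& removed_at, long cutoff) {
  for (auto it = removed_at.begin(); it != removed_at.end(); ) {
    if (it->second < cutoff)
      it = removed_at.erase(it);  // removed long ago: expire it
    else
      ++it;                       // removed recently: keep it around
  }
}
```

The iterator-returning overload of `erase` keeps the loop valid while deleting entries in place.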
Command-line parsing and the usage information have been much improved by the use of TCLAP.
Thanks to its developers for a great project!
Loads of new stuff; read the new HTML documentation to make your filters work again!
Compression is in this time, and memory usage should no longer exceed a few megabytes.
Enjoy!
There is a new filtering system enabling virtually any combination of conditions. It is designed so that filters can be defined globally in the configuration file, locally in the client's configuration file, and/or in any path section of the latter. This should avoid duplication.
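The "any combination of conditions" idea can be sketched like this. The `Condition` and `Filter` names are hypothetical, not hbackup's actual classes: a filter matches when all of its conditions hold, and a set of filters matches when any one filter does, so an AND-of-ORs structure covers most practical combinations.

```cpp
#include <functional>
#include <string>
#include <vector>

// Illustrative sketch only: names and types are not hbackup's.
// A Condition is any predicate on a candidate path.
using Condition = std::function<bool(const std::string&)>;

// A Filter matches when ALL of its conditions hold (logical AND).
struct Filter {
  std::vector<Condition> conditions;
  bool matches(const std::string& path) const {
    for (const auto& c : conditions)
      if (!c(path)) return false;
    return true;
  }
};

// A list of Filters matches when ANY filter matches (logical OR),
// so global, client and path-level filters can simply be appended.
bool any_match(const std::vector<Filter>& filters, const std::string& path) {
  for (const auto& f : filters)
    if (f.matches(path)) return true;
  return false;
}
```

For example, a filter holding the two conditions "name ends in .o" and "path is under the build tree" would exclude only object files in that tree, while leaving other .o files alone.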
There is a new user-mode backup. It is not well documented and has no regression tests yet, but it does work. It still needs a bit of love though, such as an automatic filter so it does not back up itself!
Until now, the DB close procedure was the same as the recovery procedure: merging the current list and the backup journal into the new list.
This has now been replaced by a system that performs the merge as the backup unrolls. DB closure should feel faster, although the total amount of work remains the same; it is simply done as we go.
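The merge itself can be sketched as a single pass over two sorted sequences, where journal entries override matching list entries. This is only an illustration of the technique; the `Entry` type and the surrounding structure are assumptions, not hbackup's actual code.

```cpp
#include <string>
#include <utility>
#include <vector>

// Illustrative sketch of merging a sorted file list with a sorted
// backup journal in one pass. Entry is hypothetical: path -> metadata.
using Entry = std::pair<std::string, std::string>;

std::vector<Entry> merge(const std::vector<Entry>& list,
                         const std::vector<Entry>& journal) {
  std::vector<Entry> out;
  size_t i = 0, j = 0;
  while (i < list.size() || j < journal.size()) {
    if (j == journal.size() ||
        (i < list.size() && list[i].first < journal[j].first)) {
      out.push_back(list[i++]);         // unchanged entry, keep as-is
    } else if (i == list.size() || journal[j].first < list[i].first) {
      out.push_back(journal[j++]);      // new entry from the journal
    } else {
      out.push_back(journal[j++]);      // journal overrides the old entry
      ++i;
    }
  }
  return out;
}
```

Done incrementally during the backup, each step of this loop costs almost nothing, which is why closure no longer appears as one big merge at the end.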
For historical reasons, hbackup used to load the entire list of expected files before parsing. This is no longer the case, which should dramatically reduce memory usage and, to a lesser extent, CPU usage too.
Use with caution until next release though, this is alpha quality! You've been warned :)
I have released version 0.2, as it adds restoration support. Use it like this:
hbackup -r dest_dir prefix date path
Example:
hbackup -r tmp file://linuxpc 1189411272 /home/fache/.kde
Note: tmp MUST exist
For a long time, the focus was on backing up all data; restoration had to be done manually. After a friend somehow ran 'rm -rf ~' on his machine, I implemented some support in an hour. I have now merged this work and done basic testing. Still alpha stage, though!
Works like this:
hbackup -r dest prefix [date [path]]
When correctly tested and documented, I shall release version 0.2.
The write function has been rewritten, which reduces the DB close time until on-the-fly merging works (which will also benefit from this improvement).