I just tried backing up some directories with lots of files in them. The full
backup went OK, but on the incremental backup my machine choked (it's a test PC
with only 512 MB RAM): the BackupPC_dump process was taking up all available
memory (500 MB).
So my question is:
if I back up a mail storage server with the following stats for a full
backup: 1403556 files, 31818.0 MB,
what is the expected memory usage if 6000 files change for the next
incremental backup, using rsync as the protocol?
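As I understand it (and this is an assumption on my part, not confirmed behavior), rsync-style transfers build an in-memory file list covering every file in the tree, not just the changed ones, so memory would scale with the total file count rather than with the 6000 changed files. A rough back-of-envelope sketch, with a purely illustrative per-entry overhead:

```python
# Back-of-envelope estimate: if the backup process keeps one in-memory
# entry per file in the tree (assumption), memory scales with total
# file count, not with the number of changed files.

FILES = 1_403_556        # files in the full backup (stats above)
BYTES_PER_ENTRY = 350    # assumed per-file overhead; illustrative only

total_mb = FILES * BYTES_PER_ENTRY / 1024 / 1024
print(f"~{total_mb:.0f} MB")  # → ~468 MB, in the ballpark of the 500 MB observed
```

If that assumption holds, it would explain why the incremental was just as hungry as the full backup.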
The thing is, I expect a backup server to need great disk I/O and
network bandwidth, but I don't expect it to need gigabytes of memory. Of
course memory is cheap nowadays (and the real backup server will have at
least 2 GB), but that's beside the point here ...
I think, for debugging purposes, it would be great to be able to send the
process a USR signal of some kind, to get memory usage info per variable, so
one can see which variables are taking up all the memory. Or maybe be able
to configure a limit on the memory used by the process (so it might take a
bit longer, but use less memory).
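For illustration only, here's a minimal sketch of the signal-handler idea in Python (BackupPC itself is Perl, and the handler name and reporting detail are my own invention, not anything BackupPC provides):

```python
import signal
import resource

def dump_memory(signum=None, frame=None):
    """Hypothetical debug handler: report process memory on SIGUSR1.

    A real handler would also report the sizes of the big per-variable
    structures (e.g. an in-memory file list) to show what dominates.
    """
    # ru_maxrss is peak resident set size: kilobytes on Linux,
    # bytes on macOS, so treat the unit as platform-dependent.
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"peak RSS: {peak} (platform-dependent unit)")
    return peak

# kill -USR1 <pid> would then trigger the report without stopping the dump
signal.signal(signal.SIGUSR1, dump_memory)
```

Something along these lines, dumping per-structure counts, would make it much easier to see where the 500 MB is going.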