Maybe set a ridiculously large timeout for BackupPC and set the rsync option --timeout, and see if rsync manages it better. I think rsync's --timeout option looks for activity, not just completion, so it won't time out on a large file, while BackupPC only looks for activity across the SSH link, which is quiet (-q), so large files easily reach that timeout.
I'm not sure what removing the -q will do to your backups, but BackupPC might see that verbosity as part of the file stream and corrupt the files.
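A sketch of what that could look like in the server's config.pl (the values are illustrative, and you should check the defaults your BackupPC version ships before replacing them):

```perl
# Raise BackupPC's own no-output watchdog so it doesn't fire first.
# Value is in seconds; 72000 = 20 hours, pick something generous.
$Conf{ClientTimeout} = 72000;

# Let rsync watch for stalls itself: --timeout counts I/O inactivity on
# the rsync protocol, so a large file that is still transferring should
# not trip it the way BackupPC's quiet-SSH watchdog does.
$Conf{RsyncArgs} = [
    # ... keep the existing default arguments from config.pl here ...
    '--timeout=600',
];
```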
Anyone else have a clue on this?
I can't try this out: I have some clients that use a VPN over public networks, but I don't do backups over those links because BackupPC cannot find those clients via NetBIOS, and their IP address changes every time they connect.
I need some way for a client to broadcast its IP address to BackupPC, but I'm not sure how to do that. Maybe set up a WINS server or something.
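A low-tech alternative to WINS might be to have each client push its current VPN address into a hosts file the server resolves from. Everything below is hypothetical — the file layout, the tun0 interface name, and the ssh invocation are assumptions, just one way to get a changing IP across:

```shell
#!/bin/sh
# Hypothetical: keep one "<ip> <name>" line per client in a hosts file
# that the BackupPC server resolves from (e.g. included in /etc/hosts).

# update_hosts FILE NAME IP — replace NAME's entry, or append it.
update_hosts() {
    file=$1; name=$2; ip=$3
    # Drop any existing line ending in " NAME", then add the fresh one.
    grep -v " ${name}\$" "$file" > "${file}.tmp" || true
    echo "${ip} ${name}" >> "${file}.tmp"
    mv "${file}.tmp" "$file"
}

# On the client, a cron job could ship its current VPN address over,
# assuming the VPN interface is tun0 and the server exposes a wrapper
# around the function above:
#   ip=$(ip -4 addr show tun0 | awk '/inet /{sub(/\/.*/,"",$2); print $2}')
#   ssh backuppc-server update-client-ip "$(hostname -s)" "$ip"
```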
The ClientTimeout setting does what it should do: it interrupts the running backup once the timeout has passed with no text written to stdout.
But when you use rsync over SSH, the default settings are '-q' for ssh,
which makes the entire backup process go quiet until a share is finished.
Very nice for LAN setups, but unusable for Internet-based backups (they
might run as planned at, e.g., 300 kbps, but they can also run at
30 kbps, and then I get an error thanks to the timeout setting).
What is the easiest way to overcome this? Is there a plan to implement a
timeout per file transfer instead of per share?
Or should I just remove the '-q' option for SSH and accept more overhead
on the transport (I assume it is rsync on the client that produces the
output)? Please, some advice. Thanks.
Toni Van Remortel
Linux System Engineer @ Precision Operations NV
+32 3 452 92 26 - email@example.com
BackupPC-users mailing list