From: Craig B. <cba...@us...> - 2004-03-17 08:00:13
Ben writes:

> Hello, I have been monitoring backups tonight and one of them was
> taking an unusually long time to back up. I noticed that it is
> downloading files that are already in the pool. I know I have spoken
> about this before, but this is a different problem. I'm comparing the
> files that are being downloaded to the files that are in the pool,
> and it seems that it will download files that have been moved to
> another directory. Is that true?
>
> So for example, I have some docs in /docs or whatever; these get
> backed up fine, and after the full backup they are never downloaded.
> If these docs are moved to, say, /old-docs, then they all get
> downloaded again. I may be wrong, but that's what's happening as far
> as I can tell.

This is normal.

> I understand that after it's finished downloading all files, the
> linker will go through and remove the duplicates, creating hard links
> to the original files, but I don't suppose there is a way to make it
> not download files that have just been moved, by any chance?

None of the Xfer methods that BackupPC uses (smb, tar, rsync) can
detect renamed files, so if you rename a file it will be transferred.
But after it is transferred it will be matched against the pool, so no
additional storage is required. Renamed files therefore cause network
overhead, but no storage overhead.

> Another thing I have noticed is NewFileList - is this the file that
> BackupPC looks at to determine whether or not to send an ALM (because
> the backup has timed out)? I've been monitoring this file during this
> particular backup and it has just stopped producing any output even
> though new files have been copied.

Because of stdio buffering this file won't be updated for every new
file.

Craig
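To make the pool-matching point concrete, here is a minimal Python
sketch of content-addressed pooling with hard links. It is only an
illustration of the idea, not BackupPC's actual implementation (the
real pool is Perl code using partial-file MD5 digests and collision
chains); the function name and pool layout here are hypothetical.

    import hashlib
    import os

    def add_to_pool(pool_dir: str, transferred_file: str) -> str:
        """Store transferred_file in a content-addressed pool.

        Sketch only: hash the file contents, and if an identical file
        is already in the pool, hard-link to it instead of keeping a
        second copy. Assumes pool_dir already exists.
        """
        with open(transferred_file, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()

        pool_path = os.path.join(pool_dir, digest)
        if os.path.exists(pool_path):
            # Same content already pooled (e.g. a renamed file that was
            # re-transferred): replace the fresh copy with a hard link,
            # so the rename costs bandwidth but no extra disk space.
            os.remove(transferred_file)
            os.link(pool_path, transferred_file)
        else:
            # New content: make this copy the pool entry.
            os.link(transferred_file, pool_path)
        return pool_path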
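And a small sketch of the stdio-buffering effect on NewFileList: a
writer's output sits in an in-process buffer and only reaches the file
when the buffer fills, is flushed, or the file is closed, so a reader
watching the file sees updates in bursts rather than per file. The
file name below is made up for the demo.

    import os

    # Open with a large buffer, roughly how stdio-buffered code behaves
    # by default when writing to a regular file (not a terminal).
    log = open("NewFileList.demo", "w", buffering=65536)

    for i in range(10):
        log.write(f"file-{i}\n")

    # The writes above are still in the process's buffer; another
    # process reading the file at this point sees nothing yet.
    print("size before flush:", os.path.getsize("NewFileList.demo"))  # 0

    log.flush()  # data reaches the file only now (or when buffer fills)
    print("size after flush:", os.path.getsize("NewFileList.demo"))   # > 0
    log.close()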