From: Les M. <le...@fu...> - 2005-07-01 02:40:26
On Thu, 2005-06-30 at 20:37, John Pettitt wrote:
> That's not what I see in BackupPC::PoolWrite - it looks like it buffers
> up to 1MB to do the MD5 sum and then reads candidates from the cpool -
> if it gets a match it seems to just keep reading both the file stream
> from tar and the cpool until either no match or EOF

You are probably right - I didn't wade through the code and was guessing,
based on seeing the 'link pending' state and the subsequent link
operation, that the files were stored first and then linked with the pool.

> - I'm guessing that what's going on is that it's effectively
> alternating between disk reads and network reads and in so doing
> getting a lot less throughput than it could. It's probably possible
> to improve it with a multi threaded system [...]

A typical system would probably be doing 4 concurrent runs with
different processes.

> Anyway I think I know why it does what it does and I don't think I can
> speed it up without writing code.

If you only have one box to back up, you might speed it up by splitting
it into multiple host entries and using the alias feature to make it
look like several machines that could run concurrently. If you have
separate filesystems on separate drives, that might make sense.

-- 
Les Mikesell
le...@fu...
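The multiple-host-entries trick uses BackupPC's per-host config and its `$Conf{ClientNameAlias}` setting. A sketch, assuming a tar transfer and hypothetical host names (`www-var`, `www-home`, `www.example.com`); the alias makes two backup "hosts" connect to the same physical box, each backing up a different filesystem so the runs can proceed concurrently:

```perl
# hosts file: two entries for one physical machine
#   www-var    0    backuppc
#   www-home   0    backuppc

# pc/www-var/config.pl
$Conf{ClientNameAlias} = 'www.example.com';
$Conf{TarShareName}    = ['/var'];

# pc/www-home/config.pl
$Conf{ClientNameAlias} = 'www.example.com';
$Conf{TarShareName}    = ['/home'];
```

This pays off mainly when the shares live on separate spindles, as noted above; two concurrent runs against one disk would just contend with each other.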