From: Phillip S. <ps...@cf...> - 2010-04-26 15:53:14
The archives show almost no activity on this list for some time, so I hope this finds its way to someone. I have been trying to understand why dump uses multiple processes that seem to burn through a lot of CPU time when doing a compressed backup with a larger block size (512 KB). The code is very hard to follow, but it seems like the first process reads blocks from the disk and writes them to a pipe. The second process reads from the pipe and compresses the data, then writes it to another pipe. Finally, the third process reads the compressed data from the pipe, slaps a header on it, and writes it to the tape.

Is this correct? Does the third process attempt to reblock the data so it always writes fixed-size records? If so, how does it do this?
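
For reference, here is a minimal sketch (not dump's actual code) of the kind of three-process pipeline I am describing. The 512 KB block size is just the value from my question, stdin/stdout stand in for the disk and the tape device, and the "compression" stage simply forwards the data unchanged:

/*
 * Sketch of a reader -> compressor -> writer pipeline connected by pipes.
 * All sizes and the pass-through compression stage are placeholders.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

#define BLOCKSIZE (512 * 1024)      /* hypothetical 512 KB read block */

static void reader(int out_fd)      /* process 1: read raw blocks */
{
    char buf[BLOCKSIZE];
    ssize_t n;
    while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0)
        write(out_fd, buf, n);      /* push raw data into the pipe */
    close(out_fd);
    _exit(0);
}

static void compressor(int in_fd, int out_fd)   /* process 2: compress */
{
    char buf[BLOCKSIZE];
    ssize_t n;
    while ((n = read(in_fd, buf, sizeof buf)) > 0) {
        /* real code would run the data through zlib/bzlib here;
         * this sketch just forwards it unchanged */
        write(out_fd, buf, n);
    }
    close(in_fd);
    close(out_fd);
    _exit(0);
}

static void writer(int in_fd)       /* process 3: header + tape write */
{
    char buf[BLOCKSIZE];
    ssize_t n;
    while ((n = read(in_fd, buf, sizeof buf)) > 0) {
        /* real code would prepend a record header and reblock to a
         * fixed record size before writing to the tape device */
        write(STDOUT_FILENO, buf, n);
    }
    close(in_fd);
    _exit(0);
}

int main(void)
{
    int p1[2], p2[2];

    if (pipe(p1) < 0 || pipe(p2) < 0) {
        perror("pipe");
        return 1;
    }

    if (fork() == 0) {              /* child 1: reader */
        close(p1[0]); close(p2[0]); close(p2[1]);
        reader(p1[1]);
    }
    if (fork() == 0) {              /* child 2: compressor */
        close(p1[1]); close(p2[0]);
        compressor(p1[0], p2[1]);
    }
    if (fork() == 0) {              /* child 3: writer */
        close(p1[0]); close(p1[1]); close(p2[1]);
        writer(p2[0]);
    }

    /* parent: close all pipe ends and wait for the three children */
    close(p1[0]); close(p1[1]);
    close(p2[0]); close(p2[1]);
    while (wait(NULL) > 0)
        ;
    return 0;
}

My question is whether dump's third stage really does look like the writer() above, i.e. whether it buffers the variable-size compressed output until it can emit fixed-size records, and if so where in the code that happens.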