From: Jason U. <jas...@sg...> - 2005-02-11 10:47:21
On Thu, Feb 10, 2005 at 03:15:56PM -0700, Troy Heber wrote:
> I'm working on a major clean up of the lkcd kernel patch code in the
> 7.X.X development branch. I am planning on removing RLE compression,
> unless there is a strong objection. This means the two choices will be
> uncompressed or gzip. Lcrash will continue to work with older RLE
> compressed dumps; this only affects the lkcd kernel patch.

There are actually quite a few people using RLE compression for performance reasons, mostly on large systems where we have so much memory that dump times are pushing the limits of practicality. The synchronous nature of dump I/O means the CPU time required to perform gzip compression can have a huge effect on total dump time, so using the cheaper RLE algorithm can yield a net win even though we have to write more physical blocks that way.

SGI quantified this at one point and found gzip dumps were taking ~6.8 times as long as RLE dumps of comparable uncompressed size, even though the gzip-compressed dump was only about 30% the size of the RLE-compressed one. Those numbers are of course highly dependent on the ratio of CPU speed to disk speed and on the compressibility of the data, but I don't think we were doing anything that could be considered an extreme corner case.

Part of the problem is that the dump_gzip code uses a compression level of Z_BEST_COMPRESSION (the equivalent of gzip -9), which favors size over speed to an extreme well beyond the point of diminishing returns. We found that using a more moderate compression level gave a ~300% speed boost at the cost of a very modest (~15%) increase in dump size. That change never got pushed out to the LKCD project (I'll look into doing that, and verify the results against the current codebase, as soon as I get a chance), but even with a ~300% boost, gzip is still considerably slower than RLE.
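To illustrate why RLE is so cheap on the CPU, here is a minimal byte-level run-length encoder in userspace Python. This is only a sketch of the general technique, not LKCD's actual on-disk RLE format: a single linear pass with no bit-level work, which does very well on the long runs (e.g. zero-filled pages) that dominate memory images.

```python
def rle_compress(data: bytes) -> bytes:
    # Encode as (count, byte) pairs, with count capped at 255 so each
    # pair fits in two bytes. One linear pass, no bit manipulation --
    # this is why RLE costs so much less CPU than deflate.
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out.append(run)
        out.append(data[i])
        i += run
    return bytes(out)


def rle_decompress(data: bytes) -> bytes:
    # Expand each (count, byte) pair back out.
    out = bytearray()
    for i in range(0, len(data), 2):
        out.extend(data[i + 1:i + 2] * data[i])
    return bytes(out)


# A zero-filled 4 KB page collapses to 17 (count, byte) pairs = 34 bytes.
page = bytes(4096)
packed = rle_compress(page)
print(len(packed))                     # 34
print(rle_decompress(packed) == page)  # True
```

The tradeoff, of course, is that incompressible or weakly repetitive data compresses far worse than it would under deflate, which is exactly the size-versus-CPU tradeoff described above.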
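The compression-level point is easy to demonstrate in userspace. The sketch below uses Python's zlib module (the kernel's dump_gzip path uses the in-kernel zlib, but the level semantics are the same): level 9 is Z_BEST_COMPRESSION / gzip -9, while a moderate level such as 6 or even 1 spends far less CPU for only a modest increase in output size. The payload here is an arbitrary repetitive buffer chosen for illustration; the exact ratios on real dump data will differ.

```python
import time
import zlib

# Moderately compressible stand-in for dump data (illustrative only).
data = b"some moderately compressible payload " * 4096

for level in (1, 6, 9):
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(out):7d} bytes in {elapsed * 1e3:.2f} ms")
```

On most inputs the jump from a moderate level to level 9 buys little additional size reduction while costing disproportionate CPU time, which is the diminishing-returns effect described above.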