From: Neal R. <ne...@ri...> - 2005-01-26 17:50:02
On Tue, 25 Jan 2005, David Boreham wrote:

> Observation: almost all of the difference in file size between the
> compressed and non-compressed cases can be accounted for by the free
> bytes in leaf pages. Therefore, if the file were to be re-built
> inserting all the keys in key sort order, the resulting file (sans
> compression) should be quite close in size to the compressed one.
> I believe it should be possible to test this theory using db_dump
> and db_load back-to-back.

I think you are right: doing a dump & reinsert would be a nice way to
optimize for size (a sketch of the commands is at the end of this
message). However, in my tests both files had about the same fill
factor, so the compressed DB files will still be smaller.

The odder result is that Jim sees about a 5x speed improvement when he
disables compression. I also see a speedup. Joe gets a 14% slowdown.
They both seem to have similar systems (dual Xeons) and are crawling a
local site (no network delay).

Jim: These numbers are for a local dig on a dual 2.8 GHz Xeon box with
RAID 5 and a couple GB of RAM.

Joe: My system is similar to yours: a dual Xeon 2.4 GHz box with RAID 5,
10K RPM SCSI drives, and 2.0 GB DDR RAM.

FYI: David Boreham is a Berkeley DB guru.

Thanks.

Neal Richter
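
P.S. A minimal sketch of the dump/reload test, assuming a hypothetical
database file name (db.words.db); the rebuilt file name is likewise
made up:

    # db.words.db is an assumed name; substitute the actual database file
    db_dump db.words.db | db_load db.words.rebuilt.db

db_dump writes a flat-text format to stdout that db_load understands,
and db_load reads from stdin when no -f option is given, so the two
utilities can be piped back-to-back and the size of the rebuilt file
compared against the compressed one.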