From: Zoltan B. <zb...@du...> - 2006-01-18 11:32:35
Hi,

the previous patch needed a small fix. I replaced c_stream.avail_out with c_stream.total_out in two places after compression is done, so the correct stream size is used. The updated patch is attached. (A minimal sketch of the zlib pattern in question follows at the end of this mail.)

I also did a little benchmarking.

Best time out of 5 runs for uncompressed streams:

$ time for i in `seq 1 1000`; do ./test >test4.pdf ; done

real    0m2.108s
user    0m0.862s
sys     0m1.240s

Best time out of 5 runs for compressed streams:

$ time for i in `seq 1 1000`; do ./test >test3.pdf ; done

real    0m2.475s
user    0m1.073s
sys     0m1.393s

Difference in total time: 367 msec, a 17% increase.

$ ls -l test3.pdf test4.pdf
-rw-rw-r--  1 zozo zozo 2569 jan 18 12:14 test3.pdf
-rw-rw-r--  1 zozo zozo 3004 jan 18 12:12 test4.pdf

That is about a 14.5% decrease in size.

Here are the results for one of my reports that has a very small recordset.

Uncompressed:

$ time for i in `seq 1 1000`; do ./ptgriport Bolt2 szallitolista 0 2005-01-01 2005-12-31 ; done

real    1m3.501s
user    0m32.082s
sys     0m18.655s

Compressed:

$ time for i in `seq 1 1000`; do ./ptgriport Bolt2 szallitolista 0 2005-01-01 2005-12-31 ; done

real    1m4.142s
user    0m32.178s
sys     0m18.393s

The performance difference is negligible: 642 msec, about a 1% increase. The database access times dominate here. But look at the file sizes:

$ ls -l /tmp/szallitolista.pdf*
-rw-rw-r--  1 zozo zozo 1676 jan 18 12:54 /tmp/szallitolista.pdf
-rw-rw-r--  1 zozo zozo 5402 jan 18 12:49 /tmp/szallitolista.pdf.uncompressed

With a much larger recordset that produces more than one page, or when the pages carry more information, the much larger decrease in size will justify the time spent on compression, since less data is written out. Also, when used from PHP through a browser, the smaller file transfers faster; the network is slower than disks.

Best regards,
Zoltán Böszörményi
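For clarity, here is a minimal, self-contained sketch of the zlib deflate pattern the fix applies to. This is not code from the patch: the input string and the buffer name and size are made up for illustration, and only the standard zlib API is used.

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        const char *input = "example PDF stream contents";
        Bytef out[256];              /* hypothetical output buffer */
        z_stream c_stream;

        memset(&c_stream, 0, sizeof(c_stream));
        if (deflateInit(&c_stream, Z_DEFAULT_COMPRESSION) != Z_OK)
            return 1;

        c_stream.next_in = (Bytef *)input;
        c_stream.avail_in = (uInt)strlen(input);
        c_stream.next_out = out;
        c_stream.avail_out = sizeof(out);

        if (deflate(&c_stream, Z_FINISH) != Z_STREAM_END) {
            deflateEnd(&c_stream);
            return 1;
        }

        /* After compression, total_out is the number of compressed
         * bytes actually produced; avail_out is only the space left
         * in the output buffer, so it must not be used as the
         * stream size. */
        printf("compressed size: %lu bytes\n", c_stream.total_out);

        deflateEnd(&c_stream);
        return 0;
    }

Compile with e.g. "gcc -o deflate-sketch deflate-sketch.c -lz". Reading total_out where avail_out was read before is the kind of change the patch makes.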