Hi,
I have been conducting research experiments using genetic improvement, and while experimenting with OptiPNG, one of the variants we found seemed particularly interesting.
The idea was to take a set of various images and see whether it was possible to obtain a variant of OptiPNG that runs faster (in terms of CPU instructions) while preserving the semantics of the original code (here, the reduction in image size).
--- a/src/optipng/optim.c 2017-12-22 10:08:00.000000000 +0000
+++ b/src/optipng/optim.c 2020-04-29 18:41:24.680482538 +0100
@@ -1102,21 +1102,8 @@
     png_set_compression_mem_level(write_ptr, memory_level);
     png_set_compression_strategy(write_ptr, compression_strategy);
     png_set_filter(write_ptr, PNG_FILTER_TYPE_BASE, filter_table[filter]);
-    if (compression_strategy != Z_HUFFMAN_ONLY &&
-        compression_strategy != Z_RLE)
-    {
-        if (options.window_bits > 0)
-            png_set_compression_window_bits(write_ptr,
-                                            options.window_bits);
-    }
-    else
-    {
-#ifdef WBITS_8_OK
-        png_set_compression_window_bits(write_ptr, 8);
-#else
-        png_set_compression_window_bits(write_ptr, 9);
-#endif
-    }
+    if (options.window_bits > 0)
+        png_set_compression_window_bits(write_ptr, options.window_bits);
 
     /* Override the default libpng settings. */
     png_set_keep_unknown_chunks(write_ptr,
The patch above removes the special-case calls to png_set_compression_window_bits made for Z_HUFFMAN_ONLY and Z_RLE,
and results in a variant that is consistently faster on our dataset: on average, it uses 40% fewer CPU instructions and 20% less running time, with no impact on image size.
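In case it is useful, below is a rough, self-contained sketch of the kind of measurement behind these instruction counts, using Linux's perf_event_open(2) to count retired instructions around a region of code. It is only illustrative (the loop is a placeholder workload, not our actual harness).

/* Illustrative only: count retired CPU instructions for a region of
 * code on Linux using perf_event_open(2).  The loop below is just a
 * placeholder workload, not the actual experiment. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_INSTRUCTIONS;
    attr.disabled = 1;
    attr.exclude_kernel = 1;
    attr.exclude_hv = 1;

    /* pid = 0, cpu = -1: measure this process on any CPU. */
    int fd = perf_event_open(&attr, 0, -1, -1, 0);
    if (fd == -1) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    volatile unsigned long sink = 0;      /* placeholder workload */
    for (unsigned long i = 0; i < 1000000; ++i)
        sink += i;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    long long instructions = 0;
    if (read(fd, &instructions, sizeof(instructions)) == (ssize_t)sizeof(instructions))
        printf("instructions retired: %lld\n", instructions);

    close(fd);
    return 0;
}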
The "faulty" line sets the size of the compression window to its minimal value, either 8 or 9 (256 or 512 bits), while its otherwise defaults to its maximal value (15, i.e., 32768 bits).
I am by no means an expert, and the variant was automatically generated.
Is there a reason to prefer a small window size over a large one for Z_HUFFMAN_ONLY and Z_RLE, or is it perhaps simply historical?
Additionally, why are user-provided values ignored in that case?
Regards,
Aymeric.