[Libjpeg-turbo-devel] Further research regarding the effectiveness of SmartScale
From: DRC <dco...@us...> - 2013-01-15 05:53:42
Since there have been questions from Fedora and others regarding the potential for libjpeg-turbo to support the DCT scaling and SmartScale features of jpeg-7 and later, I felt compelled to do some research into the effectiveness of these new features. The research revealed that DCT scaling and SmartScale do not generally accomplish anything that can't already be accomplished at least as well (and typically faster) by other means:

http://www.libjpeg-turbo.org/About/SmartScale

Executive summary:

-- For generating lossless files, libpng was much faster (3-4x) than jpeg-9 and achieved similar compression ratios.

-- Reducing the DCT block size (a feature of jpeg-8) did improve visual quality, but it also decreased the compression ratio, so the JPEG quality had to be reduced to compensate. The resulting images had the same or worse perceptual quality than equally-sized high-quality images generated with baseline encoding.

-- Using a DCT block size of 1 (best quality) typically increased encoding time by a factor of 4-6 relative to baseline JPEG; a block size of 2 typically increased it by a factor of 2-3.

-- Reducing the DCT block size did not allow a significantly or perceptibly higher maximum quality to be achieved relative to baseline JPEG.

-- Using a DCT block size of 1 or 2 did allow maximum quality to be achieved at a higher compression ratio, but the performance of these modes was painfully slow, and, as in the lossless case, much better performance and about the same peak compression ratio could be achieved with libpng.

-- In no case did reducing the block size provide better compression at the same overall perceptual quality than "low-quality JPEG" (quality=30, 4:2:0).
-- On photographic content, DCT scaling did produce better compression than "low-quality JPEG" at the same overall perceptual quality, but it did so by concentrating the error around sharp features, which is precisely where you don't want the error to be.

I evaluated these technologies partly with a mind for their potential usefulness in VirtualGL and TurboVNC, since that's one place where funding for an implementation of them in libjpeg-turbo could come from. What I found was that probably the biggest piece of low-hanging fruit is accelerating the arithmetic codec, since arithmetic coding typically increased the compression ratio by about 50% relative to Huffman. If it could be optimized in the same way that the Huffman codec has been optimized, it could be very interesting for remote display applications.

Otherwise, it is my opinion that SmartScale and DCT scaling provide no viable substitute for, or improvement upon, the existing "usable" modes of baseline JPEG. I don't claim that my research is universal, but I do claim that it is probably the most thorough study out there on this topic, since my reason for writing it was partly my inability to find any such information from another source.

Although I found no real usefulness for DCT scaling, in and of itself it is a harmless feature, since it works within the existing baseline JPEG standard. At the moment, however, I am opposed to any implementation of the SmartScale format, since it introduces a new, non-standard format whose usefulness has now been shown to be minimal at best. In my opinion, anyone who upgrades to jpeg-9 is doing so simply because they are blindly pulling in the latest & greatest code, not because there is any technological need for the new release. In fact, not only does jpeg-9 break ABI compatibility with jpeg-8, but it introduces yet another new non-standard format.

Disagree? Chime in.

DRC
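A minimal sketch of the kind of size comparison described above, using Pillow (which wraps libjpeg/zlib rather than libjpeg-turbo/libpng directly, so timings are not comparable here and only the size relationships are illustrative; the test image and the `encoded_size` helper are my own, not from the study):

```python
import io

from PIL import Image

# Synthetic test image: a smooth gradient, stand-in for real content.
img = Image.new("RGB", (256, 256))
img.putdata([(x, y, (x + y) % 256)
             for y in range(256) for x in range(256)])

def encoded_size(image, fmt, **opts):
    """Return the encoded byte count for the given format and options."""
    buf = io.BytesIO()
    image.save(buf, fmt, **opts)
    return buf.tell()

png_size = encoded_size(img, "PNG")              # lossless
hq_size = encoded_size(img, "JPEG", quality=95)  # high-quality baseline JPEG
lq_size = encoded_size(img, "JPEG", quality=30,  # "low-quality JPEG"
                       subsampling=2)            # subsampling=2 is 4:2:0

print(png_size, hq_size, lq_size)
```

Lowering the quality and enabling 4:2:0 subsampling is the standard baseline-JPEG lever for trading quality against size, which is the comparison point the block-size results above are measured against.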
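One of the "other means" alluded to above is that baseline JPEG decoders already support scaled decoding (reduced-size IDCT by 1/2, 1/4, or 1/8) without any new file format. A sketch, assuming Pillow, whose `Image.draft()` exposes libjpeg's `scale_num`/`scale_denom` for JPEG files (the in-memory round trip is my own construction, not from the study):

```python
import io

from PIL import Image

# Build a JPEG in memory so there is something to decode.
src = Image.new("RGB", (256, 256), (40, 120, 200))
buf = io.BytesIO()
src.save(buf, "JPEG", quality=85)
buf.seek(0)

# Ask the decoder for a ~1/2-scale image; libjpeg then performs a
# reduced IDCT instead of decoding at full size and resizing afterwards.
half = Image.open(buf)
half.draft("RGB", (128, 128))
half.load()

print(half.size)
```

Because the scaling happens inside the inverse DCT, this is typically faster than full-size decoding, which is why downscaled decode needs no SmartScale-style format extension.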