I'd like to optionally 'optimize' JPEG images when converting them to CBZ using the Archive Editor.
Using the image browser XnView, I have "cleaned" a few sets of images (XnView: Edit, Metadata, Clean...). This command removes metadata, but it also re-saves the JPEGs and optimizes the Huffman colour table.
Optimizing the Huffman table is a lossless operation. (Various guides warn that it might produce images that are incompatible with older, poorly written JPEG viewers, but that documentation is old and I suspect the problem was overstated even back then.)
On an average set of 20 JPEGs with a total size of ~10 MB, I'm seeing an overall size reduction of 0.5 to 1 MB. That might not seem like much, but in my limited tests this step alone saves more space than RAR compression.
Why isn't this used more widely already? It takes a little longer to create optimized JPEGs, but I think this small expense is worthwhile, since an image is likely to be viewed/read many more times than it is created (and read times seem to be the same, possibly better).
The Python Imaging Library (PIL) includes the option to save with an optimized Huffman table.
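Roughly, the PIL call I mean looks like the sketch below (written against Pillow, the maintained PIL fork; the helper name is mine). `optimize=True` tells the encoder to compute per-image Huffman tables, and `quality="keep"`, which is valid for JPEG input only, reuses the source's quantization tables so the image data is not re-quantized:

```python
import io
from PIL import Image

def optimize_jpeg(data: bytes) -> bytes:
    """Re-encode JPEG bytes with an optimized Huffman table (hypothetical helper)."""
    im = Image.open(io.BytesIO(data))
    out = io.BytesIO()
    # optimize=True computes optimal Huffman tables instead of the
    # standard ones; quality="keep" reuses the original quantization
    # tables (only works when the source image is itself a JPEG).
    im.save(out, "JPEG", optimize=True, quality="keep")
    return out.getvalue()
```

Note that PIL decodes and re-encodes the image, unlike a pure stream rewriter such as jpegtran, so "lossless" here means the quantized coefficients are preserved by reusing the same tables rather than the bytes being untouched.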
(Python itself includes similar functionality, http://docs.python.org/library/jpeg.html, but it is deprecated as of 3.0 and PIL is suggested instead.)
This would be nice to have, but it is low priority as I already have ways of doing it. Still, I could do with a nicer workflow, and it fits well with the conversion goals of the Archive Editor.
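For what it's worth, the whole workflow I'm imagining could be sketched like this, assuming a CBZ is just a ZIP of page images (the function name and the choice to store re-encoded JPEGs uncompressed are my own; JPEG data doesn't deflate well anyway):

```python
import io
import zipfile
from PIL import Image

def optimize_cbz(src, dst):
    """Hypothetical sketch: copy a CBZ, re-encoding JPEG pages with
    optimized Huffman tables; other entries are copied through."""
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for name in zin.namelist():
            data = zin.read(name)
            if name.lower().endswith((".jpg", ".jpeg")):
                im = Image.open(io.BytesIO(data))
                out = io.BytesIO()
                # Lossless Huffman optimization: keep quantization
                # tables, recompute entropy-coding tables.
                im.save(out, "JPEG", optimize=True, quality="keep")
                data = out.getvalue()
                # Store JPEGs uncompressed; deflate gains are negligible.
                zout.writestr(name, data, zipfile.ZIP_STORED)
            else:
                zout.writestr(name, data, zipfile.ZIP_DEFLATED)
```

`src` and `dst` can be paths or file-like objects, so this would slot into an in-memory conversion pipeline as easily as a batch script.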