
#2 command line options optimization

Status: open
Milestone: None
Priority: 5
Updated: 2009-01-30
Created: 2007-08-06
Creator: Anonymous
Private: No

Can optipng be modified to:

1) Use an alternative PNG parameter search order? This helps get a smaller file size on the first cut. Later optimization trials will stop processing when the IDAT data exceeds the smallest size found so far. Depending on the file type, I do "optipng -o1 a.png" or "optipng -o2 a.png" before doing the full optimization of "optipng -o7 a.png".

2) Reuse the 5 best parameter sets found for the last 5 files optimized. This is helpful when you are optimizing a directory of files that are more or less of the same type (e.g., scanned text pages), since the optimal parameters for the current file are most likely going to be the best, or nearly the best, for the next file.

3) (Advanced) Allow the user to specify a size threshold for abandoning the optimization trial for a file. For example, optipng could skip to the next set of PNG input parameters when the estimated final file size exceeds the best file size found so far and 90 percent of the file has been processed.

This makes sense because if you have processed 90 percent of the input and the output is already 95 percent of the best known size, you would have to compress the remaining 10 percent of the input more than 50 percent better than the rate so far to beat the best known input parameters.

Discussion

  • Nobody/Anonymous

    I would suggest an LRU cache of the last N best parameters (one candidate per file, N files), rather than the top N parameters of the last single file (N candidates per file, one file). In the degenerate case (all files alike) this will result in approximately the same candidate list, but it is resilient to cases where the style of compression alternates between several distinct image types.

    That is to say, if the current image is unlike the previous image, it's better to have a candidate list drawn from several different images rather than a handful of methods all suited to the previous image. If the current image is like the previous image, then the previous image's single candidate is still likely to produce many fast-out cases, even if it turns out not to be the perfect method.

    Also, if N=1, testing the cached candidate up front without eliminating it from the comprehensive search (i.e., running that trial once at startup and once more in the normal course of the search) may still be faster than not implementing the feature at all.

     
  • Cosmin Truta

    Cosmin Truta - 2009-01-30
    • assigned_to: nobody --> cosmin
     
  • Cosmin Truta

    Cosmin Truta - 2009-01-30

    Hi,

    I've just realized I had not answered this old feature request. Both the initial request and the follow-up comment are very good suggestions. Thank you very much.

    I admit the hyper-rectangular search is too dumb as it is right now. This is not an immediate project, but I am planning to make it faster somehow. There are lots of combinations that are very unlikely to be successful, and I was thinking of filtering them out, e.g. by using an optimization technique such as the "branch-and-bound" algorithm.

    My method would still be image-agnostic: no knowledge about previously compressed images is maintained for subsequent compression. To get consistent results, an offline database of successful combinations would probably have to be maintained; otherwise, anyone who runs optipng on a single file from the command line (e.g. in shell/batch files that contain lines like "optipng $1" or "optipng %1") would not be able to benefit from it. However, this would be a more advanced project.

    Best regards,
    Cosmin

     
