How to improve compression for similar files whose size is bigger than the dictionary size?

  • colnector

    colnector - 2014-04-15

A solid archive greatly improves compression when the files are similar. However, there seems to be a known limitation when the similar files are larger than the allocated dictionary size: when that happens, compression deteriorates dramatically. For example, ten similar 5 GB files in a solid archive with a 1 GB dictionary would not compress well, because by the time the second file is reached, the dictionary no longer contains any data from the first file.

    Which workarounds would you suggest?

The ideal solution would be a way to mark files as similar so that 7z interleaves their data: a chunk from the first file, then a chunk from the second file, and so on.
A workaround could be to split the big files and compress the resulting chunks separately, or with some way to specify the order of the chunks.
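The interleaving workaround above can be done outside 7z. As a sketch (not a 7-Zip feature; the helper name, chunk size, and file layout are assumptions), the idea is to write chunk 0 of every file, then chunk 1 of every file, and so on, so that the matching regions of all the similar files sit next to each other and fit inside one dictionary window:

```python
# Hypothetical pre-processing step, not part of 7-Zip: interleave
# fixed-size chunks of several similar files into one stream before
# compressing it (e.g. with 7z in solid mode). To restore the original
# files after extraction, the chunk size and file sizes/order must be
# recorded somewhere and the process reversed.

# Assumed chunk size; keep (number of files * chunk size) well below
# the compressor's dictionary size so matching chunks stay in-window.
CHUNK_SIZE = 64 * 1024 * 1024

def interleave(paths, out_path, chunk_size=CHUNK_SIZE):
    """Write chunk 0 of every file, then chunk 1 of every file, etc."""
    handles = [open(p, "rb") for p in paths]
    try:
        with open(out_path, "wb") as out:
            done = False
            while not done:
                done = True
                for f in handles:
                    chunk = f.read(chunk_size)
                    if chunk:
                        out.write(chunk)
                        done = False  # at least one file still has data
    finally:
        for f in handles:
            f.close()
```

The interleaved stream could then be compressed with something like `7z a -ms=on archive.7z interleaved.bin`; the obvious downside is that extraction only gives back the interleaved stream, so a de-interleaving step is needed afterwards.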

    Thanks :)

  • colnector

    colnector - 2014-04-22


