A solid archive greatly improves compression when the archive contains similar files. However, there seems to be a known limit when the similar files are larger than the allocated dictionary: when that happens, compression deteriorates dramatically. For example, ten similar 5 GB files in a solid archive with a 1 GB dictionary would not compress well, because by the time the second file is reached, the dictionary no longer contains anything from the first file.
What workarounds would you suggest?
The ideal solution would be a way to mark files as similar so that 7z interleaves them: a chunk from the first file, then a chunk from the second file, and so on.
A workaround could be to split the big files yourself and compress the resulting chunks in an interleaved order, either as separate archives or with some way to specify the order of the chunks.
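A minimal sketch of that splitting workaround, assuming GNU coreutils `split` and the `7z` CLI are available. The file names and chunk size here are illustrative stand-ins for the multi-gigabyte originals; the trick is to name each chunk with its index first, so an alphabetical sort (the order a solid archive typically processes files in) places matching chunks from the two files next to each other.

```shell
#!/bin/sh
# Demo stand-ins for two large, similar files (names are hypothetical).
head -c 1M /dev/urandom > base.bin
cp base.bin file1.bin
cp base.bin file2.bin

# 1. Split each file into chunks no larger than the intended dictionary size.
for f in file1.bin file2.bin; do
  split -b 256K --numeric-suffixes=0 -a 4 "$f" "$f.part."
done

# 2. Rename so the chunk index sorts first:
#    0000.file1.bin, 0000.file2.bin, 0001.file1.bin, ...
mkdir -p chunks
for p in *.part.*; do
  idx=${p##*.}
  base=${p%.part.*}
  mv "$p" "chunks/$idx.$base"
done

ls chunks   # sorted order now alternates between the two files

# 3. A solid archive of the chunks then sees the duplicate data back to back,
#    so even a small dictionary can match it (switches may vary by version):
# 7z a -ms=on -md=256k interleaved.7z ./chunks/*
```

Extraction would then require concatenating each file's chunks back together, so this is only practical where a reassembly step is acceptable.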