I just realized that even files compressed with LZMA2 use only one (logical) core for decompression. In the age of i7s and SSDs this is very inefficient.
Any plans on this, or did I miss something?
Thanks in advance.
Currently there are no plans to improve it.
LZMA/LZMA2 decompression speed is not so bad.
What is the size of your SSD?
What decompression speed do you need?
And why is high speed important to you?
Please show a "real life" example.
Igor, no offense, but do you know the quote attributed to Bill Gates back in 1981? "640 kB RAM ought to be enough for anybody." You seem to be bringing a quite similar scepticism here. Just because you don't have a real-life example doesn't mean that others don't need these bottlenecks reduced.
Again, no offense, but I think we have both seen otherwise great software die simply because it lacked foresight.
So here is my real-life scenario: I'm using 7z to compress virtual machine disk files, mainly VHD and VMDK. On average, they compress down to about 10% of their original size (with high variance, certainly). I need this for deploying, archiving and moving large volumes of up to 500 GB.
Let's say I develop a VM on my PC (i7) and compress it with LZMA2 on eight threads - that works perfectly in 7z.
However, decompression runs on one logical core at around 35 MB/s, while the write speed of my RAID 0 of two Samsung 830 SSDs is 770 MB/s (measured with IOMeter).
So decompressing is really a significant bottleneck in this case.
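The single-core ceiling is easy to reproduce outside 7-Zip; here is a minimal sketch using Python's stdlib `lzma` module (not 7-Zip itself, so absolute numbers will differ, but its LZMA decoder is likewise single-threaded):

```python
import lzma
import time

# Build a compressible test blob (repetitive, like a sparse VM disk image).
raw = (b"virtual disk sector " * 64 + b"\x00" * 2048) * 4096  # ~13 MB
blob = lzma.compress(raw, preset=6)
ratio = len(blob) / len(raw)

t0 = time.perf_counter()
out = lzma.decompress(blob)  # runs on one core, however many are available
elapsed = time.perf_counter() - t0

mb = len(out) / 1e6
print(f"ratio {ratio:.1%}: {mb:.0f} MB decompressed in {elapsed:.3f}s "
      f"-> {mb / elapsed:.0f} MB/s on a single core")
```

Watching a CPU monitor while this runs shows exactly one core busy, which is the bottleneck being described.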
How many GB do you need to decompress per day?
If the file is compressed to 10% of its original size, the LZMA/LZMA2 decompression speed (measured on the decompressed data) will be higher than 35 MB/s.
Please show some real numbers:
- Compressed size
- Decompressed size
- Decompression time.
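Those three numbers give both an input-side and an output-side rate, which is where the 10% ratio matters. A quick sketch (all figures below are hypothetical, assumed purely for illustration, not measurements from this thread):

```python
# Hypothetical figures (assumed, not measured): a 50 GB VM image
# compressed to 10% of its size, restored in ~24 minutes.
compressed_mb = 5_000
decompressed_mb = 50_000
seconds = 1_430

print(f"input rate:  {compressed_mb / seconds:.1f} MB/s (compressed bytes read)")
print(f"output rate: {decompressed_mb / seconds:.1f} MB/s (decompressed bytes written)")
# At a 10% ratio the output-side rate is 10x the input-side rate, so a
# figure like "35 MB/s" means very different things depending on which
# side of the decoder it was measured on.
```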