#317 ZPAQ is crazy slow because it uses no CPU


Tried compressing 800 MB of data with ZPAQ, with everything set to maximum. 520 minutes have passed and it still hasn't finished. Checking CPU utilization while I'm playing some music, I'm seeing at most 3% CPU usage, and hardly any HDD activity. Seriously, no wonder ZPAQ takes forever to finish if it's basically idling there, doing nothing at all. When I run LZMA2 (with 7-Zip), it properly crunches the data: CPU utilization goes very high, but it also finishes anything in a matter of seconds. ZPAQ in PeaZip, on the other hand, takes ridiculously long no matter what you give it to compress, even a tiny 20 MB file.

I have an Intel Core i7 920 overclocked to 4 GHz (with 8 freaking threads!), 6 GB of RAM and hybrid HDD+SSD storage, and all this raw power is sitting there, doing nothing. It could finish maybe 20 times faster if it used all 8 threads and kept CPU utilization at 75% or more.
I don't know whose fault it is exactly, the ZPAQ compressor itself or PeaZip, but either way it makes absolutely no sense that I have all this power at my disposal and it's not being utilized at all. And I mean it's actually not being used: at 3% CPU usage and with hardly any HDD activity, there is just no way anything is being properly processed (compressed). Use the damn CPU and compress the data faster. People might even start using ZPAQ then, because right now it's practically unusable...


  • Reinier Olislagers

What happens if you manually invoke the ZPAQ compressor? If you see the same problem, please report it to the ZPAQ author, as there's nothing that can be done on the PeaZip side...

    • RejZoR

      RejZoR - 2014-05-05

      Actually, no. If I use the ZPAQ build from Matt Mahoney's webpage, it compresses the same data in a matter of minutes (but the command line is just too damn clumsy for daily usage). Even with -method 6, which is the highest compression level for ZPAQ, and even with just 2 threads. With 4 threads it was ridiculously fast, considering how people always complain about how slow PAQ compressors are. With PeaZip, I'm guaranteed to have the compression process run for 12+ hours, and so far I've always just canceled it because it wasn't going anywhere. Unreasonably slow. So, clearly, it's not the compressor's fault; it has to be something in PeaZip.

      Though I've noticed the ZPAQ command-line tool often just stops compressing when I use more threads. That happened several times with 4, 3 and even 2 threads, never with just one, on a system with an 8-thread CPU (Core i7 920 @ 4 GHz) and 6 GB of RAM. Still, even 1 thread does the job A LOT faster than ZPAQ in PeaZip.
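      For reference, a manual invocation along the lines I'm describing would look roughly like this. This is only a sketch: the exact argument order and flag spelling vary between ZPAQ versions (check the tool's built-in help), and the archive and folder names are placeholders.

      ```shell
      # Sketch of a manual zpaq run at maximum compression. Assumes zpaq 6.xx-era
      # options (-method, -threads); verify against your version's built-in help.
      # "backup.zpaq" and "data/" are placeholder names.
      METHOD=6     # highest compression level, as mentioned in this thread
      THREADS=4    # worth trying fewer threads if compression stalls, as noted above
      ZPAQ_CMD="zpaq a backup.zpaq data/ -method $METHOD -threads $THREADS"
      echo "$ZPAQ_CMD"   # print the command instead of executing it in this sketch
      ```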

      • Giorgio Tani

        Giorgio Tani - 2014-05-06

        PeaZip uses ZPAQ 4.04 (it plainly calls the executable), while the latest ZPAQ 6.50 has received several efficiency improvements.
        Various changes in ZPAQ syntax and in archive listing have so far prevented me from successfully integrating the latest ZPAQ releases into PeaZip, as some code must be rewritten to work with them. But supporting the ZPAQ project remains one of my goals, as I really appreciate it.

        • Ceremony

          Ceremony - 2014-06-22

          Maybe this issue should be merged with #246: https://sourceforge.net/p/peazip/tickets/246/

          After all, both will be fixed once PeaZip is on ZPAQ 6.xx.

          Can't wait to get ZPAQ support back. Reading archives from newer ZPAQ versions is broken anyway... :(

          Last edit: Ceremony 2014-06-22
  • Giorgio Tani

    Giorgio Tani - 2014-07-08

    ZPAQ backend updated to 6.54 in PeaZip 5.4.0

  • Giorgio Tani

    Giorgio Tani - 2014-07-08
    • status: New --> Fixed
  • RejZoR

    RejZoR - 2014-07-08

    Fantastic. It's now very quick. However, I can't seem to find the settings for it anymore (Fast, Normal, Maximum compression modes). When I'm compressing data, I want it set to max, mostly because it compresses quite poorly now compared to LZMA2 on Ultra, whereas before ZPAQ was often quite a bit better.

