So it had been running for 27 hours, doing nothing but analyzing - packing a total of 0 files. It already knew the total number of files - so why the need to analyze/read every single file, only to maybe (in a far-away future) read every file AGAIN when adding to the archive?
I am wondering why the analyzing part takes so long. I have currently set it to pack a little under 10 million .wav files over the LAN. It has found that there are 690 GB, slowly counted up to 9,412,116 files... and now it's slowly analyzing all the files AGAIN. It's been going for more than 4 hours now... PC: dual Xeon, 128 GB RAM, running on 12 GB-cache SAS drives... 7-Zip benchmark: 2962%, 2.460 GIPS / 73.416 GIPS
Virtual WiFi?
SAPI event data (EVNT chunk)
I second this request :)
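For context on the request above: an EVNT chunk is just another sub-chunk inside the RIFF container of a .wav file, so locating it is a matter of walking the chunk list. A minimal sketch (assuming a standard RIFF/WAVE layout; the SAPI payload itself is left as opaque bytes, and the synthetic file below is purely for demonstration):

```python
# Walk the RIFF chunks of a .wav file and return the raw payload of a
# given chunk ID (e.g. the SAPI "EVNT" chunk), or None if absent.
import io
import struct

def find_chunk(stream, wanted_id):
    riff, _size, wave = struct.unpack('<4sI4s', stream.read(12))
    assert riff == b'RIFF' and wave == b'WAVE'
    while True:
        header = stream.read(8)
        if len(header) < 8:
            return None  # reached end of file without finding the chunk
        cid, csize = struct.unpack('<4sI', header)
        data = stream.read(csize + (csize & 1))  # chunks are word-aligned
        if cid == wanted_id:
            return data[:csize]

# Build a tiny synthetic WAVE containing only an EVNT chunk, to demonstrate.
evnt_payload = b'\x01\x02\x03\x04'
body = b'WAVE' + b'EVNT' + struct.pack('<I', len(evnt_payload)) + evnt_payload
wav = b'RIFF' + struct.pack('<I', len(body)) + body
payload = find_chunk(io.BytesIO(wav), b'EVNT')
```

A real .wav would also carry fmt and data chunks; the loop simply skips past anything that isn't the requested ID.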
Hi there :) FreeImage is an impressive piece of work which is able to do many things. It already has 4 different quantizer methods for converting 16-, 24- and 32-bit images to 8-bit, but there is another rather well-known open-source project which has taken the NeuQuant method a few steps further. The libimagequant library, available at https://github.com/ImageOptim/libimagequant, does impressive work - this can be seen using the compiled .exe at https://pngquant.org/ So why not include...
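To illustrate what any such quantizer does - reducing many colors to a small palette - here is a toy median-cut quantizer. This is NOT libimagequant's algorithm (which is a far more refined NeuQuant-derived pipeline with dithering); it is only a self-contained sketch of the general technique:

```python
# Toy median-cut color quantizer: reduce a list of RGB pixels to a
# palette of at most n_colors averaged colors. Illustrative only.

def median_cut(pixels, n_colors):
    """Return a palette of up to n_colors averaged (R, G, B) tuples."""
    boxes = [list(pixels)]
    while len(boxes) < n_colors:
        # Pick the box with the largest spread on its widest channel.
        box = max(boxes, key=lambda b: max(
            max(p[c] for p in b) - min(p[c] for p in b) for c in range(3)))
        if len(box) < 2:
            break  # cannot split a single-pixel box further
        channel = max(range(3), key=lambda c:
                      max(p[c] for p in box) - min(p[c] for p in box))
        box.sort(key=lambda p: p[channel])
        mid = len(box) // 2
        boxes.remove(box)
        boxes += [box[:mid], box[mid:]]
    # Each palette entry is the per-channel average of its box.
    return [tuple(sum(p[c] for p in b) // len(b) for c in range(3))
            for b in boxes]

# Two reddish and two bluish pixels collapse into a 2-color palette.
pixels = [(255, 0, 0), (250, 5, 5), (0, 0, 255), (5, 5, 250)]
palette = median_cut(pixels, 2)
```

Libraries like libimagequant additionally remap every pixel to the nearest palette entry and apply error-diffusion dithering, which is where most of the visible quality difference comes from.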