- priority: 5 --> 3
Not a defect, but perhaps an idea for optimization: when running the tool with several threads in parallel to reduce execution time and utilize all CPU cores for compression, the backup drive becomes quite badly fragmented.
I assume this is because each (small?) compression block finishes at its own time (effectively at random) and is immediately written to disk, appended to the file it belongs to. As a result, the output of the different cores "mixes" together in the on-disk layout... not so nice.
It would be great if the block size for hard disk allocations were increased. Maybe 1 GB blocks, or customizable on the command line with a 128 MB default if none is given? Allocate a big chunk of space up front, fill it with all the tiny compression results until it is exhausted, then claim a new big one. When everything is done, truncate the last block back to the size actually needed. This would keep fragmentation in check!
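To illustrate what I mean, here is a minimal sketch of the idea (not the tool's actual I/O code): on POSIX systems, posix_fallocate() could reserve a large contiguous extent per output file, the small compression results would then be appended into that reserved space, and ftruncate() would trim the unused tail at the end. The 128 MB constant mirrors the suggested default; all names here are illustrative.

```c
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

#define EXTENT_SIZE ((off_t)128 * 1024 * 1024)  /* suggested default: 128 MB */

/* Hypothetical per-output-file writer state. */
struct extent_writer {
    int   fd;        /* output file descriptor        */
    off_t used;      /* bytes actually written so far */
    off_t reserved;  /* bytes pre-allocated on disk   */
};

/* Reserve another large extent so small appends land in contiguous space. */
int reserve_extent(struct extent_writer *w)
{
    if (posix_fallocate(w->fd, w->reserved, EXTENT_SIZE) != 0)
        return -1;
    w->reserved += EXTENT_SIZE;
    return 0;
}

/* Append one finished compression block; grow the reservation when needed. */
int append_block(struct extent_writer *w, const void *buf, size_t len)
{
    while (w->used + (off_t)len > w->reserved)
        if (reserve_extent(w) != 0)
            return -1;
    if (pwrite(w->fd, buf, len, w->used) != (ssize_t)len)
        return -1;
    w->used += (off_t)len;
    return 0;
}

/* Trim the unused tail of the last extent once the file is complete. */
int finish_file(struct extent_writer *w)
{
    return ftruncate(w->fd, w->used);
}
```

The same effect could also be approximated without pre-allocation by buffering each file's results in memory and flushing in large chunks, but explicit reservation keeps memory use low while still giving the filesystem contiguous extents to work with.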