I am unable to apply any solid LZMS-compressed wim I made with wimlib
1.6.2. Wimlib fails with the same error message every time:
"[ERROR] Failed to decompress data!
ERROR: Exiting with error code 2:
Failed to decompress compressed data."
I guess something is not quite right with the decompression (or compression) routine. Non-solid LZMS-compressed wims are OK, and so are the other compression types.
I tried to apply these wims with both 1.6.2 and the latest 1.7.0 beta.
Source and target are both NTFS; the OS is Windows 7 x64.
I can't reproduce this problem. Can you provide me with one of the problematic WIM files?
BTW, Eric, thanks for this fantastic utility - and sorry for not starting with this. :)
Indeed, I just created a small wim and applied it without any problem.
However, I do have the problem with a series of 5 GB+ wims I created from my system drive ("C") to test the new LZMS capabilities of wimlib. The solid compression is very impressive, but I am unable to extract any of these solid LZMS wims (I made a series with different solid chunk sizes). There is nothing special about them - they are whole-system-drive backups, with the usual stuff excluded via wimscript.ini.
Capture options were (e.g. 4 MB solid chunk size):
--boot --check --config=wimscript.ini --strict-acls --norpfix --compress=LZMS --solid --solid-chunk-size=4194304
Apply options were:
--check --strict-acls --norpfix --include-invalid-names
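For context, the full command lines would look roughly like this. This is a sketch, not taken from the thread: the WIM filename, image name, and drive letters here are hypothetical; `wimlib-imagex` is wimlib's command-line front end, and the options are the ones listed above.

```shell
# Capture the system drive into a solid LZMS-compressed WIM
# (4 MiB solid chunk size, matching the options above):
wimlib-imagex capture C:\ backup.wim "System" --boot --check --config=wimscript.ini --strict-acls --norpfix --compress=LZMS --solid --solid-chunk-size=4194304

# Apply image 1 of that WIM to the target drive:
wimlib-imagex apply backup.wim 1 D:\ --check --strict-acls --norpfix --include-invalid-names
```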
I ran wimlib with administrator privileges both for capture and apply.
I have found no way to apply any of these whole-system-drive wims compressed with LZMS in solid mode. As mentioned, I can successfully apply the regular (non-solid) LZMS-compressed wim of the same system drive.
FYI, obviously, I booted from another partition to make the wim backups of the system drive.
It is also worth mentioning that all wims are found to be OK during verification (--check).
Any ideas what the problem can be here?
Thanks for the information. You're right that this could be a problem with the LZMS compression or decompression (only occurring on certain data).
In the testing directory I've posted a file lzms-verify.zip that contains a Windows build of wimlib 1.7.0-BETA with compression verification enabled. For LZMS, if a just-compressed block of data cannot be decompressed back to the original data, the original block will be dumped to a file "failed_block.bin" in the current directory. Can you try capturing the WIM image with this version, using the smallest solid chunk size at which you experienced the problem? Then, if it fails and produces a "failed_block.bin" file, send just that file. (Note: it might contain the contents of multiple files, so only send it if you're able to!)
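The verification scheme described above can be sketched as a simple round trip: compress a block, immediately decompress it, compare with the original, and dump the block to disk on mismatch. LZMS isn't available in Python's standard library, so this sketch substitutes zlib as a stand-in codec; the function name and dump-file handling are illustrative, not wimlib's actual code.

```python
import zlib


def compress_block_verified(data: bytes, dump_path: str = "failed_block.bin") -> bytes:
    """Compress a block, then immediately verify it round-trips.

    If decompressing the just-compressed block does not reproduce the
    original data, dump the original block to disk for later analysis
    and raise an error, mirroring the debug build described above.
    """
    compressed = zlib.compress(data)          # stand-in for the LZMS compressor
    if zlib.decompress(compressed) != data:   # stand-in for the LZMS decompressor
        with open(dump_path, "wb") as f:
            f.write(data)
        raise RuntimeError("compressed block failed round-trip verification")
    return compressed


# A block that round-trips cleanly produces no dump file:
block = b"example chunk of file data" * 1000
payload = compress_block_verified(block)
assert zlib.decompress(payload) == block
```

Since no failed_block.bin ever appeared in your runs, every block passed this check, which is what points suspicion away from the codec itself.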
Thanks a lot for the quick response. I may disappoint you, but we have not yet found the root cause.
I captured the clone of my system drive from drive D with the test version of wimlib you posted, using the exact same capture options I posted above. There were simply no errors during capture, but it took a really long time.
I reformatted drive D (still NTFS, default cluster size), then applied the above-captured wim to drive D with the test version of wimlib - and got the exact same error message and failure as above.
Wimlib verifies the integrity of the wim, prints "Applying image......" to the console, then fails with the error message within about 30 seconds. No failed_block.bin is created during either the capture or the apply phase.
One interesting tidbit: wimlib apply seems to get stuck at the exact same spot every time, precisely when it starts extracting file data. All it does is create all the folders and file names on the target drive, then it fails. FYI, the original drive captured contains 84,828 files, while drive D after a failed wimlib apply contains 85,431 files, all 0 bytes in length, thus with a total size of 0 bytes. Given that there are more files on drive D after a failed apply, one wonders whether there is a problem with links, or the treatment of links, here.
What say you?
This is helpful - it seems to indicate that the problem is not with compression or decompression. It could be with the code that divides WIM resources into "chunks", although currently I don't know what the exact problem might be - perhaps something affecting large archives only. I will keep thinking about it and trying to reproduce the problem.
The difference in file count may not actually be a problem. The final file attributes and security descriptors are not set until after the file data is extracted. If the program you were using to view the file count excludes some files based on their metadata (e.g. "hidden" files), that could cause the discrepancy. If so, the discrepancy would disappear if you were, in fact, able to do a successful extraction.
Hi, I think I've solved the problem. There is a bug where the size of entries in the chunk table is incorrectly set to 8 bytes, rather than 4 bytes, when creating solid blocks containing more than 4 GiB of uncompressed data. When extracting, this causes decompression to be attempted on the wrong data, causing the error. Can you try the latest v1.7.0-BETA and see if it solves the problem? Since the bug is with creating the archive, you'll need to re-capture.
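The bug described above can be illustrated with a small sketch. The helper names are hypothetical, but the rule follows the description: non-solid resources switch to 8-byte chunk-table entries once the uncompressed size exceeds 4 GiB, while solid blocks must keep 4-byte entries; the buggy writer applied the non-solid rule everywhere, so the reader (expecting 4-byte entries) parsed the chunk table at the wrong offsets and handed the decompressor the wrong data.

```python
GIB = 1 << 30  # one gibibyte


def chunk_entry_size(uncompressed_size: int, solid: bool) -> int:
    """Correct size, in bytes, of one chunk-table entry (per the fix)."""
    if solid:
        return 4  # solid blocks always use 4-byte entries
    return 8 if uncompressed_size > 4 * GIB else 4


def buggy_chunk_entry_size(uncompressed_size: int, solid: bool) -> int:
    """Pre-fix behavior: the solid case was not special-cased."""
    return 8 if uncompressed_size > 4 * GIB else 4


# A 5 GiB solid block (like the 5 GB+ system-drive captures above):
# the writer emitted 8-byte entries, the reader expected 4-byte ones,
# so decompression was attempted on the wrong data.
size = 5 * GIB
assert chunk_entry_size(size, solid=True) == 4
assert buggy_chunk_entry_size(size, solid=True) == 8
```

This also explains why only the large captures failed: solid blocks under 4 GiB of uncompressed data got 4-byte entries either way, so small test WIMs round-tripped fine.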
Thanks a lot Eric, that was it!
I have successfully applied a solid LZMS wim of my system drive captured with the latest beta - a first! Yes, obviously, the file-count discrepancy was the result of wimlib setting security descriptors and attributes last - no such problem anymore either.
Wimlib is such a great piece of software, I am glad this bug is fixed now.
Thanks a lot again!