From: Phillip L. <ph...@lo...> - 2013-02-06 00:14:23
On 03/02/13 12:11, Dr. Alexander K. Seewald wrote:
> Hi all,
>
> I am a first-time user of mksquashfs 4.2 and have experienced two
> issues when compressing a 2TB (exfat) file system, ca. 5 million
> files.

Hi

The symptoms you describe sound like you've hit this bug:

http://git.kernel.org/?p=fs/squashfs/squashfs-tools.git;a=commit;h=1aa52b74f978fdbf359ad9cdcf0b2c6514904000

This is fixed in the git development version. You can get this by cloning the git repository, url

git://git.kernel.org/pub/scm/fs/squashfs/squashfs-tools.git

Alternatively, if you're unfamiliar with git, you can get a ZIP archive of the current git repository here:

https://github.com/plougher/squashfs-tools/archive/master.zip

Please tell me if this fixes your problem.

Phillip

> * On a 32-bit Debian Wheezy system with 4GB RAM, dual-core, I got a
>   SEGFAULT on several attempts.
>   - One assumption was that the duplicate-checksum buffer was
>     becoming too large, so I switched to a second machine with more
>     memory (also one that runs all the time, so the time-consuming
>     tests were easier to do ;-)
>
> * On a 64-bit Debian Squeeze system with 8GB RAM, quad-core, I got a
>   different error, sadly:
>   Lseek on destination failed because Invalid argument (errcode = 1)
>
> The only things I changed were
> a) using the SquashFS 4.2 code from SourceForge instead of the
>    Debian Wheezy package
> b) using a smaller hard disc as target (which may be too small)
>
> None of the runs created a valid squashfs filesystem on the target
> device (always: SquashFS superblock not found).
>
> I am using a block size of 1MB and -noappend; otherwise all settings
> are left at default (e.g. deduplication is switched on, etc.). I am
> writing directly to a hard disc partition, and no read/write errors
> appear in syslog after the crash.
>
> Now I'd be happy to look into this and fix it; however, each test run
> takes around 3-4 days, so I have some questions which may help
> speed things up.
> In the meantime I am trying to replicate this error
> with a more manageable set of files.
>
> Can the Lseek error simply indicate that the target hard disc is too
> small?
>
> Are there size limitations within mksquashfs which might cause a
> SEGFAULT when creating such a large filesystem?
>
> Can the duplicate-checksum buffer overrun or become too large and
> cause a segfault, or should there be an error message? There were no
> OOM messages in syslog, and overcommit was at the default 50%, so it
> is perhaps unlikely; also there were no visible signs that the
> machine went into thrashing at any time during the run.
>
> How likely is this a multithreading issue? (i.e. does it make sense
> to run with -processors 1?)
>
> How likely is this related to deduplication? (i.e. does it make
> sense to switch it off for testing? I would not want to switch it
> off permanently, since the filesystem has lots of duplicate files.)
>
> Best,
> Alex
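For readers following the same steps, the suggestions in this thread (build the fixed development sources, then narrow the failure down with single-threaded and deduplication-disabled runs) can be sketched as a shell session. This is a sketch under assumptions, not part of the original exchange: it assumes a Debian-like host with git, gcc, make, and zlib development headers installed, and `/path/to/source`, `/path/to/sample`, and `/dev/sdXN` are placeholder names for the real source tree, a smaller test tree, and the target partition.

```shell
# Fetch the development sources containing the fix referenced above.
git clone git://git.kernel.org/pub/scm/fs/squashfs/squashfs-tools.git
cd squashfs-tools/squashfs-tools   # the Makefile lives in this subdirectory
make
sudo make install

# Re-run with the same options that failed: 1MB blocks, no append,
# writing directly to the target partition.
mksquashfs /path/to/source /dev/sdXN -b 1048576 -noappend

# To narrow down the cause on a smaller, more manageable sample tree,
# try single-threaded and deduplication-disabled runs separately:
mksquashfs /path/to/sample /tmp/test1.sqsh -noappend -processors 1
mksquashfs /path/to/sample /tmp/test2.sqsh -noappend -no-duplicates
```

The `-processors 1` run tests the multithreading hypothesis, and `-no-duplicates` tests the deduplication hypothesis, as raised in Alex's questions; if either run succeeds where the default run segfaults, that isolates the subsystem at fault.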