From: Hadar <wh...@gm...> - 2008-01-12 17:46:57
On Jan 11, 2008 7:06 AM, Phillip Lougher <phi...@gm...> wrote:
> On Jan 10, 2008 5:51 PM, Hadar <wh...@gm...> wrote:
>
> > What I was asking for is to split the image from the beginning - an
> > enhancement to mksquashfs. In that way, there will be smaller files
> > instead of one large file, which can sometimes be a problem.
>
> Hi,
>
> As Douglas said, there are already unix/kernel tools which allow you to
> split a large image file from mksquashfs into smaller files, and then
> handle them as separate files.
>
> If all you want to do is to split the large file into smaller files
> for email or storage on a mirror, you can use the split command, and
> then later cat the files back together into a single file for Squashfs
> loopback mounting.
>
> If for some reason you want to keep the large file split into smaller
> files even when mounting the Squashfs filesystem, you can use the Linux
> kernel device mapper. This allows you to concatenate one or more
> loopback devices into one virtual device which can then be given to
> the mount command.
>
> I'm rather puzzled why you want to do this. From your earlier email you
> mention:
>
> > Also, it will allow systems to load just part of the image into RAM
> > instead of all or nothing.
>
> Linux filesystems are loaded into memory (buffer and page cache) on a
> demand basis, when a particular file is read. At no time when the
> filesystem is mounted (from a block device or loopback file) is the
> entire filesystem loaded into RAM if the files are not accessed. This
> means splitting a larger Squashfs filesystem into smaller separate
> files when mounting the filesystem (as opposed to emailing/storing on
> mirrors) serves no useful purpose.

It serves the same purpose as when many livecds load the entire image into
RAM - it is a (simple) method to preload files into memory. The main
advantage of this method over what you suggested (please correct me if I am
wrong) is that the files are still compressed, whereas in the page cache
the files are stored after decompression and consume ~3 times more space.

> However, as a simple proof of concept, I knocked up the following bash
> shell script which creates a virtual device from separate files. It
> takes as arguments the prefix used when splitting the large filesystem
> using the unix command split (i.e. part.aa, part.ab, part.ac etc.) and
> the desired virtual device name.
>
> ================ dm_concat.sh ====================
> #!/bin/bash
>
> if [ $# -lt 2 ]; then
>     echo "usage: $0 prefix concat_device"
>     exit 1
> fi
>
> start=0
> for i in $1*; do
>     if ! loop=$(losetup -f); then
>         echo "losetup failed... Out of loop devices? Aborting."
>         exit 1
>     fi
>
>     echo associating $i with loopback device $loop
>     losetup $loop $i
>     size=$(blockdev --getsize $loop)
>     table=$table"$start $size linear $loop 0\n"
>     start=$((start + size))
> done
>
> echo "device mapper table"
> echo -e $table
> if echo -e $table | dmsetup create $2; then
>     echo created /dev/mapper/$2
> else
>     echo dmsetup failed
> fi
> =================================
>
> The shell script iterates over the split files and associates a
> loopback device with each one; it then builds a device mapper table
> consisting of these loopback devices, which it uses to set up a virtual
> device. Obviously this is more involved than loopback mounting a
> single file, because in that case the mount command does all the work
> for you.

Right... Thanks for the example script.
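For the simpler email/mirror case, I take it the whole round trip is
roughly the following (reusing the test.hsqs name and /mnt mount point
from your example below):

  split -b 3m test.hsqs part.      # produces part.aa, part.ab, part.ac, ...
  # ... mail or mirror the part.* files ...
  cat part.* > test.hsqs           # shell glob expansion keeps the parts in order
  mount -t squashfs test.hsqs /mnt -o loop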
> For example:
>
> Script started on Fri 11 Jan 2008 02:29:40 AM GMT
> # ls -hs test.hsqs
> 8.7M test.hsqs
> # split -b 3m test.hsqs part.
> # ls -hs
> total 18M
> 3.1M part.aa  3.1M part.ab  2.7M part.ac  8.7M test.hsqs
> #
> # dm_concat.sh part. test
> associating part.aa with loopback device /dev/loop5
> associating part.ab with loopback device /dev/loop6
> associating part.ac with loopback device /dev/loop7
> device mapper table
> 0 6144 linear /dev/loop5 0
> 6144 6144 linear /dev/loop6 0
> 12288 5352 linear /dev/loop7 0
>
> created /dev/mapper/test
> #
> # mount -t squashfs /dev/mapper/test /mnt
> # ls /mnt
> elfutils-0.131  pahole
> # unsquashfs -stat /dev/mapper/test
> Found a valid little endian SQUASHFS 3:1 superblock on /dev/mapper/test.
> Creation or last append time Mon Dec 24 04:02:47 2007
> Filesystem is exportable via NFS
> Inodes are compressed
> Data is compressed
> Fragments are compressed
> Check data is not present in the filesystem
> Fragments are present in the filesystem
> Always_use_fragments option is not specified
> Duplicates are removed
> Filesystem size 8819.75 Kbytes (8.61 Mbytes)
> Block size 1048576
> Number of fragments 30
> Number of inodes 2265
> Number of uids 3
> Number of gids 0
> # exit
> exit
>
> Script done on Fri 11 Jan 2008 02:33:25 AM GMT
>
> Phillip
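One last thing: I assume that after unmounting, the virtual device and the
loopback devices from your transcript are released with something like

  umount /mnt
  dmsetup remove test
  losetup -d /dev/loop5
  losetup -d /dev/loop6
  losetup -d /dev/loop7

(the loop device numbers would of course differ on another machine).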