160GB ext3 filesystem on 16GB drive (or: ext3 versus huge defective areas)

  • SaGS

    SaGS - 2013-05-11

    Hi all,

    I have an unusual case where I need to create a 160GB ext3 filesystem on a 16GB USB flash drive. Of course, 144GB of that space (everything except the 16GB at the beginning) needs to be marked as bad so it is never used for files/directories.

    The exact command I tried is:

        mkfs.ext3 -l bads.txt \
            -b 4096 -g 32768 -I 128 -N 37504 -m 1 -J size=4 -O large_file \
            /dev/sdb1 38417920

    where bads.txt was obtained with:

        for ((i = 3841792; i <= 38417920 - 1; i++)); do
            printf "%d\n" $i
        done > bads.txt

    3841792 is the 'real' number of 4k blocks that fit on this device. The 'total number of i-nodes' is computed so that only 1 block is used for the i-node table in each group. The other parameters try to get the maximum possible space for data, but that's not important here.
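    (The arithmetic behind these numbers can be sanity-checked with plain shell; the block counts are the ones from the command above, with 4 KiB blocks.)

    ```shell
    # Sanity-check the block counts used above (4096-byte blocks).
    real_blocks=3841792    # blocks that physically fit on the 16GB stick
    fake_blocks=38417920   # block count passed to mkfs.ext3

    echo "real size : $(( real_blocks * 4096 )) bytes"   # ~15.7 GB
    echo "fake size : $(( fake_blocks * 4096 )) bytes"   # ~157 GB
    echo "bad blocks: $(( fake_blocks - real_blocks ))"  # lines in bads.txt
    ```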

    What I get is a lot of

        "Warning: the backup superblock/group descriptors at block <number> 
         contain bad blocks."

    (that I think are not fatal), then

        "Allocating group tables: 0/1173 ... 118/1173"

    followed by a fatal

        "mkfs.ext3: Could not allocate block in ext2 filesystem while 
         trying to allocate filesystem tables"

    The filesystem is not created. I get the same result whether or not I hack the MBR to make the partition look 10 times larger than the device.

    After trying to understand the on-disk ext2/3 structures, it looks to me like these cannot cope with large defective areas. Superblock backups, group descriptor tables, the various bitmaps and i-node tables are allocated at fixed positions on the disk, and if those locations are defective, formatting can only fail. If there is a defective area larger than the group size, then a group will certainly fall completely within it, so there is no place to allocate its block/i-node bitmaps and i-node table; I guess (but do not know for sure) that this is what's happening here.

    Also, as I understand it, the group size is not unlimited (so one cannot create a single group with the whole defective area at its end): for a 4k block size the maximum group size is 128MB.
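    (The 128MB figure follows from each group's block bitmap being a single filesystem block: one 4096-byte block holds 4096 × 8 = 32768 bits, so one group can cover at most 32768 blocks of 4 KiB each.)

    ```shell
    # Why a 4 KiB block size caps the group size at 128 MiB:
    block_size=4096
    blocks_per_group=$(( block_size * 8 ))             # one bit per block in the bitmap
    group_bytes=$(( blocks_per_group * block_size ))   # bytes covered by one group

    echo "$blocks_per_group blocks/group, $group_bytes bytes/group"
    ```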

    Software version used: mke2fs 1.42 (29-Nov-2011), from Ubuntu 12.04 LTS.

    Did I understand correctly that ext2/3 cannot cope with defective areas that large? What else can I do to create a filesystem whose total size is much larger than the device it is created on? (Note: it's not that I want to write that much data there; I just want it to look much larger than the device would allow.)


  • Theodore Ts'o

    Theodore Ts'o - 2013-05-11

    Nope, you can't do that. With ext3 the block group specific metadata have to be located in the block group.

    Why on earth would you want to do something like this in the first place? If the goal is to create a seed file system which you can later grow, why not format it as a small file system and then use resize2fs?
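    (A minimal demonstration of this seed-and-grow approach, run against a throwaway image file so it needs no root; on the real stick the same mkfs.ext3/resize2fs commands would target /dev/sdb1. The mktemp path and the 32M/64M sizes are illustrative only.)

    ```shell
    # Seed a small ext3 filesystem, then grow it with resize2fs.
    img=$(mktemp)
    truncate -s 32M "$img"
    mkfs.ext3 -q -F "$img"              # seed filesystem at the small size
    truncate -s 64M "$img"              # the "device" later becomes bigger
    e2fsck -f -p "$img" >/dev/null      # resize2fs wants a freshly checked fs
    resize2fs "$img" >/dev/null 2>&1    # grow the fs to fill the new size

    blocks=$(dumpe2fs -h "$img" 2>/dev/null | awk '/^Block count:/ {print $3}')
    bs=$(dumpe2fs -h "$img" 2>/dev/null | awk '/^Block size:/ {print $3}')
    echo "filesystem now spans $(( blocks * bs )) bytes"
    rm -f "$img"
    ```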

    Alternatively, create the file system image as a sparse file. That is, format the USB drive as a standard ext3 file system, mount it on /mnt (for example), create the file system image as /mnt/fs.img, and then mount the image using "mount -o loop /mnt/fs.img /mnt2".
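    (A sketch of the sparse-image trick on a throwaway file; the mktemp path and the 1G size are illustrative. On the real stick you would use a path under the mounted drive and something like `truncate -s 160G` instead. The point is that only the blocks mkfs actually writes consume space on the 16GB device.)

    ```shell
    # Sparse image: large apparent size, small actual disk usage.
    img=$(mktemp)
    truncate -s 1G "$img"                    # apparent size 1 GB, ~0 bytes on disk
    mkfs.ext3 -q -F "$img"                   # only metadata blocks get allocated

    apparent=$(stat -c %s "$img")            # size the image claims to have
    used=$(( 512 * $(stat -c %b "$img") ))   # bytes actually allocated on disk
    echo "apparent: $apparent bytes, used: $used bytes"

    # mount -o loop "$img" /mnt2             # (needs root)
    rm -f "$img"
    ```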

  • SaGS

    SaGS - 2013-05-11

    First, thank you for replying.

    Why on earth would you want to do something like this in the first place?

    I have a stubborn STB that activates its 'USB Personal Video Recorder' function if I plug in a real hard disk (tried with 2TB; others used 500GB), but not when I plug in any of the USB flash drives I have (16GB max; I know 512GB sticks exist, but take a look at their price!).

    I don't know how it decides this. It's not the r/w speed, because it does no tests. I think it's not the fixed-or-removable bit either, because as I understand it Linux does not care (it treats both USB [magnetic] disks and USB flash drives as removable). So it must decide based on some sort of 'size'. Maybe I'm lucky and it's not the drive's 'native' size, because if the drive is unformatted (or formatted with something it does not recognize, like XFS) it displays 0/Unknown as the size, not the real size of the device. So I'm trying to pass a 16GB stick off as something MUCH larger...

    Nope, you can't do that. With ext3 the block group specific metadata have to be located in the block group.

    The main problem to solve is to disallow/avoid access to block groups that are beyond the drive's capacity. I took a look at the format of the ext2 group descriptors. Does the following change for the 'non-existent' groups have any chance of working:

    • first, allocate a single block (one for the whole drive) and fill it with all 1's; let's call this 'the stub bitmap';
    • in the group descriptors, set the 'block bitmap block #' and 'i-node bitmap block #' to point to the 'stub bitmap'; all these bitmaps will be 'merged' and located outside the groups they belong to;
    • in the group descriptors, set # of free blocks = # of free i-nodes = # of directory entries = 0, so there's no reason to allocate anything or look in there;
    • for the i-node tables, do something similar to the bitmaps: a single shared table that looks full, stored outside the group it belongs to, and essentially read-only; I'm not sure how to make it look full.
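    (To experiment with rewriting descriptors like this, you would first need to see where each group's metadata currently lives; dumpe2fs prints this per group. A sketch on a throwaway image file, so it needs no root; a real attempt would run dumpe2fs against /dev/sdb1 instead. The mktemp path and 32M size are illustrative.)

    ```shell
    # Inspect per-group metadata locations with dumpe2fs (read-only).
    img=$(mktemp)
    truncate -s 32M "$img"
    mkfs.ext3 -q -F "$img"

    # Each "Group N" stanza lists the block bitmap, i-node bitmap and
    # i-node table locations that the proposed hack would have to redirect.
    groups=$(dumpe2fs "$img" 2>/dev/null | grep -c '^Group ')
    dumpe2fs "$img" 2>/dev/null | grep -A 3 '^Group 0'
    rm -f "$img"
    ```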

    Also, I'm not sure how the periodic filesystem checks would cope with all of this. Can these checks be essentially disabled?

  • Theodore Ts'o

    Theodore Ts'o - 2013-05-11

    It might be possible to hack the format so that you could fool the kernel implementation of ext3 used by your STB (but there are subtleties that would depend on exactly which kernel version, and hence which ext3 implementation, it uses).

    However, any of these changes would not pass muster with the fsck checks. Disabling them would require making changes to the boot scripts of the STB, and if you were willing to make changes to the STB's root image, you could do many other things (including changing the kernel so it would lie about the size of the file system and/or the raw block device).

    The bottom line is that you could probably put in the equivalent of thousands of dollars' worth of effort to fool the STB, but at that point, is it really worth it?

  • SaGS

    SaGS - 2013-05-11

    Thank you for all of this information.

    When I find some time I'll try to verify whether these hacks work somehow. And yes, counted in money, it's not worth the trouble. But (1) I am stubborn too, not just this STB, and (2) I don't want to give up before looking at this from ALL reasonable angles.

