
Filesystem larger than block device.

  • Anonymous

    Anonymous - 2008-01-27

    Let me start out by saying that I have absolutely NO idea how this happened.  About a year and a half ago my 4-disk RAID 5 array lost a disk and had to be rebuilt.  I copied all the data off, initialized the array in the 3ware BIOS, partitioned with fdisk(8) and ran mke2fs(8) on the partition.  While upgrading the system today, a change in the behavior of mount(8) brought to light that the partition and filesystem were larger than the actual block device.

    The block device is 183148128 blocks large, but the FS believes it is 183835803 blocks in size.  As such I cannot modify the blocks_count with debugfs(8), nor will resize2fs(8) work; it complains:

    [mernisse@apollo: ~ (20987)]$sudo debugfs /dev/sda1
    debugfs 1.39 (29-May-2006)
    /dev/sda1: Can't read an inode bitmap while reading inode bitmap


    [mernisse@apollo: ~ (20989)]$sudo resize2fs -fp /dev/sda1
    resize2fs 1.39 (29-May-2006)
    Resizing the filesystem on /dev/sda1 to 183147016 (4k) blocks.
    resize2fs: Can't read an block bitmap while trying to resize /dev/sda1

    So there is no data beyond the 183147016th block; there never has been and never will be.  Is there any way to update the superblock, or otherwise repair the filesystem, so it can be used again?
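For a sense of scale, the mismatch can be sketched with shell arithmetic (the block counts are the ones reported above; the 4k block size comes from the resize2fs output):

```shell
# Figures from the post: what the device actually holds vs. what the
# superblock claims.
DEVICE_BLOCKS=183148128
FS_BLOCKS=183835803
BLOCK_SIZE=4096   # 4k blocks, per the resize2fs output

# Blocks the filesystem believes exist beyond the end of the device:
MISSING=$(( FS_BLOCKS - DEVICE_BLOCKS ))
echo "phantom blocks: $MISSING"                                   # 687675
echo "phantom space:  $(( MISSING * BLOCK_SIZE / 1048576 )) MiB"  # 2686 MiB
```

So the superblock claims roughly 2.6 GiB of blocks that the rebuilt array never had.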

    • Theodore Ts'o

      Theodore Ts'o - 2008-01-28

      First of all, I would recommend making a backup, since this can be very dangerous.  But something which *might* work is the following:

      # debugfs -w /dev/sda1
      debugfs: set_super_value blocks_count 183148128
      debugfs: quit
      # e2fsck -fn /dev/sda1
      # e2fsck -f /dev/sda1

      E2fsck may report data loss, but that data loss probably happened when the block device was truncated.  In any case, run it with e2fsck -fn first, and make sure you're comfortable with what it reported before you allow it to make changes to the filesystem with the second e2fsck -f /dev/sda1 run.

      Alternatively, if you can take a snapshot of the device (one of the advantages of a software RAID setup with something like LVM/device mapper), doing a test run on the snapshot first is highly recommended.

      Of course, if there is nothing really important on the filesystem, or you're just one of those happy-go-lucky kind of folks who are naturally optimistic that everything will go right (that sort of attitude generally gets burned out of system administrators fairly quickly and is replaced with a fairly large dose of paranoia and cynicism :-), you could just try doing an e2fsck -fy after the debugfs command.  Of course, that won't give you an easy way to back out if there are major problems uncovered by the e2fsck run.

      Good luck!
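The recipe above can also be rehearsed safely on a throwaway loopback image before touching the real device; a sketch, assuming the e2fsprogs tools (mke2fs, debugfs, e2fsck) are installed.  The image size and block counts here are made up for the demonstration:

```shell
# Rehearse the repair on a scratch file image instead of /dev/sda1
# (no root needed for a plain file).
IMG=$(mktemp /tmp/fsfix.XXXXXX)
truncate -s 16M "$IMG"
mke2fs -q -b 4096 -F "$IMG"   # 4096 blocks of 4 KiB

# Simulate a block device that shrank underneath the filesystem:
truncate -s 12M "$IMG"        # only 3072 blocks actually remain

# Rewrite blocks_count in the superblock to match reality, then fsck:
debugfs -w -R "set_super_value blocks_count 3072" "$IMG"
e2fsck -fy "$IMG" || true     # exit status 1 just means errors were corrected
e2fsck -fn "$IMG" && echo "second pass clean"
rm -f "$IMG"
```

Because the bitmaps of the single block group sit near the start of the image, debugfs can open it read-write here; on a filesystem whose bitmaps lie beyond the truncation point, the catastrophic-mode problem described below kicks in.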

      • Anonymous

        Anonymous - 2008-01-28

        Unfortunately debugfs(8) refuses to open the file system for writing.  It fails reading an inode bitmap and then refuses to do anything more with that fs.  You can open the fs successfully with -c, but that is read-only and can't be used to write the superblock (afaict).

        I wouldn't say I'm happy-go-lucky, more... backed into a corner; the data is replaceable, though replacing it is far more time-intensive than attempting recovery.  The real trouble is that I presently have nowhere to put a 751GB image of the drive to work with... otherwise I might have tried something like e2salvage, or just sucked it up, copied the data off and mkfs(8)'d a new filesystem on the drive.

        If there is no way to achieve a combination of -c and -w for debugfs(8), and no other tool to modify the superblock in such a way, then I might be stuck finding 751GB of space somewhere.

        Thank you much for the attention, I appreciate it.


        • Theodore Ts'o

          Theodore Ts'o - 2008-01-28

          Well, the only way to do that would be to recompile debugfs with the check removed.  Basically, remove the following from debugfs/debugfs.c, around line 75:

              if (catastrophic && (open_flags & EXT2_FLAG_RW)) {
                  com_err(device, 0,
                      "opening read-only because of catastrophic mode");
                  open_flags &= ~EXT2_FLAG_RW;
              }

          It would require recompiling e2fsprogs, sorry....

          • Anonymous

            Anonymous - 2008-01-28

            A huge Thank You!  Amazingly enough, against all other experience it really was that easy.  No need to apologize, you totally bailed me out. 

            I assume the best course of action now, even though e2fsck(8) checks the fs out OK, would be to back up the data somewhere and mkfs(8) the drive again, just to be on the safe side?  Or do you think that if e2fsck(8) says everything is OK, then everything really is OK?

            Again, Thank You!


            • Theodore Ts'o

              Theodore Ts'o - 2008-01-28

              Glad it fixed it.  Yes, if you are running a reasonably recent version of e2fsprogs, and a full check by e2fsck returns a clean bill of health, you can be reasonably confident that the filesystem is healthy.  Of course, this won't guarantee that the contents of the filesystem are clean, but at least the basic filesystem structure is O.K.

  • Seth Miller

    Seth Miller - 2009-12-13

    I tried to resize a logical partition and something went wrong.  I was getting the same symptoms Matt was, and ended up recompiling the debugfs program as suggested.  I had to use the catastrophic flag along with the read/write flag, as I'm sure was the case with you, and got the partition to the point where I could e2fsck it.  There was quite a bit of corruption and data loss.  Fortunately, it wasn't critical data.

    I don't know how often this problem comes up, but I would like to suggest an additional flag to bypass this safety check, so that a recompile isn't necessary as a last-ditch way of rewriting the superblock.

    Thanks again for the info.

