e2fsck fails, e2fsck -b 0 works. Why?

  • Matthew Exon

    Matthew Exon - 2006-02-13

    I've just spent almost a week trying to recover from a corrupted filesystem, and finally I got it.  In the end, the trick was that e2fsck wouldn't work, but e2fsck -b 0 did.

    Now, I call the results weird.  So I'm here to ask whether (a) someone can give me a more detailed explanation of what was wrong with my filesystem, and (b) there's some improvement I should request to make e2fsck more intelligent about these things, in case it happens to other people.

    The major thing I don't understand is: isn't superblock 0 the one that it's supposed to be looking at anyway?  How do you get a filesystem into the state where e2fsck wouldn't be looking there by default?

    One of the weird effects I encountered was that GRUB could see my filesystem perfectly.  In fact, of all the tools I tried, it was only GRUB reassuring me that there was any intact filesystem there at all.  My understanding is that GRUB's ext2 implementation is simpler and rougher than the "real" ext2 implementation.  Maybe by being a bit stupider, it's somehow lucking out?  And maybe the "real" implementation could learn something from this.

    My story is:

    I used parted under System Rescue CD to make some free space at the end of a large ext2 partition for a new Windows 2000 install.  I also created a FAT filesystem for it to go in.  At one point, parted segfaulted.  The current maintainer insists that although this can happen, it can't do anything bad to my filesystem.

    I tried to install Windows: this gave me a "filesystem error" message or something similar.

    I rebooted Linux: at this stage it was apparently still fine.  However, my gut feeling in retrospect is that something was already wrong here, and I made it worse by running Linux.

    I decided to try putting the Windows partition at the start of the disk, using a different version of parted on the R.I.P. CD.  It turns out that if you try to resize with a different start block than the current start, parted ignores that number and just leaves it where it is.  So the net effect was to return my partition to its original size.  This rather pissed me off, so I abandoned the attempt to install Windows at this point.

    I rebooted Linux, but it refused to boot.  The kernel failed to mount the root filesystem.  So from this point on I tried various rescue CDs.

    mount and e2fsck didn't work.  Example output of mount:

    mount: wrong fs type, bad option, bad superblock on /dev/hda1,
           missing codepage or other error
           In some cases useful info is found in syslog - try
           dmesg | tail  or so

    syslog shows:

    VFS: Can't find ext3 filesystem on dev hda1.

    Example output of e2fsck:

    # e2fsck /dev/hda1
    e2fsck 1.38 (30-Jun-2005)
    Group descriptors look bad... trying backup blocks...
    e2fsck: Bad magic number in super-block while trying to open /dev/hda1

    The superblock could not be read or does not describe a correct ext2
    filesystem.  If the device is valid and it really contains an ext2
    filesystem (and not swap or ufs or something else), then the superblock
    is corrupt, and you might try running e2fsck with an alternate superblock:
        e2fsck -b 8193 <device>

    I tried gpart, but it completely failed to notice my ext2 partition.  Instead, it managed to pick up some vestigial FAT partition, possibly the one I'd created and deleted earlier.

    I also tried e2retrieve, but I hit some kind of bug and it just bailed on me.  Its output might be interesting:

    99.30% (7/70/2441783 different superblocks, 94244 dir. stubs) 829128282:57:41/8
    100.00% (7/70/2441351 different superblocks, 94291 dir. stubs)

    Scan finished

    #1 (155131640 Ko) : copy 0 1 3 5 7 9 25 27 49 81 125 243 343 625 729
    #2 (155131640 Ko) : copy 0 0 0 0
    #3 (155131640 Ko) : copy 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    #4 (155131640 Ko) : copy 0
    #5 (155131640 Ko) : copy 0 0
    #6 (155131640 Ko) : copy 0 0
    #7 (155131640 Ko) : copy 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    #8 (60000 Ko) : copy 0 1 3 5 7
    Superblock #7 has been choose.

    *** glibc detected *** double free or corruption (!prev): 0x0805b428 ***

    After talking to the parted maintainer, I used dumpe2fs to find the backup superblocks, and tried e2fsck with them.  Mostly that produced the same error message as before, although sometimes:

    e2fsck: Invalid argument while trying to open /dev/hda1

    I also tried debugfs, which failed like so:

    root@1[~]# debugfs /dev/hda1
    debugfs 1.38 (30-Jun-2005)
    /dev/hda1: Can't read an block bitmap while reading block bitmap

    Attempting to run debugfs on a different superblock gave:

    root@1[~]# debugfs /dev/hda1 -s 23887872 -b 4096
    debugfs 1.38 (30-Jun-2005)
    /dev/hda1: Bad magic number in super-block while opening filesystem

    By this stage I was resorting to using FreeSBIE, to see if different ext2 drivers might make a difference.  They didn't.  However, eventually I was using a magic long command line to automatically attempt to e2fsck using every backup superblock, but I got lazy and forgot to include the word "backup" in my grep.  This made it attempt the "e2fsck -b 0" line, and hey presto, it worked!

    Unfortunately, it didn't occur to me until too late to attempt to make a dd of the affected part of my disk before letting e2fsck finish (I don't have enough disk space for a proper backup: yeah yeah, I know...).  So I don't have the real problem data any more.  However, I am going to try to retrace my steps: once I've made some backups!

    Anyway, if anyone can explain what's going on here, please let me know.  This seems like a use case that e2fsck, mount, e2retrieve, gpart and debugfs have all failed to take into account, so it'd be nice if at least one of them could be improved to deal with this situation.

    Matthew Exon

    PS, it's long, but possibly useful, so here's the first part of debugfs's output which shows what's the status of my main superblock:

    Filesystem volume name:   <none>
    Last mounted on:          <not available>
    Filesystem UUID:          3d0f69d2-fb43-4728-be2c-fd9db26054d2
    Filesystem magic number:  0xEF53
    Filesystem revision #:    1 (dynamic)
    Filesystem features:      has_journal filetype sparse_super large_file
    Default mount options:    (none)
    Filesystem state:         clean with errors
    Errors behavior:          Continue
    Filesystem OS type:       Linux
    Inode count:              19284992
    Block count:              38782910
    Reserved block count:     1939145
    Free blocks:              17731330
    Free inodes:              18289054
    First block:              0
    Block size:               4096
    Fragment size:            4096
    Blocks per group:         32768
    Fragments per group:      32768
    Inodes per group:         16288
    Inode blocks per group:   509
    Last mount time:          Sat Feb  4 14:36:11 2006
    Last write time:          Sun Feb  5 14:44:02 2006
    Mount count:              23
    Maximum mount count:      30
    Last checked:             Thu Jan 19 16:47:57 2006
    Check interval:           0 (<none>)
    Reserved blocks uid:      0 (user root)
    Reserved blocks gid:      0 (group root)
    First inode:              11
    Inode size:               128
    Journal inode:            8
    Journal backup:           inode blocks

    • Vitaly Oratovsky

      I don't have an explanation for why -b 0 works for you.  It seems like your primary superblock was corrupt, so I don't understand how using -b 0 makes it "work".  It should instead have worked just fine with any properly chosen backup superblock (assuming that one is intact).

      I think one of your difficulties is not knowing where the backup superblocks really are. For example, you tried:

      root@1[~]# debugfs /dev/hda1 -s 23887872 -b 4096

      which resulted in:

      /dev/hda1: Bad magic number in super-block while opening filesystem

      That pretty much tells me that 23887872 is not where a backup superblock is stored.  When you created your filesystem with mke2fs it told you where it wrote the backup superblocks (but of course nobody ever records this information).  Here is a trick you can use to figure out where the backup superblocks are: run mke2fs (with the -n option!) on the partition that contains your broken filesystem.  The algorithm for picking alternate superblock locations is deterministic, so if you invoke it on the same size partition it will always pick the same locations.  The -n option prevents any actual modification of your filesystem, but even with -n, mke2fs will still print the alternate superblock locations.  Now you can use that information with debugfs to verify that there is indeed a legitimate backup superblock there.  Or you could try it with e2fsck (again with the -n option at first).

    • Leslie P. Polzer

      /* check whether group contains a superblock copy on file systems
       * where not all groups have one (sparse superblock feature) */
      int ext2_is_group_sparse(struct ext2_fs *fs, int group)
      {
              if (!fs->sparse)
                      return 1;
              if (is_root(group, 3) || is_root(group, 5) || is_root(group, 7))
                      return 1;

              return 0;
      }

      Here is_root(x, y) is true when x is an integer power of y (counting
      y^0 = 1), so on a sparse_super filesystem the groups that carry a
      backup superblock are group 1 and the powers of 3, 5 and 7.

    • Matthew Exon

      Matthew Exon - 2006-02-24

      Well, sorry for the delay in responding, but I've had various stupid issues trying to get the right files on the right machines at a time when I have time to work on them.  Anyway...


      I tried mke2fs, and it printed this result:

      Superblock backups stored on blocks:
          32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
          4096000, 7962624, 11239424, 20480000, 23887872

      This is exactly the same result dumpe2fs gave on the damaged filesystem.  I kinda expected this, actually, especially given that my filesystem was eventually recovered.  But it was worth checking.


      Maybe I'm being stupid here, but I don't know what that code is meant to be.  Is it from e2fsck, or parted, or is it something I'm supposed to compile and run to debug my filesystem?  I don't know what a "root" is in this case.  Certainly my filesystem appears to be sparse, since the output of dumpe2fs only shows some of my groups having a backup superblock.  Uh... what does that tell me?

      Finally: I've been trying to retrace my steps and reproduce the corruption, but parted (under Knoppix) is now refusing to resize.  I can't tell if this is something I'm doing wrong, or if something in all of this saga really has turned my filesystem into something with a strange layout:

      (parted) p
      Disk geometry for /UNIONFS/dev/hda: 0.000-152627.835 megabytes
      Disk label type: msdos
      Minor    Start       End     Type      Filesystem  Flags
      1          0.031 151495.773  primary   ext3        boot
      2     151495.774 152625.344  extended
      5     151495.805 152625.344  logical   linux-swap
      (parted) resize 1 0.031 140000
      No Implementation: This ext2 filesystem has a rather strange layout!  Parted
      can't resize this (yet).

      Googling for that (not particularly helpful) error message didn't explain what it's complaining about.  It's entirely possible that I'm doing something stupid here.  I also tried using just "0" as the start block, but that didn't help.  If there's still something wrong with my filesystem, and e2fsck is ignoring it, obviously I'd like to know!

      If anyone wants me to continue trying to pursue this, let me know.  Otherwise, given that I do now have my data back, I'm close to forgetting about the whole thing...

      Thanks for your help so far!

