On Fri, Oct 12, 2012 at 02:49:21PM -0500, Dave Kleikamp wrote:
> On 10/12/2012 10:33 AM, Robert Henney wrote:
> > dd of=/dev/sdc if=/dev/zero seek=640 count=1 bs=4096
> > dd of=/dev/sdc if=/dev/zero seek=935 count=1 bs=4096
> > dd of=/dev/sdc if=/dev/zero seek=1283 count=1 bs=4096
> > I would like to find out which files on the disk were in those blocks and
> > thus are now damaged. Since I have the logical block numbers I assume I
> > can get inode numbers from them somehow.
> The data structure in the inode describes the blocks of a file, but
> nothing points from the block back to the inode. Assuming that the
> blocks are part of allocated files, one could recursively run "hdparm
> --fibmap" on every file in the filesystem and parse the output to
> identify which files contain those blocks.
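For anyone trying this, a recursive fibmap search along those lines could be sketched roughly as below. The `/mnt` mount point and target LBA 5120 are made-up placeholders, and the extent-line format (`byte_offset begin_LBA end_LBA sectors`) is what current hdparm versions appear to print, so adjust to taste:

```shell
# Sketch only: report whether a file's extents cover a given device LBA,
# by parsing "hdparm --fibmap" output on stdin. Extent lines are assumed
# to look like "byte_offset begin_LBA end_LBA sectors"; header lines are
# skipped because their first field is non-numeric.
lba_in_fibmap() {
    # exit status 0 if any extent line on stdin contains the LBA in $1
    awk -v lba="$1" '
        $1 ~ /^[0-9]+$/ && $2+0 <= lba+0 && $3+0 >= lba+0 { found = 1 }
        END { exit !found }'
}

# Run as root against every regular file on the filesystem, e.g.:
#   find /mnt -xdev -type f | while IFS= read -r f; do
#       hdparm --fibmap "$f" | lba_in_fibmap 5120 && printf '%s\n' "$f"
#   done
```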
Trying --fibmap sounded worthwhile, so I did that, and it seems the smallest
begin_LBA of any file currently in the filesystem is 479688, which is far
above the highest of those 3 unreadable sectors. I checked those sectors, and
none of the 3 still appears to contain all zeros, but by this point I had
already run jfs_fsck (I now wish I had saved its output). This makes me
think, or hope, that while jfs was making use of those particular sectors, it
was using them for something that jfs_fsck was able to regenerate.
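One caveat when comparing those numbers: the dd commands above seek in 4096-byte blocks, while --fibmap's begin_LBA is in 512-byte sectors, so each overwritten block spans 8 sectors starting at block * 8 (assuming, as the dd commands to /dev/sdc suggest, that the filesystem starts at the beginning of the device). A quick sanity check of the comparison:

```shell
# Each overwritten 4096-byte block spans 8 512-byte sectors, starting
# at block * 8 (assumes the filesystem begins at LBA 0 of /dev/sdc).
for blk in 640 935 1283; do
    echo "fs block $blk -> LBA $((blk * 8))-$((blk * 8 + 7))"
done
```

All three land at or below LBA 10271, still far under begin_LBA 479688, so the conclusion holds either way the units are read.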
> There should be an easy way to determine from the block maps whether
> those blocks are allocated or not, but I can't think of one. There are
> 8192 blocks represented in each dmap page, so those three blocks are
> mapped in the first dmap page. Looking at the dmaps in jfs_debugfs is
> awkward. Maybe it would be useful to add some new command to jfs_debugfs
> to walk the metadata and tell you how a block is used. I might play with
> that over the weekend if I find time.
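As a trivial check of the dmap arithmetic above (8192 blocks per dmap page, so a block's dmap page index is block / 8192):

```shell
# All three overwritten blocks are below 8192, so each maps to page 0,
# i.e. the first dmap page, as noted above.
for blk in 640 935 1283; do
    echo "block $blk is in dmap page $((blk / 8192))"
done
```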
Finding out just what jfs is using those blocks for would give some peace of
mind, I suppose, but it doesn't seem as critical at this point for this
particular recovery. I would definitely be interested in such a command if it
were to become available, though.