[Jfs-discussion] Need help. Unable to mount jfs again after power failure (JFS state: mounted??)
From: <wa...@gm...> - 2004-03-26 16:52:08
Hi,

I ran into a strange but serious problem when my RAID restarted after a
short power loss. I am using a 3ware RAID controller; the device it
provides is /dev/sdb (the RAID5 partition). I am running a 2.4.21 Linux
kernel with the built-in JFS and 3ware drivers. A low-level check of the
RAID5 array with the 3ware tools confirms the integrity and functionality
of the RAID itself. /dev/sdb holds a JFS filesystem.

After the power loss, "mount /dev/sdb /mnt" tells me:

    host:~# mount /dev/sdb /mnt
    mount: wrong fs type, bad option, bad superblock on /dev/sdb,
           or too many mounted file systems
           (aren't you trying to mount an extended partition,
           instead of some logical partition inside?)

"mount -t jfs /dev/sdb /mnt" reports the same thing. The device does exist:

    host:~# cat /proc/partitions
    major minor  #blocks    name rio   rmerge rsect   ruse   wio wmerge wsect wuse  running use    aveq
       8  0      17921835   sda  1478  3173   37028   7840   865 221    8794  25100 0       8710   32940
       8  1      32098      sda1 6     18     48      30     1   0      2     0     0       30     30
       8  2      996030     sda2 1     0      8       10     0   0      0     0     0       10     10
       8  3      16892347   sda3 1470  3152   36964   7790   864 221    8792  25100 0       8670   32890
       8  16     1715814464 sdb  56982 344926 3214952 446240 0   0      0     0     0       432040 446220

My first action was a read-only check with the jfsutils:

    host:~# jfs_fsck -v -n /dev/sdb
    jfs_fsck version 1.1.4, 30-Oct-2003
    processing started: 3/26/2004 14.32.21
    The current device is:  /dev/sdb
    Open(...READONLY...) returned rc = 0
    Primary superblock is valid.
    The type of file system for the device is JFS.
    Block size in bytes:  4096
    Filesystem size in blocks:  428953616
    **Phase 1 - Check Blocks, Files/Directories, and Directory Entries
    **Phase 2 - Count links
    **Phase 3 - Duplicate Block Rescan and Directory Connectedness
    **Phase 4 - Report Problems
    **Phase 5 - Check Connectivity
    **Phase 6 - Perform Approved Corrections
    **Phase 7 - Verify File/Directory Allocation Maps
    **Phase 8 - Verify Disk Allocation Maps
    Filesystem Summary:
    Blocks in use for inodes:  94568
    Inode count:  756544
    File count:  700866
    Directory count:  37027
    Block count:  428953616
    Free block count:  81007537
    1715814464 kilobytes total disk space.
        158237 kilobytes in 37027 directories.
    1391320657 kilobytes in 700866 user files.
             0 kilobytes in extended attributes
             0 kilobytes in access control lists
        621896 kilobytes reserved for system use.
     324030148 kilobytes are available for use.
    File system checked READ ONLY.
    Filesystem is clean.
    processing terminated: 3/26/2004 14:40:28  with return code: 0  exit code: 0.

After some looking around I found that "jfs_tune -l /dev/sdb" tells me:

    host:~# jfs_tune -l /dev/sdb
    jfs_tune version 1.1.4, 30-Oct-2003

    JFS filesystem superblock:

    JFS magic number:       'JFS1'
    JFS version:            1
    JFS state:              mounted
    JFS flags:              JFS_LINUX  JFS_COMMIT  JFS_GROUPCOMMIT  JFS_INLINELOG
    Aggregate block size:   4096 bytes
    Aggregate size:         3431458256 blocks
    Physical block size:    512 bytes
    Allocation group size:  4194304 aggregate blocks
    Log device number:      0x810
    Filesystem creation:    Fri Oct 17 02:18:13 2003
    Volume label:           'STORAGE'

Maybe the problem is that the JFS on /dev/sdb still "thinks" it is
mounted, because of the "JFS state: mounted"?

If that might be the reason, is there any way to reset the state to "not
mounted", or whatever it should be? Or, if anybody else has encountered
the same problem: how do I get rid of it?

There is lots of data on that RAID which is (as usual, of course) vital
to me... It looks like there is still hope. Please help?

Regards,
Henk Birkholz
Bremen / Germany
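P.S. One thing I plan to try next, in case it helps anyone with the same
symptom: the generic "wrong fs type, bad option, bad superblock" message
from mount usually hides the real reason, which the kernel JFS driver
logs to the ring buffer. So right after the failed mount, something like
this should show what the driver actually complains about (I have not
captured that output yet):

    host:~# mount -t jfs /dev/sdb /mnt   # fails with the generic error
    host:~# dmesg | tail                 # the jfs driver's real complaint
                                         # should be in the last few lines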
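Also, if the stale "JFS state: mounted" flag really is the culprit, my
understanding is that a read-write fsck that replays the journal should
clear it; the read-only run above (-n) deliberately changes nothing on
disk. A rough sketch of what I have in mind, untested so far; I am not
sure every jfsutils build has the --replay_journal_only switch, so treat
that line as a guess and fall back to the plain run:

    host:~# jfs_fsck --replay_journal_only /dev/sdb   # replay the log only, if supported
    host:~# jfs_fsck /dev/sdb                         # otherwise: full read-write check,
                                                      # which also replays the journal
    host:~# jfs_tune -l /dev/sdb | grep 'JFS state'   # should no longer say "mounted"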