
#48 Fails to understand md setup.

Milestone: Version_2.0
Status: open
Priority: 5
Updated: 2004-01-21
Created: 2004-01-21
Creator: Rich Walker
Private: No

2 md arrays -

Personalities : [linear] [raid0] [raid1] [raid5] [multipath]
read_ahead 1024 sectors
md1 : active raid5 hdb2[4] hdd1[2] hdf1[1] hdg1[0]
      359293248 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
md0 : active raid1 hdg2[0] hde1[1]
      19583168 blocks [2/2] [UU]

mdadm --detail /dev/md0:
/dev/md0:
Version : 00.90.00
Creation Time : Mon Jan 6 06:07:44 2003
Raid Level : raid1
Array Size : 19583168 (18.68 GiB 20.05 GB)
Device Size : 19583168 (18.68 GiB 20.05 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Wed Jan 21 21:57:34 2004
State : dirty, no-errors
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 1

Number Major Minor RaidDevice State
0 34 2 1 active sync /dev/hdg2
1 33 1 0 active sync /dev/hde1
UUID : a061d4aa:672bcb2d:bcfe98f7:ddaf2c70
Events : 0.265

mdadm --detail /dev/md1:
/dev/md1:
Version : 00.90.00
Creation Time : Wed Jan 21 22:30:33 2004
Raid Level : raid5
Array Size : 359293248 (342.65 GiB 367.92 GB)
Device Size : 119764416 (114.22 GiB 122.64 GB)
Raid Devices : 4
Total Devices : 5
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Wed Jan 21 22:30:33 2004
State : dirty, no-errors
Active Devices : 3
Working Devices : 4
Failed Devices : 1
Spare Devices : 1

Layout : left-symmetric
Chunk Size : 64K

Number Major Minor RaidDevice State
0 34 1 0 active sync /dev/hdg1
1 33 65 1 active sync /dev/hdf1
2 22 65 2 active sync /dev/hdd1
3 0 0 3 faulty
4 3 66 4 spare /dev/hdb2
UUID : 31ac3ac8:03f6b6ce:d537fc8a:8bd1e4b7
Events : 0.1

/dev/md1 was produced using:

mdadm --create /dev/md1 -n 4 -c 64 -l 5 /dev/hdg1 /dev/hdf1 /dev/hdd1 /dev/hdb2
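
(Aside: mdadm normally assembles a freshly created RAID5 in degraded mode and
rebuilds onto the last listed member, which appears as a spare until the
resync finishes; that matches the faulty/spare pair in the table above. A
minimal sketch for checking whether that rebuild is still running, using the
device name above:)

# resync/recovery progress is reported by the md driver
cat /proc/mdstat

# once the rebuild completes, all four members should show "active sync"
mdadm --detail /dev/md1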

/dev/md0 has had several partitions moved in and out of
it over a period of time.

Neither array can be managed by EVMS 2.2.2:

Jan 21 23:17:53 MDRaid1RegMgr: Region md/md0 : MD
superblock has spare_disks=1, found 0.

Jan 21 23:17:53 MDRaid5RegMgr: RAID5 array md/md1 is
missing the member with RAID index 3. The array is
running in degrade mode.

EVMS has offered to fix md0, but that hasn't made any
difference.

At present, I can't manage either of the MD arrays in EVMS.

Any ideas?

Discussion

  • Rich Walker - 2004-01-21

    Oh, one thing I should have mentioned: the RAID array md1 is
    currently being reconstructed...

     
  • Mike Tran - 2004-01-26

    Rich,

    Due to a SourceForge problem last week, I replied on the evms-devel
    mailing list (I guess that you have not subscribed to the list). Anyway,
    how is your md1 array now, after the reconstruction? If you still have
    problems, please run evmsgui or evmsn with "-d everything" and send
    the detailed /var/log/evms-engine.log file to me at mhtran@us.ibm.com.

    Regards,
    Mike T.
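
    (Concretely, the requested steps amount to something like this minimal
    sketch, assuming the default log path named above; use evmsn instead of
    evmsgui on a console-only system:)

    evmsgui -d everything            # reproduce the problem with full debug logging
    bzip2 /var/log/evms-engine.log   # compress the log before mailing it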

     
  • Orion Poplawski - 2004-05-17

    I'm seeing something similar:

    May 17 15:15:22 MDRaid1RegMgr: Region md/md1 : MD
    superblock has active_disks=2, found 0.

    May 17 15:15:22 MDRaid1RegMgr: Region md/md1 : MD
    superblock has working_disks=2, found 0.

    May 17 15:15:22 MDRaid1RegMgr: Region md/md1 is corrupt.
    Using the Fix... function, it may be possible to bring it
    back to normal state.

    May 17 15:15:22 MDRaid1RegMgr: Region md/md0 : MD
    superblock has active_disks=2, found 0.

    May 17 15:15:22 MDRaid1RegMgr: Region md/md0 : MD
    superblock has working_disks=2, found 0.

    May 17 15:15:22 MDRaid1RegMgr: Region md/md0 is corrupt.
    Using the Fix... function, it may be possible to bring it
    back to normal state.

    The system is a fresh Fedora Core 1 install, with the
    RAID/LVM setup done by kickstart.

    kernel is: 2.4.27-pre2-evms2.3.2

    evms program version is 2.3.3

    md0 is a raid1 hosting /boot
    md1 is a raid1 hosting the root volume group with the /,
    /usr, and /var filesystems as logical volumes
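
    (A minimal sketch for cross-checking the layout described above; it assumes
    the standard LVM tools are installed:)

    df /boot             # should show /boot mounted from /dev/md0
    pvdisplay /dev/md1   # should report md1 as a physical volume of the root VG
    lvdisplay            # lists the /, /usr and /var logical volumes
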
    [root@alexandria root]# mdadm --detail /dev/md0
    /dev/md0:
    Version : 00.90.00
    Creation Time : Mon May 17 07:27:52 2004
    Raid Level : raid1
    Array Size : 6144768 (5.86 GiB 6.29 GB)
    Device Size : 6144768 (5.86 GiB 6.29 GB)
    Raid Devices : 2
    Total Devices : 2
    Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon May 17 08:50:44 2004
    State : dirty, no-errors
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0

    Number Major Minor RaidDevice State
    0 3 2 0 active sync /dev/hda2
    1 22 2 1 active sync /dev/hdc2
    UUID : 7570eaa5:43e1e0b4:3f0604df:b6ae4be1
    Events : 0.5

    [root@alexandria root]# mdadm --detail /dev/md1
    /dev/md1:
    Version : 00.90.00
    Creation Time : Mon May 17 07:27:58 2004
    Raid Level : raid1
    Array Size : 48064 (46.94 MiB 49.22 MB)
    Device Size : 48064 (46.94 MiB 49.22 MB)
    Raid Devices : 2
    Total Devices : 2
    Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Mon May 17 08:50:44 2004
    State : dirty, no-errors
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0

    Number Major Minor RaidDevice State
    0 3 1 0 active sync /dev/hda1
    1 22 1 1 active sync /dev/hdc1
    UUID : 0154dae8:576cff18:3eeb295e:60aff867
    Events : 0.5

     
  • Mike Tran - 2004-05-18

    Hello opoplawski,

    Could you please run evmsgui (or evmsn) with "-d everything", then
    bzip the EVMS log file /var/log/evms-engine.log? I need the
    log file to see what is going on. You may send it
    directly to me at mhtran@us.ibm.com.

    Thanks,
    Mike T.

     
  • Orion Poplawski - 2004-06-09

    Did the logs help?

     
