From: James L. <jam...@ho...> - 2005-11-30 01:06:34
>From: Mike Tran <mh...@us...>
>To: evm...@li...
>Subject: Re: [Evms-devel] Possible to create a degraded RAID5 array with EVMS?
>Date: Tue, 29 Nov 2005 18:50:34 -0600
>
>Mike Tran wrote:
>
>>On Sat, 2005-11-26 at 14:42, James Lee wrote:
>>
>>>Hi there,
>>>
>>>I'm having some more trouble with getting EVMS and mdadm to play nicely
>>>together (even after upgrading EVMS)...
>>>
>>>The steps I'm taking are:
>>>
>>>1. Starting with an empty drive (wiped it by zero-filling the start and
>>>end of drive to make sure there's no residual partition table
>>>information). Create two logical partitions (/dev/sdb5 and /dev/sdb6).
>>>
>>>2. Use mdadm to create a degraded "3-drive" RAID5 array called /dev/md0:
>>>"mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb5 /dev/sdb6
>>>missing".
>>>
>>>3. Start EVMS. It correctly detects the degraded RAID5 array on
>>>/dev/md0.
>>>
>>>4. In EVMS, create 4 partitions on /dev/sdb, each one just over half the
>>>size of the partitions in the RAID5 array. Create two 2-drive RAID0
>>>stripes from these partitions.
>>>
>>>5. Add one of these RAID0 arrays to the RAID5 array. Wait for it to
>>>resync. RAID5 array is now active and non-degraded.
>>>
>>>6. Create an EXT3 filesystem and bung some files on it.
>>>
>>>7. Everything working fine so far. Now expand the RAID5 array with the
>>>other RAID0 array. This seems to work fine. No data lost on the
>>>partition and no errors.
>>>
>>>8. Reboot. When next starting EVMS, I get the following errors:
>>
>>In theory, this should work!!! I will try your scenario to find out what
>>went wrong.
>
>I could not reproduce this problem using evms 2.5.3. Did you wait for the
>raid5 expand to complete before rebooting the machine?
>
>--
>Mike T.

Thanks for looking into this Mike. Yes, the RAID5 expand had completed
successfully (and the machine was idle for a few hours before rebooting,
with the array working fine).
Is it possible that these problems are caused by a residual superblock
left over from a previous array? To save time, I only wiped the first and
last million sectors of the drive (rather than zeroing the entire 320GB,
which takes several hours). Maybe I should try again with a completely
clean drive.

The version of mdadm I'm using doesn't support version-1 superblocks
AFAIK, which is why I've had to use the older version-0.90 superblocks. I
can't see myself having more than 27 devices in this array, or moving it
over to a byte-swapped (i.e. SPARC?) machine, so I should be OK.
Presumably support for the older superblock format will continue into the
future?
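One thing worth checking about the partial wipe: a version-0.90 md
superblock lives near the end of each *component device* (here, each
partition), not near the end of the whole drive, so wiping the first and
last million sectors of /dev/sdb can easily miss the old superblocks on
/dev/sdb5 and /dev/sdb6. A rough sketch of where the superblock sits
(assuming the v0.90 layout: 64 KiB before the end of the component,
rounded down to a 64 KiB boundary; the sizes below are made-up examples):

```shell
# Compute the starting sector of a v0.90 md superblock.
# Assumes the standard v0.90 placement: offset = (size rounded down to a
# 64 KiB boundary) - 64 KiB, with sizes in 512-byte sectors.
sb_offset_sectors() {
    # $1 = component size in 512-byte sectors (e.g. from `blockdev --getsz`)
    echo $(( ($1 & ~127) - 128 ))   # 128 sectors = 64 KiB
}

# Example: a 1,000,000-sector component keeps its superblock around
# sector 999808 of that component -- which, for a partition in the middle
# of the drive, is nowhere near the start or end of the whole disk.
sb_offset_sectors 1000000
```

In practice, `mdadm --zero-superblock /dev/sdb5` (and the same for each
other component) is a quicker and safer way to clear stale md metadata
than zeroing sector ranges by hand.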