So, I'm attempting to set up a RAID10 array with 6 identical 600GB 15k SAS drives. I am doing this on an Ubuntu 12.04 server installation running on a Dell PowerEdge 2950 server with a SAS6/iR controller (unused, but not disabled).
Summary:
Physical Slots 0-5 (total 6)
Original system disk in slot 0, others in 1-5
raider -R10 drives....
shutdown
swap slot 5 disk with slot 0 original disk
boot
raider --run .... FATAL ALERT!: /dev/sda is not in the correct slot (or something similar)
blkid: the first 5 drives are /dev/sda through /dev/sde, all share the same UUID, and all belong to the array /dev/md0; the last drive is /dev/sdf and carries the UUID of the original disk.
Unable to sync, since the last drive is being assigned /dev/sdf instead of /dev/sda.
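The mapping in the summary above can be double-checked without relying on sdX ordering, since the kernel exposes UUID-based symlinks under /dev/disk/. A minimal sketch — the UUID below is a placeholder; substitute the original disk's UUID as reported by blkid:

```shell
# Resolve which /dev/sdX node currently holds a given filesystem UUID.
# The UUID below is a placeholder -- replace it with the original
# disk's UUID from blkid.
UUID="0000aaaa-1111-2222-3333-444455556666"
readlink -f "/dev/disk/by-uuid/$UUID"
```

Whatever path that prints is the node the original disk currently has, regardless of which slot it sits in.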
More info and description:
Initially, I had my main system disk in slot 0 (of slots 0-5) and ran raider -R10 sda sdb etc., and everything seemed to work fine, except for a few errors at the end about not being able to create a folder /var/lock/dmraid.... Hopefully that doesn't lead to more problems. Anyway, I shut down the system and swapped my system disk (/dev/sda) in slot 0 with the last disk in slot 5 (/dev/sdf at the time of raider -R10). Now the system boots just fine and the RAID array is mounted as /dev/md0 on /. The problem is that when I run raider --run, I receive: FATAL ALERT!: /dev/sda is not in the correct slot!
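On the /var/lock/dmraid errors: one guess — not a confirmed raider requirement — is that the lock directory simply doesn't exist yet, in which case creating it before re-running would silence them:

```shell
# Hypothetical fix for the "/var/lock/dmraid" errors seen at the end of
# raider -R10: create the lock directory dmraid expects, if it is missing.
sudo mkdir -p /var/lock/dmraid
```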
The original system disk is most definitely in the last physical slot (verified by checking the UUID), but fdisk -l and blkid list it as /dev/sdf instead of /dev/sda, so I'm guessing raider is unable to recognize it as the original disk. Apparently this happens because each drive is assigned a /dev/sdX name based on its position in the slots, so there's no way for me to get /dev/sda to appear as the drive in the last slot while the first slot is occupied after swapping the disks.
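Since /dev/sdX names follow enumeration order rather than tracking a particular physical disk, the kernel also maintains stable symlinks under /dev/disk/ that survive slot swaps. For example:

```shell
# Stable device names that do not change when disks swap slots:
ls -l /dev/disk/by-id/    # links named after transport + model + serial
ls -l /dev/disk/by-uuid/  # links named after filesystem UUID
```

These symlinks point at whichever /dev/sdX node the disk currently occupies, which is why tools that key on sdX names can get confused while the UUID-based view stays correct.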
Is there any way to force the sync to /dev/sdf since I know without a doubt that it is the correct original disk?
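I don't know raider's internals, but if it drives mdadm underneath (an assumption on my part), a manual add along these lines might achieve the sync. /dev/sdf as the target is taken from the post, and the rebuild will overwrite that disk's old contents, so verify with mdadm --detail before running anything destructive:

```shell
# Inspect the degraded array and confirm which member is missing.
sudo mdadm --detail /dev/md0
# Destructive: the old data on this disk is overwritten by the rebuild.
# Add the swapped-in disk; mdadm starts resyncing onto it automatically.
sudo mdadm --manage /dev/md0 --add /dev/sdf
# Watch rebuild progress.
cat /proc/mdstat
```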
I should add that I do not have the Linux RAID module installed in Webmin, so the previous post by Steve may not apply to my problem.
Last edit: Tyler J 2013-04-20