From: Mike M. <mik...@ya...> - 2007-06-29 06:22:45
Hi. I have two RAID5 arrays that I was trying to combine into a single volume using LVM2 under EVMS. I had everything working great, but a reboot caused a lot of problems. I had to set device_seize_prompt to no to clear the first round of errors, but now I have another problem. I get this error message:

    LVM2: The PV with index 1 was not found when discovering container
    lvm2/media. An "error" object will be created in its place. Any
    regions in this container that map to this PV will return I/O errors
    if they attempt to read or write to this PV. Regions that don't map
    to this PV will work normally.

As a result, the main volume, which is 2.5 TB in size (1.2 TB on md2 and 1.3 TB on md1), can't be mounted. The system doesn't detect the XFS filesystem on it (XFS is the filesystem I'm using) and can't find a superblock. For a while I thought I had lost all the data (1.1 TB's worth), but then I discovered an entry in the YaST partitioner (this is a SUSE 10.2 system) called /dev/mapper/media-media2. That device I can mount fine, and I can access all the data in the filesystem. I have it mounted now and service is restored, but if I reboot it goes away, and I have to futz around a lot to get it working again (futz meaning: run evmsgui, exit, then run the partitioner tool, etc.) before I can remount.

How do I clean this mess up? I can't seem to delete the error object that was created (it's the size of md2, the 1.2 TB array). I would really like to avoid having to move all this data off the arrays and rebuild from scratch; the volume would be offline for a couple of days while I moved everything across the LAN to storage on other systems.

Thanks,
Mike
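Concretely, the after-reboot workaround I described (run evmsgui, exit, run the partitioner, then mount) boils down to something like the following once the mapper device reappears. The mount point /mnt/media is just an example; the mapper name media-media2 is the one the partitioner shows me:

```shell
# List the device-mapper nodes currently set up; after the
# evmsgui/partitioner dance, media-media2 shows up in this list.
dmsetup ls

# Mount the surviving mapper device directly (the filesystem is XFS).
mount -t xfs /dev/mapper/media-media2 /mnt/media
```

These are root-only, hardware-specific admin commands for this particular system, so they are shown only as a sketch of the manual recovery steps.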