From: Feizhou <fe...@gr...> - 2005-07-27 17:00:51
>>>> unless you have free hotswap SCSI bays available, adding another
>>>> software raid will have to mean another box with iet.
>>>
>>> but raid5 can not dynamically add disks and expand the size, right?
>>
>> no, hence the need for lvm...but at the same time, lvm does not
>> tolerate disk loss.
>
> so that is why i am thinking lvm on top of several raid5 or 1

where the raid5/1 is composed of devices that are available through
iscsi.

>>>> At the same blinking time, lvm does not tolerate losing physical
>>>> volumes, so if an iet box goes down....
>>>
>>> iet is on top of an lv, so even if the iet box crashes, hopefully
>>> the lvm is still there.
>>
>> you still need lvm on the database box itself to do dynamic resizing.
>
> why? i resize the lv that iet exports and then the initiator side
> sees the new-size scsi disk.

this does not work. you cannot just resize a device out from under the
initiator; that will drive the kernel/filesystem on the initiator side
nuts.

>>>> very murky, this one
>>>
>>> :P any suggestions?
>>
>> i had a hard think on this one...linux lvm doesn't do fault
>> tolerance but md does...so where you use these pieces (on the
>> initiator box, the target box, or both) and how to combine them is a
>> bit sketchy at the moment.
>
> we need a complete solution...

which comes down to multiple iet boxes: each iet box locally raided for
redundancy. Then the initiator box uses raid across devices exported by
multiple iet boxes and treats the resulting md device as a PV for the
initiator-side LVM. Adding to the VG will then mean adding a set of iet
boxes.
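
The layout in that last paragraph could be sketched roughly as follows.
This is only an illustration of the idea, not a tested recipe: the
device names (/dev/sdb, /dev/sdc), the VG name (vg_db), and the target
addresses are all made up, and it assumes an open-iscsi initiator
(iscsiadm) on the database box.

```shell
# Sketch of "raid on multiple iet exported devices, used as a PV".
# All names/addresses below are hypothetical.

# 1. Log in to targets exported by two independent iet boxes
#    (each box is itself locally raided for redundancy).
iscsiadm -m discovery -t sendtargets -p 192.168.0.10
iscsiadm -m discovery -t sendtargets -p 192.168.0.11
iscsiadm -m node --login

# 2. Mirror the two exported devices with md, so losing one whole
#    iet box does not lose data.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# 3. Use the md device as a physical volume for the initiator-side
#    LVM, which keeps dynamic resizing on the database box itself.
pvcreate /dev/md0
vgcreate vg_db /dev/md0

# 4. Growing the VG later means bringing up another *set* of iet
#    boxes, mirroring their exports into a new md device, and adding
#    that as a further PV:
#    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde
#    vgextend vg_db /dev/md1
```

Note the design point: md supplies the fault tolerance that LVM lacks,
and LVM supplies the dynamic growth that a fixed md array lacks, which
is exactly the division of labour the thread converges on.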