Thread: Re: [SSI-users] RH & LVM/RAID?
From: SAMPATHKUMAR K. K. <sam...@in...> - 2004-03-01 08:51:47
Hi Jiann,

I tried setting up LVM on non-root partitions (and hence non-root filesystems) on a 2-node OpenSSI cluster, creating a filesystem on the Logical Volumes created, and using the filesystem from different nodes in the OpenSSI cluster. That works! I have attached the details below. Also, please look at my notes and observations towards the end of this email:

[root@Node 1]# cluster -v
1: UP
2: UP
[root@Node 1]# clusternode_num
1
[root@Node 1]# pvcreate /dev/sdb1
pvcreate -- physical volume "/dev/sdb1" successfully created
[root@Node 1]# vgcreate vg_ssi_test /dev/sdb1
vgcreate -- INFO: using default physical extent size 4 MB
vgcreate -- INFO: maximum logical volume size is 255.99 Gigabyte
vgcreate -- doing automatic backup of volume group "vg_ssi_test"
vgcreate -- volume group "vg_ssi_test" successfully created and activated
[root@Node 1]# lvcreate -L 200 -n lv_ssi_test1 vg_ssi_test
[root@Node 1]# ls /dev/vg_ssi_test
group  lv_ssi_test1
[root@Node 1]# ls /proc/lvm/VGs
vg_ssi_test
[root@Node 1]# ls /proc/lvm/VGs/vg_ssi_test
group  LVs  PVs
[root@Node 1]# ls /proc/lvm/VGs/vg_ssi_test/LVs
lv_ssi_test1
[root@Node 1]# ls -l /dev/vg_ssi_test
total 0
crw-r-----    1 root     disk     109,   0 Mar  1 12:27 group
brw-r-----    1 root     root      58,   0 Dec 31  1969 lv_ssi_test1
[root@Node 1]# vgdisplay -v
--- Volume group ---
VG Name               vg_ssi_test
VG Access             read/write
VG Status             available/resizable
VG #                  0
MAX LV                256
Cur LV                1
Open LV               1
MAX LV Size           255.99 GB
Max PV                256
Cur PV                1
Act PV                1
VG Size               8.46 GB
PE Size               4 MB
Total PE              2167
Alloc PE / Size       50 / 200 MB
Free  PE / Size       2117 / 8.27 GB
VG UUID               OsIt6W-O85A-10p7-rhpF-VClV-3j18-d5ec1H

--- Logical volume ---
LV Name               /dev/vg_ssi_test/lv_ssi_test1
VG Name               vg_ssi_test
LV Write Access       read/write
LV Status             available
LV #                  1
# open                1
LV Size               200 MB
Current LE            50
Allocated LE          50
Allocation            next free
Read ahead sectors    1024
Block device          58:0

--- Physical volumes ---
PV Name (#)           /dev/sdb1 (1)
PV Status             available / allocatable
Total PE / Free PE    2167 / 2117
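[Editor's note: the vgdisplay figures above are internally consistent; a quick cross-check of the extent arithmetic, using only numbers taken from the transcript:]

```shell
# Sizes reported by vgdisplay are just physical-extent counts times PE size.
pe_size_mb=4      # "PE Size        4 MB"
total_pe=2167     # "Total PE       2167"
alloc_pe=50       # "Alloc PE / Size  50 / 200 MB"
vg_mb=$(( total_pe * pe_size_mb ))
lv_mb=$(( alloc_pe * pe_size_mb ))
echo "VG: ${vg_mb} MB"   # 8668 MB, i.e. ~8.46 GB as reported
echo "LV: ${lv_mb} MB"   # 200 MB, matching "lvcreate -L 200"
```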
[root@Node 1]# mkfs -t ext2 /dev/vg_ssi_test/lv_ssi_test1
mke2fs 1.32 (09-Nov-2002)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
51200 inodes, 204800 blocks
10240 blocks (5.00%) reserved for the super user
First data block=1
25 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729

Writing inode tables: done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@Node 1]# mount /dev/vg_ssi_test/lv_ssi_test1 /mnt1
[root@Node 1]# mount
/dev/1/sda1 on / type ext3 (rw)
none on /proc type proc (rw)
usbdevfs on /proc/bus/usb type usbdevfs (rw)
devfs on /dev type devfs (rw)
usbdevfs on /proc/bus/usb type usbdevfs (rw)
/dev/1/vg_ssi_test/lv_ssi_test1 on /mnt1 type ext2 (rw)
[root@Node 1]# onnode 2 vgdisplay -v
vgdisplay -- "vg_ssi_test" is NOT active; try -D
[root@Node 1]# onnode 2 vgdisplay -D
vgdisplay -- no volume groups found
[root@Node 1]# onnode 2 mount
/dev/1/sda1 on / type ext3 (rw)
none on /proc type proc (rw)
usbdevfs on /proc/bus/usb type usbdevfs (rw)
devfs on /dev type devfs (rw)
usbdevfs on /proc/bus/usb type usbdevfs (rw)
/dev/1/vg_ssi_test/lv_ssi_test1 on /mnt1 type ext2 (rw)
[root@Node 1]# onnode 2 touch /mnt1/touch_from_node2
[root@Node 1]# onnode 2 ls /mnt1
lost+found  touch_from_node2
[root@Node 1]# ls /mnt1
lost+found  touch_from_node2

Please note the following:
- I did all the LVM setup and configuration changes only from the init node.
- "Directly" accessing LVM volumes (or, for that matter, any LVM object) from the other nodes did not work.

Also, please note that:
- I have not tried putting the root partition under LVM control.

I have done some investigation into the LVM implementation.
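[Editor's note: the mke2fs figures above also hang together; a quick sanity check using only numbers from the transcript (the 200 MB LV with 1024-byte blocks yields exactly 204800 blocks):]

```shell
# Cross-check mke2fs's reported geometry for the 200 MB ext2 filesystem.
lv_mb=200; block_size=1024
blocks=$(( lv_mb * 1024 * 1024 / block_size ))     # 204800, as reported
blocks_per_group=8192
inodes_per_group=2048
groups=$(( (blocks + blocks_per_group - 1) / blocks_per_group ))
echo "block groups: $groups"                        # 25
echo "inodes: $(( groups * inodes_per_group ))"     # 51200
echo "reserved (5%): $(( blocks * 5 / 100 ))"       # 10240 blocks
```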
All the LVM user-level commands rely on the root file system for various configuration files etc.; OpenSSI already provides that transparently! Supporting LVM from other nodes, as well as shared-disk support for LVM within OpenSSI, seems to open up some new, and possibly exciting, implementation approaches thanks to the "Single System Image" model. At the outset, it appears that OpenSSI may provide a convenient framework that can considerably simplify adapting LVM for these configurations.

Regards,
- Kishore

-----Original Message-----
From: ssi...@li... [mailto:ssi...@li...] On Behalf Of Jiann-Ming Su
Sent: Monday, March 01, 2004 10:48 AM
To: ssi...@li...
Subject: [SSI-users] RH & LVM/RAID?

Does OpenSSI work with LVM and md devices on RH now? The last time I tried
OpenSSI on RH (0.9.96), OpenSSI did not like the root partition being on
anything other than a standard device.

--
Jiann-Ming Su
js...@em...  404-712-2603
Development Team Systems Administrator
General Libraries Systems Division

-------------------------------------------------------
SF.Net is sponsored by: Speed Start Your Linux Apps Now.
Build and deploy apps & Web services for Linux with a free DVD software
kit from IBM. Click Now!
http://ads.osdn.com/?ad_id=1356&alloc_id=3438&op=click
_______________________________________________
Ssic-linux-users mailing list
Ssi...@li...
https://lists.sourceforge.net/lists/listinfo/ssic-linux-users
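[Editor's note: since Kishore's observation is that LVM administration only worked when run on the init node, one possible interim workaround is a small wrapper that forwards LVM commands there. This is a sketch, not from the original thread: `run_lvm` is a hypothetical helper name, and INIT_NODE=1 is an assumption about this particular cluster; `onnode` and `clusternode_num` are the OpenSSI tools shown in the transcript above.]

```shell
# Sketch: forward LVM metadata commands to the OpenSSI init node.
# Assumptions: init node is node 1; onnode/clusternode_num are on PATH.
INIT_NODE=1

run_lvm() {
    if [ "$(clusternode_num)" = "$INIT_NODE" ]; then
        "$@"                      # already on the init node; run locally
    else
        onnode "$INIT_NODE" "$@"  # hop to the init node first
    fi
}

# Example use:
#   run_lvm lvcreate -L 200 -n lv_ssi_test1 vg_ssi_test
```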