From: Gordan B. <go...@bo...> - 2007-10-10 13:45:42
On Wed, 10 Oct 2007, Marc Grimme wrote:

>>>> 2.1) Extracting rpms...Cannot find rpm "comoonics-cs". Skipping.
>>>> Cannot find rpm "GFS". Skipping.
>>>> Cannot find rpm "ccs". Skipping.
>>>> Cannot find rpm "dlm". Skipping.
>>>> Cannot find rpm "fence". Skipping.
>>>> Cannot find rpm "magma". Skipping.
>>>> Cannot find rpm "magma-plugins". Skipping.
>>>> Cannot find rpm "OpenIPMI-tools". Skipping.
>>>>
>>>> Where can these be found, and where should they be put for this step
>>>> to find them?
>>>
>>> Follow the instructions on
>>> http://www.opensharedroot.org/documentation/rhel5-gfs-shared-root-mini-howto
>>
>> I did. Nowhere does it mention yum installing the packages GFS, ccs,
>> dlm, fence, magma, magma-plugins or OpenIPMI-tools.
>
> Hm. You still seem to have old depfiles. Remove the comoonics-bootimage
> rpms and delete /etc/comoonics/bootimage and reinstall. This should go
> away then:
>
> [root@axqa03_1 ~]# mkinitrd /boot/initrd_sr-2.6.18-52.el5xen.img 2.6.18-52.el5xen
> Extracting rpms...Cannot find rpm "comoonics-cs". Skipping.
> Cannot find rpm "OpenIPMI-tools". Skipping.
> [ OK ]
> Retrieving dependent files...
> /opt/atix/comoonics-bootimage/boot-scripts/etc/chroot-lib.sh: line 333:
> [: !=: unary operator expected
> found 4306 [ OK ]
> Copying files...cp: cannot stat `/etc/profile.d/krb5.csh': No such file or directory
> cp: cannot stat `/etc/profile.d/krb5.sh': No such file or directory
> cp: cannot stat `/etc/sysconfig/comoonics-chroot': No such file or directory
> [ OK ]
> Copying kernelmodules (2.6.18-52.el5xen)... [ OK ]
> builddate_file [ OK ]
> [ OK ]
> Post settings .. [ OK ]
> Cpio and compress.. [ OK ]
> Cleaning up (/tmp/initrd.mnt.G24645, )... [ OK ]
> -rw-r--r-- 1 root root 42731 Oct 10 14:42 /boot/initrd_sr-2.6.18-52.el5xen.img

The problem went away when I manually yum installed the comoonics-cs and
OpenIPMI-tools RPMs.

>>>> 3) iSCSI - this doesn't appear to get included on the initrd. This is
>>>> kind of important. How do I add it, so that I can mount the SAN being
>>>> used for the shared GFS root?
>>>
>>> We did not implement it yet. Feel free to do it. Use the latest rpms
>>> and adapt the code. Latest HEAD rpms are attached.
>>> Please edit against them. And send either patches or the changed files
>>> directly to me.
>>
>> OK. Presumably, this would mean:
>>
>> 1) Install the standard iscsi-initiator-utils rpm into the OSR initrd.
>
> Yes.
>
>> 2) Copy /etc/iscsi/iscsid.conf and /var/lib/iscsi/* into the OSR initrd.
>
> No, the iscsid.conf should be created from cluster.conf, because of the
> IP of the iscsi-source. It might be that one server gets its root from
> one iscsi-device and another from another iscsi-device.
> Some customers have clusters with different roots for different nodes.

Err - I thought this was about _shared_ roots, not unshared roots. :-p

What makes iscsid.conf from cluster.conf, then? And what would configure
/var/lib/iscsi/*?

I've found I have to jump through a few hoops to get this working even
without the shared GFS. I have to set the connections to manual, set the
username/password up in iscsid.conf, then do the iscsi discovery, then
change just the volume I want to mount to automatic. Then it correctly
mounts on the correct /dev/sd? device.
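For reference, this is roughly the by-hand sequence I've been using (from
memory, so treat the portal IP and target name as placeholders for
whatever your SAN actually presents):

  # /etc/iscsi/iscsid.conf - don't log in to anything by default,
  # and set the CHAP credentials:
  #   node.startup = manual
  #   node.session.auth.username = someuser
  #   node.session.auth.password = somepass

  # discover what the SAN offers:
  iscsiadm -m discovery -t sendtargets -p 10.0.0.1

  # flip just the root volume to automatic...
  iscsiadm -m node -T iqn.2007-10.com.example:root -p 10.0.0.1 \
      --op update -n node.startup -v automatic

  # ...and log in now; the volume then appears as a normal /dev/sd? device
  iscsiadm -m node -T iqn.2007-10.com.example:root -p 10.0.0.1 --login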
>> 3) Add iscsi and iscsid init scripts to the relevant run-level on the
>> OSR initrd.
>
> There are no run-levels inside the initrd ;-( . So the starting has to
> be done manually within iscsi-lib and the iscsiLoad function.

Well, run-levels aren't what I meant as such. Are init scripts handled in
a similar way to the full setup? Even if it is a startup script invoking:

iscsid start
iscsi start

>> On a separate note - the instructions say that /etc/sysconfig/network
>> is to be unshared.
>
> Yes. Because of the hostname.
>
>> I think /etc/sysconfig/network-scripts should be unshared, too, unless
>> all IPs are assigned via DHCP - which may or may not be the case.
>> Although if they were assigned by DHCP, that would be kind of neat
>
> Yes, every ifcfg-eth? has to be host-dependent, unless using dhcp.
>
>> because it would mean you can purely network boot them via PXE (even if
>> the initrd is 50MB+!).
>
> It once worked that customers had ip=dhcp in the cluster.conf and the
> node got its cluster IP from a dhcp server. If you like, give it a try.

I have yet to get a single node booting off GFS as root, so I'm some way
off having a whole cluster running at the moment. Need to get the iSCSI
working first.

>>> Find more information here:
>>> http://www.opensharedroot.org/Members/marc/blog/comoonics-bootimage/concept-of-integrating-iscsi-support-to-latest-bootimage
>>> You should be able to add comments or just send me an email on what
>>> you think.
>>
>> Hmm. Do we need this specified in cluster.conf?
>> Are the rootsource, rootvolume and chrootenv tags the standard way to
>> achieve this?
>
> rootvolume always has to be given. It tells where to mount the rootfs
> from. rootsource is optional: I would use it for telling the cluster
> that the rootdevice itself is an iscsi one and its IP address is x.y.z.

Right, OK. But before this can work, surely the iscsi daemons have to be
running first? (See my sketch below the sig for what I mean.)

>>> If you need more bins or rpms included in the initrd (which I expect
>>> to be the case) have a look at /etc/comoonics/bootimage/files.initrd.d
>>> and /etc/comoonics/bootimage/rpms.initrd.d.
>>
>> Thanks for that. Do you envisage anything over and above what I listed
>> above being required? I'm not quite up to speed on the cluster.conf
>> stuff, but would the 3 steps to get iSCSI going be sufficient to get
>> the initrd connectable to the SAN at least? Once it gets that far, even
>> mounting the root manually wouldn't be too bad.
>
> To do anything manually you can always use two, or better three, nice
> options on the boot cmdline:
>
> com-debug: switches on debugging.
> com-step: adds steps to the boot process where you can jump into a shell
> with break, then leave and continue the boot process with exit.
> com-expert: immediately starts a shell after the initrd is loaded.

Well, at the moment it fails with the message that it can't find root and
drops me to the initrd shell. What I need at that point is functioning
iscsi components, so I think that is logically the next thing I need to
add to the initrd.

>> Incidentally, how is the fencing/cluster stuff handled in the initrd?
>> I didn't notice cman being included.
>
> It is, but RHEL5 has slightly different packages.

Right - so in theory, once there are multiple machines running, they'll
all wait in the initrd for enough machines to reach quorum, and then
mount the root and proceed from there?

Gordan
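P.S. To make the "daemons have to be running first" point concrete, this
is roughly what I picture iscsiLoad in iscsi-lib growing into. Entirely
untested, and the getRootsourcePortal/getRootsourceTarget helpers are
made up - they stand in for whatever ends up parsing the rootsource tag
out of cluster.conf. It also assumes the iscsi modules and binaries have
already been pulled into the initrd via rpms.initrd.d:

  function iscsiLoad() {
      # the tcp transport module has to be loaded before iscsid starts
      modprobe iscsi_tcp || return 1

      # no run-levels in the initrd, so start the daemon by hand
      /sbin/iscsid

      # hypothetical helpers: pull the portal IP and target name out
      # of the rootsource tag in cluster.conf
      local portal=$(getRootsourcePortal /etc/cluster/cluster.conf)
      local target=$(getRootsourceTarget /etc/cluster/cluster.conf)

      # discover and log in; the root volume then shows up as an
      # ordinary /dev/sd? device for the rootvolume mount to use
      iscsiadm -m discovery -t sendtargets -p "$portal" || return 1
      iscsiadm -m node -T "$target" -p "$portal" --login
  }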