From: Gordan B. <go...@bo...> - 2008-02-10 12:48:02
Marc Grimme wrote:
>>>> I'm trying to add support for DRBD, and trying to stick with doing it
>>>> in a way consistent with the iSCSI support as far as possible. I'm
>>>> seeing a few weird things, though.
>>>>
>>>> In linuxrc.generic.sh:
>>>> [...]
>>>> # start iscsi if appropriate
>>>> isISCSIRootsource $rootsource
>>>> if [ $? -eq 0 ]; then
>>>>   loadISCSI
>>>>   startISCSI $rootsource $nodename
>>>> fi
>>>>
>>>> # start drbd if appropriate
>>>> isDRBDRootSource $rootsource
>>>> if [ $? -eq 0 ]; then
>>>>   loadDRBD
>>>>   startDRBD $rootsource $nodename
>>>> fi
>>>> [...]
>>>>
>>>> and in cluster.conf:
>>>> [...]
>>>> <clusternodes>
>>>>   <clusternode name="sentinel1c" nodeid="1" votes="2">
>>>>     <com_info>
>>>>       <rootsource name="drbd"/>
>>>> [...]
>>>
>>> Agreed, that's the place. You also have a bootoption rootsource to
>>> override it. Normally we would expect rootsource to be some kind of URL,
>>> for example iscsi://iscsi-host/export-name, or it might be
>>> drbd://slave/export or something. I cannot remember the semantics there,
>>> too long ago ;-).
>>>
>>> Are you also aware of the com-expert bootoption? It is quite handy, as
>>> it brings you to a shell before the initrd process is started, so you
>>> can change some things and see whether they work or not.
>>>
>>> Answer to the iSCSI question:
>>> [marc@mobilix-07 boot-scripts]$ source etc/iscsi-lib.sh
>>> [marc@mobilix-07 boot-scripts]$ isISCSIRootsource drbd
>>> [marc@mobilix-07 boot-scripts]$ echo $?
>>> 1
>>> [marc@mobilix-07 boot-scripts]$ isISCSIRootsource iscsi
>>> [marc@mobilix-07 boot-scripts]$ echo $?
>>> 0
>>> [marc@mobilix-07 boot-scripts]$
>>>
>>> So if we are going iSCSI, we return a '0', as a successfully executed
>>> shell script does.
>>>
>>>> From this, one would assume that isISCSIRootsource would return false
>>>> and iSCSI would not get initialized, while isDRBDRootSource would
>>>> return true and DRBD would get initialized.
>>>>
>>>> This is the opposite of what I'm seeing.
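[The return-code contract shown in Marc's shell session can be sketched as a shell function. This is a hypothetical approximation for illustration only, not the actual code from the boot-scripts; the real matching logic may differ.]

```shell
# Sketch of a DRBD counterpart to isISCSIRootsource, following the same
# convention: return 0 (shell "true") when the rootsource matches, and
# non-zero otherwise, so it can be used directly in an `if` test.
isDRBDRootSource() {
    case "$1" in
        drbd|drbd://*) return 0 ;;   # bare keyword or drbd://node/resource URL
        *)             return 1 ;;
    esac
}

# Mirrors the quoted iscsi-lib.sh session:
isDRBDRootSource drbd  && echo "drbd  -> 0"
isDRBDRootSource iscsi || echo "iscsi -> 1"
```

Returning the match status as the function's exit code (rather than echoing a value) is what lets linuxrc.generic.sh write `isDRBDRootSource $rootsource; if [ $? -eq 0 ]; then ...`.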
>>>> iSCSI gets started, and the modules loaded. DRBD doesn't.
>>>>
>>>> Am I misunderstanding where $nodename is coming from? I'm not
>>>> particularly bothered by iSCSI trying to start (and failing, because
>>>> the iscsi packages aren't in the rpm list), but that makes DRBD not
>>>> starting all the more puzzling. :-/
>>>
>>> $nodename is the nodename that belongs to the node in the boot process,
>>> but it is not necessarily the hostname. The function
>>> cluster-lib.sh/getClusterFSParameters returns it. It calls a few other
>>> functions in order to stay more independent from cluster.conf (that was
>>> the idea in the first place).
>>
>> Ah, I see. I think it may be necessary to do
>>   hostname $nodename
>> in that case, since DRBD requires the hostname to match the node name
>> in the DRBD configuration.
>
> Hmm. That would be a way.

Yes, I have that working. I just have another bug to sort out: DRBD is not
getting initialized correctly during boot-up, but it works fine if I issue
the same command after the boot fails due to DRBD not being initialized.

>>>> Further on (unrelated to this), I get cman starting, but ccsd failing,
>>>> so the boot-up aborts. But ps -aux | grep ccsd shows that ccsd is in
>>>> fact already running. I haven't seen this behaviour before. The only
>>>> thing I can think of that is different is that this is a 2-node
>>>> cluster, which is much smaller than what I usually work with. (Yes, I
>>>> did set <cman two_node="1" expected_votes="2"/>.)
>>>
>>> Enable syslogging to a syslog server. <syslog name="server"/> does the
>>> trick; then see what it tells you.
>>
>> I'm feeling a bit silly now. The answer was in the line I pasted: for
>> two_node="1", expected_votes needs to be 1 as well. :-/
>>
>> On a separate note, is there a configurable way to disable LVM/CLVM,
>> even if only at mkinitrd time? All it is achieving for me is throwing up
>> ugly "FAILED" startup messages because I'm not using LVM.
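[For reference, the two-node fix identified above as a cluster.conf fragment: when cman runs with two_node="1", it requires expected_votes to be 1, not 2. A minimal sketch of just the relevant element:]

```xml
<!-- two-node special case: expected_votes must be 1 when two_node="1" -->
<cman two_node="1" expected_votes="1"/>
```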
>> I could comment it out of the boot scripts, but that seems a bit nasty.
>
> Ok, so what we did last time was to introduce the lvm_check function
> (wasn't it because of your request?! ;-) ) in linuxrc.generic.sh, line
> 288. That function checks whether your rootdevice (specified in
> cluster.conf) is an LV or not (seeing if it exists and if it has LVM
> major/minor numbers). Doesn't that functionality work?

Good point, I only saw that after I sent the last post. The problem is
possibly to do with the fact that DRBD isn't getting initialized properly.
I'll get that sorted out first and see if it makes the problem go away.
Thanks.

Gordan
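[The lvm_check behaviour Marc describes ("does the rootdevice exist and does it have LVM major/minors") can be sketched roughly as below. This is an approximation for illustration, not the actual linuxrc.generic.sh code; the real check may differ. LVM logical volumes appear as device-mapper block devices, so comparing the device's major number against the device-mapper major registered in /proc/devices is one way to test for an LV.]

```shell
# Return 0 if $1 looks like an LVM logical volume (a device-mapper block
# device), non-zero otherwise. If this returns non-zero, starting CLVM
# could be skipped, avoiding the "FAILED" messages on non-LVM setups.
is_lvm_rootdevice() {
    dev="$1"
    [ -b "$dev" ] || return 1     # missing, or not a block device at all

    # Major number registered for device-mapper on this system, if any.
    dm_major=$(sed -n 's/^ *\([0-9]*\) device-mapper$/\1/p' /proc/devices)
    [ -n "$dm_major" ] || return 1    # device-mapper not even loaded

    # stat prints the device major in hex; convert and compare.
    dev_major=$(( 0x$(stat -c %t "$dev") ))
    [ "$dev_major" -eq "$dm_major" ]
}
```

A caller in an initrd script would then do something like `is_lvm_rootdevice "$rootdevice" && start_clvmd`, which is in the spirit of the lvm_check gate Marc mentions.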