Re: [SSI-users] Issues of Booting the Second Potential Root Node First with Separated /boot Partition
From: Brian J. W. <Bri...@hp...> - 2004-07-20 21:19:47
Denny Chou [周岱民] wrote:

> It's OK to have the first (original) node boot up and the 2nd node join
> the cluster. Root failover also works fine.

Great!

> It should be reasonable when first node totally damaged or need to be
> maintained then the second node maybe need to boot up alone.

Certainly.

> 1. The second node boot up OK but without mounting the local /boot
> partition.

That's not good. There should always be a /boot mounted, so that you are
able to edit your grub.conf, upgrade your kernel, rebuild your ramdisk and
run ssi-ksync. There's no trivial solution I can think of for this, so it
won't be fixed right away (at least not in the stable branch). Can you
enter a bug report on our SourceForge page, so that we don't forget about
it?

> 2. Boot up the first node to join the cluster will hang when mounting
> local file system and there are panic messages in the (already booted)
> second node.

What does the panic message say?

> For the issue 2, I'd fixed it by modifying the /dev/pts entry in the
> /etc/fstab to
>
>   none  /dev/pts  devpts  gid=5,mode=620,node=*  0 0
>
> I don't know whether this is a right way to fix the problem or not but
> it does work.

John Byrne, who implemented the clusterwide devpts, said this is a good
solution. I've modified the installation scripts to do this by default.

> I'd tried to label the second node /boot partition just like in the
> first node and modify the fstab to
>
>   LABEL=/boot  /boot  ext3  chard,defaults,node=1:2  1 2
>
> This fixed the problem of not mounting /boot when root failover to the
> second node. After failover, the local /boot partition of second node
> mounted on /boot directory.

The chard mounts were not designed for doing something like this, but
perhaps it's a reasonable work-around.

> But after I boot up the second node first and then have the first node
> boot up to join the cluster, the 'df -k' shows a strange result below.
> Filesystem   1K-blocks    Used  Available Use%  Mounted on
> /dev/2/sdb1    4127204 1540532    2377020  40%  /
> /dev/2/sda1     506833   28909     451756   7%  /boot
> /dev/1/sda1     506833   28909     451756   7%  /boot

This could be because of bad information in /etc/mtab. What does
`onall cat /proc/mounts' tell you?

Thanks for the feedback,

Brian
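The duplicate /boot rows in that df output are the sort of thing you see
when /etc/mtab (which df reads) has drifted from the kernel's own view in
/proc/mounts. A minimal sketch of the check, using a hypothetical stale
mtab copy in /tmp/mtab.sample (on a live node you would look at the real
/etc/mtab and compare it with the `onall cat /proc/mounts' output):

```shell
# Sketch only: the sample file below is made up, standing in for a stale
# /etc/mtab that still carries both nodes' /boot entries after failover.
# Field 2 of each mtab line is the mount point.
cat > /tmp/mtab.sample <<'EOF'
/dev/2/sdb1 / ext3 rw 0 0
/dev/2/sda1 /boot ext3 rw 0 0
/dev/1/sda1 /boot ext3 rw 0 0
EOF

# Print any mount point that is listed more than once.
awk '{print $2}' /tmp/mtab.sample | sort | uniq -d   # -> /boot
```

If /proc/mounts shows only one /boot while mtab shows two, the extra mtab
entry is bookkeeping left over from the failover, not a real mount.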