From: Goetzman, Dan <Dan_Goetzman@bm...> - 2003-03-21 12:59:56
> Thanks to the input I received, I have the first node booted on my Linux
> SSI test cluster.
> I pulled down the new 0.9.5 release to fix the problems with booting via
> I also had to make the following manual changes to boot before /usr was mounted:
> 1) Remove sym-link /sbin/cluster_config and replace with a copy of
> cluster_config in /sbin
> 2) Copy chroot from /usr/sbin to /sbin as linuxrc needed it.
> And, with that the cluster booted up on the first node.
> I have a question on booting the second node. I have a 2 node system
> using a shared SCSI disk farm. Seems like I should be able to boot the
> second node up off the direct path to the shared disks (instead of all
> that EtherBoot/dhcp/tftp stuff). Is this possible?
> How would I setup the stuff that "addnode" normally would do?
> Any tips? Or is this not possible?
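The two manual fixes described in the message above can be sketched as shell commands. This is a dry-run sketch, not the exact commands from the mail: SBIN and USR_SBIN default to scratch directories so it is safe to run, and the assumption that the real cluster_config and chroot binaries live in /usr/sbin is mine, not stated in the thread.

```shell
# Dry-run sketch of the two manual boot fixes. SBIN and USR_SBIN default
# to scratch directories; on the real node they would be /sbin and
# /usr/sbin (source location assumed).
SBIN=${SBIN:-$(mktemp -d)}
USR_SBIN=${USR_SBIN:-$(mktemp -d)}

# Stand-ins for the existing binaries and symlink (illustration only).
printf 'cluster_config\n' > "$USR_SBIN/cluster_config"
printf 'chroot\n'         > "$USR_SBIN/chroot"
ln -sf "$USR_SBIN/cluster_config" "$SBIN/cluster_config"

# 1) Remove the /sbin/cluster_config symlink and replace it with a real
#    copy (cp --remove-destination deletes the symlink before copying,
#    rather than writing through it).
cp --remove-destination "$USR_SBIN/cluster_config" "$SBIN/cluster_config"

# 2) Copy chroot into /sbin so linuxrc can find it before /usr is mounted.
cp "$USR_SBIN/chroot" "$SBIN/chroot"
```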
From: Brian J. Watson <Brian.J.Watson@hp...> - 2003-03-21 18:09:13
Goetzman, Dan wrote:
>> I have a question on booting the second node. I have a 2 node system
>> using a shared SCSI disk farm. Seems like I should be able to boot the
>> second node up off the direct path to the shared disks (instead of all
>> that EtherBoot/dhcp/tftp stuff). Is this possible?
>> How would I setup the stuff that "addnode" normally would do?
The addnode command is mostly a front-end for adding lines to
the /etc/clustertab file. You can do this manually. Read the commentary
at the top of the file to see what each field means.
You probably don't want to specify a boot device for the second node.
The boot device is mostly for the benefit of cluster_lilo, to tell it
where it needs to write kernels, ramdisks, and a boot block. The first
node is doing this already on the shared drive.
After editing /etc/clustertab, build your ramdisk. If you've already
built one, you can rebuild it with the 'tabonly' option:
    cluster_mkinitrd --tabonly <name of ramdisk image>
Note that you don't need to specify a kernel version number, like you
usually have to for mkinitrd/cluster_mkinitrd.
I think addnode does some other stuff, too. It creates a
/cluster/nodenum<nodenum>/dev directory on the shared root for ssidevfs.
You might also want to create an
/etc/sysconfig/network-scripts-<nodenum> directory to support external
networking on the new node.
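The per-node directories described above can be created with a couple of mkdir calls. A minimal sketch, assuming node number 2 for illustration; ROOT defaults to a scratch directory so it can be dry-run safely, but on the real shared root it would be /.

```shell
# Sketch of the per-node directories addnode would create on the shared
# root. ROOT defaults to a scratch directory for a safe dry run; on the
# real cluster it would be /, and NODENUM your new node's number
# (2 is assumed here for illustration).
ROOT=${ROOT:-$(mktemp -d)}
NODENUM=${NODENUM:-2}

# Device directory for ssidevfs on the shared root.
mkdir -p "$ROOT/cluster/nodenum$NODENUM/dev"

# Per-node network-scripts directory for external networking.
mkdir -p "$ROOT/etc/sysconfig/network-scripts-$NODENUM"
```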
Finally, make sure the only boot block each node sees is the one on the
shared disk. If one exists on its internal drive, a node might boot from
it instead.
Hope this helps,