XCAT NeXtScale Clusters

This document describes the steps necessary to quickly set up a cluster with IBM NeXtScale servers.

Introduction

IBM NeXtScale combines networking, storage, and compute nodes in a single offering. It consists of an IBM NeXtScale chassis, one Fan Power Controller (FPC), and compute nodes. The compute nodes include the IBM NeXtScale nx360 M4 servers.

Terminology

The following terms will be used in this document:

  • MN - the xCAT management node.
  • Fan Power Controller (FPC) - The FPC is installed in the rear of the chassis and connected by ethernet to the MN. The FPC is used to list power and fan settings as well as logically reseat the NeXtScale servers.
  • Blade - the NeXtScale compute nodes within the chassis.
  • IMM - the Integrated Management Module in each node that is used to control the node hardware out-of-band. Also known as the BMC (Baseboard Management Controller).
  • Switch Modules - the ethernet and IB switches within the chassis.

Overview of Cluster Setup Process

Here is a summary of the steps required to set up the cluster and what this document will take you through:

  1. Prepare the management node - doing these things before installing the xCAT software helps the process to go more smoothly.
  2. Install the xCAT software on the management node.
  3. Configure some cluster-wide information.
  4. Define a little bit of information in the xCAT database about the ethernet switches and nodes - this is necessary to detect the node in the discovery process.
  5. Have xCAT configure and start several network daemons - this is necessary for both node discovery and node installation.
  6. Discover the nodes - during this phase, xCAT configures the FPCs and BMCs and collects many attributes about each node and stores them in the database.
  7. Set up the OS images and install the nodes.


Distro-specific Steps

  • [RH] indicates that step only needs to be done for RHEL and Red Hat based distros (CentOS, Scientific Linux, and in most cases Fedora).
  • [SLES] indicates that step only needs to be done for SLES.

Command Man Pages and Database Attribute Descriptions

For reference while working through this document, see the xCAT command man pages and the database attribute descriptions (for example, http://xcat.sourceforge.net/man5/xcatdb.5.html ).

Prepare the Management Node for xCAT Installation

Install the Management Node OS

Install one of the supported distros on the Management Node (MN). It is recommended to ensure that dhcp, bind (not bind-chroot), httpd, nfs-utils, and perl-XML-Parser are installed. (But if not, the process of installing the xCAT software later will pull them in, assuming you follow the steps to make the distro RPMs available.)
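
For example, on a RHEL-based management node you could pre-install them with yum (a hedged example; the package names are the ones listed above and may vary slightly by distro release):

yum install dhcp bind httpd nfs-utils perl-XML-Parser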

Hardware requirements for your xCAT management node depend on your cluster size and configuration. An xCAT Management Node or Service Node dedicated to running xCAT to install a small cluster (< 16 nodes) should have 4-6 Gigabytes of memory; a medium-size cluster needs 6-8 Gigabytes; and a large cluster, 16 Gigabytes or more. Keeping swapping to a minimum should be a goal.

Supported OS and Hardware

For a list of supported OS and Hardware, refer to XCAT_Features.

[RH] Ensure that SELinux is Disabled

Note: you can skip this step in xCAT 2.8.1 and above, because xCAT does it automatically when it is installed.

To disable SELinux manually:

echo 0 > /selinux/enforce
sed -i 's/^SELINUX=.*$/SELINUX=disabled/' /etc/selinux/config

Disable the Firewall

Note: you can skip this step in xCAT 2.8 and above, because xCAT does it automatically when it is installed.

The management node provides many services to the cluster nodes, and the firewall on the management node can interfere with them. If your cluster is on a secure network, the easiest thing to do is to disable the firewall on the Management Node:

For RH:

service iptables stop
chkconfig iptables off

If disabling the firewall completely isn't an option, configure iptables to allow the following services on the NIC that faces the cluster: DHCP, TFTP, NFS, HTTP, DNS.
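
For example (a minimal sketch only; it assumes eth1 is the cluster-facing NIC and the standard service ports, and note that NFS may need additional ports such as the portmapper):

iptables -A INPUT -i eth1 -p udp --dport 67:68 -j ACCEPT   # DHCP
iptables -A INPUT -i eth1 -p udp --dport 69 -j ACCEPT      # TFTP
iptables -A INPUT -i eth1 -p tcp --dport 80 -j ACCEPT      # HTTP
iptables -A INPUT -i eth1 -p udp --dport 53 -j ACCEPT      # DNS
iptables -A INPUT -i eth1 -p tcp --dport 53 -j ACCEPT      # DNS over TCP
iptables -A INPUT -i eth1 -p tcp --dport 2049 -j ACCEPT    # NFS
service iptables save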

For SLES:

SuSEfirewall2 stop

Set Up the Networks

The xCAT installation process will scan and populate certain settings from the running configuration. Having the networks configured ahead of time will aid in correct configuration. (After installation of xCAT, all the networks in the cluster must be defined in the xCAT networks table before starting to install cluster nodes.) When xCAT is installed on the Management Node, it will automatically run makenetworks to create an entry in the networks table for each of the networks the management node is on. Additional network configurations can be added to the xCAT networks table manually later if needed.

The networks that are typically used in a cluster are:

  • Management network - used by the management node to install and manage the OS of the nodes. The MN and in-band NIC of the nodes are connected to this network. If you have a large cluster with service nodes, sometimes this network is segregated into separate VLANs for each service node. See Setting Up a Linux Hierarchical Cluster for details.
  • Service network - used by the management node to control the nodes out of band via the BMC. If the BMCs are configured in shared mode, then this network can be combined with the management network.
  • Application network - used by the HPC applications on the compute nodes. Usually an IB network.
  • Site (Public) network - used to access the management node and sometimes for the compute nodes to provide services to the site.

In our example, we only deal with the management network because:

  • the BMCs are in shared mode, so they don't need a separate service network
  • we are not showing how to have xCAT automatically configure the application network NICs. See Configuring Secondary Adapters if you are interested in that.
  • under normal circumstances there is no need to put the site network in the networks table

For more information, see Setting Up a Linux xCAT Mgmt Node#Appendix A: Network Table Setup Example.

Configure NICS

Configure the cluster-facing NIC(s) on the management node. For example, edit the following files:

[RH]: /etc/sysconfig/network-scripts/ifcfg-eth1

[SLES]: /etc/sysconfig/network/ifcfg-eth1

DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.20.0.1
NETMASK=255.240.0.0

Prevent DHCP client from overwriting DNS configuration (Optional)

If the public facing NIC on your management node is configured by DHCP, you may want to set PEERDNS=no in the NIC's config file to prevent the dhclient from rewriting /etc/resolv.conf. This would be important if you will be configuring DNS on the management node (via makedns - covered later in this doc) and want the management node itself to use that DNS. In this case, set PEERDNS=no in each /etc/sysconfig/network-scripts/ifcfg-* file that has BOOTPROTO=dhcp.
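
For example (a hypothetical public-facing NIC config; the device name and other settings will differ on your system):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
PEERDNS=no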

On the other hand, if you want dhclient to configure /etc/resolv.conf on your management node, then don't set PEERDNS=no in the NIC config files.

Configure hostname

The xCAT management node hostname should be configured before installing xCAT on the management node. The hostname or its resolvable IP address will be used as the default master name in the xCAT site table when xCAT is installed. This name needs to be the one that resolves to the cluster-facing NIC. Short hostnames (no domain) are the norm for the management node and all cluster nodes. Node names should never end in "-enx" for any x.

To set the hostname, edit /etc/sysconfig/network to contain, for example:

HOSTNAME=mgt

If you run the hostname command, it should return the same:

# hostname
mgt

Setup basic hosts file

Ensure that at least the management node is in /etc/hosts:

127.0.0.1               localhost.localdomain localhost
::1                     localhost6.localdomain6 localhost6
###
172.20.0.1 mgt mgt.cluster

Setup the TimeZone

When using the management node to install compute nodes, the timezone configuration on the management node will be inherited by the compute nodes, so it is recommended to set up the correct timezone on the management node. To do this on RHEL, see http://www.redhat.com/advice/tips/timezone.html. The process is similar, but not identical, for SLES. (Just google it.)
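
For example, on RHEL 6 (a minimal sketch; substitute your own zone name):

cp /usr/share/zoneinfo/America/New_York /etc/localtime
echo 'ZONE="America/New_York"' > /etc/sysconfig/clock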

You can also optionally set up the MN as an NTP server for the cluster. See Setting up NTP in xCAT.


Create a Separate File system for /install (optional)

It is not required, but recommended, that you create a separate file system for the /install directory on the Management Node. It should be large enough to hold several install images (each distro copied in takes several gigabytes, so at least 30 GB is a reasonable minimum).
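
A hedged example using LVM (it assumes a volume group named vg00 with enough free space; adjust names and sizes for your system):

lvcreate -L 30G -n installlv vg00
mkfs.ext4 /dev/vg00/installlv
mkdir -p /install
echo "/dev/vg00/installlv /install ext4 defaults 0 0" >> /etc/fstab
mount /install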

Restart Management Node

Note: in xCAT 2.8 and above, you do not need to restart the management node. Simply restart the cluster-facing NIC, for example: ifdown eth1; ifup eth1

For xCAT 2.7 and below, though it is possible to restart the correct services for all settings, the simplest step would be to reboot the Management Node at this point.

Configure Ethernet Switches

It is recommended that spanning tree be set in the switches to portfast or edge-port for faster boot performance. Please see the relevant switch documentation as to how to configure this item.

It is also recommended that the LLDP protocol be enabled on the switches so that switch and port information can be collected for the compute nodes during the discovery process.
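
For example, on a Cisco IOS switch (a hypothetical sketch; the exact syntax varies by switch vendor and model, so consult your switch documentation):

Switch(config)# lldp run
Switch(config)# interface range gigabitEthernet 1/0/1 - 48
Switch(config-if-range)# spanning-tree portfast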

Note: this step is necessary if you want to use xCAT's automatic switch-based discovery (described later in this document) for IPMI-controlled rack-mounted servers (including iDataPlex) and Flex chassis. If you have a small cluster and prefer to use the sequential discovery method (described later) or to manually enter the MACs for the hardware, you can skip this section. You may still want to set up your switches for management so you can use xCAT tools to manage them, as described in Managing Ethernet Switches.

xCAT will use the ethernet switches during node discovery to find out which switch port a particular MAC address is communicating over. This allows xCAT to match a random booting node with the proper node name in the database. To set up a switch, give it an IP address on its management port and enable basic SNMP functionality. (Typically, the SNMP agent in the switches is disabled by default.) The easiest method is to configure the switches to give the SNMP version 1 community string called "public" read access. This will allow xCAT to communicate to the switches without further customization. (xCAT will get the list of switches from the switch table.) If you want to use SNMP version 3 (e.g. for better security), see the example below. With SNMP V3 you also have to set the user/password and AuthProto (default is 'md5') in the switches table.

If for some reason you can't configure SNMP on your switches, you can use sequential discovery or the more manual method of entering the nodes' MACs into the database. See #Discover the Nodes for a description of your choices.

SNMP V3 Configuration Example:

xCAT supports many switch types, such as BNT and Cisco. Here is an example of configuring SNMP V3 on the Cisco switch 3750/3650:

1. First, switch to configure mode with the following commands:

[root@x346n01 ~]# telnet xcat3750
Trying 192.168.0.234...
Connected to xcat3750.
Escape character is '^]'.
User Access Verification
Password:
xcat3750-1>enable
Password:
xcat3750-1#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
xcat3750-1(config)#

2. Configure the snmp-server on the switch:

Switch(config)# access-list 10 permit 192.168.0.20    # 192.168.0.20 is the IP of MN
Switch(config)# snmp-server group xcatadmin v3 auth write v1default
Switch(config)# snmp-server community public RO 10
Switch(config)# snmp-server community private RW 10
Switch(config)# snmp-server enable traps license?

3. Configure the snmp user id (assuming a user/pw of xcat/passw0rd):

Switch(config)# snmp-server user xcat xcatadmin v3 auth SHA passw0rd access 10

4. Check the SNMP communication to the switch:

  • On the MN: make sure the snmp rpms have been installed. If not, install them:
yum install net-snmp net-snmp-utils
  • Run the following command to check that the snmp communication has been setup successfully (assuming the IP of the switch is 192.168.0.234):
snmpwalk -v 3 -u xcat -a SHA -A passw0rd -X cluster -l authnoPriv 192.168.0.234 .1.3.6.1.2.1.2.2.1.2

Later in this document, we explain how to make sure the switch and switches tables are set up correctly.

Install xCAT on the Management Node

There are two options for installation of xCAT:

  1. download the software first
  2. or install directly from the internet-hosted repository

Pick either one, but not both.

Option 1: Prepare for the Install of xCAT without Internet Access

If you are not able, or do not wish, to use the live internet repository, choose this option.

Go to the Download xCAT site and download the level of xCAT tarball you desire. Go to the xCAT Dependencies Download page and download the latest snap of the xCAT dependency tarball. (The latest snap of the xCAT dependency tarball will work with any version of xCAT.)

Copy the files to the Management Node (MN) and untar them:

mkdir /root/xcat2
cd /root/xcat2
tar jxvf xcat-core-2.*.tar.bz2     # or core-rpms-snap.tar.bz2
tar jxvf xcat-dep-*.tar.bz2

Setup YUM repositories for xCAT and Dependencies

Point YUM to the local repositories for xCAT and its dependencies:

cd /root/xcat2/xcat-dep/<release>/<arch>
./mklocalrepo.sh
cd /root/xcat2/xcat-core
./mklocalrepo.sh

[SLES 11]:

 zypper ar file:///root/xcat2/xcat-dep/sles11/<arch> xCAT-dep 
 zypper ar file:///root/xcat2/xcat-core  xcat-core

You can check a zypper repository using "zypper lr -d", or remove a zypper repository using "zypper rr".

[SLES 10.2+]:

zypper sa file:///root/xcat2/xcat-dep/sles10/<arch> xCAT-dep
zypper sa file:///root/xcat2/xcat-core xcat-core

You can check a zypper repository using "zypper sl -d", or remove a zypper repository using "zypper sd".

Option 2: Prepare to Install xCAT Directly from the Internet-hosted Repository

When using the live internet repository, you need to first make sure that name resolution on your management node is at least set up enough to resolve sourceforge.net. Then make sure the correct repo files are in /etc/yum.repos.d:

To get the current official release:

wget http://sourceforge.net/projects/xcat/files/yum/<xCAT-release>/xcat-core/xCAT-core.repo

for example:

cd /etc/yum.repos.d
wget http://sourceforge.net/projects/xcat/files/yum/2.8/xcat-core/xCAT-core.repo

To get the deps package:

wget http://sourceforge.net/projects/xcat/files/yum/xcat-dep/<OS-release>/<arch>/xCAT-dep.repo

for example:

wget http://sourceforge.net/projects/xcat/files/yum/xcat-dep/rh6/x86_64/xCAT-dep.repo

To setup to use SLES with zypper:

[SLES11]:

zypper ar -t rpm-md http://sourceforge.net/projects/xcat/files/yum/<xCAT-release>/xcat-core xCAT-core

zypper ar -t rpm-md http://sourceforge.net/projects/xcat/files/yum/xcat-dep/<OS-release>/<arch> xCAT-dep

for example:

zypper ar -t rpm-md http://sourceforge.net/projects/xcat/files/yum/2.8/xcat-core xCAT-core
zypper ar -t rpm-md http://sourceforge.net/projects/xcat/files/yum/xcat-dep/sles11/x86_64 xCAT-dep

[SLES10.2+]:

zypper sa http://sourceforge.net/projects/xcat/files/yum/<xCAT-release>/xcat-core xCAT-core 
zypper sa http://sourceforge.net/projects/xcat/files/yum/xcat-dep/<OS-release>/<arch> xCAT-dep

For Both Options: Make Required Packages From the Distro Available

xCAT depends on several packages that come from the Linux distro. Follow this section to create the repository of the OS on the Management Node.

See the following documentation:

Setting_Up_the_OS_Repository_on_the_Mgmt_Node

Install xCAT Packages

[RH]: Use yum to install xCAT and all the dependencies:

yum clean metadata
yum install xCAT

[SLES]: Use zypper to install xCAT and all the dependencies:

zypper install xCAT

Using the New Sysclone Deployment Method

Note: in xCAT 2.8.2 and above, xCAT supports cloning new nodes from a pre-installed/pre-configured node; this provisioning method is called sysclone. It leverages the open source tool systemimager. If you will be installing stateful (diskful) nodes using the sysclone provmethod, you need to install systemimager and all of its dependencies (using sysclone is optional):

[RH]: Use yum to install systemimager and all the dependencies:

yum install systemimager-server

[SLES]: Use zypper to install systemimager and all the dependencies:

zypper install systemimager-server

Quick Test of xCAT Installation

Add xCAT commands to the path by running the following:

source /etc/profile.d/xcat.sh

Check to see the database is initialized:

tabdump site

The output should be similar to the following:

key,value,comments,disable
"xcatdport","3001",,
"xcatiport","3002",,
"tftpdir","/tftpboot",,
"installdir","/install",,
     .
     .
     .

If the tabdump command does not work, see Debugging xCAT Problems.

Updating xCAT Packages Later

If you need to update the xCAT RPMs later:

  • If the management node does not have access to the internet: download the new version of xCAT from Download xCAT and the dependencies from xCAT Dependencies Download and untar them in the same place as before.
  • If the management node has access to the internet, the yum command below will pull the updates directly from the xCAT site.

To update xCAT:

[RH]:

yum clean metadata
yum update '*xCAT*'

[SLES]:

zypper refresh
zypper update -t package '*xCAT*'

Note: this will not apply updates that may have been made to some of the xCAT deps packages. (If there are brand new deps packages, they will get installed.) In most cases, this is ok, but if you want to make all updates for xCAT rpms and deps, run the following command. This command will also pick up additional OS updates.

[RH]:

yum update

[SLES]:

zypper refresh
zypper update

Note: Sometimes zypper refresh fails to refresh zypper local repository. Try to run zypper clean to clean local metadata, then use zypper refresh.

Note: If you are updating from xCAT 2.7.x (or earlier) to xCAT 2.8 or later, there are some additional migration steps that need to be considered:

  1. Switch from xCAT IBM HPC Integration support to using Software Kits - see Switching_from_xCAT_IBM_HPC_Integration_Support_to_Using_Software_Kits for details.
  2. (Optional) Use nic attributes to replace the otherinterfaces attribute to configure secondary adapters - see otherinterfaces vs nic attributes for details.
  3. Convert non-osimage based system to osimage based system - see Convert non-osimage based system to osimage based system for details.

Use xCAT to Configure Services on the Management Node

Networks Table

All networks in the cluster must be defined in the networks table. When xCAT was installed, it ran makenetworks, which created an entry in this table for each of the networks the management node is connected to. Now is the time to add to the networks table any other networks in the cluster, or update existing networks in the table.

For a sample Networks Setup, see the following example: Setting_Up_a_Linux_xCAT_Mgmt_Node#Appendix_A:_Network_Table_Setup_Example
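
If you need to add another network by hand, a hedged example (with hypothetical names and addresses) would look like:

mkdef -t network -o net_10_0_50_0 net=10.0.50.0 mask=255.255.255.0 mgtifname=eth1 gateway=10.0.50.1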

passwd Table

The passwd table holds the password that will be assigned to root when a node is installed. You can modify this table using tabedit. To change the default password for root on the nodes, change the system line. To change the password to be used for the BMCs, change the ipmi line.

tabedit passwd
#key,username,password,cryptmethod,comments,disable
"system","root","cluster",,,
"ipmi","USERID","PASSW0RD",,,


Setup DNS

To get the hostname/IP pairs copied from /etc/hosts to the DNS on the MN:

  • Ensure that /etc/sysconfig/named does not have ROOTDIR set
  • Set site.forwarders to your site-wide DNS servers that can resolve site or public hostnames. The DNS on the MN will forward any requests it can't answer to these servers.
chdef -t site forwarders=1.2.3.4,1.2.5.6
  • Edit /etc/resolv.conf to point the MN to its own DNS. (Note: this won't be required in xCAT 2.8 and above, but is an easy way to test that your DNS is configured properly.)
search cluster
nameserver 10.1.0.1
  • Run makedns
makedns

For more information about name resolution in an xCAT Cluster, see Cluster Name Resolution.


Setup conserver

makeconservercf

Define the FPCs and Switches

First just add the list of FPCs and the groups they belong to:

nodeadd fpc[01-15] groups=fpc,all

Now define attributes that are the same for all FPCs. These can be defined at the group level. For a description of the attribute names, see the node object definition.

chdef -t group fpc mgt=ipmi bmcpassword=PASSW0RD bmcusername=USERID cons=ipmi nodetype=fpc


Next define the attributes that vary for each FPC. There are two different ways to do this. Assuming your naming conventions follow a regular pattern, the fastest way is to use regular expressions at the group level:

chdef -t group fpc  bmc='|(.*)|($1)|' ip='|fpc(\d+)|10.0.50.($1+0)|'

Note: the flow for FPC IP addressing is: 1) initially each FPC has a default IP address of 192.168.0.100; 2) you run the configfpc command to discover each FPC; 3) configfpc changes each discovered FPC's default IP address to the permanent static IP address that is specified here as the ip attribute.

This chdef might look confusing at first, but once you parse it, it's not too bad. The regular expression syntax in xcat database attribute values follows the form:

|pattern-to-match-on-the-nodename|value-to-give-the-attribute|

You use parentheses to indicate what should be matched on the left side and substituted on the right side. So for example, the bmc attribute above is:

|(.*)|($1)|

This means match the entire nodename (.*) and substitute it as the value of the bmc attribute. This is what we want, because for FPCs the bmc attribute should be set to the FPC itself.

For the ip attribute above, it is:

|fpc(\d+)|10.0.50.($1+0)|

This means match the number part of the node name and use it as the last part of the IP address. (Adding 0 to the value just converts it from a string to a number to get rid of any leading zeros, i.e. change 09 to 9.) So for fpc07, the ip attribute will be 10.0.50.7.

For more information on xCAT's database regular expressions, see http://xcat.sourceforge.net/man5/xcatdb.5.html . To verify that the regular expressions are producing what you want, run lsdef for a node and confirm that the values are correct.
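
For example, a quick check of the two attributes set above (assuming the fpc07 node name used later in this section):

lsdef fpc07 -i bmc,ip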

If you don't want to use regular expressions, you can create a stanza file containing the node attribute values:

fpc01:
  bmc=fpc01
  ip=10.0.50.1
fpc02:
  bmc=fpc02
  ip=10.0.50.2
...

Then pipe this into chdef:

cat <stanzafile> | chdef -z

When you are done defining the FPCs, listing one should look like this:

# lsdef fpc07
Object name: fpc07
   bmc=fpc07
   bmcpassword=PASSW0RD
   bmcusername=USERID
   groups=fpc
   mgt=ipmi
   ip=10.0.50.7
   postbootscripts=otherpkgs
   postscripts=syslog,remoteshell,syncfiles

FPC Discovery and Configuration

In this section you will perform the FPC discovery and configuration.

During the FPC discovery process all FPCs are discovered using the xCAT configfpc command.

In large clusters, the automated configfpc discovery method uses the SNMP data from the Ethernet switches to map each FPC MAC address to the switch port that the FPC is connected to.

To use this method, the xCAT switch and switches tables must be configured. The switch table needs to be updated with the switch port that each FPC is connected to, and the switches table must contain the SNMP access information.

Add the FPC switch/port information to the switch table.

# tabdump switch
#node,switch,port,vlan,interface,comments,disable
"fpc01","switch","0/1",,,,
"fpc02","switch","0/2",,,,

where:

  • node is the FPC node object name
  • switch is the hostname of the switch
  • port is the switch port id. Note that xCAT does not need the complete port name; preceding non-numeric characters are ignored.
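
You can also populate these entries with chdef (a hedged example using the node and switch names shown above):

chdef fpc01 switch=switch switchport=0/1
chdef fpc02 switch=switch switchport=0/2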

If you configured your switches to use SNMP V3, then you need to define several attributes in the switches table. Assuming all of your switches use the same values, you can set these attributes at the group level:

tabch switch=switch switches.snmpversion=3 switches.username=xcatadmin switches.password=passw0rd switches.auth=SHA
# tabdump switches
#switch,snmpversion,username,password,privacy,auth,linkports,sshusername,sshpassword,switchtype,comments,disable
"switch","3","xcatadmin","passw0rd",,"SHA",,,,,,

Note: It might also be necessary to allow authentication at the VLAN level

snmp-server group xcatadmin v3 auth context vlan-230

Discover and configure each xCAT FPC node:

configfpc -i eth0
Found FPC with default IP 192.168.0.100 and MAC 6c:ae:8b:08:20:35
Configured FPC with MAC 6c:ae:8b:08:20:35 as fpc01 (10.1.147.170)
Verified the FPC with MAC 6c:ae:8b:08:20:35 is responding to the new IP 10.1.147.170 as node fpc01
There are no more FPCs with the default IP address to process

Check the IP parameter values to make sure they were set properly on each FPC.

rspconfig fpc01 netmask gateway ip

fpc: BMC Netmask: 255.255.0.0
fpc: BMC Gateway: 10.1.1.171
fpc: BMC IP: 10.1.147.170


Update the FPC firmware (optional)

This section specifies how to update the FPC firmware. You can run the xCAT "rinv fpc01 firmware" command to list the fpc firmware level.

rinv  fpc01 firmware
fpc01: BMC Firmware: 2.01

The FPC firmware can be updated by running the rflash command and providing the FPC node name and the location of the file preceded by "http://" and the IP address of the xCAT MN interface which is on the same VLAN as the FPC.

Once the firmware is unzipped and the ibm_fw_fpc_<fw level>.rom file is placed in the /install/firmware directory, or another directory within /install on the xCAT MN, you can use the rflash command to update the firmware on one chassis at a time or on all chassis managed by xCAT MN.

The format of the rflash command is:

rflash fpc01 http://10.1.147.171/install/firmware/ibm_fw_fpc_fhet17a-2.02_anyos_noarch.rom

Note: The firmware file ibm_fw_fpc_fhet17a-2.02_anyos_noarch.rom was downloaded to the /install/firmware directory in this example.

You can run the xCAT "rinv fpc01 firmware" command to list the new fpc01 firmware.

 rinv  fpc01 firmware
fpc01: BMC Firmware: 2.02

Node Definition and Discovery

Declare a dynamic range of addresses for discovery

If you want to run a discovery process, a dynamic range must be defined in the networks table. It's used for the nodes to get an IP address before xCAT knows their MAC addresses.

In this case, we'll designate 172.20.255.1-172.20.255.254 as a dynamic range:

chdef -t network 172_16_0_0-255_240_0_0 dynamicrange=172.20.255.1-172.20.255.254

Create the node definitions

Now you can define the NeXtScale node definitions:

nodeadd n[001-167] groups=ipmi,compute,all bmc=10.1.147.173 bmcpassword=PASSW0RD bmcusername=USERID cons=ipmi ip=10.1.147.163 mgt=ipmi

To change the list of nodes you just defined to a shared BMC port:

chdef -t group -o ipmi bmcport="0"

If the BMCs are configured in shared mode, then this network can be combined with the management network. The bmcport attribute is used by bmcsetup in discovery to configure the BMC port. The bmcport values are "0"=shared, "1"=dedicated, or blank to leave the BMC port unchanged.

To see the list of nodes you just defined:

nodels

To see all of the attributes that the combination of the templates and your nodelist have defined for a few sample nodes:

lsdef n100,n101,n104

This is the easiest way to verify that the regular expressions in the templates are giving you attribute values you are happy with. (Or, if you modified the regular expressions, that you did it correctly.)

Declare use of SOL

If you are not using a terminal server, configuring SOL (Serial Over LAN) is recommended, but not required. To instruct xCAT to configure SOL in the installed operating systems on NeXtScale systems:

chdef -t group -o compute serialport=0 serialspeed=115200 serialflow=hard


Setup /etc/hosts File

Since the mapping between the xCAT node names and IP addresses has been added to the xCAT database, you can run the makehosts command to create the /etc/hosts file from the database. (You can skip this step if you are creating /etc/hosts manually.)

makehosts fpc,compute

Verify the entries have been created in the file /etc/hosts.

Add the node IP mappings to the DNS:

makedns

Switch Discovery

This method of discovery assumes that you have the nodes plugged into your ethernet switches in an orderly fashion, so we can use each node's switch port number to determine where it is physically located in the racks and therefore what node name it should be given.

To use this discovery method, you must have already configured the switches as described in #Configure Ethernet Switches

Switch-related Tables

The table templates already put group-oriented regular expression entries in the switch table. Use lsdef for a sample node to see if the switch and switchport attributes are correct. If not, use chdef or tabedit to change the values.
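
For example (a hedged sketch; it assumes node names like n001 are plugged into sequentially numbered ports of a switch named switch1, so adjust the pattern to your wiring):

chdef -t group -o compute switch=switch1 switchport='|n(\d+)|($1+0)|'
lsdef n001 -i switch,switchport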

If you configured your switches to use SNMP V3, then you need to define several attributes in the switches table. Assuming all of your switches use the same values, you can set these attributes at the group level:

tabch switch=switch switches.snmpversion=3 switches.username=xcat switches.password=passw0rd switches.auth=sha


To initiate discovery, walk over to the systems and press the power buttons. For switch-based discovery you can power on all of the nodes at the same time.

On the MN, watch the nodes being discovered by running:

tail -f /var/log/messages

Look for the dhcp requests, the xCAT discovery requests, and the "<node> has been discovered" messages.

A quick summary of what is happening during the discovery process is:

  • the nodes request a DHCP IP address and PXE boot instructions
  • the DHCP server on the MN responds with a dynamic IP address and the xCAT genesis boot kernel
  • the genesis boot kernel running on the node sends the MAC and MTMS to xcatd on the MN
  • xcatd asks the switches which port this MAC is on so that it can correlate this physical node with the proper node entry in the database. (Switch Discovery only)
  • xcatd uses specified node name pool to get the proper node entry. (Sequential Discovery only)
    • stores the node's MTMS in the db
    • puts the MAC/IP pair in the DHCP configuration
    • sends several of the node attributes to the genesis kernel on the node
  • the genesis kernel configures the BMC with the proper IP address, userid, and password, and then just drops into a shell

After a successful discovery process, the following attributes will be added to the database for each node. (You can verify this by running lsdef <node> ):

  • mac - the MAC address of the in-band NIC used to manage this node
  • mtm - the hardware type (machine-model)
  • serial - the hardware serial number
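
For example, to check these attributes on a discovered node (assuming a node named n001):

lsdef n001 -i mac,mtm,serial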

If you cannot discover the nodes successfully, see the next section #Manually Discover Nodes.

If at some later time you want to force a re-discover of a node, run:

makedhcp -d <noderange>

and then reboot the node(s).

Monitoring Node Discovery

When the bmcsetup process completes on each node (about 5-10 minutes), xCAT genesis will drop into a shell and wait indefinitely (and change the node's currstate attribute to "shell"). You can monitor the progress of the nodes using:

watch -d 'nodels ipmi chain.currstate|xcoll'

Before all nodes complete, you will see output like:

====================================
n1,n10,n11,n75,n76,n77,n78,n79,n8,n80,n81,n82,n83,n84,n85,n86,n87,n88,n89,n9,n90,n91
====================================
shell

====================================
n31,n32,n33,n34,n35,n36,n37,n38,n39,n4,n40,n41,n42,n43,n44,n45,n46,n47,n48,n49,n5,n50,n51,n52,
 n53,n54,n55,n56,n57,n58,n59,n6,n60,n61,n62,n63,n64,n65,n66,n67,n68,n69,n7,n70,n71,n72,n73,n74
====================================
runcmd=bmcsetup

When all nodes have made it to the shell, xcoll will just show that the whole nodegroup "ipmi" has the output "shell":

====================================
ipmi
====================================
shell

When the nodes are in the xCAT genesis shell, you can ssh or psh to any of the nodes to check anything you want.
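
For example (a hedged example that runs a quick command across the whole ipmi group):

psh ipmi uptime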

Verify HW Management Configuration

At this point, the BMCs should all be configured and ready for hardware management. To verify this:

# rpower ipmi stat | xcoll
====================================
ipmi
====================================
on


HW Settings Necessary for Remote Console

To get the remote console working for each node, some uEFI hardware settings must have specific values. First check the settings, and if they aren't correct, then set them properly. This can be done via the ASU utility.

Create a file called asu-show with contents:

show DevicesandIOPorts.Com1ActiveAfterBoot
show DevicesandIOPorts.SerialPortSharing
show DevicesandIOPorts.SerialPortAccessMode
show DevicesandIOPorts.RemoteConsole

And create a file called asu-set with contents:

set DevicesandIOPorts.Com1ActiveAfterBoot Enable
set DevicesandIOPorts.SerialPortSharing Enable
set DevicesandIOPorts.SerialPortAccessMode Dedicated
set DevicesandIOPorts.RemoteConsole Enable

Then use the pasu tool to check these settings:

pasu -b asu-show ipmi | xcoll    # Or you can check just one node and assume the rest are the same

If the settings are not correct, then set them:

pasu -b asu-set ipmi | xcoll

For alternate ways to set the ASU settings, see xCAT iDataPlex Advanced Setup#Using ASU to Update CMOS, uEFI, or BIOS Settings on the Nodes.

Now the remote console should work. Verify it on one node by running:

rcons <node>

Verify that you can see the genesis shell prompt (after hitting Enter). To exit rcons, type Ctrl-Shift-E (all together), then "c", then ".".

You are now ready to choose an operating system and deployment method for the nodes....


Deploying Nodes

  • If you want to install your nodes as stateful (diskful) nodes, follow the next section #Installing Stateful Nodes.
  • If you want to define one or more stateless (diskless) OS images and boot the nodes with those, see section #Deploying Stateless Nodes. This method has the advantage of managing the images in a central place, and having only one image per node type.
  • If you want to have nfs-root statelite nodes, see xCAT Linux Statelite. This has the same advantage of managing the images from a central place. It has the added benefit of using less memory on the node while allowing larger images. But it has the drawback of making the nodes dependent on the management node or service nodes (i.e. if the management/service node goes down, the compute nodes booted from it go down too).
  • If you have a very large cluster (more than 500 nodes), at this point you should follow Setting Up a Linux Hierarchical Cluster to install and configure your service nodes. After that you can return here to install or diskless boot your compute nodes.

Installing Stateful Nodes

There are two options to install your nodes as stateful (diskful) nodes:

  1. use ISOs or DVDs, follow the section #Option 1: Installing Stateful Nodes Using ISOs or DVDs
  2. or clone new nodes from a pre-installed/pre-configured node, follow the section #Option 2: Installing Stateful Nodes Using Sysclone

Option 1: Installing Stateful Nodes Using ISOs or DVDs

This section describes the process for setting up xCAT to install nodes; that is how to install an OS on the disk of each node.

Create the Distro Repository on the MN

The copycds command copies the contents of the linux distro media to /install/<os>/<arch> so that it will be available to install nodes with or create diskless images.

  • Obtain the Redhat or SLES ISOs or DVDs.
  • If using an ISO, copy it to (or NFS mount it on) the management node, and then run:
copycds <path>/RHEL6.2-*-Server-x86_64-DVD1.iso
  • If using a DVD, put it in the DVD drive of the management node and run:
copycds /dev/dvd       # or whatever the device name of your dvd drive is

Tip: if this is the same distro version as your management node, create a .repo file in /etc/yum.repos.d with content similar to:

[local-rhels6.2-x86_64]
name=xCAT local rhels 6.2
baseurl=file:/install/rhels6.2/x86_64
enabled=1
gpgcheck=0

This way, if you need some additional RPMs on your MN at a later time, you can simply install them using yum. Or if you are installing other software on your MN that requires some additional RPMs from the distro, they will automatically be found and installed.

Select or Create an osimage Definition

The copycds command also automatically creates several osimage definitions in the database that can be used for node deployment. To see them:

lsdef -t osimage          # see the list of osimages
lsdef -t osimage <osimage-name>          # see the attributes of a particular osimage

From the list above, select the osimage for your distro, architecture, provisioning method (in this case install), and profile (compute, service, etc.). Although it is optional, we recommend you make a copy of the osimage, changing its name to a simpler name. For example:

lsdef -t osimage -z rhels6.2-x86_64-install-compute | sed 's/^[^ ]\+:/mycomputeimage:/' | mkdef -z

This displays the osimage "rhels6.2-x86_64-install-compute" in a format that can be used as input to mkdef, but on the way there it uses sed to modify the name of the object to "mycomputeimage".

Initially, this osimage object points to templates, pkglists, etc. that are shipped by default with xCAT. And some attributes, for example otherpkglist and synclists, won't have any value at all because xCAT doesn't ship a default file for that. You can now change/fill in any osimage attributes that you want. A general convention is that if you are modifying one of the default files that an osimage attribute points to, copy it into /install/custom and have your osimage point to it there. (If you modify the copy under /opt/xcat directly, it will be over-written the next time you upgrade xCAT.)
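
For example, to point the osimage at a customized copy of the kickstart template (a hedged sketch; the exact path of the shipped template may differ for your distro level):

mkdir -p /install/custom/install/rh
cp /opt/xcat/share/xcat/install/rh/compute.rhels6.tmpl /install/custom/install/rh/
chdef -t osimage mycomputeimage template=/install/custom/install/rh/compute.rhels6.tmpl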

But for now, we will use the default values in the osimage definition and continue on. (If you really want to see examples of modifying/creating the pkglist, template, otherpkgs pkglist, and sync file list, see the section #Deploying Stateless Nodes. Most of the examples there can be used for stateful nodes too.)

Install a New Kernel on the Nodes (Optional)

Create a postscript file called (for example) updatekernel:

vi /install/postscripts/updatekernel

Add the following lines to the file:

#!/bin/bash
rpm -Uvh data/kernel-*rpm

Change the permission on the file:

chmod 755 /install/postscripts/updatekernel

Make the new kernel RPM available to the postscript:

mkdir /install/postscripts/data
cp <kernel> /install/postscripts/data

Add the postscript to your compute nodes:

chdef -p -t group compute postscripts=updatekernel

Now when you install your nodes (done in a step below), it will also update the kernel.

Alternatively, you could install your nodes with the stock kernel and update them afterward using updatenode and the same postscript above. In this case, you need to reboot your nodes for the new kernel to take effect.
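
For example (a hedged sketch using the updatekernel postscript defined above):

updatenode compute -P updatekernel
rpower compute reset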

Update the Distro at a Later Time

After the initial install of the distro onto nodes, if you want to update the distro on the nodes (either with a few updates or a new SP) without reinstalling the nodes:

  • create the new repo using copycds:
copycds <path>/RHEL6.3-*-Server-x86_64-DVD1.iso
Or, for just a few updated rpms, you can copy the updated rpms from the distributor into a directory under /install and run createrepo in that directory.
  • add the new repo to the pkgdir attribute of the osimage:
chdef -t osimage rhels6.2-x86_64-install-compute -p pkgdir=/install/rhels6.3/x86_64
Note: the above command will add a 2nd repo to the pkgdir attribute. This is only supported for xCAT 2.8.2 and above. For earlier versions of xCAT, omit the -p flag to replace the existing repo directory with the new one.
  • run the ospkgs postscript to have yum update all rpms on the nodes
updatenode compute -P ospkgs

Option 2: Installing Stateful Nodes Using Sysclone

This section describes how to install or configure a diskful node (called the golden client), capture an osimage from it, and use that osimage to clone other nodes later.

Note: this support is available in xCAT 2.8.2 and above.

Install or Configure the Golden Client

If you want to use the sysclone provisioning method, you need a golden client. You can customize and tweak the golden client's configuration according to your needs and verify its proper operation, so that once the image is captured and deployed, the new nodes behave in the same way as the golden client.

To install a golden-client, follow the section #Option 1: Installing Stateful Nodes Using ISOs or DVDs.

To install the systemimager rpms onto the golden-client:

  • Download the xcat-dep tarball which includes systemimager rpms.
Go to xcat-dep and get the latest xCAT dependency tarball. Copy the file to the management node and untar it in the appropriate sub-directory of /install/post/otherpkgs. For example:

[RH/Fedora/CentOS]:

mkdir -p /install/post/otherpkgs/rhels6.3/x86_64/xcat
cd /install/post/otherpkgs/rhels6.3/x86_64/xcat
tar jxvf xcat-dep-*.tar.bz2

[SLES]:

mkdir -p /install/post/otherpkgs/sles11.3/x86_64/xcat
cd /install/post/otherpkgs/sles11.3/x86_64/xcat
tar jxvf xcat-dep-*.tar.bz2
  • Add the sysclone otherpkglist file and otherpkgdir to osimage definition and run the install. For example:

[RH/Fedora/CentOS]:

chdef -t osimage -o <osimage-name> otherpkglist=/opt/xcat/share/xcat/install/rh/sysclone.rhels6.x86_64.otherpkgs.pkglist
chdef -t osimage -o <osimage-name> -p otherpkgdir=/install/post/otherpkgs/rhels6.3/x86_64
updatenode <my-golden-client> -S

[SLES]:

chdef -t osimage -o <osimage-name> otherpkglist=/opt/xcat/share/xcat/install/rh/sysclone.sles11.x86_64.otherpkgs.pkglist
chdef -t osimage -o <osimage-name> -p otherpkgdir=/install/post/otherpkgs/sles11.3/x86_64
updatenode <my-golden-client> -S

Capture image from the Golden Client

Use imgcapture to capture an osimage from the golden client:

imgcapture <my-golden-client> -t sysclone -o <mycomputeimage>

Tip: when imgcapture is run, it pulls an osimage from the golden client and creates an osimage definition on the xCAT management node. Use lsdef -t osimage <mycomputeimage> to check the osimage attributes.

Begin Installation

The nodeset command tells xCAT what you want to do next with the nodes, rsetboot tells the node hardware to boot from the network on the next boot, and powering on the nodes using rpower starts the installation process:

nodeset compute osimage=mycomputeimage
rsetboot compute net
rpower compute boot

Tip: when nodeset is run, it processes the kickstart or autoyast template associated with the osimage, plugging in node-specific attributes, and creates a specific kickstart/autoyast file for each node in /install/autoinst. If you need to customize the template, make a copy of the template file that is pointed to by the osimage.template attribute and edit that file (or the files it includes).

Monitor installation

It is possible to use the wcons command to watch the installation process for a sampling of the nodes:

wcons n1,n20,n80,n100

or rcons to watch one node

rcons n1

Additionally, nodestat may be used to check the status of a node as it installs:

nodestat n20,n21
n20: installing man-pages - 2.39-10.el5 (0%)
n21: installing prep

Note: the percentage complete reported by nodestat is not necessarily reliable.

You can also watch nodelist.status until it changes to "booted" for each node:

nodels compute nodelist.status | xcoll

Once all of the nodes are installed and booted, you should be able to ssh to all of them from the MN (w/o a password), because xCAT should have automatically set up the ssh keys (if the postscripts ran successfully):

xdsh compute date

If there are problems, see Debugging xCAT Problems.


Deploying Stateless Nodes

Note: this section describes how to create a stateless image using the genimage command to install a list of rpms into the image. As an alternative, you can also capture an image from a running node and create a stateless image out of it. See Capture Linux Image for details.

Create the Distro Repository on the MN

The copycds command copies the contents of the linux distro media to /install/<os>/<arch> so that it will be available to install nodes with or create diskless images.

  • Obtain the Redhat or SLES ISOs or DVDs.
  • If using an ISO, copy it to (or NFS mount it on) the management node, and then run:
copycds <path>/RHEL6.2-Server-20080430.0-x86_64-DVD.iso
  • If using a DVD, put it in the DVD drive of the management node and run:
copycds /dev/dvd       # or whatever the device name of your dvd drive is

Tip: if this is the same distro version as your management node, create a .repo file in /etc/yum.repos.d with content similar to:

[local-rhels6.2-x86_64]
name=xCAT local rhels 6.2
baseurl=file:/install/rhels6.2/x86_64
enabled=1
gpgcheck=0

This way, if you need some additional RPMs on your MN at a later time, you can simply install them using yum. Or if you are installing other software on your MN that requires some additional RPMs from the distro, they will automatically be found and installed.

Using an osimage Definition

Note: To use an osimage as your provisioning method, you need to be running xCAT 2.6.6 or later.

The provmethod attribute of your nodes should contain the name of the osimage object definition that is being used for those nodes. The osimage object contains paths for pkgs, templates, kernels, etc. If you haven't already, run copycds to copy the distro rpms to /install. Default osimage objects are also defined when copycds is run. To view the osimages:

lsdef -t osimage          # see the list of osimages
lsdef -t osimage <osimage-name>          # see the attributes of a particular osimage

Select or Create an osimage Definition

From the list found above, select the osimage for your distro, architecture, provisioning method (install, netboot, statelite), and profile (compute, service, etc.). Although it is optional, we recommend you make a copy of the osimage, changing its name to a simpler name. For example:

lsdef -t osimage -z rhels6.3-x86_64-netboot-compute | sed 's/^[^ ]\+:/mycomputeimage:/' | mkdef -z

This displays the osimage "rhels6.3-x86_64-netboot-compute" in a format that can be used as input to mkdef, but on the way there it uses sed to modify the name of the object to "mycomputeimage".

Initially, this osimage object points to templates, pkglists, etc. that are shipped by default with xCAT. And some attributes, for example otherpkglist and synclists, won't have any value at all because xCAT doesn't ship a default file for that. You can now change/fill in any osimage attributes that you want. A general convention is that if you are modifying one of the default files that an osimage attribute points to, copy it into /install/custom and have your osimage point to it there. (If you modify the copy under /opt/xcat directly, it will be over-written the next time you upgrade xCAT.)

Set up pkglists

You likely want to customize the main pkglist for the image. This is the list of rpms or groups that will be installed from the distro. (Other rpms that they depend on will be installed automatically.) For example:

mkdir -p /install/custom/netboot/rh
cp -p /opt/xcat/share/xcat/netboot/rh/compute.rhels6.x86_64.pkglist /install/custom/netboot/rh
vi /install/custom/netboot/rh/compute.rhels6.x86_64.pkglist
chdef -t osimage mycomputeimage pkglist=/install/custom/netboot/rh/compute.rhels6.x86_64.pkglist

The goal is to install the fewest number of rpms that still provides the function and applications that you need, because the resulting ramdisk will use real memory in your nodes.

Also, check to see if the default exclude list excludes all files and directories you do not want in the image. The exclude list enables you to trim the image after the rpms are installed into the image, so that you can make the image as small as possible.

cp /opt/xcat/share/xcat/netboot/rh/compute.exlist /install/custom/netboot/rh
vi /install/custom/netboot/rh/compute.exlist 
chdef -t osimage mycomputeimage exlist=/install/custom/netboot/rh/compute.exlist

Make sure nothing is excluded in the exclude list that you need on the node. For example, if you require perl on your nodes, remove the line "./usr/lib/perl5*".

Installing OS Updates By Setting linuximage.pkgdir (only supported for RHEL and SLES)

The linuximage.pkgdir attribute is the name of the directory where the distro packages are stored. It can be set to multiple paths, separated by ",". The first path must be the OS base package directory, for example pkgdir=/install/rhels6.2/x86_64,/install/updates/rhels6.2/x86_64. The OS base package path already contains default repository data; for the other path(s), you must make sure repository data exists. If not, use the "createrepo" command to create it.

If you have additional os updates rpms (rpms may be from the os website, or the additional os distro) that you also want installed, make a directory to hold them, create a list of the rpms you want installed, and add that information to the osimage definition:

  • Create a directory to hold the additional rpms:
mkdir -p /install/updates/rhels6.2/x86_64 
cd /install/updates/rhels6.2/x86_64 
cp /myrpms/* .

If there is no repository data in the directory, you can run "createrepo" to create it:

createrepo .

The createrepo command is in the createrepo rpm, which for RHEL is in the 1st DVD, but for SLES is in the SDK DVD.

NOTE: when the management node is rhels6.x and the repository data is for rhels5.x, run createrepo with "-s md5". For example:

createrepo -s md5 .
  • Append the additional rpms into the corresponding pkglist. For example, in /install/custom/install/rh/compute.rhels6.x86_64.pkglist, append:
...
myrpm1
myrpm2
myrpm3
  • Add both the directory and the file to the osimage definition:
chdef -t osimage mycomputeimage pkgdir=/install/rhels6.2/x86_64,/install/updates/rhels6.2/x86_64  pkglist=/install/custom/install/rh/compute.rhels6.x86_64.pkglist

If you add more rpms at a later time, you must run createrepo again.

Note: After the above setting,

  • For diskful install, run "nodeset <noderange> mycomputeimage" to pick up the changes, and then boot up the nodes
  • For diskless, run genimage to install the packages into the image, and then packimage and boot up the nodes.
  • If the nodes are up, run "updatenode <noderange> ospkgs" to update the packages.
  • These functions are only supported for rhels6.x and sles11.x

Installing Additional Packages Using an Otherpkgs Pkglist

If you have additional rpms (rpms not in the distro) that you also want installed, make a directory to hold them, create a list of the rpms you want installed, and add that information to the osimage definition:

  • Create a directory to hold the additional rpms:
mkdir -p /install/post/otherpkgs/rh/x86_64
cd /install/post/otherpkgs/rh/x86_64
cp /myrpms/* .
createrepo .

NOTE: when the management node is rhels6.x and the otherpkgs repository data is for rhels5.x, run createrepo with "-s md5". For example:

createrepo -s md5 .
  • Create a file that lists the additional rpms that should be installed. For example, in /install/custom/netboot/rh/compute.otherpkgs.pkglist put:
myrpm1
myrpm2
myrpm3
  • Add both the directory and the file to the osimage definition:
chdef -t osimage mycomputeimage otherpkgdir=/install/post/otherpkgs/rh/x86_64 otherpkglist=/install/custom/netboot/rh/compute.otherpkgs.pkglist

If you add more rpms at a later time, you must run createrepo again. The createrepo command is in the createrepo rpm, which for RHEL is in the 1st DVD, but for SLES is in the SDK DVD.

If you have multiple sets of rpms that you want to keep separate to keep them organized, you can put them in separate sub-directories in the otherpkgdir. If you do this, you need to do the following extra things, in addition to the steps above:

  • Run createrepo in each sub-directory
  • In your otherpkgs.pkglist, list at least 1 file from each sub-directory. (During installation, xCAT will define a yum or zypper repository for each directory you reference in your otherpkgs.pkglist.) For example:
xcat/xcat-core/xCATsn
xcat/xcat-dep/rh6/x86_64/conserver-xcat

There are some examples of otherpkgs.pkglist in /opt/xcat/share/xcat/netboot/<distro>/service.*.otherpkgs.pkglist that show the format.

Note: the otherpkgs postbootscript should by default be associated with every node. Use lsdef to check:

lsdef node1 -i postbootscripts

If it is not, you need to add it. For example, add it for all of the nodes in the "compute" group:

chdef -p -t group compute postbootscripts=otherpkgs

Set up a postinstall script (optional)

Postinstall scripts for diskless images are analogous to postscripts for diskful installation. The postinstall script is run by genimage near the end of its processing. You can use it to do anything to your image that you want done every time you generate this kind of image. In the script you can install rpms that need special flags, or tweak the image in some way. There are some examples shipped in /opt/xcat/share/xcat/netboot/<distro>. If you create a postinstall script to be used by genimage, then point to it in your osimage definition. For example:

chdef -t osimage mycomputeimage postinstall=/install/custom/netboot/rh/compute.postinstall

Set up Files to be synchronized on the nodes

Note: This is only supported for stateless nodes in xCAT 2.7 and above.

Sync lists contain a list of files that should be sync'd from the management node to the image and to the running nodes. This allows you to have 1 copy of config files for a particular type of node and make sure that all those nodes are running with those config files. The sync list should contain a line for each file you want sync'd, specifying the path it has on the MN and the path it should be given on the node. For example:

/install/custom/syncfiles/compute/etc/motd -> /etc/motd
/etc/hosts -> /etc/hosts

If you put the above contents in /install/custom/netboot/rh/compute.synclist, then:

chdef -t osimage mycomputeimage synclists=/install/custom/netboot/rh/compute.synclist

For more details, see Sync-ing_Config_Files_to_Nodes.

Configure the nodes to use your osimage

You can configure any noderange to use this osimage. In this example, we define that the whole compute group should use the image:

 chdef -t group compute provmethod=mycomputeimage

Now that you have associated an osimage with nodes, if you want to list a node's attributes, including the osimage attributes all in one command:

lsdef node1 --osimage

Generate and pack your image

There are other attributes that can be set in your osimage definition. See the osimage man page for details.

Building an Image for a Different OS or Architecture

If you are building an image for a different OS/architecture than is on the Management node, you need to follow this process: Building a Stateless Image of a Different Architecture or OS. Note: different OS in this case means, for example, RHEL 5 vs. RHEL 6. If the difference is just an update level/service pack (e.g. RHEL 6.0 vs. RHEL 6.3), then you can build it on the MN.

Building an Image for the Same OS and Architecture as the MN

If the image you are building is for nodes that are the same OS and architecture as the management node (the most common case), then you can follow the instructions here to run genimage on the management node.

Run genimage to generate the image based on the mycomputeimage definition:

genimage mycomputeimage

Before you pack the image, you have the opportunity to change any files in the image that you want to, by cd'ing to the rootimgdir (e.g. /install/netboot/rhels6/x86_64/compute/rootimg). We recommend, however, that you make all changes to the image via your postinstall script, so that they are repeatable.

The genimage command creates /etc/fstab in the image. If you want to, for example, limit the amount of space that can be used in /tmp and /var/tmp, you can add lines like the following to it (either by editing it by hand or via the postinstall script):

tmpfs   /tmp     tmpfs    defaults,size=50m             0 2
tmpfs   /var/tmp     tmpfs    defaults,size=50m       0 2

But probably an easier way to accomplish this is to create a postscript to be run when the node boots up with the following lines:

logger -t xcat "$0: BEGIN"
mount -o remount,size=50m /tmp/
mount -o remount,size=50m /var/tmp/
logger -t xcat "$0: END"

Assuming you call this postscript settmpsize, you can add this to the list of postscripts that should be run for your compute nodes by:

chdef -t group compute -p postbootscripts=settmpsize

Now pack the image to create the ramdisk:

packimage mycomputeimage


Installing a New Kernel in the Stateless Image

The kerneldir attribute in the linuximage table is used to assign a directory to hold the new kernel to be installed into the stateless/statelite image. Its default value is /install/kernels; place the kernel packages under kerneldir, and genimage will pick them up from there.

The examples below assume you have the kernel in RPM format in /tmp and that kerneldir is not set (so the default value /install/kernels is used).

This procedure assumes you are using xCAT 2.6.1 or later. The rpm names are an example and you can substitute your level and architecture. The kernel will be installed directly from the rpm package.


  • For RHEL:

The kernel RPM package is usually named kernel-<kernelver>.rpm, for example: kernel-2.6.32.10-0.5.x86_64.rpm is the kernel package for 2.6.32.10-0.5.x86_64.


cp /tmp/kernel-2.6.32.10-0.5.x86_64.rpm /install/kernels/
createrepo /install/kernels/


  • For SLES:

Usually, the kernel files for SLES are separated into two parts: kernel-<arch>-base and kernel, and the naming of the kernel RPM packages is different. For example, there are two RPM packages in /tmp:

kernel-ppc64-base-2.6.27.19-5.1.x86_64.rpm
kernel-ppc64-2.6.27.19-5.1.x86_64.rpm

2.6.27.19-5.1.x86_64 is NOT the kernel version; 2.6.27.19-5-x86_64 is the kernel version. Follow this naming convention to determine the kernel version.

After the kernel version is determined for SLES, then:


cp /tmp/kernel-ppc64-base-2.6.27.19-5.1.x86_64.rpm /install/kernels/
cp /tmp/kernel-ppc64-2.6.27.19-5.1.x86_64.rpm /install/kernels/


Run genimage/packimage to update the image with the new kernel (using SLES as an example):


Since the kernel version is different from the rpm package version, the -g flag needs to be specified on the genimage command for the rpm version of kernel packages.

genimage -i eth0 -n ibmveth -o sles11.1 -p compute -k 2.6.27.19-5-x86_64 -g 2.6.27.19-5.1

Installing New Kernel Drivers to Stateless Initrd

The kernel drivers in the stateless initrd are used for the devices during the netboot. If you are missing one or more kernel drivers for specific devices (especially for the network device), the netboot process will fail. xCAT offers two approaches to add additional drivers to the stateless initrd during the running of genimage.

  • Use the '-n' flag to add new drivers to the stateless initrd
genimage <imagename> -n <new driver list>

Generally, the genimage command has a default driver list which will be added to the initrd. But if you specify the '-n' flag, the default driver list will be replaced with your <new driver list>. That means you need to include any drivers that you need from the default driver list into your <new driver list>.

The default driver list:

rh-x86:   tg3 bnx2 bnx2x e1000 e1000e igb mlx_en virtio_net be2net
rh-ppc:   e1000 e1000e igb ibmveth ehea
sles-x86: tg3 bnx2 bnx2x e1000 e1000e igb mlx_en be2net
sles-ppc: tg3 e1000 e1000e igb ibmveth ehea be2net

Note: With this approach, xCAT will search for the drivers in the rootimage. You need to make sure the drivers have been included in the rootimage before generating the initrd. You can install the drivers manually in an existing rootimage (using chroot) and run genimage again, or you can use a postinstall script to install drivers to the rootimage during your initial genimage run.

  • Use the driver rpm package to add new drivers from rpm packages to the stateless initrd

Refer to the doc Using_Linux_Driver_Update_Disk#Driver_RPM_Package.

Boot the nodes

nodeset compute osimage=mycomputeimage

(If you need to update your diskless image sometime later, change your osimage attributes and the files they point to accordingly, and then rerun genimage, packimage, nodeset, and boot the nodes.)

Now boot your nodes...

rsetboot compute net
rpower compute boot


Where Do I Go From Here?

Now that your basic cluster is set up, see the rest of the xCAT documentation for additional topics, such as hierarchical clusters and software kits.
