<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Recent changes to XCAT_pLinux_Clusters_775</title><link>https://sourceforge.net/p/xcat/wiki/XCAT_pLinux_Clusters_775/</link><description>Recent changes to XCAT_pLinux_Clusters_775</description><atom:link href="https://sourceforge.net/p/xcat/wiki/XCAT_pLinux_Clusters_775/feed" rel="self" type="application/rss+xml"/><language>en</language><lastBuildDate>Tue, 12 Aug 2014 12:43:01 -0000</lastBuildDate><item><title>XCAT_pLinux_Clusters_775 modified by Lissa Valletta</title><link>https://sourceforge.net/p/xcat/wiki/XCAT_pLinux_Clusters_775/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v2
+++ v3
@@ -4,7 +4,7 @@

 ## Introduction

-This cookbook provides instructions on how to use xCAT to create and deploy a Linux cluster on IBM power 775 system machines.
+This cookbook provides instructions on how to use xCAT to create and deploy a Linux cluster on IBM power 775 system machines. For other Power hardware, use [XCAT_pLinux_Clusters].

 The power system machines have the following characteristics:

&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Lissa Valletta</dc:creator><pubDate>Tue, 12 Aug 2014 12:43:01 -0000</pubDate><guid>https://sourceforge.net1c7e8113c89c93cf85a767c3c1875eba0cf81c91</guid></item><item><title>XCAT_pLinux_Clusters_775 modified by Lissa Valletta</title><link>https://sourceforge.net/p/xcat/wiki/XCAT_pLinux_Clusters_775/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v1
+++ v2
@@ -24,7 +24,7 @@
 Here is a summary of the steps required to set up the cluster and what this document will take you through:

   1. Prepare the management node - doing these things before installing the xCAT software helps the process to go more smoothly.
-  2. Install the xCAT software on the management node.
+  2. Install the xCAT software on the management node (EMS).
   3. Configure some cluster wide information
   4. Define a little bit of information in the xCAT database about the ethernet switches and nodes - this is necessary to direct the node discovery process.
   5. Have xCAT configure and start several network daemons - this is necessary for both node discovery and node installation.
@@ -55,15 +55,7 @@

 #### **(Optional)Setup the DHCP interfaces in site table**

-To set up the site table dhcp interfaces for your system p cluster, identify the correct interfaces that xCAT should listen to on your management node and service nodes:
-
-~~~~
-     chdef -t site dhcpinterfaces='pmanagenode|eth1;service|eth0'
-     makedhcp -n
-     service dhcpd restart
-~~~~
-
-For Power 775 cluster:
+

 ~~~~
      chdef -t site dhcpinterfaces='pmanagenode|eth1;service|hf0'
@@ -71,21 +63,19 @@
      service dhcpd restart
 ~~~~

-## Discover and Define Your System p Hardware
-
-The next steps are to discover your hardware on the nextwork, defined it in the xCAT database, configure xCAT's hardware control, and do the initial definition of the LPARs as the compute nodes. These steps are explained in the following 2 documents. Use the one that applies to your environment:
-
-  * For Power 775 clusters, use [XCAT_Power_775_Hardware_Management]
-  * For PowerLinux cluster, use [XCAT_PowerLinux_Hardware_Management]
-  * For all other system p hardware, use [XCAT_System_p_Hardware_Management]
-
-After performing the steps in one of those documents, return here and continue on in this document.
-
-## Additional Setup on the MN for Power 775 Clusters Only
-
-If you are not setting up a Power 775 cluster, skip this section.
-
-The xCAT Management Node must be configured and running on the DB2 database, before installing the following cluster hardware components.
+## Discover and Define Your System p 775 Hardware
+
+The next steps are to discover your hardware on the network, define it in the xCAT database, configure xCAT's hardware control, and do the initial definition of the LPARs as the compute nodes. These steps are explained in the following document:
+
+  *  [XCAT_Power_775_Hardware_Management]
+ 
+
+After performing those steps, return here and continue on in this document.
+
+## Additional Management Node (EMS) Setup
+
+
+On P775, the xCAT Management Node must be configured and running on the DB2 database before installing the following cluster hardware components.

 This section describes the additional setup required for the Power 775 support. This includes the setup of the cluster hardware components and the installation of TEAL, ISNM, and LoadLeveler on the xCAT management node. TEAL, ISNM and LoadLeveler have dependencies on each other so all three must be installed.

@@ -542,7 +532,7 @@

 The following section (and its subsections) is the standard xCAT procedure for building and deploying a linux stateless image. Some of the example commands refer to the x86_64 architecture, but the procedure is the same on ppc64. Just replace x86_64 with ppc64. Also, when it comes time to boot the nodes, use rnetboot instead of rpower.

-In addition, if you are building/deploying a stateless image on a **p775** cluster, do these additional things when following the procedure below:
+In addition, to build and deploy a stateless image on a **p775** cluster, do these additional things when following the procedure below:

   * In the section for installing other packages, add the [powerpc-utils rpm](ftp://linuxpatch.ncsa.uiuc.edu/PERCS/powerpc-utils-1.2.2-18.el6.ppc64.rpm) to the otherpkgs directory and the otherpkglist.
   * In the section for using postinstall files, add the following lines to your postinstall script (the location of the rootimg should be changed to your location):
@@ -568,9 +558,7 @@

-### Use network boot to start the installation for non-p775 nodes
-
-    rnetboot compute
+

 ### Use network boot to start the installation for p775 nodes
@@ -698,8 +686,8 @@
   * Back up the xcat database using xcatsnap, important config files and other system config files for reference and for restore later. Prune some of the larger tables:
   *     * tabprune eventlog -a
     * tabprune auditlog -a
-    * tabprune isnm_perf -a (Power 775 only)
-    * tabprune isnm_perf_sum -a (Power 775 only)
+    * tabprune isnm_perf -a 
+    * tabprune isnm_perf_sum -a 
   * Run xcatsnap ( will capture database, config files) and copy to another host. By default it will create in /tmp/xcatsnap two files, for example:
     * xcatsnap.hpcrhmn.10110922.log
     * xcatsnap.hpcrhmn.10110922.tar.gz
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Lissa Valletta</dc:creator><pubDate>Tue, 12 Aug 2014 12:32:32 -0000</pubDate><guid>https://sourceforge.net244fdddcb14294af71385e9c00961cad0fd650e1</guid></item><item><title>XCAT_pLinux_Clusters_775 modified by Lissa Valletta</title><link>https://sourceforge.net/p/xcat/wiki/XCAT_pLinux_Clusters_775/</link><description>&lt;div class="markdown_content"&gt;&lt;p&gt;&lt;img alt="" src="https://sourceforge.net/p/xcat/wiki/XCAT_Documentation/attachment/Official-xcat-doc.png" /&gt;&lt;/p&gt;
&lt;div class="toc"&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#introduction"&gt;Introduction&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#overview-of-cluster-setup-process"&gt;Overview of Cluster Setup Process&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#distro-specific-steps"&gt;Distro-specific Steps&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#command-man-pages-and-database-attribute-descriptions"&gt;Command Man Pages and Database Attribute Descriptions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#prepare-the-management-node-for-xcat-installation"&gt;Prepare the Management Node for xCAT Installation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#install-xcat-on-the-management-node"&gt;Install xCAT on the Management Node&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#configure-xcat"&gt;Configure xCAT&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#optionalsetup-the-dhcp-interfaces-in-site-table"&gt;(Optional)Setup the DHCP interfaces in site table&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#discover-and-define-your-system-p-hardware"&gt;Discover and Define Your System p Hardware&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#additional-setup-on-the-mn-for-power-775-clusters-only"&gt;Additional Setup on the MN for Power 775 Clusters Only&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#downloading-and-installing-dfm-and-hdwr_svr"&gt;Downloading and Installing DFM and hdwr_svr&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#obtain-additional-packages-for-hfi-network"&gt;Obtain additional packages for HFI network&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#install-loadleveler"&gt;Install LoadLeveler&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#install-teal"&gt;Install Teal&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#install-isnm"&gt;Install ISNM&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#install-isnm-prerequisite-software"&gt;Install ISNM prerequisite software&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#discover-and-define-hardware-components"&gt;Discover and define hardware components&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#install-and-configure-isnm"&gt;Install and Configure ISNM&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#check-the-hardware-component-and-site-definitions"&gt;Check the hardware component and site definitions.&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#hardware-server-connections"&gt;Hardware server connections&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#start-cnmd-and-setup-master-isr-idwzxhzdk63"&gt;Start CNMD and setup Master ISR ID :&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#configure-dfm-hierarchically-optional"&gt;Configure DFM Hierarchically (Optional)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#setup-the-xcat-mn-for-a-hierarchical-cluster"&gt;Setup the xCAT MN for a Hierarchical Cluster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#complete-the-definition-of-the-compute-nodes"&gt;Complete the Definition of the Compute Nodes&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#define-xcat-groups"&gt;Define xCAT groups&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#update-the-attributes-of-the-node"&gt;Update the attributes of the node&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#check-the-sitemaster-value"&gt;Check the site.master value&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-the-type-attributes-of-the-node"&gt;Set the type attributes of the node&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#configure-conserver"&gt;Configure conserver&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#update-conserver-configuration"&gt;Update conserver configuration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#check-rconsrnetboot-and-getmacs-depend-on-it"&gt;Check rcons(rnetboot and getmacs depend on it)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#check-hardware-control-setup-to-the-nodes"&gt;Check hardware control setup to the nodes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#update-the-mac-table-with-the-address-of-the-nodes"&gt;Update the mac table with the address of the node(s)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#update-the-mac-table-with-the-address-of-the-nodes-for-power-775"&gt;Update the mac table with the address of the node(s) for Power 775&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#configure-dhcp"&gt;Configure DHCP&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-customization-scripts-optional"&gt;Set up customization scripts (optional)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#install-stateful-nodes"&gt;Install Stateful Nodes&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#begin-installation"&gt;Begin Installation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#use-network-boot-to-start-the-installation"&gt;Use network boot to start the installation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#alternative-network-boot-in-power-775"&gt;Alternative network boot in Power 775&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#stateless-node-deployment"&gt;Stateless Node Deployment&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#use-network-boot-to-start-the-installation-for-non-p775-nodes"&gt;Use network boot to start the installation for non-p775 nodes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#use-network-boot-to-start-the-installation-for-p775-nodes"&gt;Use network boot to start the installation for p775 nodes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#check-the-installation-result"&gt;Check the installation result&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#remove-an-image"&gt;Remove an image&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#statelite-node-deployment"&gt;Statelite Node Deployment&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#advanced-features"&gt;Advanced features&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#use-the-driver-update-disk"&gt;Use the driver update disk:&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#setup-kdump-service-over-ethernethfi-on-diskless-linux-for-xcat-26-and-higher"&gt;Setup Kdump Service over Ethernet/HFI on diskless Linux (for xCAT 2.6 and higher)&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#generate-rootimage-for-disklessstatelite"&gt;Generate rootimage for diskless/statelite&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#the-remaining-steps"&gt;The Remaining Steps&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#additional-configuration"&gt;Additional configuration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#references"&gt;References&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#appendix-a-migrate-your-management-node-to-a-new-service-pack-of-linux"&gt;Appendix A: Migrate your Management Node to a new Service Pack of Linux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#appendix-b-install-your-management-node-to-a-new-release-of-linux"&gt;Appendix B: Install your Management Node to a new Release of Linux&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;This cookbook provides instructions on how to use xCAT to create and deploy a Linux cluster on IBM power 775 system machines.&lt;/p&gt;
&lt;p&gt;The power system machines have the following characteristics:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;May have multiple LPARs (an LPAR will be the target machine to install an operating system image on, i.e. the LPAR will be the compute node).&lt;/li&gt;
&lt;li&gt;The Ethernet card and SCSI disk can be virtual devices.&lt;/li&gt;
&lt;li&gt;An HMC or xCAT DFM (Direct FSP/BPA Management) is used for the HCP (hardware control point) to control them.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;xCAT supports three types of installations for compute nodes: Diskfull installation (Statefull), Diskless Stateless, and &lt;a class="" href="/p/xcat/wiki/XCAT_Linux_Statelite"&gt;Diskless Statelite&lt;/a&gt;. xCAT also supports hierarchical clusters where one or more service nodes are used to handle the installation and management of compute nodes. (Instructions and references will be given later in this document for setting up a hierarchical cluster.)&lt;/p&gt;
&lt;p&gt;This document will guide you through installing xCAT on your management node, configuring your cluster, deploying a Linux operating system to your compute nodes, and optionally upgrading firmware on your power system hardware.&lt;/p&gt;
&lt;h3 id="overview-of-cluster-setup-process"&gt;Overview of Cluster Setup Process&lt;/h3&gt;
&lt;p&gt;Here is a summary of the steps required to set up the cluster and what this document will take you through:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Prepare the management node - doing these things before installing the xCAT software helps the process to go more smoothly.&lt;/li&gt;
&lt;li&gt;Install the xCAT software on the management node.&lt;/li&gt;
&lt;li&gt;Configure some cluster wide information&lt;/li&gt;
&lt;li&gt;Define a little bit of information in the xCAT database about the ethernet switches and nodes - this is necessary to direct the node discovery process.&lt;/li&gt;
&lt;li&gt;Have xCAT configure and start several network daemons - this is necessary for both node discovery and node installation.&lt;/li&gt;
&lt;li&gt;Discover the nodes - during this phase, xCAT configures the FSPs and collects many attributes about each node and stores them in the database.&lt;/li&gt;
&lt;li&gt;Set up the OS images and install the nodes.&lt;/li&gt;
&lt;/ol&gt;
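Taken together, the numbered steps map onto a small set of xCAT commands. The sketch below is illustrative only: the object names, site values, and discovery command are assumptions for a Power 775 environment, not exact commands to copy.

```shell
# 3. Set some cluster-wide values in the site table (example values)
chdef -t site clustersite domain=cluster ntpservers=mgt
# 4. Seed a minimal definition to direct the node discovery process
nodeadd cec01 groups=cec,all
# 5. Configure and start the network daemons xCAT relies on
makedns
makedhcp -n
# 6. Discover the hardware; on Power 775 discovery is SLP-based
lsslp -w
# 7. Once OS images are defined, boot the LPARs to install them
rnetboot compute
```

Each of these steps is covered in detail in the sections that follow.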
&lt;h3 id="distro-specific-steps"&gt;Distro-specific Steps&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;[RH] indicates that step only needs to be done for RHEL and Red Hat based distros (CentOS, Scientific Linux, and in most cases Fedora).&lt;/li&gt;
&lt;li&gt;[SLES] indicates that step only needs to be done for SLES.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="command-man-pages-and-database-attribute-descriptions"&gt;Command Man Pages and Database Attribute Descriptions&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;All of the commands used in this document are described in the &lt;a class="" href="http://xcat.sourceforge.net/man1/xcat.1.html"&gt;xCAT man pages&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;All of the database attributes referred to in this document are described in the &lt;a class="" href="http://xcat.sourceforge.net/man5/xcatdb.5.html"&gt;xCAT database object and table descriptions&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="prepare-the-management-node-for-xcat-installation"&gt;Prepare the Management Node for xCAT Installation&lt;/h2&gt;
&lt;div&gt;
&lt;div class="markdown_content"&gt;&lt;p&gt;These steps prepare the Management Node for xCAT Installation.&lt;/p&gt;
&lt;div class="toc"&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#install-the-management-node-os"&gt;Install the Management Node OS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#supported-os-and-hardware"&gt;Supported OS and Hardware&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#91rh93-ensure-that-selinux-is-disabled"&gt;[RH] Ensure that SELinux is Disabled&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#disable-the-firewall"&gt;Disable the Firewall&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-the-networks"&gt;Set Up the Networks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#configure-nics"&gt;Configure NICS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#prevent-dhcp-client-from-overwriting-dns-configuration-optional"&gt;Prevent DHCP client from overwriting DNS configuration (Optional)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#configure-hostname"&gt;Configure hostname&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#setup-basic-hosts-file"&gt;Setup basic hosts file&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#setup-the-timezone"&gt;Setup the TimeZone&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#create-a-separate-file-system-for-install-optional"&gt;Create a Separate File system for /install (optional)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#restart-management-node"&gt;Restart Management Node&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h3 id="install-the-management-node-os"&gt;&lt;strong&gt;Install the Management Node OS&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Install one of the supported distros on the Management Node (MN). It is recommended to ensure that dhcp, bind (not bind-chroot), httpd, nfs-utils, and perl-XML-Parser are installed.  (But if not, the process of installing the xCAT software later will pull them in, assuming you follow the steps to make the distro RPMs available.)&lt;/p&gt;
&lt;p&gt;Hardware requirements for your xCAT management node depend on your cluster size and configuration. An xCAT Management Node or Service Node dedicated to running xCAT to install a small cluster ( &amp;lt; 16 nodes) should have at least 4-6 gigabytes of memory; a medium-size cluster calls for 6-8 gigabytes, and a large cluster for 16 gigabytes or more. Keeping swapping to a minimum should be a goal.&lt;/p&gt;
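A quick way to confirm the recommended packages are present before proceeding is to query each one; this sketch assumes an RPM-based distro such as RHEL or SLES:

```shell
# Query each package the xCAT install expects; resolve any that
# report "is not installed" (or let the xCAT install pull them in)
for pkg in dhcp bind httpd nfs-utils perl-XML-Parser; do
    rpm -q "$pkg"
done
```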
&lt;h3 id="supported-os-and-hardware"&gt;&lt;strong&gt;Supported OS and Hardware&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;For a list of supported OS and Hardware, refer to  &lt;a class="" href="/p/xcat/wiki/XCAT_Features"&gt;XCAT_Features&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="91rh93-ensure-that-selinux-is-disabled"&gt;&lt;strong&gt;[RH] Ensure that SELinux is Disabled&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; you can skip this step in xCAT 2.8.1 and above, because xCAT does it automatically when it is installed.&lt;/p&gt;
&lt;p&gt;To disable SELinux manually:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="n"&gt;echo&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;selinux&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;enforce&lt;/span&gt;
 &lt;span class="n"&gt;sed&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;/^&lt;/span&gt;&lt;span class="n"&gt;SELINUX&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;SELINUX&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;disabled&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;selinux&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
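To confirm the change took effect (the persistent setting applies after the next reboot):

```shell
# Should print Permissive now, or Disabled after a reboot
getenforce
# Confirm the persistent setting in the config file
grep '^SELINUX=' /etc/selinux/config
```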
&lt;h3 id="disable-the-firewall"&gt;&lt;strong&gt;Disable the Firewall&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Note: you can skip this step in xCAT 2.8 and above, because xCAT does it automatically when it is installed.&lt;/p&gt;
&lt;p&gt;The management node provides many services to the cluster nodes, but the firewall on the management node can interfere with this. If your cluster is on a secure network, the easiest thing to do is to disable the firewall on the Management Mode:&lt;/p&gt;
&lt;p&gt;For RH:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="n"&gt;iptables&lt;/span&gt; &lt;span class="n"&gt;stop&lt;/span&gt;
 &lt;span class="n"&gt;chkconfig&lt;/span&gt; &lt;span class="n"&gt;iptables&lt;/span&gt; &lt;span class="n"&gt;off&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;If disabling the firewall completely isn't an option, configure iptables to allow the following services on the NIC that faces the cluster: DHCP, TFTP, NFS, HTTP, DNS.&lt;/p&gt;
&lt;p&gt;For SLES:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="n"&gt;SuSEfirewall2&lt;/span&gt; &lt;span class="n"&gt;stop&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
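If the firewall must stay on, rules along the following lines open the needed services on the cluster-facing NIC. This is only a sketch: eth1 and the port list are assumptions, and NFS may need additional ports depending on your configuration.

```shell
# Allow cluster-facing services through iptables (eth1 is an example NIC)
iptables -A INPUT -i eth1 -p udp --dport 67:68 -j ACCEPT  # DHCP
iptables -A INPUT -i eth1 -p udp --dport 69 -j ACCEPT     # TFTP
iptables -A INPUT -i eth1 -p tcp --dport 80 -j ACCEPT     # HTTP
iptables -A INPUT -i eth1 -p udp --dport 53 -j ACCEPT     # DNS (UDP)
iptables -A INPUT -i eth1 -p tcp --dport 53 -j ACCEPT     # DNS (TCP)
iptables -A INPUT -i eth1 -p tcp --dport 2049 -j ACCEPT   # NFS
service iptables save
```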
&lt;h3 id="set-up-the-networks"&gt;&lt;strong&gt;Set Up the Networks&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;The xCAT installation process will scan and populate certain settings from the running configuration. Having the networks configured ahead of time will aid in correct configuration. (After installation of xCAT, all the networks in the cluster must be defined in the xCAT networks table before starting to install cluster nodes.)  When xCAT is installed on the Management Node, it will automatically run makenetworks to create an entry in the networks table for each of the networks the management node is on. Additional network configurations can be added to the xCAT networks table manually later if needed. &lt;/p&gt;
&lt;p&gt;The networks that are typically used in a cluster are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Management network - used by the management node to install and manage the OS of the nodes.  The MN and in-band NIC of the nodes are connected to this network.  If you have a large cluster with service nodes, sometimes this network is segregated into separate VLANs for each service node.  See &lt;a class="" href="../Setting%20Up%20a%20Linux%20Hierarchical%20Cluster"&gt;Setting Up a Linux Hierarchical Cluster&lt;/a&gt; for details.&lt;/li&gt;
&lt;li&gt;Service network -  used by the management node to control the nodes out of band via the BMC.  If the BMCs are configured in shared mode, then this network can be combined with the management network.&lt;/li&gt;
&lt;li&gt;Application network - used by the HPC applications on the compute nodes.  Usually an IB network.&lt;/li&gt;
&lt;li&gt;Site (Public) network - used to access the management node and sometimes for the compute nodes to provide services to the site.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In our example, we only deal with the management network because:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the BMCs are in shared mode, so they don't need a separate service network&lt;/li&gt;
&lt;li&gt;we are not showing how to have xCAT automatically configure the application network NICs.  See &lt;a class="" href="/p/xcat/wiki/Configuring_Secondary_Adapters"&gt;Configuring_Secondary_Adapters&lt;/a&gt; if you are interested in that.&lt;/li&gt;
&lt;li&gt;under normal circumstances there is no need to put the site network in the networks table&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For a sample Networks Setup, see the following example: &lt;a class="" href="../Setting_Up_a_Linux_xCAT_Mgmt_Node/#appendix-a-network-table-setup-example"&gt;Setting_Up_a_Linux_xCAT_Mgmt_Node#Appendix_A:_Network_Table_Setup_Example&lt;/a&gt;&lt;/p&gt;
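If makenetworks misses a network, or you add one later, an entry can be created by hand. A sketch using the example addresses from this document (the object name clusternet is arbitrary):

```shell
# Manually define the management network that matches the example
# MN address 172.20.0.1/255.240.0.0 (its network address is 172.16.0.0)
mkdef -t network -o clusternet net=172.16.0.0 mask=255.240.0.0 \
      mgtifname=eth1 gateway=172.20.0.1
# Review the resulting networks table entries
lsdef -t network -l
```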
&lt;h3 id="configure-nics"&gt;&lt;strong&gt;Configure NICS&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Configure the cluster facing NIC(s) on the management node.&lt;br /&gt;
For example edit the following files:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;On&lt;/span&gt; &lt;span class="n"&gt;RH&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;sysconfig&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;network&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;scripts&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;ifcfg&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;eth1&lt;/span&gt;
&lt;span class="n"&gt;On&lt;/span&gt; &lt;span class="n"&gt;SLES&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;sysconfig&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;network&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;ifcfg&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;eth1&lt;/span&gt;

 &lt;span class="n"&gt;DEVICE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;eth1&lt;/span&gt;
 &lt;span class="n"&gt;ONBOOT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;yes&lt;/span&gt;
 &lt;span class="n"&gt;BOOTPROTO&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;static&lt;/span&gt;
 &lt;span class="n"&gt;IPADDR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;172.20.0.1&lt;/span&gt;
 &lt;span class="n"&gt;NETMASK&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;255.240.0.0&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
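After saving the file, restarting the interface and checking the address confirms the static configuration took effect (eth1 matches the example above):

```shell
# Re-read the ifcfg file and bring the NIC up with the new settings
ifdown eth1
ifup eth1
# Confirm the configured address is active
ip addr show eth1
```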
&lt;h3 id="prevent-dhcp-client-from-overwriting-dns-configuration-optional"&gt;&lt;strong&gt;Prevent DHCP client from overwriting DNS configuration (Optional)&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;If the public-facing NIC on your management node is configured by DHCP, you may want to set &lt;strong&gt;PEERDNS=no&lt;/strong&gt; in the NIC's config file to prevent the dhclient from rewriting /etc/resolv.conf. This is important if you will be configuring DNS on the management node (via makedns - covered later in this doc) and want the management node itself to use that DNS. In this case, set &lt;strong&gt;PEERDNS=no&lt;/strong&gt; in each /etc/sysconfig/network-scripts/ifcfg-* file that has &lt;strong&gt;BOOTPROTO=dhcp&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;On the other hand, if you &lt;strong&gt;want&lt;/strong&gt; dhclient to configure /etc/resolv.conf on your management node, then don't set PEERDNS=no in the NIC config files.&lt;/p&gt;
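The setting itself is a single line per file. A sketch that adds it to every DHCP-configured ifcfg file; NETDIR defaults to the RH location used above, and the loop is an illustration rather than a required procedure:

```shell
# Add PEERDNS=no to each ifcfg file that uses DHCP, unless already set
NETDIR="${NETDIR:-/etc/sysconfig/network-scripts}"
for f in "$NETDIR"/ifcfg-*; do
    [ -f "$f" ] || continue
    if grep -q '^BOOTPROTO=dhcp' "$f"; then
        grep -q '^PEERDNS=' "$f" || echo 'PEERDNS=no' >> "$f"
    fi
done
```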
&lt;h3 id="configure-hostname"&gt;&lt;strong&gt;Configure hostname&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;The xCAT management node hostname should be configured before installing xCAT on the management node. The hostname or its resolvable IP address will be used as the default master name in the xCAT site table when xCAT is installed. This name needs to be the one that resolves to the cluster-facing NIC. Short hostnames (no domain) are the norm for the management node and all cluster nodes. Node names should never end in "-enx" for any x.&lt;/p&gt;
&lt;p&gt;To set the hostname, edit /etc/sysconfig/network to contain, for example:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="n"&gt;HOSTNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;mgt&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;If you run the hostname command, it should return the same:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="n"&gt;hostname&lt;/span&gt;
 &lt;span class="n"&gt;mgt&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="setup-basic-hosts-file"&gt;&lt;strong&gt;Setup basic hosts file&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Ensure that at least the management node is in /etc/hosts:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="mf"&gt;127.0.0.1&lt;/span&gt;               &lt;span class="n"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;localdomain&lt;/span&gt; &lt;span class="n"&gt;localhost&lt;/span&gt;
 &lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;                     &lt;span class="n"&gt;localhost6&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;localdomain6&lt;/span&gt; &lt;span class="n"&gt;localhost6&lt;/span&gt;
 &lt;span class="err"&gt;###&lt;/span&gt;
 &lt;span class="mf"&gt;172.20.0.1&lt;/span&gt; &lt;span class="n"&gt;mgt&lt;/span&gt; &lt;span class="n"&gt;mgt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cluster&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="setup-the-timezone"&gt;&lt;strong&gt;Setup the TimeZone&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;When using the management node to install compute nodes, the timezone configuration on the management node will be inherited by the compute nodes, so it is recommended to set up the correct timezone on the management node. To do this on RHEL, see &lt;a href="http://www.redhat.com/advice/tips/timezone.html" rel="nofollow"&gt;http://www.redhat.com/advice/tips/timezone.html&lt;/a&gt;. The process is similar, but not identical, for SLES. (Just google it.)&lt;/p&gt;
&lt;p&gt;You can also optionally set up the MN as an NTP for the cluster.  See &lt;a class="" href="/p/xcat/wiki/Setting_up_NTP_in_xCAT"&gt;Setting_up_NTP_in_xCAT&lt;/a&gt;.&lt;/p&gt;
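On RHEL 5/6 the timezone is recorded in /etc/sysconfig/clock and /etc/localtime; a sketch, where America/New_York is an example zone to substitute with your own:

```shell
# Record the zone name for system tools (RHEL convention; preserve any
# other keys your distro keeps in this file)
echo 'ZONE="America/New_York"' > /etc/sysconfig/clock
# Install the matching zoneinfo data as the active timezone
cp /usr/share/zoneinfo/America/New_York /etc/localtime
# Verify the result
date
```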
&lt;h3 id="create-a-separate-file-system-for-install-optional"&gt;&lt;strong&gt;Create a Separate File system for /install (optional)&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;It is not required, but recommended, that you create a separate file system for the /install directory on the Management Node. The size should be large enough to hold several install images.&lt;/p&gt;
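One common way to carve out /install on an LVM-based management node; the volume group name, size, and filesystem type are placeholders for your environment:

```shell
# Create and mount a dedicated logical volume for /install
# (rootvg, 100G, and ext4 are illustrative choices)
lvcreate -L 100G -n installlv rootvg
mkfs.ext4 /dev/rootvg/installlv
mkdir -p /install
mount /dev/rootvg/installlv /install
# Persist the mount across reboots
echo '/dev/rootvg/installlv /install ext4 defaults 0 0' >> /etc/fstab
```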
&lt;h3 id="restart-management-node"&gt;&lt;strong&gt;Restart Management Node&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Note: in xCAT 2.8 and above, you do not need to restart the management node.  Simply restart the cluster-facing NIC, for example:  ifdown eth1; ifup eth1&lt;/p&gt;
&lt;p&gt;For xCAT 2.7 and below, although it is possible to restart just the affected services, the simplest approach is to reboot the Management Node at this point.&lt;/p&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="install-xcat-on-the-management-node"&gt;Install xCAT on the Management Node&lt;/h2&gt;
&lt;div&gt;
&lt;div class="markdown_content"&gt;&lt;h3 id="get-the-xcat-installation-source"&gt;&lt;strong&gt;Get the xCAT Installation Source&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;There are two options to get the installation source of xCAT: &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;download the xCAT installation packages &lt;/li&gt;
&lt;li&gt;or install directly from the internet-hosted repository &lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Pick either one, but not both. &lt;/p&gt;
&lt;h4 id="option-1-prepare-for-the-install-of-xcat-without-internet-access"&gt;&lt;strong&gt;Option 1: Prepare for the Install of xCAT without Internet Access&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;If you are unable to, or do not want to, use the live internet repository, choose this option. &lt;/p&gt;
&lt;p&gt;Go to the &lt;a class="" href="/p/xcat/wiki/Download_xCAT"&gt;Download xCAT&lt;/a&gt; site and download the level of xCAT tarball you desire. Go to the &lt;a class="" href="http://sourceforge.net/projects/xcat/files/xcat-dep/2.x_Linux"&gt;xCAT Dependencies Download&lt;/a&gt; page and download the latest snap of the xCAT dependency tarball. (The latest snap of the xCAT dependency tarball will work with any version of xCAT.) &lt;/p&gt;
&lt;p&gt;Copy the files to the Management Node (MN) and untar them: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;mkdir&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;xcat2&lt;/span&gt;
&lt;span class="n"&gt;cd&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;xcat2&lt;/span&gt;
&lt;span class="n"&gt;tar&lt;/span&gt; &lt;span class="n"&gt;jxvf&lt;/span&gt; &lt;span class="n"&gt;xcat&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;2.&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tar&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bz2&lt;/span&gt;     &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="n"&gt;or&lt;/span&gt; &lt;span class="n"&gt;core&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;rpms&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;snap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tar&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bz2&lt;/span&gt;
&lt;span class="n"&gt;tar&lt;/span&gt; &lt;span class="n"&gt;jxvf&lt;/span&gt; &lt;span class="n"&gt;xcat&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;dep&lt;/span&gt;&lt;span class="o"&gt;-*&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tar&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bz2&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Point yum/zypper to the local repositories for xCAT and its dependencies: &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;[RH]:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="nx"&gt;cd&lt;/span&gt; &lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nb"&gt;root&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;xcat2&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;xcat&lt;/span&gt;&lt;span class="na"&gt;-dep&lt;/span&gt;&lt;span class="o"&gt;/&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;release&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;/&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;arch&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;.&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;mklocalrepo.sh&lt;/span&gt;
&lt;span class="nx"&gt;cd&lt;/span&gt; &lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nb"&gt;root&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;xcat2&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;xcat&lt;/span&gt;&lt;span class="na"&gt;-core&lt;/span&gt;
&lt;span class="nx"&gt;.&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;mklocalrepo.sh&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;[SLES 11]:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="nx"&gt;zypper&lt;/span&gt; &lt;span class="nx"&gt;ar&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;///root/xcat2/xcat-dep/sles11/&amp;lt;arch&amp;gt; xCAT-dep &lt;/span&gt;
 &lt;span class="nx"&gt;zypper&lt;/span&gt; &lt;span class="nx"&gt;ar&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;///root/xcat2/xcat-core  xcat-core&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;[SLES 10.2+]:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="nx"&gt;zypper&lt;/span&gt; &lt;span class="nx"&gt;sa&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;///root/xcat2/xcat-dep/sles10/&amp;lt;arch&amp;gt; xCAT-dep&lt;/span&gt;
&lt;span class="nx"&gt;zypper&lt;/span&gt; &lt;span class="nx"&gt;sa&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;///root/xcat2/xcat-core xcat-core&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h4 id="option-2-use-the-internet-hosted-xcat-repository"&gt;&lt;strong&gt;Option 2: Use the Internet-hosted xCAT Repository&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;When using the live internet repository, first make sure that name resolution on your management node is set up at least well enough to resolve sourceforge.net. Then make sure the correct repo files are in /etc/yum.repos.d.&lt;/p&gt;
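&lt;p&gt;For orientation, a repo file in /etc/yum.repos.d is a small ini-style fragment along these lines (the section name and release number here are illustrative; the file you download already contains the correct values):&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;[xcat-2-core]
name=xCAT 2 Core packages
baseurl=http://sourceforge.net/projects/xcat/files/yum/2.8/xcat-core
enabled=1
gpgcheck=0
&lt;/pre&gt;&lt;/div&gt;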
&lt;h5 id="internet-repo-for-xcat-core"&gt;&lt;strong&gt;Internet repo for xCAT-core&lt;/strong&gt;&lt;/h5&gt;
&lt;p&gt;You could use the &lt;strong&gt;official release&lt;/strong&gt; or &lt;strong&gt;latest snapshot build&lt;/strong&gt; or &lt;strong&gt;development build&lt;/strong&gt;, based on your requirements.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;To get the repo file for the current official release&lt;/strong&gt;: &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;[RH]:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;wget http://sourceforge.net/projects/xcat/files/yum/&amp;lt;xCAT-release&amp;gt;/xcat-core/xCAT-core.repo
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;for example: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;cd&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;yum&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;repos&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt;
&lt;span class="n"&gt;wget&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//sourceforge.net/projects/xcat/files/yum/2.8/xcat-core/xCAT-core.repo&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;[SLES11]:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;zypper ar -t rpm-md http://sourceforge.net/projects/xcat/files/yum/&amp;lt;xCAT-release&amp;gt;/xcat-core xCAT-core
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;for example: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;zypper&lt;/span&gt; &lt;span class="n"&gt;ar&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;rpm&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;md&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//sourceforge.net/projects/xcat/files/yum/2.8/xcat-core xCAT-core&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;[SLES10.2+]:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;zypper&lt;/span&gt; &lt;span class="n"&gt;sa&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//sourceforge.net/projects/xcat/files/yum/&amp;lt;xCAT-release\&amp;gt;/xcat-core xCAT-core&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;for example:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="n"&gt;zypper&lt;/span&gt; &lt;span class="n"&gt;sa&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//sourceforge.net/projects/xcat/files/yum/2.8/xcat-core xCAT-core&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;To get the repo file for the latest snapshot build, which includes the latest bug fixes, but is not completely tested:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;[RH]:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;wget http://sourceforge.net/projects/xcat/files/yum/&amp;lt;xCAT-release&amp;gt;/core-snap/xCAT-core.repo
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;for example:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;cd&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;yum&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;repos&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt;
&lt;span class="n"&gt;wget&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//sourceforge.net/projects/xcat/files/yum/2.8/core-snap/xCAT-core.repo&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;[SLES11]:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;zypper ar -t rpm-md http://sourceforge.net/projects/xcat/files/yum/&amp;lt;xCAT-release&amp;gt;/core-snap xCAT-core
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;for example: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;zypper&lt;/span&gt; &lt;span class="n"&gt;ar&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;rpm&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;md&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//sourceforge.net/projects/xcat/files/yum/2.8/core-snap xCAT-core&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;[SLES10.2+]:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;zypper&lt;/span&gt; &lt;span class="n"&gt;sa&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//sourceforge.net/projects/xcat/files/yum/&amp;lt;xCAT-release\&amp;gt;/core-snap xCAT-core&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;for example:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="n"&gt;zypper&lt;/span&gt; &lt;span class="n"&gt;sa&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//sourceforge.net/projects/xcat/files/yum/2.8/core-snap xCAT-core&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;To get the repo file for the latest development build, which is the snapshot build of the new version we are actively developing. This version has not been released yet; use at your own risk:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;[RH]:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;wget&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//sourceforge.net/projects/xcat/files/yum/devel/core-snap/xCAT-core.repo&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;[SLES11]:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;zypper&lt;/span&gt; &lt;span class="n"&gt;ar&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;rpm&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;md&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//sourceforge.net/projects/xcat/files/yum/devel/core-snap xCAT-core&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;[SLES10.2+]:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;zypper&lt;/span&gt; &lt;span class="n"&gt;sa&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//sourceforge.net/projects/xcat/files/yum/devel/core-snap xCAT-core&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h5 id="internet-repo-for-xcat-dep"&gt;&lt;strong&gt;Internet repo for xCAT-dep&lt;/strong&gt;&lt;/h5&gt;
&lt;p&gt;&lt;strong&gt;To get the repo file for xCAT-dep packages:&lt;/strong&gt; &lt;/p&gt;
&lt;p&gt;&lt;strong&gt; [RH]:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;wget http://sourceforge.net/projects/xcat/files/yum/xcat-dep/&amp;lt;OS-release&amp;gt;/&amp;lt;arch&amp;gt;/xCAT-dep.repo
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;for example: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;wget&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//sourceforge.net/projects/xcat/files/yum/xcat-dep/rh6/x86_64/xCAT-dep.repo&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;[SLES11]:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;zypper ar -t rpm-md http://sourceforge.net/projects/xcat/files/yum/xcat-dep/&amp;lt;OS-release&amp;gt;/&amp;lt;arch&amp;gt; xCAT-dep
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;for example: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;zypper&lt;/span&gt; &lt;span class="n"&gt;ar&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;rpm&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;md&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//sourceforge.net/projects/xcat/files/yum/xcat-dep/sles11/x86_64 xCAT-dep&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;[SLES10.2+]:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;zypper&lt;/span&gt; &lt;span class="n"&gt;sa&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//sourceforge.net/projects/xcat/files/yum/xcat-dep/&amp;lt;OS-release\&amp;gt;/&amp;lt;arch\&amp;gt; xCAT-dep&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;for example:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;zypper&lt;/span&gt; &lt;span class="n"&gt;sa&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//sourceforge.net/projects/xcat/files/yum/xcat-dep/sles10/x86_64 xCAT-dep&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="for-both-options-make-required-packages-from-the-distro-available"&gt;&lt;strong&gt;For both Options: Make Required Packages From the Distro Available&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;xCAT depends on several packages that come from the Linux distro. Follow this section to create the repository of the OS on the Management Node. &lt;/p&gt;
&lt;p&gt;See the following documentation: &lt;/p&gt;
&lt;p&gt;&lt;a class="" href="/p/xcat/wiki/Setting_Up_the_OS_Repository_on_the_Mgmt_Node"&gt;Setting Up the OS Repository on the Mgmt Node&lt;/a&gt; &lt;/p&gt;
&lt;h3 id="install-xcat-packages"&gt;&lt;strong&gt;Install xCAT Packages&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;[RH]: Use yum to install xCAT and all the dependencies: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;yum&lt;/span&gt; &lt;span class="n"&gt;clean&lt;/span&gt; &lt;span class="n"&gt;metadata&lt;/span&gt;
&lt;span class="n"&gt;yum&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="n"&gt;xCAT&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;[SLES]: Use zypper to install xCAT and all the dependencies: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;zypper&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="n"&gt;xCAT&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="optional-install-the-packages-for-sysclone"&gt;&lt;strong&gt;(Optional) Install the Packages for sysclone&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Note: in xCAT 2.8.2 and above, xCAT supports cloning new nodes from a pre-installed/pre-configured node; this provisioning method is called &lt;strong&gt;sysclone&lt;/strong&gt;. It leverages the open source tool &lt;a class="" href="http://www.systemimager.org" rel="nofollow"&gt;systemimager&lt;/a&gt;. If you will be installing stateful (diskful) nodes using the &lt;strong&gt;sysclone&lt;/strong&gt; provmethod, you need to install systemimager and all of its dependencies: &lt;/p&gt;
&lt;p&gt;[RH]: Use yum to install systemimager and all the dependencies: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;yum&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="n"&gt;systemimager&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;server&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;[SLES]: Use zypper to install systemimager and all the dependencies: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;zypper&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="n"&gt;systemimager&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;server&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="quick-test-of-xcat-installation"&gt;&lt;strong&gt;Quick Test of xCAT Installation&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Add xCAT commands to the path by running the following: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;source&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;profile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;xcat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sh&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Check to see the database is initialized: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;tabdump&lt;/span&gt; &lt;span class="n"&gt;site&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The output should be similar to the following: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;comments&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;disable&lt;/span&gt;
    &lt;span class="s"&gt;"xcatdport"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"3001"&lt;/span&gt;&lt;span class="p"&gt;,,&lt;/span&gt;
    &lt;span class="s"&gt;"xcatiport"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"3002"&lt;/span&gt;&lt;span class="p"&gt;,,&lt;/span&gt;
    &lt;span class="s"&gt;"tftpdir"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"/tftpboot"&lt;/span&gt;&lt;span class="p"&gt;,,&lt;/span&gt;
    &lt;span class="s"&gt;"installdir"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"/install"&lt;/span&gt;&lt;span class="p"&gt;,,&lt;/span&gt;
         &lt;span class="p"&gt;.&lt;/span&gt;
         &lt;span class="p"&gt;.&lt;/span&gt;
         &lt;span class="p"&gt;.&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;If the tabdump command does not work, see &lt;a class="" href="/p/xcat/wiki/Debugging_xCAT_Problems"&gt;Debugging xCAT Problems&lt;/a&gt;. &lt;/p&gt;
&lt;h3 id="updating-xcat-packages-later"&gt;&lt;strong&gt;Updating xCAT Packages Later&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;If you need to update the xCAT RPMs later: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;If the management node does not have access to the internet&lt;/strong&gt;: download the new version of xCAT from &lt;a class="" href="/p/xcat/wiki/Download_xCAT"&gt;Download xCAT&lt;/a&gt; and the dependencies from &lt;a class="" href="http://sourceforge.net/project/showfiles.php?group_id=208749&amp;amp;package_id=258529&amp;amp;release_id=608981"&gt;xCAT Dependencies Download&lt;/a&gt; and untar them in the same place as before. &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;If the management node has access to the internet&lt;/strong&gt;, the commands below will pull the updates directly from the xCAT site. &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To update xCAT: &lt;/p&gt;
&lt;p&gt;[RH]: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;yum&lt;/span&gt; &lt;span class="n"&gt;clean&lt;/span&gt; &lt;span class="n"&gt;metadata&lt;/span&gt;
    &lt;span class="n"&gt;yum&lt;/span&gt; &lt;span class="n"&gt;update&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;xCAT&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;[SLES]: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;zypper&lt;/span&gt; &lt;span class="n"&gt;refresh&lt;/span&gt;
    &lt;span class="n"&gt;zypper&lt;/span&gt; &lt;span class="n"&gt;update&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;package&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;xCAT&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Note: this will not apply updates that may have been made to some of the xCAT deps packages. (If there are brand new deps packages, they will get installed.) In most cases, this is ok, but if you want to make all updates for xCAT rpms and deps, run the following command. This command will also pick up additional OS updates. &lt;/p&gt;
&lt;p&gt;[RH]: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;yum&lt;/span&gt; &lt;span class="n"&gt;update&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;[SLES]: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;zypper&lt;/span&gt; &lt;span class="n"&gt;refresh&lt;/span&gt;
    &lt;span class="n"&gt;zypper&lt;/span&gt; &lt;span class="n"&gt;update&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Sometimes zypper refresh fails to refresh the local zypper repository. Run zypper clean to clear the local metadata, then run zypper refresh again. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you are updating from xCAT 2.7.x (or earlier) to xCAT 2.8 or later, there are some additional migration steps that need to be considered: &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Switch from xCAT IBM HPC Integration support to using Software Kits - see &lt;br /&gt;
&lt;a class="" href="../IBM_HPC_Software_Kits/#switching-from-xcat-ibm-hpc-integration-support-to-using-software-kits"&gt;IBM_HPC_Software_Kits#Switching_from_xCAT_IBM_HPC_Integration_Support_to_Using_Software_Kits&lt;/a&gt;&lt;br /&gt;
 for details. &lt;/li&gt;
&lt;li&gt;(Optional) Use the nic attributes in place of the otherinterfaces attribute to configure secondary adapters - see &lt;a class="" href="/p/xcat/wiki/Cluster_Name_Resolution"&gt;Cluster_Name_Resolution&lt;/a&gt; for details. &lt;/li&gt;
&lt;li&gt;Convert non-osimage based system to osimage based system - see&lt;br /&gt;
&lt;a class="" href="/p/xcat/wiki/Convert_Non-osimage_Based_System_To_Osimage_Based_System"&gt;Convert_Non-osimage_Based_System_To_Osimage_Based_System&lt;/a&gt; for details&lt;/li&gt;
&lt;/ol&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="configure-xcat"&gt;Configure xCAT&lt;/h2&gt;
&lt;div&gt;
&lt;div class="markdown_content"&gt;&lt;div class="toc"&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#networks-table"&gt;Networks Table&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#passwd-table"&gt;passwd Table&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#setup-dns"&gt;Setup DNS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#setup-tftp"&gt;Setup TFTP&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#setup-conserver"&gt;Setup conserver&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#setup-dhcp"&gt;Setup DHCP&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h3 id="networks-table"&gt;&lt;strong&gt;Networks Table&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;All networks in the cluster must be defined in the networks table. When xCAT was installed, it ran makenetworks, which created an entry in this table for each of the networks the management node is connected to. Now is the time to add to the networks table any other networks in the cluster, or update existing networks in the table. &lt;/p&gt;
&lt;p&gt;For a sample Networks Setup, see the following example in Appendix_A &lt;br /&gt;
&lt;a class="" href="../Setting_Up_a_Linux_xCAT_Mgmt_Node/#appendix-a-network-table-setup-example"&gt;Setting_Up_a_Linux_xCAT_Mgmt_Node#Appendix_A:_Network_Table_Setup_Example&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="passwd-table"&gt;&lt;strong&gt;passwd Table&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;The passwd table holds the password that will be assigned to root when a node is installed. You can modify this table using tabedit. To change the default password for root on the nodes, change the system line. To change the password to be used for the BMCs (x-series only), change the ipmi line. &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;tabedit&lt;/span&gt; &lt;span class="n"&gt;passwd&lt;/span&gt;
&lt;span class="cp"&gt;#key,username,password,cryptmethod,comments,disable&lt;/span&gt;
&lt;span class="s"&gt;"system"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"root"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"cluster"&lt;/span&gt;&lt;span class="p"&gt;,,,&lt;/span&gt;
&lt;span class="s"&gt;"hmc"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"hscroot"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"ABC123"&lt;/span&gt;&lt;span class="p"&gt;,,,&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
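&lt;p&gt;If you prefer a non-interactive edit, the same change can be made with chtab; the values below simply repeat the defaults shown above:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;chtab key=system passwd.username=root passwd.password=cluster
&lt;/pre&gt;&lt;/div&gt;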
&lt;h3 id="setup-dns"&gt;&lt;strong&gt;Setup DNS&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;To get the hostname/IP pairs copied from /etc/hosts to the DNS on the MN: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ensure that /etc/sysconfig/named does not have ROOTDIR set &lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set site.forwarders to your site-wide DNS servers that can resolve site or public hostnames. The DNS on the MN will forward any requests it can't answer to these servers. &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;chdef -t site forwarders=1.2.3.4,1.2.5.6
&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Edit /etc/resolv.conf to point the MN to its own DNS. (Note: this won't be required in xCAT 2.8 and above.) &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;search cluster
nameserver 172.20.0.1
&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run makedns &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;makedns &amp;amp;&amp;amp; service named start
&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For more information about name resolution in an xCAT Cluster, see &lt;a class="alink" href="/p/xcat/wiki/Cluster_Name_Resolution"&gt;[Cluster_Name_Resolution]&lt;/a&gt;. &lt;/p&gt;
&lt;h3 id="setup-tftp"&gt;&lt;strong&gt;Setup TFTP&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Nothing to do here - the TFTP server is set up by xCAT during the Management Node install. &lt;/p&gt;
&lt;h3 id="setup-conserver"&gt;&lt;strong&gt;Setup conserver&lt;/strong&gt;&lt;/h3&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;makeconservercf&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="setup-dhcp"&gt;&lt;strong&gt;Setup DHCP&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;You usually don't want your DHCP server listening on your public (site) network, so set site.dhcpinterfaces to your MN's cluster-facing NICs. For example: &lt;/p&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;h4 id="optionalsetup-the-dhcp-interfaces-in-site-table"&gt;&lt;strong&gt;(Optional)Setup the DHCP interfaces in site table&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;To set up the site table dhcp interfaces for your system p cluster, identify the correct interfaces that xCAT should listen to on your management node and service nodes:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;     &lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;site&lt;/span&gt; &lt;span class="n"&gt;dhcpinterfaces&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;pmanagenode&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;eth1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;eth0&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;
     &lt;span class="n"&gt;makedhcp&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;
     &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="n"&gt;dhcpd&lt;/span&gt; &lt;span class="n"&gt;restart&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;For a Power 775 cluster:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;     &lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;site&lt;/span&gt; &lt;span class="n"&gt;dhcpinterfaces&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;pmanagenode&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;eth1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;hf0&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;
     &lt;span class="n"&gt;makedhcp&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;
     &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="n"&gt;dhcpd&lt;/span&gt; &lt;span class="n"&gt;restart&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h2 id="discover-and-define-your-system-p-hardware"&gt;Discover and Define Your System p Hardware&lt;/h2&gt;
&lt;p&gt;The next steps are to discover your hardware on the network, define it in the xCAT database, configure xCAT's hardware control, and do the initial definition of the LPARs as the compute nodes. These steps are explained in the following documents. Use the one that applies to your environment:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For Power 775 clusters, use &lt;a class="alink" href="/p/xcat/wiki/XCAT_Power_775_Hardware_Management"&gt;[XCAT_Power_775_Hardware_Management]&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;For PowerLinux clusters, use &lt;a class="alink" href="/p/xcat/wiki/XCAT_PowerLinux_Hardware_Management"&gt;[XCAT_PowerLinux_Hardware_Management]&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;For all other system p hardware, use &lt;a class="alink" href="/p/xcat/wiki/XCAT_System_p_Hardware_Management"&gt;[XCAT_System_p_Hardware_Management]&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After performing the steps in one of those documents, return here and continue on in this document.&lt;/p&gt;
&lt;h2 id="additional-setup-on-the-mn-for-power-775-clusters-only"&gt;Additional Setup on the MN for Power 775 Clusters Only&lt;/h2&gt;
&lt;p&gt;If you are not setting up a Power 775 cluster, skip this section.&lt;/p&gt;
&lt;p&gt;The xCAT management node must be configured and running on the DB2 database before you set up the following cluster components.&lt;/p&gt;
&lt;p&gt;This section describes the additional setup required for Power 775 support: the setup of the cluster hardware components and the installation of TEAL, ISNM, and LoadLeveler on the xCAT management node. TEAL, ISNM, and LoadLeveler have dependencies on each other, so all three must be installed.&lt;/p&gt;
&lt;h3 id="downloading-and-installing-dfm-and-hdwr_svr"&gt;Downloading and Installing DFM and hdwr_svr&lt;/h3&gt;
&lt;p&gt;Refer to the following documentation for downloading and installing DFM and hdwr_svr in an HPC cluster: &lt;a class="" href="https://sourceforge.net/apps/wiki/xcat/index.php?title=XCAT_Power_775_Hardware_Management#Downloading_and_Installing_DFM"&gt;Downloading and Installing DFM&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="obtain-additional-packages-for-hfi-network"&gt;Obtain additional packages for HFI network&lt;/h3&gt;
&lt;p&gt;To work with the HFI network in Power 775 clusters, the following RPMs and scripts must be obtained from IBM and placed on the xCAT MN in the suggested directories. These packages and files should be available as part of the IBM LTC RH6 customized kernel:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;hfi&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;dd&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;kernel&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;2.6.32&lt;/span&gt;&lt;span class="o"&gt;-*&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt;
&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;hfi&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;dd&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;kernel&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;2.6.32&lt;/span&gt;&lt;span class="o"&gt;-*&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt;
&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;hfi&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;dd&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;hfi_util&lt;/span&gt;&lt;span class="o"&gt;-*&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;el6&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt;
&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;hfi&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;dd&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;hfi_ndai&lt;/span&gt;&lt;span class="o"&gt;-*&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;el6&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt;

&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;hfi&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;dhcp&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;-*&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;el6&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt;
&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;hfi&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;dhcp&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;dhcp&lt;/span&gt;&lt;span class="o"&gt;-*&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;el6&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt;
&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;hfi&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;dhcp&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;dhclient&lt;/span&gt;&lt;span class="o"&gt;-*&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;el6&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
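&lt;p&gt;Before continuing, it can help to confirm that everything in the list above is actually staged on the MN. The following is a minimal shell sketch, assuming the /hfi staging paths suggested above; it only reports what is missing and changes nothing.&lt;/p&gt;

```shell
# Report any expected HFI package pattern that is not yet staged on the MN.
# The /hfi/dd and /hfi/dhcp paths are the suggested directories listed above.
missing=0
for pat in /hfi/dd/kernel-2.6.32-*.ppc64.rpm \
           /hfi/dd/kernel-headers-2.6.32-*.ppc64.rpm \
           /hfi/dd/hfi_util-*.el6.ppc64.rpm \
           /hfi/dd/hfi_ndai-*.el6.ppc64.rpm \
           /hfi/dhcp/net-tools-*.el6.ppc64.rpm \
           /hfi/dhcp/dhcp-*.el6.ppc64.rpm \
           /hfi/dhcp/dhclient-*.el6.ppc64.rpm; do
    # An unmatched glob stays literal, so ls fails for anything not staged.
    ls $pat >/dev/null 2>&1 || { echo "missing: $pat"; missing=$((missing+1)); }
done
echo "$missing package pattern(s) not found"
```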
&lt;h3 id="install-loadleveler"&gt;Install LoadLeveler&lt;/h3&gt;
&lt;p&gt;Refer to the following documentation for setting up LL in an HPC cluster:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="alink" href="/p/xcat/wiki/Setting_up_LoadLeveler_in_a_Stateful_Cluster"&gt;[Setting_up_LoadLeveler_in_a_Stateful_Cluster]&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="alink" href="/p/xcat/wiki/Setting_up_LoadLeveler_in_a_Statelite_or_Stateless_Cluster"&gt;[Setting_up_LoadLeveler_in_a_Statelite_or_Stateless_Cluster]&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="install-teal"&gt;&lt;strong&gt;Install Teal&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Download the Teal prerequisite rpm packages and the Teal product rpms to your xCAT management node. The Teal packages are available from the Teal website: &lt;a href="http://sourceforge.net/projects/pyteal/files"&gt;http://sourceforge.net/projects/pyteal/files/&lt;/a&gt; The Teal product has a prerequisite on the LoadL resource manager. Place the packages in a directory such as:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;post&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;otherpkgs&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;teal&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The Teal prerequisites are:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="n"&gt;gdbm&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.8.3&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;
 &lt;span class="n"&gt;readline&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;4.3&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;
 &lt;span class="n"&gt;python&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;2.6.2&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;
 &lt;span class="n"&gt;perl&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;Module&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;Load&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;0.16&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;115&lt;/span&gt;
 &lt;span class="n"&gt;pyodbc&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;2.1.7&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;   &lt;span class="o"&gt;***&lt;/span&gt;  &lt;span class="n"&gt;this&lt;/span&gt; &lt;span class="n"&gt;is&lt;/span&gt; &lt;span class="n"&gt;in&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;latest&lt;/span&gt; &lt;span class="n"&gt;xCAT&lt;/span&gt; &lt;span class="n"&gt;deps&lt;/span&gt; &lt;span class="n"&gt;package&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Other than pyodbc, these rpms were most likely installed with your base Red Hat installation. If they are not installed, run a command similar to the following to install them:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="n"&gt;yum&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="n"&gt;gdbm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt; &lt;span class="n"&gt;readline&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt; &lt;span class="n"&gt;python&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt; &lt;span class="n"&gt;perl&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;Module&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;Load&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The Teal rpms for the xCAT EMS are:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;teal&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;base&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.1.0.0&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;used&lt;/span&gt; &lt;span class="n"&gt;with&lt;/span&gt; &lt;span class="n"&gt;base&lt;/span&gt; &lt;span class="n"&gt;Teal&lt;/span&gt; &lt;span class="n"&gt;support&lt;/span&gt;
&lt;span class="n"&gt;teal&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;ll&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.1.0.0&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;used&lt;/span&gt; &lt;span class="n"&gt;with&lt;/span&gt; &lt;span class="n"&gt;LL&lt;/span&gt; &lt;span class="n"&gt;teal&lt;/span&gt;
&lt;span class="n"&gt;teal&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;sfp&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.1.0.0&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;used&lt;/span&gt; &lt;span class="n"&gt;with&lt;/span&gt; &lt;span class="n"&gt;HMC&lt;/span&gt; &lt;span class="n"&gt;Service&lt;/span&gt; &lt;span class="n"&gt;focal&lt;/span&gt; &lt;span class="n"&gt;point&lt;/span&gt;
&lt;span class="n"&gt;teal&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;isnm&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.1.0.0&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;used&lt;/span&gt; &lt;span class="n"&gt;with&lt;/span&gt; &lt;span class="n"&gt;isnm&lt;/span&gt; &lt;span class="n"&gt;Teal&lt;/span&gt;
&lt;span class="n"&gt;teal&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;pnsd&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.1.0.0&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;used&lt;/span&gt; &lt;span class="n"&gt;with&lt;/span&gt; &lt;span class="n"&gt;PE&lt;/span&gt;  &lt;span class="n"&gt;pnsd&lt;/span&gt; &lt;span class="n"&gt;teal&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;There are other Teal rpms for GPFS that are required on the Power 775 cluster. GPFS is not required on the EMS, but is required on the GPFS I/O server nodes. The teal-gpfs-sn-1.1.0.0-1 rpm has dependencies on gpfs.base and libmmantras.so.&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;teal&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;gpfs&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.1.0.0&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt;     &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;used&lt;/span&gt; &lt;span class="n"&gt;with&lt;/span&gt; &lt;span class="n"&gt;base&lt;/span&gt; &lt;span class="n"&gt;GPFS&lt;/span&gt;
&lt;span class="n"&gt;teal&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;gpfs&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;sn&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.1.0.0&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt;  &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;used&lt;/span&gt; &lt;span class="n"&gt;with&lt;/span&gt; &lt;span class="n"&gt;GPFS&lt;/span&gt; &lt;span class="n"&gt;server&lt;/span&gt; &lt;span class="n"&gt;nodes&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;To install:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;     &lt;span class="nx"&gt;yum&lt;/span&gt; &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nx"&gt;pyodbc&lt;/span&gt;
     &lt;span class="nx"&gt;cd&lt;/span&gt; &lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nb"&gt;install&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;otherpkgs&lt;/span&gt;&lt;span class="o"&gt;/&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;os&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;/&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;arch&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;/&lt;/span&gt;&lt;span class="nx"&gt;teal&lt;/span&gt;
     &lt;span class="nx"&gt;rpm&lt;/span&gt; &lt;span class="na"&gt;-Uvh&lt;/span&gt; &lt;span class="nx"&gt;.&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;teal&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="bp"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rpm&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The Teal executables are located in the /opt/teal/bin directory on the EMS. For more information on Teal, refer to the Teal documentation. (Need to add pointer when available)&lt;/p&gt;
&lt;p&gt;Teal tables should be viewed using the tllsalert and tllsevent commands, or the xCAT database commands (e.g. tabdump, tabedit).&lt;/p&gt;
&lt;p&gt;For a full list of Teal commands, see: &lt;a href="https://sourceforge.net/apps/mediawiki/pyteal/index.php?title=Command_Reference"&gt;https://sourceforge.net/apps/mediawiki/pyteal/index.php?title=Command_Reference&lt;/a&gt;&lt;/p&gt;
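&lt;p&gt;After the install, a quick way to confirm the query commands are usable is to run them with a guard for the case where they are not yet on PATH; the /opt/teal/bin location is the one noted above.&lt;/p&gt;

```shell
# List current alerts and events with the TEAL query commands named above.
# Guarded so the sketch is safe to run on a host without TEAL installed.
export PATH=/opt/teal/bin:$PATH
for cmd in tllsalert tllsevent; do
    if command -v "$cmd" >/dev/null 2>&1; then
        "$cmd"
    else
        echo "$cmd: not found (is TEAL installed on this host?)"
    fi
done
```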
&lt;p&gt;Changes to the Teal tables should be made only with the following Teal commands:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;tlchalert&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;close&lt;/span&gt; &lt;span class="n"&gt;an&lt;/span&gt; &lt;span class="n"&gt;alert&lt;/span&gt; &lt;span class="n"&gt;that&lt;/span&gt; &lt;span class="n"&gt;has&lt;/span&gt; &lt;span class="n"&gt;been&lt;/span&gt; &lt;span class="n"&gt;resolved&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;  &lt;span class="n"&gt;It&lt;/span&gt; &lt;span class="n"&gt;will&lt;/span&gt; &lt;span class="n"&gt;also&lt;/span&gt; &lt;span class="n"&gt;close&lt;/span&gt; &lt;span class="n"&gt;all&lt;/span&gt; &lt;span class="n"&gt;duplicate&lt;/span&gt; &lt;span class="n"&gt;alerts&lt;/span&gt; &lt;span class="n"&gt;that&lt;/span&gt; &lt;span class="n"&gt;have&lt;/span&gt; &lt;span class="n"&gt;been&lt;/span&gt; &lt;span class="n"&gt;reported&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;
&lt;span class="n"&gt;tlrmalert&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;remove&lt;/span&gt; &lt;span class="n"&gt;select&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;all&lt;/span&gt; &lt;span class="n"&gt;alerts&lt;/span&gt; &lt;span class="n"&gt;that&lt;/span&gt; &lt;span class="n"&gt;have&lt;/span&gt; &lt;span class="n"&gt;been&lt;/span&gt; &lt;span class="n"&gt;closed&lt;/span&gt; &lt;span class="n"&gt;that&lt;/span&gt; &lt;span class="n"&gt;are&lt;/span&gt; &lt;span class="n"&gt;not&lt;/span&gt; &lt;span class="n"&gt;associated&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;other&lt;/span&gt; &lt;span class="n"&gt;alerts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;
&lt;span class="n"&gt;tlrmevent&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;remove&lt;/span&gt; &lt;span class="n"&gt;select&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;all&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt; &lt;span class="n"&gt;that&lt;/span&gt; &lt;span class="n"&gt;are&lt;/span&gt; &lt;span class="n"&gt;not&lt;/span&gt; &lt;span class="n"&gt;associated&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;an&lt;/span&gt; &lt;span class="n"&gt;alert&lt;/span&gt; &lt;span class="n"&gt;that&lt;/span&gt; &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="n"&gt;still&lt;/span&gt; &lt;span class="n"&gt;being&lt;/span&gt; &lt;span class="n"&gt;saved&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;alert&lt;/span&gt; &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Typically a user will do these steps to manage Teal alerts and maintain the Teal tables:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="nx"&gt;Resolve&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;active&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nb"&gt;open&lt;/span&gt; &lt;span class="nx"&gt;alerts&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt; &lt;span class="nb"&gt;remove&lt;/span&gt; &lt;span class="nx"&gt;them&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nx"&gt;tlchalert&lt;/span&gt;
    &lt;span class="nx"&gt;tlrmalert&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;older&lt;/span&gt;&lt;span class="na"&gt;-than&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;timestamp&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;to&lt;/span&gt; &lt;span class="nb"&gt;remove&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;alerts&lt;/span&gt; &lt;span class="nx"&gt;that&lt;/span&gt; &lt;span class="nx"&gt;are&lt;/span&gt; &lt;span class="nx"&gt;no&lt;/span&gt; &lt;span class="nx"&gt;longer&lt;/span&gt; &lt;span class="nx"&gt;required&lt;/span&gt;
    &lt;span class="nx"&gt;tlrmevent&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;older&lt;/span&gt;&lt;span class="na"&gt;-than&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;timestamp&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;to&lt;/span&gt; &lt;span class="nb"&gt;remove&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nb"&gt;events&lt;/span&gt; &lt;span class="nx"&gt;that&lt;/span&gt; &lt;span class="nx"&gt;are&lt;/span&gt; &lt;span class="nx"&gt;no&lt;/span&gt; &lt;span class="nx"&gt;longer&lt;/span&gt; &lt;span class="nx"&gt;required&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
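&lt;p&gt;As an illustration, a periodic cleanup could compute a cutoff and feed it to the two removal commands. The YYYY-MM-DD timestamp format is an assumption here, not confirmed by this document; check tlrmalert -h on your EMS before scripting it. The sketch prints the commands rather than running them.&lt;/p&gt;

```shell
# Compute a 30-day cutoff and show the TEAL cleanup commands that would run.
# NOTE: the YYYY-MM-DD --older-than format is an assumption; verify with -h.
cutoff=$(date -d "30 days ago" +%Y-%m-%d)
echo "tlrmalert --older-than $cutoff"
echo "tlrmevent --older-than $cutoff"
```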
&lt;h3 id="install-isnm"&gt;&lt;strong&gt;Install ISNM&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;Download the ISNM packages to the xCAT MN and place them in a directory such as:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nb"&gt;install&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;otherpkgs&lt;/span&gt;&lt;span class="o"&gt;/&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;os&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;/&lt;/span&gt;&lt;span class="nx"&gt;ppc64&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;isnm&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h4 id="install-isnm-prerequisite-software"&gt;&lt;strong&gt;Install ISNM prerequisite software&lt;/strong&gt;&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Install RSCT&lt;/p&gt;
&lt;p&gt;rpm -ivh rsct.core.utils-3.1.0.2-10266.ppc.rpm rsct.core-3.1.0.2-10266.ppc.rpm src-1.3.1.1-10266.ppc.rpm&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Obtain RSCT from: &lt;a href="https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_US&amp;amp;source=stg-rmc" rel="nofollow"&gt;https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_US&amp;amp;source=stg-rmc&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;There is one hdwr_svr lib that is not in the package and must be manually placed in /usr/lib on the Management Node. (This will be fixed soon.) Right now the lib is in the following backing tree:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;project&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;spreldenali&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;build&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rdenali1107b&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;export&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;ppc64_redhat_6&lt;/span&gt;&lt;span class="mf"&gt;.0.0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;usr&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;lib&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;libnetchmcx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;so&lt;/span&gt;
&lt;span class="n"&gt;cp&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="n"&gt;libnetchmcx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;so&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;usr&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;lib&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="discover-and-define-hardware-components"&gt;Discover and define hardware components&lt;/h3&gt;
&lt;p&gt;The System p hardware components must be discovered, configured, and defined in the xCAT database. If you haven't done so already, follow the steps in &lt;a class="alink" href="/p/xcat/wiki/XCAT_Power_775_Hardware_Management"&gt;[XCAT_Power_775_Hardware_Management]&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="install-and-configure-isnm"&gt;Install and Configure ISNM&lt;/h3&gt;
&lt;p&gt;Install the ISNM package:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="nx"&gt;cd&lt;/span&gt; &lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nb"&gt;install&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;otherpkgs&lt;/span&gt;&lt;span class="o"&gt;/&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;os&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;/&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;arch&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;/&lt;/span&gt;&lt;span class="nx"&gt;isnm&lt;/span&gt;
    &lt;span class="nx"&gt;rpm&lt;/span&gt; &lt;span class="na"&gt;-Uvh&lt;/span&gt; &lt;span class="nx"&gt;.&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;ISNM&lt;/span&gt;&lt;span class="na"&gt;-cnm&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="bp"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;rpm&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Note: you should have already installed the hardware server ISNM-hdwr_svr*.rpm as part of xCAT's DFM.&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;NOTE&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;
&lt;span class="n"&gt;There&lt;/span&gt; &lt;span class="n"&gt;should&lt;/span&gt; &lt;span class="n"&gt;be&lt;/span&gt; &lt;span class="n"&gt;pointer&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;ISNM&lt;/span&gt; &lt;span class="n"&gt;documentation&lt;/span&gt; &lt;span class="n"&gt;in&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;High&lt;/span&gt; &lt;span class="n"&gt;Performance&lt;/span&gt; &lt;span class="n"&gt;Clustering&lt;/span&gt;
&lt;span class="n"&gt;using&lt;/span&gt; &lt;span class="mi"&gt;9125&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;F2C&lt;/span&gt;  &lt;span class="n"&gt;that&lt;/span&gt; &lt;span class="n"&gt;describes&lt;/span&gt;   &lt;span class="n"&gt;HFI&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;ISR&lt;/span&gt; &lt;span class="n"&gt;network&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;Power&lt;/span&gt; &lt;span class="mi"&gt;775&lt;/span&gt;  &lt;span class="n"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="n"&gt;This&lt;/span&gt; &lt;span class="n"&gt;would&lt;/span&gt; &lt;span class="n"&gt;be&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;good&lt;/span&gt; &lt;span class="n"&gt;place&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;describe&lt;/span&gt; &lt;span class="n"&gt;how&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;HFI&lt;/span&gt; &lt;span class="n"&gt;is&lt;/span&gt; &lt;span class="n"&gt;being&lt;/span&gt; &lt;span class="n"&gt;used&lt;/span&gt; &lt;span class="n"&gt;with&lt;/span&gt; &lt;span class="n"&gt;xCAT&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="n"&gt;We&lt;/span&gt; &lt;span class="n"&gt;can&lt;/span&gt; &lt;span class="n"&gt;provide&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;pointer&lt;/span&gt; &lt;span class="n"&gt;where&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;admin&lt;/span&gt; &lt;span class="n"&gt;should&lt;/span&gt; &lt;span class="n"&gt;locate&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;HFI&lt;/span&gt; &lt;span class="n"&gt;device&lt;/span&gt; &lt;span class="n"&gt;drivers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="n"&gt;If&lt;/span&gt; &lt;span class="n"&gt;you&lt;/span&gt; &lt;span class="n"&gt;are&lt;/span&gt; &lt;span class="n"&gt;only&lt;/span&gt; &lt;span class="n"&gt;trying&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;communicate&lt;/span&gt; &lt;span class="n"&gt;over&lt;/span&gt; &lt;span class="n"&gt;hfi&lt;/span&gt; &lt;span class="n"&gt;from&lt;/span&gt; &lt;span class="n"&gt;one&lt;/span&gt; &lt;span class="n"&gt;octant&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;another&lt;/span&gt; &lt;span class="n"&gt;in&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;same&lt;/span&gt; &lt;span class="n"&gt;CEC&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;CNM&lt;/span&gt; &lt;span class="n"&gt;daemon&lt;/span&gt; &lt;span class="n"&gt;must&lt;/span&gt; &lt;span class="n"&gt;be&lt;/span&gt; &lt;span class="n"&gt;running&lt;/span&gt; &lt;span class="n"&gt;on&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;EMS&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;  &lt;span class="n"&gt;and&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;master&lt;/span&gt; &lt;span class="n"&gt;ISR&lt;/span&gt; &lt;span class="n"&gt;ID&lt;/span&gt; &lt;span class="n"&gt;must&lt;/span&gt; &lt;span class="n"&gt;be&lt;/span&gt; &lt;span class="n"&gt;loaded&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;CEC&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="n"&gt;If&lt;/span&gt; &lt;span class="n"&gt;you&lt;/span&gt; &lt;span class="n"&gt;are&lt;/span&gt; &lt;span class="n"&gt;communicating&lt;/span&gt; &lt;span class="n"&gt;over&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;HFI&lt;/span&gt; &lt;span class="n"&gt;from&lt;/span&gt; &lt;span class="n"&gt;one&lt;/span&gt; &lt;span class="n"&gt;CEC&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;another&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;HFI&lt;/span&gt; &lt;span class="n"&gt;cable&lt;/span&gt; &lt;span class="n"&gt;links&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Dlinks&lt;/span&gt; &lt;span class="n"&gt;and&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;or&lt;/span&gt; &lt;span class="n"&gt;LR&lt;/span&gt; &lt;span class="n"&gt;links&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;must&lt;/span&gt; &lt;span class="n"&gt;be&lt;/span&gt; &lt;span class="n"&gt;physically&lt;/span&gt; &lt;span class="n"&gt;configured&lt;/span&gt; &lt;span class="n"&gt;between&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;Power&lt;/span&gt; &lt;span class="mi"&gt;775&lt;/span&gt; &lt;span class="n"&gt;CECs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h4 id="check-the-hardware-component-and-site-definitions"&gt;&lt;strong&gt;Check the hardware component and site definitions.&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;The CNM HFI network requires that the Power 775 frames and CECs be physically installed and properly defined in the xCAT database. This should have already been accomplished by following the xCAT Power 775 Hardware Management guide. The CNM HFI network requires the following additional data to be defined in the xCAT DB.&lt;/p&gt;
&lt;p&gt;Check that the correct topology has been set in the site table. The topology definition is based on the number of CECs and the type of HFI network configured for your Power 775 cluster.&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="n"&gt;lsdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;site&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="n"&gt;topology&lt;/span&gt;   &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;should&lt;/span&gt; &lt;span class="n"&gt;be&lt;/span&gt; &lt;span class="n"&gt;one&lt;/span&gt; &lt;span class="n"&gt;of&lt;/span&gt; &lt;span class="n"&gt;supported&lt;/span&gt; &lt;span class="n"&gt;configs&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="n"&gt;D&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="n"&gt;D&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="n"&gt;D&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="n"&gt;If&lt;/span&gt; &lt;span class="n"&gt;there&lt;/span&gt; &lt;span class="n"&gt;is&lt;/span&gt; &lt;span class="n"&gt;no&lt;/span&gt; &lt;span class="n"&gt;topology&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt; &lt;span class="n"&gt;found&lt;/span&gt; &lt;span class="n"&gt;you&lt;/span&gt; &lt;span class="n"&gt;can&lt;/span&gt; &lt;span class="n"&gt;set&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt; &lt;span class="n"&gt;with&lt;/span&gt; &lt;span class="n"&gt;xCAT&lt;/span&gt; &lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="n"&gt;command&lt;/span&gt;
 &lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;site&lt;/span&gt;  &lt;span class="n"&gt;topology&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="n"&gt;D&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
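If you script this check, a small validator can catch an unsupported topology value before any chdef call. This is a minimal sketch, not an xCAT command; the function name is hypothetical and the supported-value list (8D, 32D, 128D) is taken from the note above:

```shell
# Hypothetical helper (not part of xCAT): validate a site.topology value
# against the supported configurations named above (8D, 32D, 128D).
check_topology() {
    case "$1" in
        8D|32D|128D) return 0 ;;  # supported HFI configuration
        *)           return 1 ;;  # unknown or unsupported value
    esac
}

# Typical use (lsdef output parsing may need adjusting to your output format):
#   topo=$(lsdef -t site -i topology | sed -n 's/.*topology=//p')
#   check_topology "$topo" || echo "unsupported topology: $topo" >&2
```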
&lt;p&gt;Check to make sure that the frame and CEC node objects have the proper definitions. The frame must be connected to the EMS with DFM, and the frame number is assigned in the "id" attribute. The lsdef command below lists all the frame objects in your cluster; check that each frame has the frame number defined (id=&amp;lt;frame #&amp;gt;). If the frame number is not correct, execute the xCAT "rspconfig" command to set it. The command below sets "id" to 17 for the frame17 node, and updates the frame number to 17 in the BPA.&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="n"&gt;lsdef&lt;/span&gt;  &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;check&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="n"&gt;attribute&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;see&lt;/span&gt; &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="n"&gt;number&lt;/span&gt; &lt;span class="n"&gt;assigned&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;each&lt;/span&gt; &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="n"&gt;object&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;gt&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
 &lt;span class="n"&gt;rspconfig&lt;/span&gt; &lt;span class="n"&gt;frame17&lt;/span&gt; &lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;17&lt;/span&gt;
 &lt;span class="nl"&gt;Note:&lt;/span&gt; &lt;span class="n"&gt;To&lt;/span&gt; &lt;span class="n"&gt;change&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="n"&gt;number&lt;/span&gt; &lt;span class="n"&gt;with&lt;/span&gt; &lt;span class="n"&gt;rspconfig&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;P775&lt;/span&gt; &lt;span class="n"&gt;cecs&lt;/span&gt; &lt;span class="n"&gt;must&lt;/span&gt; &lt;span class="n"&gt;be&lt;/span&gt; &lt;span class="n"&gt;powered&lt;/span&gt; &lt;span class="n"&gt;off&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="n"&gt;This&lt;/span&gt; &lt;span class="n"&gt;activity&lt;/span&gt; &lt;span class="n"&gt;will&lt;/span&gt; &lt;span class="n"&gt;take&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;few&lt;/span&gt; &lt;span class="n"&gt;minutes&lt;/span&gt;
       &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;complete&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;and&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;bpa&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="n"&gt;will&lt;/span&gt; &lt;span class="n"&gt;lose&lt;/span&gt; &lt;span class="n"&gt;HW&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt; &lt;span class="n"&gt;since&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;BPS&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;need&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;be&lt;/span&gt; &lt;span class="n"&gt;rebooted&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;change&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;configuration&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
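Since the frame number is also encoded in node names like frame17 in this example, a quick consistency check can catch mismatches before resorting to rspconfig. A sketch, assuming the frame&amp;lt;NN&amp;gt; naming convention used here (the function name is hypothetical):

```shell
# Hypothetical consistency check (assumes the frame<NN> naming convention
# used in this example): does a frame node's name suffix match its "id"
# attribute?
frame_id_matches() {
    node=$1
    id=$2
    [ "${node#frame}" = "$id" ]
}

# e.g. feed it each node/id pair from `lsdef frame -i id` output
# and flag any mismatches before running rspconfig
```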
&lt;p&gt;Check to make sure that the Power 775 cluster has the proper node attributes defined in the xCAT DB.&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="n"&gt;lpars&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;octants&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;nodes&lt;/span&gt; &lt;span class="n"&gt;should&lt;/span&gt; &lt;span class="n"&gt;have&lt;/span&gt; &lt;span class="n"&gt;hwtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;lpar&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hcp&lt;/span&gt; &lt;span class="n"&gt;and&lt;/span&gt; &lt;span class="n"&gt;parent&lt;/span&gt; &lt;span class="n"&gt;has&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;proper&lt;/span&gt; &lt;span class="n"&gt;CEC&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt; &lt;span class="n"&gt;assigned&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
 &lt;span class="n"&gt;fsp&lt;/span&gt; &lt;span class="n"&gt;nodes&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;should&lt;/span&gt; &lt;span class="n"&gt;have&lt;/span&gt; &lt;span class="n"&gt;hwtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;fsp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hcp&lt;/span&gt; &lt;span class="n"&gt;is&lt;/span&gt; &lt;span class="n"&gt;set&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;itself&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;and&lt;/span&gt; &lt;span class="n"&gt;parent&lt;/span&gt; &lt;span class="n"&gt;has&lt;/span&gt; &lt;span class="n"&gt;proper&lt;/span&gt; &lt;span class="n"&gt;CEC&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt; &lt;span class="n"&gt;assigned&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
 &lt;span class="n"&gt;cec&lt;/span&gt; &lt;span class="n"&gt;nodes&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;should&lt;/span&gt; &lt;span class="n"&gt;have&lt;/span&gt; &lt;span class="n"&gt;hwtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;cec&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hcp&lt;/span&gt; &lt;span class="n"&gt;is&lt;/span&gt; &lt;span class="n"&gt;set&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;itself&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;and&lt;/span&gt; &lt;span class="n"&gt;parent&lt;/span&gt; &lt;span class="n"&gt;has&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;proper&lt;/span&gt; &lt;span class="n"&gt;Frame&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt; &lt;span class="n"&gt;assigned&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
 &lt;span class="n"&gt;bpa&lt;/span&gt; &lt;span class="n"&gt;nodes&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;should&lt;/span&gt; &lt;span class="n"&gt;have&lt;/span&gt; &lt;span class="n"&gt;hwtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;bpa&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hcp&lt;/span&gt; &lt;span class="n"&gt;is&lt;/span&gt; &lt;span class="n"&gt;set&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;itself&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;and&lt;/span&gt; &lt;span class="n"&gt;parent&lt;/span&gt; &lt;span class="n"&gt;has&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;proper&lt;/span&gt; &lt;span class="n"&gt;Frame&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt; &lt;span class="n"&gt;assigned&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
 &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="n"&gt;nodes&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;should&lt;/span&gt; &lt;span class="n"&gt;have&lt;/span&gt; &lt;span class="n"&gt;hwtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hcp&lt;/span&gt; &lt;span class="n"&gt;is&lt;/span&gt; &lt;span class="n"&gt;set&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;itself&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;and&lt;/span&gt; &lt;span class="n"&gt;parent&lt;/span&gt; &lt;span class="n"&gt;is&lt;/span&gt; &lt;span class="n"&gt;blank&lt;/span&gt; &lt;span class="n"&gt;or&lt;/span&gt; &lt;span class="n"&gt;will&lt;/span&gt; &lt;span class="n"&gt;have&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;building&lt;/span&gt; &lt;span class="n"&gt;block&lt;/span&gt; &lt;span class="n"&gt;number&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Check to make sure that the cec node objects have the proper "supernode" attribute defined. The supernode specifies the HFI configuration being used by the CEC. You should also make sure the cage id is properly defined, where the "id" attribute matches the cage position of the CEC node. The CNM daemon and configuration commands will set up the master ISR identifier for each CEC. This allows HFI communication to work across the Power 775 cluster.&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="n"&gt;lsdef&lt;/span&gt;  &lt;span class="n"&gt;cec&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;supernode&lt;/span&gt;  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;check&lt;/span&gt; &lt;span class="n"&gt;that&lt;/span&gt; &lt;span class="n"&gt;supernode&lt;/span&gt; &lt;span class="n"&gt;and&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="n"&gt;attributes&lt;/span&gt; &lt;span class="n"&gt;are&lt;/span&gt; &lt;span class="n"&gt;correct&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;each&lt;/span&gt; &lt;span class="n"&gt;cec&lt;/span&gt; &lt;span class="n"&gt;object&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="n"&gt;chdef&lt;/span&gt;  &lt;span class="n"&gt;f17c01&lt;/span&gt;  &lt;span class="n"&gt;supernode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;   &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;will&lt;/span&gt; &lt;span class="n"&gt;set&lt;/span&gt; &lt;span class="n"&gt;HFI&lt;/span&gt; &lt;span class="n"&gt;supernode&lt;/span&gt; &lt;span class="n"&gt;setting&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
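In this example the supernode attribute is a comma-separated pair of numbers (e.g. 0,0). If you script the chdef calls, a small validator helps catch malformed values early. This is a hypothetical sketch; the actual valid value ranges depend on the HFI topology in use:

```shell
# Hypothetical validator (not an xCAT command): in this example the
# supernode attribute is a comma-separated pair of numbers, e.g. "0,0".
# Valid value ranges depend on the HFI topology in use.
valid_supernode() {
    case "$1" in
        *[!0-9,]*) return 1 ;;  # contains something other than digits/comma
        *,*,*)     return 1 ;;  # more than one comma
        ,*|*,|"")  return 1 ;;  # empty part
        *,*)       return 0 ;;  # "<num>,<num>"
        *)         return 1 ;;  # no comma at all
    esac
}
```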
&lt;h4 id="hardware-server-connections"&gt;&lt;strong&gt;Hardware server connections&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;The CNM hardware server daemon will be started as part of the Power 775 hardware setup working with DFM. The hardware server daemon is used by both xCAT and CNM to track the hardware connections between the xCAT EMS and the Frame/BPAs and CEC/FSPs. There are two different connection "tooltype" values used with the hardware server daemon and the xCAT mkhwconn command: the tooltype "lpar" is used by the xCAT DFM support, and the tooltype "fnm" is used by the CNM support.&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;mkhwconn&lt;/span&gt; &lt;span class="n"&gt;frame17&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt; &lt;span class="n"&gt;fnm&lt;/span&gt;   &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;will&lt;/span&gt; &lt;span class="n"&gt;make&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;HW&lt;/span&gt; &lt;span class="n"&gt;connection&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;frame17&lt;/span&gt;  &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="n"&gt;and&lt;/span&gt; &lt;span class="n"&gt;cecs&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;drawers&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;mkhwconn&lt;/span&gt; &lt;span class="n"&gt;cec&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt; &lt;span class="n"&gt;fnm&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The hardware server daemon works with the /var/opt/isnm/hdwr_svr/data/HmcNetConfig file. The expectation is that the HmcNetConfig file will be created as part of the first mkhwconn execution working with xCAT DFM. CNM will add additional entries for the "fnm" HW connections.&lt;/p&gt;
&lt;p&gt;Hardware server log files are created and saved under the /var/opt/isnm/hdwr_svr/log directory. If you have issues with the hardware server daemon, check the recent "hdwr_svr.log.*" log files. If you need to take a hardware server daemon dump, you can execute "kill -USR1 &amp;lt;hdwr_svr.pid&amp;gt;" (this will create the hdwr_svr dump file).&lt;/p&gt;
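When debugging, it is usually the most recently written hdwr_svr.log.* file that matters. A small helper, sketched under the assumption that the log directory is the path given above (the function name is hypothetical):

```shell
# Sketch: print the most recently modified hdwr_svr log file in a
# directory (defaults to the log path given above).
latest_hdwr_svr_log() {
    dir=${1:-/var/opt/isnm/hdwr_svr/log}
    # filenames follow the hdwr_svr.log.* pattern and contain no spaces,
    # so parsing ls -t output is safe here
    ls -t "$dir"/hdwr_svr.log.* 2>/dev/null | head -n 1
}
```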
&lt;h4 id="start-cnmd-and-setup-master-isr-idwzxhzdk63"&gt;&lt;strong&gt; Start CNMD and setup Master ISR ID :&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;Once all the xCAT definitions are properly updated with CNM configuration data, it is time to start the CNM daemon and load the proper ISR data into the CECs. Make sure that all the CECs have been powered off prior to the initialization of the CNM daemon; this is necessary to set up the proper HFI configuration data in the frames and CECs. On an AIX EMS, execute the CNM command "chnwm" to activate or deactivate the CNM daemon.&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;opt&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;isnm&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;cnm&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;bin&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;chnwm&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt;  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;take&lt;/span&gt; &lt;span class="n"&gt;down&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;CNM&lt;/span&gt; &lt;span class="n"&gt;daemon&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;rpower&lt;/span&gt; &lt;span class="n"&gt;cec&lt;/span&gt;  &lt;span class="n"&gt;off&lt;/span&gt;             &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;power&lt;/span&gt; &lt;span class="n"&gt;down&lt;/span&gt; &lt;span class="n"&gt;all&lt;/span&gt; &lt;span class="n"&gt;cecs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;opt&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;isnm&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;cnm&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;bin&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;chnwm&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;   &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;activate&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;CNM&lt;/span&gt; &lt;span class="n"&gt;daemon&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;On a Linux EMS, execute the Linux "service" command to activate or deactivate the CNM daemon.&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="n"&gt;cnmd&lt;/span&gt; &lt;span class="n"&gt;stop&lt;/span&gt;     &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;take&lt;/span&gt; &lt;span class="n"&gt;down&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;CNM&lt;/span&gt; &lt;span class="n"&gt;daemon&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;rpower&lt;/span&gt; &lt;span class="n"&gt;cec&lt;/span&gt; &lt;span class="n"&gt;off&lt;/span&gt;        &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;power&lt;/span&gt; &lt;span class="n"&gt;down&lt;/span&gt; &lt;span class="n"&gt;all&lt;/span&gt; &lt;span class="n"&gt;cecs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="n"&gt;cnmd&lt;/span&gt;  &lt;span class="n"&gt;start&lt;/span&gt;   &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;activate&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;CNM&lt;/span&gt; &lt;span class="n"&gt;daemon&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;You can now load the HFI master ISR identifier on the Power 775 frames and CECs. This is accomplished using the ISNM command "chnwsvrconfig". You will need to run this command for the frames/CECs in your P775 cluster.&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;opt&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;isnm&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;cnm&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;bin&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;chnwsvrconfig&lt;/span&gt;  &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;A&lt;/span&gt;   &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;will&lt;/span&gt; &lt;span class="n"&gt;configure&lt;/span&gt; &lt;span class="n"&gt;all&lt;/span&gt; &lt;span class="n"&gt;associated&lt;/span&gt; &lt;span class="n"&gt;P775&lt;/span&gt; &lt;span class="n"&gt;Frames&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;cecs&lt;/span&gt; &lt;span class="n"&gt;with&lt;/span&gt; &lt;span class="n"&gt;HFI&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;opt&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;isnm&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;cnm&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;bin&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;chnwsvrconfig&lt;/span&gt;  &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt; &lt;span class="mi"&gt;17&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;   &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;will&lt;/span&gt; &lt;span class="n"&gt;configure&lt;/span&gt; &lt;span class="n"&gt;one&lt;/span&gt; &lt;span class="n"&gt;cec&lt;/span&gt; &lt;span class="n"&gt;in&lt;/span&gt; &lt;span class="n"&gt;P775&lt;/span&gt; &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="mi"&gt;17&lt;/span&gt; &lt;span class="n"&gt;with&lt;/span&gt; &lt;span class="n"&gt;cage&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;You can now verify that the CNM daemon and HFI configuration are working by executing the CNM command "nmcmd" to dump the drawer status information. This lists the current state of the drawers managed by CNM. Please reference the HPC using the 9125-F2C guide for more detail about CNM commands, implementation, and debugging.&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;opt&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;isnm&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;cnm&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;bin&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;nmcmd&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;D&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;D&lt;/span&gt;
 &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="n"&gt;nmcmd&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;D&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;D&lt;/span&gt;
 &lt;span class="n"&gt;Frame&lt;/span&gt; &lt;span class="mi"&gt;17&lt;/span&gt;  &lt;span class="n"&gt;Cage&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="n"&gt;Supernode&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="n"&gt;Drawer&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="n"&gt;STANDBY&lt;/span&gt;
 &lt;span class="n"&gt;Frame&lt;/span&gt; &lt;span class="mi"&gt;17&lt;/span&gt;  &lt;span class="n"&gt;Cage&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="n"&gt;Supernode&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="n"&gt;Drawer&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="n"&gt;STANDBY&lt;/span&gt;
 &lt;span class="n"&gt;Frame&lt;/span&gt; &lt;span class="mi"&gt;17&lt;/span&gt;  &lt;span class="n"&gt;Cage&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt; &lt;span class="n"&gt;Supernode&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="n"&gt;Drawer&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="n"&gt;STANDBY&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
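With many drawers, the nmcmd -D -D output is easier to digest as a per-state summary. A sketch that counts drawers by state, assuming the "Frame F Cage C Supernode S Drawer D STATE" line format shown above (the function name is hypothetical):

```shell
# Sketch: summarize drawer states from `nmcmd -D -D` output, assuming the
# "Frame F  Cage C Supernode S Drawer D STATE" line format shown above.
# Usage: /opt/isnm/cnm/bin/nmcmd -D -D | drawer_state_summary
drawer_state_summary() {
    awk '/Frame [0-9]+ +Cage [0-9]+/ { count[$NF]++ }
         END { for (state in count) print state, count[state] }'
}
```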
&lt;h3 id="configure-dfm-hierarchically-optional"&gt;Configure DFM Hierarchically (Optional)&lt;/h3&gt;
&lt;p&gt;Depending on how large your cluster is, you may find that DFM performs better if the operations are sent to the FSPs via the service nodes. See &lt;a class="" href="https://sourceforge.net/apps/mediawiki/xcat/index.php?title=XCAT_Power_775_Hardware_Management#Overview_of_Using_DFM_Hierarchically"&gt;Overview of Using DFM Hierarchically&lt;/a&gt; to get an overview of how to use DFM Hierarchically.&lt;/p&gt;
&lt;h2 id="setup-the-xcat-mn-for-a-hierarchical-cluster"&gt;Setup the xCAT MN for a Hierarchical Cluster&lt;/h2&gt;
&lt;p&gt;For large clusters, you can distribute most of the xCAT services from the management node to xCAT service nodes. Doing so creates a "hierarchical cluster". The following document describes additional xCAT management node configuration for a Linux hierarchical cluster, how to install your service nodes, and how to configure your compute nodes to be managed by service nodes.&lt;/p&gt;
&lt;p&gt;&lt;a class="alink" href="/p/xcat/wiki/Setting_Up_a_Linux_Hierarchical_Cluster"&gt;[Setting_Up_a_Linux_Hierarchical_Cluster]&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Note: Hierarchical Clusters are required for Power 775 clusters in order to deploy and manage compute nodes across the HFI network. In addition to setting up xCAT cluster hierarchy, the above document also contains important instructions specific to configuring and managing your Power 775 cluster.&lt;/p&gt;
&lt;p&gt;Setting up a hierarchical cluster assumes more advanced knowledge of xCAT node definition, deployment, and management. If this is your first time setting up an xCAT cluster, you should skip this step for now and experiment with a simple cluster managed directly from your xCAT management node to become familiar with all the different concepts and processes.&lt;/p&gt;
&lt;h2 id="complete-the-definition-of-the-compute-nodes"&gt;Complete the Definition of the Compute Nodes&lt;/h2&gt;
&lt;p&gt;The hardware management documents explained how to get the LPARs of the CECs defined as nodes in the xCAT database. Before deploying an OS on the nodes, you must set some additional attributes of the nodes.&lt;/p&gt;
&lt;h3 id="define-xcat-groups"&gt;Define xCAT groups&lt;/h3&gt;
&lt;p&gt;See &lt;a class="alink" href="/p/xcat/wiki/Node_Group_Support"&gt;[Node_Group_Support]&lt;/a&gt; for more details on how to define xCAT groups. In the example below, the compute group is added to the nodes.&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="n"&gt;pnode1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;pnode2&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="n"&gt;groups&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;compute&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="update-the-attributes-of-the-node"&gt;Update the attributes of the node&lt;/h3&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="n"&gt;pnode1&lt;/span&gt; &lt;span class="n"&gt;netboot&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;yaboot&lt;/span&gt; &lt;span class="n"&gt;tftpserver&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;192.168.0.1&lt;/span&gt; &lt;span class="n"&gt;nfsserver&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;192.168.0.1&lt;/span&gt;
&lt;span class="n"&gt;monserver&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;192.168.0.1&lt;/span&gt; &lt;span class="n"&gt;xcatmaster&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;192.168.0.1&lt;/span&gt; &lt;span class="n"&gt;installnic&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"eth0"&lt;/span&gt; &lt;span class="n"&gt;primarynic&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"eth0"&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Note: Make sure the attributes "installnic" and "primarynic" are set to the correct Ethernet or HFI interface of the compute node. Otherwise the compute node installation may hang requesting information from an incorrect interface. "installnic" and "primarynic" can also be set to a MAC address if you are not sure of the Ethernet interface name; the MAC address can be obtained with the getmacs command. They can also be set to the keyword "mac", which means that the network interface specified by the MAC address in the mac table will be used.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Make sure that the address used above ( 192.168.0.1) is the address of the Management Node as known by the node. Also make sure site.master has this address.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;If you are using Red Hat Enterprise Linux 7 (RHEL7), "yaboot" is deprecated and the netboot method for System P must be "grub2". So, set the "netboot" attribute to "grub2" to provision RHEL 7 on a ppc64 node.&lt;/strong&gt;&lt;/p&gt;
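If you set installnic/primarynic to a MAC address, it is worth validating the format first. A minimal sketch, assuming the common colon-separated MAC notation (the function name is hypothetical):

```shell
# Sketch: check that a string looks like a colon-separated MAC address,
# the notation commonly used when setting installnic/primarynic to a MAC.
is_mac() {
    echo "$1" | grep -Eiq '^([0-9a-f]{2}:){5}[0-9a-f]{2}$'
}
```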
&lt;h4 id="check-the-sitemaster-value"&gt;Check the site.master value&lt;/h4&gt;
&lt;p&gt;Make sure site.master is set to the address or name of the management node as known by the node.&lt;/p&gt;
&lt;p&gt;To change site.master to this address:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;site&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="n"&gt;clustersite&lt;/span&gt; &lt;span class="n"&gt;master&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"192.168.0.1"&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h4 id="set-the-type-attributes-of-the-node"&gt;Set the type attributes of the node&lt;/h4&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="nx"&gt;chdef&lt;/span&gt; &lt;span class="na"&gt;-t&lt;/span&gt; &lt;span class="nx"&gt;node&lt;/span&gt; &lt;span class="na"&gt;-o&lt;/span&gt; &lt;span class="nx"&gt;pnode1&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="o"&gt;=&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;os&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;arch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;ppc64&lt;/span&gt; &lt;span class="n"&gt;profile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;compute&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;For valid options:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="n"&gt;tabdump&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt; &lt;span class="n"&gt;nodetype&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="configure-conserver"&gt;Configure conserver&lt;/h3&gt;
&lt;p&gt;The xCAT rcons command uses the conserver package to provide support for multiple read-only consoles on a single node and for console logging. For example, if a user has a read-write console session open on node node1, other users can also log in to that console session on node1 as read-only users. This allows a console server session to be shared between multiple users for diagnostic or other collaborative purposes. The console logging function logs the console output and activity for any node with remote console attributes set to the following file, which can be replayed for debugging or other purposes:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;var&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;consoles&lt;/span&gt;&lt;span class="o"&gt;/&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;management&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Note: conserver=&amp;lt;management node&amp;gt; is the default, so it is optional in the command.&lt;/p&gt;
&lt;h4 id="update-conserver-configuration"&gt;&lt;strong&gt;Update conserver configuration&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;Each xCAT node with remote console attributes set should be added to the conserver configuration file to make rcons work. The xCAT command &lt;strong&gt;makeconservercf&lt;/strong&gt; will put all the nodes into the conserver configuration file /etc/conserver.cf. The makeconservercf command must be rerun whenever there are node definition changes that affect the conserver, such as adding new nodes, removing nodes, or changing the nodes' remote console settings.&lt;/p&gt;
&lt;p&gt;To add or remove new nodes for conserver support:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;makeconservercf&lt;/span&gt;
&lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="n"&gt;conserver&lt;/span&gt; &lt;span class="n"&gt;stop&lt;/span&gt;
&lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="n"&gt;conserver&lt;/span&gt; &lt;span class="n"&gt;start&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="check-rconsrnetboot-and-getmacs-depend-on-it"&gt;Check rcons(rnetboot and getmacs depend on it)&lt;/h3&gt;
&lt;p&gt;The functions rnetboot and getmacs depend on conserver functions, check it is available.&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;rcons&lt;/span&gt; &lt;span class="n"&gt;pnode1&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;If it works, you will be placed in the console interface of pnode1. If it does not work, review your rcons setup as documented in the previous steps.&lt;/p&gt;
&lt;h3 id="check-hardware-control-setup-to-the-nodes"&gt;Check hardware control setup to the nodes&lt;/h3&gt;
&lt;p&gt;To check that your setup is correct at this point, run rpower to query the node status:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;rpower&lt;/span&gt; &lt;span class="n"&gt;pnode1&lt;/span&gt; &lt;span class="n"&gt;stat&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="update-the-mac-table-with-the-address-of-the-nodes"&gt;Update the mac table with the address of the node(s)&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Before running getmacs, make sure the node is off.&lt;/strong&gt; This is because the HMC cannot shut down Linux nodes that are in the running state.&lt;/p&gt;
&lt;p&gt;Check the LPAR state and force it off if necessary:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;rpower pnode1 stat
# if the node is on, then run:
rpower pnode1 off
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;If there is only one Ethernet adapter on the node, or you have specified the installnic or primarynic attribute of the node, the following command will get the correct MAC address.&lt;/p&gt;
&lt;p&gt;Check for existing *nic definitions by running:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;lsdef&lt;/span&gt; &lt;span class="n"&gt;pnode1&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;To set installnic or primarynic:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;pnode1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="n"&gt;blade01&lt;/span&gt; &lt;span class="n"&gt;installnic&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;eth0&lt;/span&gt; &lt;span class="n"&gt;primarynic&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;eth1&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Get mac addresses:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;getmacs&lt;/span&gt; &lt;span class="n"&gt;pnode1&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;If there is more than one Ethernet adapter on the node and you don't know which one has been configured for the installation process, or the LPAR was just created and has no active profile, or the LPAR is on a P5 system with no LHEA/SEA Ethernet adapters, then you must specify additional parameters so getmacs can find an available interface using a ping operation. Run this command:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;getmacs&lt;/span&gt; &lt;span class="n"&gt;pnode1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;D&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;S&lt;/span&gt; &lt;span class="mf"&gt;192.168.0.1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;G&lt;/span&gt; &lt;span class="mf"&gt;192.168.0.10&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The output looks like the following:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;pnode1&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
&lt;span class="n"&gt;Type&lt;/span&gt; &lt;span class="n"&gt;Location&lt;/span&gt; &lt;span class="n"&gt;Code&lt;/span&gt; &lt;span class="n"&gt;MAC&lt;/span&gt; &lt;span class="n"&gt;Address&lt;/span&gt; &lt;span class="n"&gt;Full&lt;/span&gt; &lt;span class="n"&gt;Path&lt;/span&gt; &lt;span class="n"&gt;Name&lt;/span&gt; &lt;span class="n"&gt;Ping&lt;/span&gt; &lt;span class="n"&gt;Result&lt;/span&gt; &lt;span class="n"&gt;Device&lt;/span&gt; &lt;span class="n"&gt;Type&lt;/span&gt;
&lt;span class="n"&gt;ent&lt;/span&gt; &lt;span class="n"&gt;U9133&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;55&lt;/span&gt;&lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="n"&gt;E093F&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;V4&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;C5&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;T1&lt;/span&gt; &lt;span class="n"&gt;f2&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="n"&gt;f0&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;05&lt;/span&gt; &lt;span class="sr"&gt;/vdevice/&lt;/span&gt;&lt;span class="n"&gt;l&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;lan&lt;/span&gt;&lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="mi"&gt;30000005&lt;/span&gt; &lt;span class="n"&gt;virtual&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The MAC address will be written into the xCAT mac table. To verify, run:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;tabdump&lt;/span&gt; &lt;span class="n"&gt;mac&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="update-the-mac-table-with-the-address-of-the-nodes-for-power-775"&gt;Update the mac table with the address of the node(s) for Power 775&lt;/h3&gt;
&lt;p&gt;To set installnic or primarynic:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="n"&gt;c250f07c04ap13&lt;/span&gt; &lt;span class="n"&gt;installnic&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;hf0&lt;/span&gt; &lt;span class="n"&gt;primarynic&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;hf1&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Get mac addresses:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;getmacs&lt;/span&gt; &lt;span class="n"&gt;c250f07c04ap13&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;D&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h4 id="configure-dhcp"&gt;&lt;strong&gt;Configure DHCP&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;Add the defined nodes into the DHCP configuration:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="n"&gt;makedhcp&lt;/span&gt; &lt;span class="n"&gt;c250f07c04ap13&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Restart the dhcp service:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="n"&gt;dhcpd&lt;/span&gt; &lt;span class="n"&gt;restart&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="set-up-customization-scripts-optional"&gt;Set up customization scripts (optional)&lt;/h3&gt;
&lt;p&gt;xCAT supports running customization scripts on the nodes when they are installed. You can see which scripts xCAT runs by default by looking at the xcatdefaults entry in the xCAT postscripts database table. The postscripts attribute of the node definition can be used to specify a comma-separated list of the scripts that you want to be executed on the nodes. The order of the scripts in the list determines the order in which they will be run.&lt;/p&gt;
&lt;p&gt;To check the current postscripts and postbootscripts settings:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;tabdump&lt;/span&gt; &lt;span class="n"&gt;postscripts&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;For example, if you want two scripts called foo and bar to run on node node01, add them to the postscripts table:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="n"&gt;node01&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="n"&gt;postscripts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;foo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;bar&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;(The -p flag means to add these to whatever is already set.)&lt;/p&gt;
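&lt;p&gt;A postscript is just an executable script that xCAT copies to the node and runs after installation. Here is a minimal sketch of one (the script name foo and the log path are hypothetical; xCAT exports the node name to postscripts in the NODE environment variable, so the sketch falls back to hostname when run standalone):&lt;/p&gt;

```shell
#!/bin/bash
# Hypothetical /install/postscripts/foo: record that the script ran.
# NODE is set by xCAT when it invokes postscripts; default to hostname
# so this sketch also runs standalone.
logfile=/tmp/foo-postscript.log
echo "postscript foo ran on ${NODE:-$(hostname)} at $(date)" >> "$logfile"
cat "$logfile"
```

&lt;p&gt;Place the script in /install/postscripts, make it executable with chmod 755, and add it to the node's postscripts attribute as shown above.&lt;/p&gt;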
&lt;p&gt;For more information on creating and setting up Post*scripts: &lt;a class="alink" href="/p/xcat/wiki/Postscripts_and_Prescripts"&gt;[Postscripts_and_Prescripts]&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="install-stateful-nodes"&gt;Install Stateful Nodes&lt;/h2&gt;
&lt;div&gt;
&lt;div class="markdown_content"&gt;&lt;p&gt;This section describes deploying stateful nodes.&lt;br /&gt;
There are two options to install your nodes as stateful (diskful) nodes: &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Use ISOs or DVDs: follow Option 1, Installing Stateful Nodes Using ISOs or DVDs, below.&lt;/li&gt;
&lt;li&gt;Clone new nodes from a pre-installed/pre-configured node: follow Option 2, Installing Stateful Nodes Using Sysclone, below.&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="toc"&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#option-1-installing-stateful-nodes-using-isos-or-dvds"&gt;Option 1: Installing Stateful Nodes Using ISOs or DVDs&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#create-the-distro-repository-on-the-mn"&gt;Create the Distro Repository on the MN&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#select-or-create-an-osimage-definition"&gt;Select or Create an osimage Definition&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#install-a-new-kernel-on-the-nodes-optional"&gt;Install a New Kernel on the Nodes (Optional)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#customize-the-disk-partitioning-optional"&gt;Customize the disk partitioning (Optional)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#update-the-distro-at-a-later-time"&gt;Update the Distro at a Later Time&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#option-2-installing-stateful-nodes-using-sysclone"&gt;Option 2: Installing Stateful Nodes Using Sysclone&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#install-or-configure-the-golden-client"&gt;Install or Configure the Golden Client&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#capture-image-from-the-golden-client"&gt;Capture image from the Golden Client&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h3 id="option-1-installing-stateful-nodes-using-isos-or-dvds"&gt;Option 1: Installing Stateful Nodes Using ISOs or DVDs&lt;/h3&gt;
&lt;p&gt;This section describes the process for setting up xCAT to install nodes; that is, how to install an OS on the disk of each node. &lt;/p&gt;
&lt;h4 id="create-the-distro-repository-on-the-mn"&gt;&lt;strong&gt;Create the Distro Repository on the MN&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;The &lt;a class="" href="http://xcat.sourceforge.net/man8/copycds.8.html"&gt;copycds&lt;/a&gt; command copies the contents of the linux distro media to /install/&amp;lt;os&amp;gt;/&amp;lt;arch&amp;gt; so that it will be available to install nodes with or create diskless images. &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Obtain the Redhat or SLES ISOs or DVDs. &lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If using an ISO, copy it to (or NFS mount it on) the management node, and then run: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="nx"&gt;copycds&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;path&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;/&lt;/span&gt;&lt;span class="nx"&gt;RHEL6.2&lt;/span&gt;&lt;span class="o"&gt;-*-&lt;/span&gt;&lt;span class="nx"&gt;Server&lt;/span&gt;&lt;span class="na"&gt;-x86_64-DVD1.iso&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If using a DVD, put it in the DVD drive of the management node and run:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;copycds&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;dev&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;dvd&lt;/span&gt;       &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="n"&gt;or&lt;/span&gt; &lt;span class="n"&gt;whatever&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;device&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="n"&gt;of&lt;/span&gt; &lt;span class="n"&gt;your&lt;/span&gt; &lt;span class="n"&gt;dvd&lt;/span&gt; &lt;span class="n"&gt;drive&lt;/span&gt; &lt;span class="n"&gt;is&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Tip: if this is the same distro version as your management node, create a .repo file in /etc/yum.repos.d with content similar to: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;local&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="mf"&gt;.2&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;xCAT&lt;/span&gt; &lt;span class="n"&gt;local&lt;/span&gt; &lt;span class="n"&gt;rhels&lt;/span&gt; &lt;span class="mf"&gt;6.2&lt;/span&gt;
    &lt;span class="n"&gt;baseurl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="o"&gt;:/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="mf"&gt;.2&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;
    &lt;span class="n"&gt;enabled&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;
    &lt;span class="n"&gt;gpgcheck&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This way, if you need some additional RPMs on your MN at a later time, you can simply install them using yum. Or if you are installing other software on your MN that requires some additional RPMs from the distro, they will automatically be found and installed. &lt;/p&gt;
&lt;h4 id="select-or-create-an-osimage-definition"&gt;&lt;strong&gt;Select or Create an osimage Definition&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;The copycds command also automatically creates several osimage definitions in the database that can be used for node deployment. To see them: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;lsdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt;          &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="n"&gt;see&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;list&lt;/span&gt; &lt;span class="n"&gt;of&lt;/span&gt; &lt;span class="n"&gt;osimages&lt;/span&gt;
    &lt;span class="n"&gt;lsdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;osimage&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;          &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="n"&gt;see&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;attributes&lt;/span&gt; &lt;span class="n"&gt;of&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;particular&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;From the list above, select the osimage for your distro, architecture, provisioning method (in this case install), and profile (compute, service, etc.). Although it is optional, we recommend you make a copy of the osimage, changing its name to a simpler name. For example: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;lsdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;z&lt;/span&gt; &lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="mf"&gt;.2&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;compute&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;sed&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;/^&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;^&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="o"&gt;+:/&lt;/span&gt;&lt;span class="n"&gt;mycomputeimage&lt;/span&gt;&lt;span class="o"&gt;:/&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;mkdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;z&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This displays the osimage "rhels6.2-x86_64-install-compute" in a format that can be used as input to mkdef, but on the way there it uses sed to modify the name of the object to "mycomputeimage". &lt;/p&gt;
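&lt;p&gt;To see what that sed expression does, here is a small standalone demonstration (the stanza lines are illustrative, not real lsdef output; a stanza file's first line is the object name followed by a colon, which is what sed rewrites):&lt;/p&gt;

```shell
# Illustrative input imitating "lsdef -z" stanza output: only the
# non-indented "name:" line is rewritten; attribute lines are untouched.
printf 'rhels6.2-x86_64-install-compute:\n    imagetype=linux\n' \
  | sed 's/^[^ ]\+:/mycomputeimage:/'
```

&lt;p&gt;The result is the same stanza with the object name replaced by mycomputeimage, which mkdef -z then reads to create the new osimage object.&lt;/p&gt;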
&lt;p&gt;Initially, this osimage object points to templates, pkglists, etc. that are shipped by default with xCAT. And some attributes, for example otherpkglist and synclists, won't have any value at all because xCAT doesn't ship a default file for that. You can now change/fill in any &lt;a class="" href="http://xcat.sourceforge.net/man7/osimage.7.html"&gt;osimage attributes&lt;/a&gt; that you want. A general convention is that if you are modifying one of the default files that an osimage attribute points to, copy it into /install/custom and have your osimage point to it there. (If you modify the copy under /opt/xcat directly, it will be over-written the next time you upgrade xCAT.) &lt;/p&gt;
&lt;p&gt;But for now, we will use the default values in the osimage definition and continue on. (If you really want to see examples of modifying/creating the pkglist, template, otherpkgs pkglist, and sync file list, see the section &lt;a class="alink" href="/p/xcat/wiki/Using_Provmethod%3Dosimagename"&gt;[Using_Provmethod=osimagename]&lt;/a&gt;. Most of the examples there can be used for stateful nodes too.) &lt;/p&gt;
&lt;h4 id="install-a-new-kernel-on-the-nodes-optional"&gt;&lt;strong&gt;Install a New Kernel on the Nodes (Optional)&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;Create a postscript file called (for example) updatekernel: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;vi&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;postscripts&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;updatekernel&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Add the following lines to the file: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="o"&gt;!/&lt;/span&gt;&lt;span class="n"&gt;bin&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;bash&lt;/span&gt;
    &lt;span class="n"&gt;rpm&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;Uivh&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;kernel&lt;/span&gt;&lt;span class="o"&gt;-*&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Change the permission on the file: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;chmod&lt;/span&gt; &lt;span class="mi"&gt;755&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;postscripts&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;updatekernel&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Make the new kernel RPM available to the postscript: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="nx"&gt;mkdir&lt;/span&gt; &lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nb"&gt;install&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;postscripts&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nb"&gt;data&lt;/span&gt;
    &lt;span class="nx"&gt;cp&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;kernel&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nb"&gt;install&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;postscripts&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nb"&gt;data&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Add the postscript to your compute nodes: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;group&lt;/span&gt; &lt;span class="n"&gt;compute&lt;/span&gt; &lt;span class="n"&gt;postscripts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;updatekernel&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Now when you install your nodes (done in a step below), it will also update the kernel. &lt;/p&gt;
&lt;p&gt;Alternatively, you could install your nodes with the stock kernel and update them afterward using updatenode and the same postscript above. In this case, you need to reboot the nodes for the new kernel to take effect. &lt;/p&gt;
&lt;h4 id="customize-the-disk-partitioning-optional"&gt;&lt;strong&gt;Customize the disk partitioning (Optional)&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;By default, xCAT will install the operating system on the first disk in the node. However, you may choose to customize the disk partitioning during the install process and define a specific disk layout. You can do this in one of two ways: &lt;/p&gt;
&lt;p&gt;1. Partitioning definition configuration file: Create a custom osimage file that contains the correct disk partitioning definition. The nodeset command will insert the contents of this file directly into the generated autoinst configuration file that will be used by the OS installer (e.g. kickstart for RedHat, AutoYaST for SLES). The file must be formatted for that installer. For example, for a SLES AutoYaST install, you might create a file called /install/custom/my-partitions, containing: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;  &lt;span class="nt"&gt;&amp;lt;drive&amp;gt;&lt;/span&gt;
     &lt;span class="nt"&gt;&amp;lt;device&amp;gt;&lt;/span&gt;XCATPARTITIONHOOK&lt;span class="nt"&gt;&amp;lt;/device&amp;gt;&lt;/span&gt;
     &lt;span class="nt"&gt;&amp;lt;initialize&lt;/span&gt; &lt;span class="na"&gt;config:type=&lt;/span&gt;&lt;span class="s"&gt;"boolean"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;true&lt;span class="nt"&gt;&amp;lt;/initialize&amp;gt;&lt;/span&gt;
     &lt;span class="nt"&gt;&amp;lt;use&amp;gt;&lt;/span&gt;all&lt;span class="nt"&gt;&amp;lt;/use&amp;gt;&lt;/span&gt;
     &lt;span class="nt"&gt;&amp;lt;partitions&lt;/span&gt; &lt;span class="na"&gt;config:type=&lt;/span&gt;&lt;span class="s"&gt;"list"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
       &lt;span class="nt"&gt;&amp;lt;partition&amp;gt;&lt;/span&gt;
         &lt;span class="nt"&gt;&amp;lt;create&lt;/span&gt; &lt;span class="na"&gt;config:type=&lt;/span&gt;&lt;span class="s"&gt;"boolean"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;true&lt;span class="nt"&gt;&amp;lt;/create&amp;gt;&lt;/span&gt;
         &lt;span class="nt"&gt;&amp;lt;filesystem&lt;/span&gt; &lt;span class="na"&gt;config:type=&lt;/span&gt;&lt;span class="s"&gt;"symbol"&lt;/span&gt;&lt;span class="err"&gt;&amp;lt;swap&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;/filesystem&amp;gt;
         &lt;span class="nt"&gt;&amp;lt;format&lt;/span&gt; &lt;span class="na"&gt;config:type=&lt;/span&gt;&lt;span class="s"&gt;"boolean"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;true&lt;span class="nt"&gt;&amp;lt;/format&amp;gt;&lt;/span&gt;
         &lt;span class="nt"&gt;&amp;lt;mount&amp;gt;&lt;/span&gt;swap&lt;span class="nt"&gt;&amp;lt;/mount&amp;gt;&lt;/span&gt;
         &lt;span class="nt"&gt;&amp;lt;mountby&lt;/span&gt; &lt;span class="na"&gt;config:type=&lt;/span&gt;&lt;span class="s"&gt;"symbol"&lt;/span&gt;&lt;span class="err"&gt;&amp;lt;path&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;/mountby&amp;gt;
         &lt;span class="nt"&gt;&amp;lt;partition_nr&lt;/span&gt; &lt;span class="na"&gt;config:type=&lt;/span&gt;&lt;span class="s"&gt;"integer"&lt;/span&gt;&lt;span class="err"&gt;&amp;lt;1&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;/partition_nr&amp;gt;
         &lt;span class="nt"&gt;&amp;lt;partition_type&amp;gt;&lt;/span&gt;primary&lt;span class="nt"&gt;&amp;lt;/partition_type&amp;gt;&lt;/span&gt;
         &lt;span class="nt"&gt;&amp;lt;size&amp;gt;&lt;/span&gt;32G&lt;span class="nt"&gt;&amp;lt;/size&amp;gt;&lt;/span&gt;
       &lt;span class="nt"&gt;&amp;lt;/partition&amp;gt;&lt;/span&gt;
       &lt;span class="nt"&gt;&amp;lt;partition&amp;gt;&lt;/span&gt;
         &lt;span class="nt"&gt;&amp;lt;create&lt;/span&gt; &lt;span class="na"&gt;config:type=&lt;/span&gt;&lt;span class="s"&gt;"boolean"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;true&lt;span class="nt"&gt;&amp;lt;/create&amp;gt;&lt;/span&gt;
         &lt;span class="nt"&gt;&amp;lt;filesystem&lt;/span&gt; &lt;span class="na"&gt;config:type=&lt;/span&gt;&lt;span class="s"&gt;"symbol"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;ext3&lt;span class="nt"&gt;&amp;lt;/filesystem&amp;gt;&lt;/span&gt;
         &lt;span class="nt"&gt;&amp;lt;format&lt;/span&gt; &lt;span class="na"&gt;config:type=&lt;/span&gt;&lt;span class="s"&gt;"boolean"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;true&lt;span class="nt"&gt;&amp;lt;/format&amp;gt;&lt;/span&gt;
         &lt;span class="nt"&gt;&amp;lt;mount&amp;gt;&lt;/span&gt;/&lt;span class="nt"&gt;&amp;lt;/mount&amp;gt;&lt;/span&gt;
         &lt;span class="nt"&gt;&amp;lt;mountby&lt;/span&gt; &lt;span class="na"&gt;config:type=&lt;/span&gt;&lt;span class="s"&gt;"symbol"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;path&lt;span class="nt"&gt;&amp;lt;/mountby&amp;gt;&lt;/span&gt;
         &lt;span class="nt"&gt;&amp;lt;partition_nr&lt;/span&gt; &lt;span class="na"&gt;config:type=&lt;/span&gt;&lt;span class="s"&gt;"integer"&lt;/span&gt;&lt;span class="err"&gt;&amp;lt;2&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;/partition_nr&amp;gt;
         &lt;span class="nt"&gt;&amp;lt;partition_type&amp;gt;&lt;/span&gt;primary&lt;span class="nt"&gt;&amp;lt;/partition_type&amp;gt;&lt;/span&gt;
         &lt;span class="nt"&gt;&amp;lt;size&amp;gt;&lt;/span&gt;64G&lt;span class="nt"&gt;&amp;lt;/size&amp;gt;&lt;/span&gt;
       &lt;span class="nt"&gt;&amp;lt;/partition&amp;gt;&lt;/span&gt;
     &lt;span class="nt"&gt;&amp;lt;/partitions&amp;gt;&lt;/span&gt;
   &lt;span class="nt"&gt;&amp;lt;/drive&amp;gt;&lt;/span&gt;

&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Then use this file in your osimage:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;chdef -t osimage &amp;lt;osimagename&amp;gt; partitionfile=/install/custom/my-partitions
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;When nodeset runs and generates the /install/autoinst file for a node, it will replace the #XCAT_PARTITION_START#...#XCAT_PARTITION_END# directives from your osimage template with the contents of your custom partitionfile.&lt;/p&gt;
&lt;p&gt;2. Partitioning definition script: Create a shell script that will be run on the node during the install process to dynamically create the disk partitioning definition. This script will be run during the OS installer %pre script execution and must write the correct partitioning definition into the file /tmp/partitionfile on the node. For example, for a RedHat kickstart install, you might create a file called /install/custom/my-partitions.sh, containing: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;     &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"part swap --size 1024"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;tmp&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;partitionfile&lt;/span&gt;
     &lt;span class="nx"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"part / --size 1 --grow --fstype ext3"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;tmp&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;partitionfile&lt;/span&gt;

&lt;span class="nx"&gt;Then&lt;/span&gt; &lt;span class="nx"&gt;assign&lt;/span&gt; &lt;span class="nx"&gt;this&lt;/span&gt; &lt;span class="nb"&gt;script&lt;/span&gt; &lt;span class="k"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;your&lt;/span&gt; &lt;span class="nx"&gt;osimage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

     &lt;span class="nx"&gt;chdef&lt;/span&gt; &lt;span class="na"&gt;-t&lt;/span&gt; &lt;span class="nx"&gt;osimage&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;osimagename&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;partitionfile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'s:/install/custom/my-partitions.sh'&lt;/span&gt;

&lt;span class="nx"&gt;Note&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="s1"&gt;'s:'&lt;/span&gt; &lt;span class="nx"&gt;preceding&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nb"&gt;filename&lt;/span&gt; &lt;span class="nx"&gt;tells&lt;/span&gt; &lt;span class="nx"&gt;nodeset&lt;/span&gt; &lt;span class="nx"&gt;that&lt;/span&gt; &lt;span class="nx"&gt;this&lt;/span&gt; &lt;span class="nx"&gt;is&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="nx"&gt;script.&lt;/span&gt; &lt;span class="nx"&gt;When&lt;/span&gt; &lt;span class="nx"&gt;nodeset&lt;/span&gt; &lt;span class="nx"&gt;runs&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;generates&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nb"&gt;install&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;autoinst&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt; &lt;span class="nb"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="nx"&gt;node&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;it&lt;/span&gt; &lt;span class="nx"&gt;will&lt;/span&gt; &lt;span class="nb"&gt;add&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;execution&lt;/span&gt; &lt;span class="nx"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nb"&gt;contents&lt;/span&gt; &lt;span class="nx"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;this&lt;/span&gt; &lt;span class="nb"&gt;script&lt;/span&gt; &lt;span class="k"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="nx"&gt;pre&lt;/span&gt; &lt;span class="nx"&gt;section&lt;/span&gt; &lt;span class="nx"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;that&lt;/span&gt; &lt;span class="nx"&gt;file.&lt;/span&gt; &lt;span class="nx"&gt;The&lt;/span&gt; &lt;span 
class="nx"&gt;nodeset&lt;/span&gt; &lt;span class="nb"&gt;command&lt;/span&gt; &lt;span class="nx"&gt;will&lt;/span&gt; &lt;span class="nx"&gt;then&lt;/span&gt; &lt;span class="nb"&gt;replace&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="vi"&gt;#XCAT_PARTITION_START&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;...&lt;/span&gt;&lt;span class="vi"&gt;#XCAT_PARTITION_END&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;directives&lt;/span&gt; &lt;span class="nb"&gt;from&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;osimage&lt;/span&gt; &lt;span class="nx"&gt;template&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="s2"&gt;"%include /tmp/partitionfile"&lt;/span&gt; &lt;span class="k"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;dynamically&lt;/span&gt; &lt;span class="nb"&gt;include&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;tmp&lt;/span&gt; &lt;span class="nx"&gt;definition&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt; &lt;span class="nx"&gt;your&lt;/span&gt; &lt;span class="nb"&gt;script&lt;/span&gt; &lt;span class="nx"&gt;created.&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h4 id="update-the-distro-at-a-later-time"&gt;&lt;strong&gt;Update the Distro at a Later Time&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;After the initial install of the distro onto nodes, if you want to update the distro on the nodes (either with a few updates or a new SP) without reinstalling the nodes: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;create the new repo using copycds: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="nx"&gt;copycds&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;path&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;/&lt;/span&gt;&lt;span class="nx"&gt;RHEL6.3&lt;/span&gt;&lt;span class="o"&gt;-*-&lt;/span&gt;&lt;span class="nx"&gt;Server&lt;/span&gt;&lt;span class="na"&gt;-x86_64-DVD1.iso&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Or, for just a few updated rpms, you can copy the updated rpms from the distributor into a directory under /install and run createrepo in that directory. &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;add the new repo to the pkgdir attribute of the osimage: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt; &lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="mf"&gt;.2&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;compute&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="n"&gt;pkgdir&lt;/span&gt;&lt;span class="o"&gt;=/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="mf"&gt;.3&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Note: the above command will add a second repo to the pkgdir attribute. This is only supported for xCAT 2.8.2 and above. For earlier versions of xCAT, omit the -p flag to replace the existing repo directory with the new one. &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;run the ospkgs postscript to have yum update all rpms on the nodes &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;updatenode&lt;/span&gt; &lt;span class="n"&gt;compute&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;P&lt;/span&gt; &lt;span class="n"&gt;ospkgs&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
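The manual-update path mentioned above (copying a few updated rpms and running createrepo in that directory) can be sketched as follows. The directory name is an example, shown under /tmp so the sketch can run anywhere; on a real management node you would use a directory under /install, where createrepo is available from the createrepo rpm:

```shell
# Stage updated rpms in a repo directory (example path; use /install/... on the MN).
REPODIR=/tmp/install/updates/rhels6.3/x86_64
mkdir -p "$REPODIR"
# cp /myrpms/*.rpm "$REPODIR"/     # copy your updated rpms here
cd "$REPODIR"
# Generate the repository metadata so yum can consume the directory.
if command -v createrepo >/dev/null 2>&1; then
    createrepo .
else
    echo "createrepo not installed; run it on the management node"
fi
```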
&lt;h3 id="option-2-installing-stateful-nodes-using-sysclone"&gt;Option 2: Installing Stateful Nodes Using Sysclone&lt;/h3&gt;
&lt;p&gt;This section describes how to install or configure a diskful node (called a golden-client), capture an osimage from it, and then use that osimage to install or clone other nodes. See &lt;a class="" href="/p/xcat/wiki/Using_Clone_to_Deploy_Server"&gt;Using_Clone_to_Deploy_Server&lt;/a&gt; for more information.&lt;/p&gt;
&lt;p&gt;Note: this support is available in xCAT 2.8.2 and above. &lt;/p&gt;
&lt;h4 id="install-or-configure-the-golden-client"&gt;&lt;strong&gt;Install or Configure the Golden Client&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;If you want to use the &lt;strong&gt;sysclone&lt;/strong&gt; provisioning method, you need a golden-client. This lets you customize and tweak the golden-client’s software and configuration to your needs, and verify its proper operation. Once the image is captured and deployed, the new nodes will behave the same way the golden-client does. &lt;/p&gt;
&lt;p&gt;To install a golden-client, follow the section &lt;a class="" href="../Installing_Stateful_Linux_Nodes/#option-1-installing-stateful-nodes-using-isos-or-dvds"&gt;Installing_Stateful_Linux_Nodes#Option_1:_Installing_Stateful_Nodes_Using_ISOs_or_DVDs&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;To install the systemimager rpms on the golden-client, do these steps on the mgmt node: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Download the xcat-dep tarball which includes systemimager rpms. (You might already have the xcat-dep tarball on the mgmt node.) &lt;/p&gt;
&lt;p&gt;Go to &lt;a class="" href="http://sourceforge.net/projects/xcat/files/xcat-dep/2.x_Linux"&gt;xcat-dep&lt;/a&gt; and get the latest xCAT dependency tarball. Copy the file to the management node and untar it in the appropriate sub-directory of /install/post/otherpkgs. For example: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;For&lt;/span&gt; &lt;span class="n"&gt;RH&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;CentOS&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;    
&lt;span class="n"&gt;mkdir&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;post&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;otherpkgs&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="mf"&gt;.3&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;xcat&lt;/span&gt;
&lt;span class="n"&gt;cd&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;post&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;otherpkgs&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="mf"&gt;.3&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;xcat&lt;/span&gt;
&lt;span class="n"&gt;tar&lt;/span&gt; &lt;span class="n"&gt;jxvf&lt;/span&gt; &lt;span class="n"&gt;xcat&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;dep&lt;/span&gt;&lt;span class="o"&gt;-*&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tar&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bz2&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;For&lt;/span&gt; &lt;span class="n"&gt;SLES&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;  
&lt;span class="n"&gt;mkdir&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;post&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;otherpkgs&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;sles11&lt;/span&gt;&lt;span class="mf"&gt;.3&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;xcat&lt;/span&gt;
&lt;span class="n"&gt;cd&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;post&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;otherpkgs&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;sles11&lt;/span&gt;&lt;span class="mf"&gt;.3&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;xcat&lt;/span&gt;
&lt;span class="n"&gt;tar&lt;/span&gt; &lt;span class="n"&gt;jxvf&lt;/span&gt; &lt;span class="n"&gt;xcat&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;dep&lt;/span&gt;&lt;span class="o"&gt;-*&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tar&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bz2&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the sysclone otherpkglist file and otherpkgdir to the osimage definition that is used for the golden-client, and then use updatenode to install the rpms. For example: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;For&lt;/span&gt; &lt;span class="n"&gt;RH&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;CentOS&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; 
&lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;osimage&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;otherpkglist&lt;/span&gt;&lt;span class="o"&gt;=/&lt;/span&gt;&lt;span class="n"&gt;opt&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;xcat&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;share&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;xcat&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rh&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;sysclone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;otherpkgs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pkglist&lt;/span&gt;
&lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;osimage&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="n"&gt;otherpkgdir&lt;/span&gt;&lt;span class="o"&gt;=/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;post&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;otherpkgs&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="mf"&gt;.3&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;
&lt;span class="n"&gt;updatenode&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;golden&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;cilent&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;S&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;For&lt;/span&gt; &lt;span class="n"&gt;SLES&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;  
&lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;osimage&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;otherpkglist&lt;/span&gt;&lt;span class="o"&gt;=/&lt;/span&gt;&lt;span class="n"&gt;opt&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;xcat&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;share&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;xcat&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;sles&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;sysclone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sles11&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;otherpkgs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pkglist&lt;/span&gt;
&lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;osimage&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="n"&gt;otherpkgdir&lt;/span&gt;&lt;span class="o"&gt;=/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;post&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;otherpkgs&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;sles11&lt;/span&gt;&lt;span class="mf"&gt;.3&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;
&lt;span class="n"&gt;updatenode&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;golden&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;cilent&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;S&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="capture-image-from-the-golden-client"&gt;&lt;strong&gt;Capture image from the Golden Client &lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;On the mgmt node, use &lt;a class="" href="http://xcat.sourceforge.net/man1/imgcapture.1.html"&gt;imgcapture&lt;/a&gt; to capture an osimage from the golden-client. &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="nx"&gt;imgcapture&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;my&lt;/span&gt;&lt;span class="na"&gt;-golden-client&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="na"&gt;-t&lt;/span&gt; &lt;span class="nx"&gt;sysclone&lt;/span&gt; &lt;span class="na"&gt;-o&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;mycomputeimage&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Tip: when imgcapture is run, it pulls the osimage from the golden-client and creates the image file system and a corresponding osimage definition on the xCAT management node. To check the osimage attributes: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="nx"&gt;lsdef&lt;/span&gt; &lt;span class="na"&gt;-t&lt;/span&gt; &lt;span class="nx"&gt;osimage&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;mycomputeimage&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="begin-installation"&gt;Begin Installation&lt;/h3&gt;
&lt;h3 id="use-network-boot-to-start-the-installation"&gt;Use network boot to start the installation&lt;/h3&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;rnetboot&lt;/span&gt; &lt;span class="n"&gt;compute&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="alternative-network-boot-in-power-775"&gt;Alternative network boot in Power 775&lt;/h3&gt;
&lt;p&gt;For Power 775 nodes, you can also initiate network boot using the &lt;a class="" href="http://xcat.sourceforge.net/man1/rbootseq.1.html"&gt;rbootseq&lt;/a&gt; and &lt;a class="" href="http://xcat.sourceforge.net/man1/rpower.1.html"&gt;rpower&lt;/a&gt; commands, but this method is not recommended for diskful p775 service nodes.&lt;/p&gt;
&lt;div&gt;
&lt;div class="markdown_content"&gt;&lt;h3 id="monitor-installation"&gt;&lt;strong&gt;Monitor installation&lt;/strong&gt;&lt;/h3&gt;
&lt;p&gt;It is possible to use the wcons command to watch the installation process for a sampling of the nodes: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;wcons&lt;/span&gt; &lt;span class="n"&gt;n1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;n20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;n80&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;n100&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;or use rcons to watch a single node: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;rcons&lt;/span&gt; &lt;span class="n"&gt;n1&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Additionally, nodestat may be used to check the status of a node as it installs: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;nodestat&lt;/span&gt; &lt;span class="n"&gt;n20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;n21&lt;/span&gt;
&lt;span class="nl"&gt;n20:&lt;/span&gt; &lt;span class="n"&gt;installing&lt;/span&gt; &lt;span class="n"&gt;man&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;pages&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mf"&gt;2.39&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;10.&lt;/span&gt;&lt;span class="n"&gt;el5&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nl"&gt;n21:&lt;/span&gt; &lt;span class="n"&gt;installing&lt;/span&gt; &lt;span class="n"&gt;prep&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Note: the percentage complete reported by nodestat is not necessarily reliable. &lt;/p&gt;
&lt;p&gt;You can also watch nodelist.status until it changes to "booted" for each node: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;nodels&lt;/span&gt; &lt;span class="n"&gt;compute&lt;/span&gt; &lt;span class="n"&gt;nodelist&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;xcoll&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Once all of the nodes are installed and booted, you should be able to ssh to all of them from the MN without a password, because xCAT should have automatically set up the ssh keys (if the postscripts ran successfully): &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;xdsh&lt;/span&gt; &lt;span class="n"&gt;compute&lt;/span&gt; &lt;span class="n"&gt;date&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;If there are problems, see &lt;a class="alink" href="/p/xcat/wiki/Debugging_xCAT_Problems"&gt;[Debugging_xCAT_Problems]&lt;/a&gt;. &lt;/p&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;div&gt;
&lt;div class="markdown_content"&gt;&lt;h3 id="installing-additional-packages-using-an-otherpkgs-pkglist"&gt;Installing Additional Packages Using an Otherpkgs Pkglist&lt;/h3&gt;
&lt;p&gt;If you have additional rpms (rpms &lt;strong&gt;not&lt;/strong&gt; in the distro) that you also want installed, make a directory to hold them, create a list of the rpms you want installed, and add that information to the osimage definition: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create a directory to hold the additional rpms: &lt;/p&gt;
&lt;p&gt;mkdir -p /install/post/otherpkgs/rh/x86_64&lt;br /&gt;
cd /install/post/otherpkgs/rh/x86_64&lt;br /&gt;
cp /myrpms/* .&lt;br /&gt;
createrepo .&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: if the management node is running rhels6.x and the otherpkgs repository data is for rhels5.x, run createrepo with "-s md5". For example: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;createrepo&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="n"&gt;md5&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create a file that lists the additional rpms that should be installed. For example, in /install/custom/netboot/rh/compute.otherpkgs.pkglist put: &lt;/p&gt;
&lt;p&gt;myrpm1&lt;br /&gt;
myrpm2&lt;br /&gt;
myrpm3&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add both the directory and the file to the osimage definition: &lt;/p&gt;
&lt;p&gt;chdef -t osimage mycomputeimage otherpkgdir=/install/post/otherpkgs/rh/x86_64 otherpkglist=/install/custom/netboot/rh/compute.otherpkgs.pkglist&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
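The pkglist file and the osimage attributes above fit together as sketched below. The rpm names are the placeholders from the text, and /tmp is used so the sketch runs outside a real management node; on the MN you would use the /install paths shown above:

```shell
# Create the otherpkgs pkglist (rpm names are placeholders from the example).
PKGLIST=/tmp/install/custom/netboot/rh/compute.otherpkgs.pkglist
mkdir -p "$(dirname "$PKGLIST")"
cat > "$PKGLIST" <<'EOF'
myrpm1
myrpm2
myrpm3
EOF
cat "$PKGLIST"
# On the management node you would then point the osimage at it:
# chdef -t osimage mycomputeimage \
#     otherpkgdir=/install/post/otherpkgs/rh/x86_64 \
#     otherpkglist=/install/custom/netboot/rh/compute.otherpkgs.pkglist
```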
&lt;p&gt;If you add more rpms at a later time, you must run createrepo again. The createrepo command is in the createrepo rpm, which for RHEL is on the first DVD and for SLES is on the SDK DVD. &lt;/p&gt;
&lt;p&gt;If you have &lt;strong&gt;multiple sets of rpms&lt;/strong&gt; that you want to &lt;strong&gt;keep separate&lt;/strong&gt; to keep them organized, you can put them in separate sub-directories in the otherpkgdir. If you do this, you need to do the following extra things, in addition to the steps above: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Run createrepo in each sub-directory &lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In your otherpkgs.pkglist, list at least 1 file from each sub-directory. (During installation, xCAT will define a yum or zypper repository for each directory you reference in your otherpkgs.pkglist.) For example: &lt;/p&gt;
&lt;p&gt;xcat/xcat-core/xCATsn&lt;br /&gt;
xcat/xcat-dep/rh6/x86_64/conserver-xcat&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;There are some examples of otherpkgs.pkglist in /opt/xcat/share/xcat/netboot/&amp;lt;distro&amp;gt;/service.*.otherpkgs.pkglist that show the format. &lt;/p&gt;
&lt;p&gt;Note: the otherpkgs postbootscript should by default be associated with every node. Use lsdef to check: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;lsdef&lt;/span&gt; &lt;span class="n"&gt;node1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="n"&gt;postbootscripts&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;If it is not, you need to add it. For example, add it for all of the nodes in the "compute" group: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;group&lt;/span&gt; &lt;span class="n"&gt;compute&lt;/span&gt; &lt;span class="n"&gt;postbootscripts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;otherpkgs&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div&gt;
&lt;div class="markdown_content"&gt;&lt;h3 id="installing-os-updates-by-setting-linuximagepkgdironly-support-for-rhels-and-sles"&gt;Installing OS Updates By Setting linuximage.pkgdir(only support for rhels and sles)&lt;/h3&gt;
&lt;p&gt;The linuximage.pkgdir attribute names the directory where the distro packages are stored. It can be set to multiple paths, separated by commas. The first path is the value of osimage.pkgdir and must be the OS base package directory, for example pkgdir=/install/rhels6.2/x86_64,/install/updates/rhels6.2/x86_64 . The OS base package path already contains default repository data. For the other path(s), make sure repository data exists; if it does not, use the "createrepo" command to create it. &lt;/p&gt;
&lt;p&gt;If you have additional OS update rpms (for example, from the OS vendor's website or from an additional OS distro) that you also want installed, make a directory to hold them, create a list of the rpms you want installed, and add that information to the osimage definition: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create a directory to hold the additional rpms: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;mkdir&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;updates&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="mf"&gt;.2&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt; 
&lt;span class="n"&gt;cd&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;updates&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="mf"&gt;.2&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt; 
&lt;span class="n"&gt;cp&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;myrpms&lt;/span&gt;&lt;span class="o"&gt;/*&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If there is no repository data in the directory, you can run "createrepo" to create it: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;createrepo&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The createrepo command is in the createrepo rpm, which for RHEL is on the first DVD and for SLES is on the SDK DVD. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: if the management node is running rhels6.x and the otherpkgs repository data is for rhels5.x, run createrepo with "-s md5". For example: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;createrepo&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="n"&gt;md5&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Append the additional rpms into the corresponding pkglist. For example, in /install/custom/install/rh/compute.rhels6.x86_64.pkglist, append: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="n"&gt;myrpm1&lt;/span&gt;
&lt;span class="n"&gt;myrpm2&lt;/span&gt;
&lt;span class="n"&gt;myrpm3&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add both the directory and the file to the osimage definition: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt; &lt;span class="n"&gt;mycomputeimage&lt;/span&gt; &lt;span class="n"&gt;pkgdir&lt;/span&gt;&lt;span class="o"&gt;=/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="mf"&gt;.2&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;updates&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="mf"&gt;.2&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt; \
    &lt;span class="n"&gt;pkglist&lt;/span&gt;&lt;span class="o"&gt;=/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;custom&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rh&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;compute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pkglist&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
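The append-to-pkglist step above can be sketched as follows. The existing entries and the rpm names are placeholders, and /tmp stands in for the real /install/custom/install/rh/ location:

```shell
# Append update rpm names to an existing distro pkglist (all names are
# placeholders; the real file lives under /install/custom/install/rh/).
PKGLIST=/tmp/compute.rhels6.x86_64.pkglist
printf 'bash\nopenssl\n' > "$PKGLIST"   # stand-in for the existing entries
cat >> "$PKGLIST" <<'EOF'
myrpm1
myrpm2
myrpm3
EOF
wc -l < "$PKGLIST"
```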
&lt;p&gt;If you add more rpms at a later time, you must run createrepo again. &lt;/p&gt;
&lt;p&gt;Note: after making the above changes: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For a diskful install, run "nodeset &amp;lt;noderange&amp;gt; mycomputeimage" to pick up the changes, and then boot up the nodes. &lt;/li&gt;
&lt;li&gt;For diskless, run genimage to install the packages into the image, then run packimage and boot up the nodes. &lt;/li&gt;
&lt;li&gt;If the nodes are up, run "updatenode &amp;lt;noderange&amp;gt; ospkgs" to update the packages. &lt;/li&gt;
&lt;li&gt;These functions are only supported for rhels6.x and sles11.x. &lt;/li&gt;
&lt;/ul&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;h2 id="stateless-node-deployment"&gt;Stateless Node Deployment&lt;/h2&gt;
&lt;p&gt;The following section (and its subsections) is the standard xCAT procedure for building and deploying a Linux stateless image. Some of the example commands refer to the x86_64 architecture, but the procedure is the same on ppc64; just replace x86_64 with ppc64. Also, when it comes time to boot the nodes, use rnetboot instead of rpower.&lt;/p&gt;
&lt;p&gt;In addition, if you are building/deploying a stateless image on a &lt;strong&gt;p775&lt;/strong&gt; cluster, do these additional things when following the procedure below:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In the section for installing other packages, add the &lt;a class="" href="ftp://linuxpatch.ncsa.uiuc.edu/PERCS/powerpc-utils-1.2.2-18.el6.ppc64.rpm" rel="nofollow"&gt;powerpc-utils rpm&lt;/a&gt; to the otherpkgs directory and the otherpkglist.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the section for using postinstall files, add the following lines to your postinstall script (the location of the rootimg should be changed to your location):&lt;/p&gt;
&lt;p&gt;cp /hfi/dd/* /install/test/netboot/rh/ppc64/compute/rootimg/tmp/&lt;br /&gt;
chroot /install/test/netboot/rh/ppc64/compute/rootimg/ /bin/rpm -ivh '/tmp/dhclient-*.rpm' --force&lt;br /&gt;
chroot /install/test/netboot/rh/ppc64/compute/rootimg/ /bin/rpm -ivh '/tmp/dhcp-*.rpm' --force&lt;br /&gt;
chroot /install/test/netboot/rh/ppc64/compute/rootimg/ /bin/rpm -ivh '/tmp/kernel-headers-*.rpm' --force&lt;br /&gt;
chroot /install/test/netboot/rh/ppc64/compute/rootimg/ /bin/rpm -ivh '/tmp/net-tools-*.rpm' --force&lt;br /&gt;
chroot /install/test/netboot/rh/ppc64/compute/rootimg/ /bin/rpm -ivh '/tmp/hfi_ndai-*.rpm' --force&lt;br /&gt;
chroot /install/test/netboot/rh/ppc64/compute/rootimg/ /bin/rpm -ivh '/tmp/hfi_util-*.rpm' --force&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the Generate/Pack image section, put the p775 custom kernel in /install/kernels and use a genimage command like:&lt;/p&gt;
&lt;p&gt;genimage -i hf0 -n hf_if -k 2.6.32-71.el6.20110617.ppc64 redhat6img&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Also verify that name resolution will be set up correctly on the nodes. This is necessary for the confighfi postscript to configure the HFI NICs properly. See &lt;a class="alink" href="/p/xcat/wiki/Cluster_Name_Resolution"&gt;[Cluster_Name_Resolution]&lt;/a&gt; for details about setting up name resolution.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;When it comes time to boot the nodes, use rbootseq and rpower.&lt;/li&gt;
&lt;/ul&gt;
&lt;div&gt;
&lt;div class="markdown_content"&gt;&lt;p&gt;Note: this section describes how to create a stateless image using the genimage command to install a list of rpms into the image. As an alternative, you can also capture an image from a running node and create a stateless image out of it. See &lt;a class="alink" href="/p/xcat/wiki/Capture_Linux_Image"&gt;[Capture_Linux_Image]&lt;/a&gt; for details. &lt;/p&gt;
&lt;div class="toc"&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#create-the-distro-repository-on-the-mn"&gt;Create the Distro Repository on the MN&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#using-an-osimage-definition"&gt;Using an osimage Definition&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#select-or-create-an-osimage-definition"&gt;Select or Create an osimage Definition&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-pkglists"&gt;Set up pkglists&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-a-postinstall-script-optional"&gt;Set up a postinstall script (optional)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#set-up-files-to-be-synchronized-on-the-nodes"&gt;Set up Files to be synchronized on the nodes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#configure-the-nodes-to-use-your-osimage"&gt;Configure the nodes to use your osimage&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#generate-and-pack-your-image"&gt;Generate and pack your image&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#building-an-image-for-a-different-os-or-architecture"&gt;Building an Image for a Different OS or Architecture&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#building-an-image-for-the-same-os-and-architecture-as-the-mn"&gt;Building an Image for the Same OS and Architecture as the MN&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#installing-a-new-kernel-in-the-stateless-image"&gt;Installing a New Kernel in the Stateless Image&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#installing-new-kernel-drivers-to-stateless-initrd"&gt;Installing New Kernel Drivers to Stateless Initrd&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#boot-the-nodes"&gt;Boot the nodes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h3 id="create-the-distro-repository-on-the-mn"&gt;Create the Distro Repository on the MN&lt;/h3&gt;
&lt;p&gt;The &lt;a class="" href="http://xcat.sourceforge.net/man8/copycds.8.html"&gt;copycds&lt;/a&gt; command copies the contents of the Linux distro media to /install/&amp;lt;os&amp;gt;/&amp;lt;arch&amp;gt; so that it is available for installing nodes or creating diskless images. &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Obtain the Redhat or SLES ISOs or DVDs. &lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If using an ISO, copy it to (or NFS mount it on) the management node, and then run: &lt;/p&gt;
&lt;p&gt;copycds &amp;lt;path&amp;gt;/RHEL6.2-Server-20080430.0-x86_64-DVD.iso&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If using a DVD, put it in the DVD drive of the management node and run: &lt;/p&gt;
&lt;p&gt;copycds /dev/dvd       # or whatever the device name of your dvd drive is&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Tip: if this is the same distro version as your management node, create a .repo file in /etc/yum.repos.d with content similar to: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="k"&gt;[local-rhels6.2-x86_64]&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;xCAT local rhels 6.2&lt;/span&gt;
&lt;span class="na"&gt;baseurl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;file:/install/rhels6.2/x86_64&lt;/span&gt;
&lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;1&lt;/span&gt;
&lt;span class="na"&gt;gpgcheck&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s"&gt;0&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This way, if you need additional RPMs on your MN at a later time, you can simply install them using yum. And if you install other software on your MN that requires additional RPMs from the distro, they will automatically be found and installed. &lt;/p&gt;
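&lt;p&gt;As a sketch, the .repo file above can be generated from shell variables (written to /tmp here for illustration; in practice it belongs in /etc/yum.repos.d): &lt;/p&gt;

```shell
# Generate a local-repo file for a given distro/arch; the baseurl assumes
# copycds has already populated /install/<os>/<arch>.
os=rhels6.2
arch=x86_64
repofile=/tmp/local-$os-$arch.repo
cat > "$repofile" <<EOF
[local-$os-$arch]
name=xCAT local $os
baseurl=file:/install/$os/$arch
enabled=1
gpgcheck=0
EOF
cat "$repofile"
```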
&lt;h3 id="using-an-osimage-definition"&gt;Using an osimage Definition&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Note: To use an osimage as your provisioning method, you need to be running xCAT 2.6.6 or later.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The provmethod attribute of your nodes should contain the name of the osimage object definition that is being used for those nodes. The &lt;a class="" href="http://xcat.sourceforge.net/man7/osimage.7.html"&gt;osimage object&lt;/a&gt; contains paths for pkgs, templates, kernels, etc. If you haven't already, run &lt;a class="" href="http://xcat.sourceforge.net/man8/copycds.8.html"&gt;copycds&lt;/a&gt; to copy the distro rpms to /install. Default osimage objects are also defined when copycds is run. To view the osimages: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;lsdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt;          &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="n"&gt;see&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;list&lt;/span&gt; &lt;span class="n"&gt;of&lt;/span&gt; &lt;span class="n"&gt;osimages&lt;/span&gt;
    &lt;span class="n"&gt;lsdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;osimage&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="n"&gt;see&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;attributes&lt;/span&gt; &lt;span class="n"&gt;of&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;particular&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="select-or-create-an-osimage-definition"&gt;Select or Create an osimage Definition&lt;/h3&gt;
&lt;p&gt;From the list found above, select the osimage for your distro, architecture, provisioning method (install, netboot, statelite), and profile (compute, service, etc.). Although it is optional, we recommend you make a copy of the osimage, giving it a simpler name. For example: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;lsdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;z&lt;/span&gt; &lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="mf"&gt;.3&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;netboot&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;compute&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;sed&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;/^&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;^&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="o"&gt;+:/&lt;/span&gt;&lt;span class="n"&gt;mycomputeimage&lt;/span&gt;&lt;span class="o"&gt;:/&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;mkdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;z&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This displays the osimage "rhels6.3-x86_64-netboot-compute" in a format that can be used as input to mkdef, but on the way there it uses sed to modify the name of the object to "mycomputeimage". &lt;/p&gt;
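&lt;p&gt;To see what the sed in that pipeline does, here is a minimal, hypothetical stanza (real lsdef -z output has many more attributes) run through the same substitution: &lt;/p&gt;

```shell
# Replace the object name at the start of the stanza, exactly as the
# lsdef | sed | mkdef pipeline does. Attribute lines are indented, so
# only the name line matches ^[^ ]\+:
stanza='rhels6.3-x86_64-netboot-compute:
    objtype=osimage
    provmethod=netboot'
renamed=$(printf '%s\n' "$stanza" | sed 's/^[^ ]\+:/mycomputeimage:/')
printf '%s\n' "$renamed"
```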
&lt;p&gt;Initially, this osimage object points to templates, pkglists, etc. that are shipped by default with xCAT. And some attributes, for example otherpkglist and synclists, won't have any value at all because xCAT doesn't ship a default file for that. You can now change/fill in any &lt;a class="" href="http://xcat.sourceforge.net/man7/osimage.7.html"&gt;osimage attributes&lt;/a&gt; that you want. A general convention is that if you are modifying one of the default files that an osimage attribute points to, copy it into /install/custom and have your osimage point to it there. (If you modify the copy under /opt/xcat directly, it will be over-written the next time you upgrade xCAT.) An important attribute to change is the rootimgdir which will contain the generated osimage files so that you don't over-write an image built with the shipped definitions. To continue the previous example: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;      &lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="n"&gt;mycomputeimage&lt;/span&gt; &lt;span class="n"&gt;rootimgdir&lt;/span&gt;&lt;span class="o"&gt;=/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;netboot&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="mf"&gt;.3&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;mycomputeimage&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="set-up-pkglists"&gt;Set up pkglists&lt;/h3&gt;
&lt;p&gt;You likely want to customize the main pkglist for the image. This is the list of rpms or groups that will be installed from the distro. (Other rpms that they depend on will be installed automatically.) For example: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;mkdir&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;custom&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;netboot&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rh&lt;/span&gt;
    &lt;span class="n"&gt;cp&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;opt&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;xcat&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;share&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;xcat&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;netboot&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rh&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;compute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pkglist&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;custom&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;netboot&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rh&lt;/span&gt;
    &lt;span class="n"&gt;vi&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;custom&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;netboot&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rh&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;compute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pkglist&lt;/span&gt;
    &lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt; &lt;span class="n"&gt;mycomputeimage&lt;/span&gt; &lt;span class="n"&gt;pkglist&lt;/span&gt;&lt;span class="o"&gt;=/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;custom&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;netboot&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rh&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;compute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rhels6&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pkglist&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The goal is to install the fewest number of rpms that still provides the function and applications that you need, because the resulting ramdisk will use real memory in your nodes. &lt;/p&gt;
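&lt;p&gt;A pkglist is a flat list of package or group names, one per line. The names below are purely illustrative; the shipped compute.rhels6.x86_64.pkglist is the authoritative starting point: &lt;/p&gt;

```shell
# Write a deliberately small pkglist (illustrative package names only;
# written to /tmp for the sketch).
cat > /tmp/compute.rhels6.x86_64.pkglist <<'EOF'
bash
nfs-utils
openssh-server
wget
EOF
wc -l < /tmp/compute.rhels6.x86_64.pkglist
```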
&lt;p&gt;Also, check to see if the default exclude list excludes all files and directories you do not want in the image. The exclude list enables you to trim the image after the rpms are installed into the image, so that you can make the image as small as possible. &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;cp&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;opt&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;xcat&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;share&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;xcat&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;netboot&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rh&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;compute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;exlist&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;custom&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;netboot&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rh&lt;/span&gt;
    &lt;span class="n"&gt;vi&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;custom&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;netboot&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rh&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;compute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;exlist&lt;/span&gt; 
    &lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt; &lt;span class="n"&gt;mycomputeimage&lt;/span&gt; &lt;span class="n"&gt;exlist&lt;/span&gt;&lt;span class="o"&gt;=/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;custom&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;netboot&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rh&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;compute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;exlist&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Make sure nothing is excluded in the exclude list that you need on the node. For example, if you require perl on your nodes, remove the line "./usr/lib/perl5*". &lt;/p&gt;
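&lt;p&gt;For example, dropping the perl exclusion from a copy of the exlist can be sketched as follows (the entries are illustrative of the shipped format; done in /tmp): &lt;/p&gt;

```shell
# Build a sample exlist, then delete the perl5 exclusion so perl
# remains in the image.
cat > /tmp/compute.exlist <<'EOF'
./usr/share/man*
./usr/share/doc*
./usr/lib/perl5*
EOF
sed -i '\|^\./usr/lib/perl5\*$|d' /tmp/compute.exlist
cat /tmp/compute.exlist
```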
&lt;h3 id="set-up-a-postinstall-script-optional"&gt;Set up a postinstall script (optional)&lt;/h3&gt;
&lt;p&gt;Postinstall scripts for diskless images are analogous to postscripts for diskfull installation. The postinstall script is run by genimage near the end of its processing. You can use it to do anything to your image that you want done every time you generate this kind of image. In the script you can install rpms that need special flags, or tweak the image in some way. There are some examples shipped in /opt/xcat/share/xcat/netboot/&amp;lt;distro&amp;gt;. If you create a postinstall script to be used by genimage, then point to it in your osimage definition. For example: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt; &lt;span class="n"&gt;mycomputeimage&lt;/span&gt; &lt;span class="n"&gt;postinstall&lt;/span&gt;&lt;span class="o"&gt;=/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;custom&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;netboot&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rh&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;compute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;postinstall&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="set-up-files-to-be-synchronized-on-the-nodes"&gt;Set up Files to be synchronized on the nodes&lt;/h3&gt;
&lt;p&gt;Note: This is only supported for stateless nodes in xCAT 2.7 and above. &lt;/p&gt;
&lt;p&gt;Sync lists contain a list of files that should be synchronized from the management node to the image and to the running nodes. This allows you to keep one copy of configuration files for a particular type of node and ensure that all those nodes run with those files. The sync list contains a line for each file you want synchronized, specifying the path it has on the MN and the path it should be given on the node. For example: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    /install/custom/syncfiles/compute/etc/motd -&amp;gt; /etc/motd
    /etc/hosts -&amp;gt; /etc/hosts
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;If you put the above contents in /install/custom/netboot/rh/compute.synclist, then: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt; &lt;span class="n"&gt;mycomputeimage&lt;/span&gt; &lt;span class="n"&gt;synclists&lt;/span&gt;&lt;span class="o"&gt;=/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;custom&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;netboot&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rh&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;compute&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;synclist&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;For more details, see &lt;a class="" href="/p/xcat/wiki/Sync-ing_Config_Files_to_Nodes"&gt;Sync-ing_Config_Files_to_Nodes&lt;/a&gt;. &lt;/p&gt;
&lt;h3 id="configure-the-nodes-to-use-your-osimage"&gt;Configure the nodes to use your osimage&lt;/h3&gt;
&lt;p&gt;You can configure any noderange to use this osimage. In this example, we define that the whole compute group should use the image: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;     &lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;group&lt;/span&gt; &lt;span class="n"&gt;compute&lt;/span&gt; &lt;span class="n"&gt;provmethod&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;mycomputeimage&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Now that you have associated an osimage with nodes, if you want to list a node's attributes, including the osimage attributes all in one command: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;lsdef&lt;/span&gt; &lt;span class="n"&gt;node1&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;osimage&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="generate-and-pack-your-image"&gt;Generate and pack your image&lt;/h3&gt;
&lt;p&gt;There are other attributes that can be set in your osimage definition. See the &lt;a class="" href="http://xcat.sourceforge.net/man7/osimage.7.html"&gt;osimage man page&lt;/a&gt; for details. &lt;/p&gt;
&lt;h4 id="building-an-image-for-a-different-os-or-architecture"&gt;&lt;strong&gt;Building an Image for a Different OS or Architecture&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;If you are building an image for a different OS/architecture than is on the Management node, you need to follow this process: &lt;a class="alink" href="/p/xcat/wiki/Building_a_Stateless_Image_of_a_Different_Architecture_or_OS"&gt;[Building_a_Stateless_Image_of_a_Different_Architecture_or_OS]&lt;/a&gt;. Note: different OS in this case means, for example, RHEL 5 vs. RHEL 6. If the difference is just an update level/service pack (e.g. RHEL 6.0 vs. RHEL 6.3), then you can build it on the MN. &lt;/p&gt;
&lt;h4 id="building-an-image-for-the-same-os-and-architecture-as-the-mn"&gt;&lt;strong&gt;Building an Image for the Same OS and Architecture as the MN&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;If the image you are building is for nodes that are the same OS and architecture as the management node (the most common case), then you can follow the instructions here to run genimage on the management node. &lt;/p&gt;
&lt;p&gt;Run &lt;a class="" href="http://xcat.sourceforge.net/man1/genimage.1.html"&gt;genimage&lt;/a&gt; to generate the image based on the mycomputeimage definition: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;genimage&lt;/span&gt; &lt;span class="n"&gt;mycomputeimage&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Before you pack the image, you can change any files in the image by cd'ing to the rootimgdir (e.g. /install/netboot/rhels6/x86_64/compute/rootimg). However, we recommend making all such changes via your postinstall script, so that they are repeatable. &lt;/p&gt;
&lt;p&gt;The genimage command creates /etc/fstab in the image. If you want to, for example, limit the amount of space that can be used in /tmp and /var/tmp, you can add lines like the following to it (either by editing it by hand or via the postinstall script): &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;tmpfs&lt;/span&gt;   &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;tmp&lt;/span&gt;     &lt;span class="n"&gt;tmpfs&lt;/span&gt;    &lt;span class="n"&gt;defaults&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;             &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
    &lt;span class="n"&gt;tmpfs&lt;/span&gt;   &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;var&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;tmp&lt;/span&gt;     &lt;span class="n"&gt;tmpfs&lt;/span&gt;    &lt;span class="n"&gt;defaults&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;       &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;An easier way to accomplish this, however, is to create a postscript containing the following lines, to be run when the node boots: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;xcat&lt;/span&gt; &lt;span class="s"&gt;"$0: BEGIN"&lt;/span&gt;
    &lt;span class="n"&gt;mount&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="n"&gt;remount&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;tmp&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
    &lt;span class="n"&gt;mount&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="n"&gt;remount&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;var&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;tmp&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
    &lt;span class="n"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;xcat&lt;/span&gt; &lt;span class="s"&gt;"$0: END"&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Assuming you call this postscript settmpsize, you can add this to the list of postscripts that should be run for your compute nodes by: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;chdef&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;group&lt;/span&gt; &lt;span class="n"&gt;compute&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="n"&gt;postbootscripts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;settmpsize&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Now pack the image to create the ramdisk: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;packimage&lt;/span&gt; &lt;span class="n"&gt;mycomputeimage&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h4 id="installing-a-new-kernel-in-the-stateless-image"&gt;&lt;strong&gt;Installing a New Kernel in the Stateless Image&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;The &lt;em&gt;kerneldir&lt;/em&gt; attribute in the &lt;em&gt;linuximage&lt;/em&gt; table designates the directory that holds a new kernel to be installed into the stateless/statelite image. Its default value is &lt;em&gt;/install/kernels&lt;/em&gt;. Create a directory named &lt;em&gt;&amp;lt;kernelver&amp;gt;&lt;/em&gt; under &lt;em&gt;kerneldir&lt;/em&gt;, and genimage will pick the kernel up from there. &lt;/p&gt;
&lt;p&gt;Assume you have the kernel in RPM format in /tmp and that &lt;em&gt;kerneldir&lt;/em&gt; is not set (so the default, &lt;em&gt;/install/kernels&lt;/em&gt;, is used). &lt;/p&gt;
&lt;p&gt;This procedure assumes you are using xCAT 2.6.1 or later. The rpm names below are examples; substitute your level and architecture. The kernel will be installed directly from the rpm package. &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For RHEL: &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The kernel RPM package is usually named &lt;em&gt;kernel-&amp;lt;kernelver&amp;gt;.rpm&lt;/em&gt;, for example: kernel-2.6.32.10-0.5.x86_64.rpm is the kernel package for &lt;strong&gt;2.6.32.10-0.5.x86_64&lt;/strong&gt;. &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;cp&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;tmp&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;kernel&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;2.6.32.10&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;kernels&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
    &lt;span class="n"&gt;createrepo&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;kernels&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;For SLES: &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Usually, the kernel files for SLES are split into two packages: &lt;em&gt;kernel-&amp;lt;arch&amp;gt;-base&lt;/em&gt; and &lt;em&gt;kernel-&amp;lt;arch&amp;gt;&lt;/em&gt;, and the naming of the kernel RPM packages is different. For example, suppose there are two RPM packages in /tmp: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;kernel&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;base&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;2.6.27.19&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;5.1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt;
    &lt;span class="n"&gt;kernel&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;2.6.27.19&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;5.1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;em&gt;2.6.27.19-5.1.x86_64&lt;/em&gt; is &lt;strong&gt;NOT&lt;/strong&gt; the kernel version; &lt;em&gt;2.6.27.19-5-x86_64&lt;/em&gt; is the kernel version. Follow this naming convention to determine the kernel version. &lt;/p&gt;
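The SLES convention can be sketched the same way: drop the last fraction of the package release and append the suffix used in the genimage example below. This is a sketch built purely from the example file name in the text, not a general-purpose parser:

```shell
# Sketch of the SLES naming convention described above, using the
# example RPM name from the text.
rpmfile=kernel-ppc64-2.6.27.19-5.1.x86_64.rpm
vra=${rpmfile#kernel-ppc64-}         # 2.6.27.19-5.1.x86_64.rpm
vra=${vra%.rpm}                      # 2.6.27.19-5.1.x86_64
suffix=${vra##*.}                    # x86_64
verrel=${vra%.*}                     # 2.6.27.19-5.1  (NOT the kernel version)
kernelver="${verrel%.*}-${suffix}"   # 2.6.27.19-5-x86_64 (the kernel version)
echo "$kernelver"
```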
&lt;p&gt;After the kernel version is determined for SLES, then: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;cp&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;tmp&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;kernel&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;base&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;2.6.27.19&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;5.1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;kernels&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
    &lt;span class="n"&gt;cp&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;tmp&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;kernel&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;ppc64&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;2.6.27.19&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;5.1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rpm&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;kernels&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Run genimage/packimage to update the image with the new kernel (using SLES as an example). &lt;/p&gt;
&lt;p&gt;Since the kernel version differs from the RPM package version, specify the -g flag on the genimage command to supply the RPM version of the kernel packages: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;genimage&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="n"&gt;eth0&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="n"&gt;ibmveth&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt; &lt;span class="n"&gt;sles11&lt;/span&gt;&lt;span class="mf"&gt;.1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="n"&gt;compute&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt; &lt;span class="mf"&gt;2.6.27.19&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;x86_64&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;g&lt;/span&gt; &lt;span class="mf"&gt;2.6.27.19&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;5.1&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h4 id="installing-new-kernel-drivers-to-stateless-initrd"&gt;&lt;strong&gt;Installing New Kernel Drivers to Stateless Initrd&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;The kernel drivers in the stateless initrd are used for the devices during the netboot. If you are missing one or more kernel drivers for specific devices (especially for the network device), the netboot process will fail. xCAT offers two approaches to add additional drivers to the stateless initrd during the running of &lt;strong&gt;genimage&lt;/strong&gt;. &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Use the '-n' flag to add new drivers to the stateless initrd &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="nx"&gt;genimage&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;imagename&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="na"&gt;-n&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;driver&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Generally, the genimage command has a default driver list which will be added to the initrd. But if you specify the '-n' flag, the default driver list will be replaced with your &amp;lt;new driver list&amp;gt;. That means you need to include any drivers that you need from the default driver list into your &amp;lt;new driver list&amp;gt;. &lt;/p&gt;
&lt;p&gt;The default driver list: &lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;rh&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;x86&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;   &lt;span class="n"&gt;tg3&lt;/span&gt; &lt;span class="n"&gt;bnx2&lt;/span&gt; &lt;span class="n"&gt;bnx2x&lt;/span&gt; &lt;span class="n"&gt;e1000&lt;/span&gt; &lt;span class="n"&gt;e1000e&lt;/span&gt; &lt;span class="n"&gt;igb&lt;/span&gt; &lt;span class="n"&gt;mlx_en&lt;/span&gt; &lt;span class="n"&gt;virtio_net&lt;/span&gt; &lt;span class="n"&gt;be2net&lt;/span&gt;
    &lt;span class="n"&gt;rh&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;ppc&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;   &lt;span class="n"&gt;e1000&lt;/span&gt; &lt;span class="n"&gt;e1000e&lt;/span&gt; &lt;span class="n"&gt;igb&lt;/span&gt; &lt;span class="n"&gt;ibmveth&lt;/span&gt; &lt;span class="n"&gt;ehea&lt;/span&gt;
    &lt;span class="n"&gt;sles&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;x86&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;tg3&lt;/span&gt; &lt;span class="n"&gt;bnx2&lt;/span&gt; &lt;span class="n"&gt;bnx2x&lt;/span&gt; &lt;span class="n"&gt;e1000&lt;/span&gt; &lt;span class="n"&gt;e1000e&lt;/span&gt; &lt;span class="n"&gt;igb&lt;/span&gt; &lt;span class="n"&gt;mlx_en&lt;/span&gt; &lt;span class="n"&gt;be2net&lt;/span&gt;
    &lt;span class="n"&gt;sels&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;ppc&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;tg3&lt;/span&gt; &lt;span class="n"&gt;e1000&lt;/span&gt; &lt;span class="n"&gt;e1000e&lt;/span&gt; &lt;span class="n"&gt;igb&lt;/span&gt; &lt;span class="n"&gt;ibmveth&lt;/span&gt; &lt;span class="n"&gt;ehea&lt;/span&gt; &lt;span class="n"&gt;be2net&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
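Because '-n' replaces the default driver list rather than appending to it, a safe pattern is to start from the default list for your distro/arch and append your extra drivers. A minimal sketch, where the image name and the extra driver (mlx4_en) are hypothetical:

```shell
# Start from the default driver list for sles-ppc (from the table above)
default_drivers="tg3 e1000 e1000e igb ibmveth ehea be2net"
# Hypothetical extra driver your network adapter needs
extra_drivers="mlx4_en"
new_list="$default_drivers $extra_drivers"
# The genimage invocation would then look like this (echoed, not run here):
echo genimage myimagename -n "$new_list"
```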
&lt;p&gt;Note: With this approach, xCAT will search for the drivers in the rootimage. You need to make sure the drivers have been included in the rootimage before generating the initrd. You can install the drivers manually in an existing rootimage (using chroot) and run genimage again, or you can use a postinstall script to install drivers to the rootimage during your initial genimage run. &lt;/p&gt;
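One way to confirm a driver is actually present in the rootimage before generating the initrd is to look for its .ko file under lib/modules. A self-contained sketch, in which a temp directory stands in for the real rootimg path (/install/netboot/&lt;os&gt;/&lt;arch&gt;/&lt;profile&gt;/rootimg) and the module file is created just for the demonstration:

```shell
# Temp dir stands in for the real rootimg directory
rootimg=$(mktemp -d)
moddir="$rootimg/lib/modules/2.6.27.19-5-ppc64/kernel/drivers/net"
mkdir -p "$moddir"
touch "$moddir/ibmveth.ko"      # pretend the driver is already installed
# The actual check: does the rootimage contain the module?
found=$(find "$rootimg/lib/modules" -name 'ibmveth.ko' | wc -l | tr -d ' ')
[ "$found" -ge 1 ] && echo "ibmveth present in rootimage"
rm -rf "$rootimg"
```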
&lt;ul&gt;
&lt;li&gt;Use the &lt;strong&gt;driver rpm package&lt;/strong&gt; to add new drivers from rpm packages to the stateless initrd &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Refer to the doc &lt;a class="" href="../Using_Linux_Driver_Update_Disk/#driver-rpm-package"&gt;Using_Linux_Driver_Update_Disk#Driver_RPM_Package&lt;/a&gt;. &lt;/p&gt;
&lt;h3 id="boot-the-nodes"&gt;Boot the nodes&lt;/h3&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="n"&gt;nodeset&lt;/span&gt; &lt;span class="n"&gt;compute&lt;/span&gt; &lt;span class="n"&gt;osimage&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;mycomputeimage&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;(If you need to update your diskless image sometime later, change your osimage attributes and the files they point to accordingly, and then rerun genimage, packimage, nodeset, and boot the nodes.) &lt;/p&gt;
&lt;p&gt;Now boot your nodes... &lt;/p&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;h3 id="use-network-boot-to-start-the-installation-for-non-p775-nodes"&gt;Use network boot to start the installation for non-p775 nodes&lt;/h3&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;rnetboot&lt;/span&gt; &lt;span class="n"&gt;compute&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="use-network-boot-to-start-the-installation-for-p775-nodes"&gt;Use network boot to start the installation for p775 nodes&lt;/h3&gt;
&lt;p&gt;Starting with xCAT 2.6, Power 775 diskless nodes can also be network booted using the &lt;a class="" href="http://xcat.sourceforge.net/man1/rbootseq.1.html"&gt;rbootseq&lt;/a&gt; and &lt;a class="" href="http://xcat.sourceforge.net/man1/rpower.1.html"&gt;rpower&lt;/a&gt; commands. This is recommended (instead of rnetboot) for diskless p775 nodes, because it is faster:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;rbootseq&lt;/span&gt; &lt;span class="n"&gt;compute&lt;/span&gt; &lt;span class="n"&gt;hfi&lt;/span&gt;
&lt;span class="n"&gt;rpower&lt;/span&gt; &lt;span class="n"&gt;compute&lt;/span&gt; &lt;span class="n"&gt;on&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Note: if your diskless p775 node has an ethernet adapter and you are trying to network boot it via that NIC, instead of the HFI NIC, then use "net" instead of "hfi" as the argument to the rbootseq command. This is an unusual case.&lt;/p&gt;
&lt;h3 id="check-the-installation-result"&gt;Check the installation result&lt;/h3&gt;
&lt;p&gt;After the node installation completes successfully, the node's status is changed to &lt;strong&gt;booted&lt;/strong&gt;. Use the following command to check the node's status:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;lsdef&lt;/span&gt; &lt;span class="n"&gt;compute&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;When the node's status has changed to &lt;strong&gt;booted&lt;/strong&gt;, you can also verify that the ssh service on the node is working and that you can log in without a password.&lt;/p&gt;
&lt;p&gt;Note: Do not run ssh or xdsh against the node until the node installation has completed successfully. Running ssh or xdsh against the node before the installation completes may result in ssh host key issues.&lt;/p&gt;
&lt;p&gt;If ssh is working but you cannot log in without a password, set up the ssh keys on the compute node using xdsh:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;xdsh&lt;/span&gt; &lt;span class="n"&gt;compute&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;K&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;After exchanging the ssh keys, the following command should work:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;xdsh&lt;/span&gt; &lt;span class="n"&gt;compute&lt;/span&gt; &lt;span class="n"&gt;date&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id="remove-an-image"&gt;Remove an image&lt;/h3&gt;
&lt;p&gt;If you want to remove an image, use rmimage to remove the Linux stateless or statelite image from the file system. It is better to use this command than to remove the file system yourself, because it also removes the appropriate links to the real file system, which might otherwise be destroyed on your Management Node if you simply used the rm -rf command.&lt;/p&gt;
&lt;p&gt;You can specify the &amp;lt;os&amp;gt;, &amp;lt;arch&amp;gt; and &amp;lt;profile&amp;gt; values on the rmimage command:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="nx"&gt;rmimage&lt;/span&gt; &lt;span class="na"&gt;-o&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;os&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="na"&gt;-a&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;arch&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="na"&gt;-p&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;profile&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Or, you can specify one imagename to the command:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;    &lt;span class="nx"&gt;rmimage&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;imagename&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h2 id="statelite-node-deployment"&gt;Statelite Node Deployment&lt;/h2&gt;
&lt;p&gt;Statelite is an xCAT feature which allows you to have mostly stateless nodes (for ease of management), but tell xCAT that just a little bit of state should be kept in a few specific files or directories that are persistent for each node. If you would like to use this feature, refer to the &lt;a class="alink" href="/p/xcat/wiki/XCAT_Linux_Statelite"&gt;[XCAT_Linux_Statelite]&lt;/a&gt; documentation.&lt;/p&gt;
&lt;h2 id="advanced-features"&gt;Advanced features&lt;/h2&gt;
&lt;h3 id="use-the-driver-update-disk"&gt;Use the driver update disk:&lt;/h3&gt;
&lt;p&gt;Refer to &lt;a class="alink" href="/p/xcat/wiki/Using_Linux_Driver_Update_Disk"&gt;[Using_Linux_Driver_Update_Disk]&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="setup-kdump-service-over-ethernethfi-on-diskless-linux-for-xcat-26-and-higher"&gt;Setup Kdump Service over Ethernet/HFI on diskless Linux (for xCAT 2.6 and higher)&lt;/h3&gt;
&lt;p&gt;Follow &lt;a class="alink" href="/p/xcat/wiki/Kdump%20over%20Ethernet%20or%20HFI%20for%20Linux%20diskless%20nodes"&gt;[Kdump over Ethernet or HFI for Linux diskless nodes]&lt;/a&gt; to define the diskless image object.&lt;/p&gt;
&lt;h4 id="generate-rootimage-for-disklessstatelite"&gt;&lt;strong&gt;Generate rootimage for diskless/statelite&lt;/strong&gt;&lt;/h4&gt;
&lt;p&gt;Follow &lt;a class="" href="../XCAT_pLinux_Clusters/#stateless-node-deployment"&gt;XCAT_pLinux_Clusters#Stateless_node_deployment&lt;/a&gt; to generate the diskless rootimg . Follow &lt;a class="" href="/p/xcat/wiki/XCAT_Linux_Statelite"&gt;XCAT_Linux_Statelite&lt;/a&gt; to generate the statelite image.&lt;/p&gt;
&lt;h4 id="the-remaining-steps"&gt;The Remaining Steps&lt;/h4&gt;
&lt;p&gt;Follow the documents &lt;span&gt;[&lt;a class="" href="http://sourceforge.net/apps/mediawiki/xcat/index.php?title=XCAT_pLinux_Clusters"&gt;xCAT_pLinux_Clusters&lt;/a&gt;]&lt;/span&gt; and &lt;span&gt;[&lt;a class="" href="http://sourceforge.net/apps/mediawiki/xcat/index.php?title=XCAT_Linux_Statelite"&gt;xCAT_Linux_Statelite&lt;/a&gt;]&lt;/span&gt; to set up the diskless/statelite image and to boot the specified noderange with it.&lt;/p&gt;
&lt;h4 id="additional-configuration"&gt;Additional configuration&lt;/h4&gt;
&lt;p&gt;After the noderange has booted up with the diskless/statelite image, add a dynamic IP range to the networks table for the network used for compute node installation. This dynamic IP range should be large enough to accommodate all of the nodes on the network. For example:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt; &lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="n"&gt;netname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;mask&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;mgtifname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;gateway&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;dhcpserver&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;tftpserver&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;nameservers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;ntpservers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;logservers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;dynamicrange&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;nodehostname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;ddnsdomain&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;vlanid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;comments&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;disable&lt;/span&gt;
 &lt;span class="s"&gt;"hfinet"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"20.0.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"255.0.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"hf0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"20.7.4.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"20.7.4.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"20.7.4.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"20.7.4.1"&lt;/span&gt;&lt;span class="p"&gt;,,,&lt;/span&gt;&lt;span class="s"&gt;"20.7.4.100-20.7.4.200"&lt;/span&gt;&lt;span class="p"&gt;,,,,,&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
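To sanity-check that the dynamic range is large enough, you can count the addresses it covers. A sketch for the example range above; the node count is a hypothetical cluster size, and the arithmetic assumes the range spans only the last octet:

```shell
# Example dynamic range from the networks table above
range_start=20.7.4.100
range_end=20.7.4.200
node_count=96                  # hypothetical number of nodes on this network
start=${range_start##*.}       # last octet of the range start: 100
end=${range_end##*.}           # last octet of the range end: 200
addresses=$((end - start + 1)) # inclusive count of addresses in the range
echo "$addresses addresses in dynamic range"
[ "$addresses" -ge "$node_count" ] && echo "range is large enough"
```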
&lt;h2 id="references"&gt;References&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="" href="http://xcat.sf.net"&gt;xCAT web site&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="http://xcat.sf.net/man1/xcat.1.html"&gt;xCAT man pages&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="http://xcat.sf.net/man5/xcatdb.5.html"&gt;xCAT DB table descriptions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="/p/xcat/wiki/Monitoring_an_xCAT_Cluster"&gt;Monitoring Your Cluster with xCAT&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="alink" href="/p/xcat/wiki/XCAT_AIX_Cluster_Overview_and_Mgmt_Node"&gt;[XCAT_AIX_Cluster_Overview_and_Mgmt_Node]&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="http://xcat.wiki.sourceforge.net"&gt;xCAT wiki&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="http://xcat.org/mailman/listinfo/xcat-user" rel="nofollow"&gt;xCAT mailing list&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="https://sourceforge.net/tracker/?group_id=208749&amp;amp;atid=1006945"&gt;xCAT bugs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="https://sourceforge.net/tracker/?group_id=208749&amp;amp;atid=1006948"&gt;xCAT feature requests&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="appendix-a-migrate-your-management-node-to-a-new-service-pack-of-linux"&gt;&lt;strong&gt;Appendix A: Migrate your Management Node to a new Service Pack of Linux&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;If you need to migrate your xCAT Management Node to a new SP level of Linux, for example rhels6.1 to rhels6.2, you should take the following precautionary measures:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Back up the database and save critical files using xcatsnap, in case they are needed later for reference or restore. Move the xcatsnap log and *gz file off the Management Node.&lt;/li&gt;
&lt;li&gt;Back up images and custom data in /install and move them off the Management Node.&lt;/li&gt;
&lt;li&gt;service xcatd stop&lt;/li&gt;
&lt;li&gt;service xcatd stop on any service nodes&lt;/li&gt;
&lt;li&gt;Migrate to the new SP level of Linux.&lt;/li&gt;
&lt;li&gt;service xcatd start&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you have any Service Nodes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Migrate to the new SP level of Linux and reinstall the service node with xCAT following normal procedures.&lt;/li&gt;
&lt;li&gt;service xcatd start&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The document:&lt;/p&gt;
&lt;p&gt;&lt;a class="" href="../Setting_Up_a_Linux_xCAT_Mgmt_Node/#appendix-d-upgrade-your-management-node-to-a-new-service-pack-of-linux"&gt;Setting_Up_a_Linux_xCAT_Mgmt_Node#Appendix_D:_Upgrade_your_Management_Node_to_a_new_Service_Pack_of_Linux&lt;/a&gt;&lt;br /&gt;
gives a sample procedure on how to update the management node or service nodes to a new service pack of Linux.&lt;/p&gt;
&lt;h2 id="appendix-b-install-your-management-node-to-a-new-release-of-linux"&gt;&lt;strong&gt;Appendix B: Install your Management Node to a new Release of Linux&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;First, back up critical xCAT data to another server so it will not be lost during the OS install.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Back up the xCAT database using xcatsnap, along with important config files and other system config files, for reference and for restore later. Prune some of the larger tables first:&lt;/li&gt;
&lt;li&gt;
&lt;ul&gt;
&lt;li&gt;tabprune eventlog -a&lt;/li&gt;
&lt;li&gt;tabprune auditlog -a&lt;/li&gt;
&lt;li&gt;tabprune isnm_perf -a (Power 775 only)&lt;/li&gt;
&lt;li&gt;tabprune isnm_perf_sum -a (Power 775 only)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Run xcatsnap (it will capture the database and config files) and copy the output to another host. By default it creates two files in /tmp/xcatsnap, for example:&lt;ul&gt;
&lt;li&gt;xcatsnap.hpcrhmn.10110922.log&lt;/li&gt;
&lt;li&gt;xcatsnap.hpcrhmn.10110922.tar.gz&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Back up from the /install directory all images and custom setup data that you want to save, and move them to another server. xcatsnap will not back up images.&lt;/li&gt;
&lt;/ul&gt;
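The /install backup step above can be sketched as a simple tar archive that you then copy off the Management Node. In this self-contained sketch a temp directory stands in for the real /install, and the kernel RPM is a dummy file:

```shell
# Temp dir stands in for the real filesystem root
workdir=$(mktemp -d)
mkdir -p "$workdir/install/kernels"
echo dummy > "$workdir/install/kernels/kernel-2.6.32.10-0.5.x86_64.rpm"
# Archive images and custom data; xcatsnap does NOT back these up
tar -czf "$workdir/install-backup.tar.gz" -C "$workdir" install
# The archive would then be copied to another server, e.g. with scp
ls -l "$workdir/install-backup.tar.gz"
```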
&lt;p&gt;After the OS install:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Proceed to set up the xCAT MN as a new xCAT MN using the instructions in this document.&lt;/li&gt;
&lt;/ul&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Lissa Valletta</dc:creator><pubDate>Tue, 12 Aug 2014 12:21:57 -0000</pubDate><guid>https://sourceforge.nete42e2ce98af7afb8e50eb01d7ade850eceef6224</guid></item></channel></rss>