<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Recent changes to Using_xCAT_in_SoftLayer</title><link>https://sourceforge.net/p/xcat/wiki/Using_xCAT_in_SoftLayer/</link><description>Recent changes to Using_xCAT_in_SoftLayer</description><language>en</language><lastBuildDate>Mon, 19 Jan 2015 05:36:18 -0000</lastBuildDate><atom:link href="https://sourceforge.net/p/xcat/wiki/Using_xCAT_in_SoftLayer/feed" rel="self" type="application/rss+xml"/><item><title>Using_xCAT_in_SoftLayer modified by zhao er tao</title><link>https://sourceforge.net/p/xcat/wiki/Using_xCAT_in_SoftLayer/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v115
+++ v116
@@ -281,7 +281,7 @@
     mkdef -t network publicnet gateway=50.97.240.33 mask=255.255.255.240 mgtifname=eth1 net=50.97.240.32
 ~~~~

-    * Note: in the networks table, mgtifname means the NIC on the xcat mgmt node that directly connects to that vlan.  If the xcat mgmt node is not directly connected to this vlan (it reaches it via a router), then set mgtifname to "!remote!".
+    * Note: in the networks table, mgtifname means the NIC on the xcat mgmt node that directly connects to that vlan.  If the xcat mgmt node is not directly connected to this vlan (it reaches it via a router), then set mgtifname to "!remote!&amp;lt;nicname&amp;gt;" and add "!remote!" for "dhcpinterfaces" in site table.

   * For each node, set the IP address and hostname suffix and the network for the public network.  In this example, we assume the public NICs will be configured to be bonded together, which works even if one of the NICs (e.g. eth3) is down: 

&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">zhao er tao</dc:creator><pubDate>Mon, 19 Jan 2015 05:36:18 -0000</pubDate><guid>https://sourceforge.netecfff923bcff1d78b16227703d4239fd30ef0e4c</guid></item><item><title>Using_xCAT_in_SoftLayer modified by Guang Cheng Li</title><link>https://sourceforge.net/p/xcat/wiki/Using_xCAT_in_SoftLayer/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v114
+++ v115
@@ -781,3 +781,5 @@
      Note: because of the slowness of the switches to respond to NICs coming up, the installation process will probably hang at one point. On the console, autoyast will ask if you want to retry. Wait about 15 seconds and then retry and the process should continue.

 # Appendix C - Setup xCAT High Available Management Node in SoftLayer
+
+See [Setup xCAT High Available Management Node in SoftLayer](Setup_xCAT_High_Available_Management_Node_in_SoftLayer) for details.
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Guang Cheng Li</dc:creator><pubDate>Mon, 20 Oct 2014 07:13:06 -0000</pubDate><guid>https://sourceforge.net0fd626094ae3e5618dc3b74fded86c8ff8f84a81</guid></item><item><title>Using_xCAT_in_SoftLayer modified by Guang Cheng Li</title><link>https://sourceforge.net/p/xcat/wiki/Using_xCAT_in_SoftLayer/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v113
+++ v114
@@ -779,3 +779,5 @@
 ~~~~

      Note: because of the slowness of the switches to respond to NICs coming up, the installation process will probably hang at one point. On the console, autoyast will ask if you want to retry. Wait about 15 seconds and then retry and the process should continue. 
+
+# Appendix C - Setup xCAT High Available Management Node in SoftLayer
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Guang Cheng Li</dc:creator><pubDate>Mon, 20 Oct 2014 07:11:21 -0000</pubDate><guid>https://sourceforge.net07eee02ade262bc194a151932107400368796b53</guid></item><item><title>Using_xCAT_in_SoftLayer modified by Bruce</title><link>https://sourceforge.net/p/xcat/wiki/Using_xCAT_in_SoftLayer/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v112
+++ v113
@@ -333,7 +333,7 @@

 You can set up routes (both default gateway and more specific routes) to be configured on the nodes using the routes table, the routenames attribute and the setroute postscript.  These 3 work together like this:

-  1 you define routes in the routes table for any routes you will need on any of nodes and give them unique names
+  1. you define routes in the routes table for any routes you will need on any of nodes and give them unique names
   + for each node set its routenames attribute to the routes you want that node to have
   + add the setroute postscript to the postbootscripts attribute for all nodes

&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Bruce</dc:creator><pubDate>Mon, 15 Sep 2014 12:57:02 -0000</pubDate><guid>https://sourceforge.netdbe63683c1d706ea3a575c32bb371c889e43a096</guid></item><item><title>Using_xCAT_in_SoftLayer modified by Bruce</title><link>https://sourceforge.net/p/xcat/wiki/Using_xCAT_in_SoftLayer/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v111
+++ v112
@@ -333,9 +333,9 @@

 You can set up routes (both default gateway and more specific routes) to be configured on the nodes using the routes table, the routenames attribute and the setroute postscript.  These 3 work together like this:

-  # you define routes in the routes table for any routes you will need on any of nodes and give them unique names
-  # for each node set its routenames attribute to the routes you want that node to have
-  # add the setroute postscript to the postbootscripts attribute for all nodes
+  1 you define routes in the routes table for any routes you will need on any of nodes and give them unique names
+  + for each node set its routenames attribute to the routes you want that node to have
+  + add the setroute postscript to the postbootscripts attribute for all nodes

 If you want to set the default gateway of the nodes to go out to the internet, create a route entry that points to the gateway IP address that SoftLayer defines for the public vlan for this node:

@@ -343,7 +343,7 @@
     mkdef -t route def198_11_206 gateway=198.11.206.1 ifname=bond1 mask=0.0.0.0 net=0.0.0.0
 ~~~~

-    * Note: in this case, ifname is the NIC that the node will use to reach this gateway.
+  * Note: in this case, ifname is the NIC that the node will use to reach this gateway.

   * Add this route to the node definitions and add setroute to the postbootscripts list: 

&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Bruce</dc:creator><pubDate>Mon, 15 Sep 2014 12:55:32 -0000</pubDate><guid>https://sourceforge.netbe5456903011d61c2a36ea876dc4310c345e95fb</guid></item><item><title>Using_xCAT_in_SoftLayer modified by Bruce</title><link>https://sourceforge.net/p/xcat/wiki/Using_xCAT_in_SoftLayer/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v110
+++ v111
@@ -38,7 +38,7 @@

 A centos 6.x management node was used when creating this document. If you choose to run a different distro on your mgmt node, some of the steps in this document will be slightly different.

-To configure the management node, follow the 1st half of [XCAT iDataPlex Cluster Quick Start](XCAT_iDataPlex_Cluster_Quick_Start) to install and configure the xCAT management node. Stop before you get to the section "Node Definition and Discovery". SoftLayer uses Supermicro servers, not iDataPlex, but they are both x86_64 IPMI-controlled servers, so they are very similar from an xCAT standpoint. You don't need to follow all of the steps in [XCAT_iDataPlex_Cluster_Quick_Start], so here is a summary of the steps from that document that you should perform: 
+To configure the management node, follow the 1st half of [XCAT iDataPlex Cluster Quick Start](XCAT_iDataPlex_Cluster_Quick_Start) to install and configure the xCAT management node.  Install xCAT 2.8.5 or later.  Stop before you get to the section "Node Definition and Discovery". SoftLayer uses Supermicro servers, not iDataPlex, but they are both x86_64 IPMI-controlled servers, so they are very similar from an xCAT standpoint. You don't need to follow all of the steps in [XCAT_iDataPlex_Cluster_Quick_Start], so here is a summary of the steps from that document that you should perform: 

   * (Your example configuration will be different.) 
  * (You can skip most of the section "Prepare the Management Node for xCAT Installation". The OS will already be installed. You don't need to disable SELinux or the firewall, xCAT will do that for you. The networks and nics will already be set up by SoftLayer, and will not be using DHCP. The time zone will already be set. You don't need to configure switches, because we will use another method for discovery. You can leave the hostname set the way it is (to the public NIC). The only effect this will have is that the "master", "domain", and "nameservers" site attributes will need to be set to the private NIC after the xCAT software is installed.) 
@@ -177,6 +177,14 @@
     chdef -t site dhcpsetup=n
 ~~~~

+  * If you are deploying RHEL onto your nodes set managedaddressmode to static:
+
+~~~~
+    chdef -t site managedaddressmode=static
+~~~~
+
+    * Note:  currently this site setting doesn't work correctly with the SLES support for configuring the private NIC into bond0.
+
 # Configuring Console Access for the Nodes

 If the provisioning of nodes doesn't work perfectly the 1st time, access to the console can be critical in figuring out and correcting the problem. There are 3 options for getting access to each node's console: 
@@ -273,14 +281,16 @@
     mkdef -t network publicnet gateway=50.97.240.33 mask=255.255.255.240 mgtifname=eth1 net=50.97.240.32
 ~~~~

-  * For each node, set the IP address and hostname suffix and the network for eth1: 
-    
-~~~~
-    chdef &amp;lt;node&amp;gt; nicips.eth1=50.2.3.4 nichostnamesuffixes.eth1=-pub nicnetworks.eth1=50_97_240_32-255_255_255_240
-~~~~
-
-     * Note: In this example, the "-pub" will be added to the end of the node name to form the hostname of the eth1 IP address. If you also want a completely different hostname (that doesn't start with the node name), set nicaliases.eth1. 
-     * Note: If you have a lot of nodes and your IP addresses follow a regular pattern, you can set them all at once, using xCAT's support for regular expressions. See [Listing and Modifying the Database](Listing_and_Modifying_the_Database/#using-regular-expressions-in-the-xcat-tables) for details. 
+    * Note: in the networks table, mgtifname means the NIC on the xcat mgmt node that directly connects to that vlan.  If the xcat mgmt node is not directly connected to this vlan (it reaches it via a router), then set mgtifname to "!remote!".
+
+  * For each node, set the IP address and hostname suffix and the network for the public network.  In this example, we assume the public NICs will be configured to be bonded together, which works even if one of the NICs (e.g. eth3) is down: 
+    
+~~~~
+    chdef &amp;lt;node&amp;gt; nicips.eth1=50.2.3.4 nichostnamesuffixes.bond1=-pub
+~~~~
+
+    * Note: In this example, the "-pub" will be added to the end of the node name to form the hostname of the eth1 IP address. If you also want a completely different hostname (that doesn't start with the node name), set nicaliases.eth1. 
+    * Note: If you have a lot of nodes and your IP addresses follow a regular pattern, you can set them all at once, using xCAT's support for regular expressions. See [Listing and Modifying the Database](Listing_and_Modifying_the_Database/#using-regular-expressions-in-the-xcat-tables) for details. 

   * Add these new NICs to name resolution: 

@@ -289,7 +299,7 @@
     makedns &amp;lt;noderange&amp;gt;
 ~~~~

-  * If you have nodes that are on a private vlan that is different from the xcat mgmt node's private vlan, then you need to add the following line to the global "options" statement in /etc/named.conf so that the nodes on the other private vlans will be allowed to query the dns on the xcat mgmt node: 
+  * If you are running a version of xcat older than 2.8.5 and you have nodes that are on a private vlan that is different from the xcat mgmt node's private vlan, then you need to add the following line to the global "options" statement in /etc/named.conf so that the nodes on the other private vlans will be allowed to query the dns on the xcat mgmt node: 

 ~~~~
     allow-recursion { any; };
@@ -303,7 +313,7 @@
     chdef &amp;lt;noderange&amp;gt; -p postscripts='configbond bond1 eth1@eth3'
 ~~~~

-     Note: the -p flag adds the postscript to the end of the existing list. 
+    * Note: the -p flag adds the postscript to the end of the existing list. 

   * Test the setup on 1 node using updatenode: 

@@ -311,26 +321,50 @@
     updatenode &amp;lt;node&amp;gt; -P 'configbond bond1 eth1@eth3'
 ~~~~

+  * Set installnic to "mac" to select the install NIC by mac address instead of NIC name (e.g. eth0), because the NIC name can vary depending on what OS or initrd is booted:
+    
+~~~~
+    chdef &amp;lt;node&amp;gt; installnic=mac
+~~~~
+
+    * Note: there has been at least one case in which using installnic=mac (which results in the ksdevice kernel parameter (in RHEL) being set to the mac) doesn't work.  We are still investigating it.
+
 ## Set Up Routes

-If you are going to set the node's default gateway to the public NIC, you probably want a specific route for the private VLANs: 
+You can set up routes (both default gateway and more specific routes) to be configured on the nodes using the routes table, the routenames attribute and the setroute postscript.  These 3 work together like this:
+
+  # you define routes in the routes table for any routes you will need on any of nodes and give them unique names
+  # for each node set its routenames attribute to the routes you want that node to have
+  # add the setroute postscript to the postbootscripts attribute for all nodes
+
+If you want to set the default gateway of the nodes to go out to the internet, create a route entry that points to the gateway IP address that SoftLayer defines for the public vlan for this node:
+    
+~~~~
+    mkdef -t route def198_11_206 gateway=198.11.206.1 ifname=bond1 mask=0.0.0.0 net=0.0.0.0
+~~~~
+
+    * Note: in this case, ifname is the NIC that the node will use to reach this gateway.
+
+  * Add this route to the node definitions and add setroute to the postbootscripts list: 
+    
+~~~~
+    chdef &amp;lt;noderange&amp;gt; -p routenames=def198_11_206
+    chdef &amp;lt;node&amp;gt; -p postbootscripts='setroute'
+~~~~
+
+If you are setting the node's default gateway to the public NIC, you will want a specific route for the private VLANs if you have servers in more than 1 private vlan: 

   * Create a route in the route table: 

 ~~~~
-    mkdef -t route privateroute gateway=10.54.51.1 ifname=bond0 mask=255.0.0.0 net=10.0.0.0
-~~~~
-
-  * Add this route to the node definitions: 
-    
-~~~~
-    chdef &amp;lt;noderange&amp;gt; routenames=privateroute
-~~~~
-
-  * For each node, add the postbootscripts to set the private route and the default gateway: 
-    
-~~~~
-    chdef &amp;lt;node&amp;gt; -p postbootscripts='setroute,setdefaultroute 50.97.240.33 bond1'
+    mkdef -t route priv10_54_51 gateway=10.54.51.1 ifname=bond0 mask=255.0.0.0 net=10.0.0.0
+~~~~
+
+  * Add this route to the node definitions and add setroute to the postbootscripts list: 
+    
+~~~~
+    chdef &amp;lt;noderange&amp;gt; -p routenames=priv10_54_51
+    chdef &amp;lt;node&amp;gt; -p postbootscripts='setroute'
 ~~~~

   * If some of the nodes you are installing are on a different private vlan than the xcat mgmt node, you need to set those nodes' xcatmaster attribute to the ip address of the mgmt node. For example: 
@@ -342,7 +376,7 @@
   * Test one node using updatenode: 

 ~~~~
-    updatenode &amp;lt;node&amp;gt; -P 'setroute,setdefaultroute 50.97.240.33 bond1'
+    updatenode &amp;lt;node&amp;gt; -P 'setroute'
 ~~~~

 # Configure the Node to Install a New OS (Scripted Install)
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Bruce</dc:creator><pubDate>Mon, 15 Sep 2014 12:51:48 -0000</pubDate><guid>https://sourceforge.net0f6a25c3b7bf41fa6c230efe57c3dbf13820c88d</guid></item><item><title>Using_xCAT_in_SoftLayer modified by Lissa Valletta</title><link>https://sourceforge.net/p/xcat/wiki/Using_xCAT_in_SoftLayer/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v109
+++ v110
@@ -673,7 +673,7 @@

   * Query the speed that the bmc is currently configured to, and query the COM ports that exist on this node: 

-~~~~~~
+~~~~
     ipmitool sol info 1      # note the speed that the bmc is currently using
     dmesg|grep ttyS          # to see com ports avail (the deprecated msg is ok)
 ~~~~
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Lissa Valletta</dc:creator><pubDate>Wed, 10 Sep 2014 15:54:58 -0000</pubDate><guid>https://sourceforge.netf40e527e88ff194807b97c9345c289f059f63a54</guid></item><item><title>Using_xCAT_in_SoftLayer modified by Bruce</title><link>https://sourceforge.net/p/xcat/wiki/Using_xCAT_in_SoftLayer/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v108
+++ v109
@@ -167,8 +167,7 @@
   * Have xcat configure and start dns on the MN

 ~~~~
-    makedns -n      # create named.conf
-    makedns         # add all of the nodes
+    makedns -n      # create named.conf and add all of the nodes
 ~~~~

   * Since we will be using static IP addresses for the nodes, there is no need for DHCP. We recommend you stop dhcpd so it doesn't confuse any debugging situations: 
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Bruce</dc:creator><pubDate>Wed, 20 Aug 2014 19:07:47 -0000</pubDate><guid>https://sourceforge.netdb3a4a343093939c4e046b2d71c4296d5fc4f1de</guid></item><item><title>Using_xCAT_in_SoftLayer modified by Lissa Valletta</title><link>https://sourceforge.net/p/xcat/wiki/Using_xCAT_in_SoftLayer/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v107
+++ v108
@@ -1,3 +1,7 @@
+![](https://sourceforge.net/p/xcat/wiki/XCAT_Documentation/attachment/Official-xcat-doc.png) 
+
+
+
 [TOC]

 # Overview
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Lissa Valletta</dc:creator><pubDate>Thu, 31 Jul 2014 13:59:18 -0000</pubDate><guid>https://sourceforge.netdb54f7d1b15635c06521a9d990f4cdfdd23ba77a</guid></item><item><title>Using_xCAT_in_SoftLayer modified by Lissa Valletta</title><link>https://sourceforge.net/p/xcat/wiki/Using_xCAT_in_SoftLayer/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v106
+++ v107
@@ -62,9 +62,9 @@
 * Download the RPM from: https://sourceforge.net/projects/xcat/files/yum/devel/core-snap/ 
 * Install it:

-~~~~~~
+~~~~
     yum install xCAT-SoftLayer-*.rpm
-~~~~~~
+~~~~

 The xCAT-SoftLayer rpm requires the perl-ExtUtils-MakeMaker, perl-CPAN, perl-Test-Harness, and perl-SOAP-Lite rpms, so yum will also install those (if they aren't already installed). 

@@ -73,96 +73,106 @@
 * Use the [SoftLayer portal](https://manage.softlayer.com/) to get your API key using these [directions](http://knowledgelayer.softlayer.com/procedure/retrieve-your-api-key). 
 * Download the SoftLayer perl API, using git, to any directory (you may have to install git): 

-~~~~~~
+~~~~
     cd /usr/local/lib
     git clone https://github.com/softlayer/softlayer-api-perl-client
-~~~~~~
+~~~~

 * Create a file called /root/.slconfig and put in it your SoftLayer userid, the API key, and the location of the SL perl API: 

-~~~~~~
+~~~~
     # Config file used by the xcat cmd getslnodes
     userid = SL12345
     apikey = 1a2b3c4d5e6f1a2b3c4d5e6f1a2b3c4d5e6f1a2b3c4d5e6f
     apidir = /usr/local/lib/softlayer-api-perl-client
-~~~~~~
+~~~~

      Note: this config file will be used by the xCAT utility getslnodes (described in the next section). 

 * The softlayer api perl client also needs the following perl modules: XML::Hash::LX, CPAN::Meta::Requirements, Class::Inspector, IO::SessionData, lib::abs, Test::Simple. For now, you need to download these to your mgmt node from [CPAN](http://www.cpan.org/) and build them using the [instructions on CPAN](http://www.cpan.org/modules/INSTALL.html). In my experience, all I had to do was: 

-~~~~~~
+~~~~
     cpan App::cpanminus    # hit enter (taking the default "yes") a bunch of times
     cpanm XML::Hash::LX
-~~~~~~
+~~~~

 ## Get the Bare Metal Nodes Information and Define It to xCAT

 To query all of the SL bare metal servers available to this account and display the xCAT node attributes that should be set: 

+~~~~
     getslnodes
-    
-
-To query a specific server or subset of servers: 
-
+~~~~
+    
+
+To query a specific server or subset of servers:
+
+ 
+~~~~
     getslnodes &amp;lt;hostname&amp;gt;
+~~~~

 where &amp;lt;hostname&amp;gt; is the 1st part of one or more hostnames of the SL bare metal servers. 

 To create the xCAT node objects in the database, either copy/paste/run the commands output by the command above, or run: 
-    
+  
+~~~~  
     getslnodes | mkdef -z
+~~~~

 If your xCAT management node is also a bare metal server, this will create a node definition in the xCAT db for it too, which is probably not what you want.  (xCAT does support having the mgmt node in the db and using xCAT to maintain software and config files on it, but that is probably not your main goal here, and you could accidentally make changes to your mgmt node that you might not intend.)  If you want to remove your mgmt node from the db:

+~~~~
     rmdef &amp;lt;mgmt-node&amp;gt;
+~~~~

 Now add the nodes to the /etc/hosts file: 
-    
+ 
+~~~~   
     makehosts
-    
+~~~~    

 Follow the steps in [Cluster_Name_Resolution] to set up name resolution for the nodes, but the quick steps are: 

   * Make sure the public and private networks of the xCAT MN are defined: 

-~~~~~~
+~~~~
     lsdef -t network -l
-~~~~~~
+~~~~

   * Set the site.nameservers attribute to be the private IP address of the MN (should be the same as site.master), and set the site.forwarders attribute to be the SL name servers, and set site.domain attribute to the domain the bare metal nodes are using: 

-~~~~~~
+~~~~
     chdef -t site nameservers=&amp;lt;private-mn-ip&amp;gt; forwarders=&amp;lt;SL-name-servers&amp;gt; domain=&amp;lt;bm-domain&amp;gt;
-~~~~~~
+~~~~

   * Edit /etc/resolv.conf to point to the MN private IP as the name server and use the domain above: 

-~~~~~~
+~~~~
     search &amp;lt;domain&amp;gt;
     nameserver &amp;lt;private-mn-ip&amp;gt;
-~~~~~~
+~~~~

   * Turn off node booting flow control (see the [site](http://xcat.sourceforge.net/man5/site.5.html) table): 

-~~~~~~
+~~~~
     chdef -t site  useflowcontrol=no
-~~~~~~
+~~~~

   * Have xcat configure and start dns on the MN 

-~~~~~~
+~~~~
     makedns -n      # create named.conf
     makedns         # add all of the nodes
-~~~~~~
+~~~~

   * Since we will be using static IP addresses for the nodes, there is no need for DHCP. We recommend you stop dhcpd so it doesn't confuse any debugging situations: 

-~~~~~~
+~~~~
     service dhcpd stop
     chdef -t site dhcpsetup=n
-~~~~~~
+~~~~

 # Configuring Console Access for the Nodes

@@ -182,42 +192,42 @@

   * Verify that serialport and serialspeed are **not** set for the nodes you want to install. (The installer, e.g. autoyast, typically can only display to one console, the serial console or video console, but not both. If these serial console attributes are set, the installer will display its progress to the serial console, which we are not setting up in this option.) 

-~~~~~~
+~~~~
     lsdef &amp;lt;node&amp;gt; -i serialport,serialspeed
-~~~~~~
+~~~~

   * Set up the epel repo (for CentOS or RHEL), and install VNC, fluxbox, and firefox: 

-~~~~~~
+~~~~
     yum install tigervnc-server firefox java icedtea-web fluxbox metacity xterm xsetroot
-~~~~~~
+~~~~

   * Start the VNC server (use whatever resolution you want): 

-~~~~~~
+~~~~
     vncserver -geometry 1280x960 -AlwaysShared &amp;amp;
-~~~~~~
+~~~~

 **On your client machine (desktop or laptop) connect to the VNC server:**

-~~~~~~
+~~~~
     vncviewer &amp;lt;xcat-mn-public-ip&amp;gt;:1 &amp;amp;
-~~~~~~
+~~~~

 **From inside the VNC session:**

   * In the xterm in VNC, if you don't like twm, start the window manager you want and add it to .vnc/xstartup for the future. I prefer metacity: 

-~~~~~~
+~~~~
     metacity &amp;amp;
     sed -i s/twm/metacity/ /root/.vnc/xstartup
-~~~~~~
+~~~~

   * List the BMC information for the node you want to open the console to: 

-~~~~~~
+~~~~
     lsdef &amp;lt;node&amp;gt; -i bmc,bmcusername,bmcpassword
-~~~~~~
+~~~~

   * Start firefox inside vnc 
   * In firefox, enter the IP address of the BMC, and then login with the BMC username and password. 
@@ -250,53 +260,53 @@

   * Make sure the networks for the public vlans are defined in the xcat database: 

-~~~~~~
+~~~~
     lsdef -t network -l
-~~~~~~
+~~~~

   * If they aren't, define them: 

-~~~~~~
+~~~~
     mkdef -t network publicnet gateway=50.97.240.33 mask=255.255.255.240 mgtifname=eth1 net=50.97.240.32
-~~~~~~
+~~~~

   * For each node, set the IP address and hostname suffix and the network for eth1: 

-~~~~~~
+~~~~
     chdef &amp;lt;node&amp;gt; nicips.eth1=50.2.3.4 nichostnamesuffixes.eth1=-pub nicnetworks.eth1=50_97_240_32-255_255_255_240
-~~~~~~
+~~~~

      * Note: In this example, the "-pub" will be added to the end of the node name to form the hostname of the eth1 IP address. If you also want a completely different hostname (that doesn't start with the node name), set nicaliases.eth1. 
      * Note: If you have a lot of nodes and your IP addresses follow a regular pattern, you can set them all at once, using xCAT's support for regular expressions. See [Listing and Modifying the Database](Listing_and_Modifying_the_Database/#using-regular-expressions-in-the-xcat-tables) for details. 

   * Add these new NICs to name resolution: 

-~~~~~~
+~~~~
     makehosts &amp;lt;noderange&amp;gt;
     makedns &amp;lt;noderange&amp;gt;
-~~~~~~
+~~~~

   * If you have nodes that are on a private vlan that is different from the xcat mgmt node's private vlan, then you need to add the following line to the global "options" statement in /etc/named.conf so that the nodes on the other private vlans will be allowed to query the dns on the xcat mgmt node: 

-~~~~~~
+~~~~
     allow-recursion { any; };
-~~~~~~
+~~~~

   * After editing /etc/named.conf, then run "service named restart". If you run "makedns -n" in the future, you will need to make this change to /etc/named.conf again (because it will be overwritten). This will be fixed in xcat in bug [#4144]. 

   * Normally, you would use the confignics postscript to configure eth1 at the end of the node provisioning. But since SoftLayer bare metal servers should have their NICs part of a bond, use the configbond postscript instead by adding it to the list of postscripts that should be run for these nodes: 

-~~~~~~
+~~~~
     chdef &amp;lt;noderange&amp;gt; -p postscripts='configbond bond1 eth1@eth3'
-~~~~~~
+~~~~

      Note: the -p flag adds the postscript to the end of the existing list. 

   * Test the setup on 1 node using updatenode: 

-~~~~~~
+~~~~
     updatenode &amp;lt;node&amp;gt; -P 'configbond bond1 eth1@eth3'
-~~~~~~
+~~~~

 ## Set Up Routes

@@ -304,33 +314,33 @@

   * Create a route in the route table: 

-~~~~~~
+~~~~
     mkdef -t route privateroute gateway=10.54.51.1 ifname=bond0 mask=255.0.0.0 net=10.0.0.0
-~~~~~~
+~~~~

   * Add this route to the node definitions: 

-~~~~~~
+~~~~
     chdef &amp;lt;noderange&amp;gt; routenames=privateroute
-~~~~~~
+~~~~

   * For each node, add the postbootscripts to set the private route and the default gateway: 

-~~~~~~
+~~~~
     chdef &amp;lt;node&amp;gt; -p postbootscripts='setroute,setdefaultroute 50.97.240.33 bond1'
-~~~~~~
+~~~~

   * If some of the nodes you are installing are on a different private vlan than the xcat mgmt node, you need to set those nodes' xcatmaster attribute to the ip address of the mgmt node. For example: 

-~~~~~~
+~~~~
     chdef &amp;lt;node&amp;gt; xcatmaster=10.54.51.2
-~~~~~~
+~~~~

   * Test one node using updatenode: 

-~~~~~~
+~~~~
     updatenode &amp;lt;node&amp;gt; -P 'setroute,setdefaultroute 50.97.240.33 bond1'
-~~~~~~
+~~~~

 # Configure the Node to Install a New OS (Scripted Install)

@@ -343,24 +353,24 @@
   * Create an OS image definition on the xCAT MN that will be used to provision the node, following Option 1 in [XCAT iDataPlex Cluster Quick Start#Installing Stateful Nodes](XCAT_iDataPlex_Cluster_Quick_Start/#installing-stateful-nodes). 
   * Get the [driver disk for the aacraid driver](http://www.adaptec.com/en-us/speed/raid/aac/linux/aacraid_linux_driverdisks_v1_2_1-40300_tgz.htm) and add it to the osimage definition. This is necessary because many SoftLayer physical servers use that device, but that driver is not in the default initrd. For example: 

-~~~~~~
+~~~~
     chdef -t osimage &amp;lt;osimagename&amp;gt; driverupdatesrc=dud:/install/drivers/sles11.3/x86_64/aacraid-driverdisk-1.2.1-30300-sled11-sp2+sles11-sp2.img
-~~~~~~
+~~~~

      * Note: the aacraid rpm has an unusual format, so you can't use that with xCAT. 
      * Note: details about adding drivers can be found in [Using_Linux_Driver_Update_Disk]. 

   * Modify your osimage definition to use the provided autoyast template that uses a static IP for the install NIC, instead of the typical DHCP IP address that xCAT normally uses: 

-~~~~~~
+~~~~
     chdef -t osimage &amp;lt;osimagename&amp;gt; template=/opt/xcat/share/xcat/install/sles/compute.sles11.softlayer.tmpl
-~~~~~~
+~~~~

      Note: so far only a template for SLES has been provided. 

   * If desired, you can specify a specific partition layout. For example, create a file called /install/custom/my-partitions, containing: 

-~~~~~~
+~~~~
         &amp;lt;drive&amp;gt;
           &amp;lt;device&amp;gt;XCATPARTITIONHOOK&amp;lt;/device&amp;gt;
           &amp;lt;initialize config:type="boolean"&amp;gt;true&amp;lt;/initialize&amp;gt;
@@ -388,60 +398,60 @@
             &amp;lt;/partition&amp;gt;
           &amp;lt;/partitions&amp;gt;
         &amp;lt;/drive&amp;gt;
-~~~~~~
+~~~~

      Then use this file in your osimage: 

-~~~~~~
+~~~~
     chdef -t osimage &amp;lt;osimagename&amp;gt; partitionfile=/install/custom/my-partitions
-~~~~~~
+~~~~

 ## Deploy Nodes With That Osimage

   * **If the nodes to be deployed are not already up**, boot the nodes to the existing OS that is already on its hard disk: 

-~~~~~~
+~~~~
     rsetboot &amp;lt;noderange&amp;gt; hd
     rpower &amp;lt;noderange&amp;gt; boot
-~~~~~~
+~~~~

   * Copy the xCAT management node's ssh public key to the nodes: 

-~~~~~~
+~~~~
     lsdef &amp;lt;noderange&amp;gt; -ci usercomment   # note the pw of each node
     xdsh &amp;lt;node&amp;gt; -K       # enter node pw when prompted
-~~~~~~
+~~~~

      Note: you can skip this step if when you originally requested the servers from the SoftLayer portal, you gave it the xCAT management node's public key to put on the servers. 

   * Have xCAT generate the initrd, kernel, and kernel parameters for deploying the nodes: 

-~~~~~~
+~~~~
     nodeset &amp;lt;noderange&amp;gt; osimage=sles11.2-x86_64-install-compute
-~~~~~~
+~~~~

   * Use the xCAT script called pushinitrd to automatically push the initrd, kernel, kernel parameters, and static IP information to the nodes: 

-~~~~~~
+~~~~
     pushinitrd &amp;lt;noderange&amp;gt;
-~~~~~~
+~~~~

   * If this is the 1st node installation you've done in this cluster, open a console to one of the nodes before starting the installation, so that you can follow the process and see any errors that might be displayed. (See the previous section for how to get a console.) If you already know that the installation process works, you don't need to open a console. 
   * Boot the nodes to start the installation process: 

-~~~~~~
+~~~~
     rsetboot &amp;lt;noderange&amp;gt; hd
     xdsh &amp;lt;noderange&amp;gt; reboot
-~~~~~~
+~~~~

      * Note: For some physical server types in SoftLayer, the rsetboot command fails. You can still proceed without it; it just means you will have to wait for the nodes to time out waiting for DHCP. 
      * Note: Do not use rpower to reboot the node in this situation because that does not give the nodes a chance to sync the file changes (that pushinitrd made) to disk. 

   * Monitor the progress: 

-~~~~~~
+~~~~
     watch nodestat &amp;lt;noderange&amp;gt;
-~~~~~~
+~~~~

   * Later on, if you want to push updates to the nodes (without completely reinstalling them), you can use the [updatenode](http://xcat.sourceforge.net/man1/updatenode.1.html) command. With this command you can sync new config files to the nodes, install additional rpms, and run new postscripts. 
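     As a sketch only (not from the original page), the later-update workflow with updatenode might look like the following. The noderange and postscript name are hypothetical, and the commands are echoed rather than executed, since running them requires a live xCAT management node:

```shell
# Sketch only (hypothetical noderange and postscript name): build and echo
# the updatenode invocations rather than executing them, since they require
# a live xCAT management node.
NODERANGE="node01-node10"
CMD_SYNCFILES="updatenode $NODERANGE -F"                 # sync new config files to the nodes
CMD_RPMS="updatenode $NODERANGE -S"                      # install additional rpms (otherpkgs)
CMD_POSTSCRIPT="updatenode $NODERANGE -P mypostscript"   # run a named postscript
echo "$CMD_SYNCFILES"
echo "$CMD_RPMS"
echo "$CMD_POSTSCRIPT"
```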

@@ -458,24 +468,24 @@
   * Add the Adaptec aacraid device driver to the xCAT genesis initrd. Follow the example given in [XCAT_iDataPlex_Advanced_Setup#Adding_Drivers_to_the_Genesis_Boot_Kernel](XCAT_iDataPlex_Advanced_Setup/#adding-drivers-to-the-genesis-boot-kernel) . This is needed because many of the SoftLayer servers have this device. 
  * Install SystemImager on the xCAT mgmt node. The systemimager rpms are in the xcat-dep tarball, so you should already have that configured as a zypper repository on your mgmt node. (See section [XCAT_iDataPlex_Cluster_Quick_Start#Using_the_New_Sysclone_Deployment_Method](XCAT_iDataPlex_Cluster_Quick_Start/#using-the-new-sysclone-deployment-method) and the sections before that for the full context.) 

-~~~~~~
+~~~~
     zypper install systemimager-server
-~~~~~~
+~~~~

   * Start the rsync daemon for systemimager: 

-~~~~~~
+~~~~
     service systemimager-server-rsyncd start
     chkconfig systemimager-server-rsyncd on
-~~~~~~
+~~~~

   * Make sure you have the xcat-dep rpms in an otherpkgs directory. For example: 

-~~~~~~
+~~~~
     mkdir -p /install/post/otherpkgs/sles11.3/x86_64/xcat
     cd /install/post/otherpkgs/sles11.3/x86_64/xcat
     tar jxvf xcat-dep-*.tar.bz2
-~~~~~~
+~~~~

 ## Capture the Image of a Golden Node

@@ -486,15 +496,15 @@
   * Install the operating system and other desired software on the golden node. This can be done manually, or using xCAT's scripted install method described in the previous section. 
   * Install these rpms on the golden node: systemimager-common, systemimager-client, systemconfigurator, perl-AppConfig. This is most easily done by updating the osimage definition you use for the golden node and then using xCAT's updatenode command. On the mgmt node do: 

-~~~~~~
+~~~~
     chdef -t osimage -o &amp;lt;osimage-name&amp;gt; otherpkglist=/opt/xcat/share/xcat/install/rh/sysclone.sles11.x86_64.otherpkgs.pkglist
     chdef -t osimage -o &amp;lt;osimage-name&amp;gt; -p otherpkgdir=/install/post/otherpkgs/sles11.3/x86_64
     updatenode &amp;lt;my-golden-client&amp;gt; -S
-~~~~~~
+~~~~

  * **If you are running a version of xCAT older than 2.8.5**, one additional step is necessary: add the following lines to /etc/systemimager/updateclient.local.exclude on the golden node (this list will be used later when you need to update your nodes): 

-~~~~~~
+~~~~
     # These are files/dirs that are created automatically on the node, either by SLES, or by xCAT.
     /boot/grub
     /etc/grub.conf
@@ -510,13 +520,13 @@
     /var/cache
     /var/lib/*
     /xcatpost
-~~~~~~
+~~~~

   * Capture the golden node image, by running this on the xCAT mgmt node: 

-~~~~~~
+~~~~
     imgcapture &amp;lt;my-golden-client&amp;gt; -t sysclone -o &amp;lt;myimagename&amp;gt;
-~~~~~~
+~~~~

 This will rsync the golden node's file system to the xCAT mgmt node and put it under /install/sysclone/images/&amp;lt;image-name&amp;gt;. 
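
As a quick sanity check (a sketch with a hypothetical image name; the commands are echoed rather than executed, since they require the xCAT mgmt node), you can confirm the capture produced both the file tree and an osimage definition:

```shell
# Sketch only: echo the inspection commands rather than running them.
# "myimagename" is a hypothetical image name from the imgcapture above.
IMAGE="myimagename"
CMD_LS="ls /install/sysclone/images/$IMAGE"   # the rsync'd golden-node file tree
CMD_LSDEF="lsdef -t osimage $IMAGE"           # the osimage definition imgcapture should have created
echo "$CMD_LS"
echo "$CMD_LSDEF"
```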

@@ -533,41 +543,41 @@

   * Copy the xCAT management node's ssh public key to the nodes: 

-~~~~~~
+~~~~
     lsdef &amp;lt;noderange&amp;gt; -ci usercomment   # note the pw of each node
     xdsh &amp;lt;node&amp;gt; -K       # enter node pw when prompted
-~~~~~~
+~~~~

      Note: you can skip this step if, when you originally requested the servers from the SoftLayer portal, you gave it the xCAT management node's public key to put on the servers. 

   * Have xCAT generate the initrd, kernel, and kernel parameters for deploying the nodes: 

-~~~~~~
+~~~~
     nodeset &amp;lt;noderange&amp;gt; osimage=&amp;lt;captured-sysclone-image&amp;gt;
-~~~~~~
+~~~~

   * Use the xCAT script called pushinitrd to automatically push the initrd, kernel, kernel parameters, and static IP information to the nodes: 

-~~~~~~
+~~~~
     pushinitrd &amp;lt;noderange&amp;gt;
-~~~~~~
+~~~~

  * If this is the first node installation you've done in this cluster, open a console to one of the nodes before starting the installation, so that you can follow the process and see any errors that might be displayed. (See a previous section in this document for how to get a console.) If you already know that the installation process works, you don't need to open a console. 
   * Boot the nodes to start the installation process: 

-~~~~~~
+~~~~
     rsetboot &amp;lt;noderange&amp;gt; hd
     xdsh &amp;lt;noderange&amp;gt; reboot
-~~~~~~
+~~~~

      * Note: For some physical server types in SoftLayer, the rsetboot command fails. You can still proceed without it; it just means you will have to wait for the nodes to time out waiting for DHCP. 
      * Note: Do not use rpower to reboot the node in this situation because that does not give the nodes a chance to sync the file changes (that pushinitrd made) to disk. 

   * Monitor the progress: 

-~~~~~~
+~~~~
     watch nodestat &amp;lt;noderange&amp;gt;
-~~~~~~
+~~~~

 ## Update Nodes Later On

@@ -576,25 +586,25 @@
   * Make changes to your golden node. 
   * From the mgmt node, capture the image using the same command as before. Assuming &amp;lt;myimagename&amp;gt; is an existing image, this will only sync the changes to the image on the mgmt node. 

-~~~~~~
+~~~~
     imgcapture &amp;lt;my-golden-client&amp;gt; -t sysclone -o &amp;lt;myimagename&amp;gt;
-~~~~~~
+~~~~

 **If you are running xCAT 2.8.5 or later:**

   * For the nodes you want to update with this updated golden image:

-~~~~~~
+~~~~
     updatenode &amp;lt;noderange&amp;gt; -S
-~~~~~~
+~~~~

 **If you are running xCAT 2.8.4 or older:**

   * For one of the nodes you want to update, do a dry run of the update to see which files will be updated: 

-~~~~~~
+~~~~
     xdsh &amp;lt;node&amp;gt; -s 'si_updateclient --server &amp;lt;mgmtnode-ip&amp;gt; --dry-run --yes'
-~~~~~~
+~~~~

   * If it lists files/dirs that you don't think should be updated, you need to add them to the exclude list in 3 places: 
     * On the golden node: /etc/systemimager/updateclient.local.exclude 
@@ -608,9 +618,9 @@

   * Run mkinitrd on the nodes because each node needs an initrd that is appropriate for its hardware, not the initrd in the image that was just sync'd: 

-~~~~~~
+~~~~
     xdsh &amp;lt;noderange&amp;gt; -s mkinitrd      # only valid for SLES/SUSE; for Red Hat use dracut
-~~~~~~
+~~~~

 If you want more information about the underlying SystemImager commands that xCAT uses, see the [SystemImager user manual](http://www.systemimager.org/documentation/systemimager-manual-4.1.6.pdf). 

@@ -624,77 +634,77 @@

   * Consider setting consoleondemand=yes, which tells conserver to only connect to the console when the rcons command is run for that node. By default, conserver tries to connect to all consoles when it starts (and tries to reconnect if a console connection ever drops), so that it can constantly be logging the console output for reference later on. This is handy, but also means that conserver can fight with SoftLayer for the single console on each bmc. 

-~~~~~~
+~~~~
     chdef -t site consoleondemand=yes
-~~~~~~
+~~~~

   * Pick one node to set up and get conserver configured: 

-~~~~~~
+~~~~
     chdef &amp;lt;node&amp;gt; cons=ipmi
     makeconservercf &amp;lt;node&amp;gt;
-~~~~~~
+~~~~

   * For convenience, transfer the MN's ssh public key to the node: 

-~~~~~~
+~~~~
     getslnodes &amp;lt;node&amp;gt;    # note the node's password
     xdsh &amp;lt;node&amp;gt; -K       # enter the password when prompted
     ssh &amp;lt;node&amp;gt; date      # verify that the date command runs on the node w/o being prompted for a pw
-~~~~~~
+~~~~

   * Then in a separate shell that you can leave open for a while, run: 

-~~~~~~
+~~~~
     rcons &amp;lt;node&amp;gt;
-~~~~~~
+~~~~

 **Now ssh to the node and do:**

   * Load the ipmi kernel module: 

-~~~~~~
+~~~~
     yum install ipmitool      # if not already installed
     modprobe ipmi_devintf
-~~~~~~
+~~~~

  * Query the speed that the bmc is currently configured to use, and the COM ports that exist on this node: 

-~~~~~~
+~~~~
     ipmitool sol info 1      # note the speed that the bmc is currently using
     dmesg|grep ttyS          # to see com ports avail (the deprecated msg is ok)
-~~~~~~
+~~~~

  * Use the screen command to try each COM port to see which one is used for SOL: 

-~~~~~~
+~~~~
     yum install screen       # if not already installed
     screen /dev/ttyS1 115200   # try COM 2, use the speed the bmc is using
-~~~~~~
+~~~~

  * In the screen above, type some text and hit enter (you won't see the text echoed) and see if it comes out on rcons. If it does, that's the port. If it doesn't, try the next port. To get out of the screen session, press ctrl-a then k and confirm the kill. 
  * Figure out what speed the bios is using by trial and error (try 19200, then 57600, rebooting after each change and watching rcons) 
   * Set the bmc to use the bios speed. This enables rcons to show output from both the bios portion of the booting and the booting of the OS: 

-~~~~~~
+~~~~
     ipmitool sol set volatile-bit-rate 19.2 1     # use the speed the bios is using
     ipmitool sol set non-volatile-bit-rate 19.2 1
-~~~~~~
+~~~~

 **Back on the xCAT MN:**

   * Set the console port and speed in the xCAT db (this is used by nodeset to set the node's kernel parameters), for example: 

-~~~~~~
+~~~~
     chdef &amp;lt;node&amp;gt; serialport=2 serialspeed=19200
-~~~~~~
+~~~~

   * Reboot the node to test the console display: 

-~~~~~~
+~~~~
     rsetboot &amp;lt;node&amp;gt; hd;     # set the next boot of the node to be the current OS on its local hard disk
     xdsh &amp;lt;node&amp;gt; reboot      # to test rcons
-~~~~~~
+~~~~

 # Appendix B - Manually Push the Network Installation Settings to the Nodes

@@ -702,22 +712,22 @@

   * Have xCAT generate the initrd, kernel, and kernel parameters: 

-~~~~~~
+~~~~
     nodeset &amp;lt;node&amp;gt; osimage=sles11.2-x86_64-install-compute
-~~~~~~
+~~~~

   * Display the initrd, kernel, and kernel parameters: 

-~~~~~~
+~~~~
     nodels &amp;lt;node&amp;gt; bootparams
-~~~~~~
+~~~~

   * Using the paths displayed in the nodels command above, and prefixing them with /tftpboot/, copy the kernel and initrd to the /boot file system of the node. For example: 

-~~~~~~
+~~~~
     scp /tftpboot/xcat/osimage/sles11.2-x86_64-install-compute/linux &amp;lt;node&amp;gt;:/boot/xcat-sles-kernel
     scp /tftpboot/xcat/osimage/sles11.2-x86_64-install-compute/initrd &amp;lt;node&amp;gt;:/boot/xcat-sles-initrd
-~~~~~~
+~~~~

   * Now ssh to the node and edit /boot/grub/grub.conf: 
     * In the same format as the other stanzas, add a new stanza with lines: title, root, kernel and params, initrd. You should consider **not** making this stanza the default, so that if you have trouble with rcons, you can always boot into the default OS again to fix it. 
@@ -726,9 +736,9 @@
     * To the kernel parameters, also add which NIC the node will be installing over, and increase the wait time (in seconds) before the node tries to communicate over that NIC (the switches have a long delay after a NIC state change). For example: netdevice=eth0 netwait=90 
   * Back on the MN, open a console for each node in a separate window (see previous section), and then boot the node and watch the progress: 

-~~~~~~
+~~~~
     rsetboot &amp;lt;node&amp;gt; hd
     xdsh &amp;lt;node&amp;gt; reboot
-~~~~~~
+~~~~

      Note: because of the slowness of the switches to respond to NICs coming up, the installation process will probably hang at one point. On the console, autoyast will ask if you want to retry. Wait about 15 seconds and then retry and the process should continue. 
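
     For illustration only, the new grub.conf stanza described above might look like the following sketch. The title is arbitrary, root (hd0,0) assumes /boot is on the first partition of the first disk, and the kernel-parameters placeholder must be replaced with the values shown by nodels bootparams plus the netdevice/netwait additions:

```
title xCAT network install
    root (hd0,0)
    kernel /xcat-sles-kernel &amp;lt;kernel parameters from nodels bootparams&amp;gt; netdevice=eth0 netwait=90
    initrd /xcat-sles-initrd
```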
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Lissa Valletta</dc:creator><pubDate>Thu, 31 Jul 2014 13:53:53 -0000</pubDate><guid>https://sourceforge.net681a91fc43a881cf81b6347dc55f4873174f9e0c</guid></item></channel></rss>