The xCAT platform-specific cookbooks explain how to initially deploy your nodes. (See the [XCAT_Documentation] page for a list of the cookbooks.) But after initial node deployment, you will inevitably need to make changes and updates to your nodes. The updatenode command is for this purpose. It allows you to add or modify the following things on your nodes:
* software (OS distro rpms and additional rpms)
* postscripts and postbootscripts
* synchronized configuration files
* ssh keys and xCAT ssl credentials
Each of these will be explained in the document. The basic way to use updatenode is to set the definition of nodes on the management node the way you want it and then run updatenode to push those changes out to the actual nodes. Using options to the command, you can control which of the above categories updatenode pushes out to the nodes.
Most of what is described in this document applies to both stateful and stateless nodes. For nfs-based statelite nodes, see [XCAT_Linux_Statelite].
In addition to the information in this document, check out the updatenode man page.
The names of the rpms that will be installed on the node are stored in package list files. There are two kinds of package list files: the .pkglist file, for rpms that come with the OS distro, and the .otherpkgs.pkglist file, for additional rpms that are not part of the distro.
The path to the package lists will be read from the osimage definition. Which osimage a node is using is specified by the provmethod attribute. To display this value for a node:
lsdef node1 -i provmethod
Object name: node1
provmethod=rhels6.3-x86_64-netboot-compute
You can display the details of this osimage by running the following command, supplying your osimage name:
lsdef -t osimage rhels6.3-x86_64-netboot-compute
Object name: rhels6.3-x86_64-netboot-compute
exlist=/opt/xcat/share/xcat/netboot/rhels6.3/compute.exlist
imagetype=linux
osarch=x86_64
osname=Linux
osvers=rhels6.3
otherpkgdir=/install/post/otherpkgs/rhels6.3/x86_64
otherpkglist=/install/custom/netboot/rh/compute.otherpkgs.pkglist
pkgdir=/install/rhels6/x86_64
pkglist=/opt/xcat/share/xcat/netboot/rhels6/compute.pkglist
postinstall=/opt/xcat/share/xcat/netboot/rh/compute.rhels6.x86_64.postinstall
profile=compute
provmethod=netboot
rootimgdir=/install/netboot/rhels6.3/x86_64/compute
synclists=/install/custom/netboot/compute.synclist
You can set the pkglist and otherpkglist using the following command:
chdef -t osimage rhels6.3-x86_64-netboot-compute pkglist=/opt/xcat/share/xcat/netboot/rh/compute.pkglist\
otherpkglist=/install/custom/netboot/rh/my.otherpkgs.pkglist
For rpms from the OS distro, add the new rpm names (without the version number) in the .pkglist file. For example, file /install/custom/netboot/sles/compute.pkglist will look like this after adding perl-DBI:
bash
nfs-utils
openssl
dhcpcd
kernel-smp
openssh
procps
psmisc
resmgr
wget
rsync
timezone
perl-DBI
For the format of the .pkglist file, go to Using_Updatenode#Appendix_B:File_Format_for.pkglist_File.
If you have newer updates to some of your operating system packages that you would like to apply to your OS image, you can place them in another directory, and add that directory to your osimage pkgdir attribute. For example, with the osimage defined above, if you have a new openssl package that you need to update for security fixes, you could place it in a directory, create repository data, and add that directory to your pkgdir:
mkdir -p /install/osupdates/rhels6.3/x86_64
cd /install/osupdates/rhels6.3/x86_64
cp <your new openssl rpm> .
createrepo .
chdef -t osimage rhels6.3-x86_64-netboot-compute pkgdir=/install/rhels6/x86_64,/install/osupdates/rhels6.3/x86_64
Note: if the target node was not installed by xCAT, make sure the osimage pkgdir attribute is set correctly so that the correct repository data is used.
If you have additional rpms (rpms not in the distro) that you also want installed, make a directory to hold them, create a list of the rpms you want installed, and add that information to the osimage definition:
Create a directory to hold the additional rpms:
mkdir -p /install/post/otherpkgs/rh/x86_64
cd /install/post/otherpkgs/rh/x86_64
cp /myrpms/* .
createrepo .
NOTE: when the management node is running rhels6.x and the otherpkgs repository data is for rhels5.x, run createrepo with "-s md5". For example:
createrepo -s md5 .
Create a file that lists the additional rpms that should be installed. For example, in /install/custom/netboot/rh/compute.otherpkgs.pkglist put:
myrpm1
myrpm2
myrpm3
Add both the directory and the file to the osimage definition:
chdef -t osimage mycomputeimage otherpkgdir=/install/post/otherpkgs/rh/x86_64 otherpkglist=/install/custom/netboot/rh/compute.otherpkgs.pkglist
If you add more rpms at a later time, you must run createrepo again. The createrepo command is in the createrepo rpm, which for RHEL is in the 1st DVD, but for SLES is in the SDK DVD.
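For example, a minimal sketch of adding one more rpm later (myrpm4 is a hypothetical package name):

cp /myrpms/myrpm4-1.0-1.x86_64.rpm /install/post/otherpkgs/rh/x86_64/
cd /install/post/otherpkgs/rh/x86_64
# regenerate the repository metadata so the new rpm is visible to yum/zypper
createrepo .
# add the rpm name (without version) to the otherpkgs package list
echo "myrpm4" >> /install/custom/netboot/rh/compute.otherpkgs.pkglist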
If you have multiple sets of rpms that you want to keep separate to keep them organized, you can put them in separate sub-directories in the otherpkgdir. If you do this, you need to do the following extra things, in addition to the steps above:
In your otherpkgs.pkglist, list at least 1 file from each sub-directory. (During installation, xCAT will define a yum or zypper repository for each directory you reference in your otherpkgs.pkglist.) For example:
xcat/xcat-core/xCATsn
xcat/xcat-dep/rh6/x86_64/conserver-xcat
There are some examples of otherpkgs.pkglist in /opt/xcat/share/xcat/netboot/<distro>/service.*.otherpkgs.pkglist that show the format.
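As a concrete sketch (the sub-directory and rpm names below are hypothetical, and it is assumed each sub-directory is given its own repository metadata with createrepo):

mkdir -p /install/post/otherpkgs/rh/x86_64/rsct /install/post/otherpkgs/rh/x86_64/myapps
cp /myrpms/rsct/*.rpm /install/post/otherpkgs/rh/x86_64/rsct/
cp /myrpms/apps/*.rpm /install/post/otherpkgs/rh/x86_64/myapps/
# create repository metadata in each sub-directory
( cd /install/post/otherpkgs/rh/x86_64/rsct && createrepo . )
( cd /install/post/otherpkgs/rh/x86_64/myapps && createrepo . )

The corresponding otherpkgs.pkglist entries would then use the sub-directory prefix, for example rsct/rsct.core and myapps/myrpm1.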
Note: the otherpkgs postbootscript should by default be associated with every node. Use lsdef to check:
lsdef node1 -i postbootscripts
If it is not, you need to add it. For example, add it for all of the nodes in the "compute" group:
chdef -p -t group compute postbootscripts=otherpkgs
For the format of the .otherpkgs.pkglist file, go to Using_Updatenode#Appendix_A:File_Format_for.pkglist_File.
For diskful (stateful) nodes, run the updatenode command to push the new software to the nodes:
updatenode <noderange> -S
The -S flag updates the nodes with all the new or updated rpms specified in both .pkglist and .otherpkgs.pkglist.
If you have a configuration script that is necessary to configure the new software, then instead run:
cp myconfigscript /install/postscripts/
chdef -p -t group compute postbootscripts=myconfigscript
updatenode <noderange> ospkgs,otherpkgs,myconfigscript
The next time you re-install these nodes, the additional software will be automatically installed.
For diskless (stateless) nodes, run the updatenode command to push the new software to the nodes:
updatenode <noderange> -S
The -S flag updates the nodes with all the new or updated rpms specified in both .pkglist and .otherpkgs.pkglist.
If you have a configuration script that is necessary to configure the new software, then instead run:
cp myconfigscript /install/postscripts/
chdef -p -t group compute postbootscripts=myconfigscript
updatenode <noderange> ospkgs,otherpkgs,myconfigscript
You must also do this next step, otherwise the next time you reboot the stateless nodes, the new software won't be on the nodes. Run genimage and packimage to install the extra rpms into the image:
genimage <osimage>
packimage <osimage>
Updatenode can also be used in a Sysclone environment to push delta changes to target nodes. After capturing the delta changes from the golden client onto the management node, run the command below to push the delta changes to the target nodes. See Using_Clone_to_Deploy_Server#Update_Nodes_Later_On for more information.
updatenode <targetnoderange> -S
You can use the updatenode command to perform the following functions after the nodes are up and running:
To rerun all the postscripts for the nodes. (In general, xCAT postscripts are structured such that it is not harmful to run them multiple times.)
updatenode <noderange> -P
To rerun just the syslog postscript for the nodes:
updatenode <noderange> -P syslog
To run a list of your own postscripts, make sure the scripts are copied to /install/postscripts directory, then:
updatenode <noderange> -P "script1,script2"
If you need to, you can also pass arguments to your scripts (this will work in xCAT 2.6.7 and greater):
updatenode <noderange> -P "script1 p1 p2,script2"
As of xCAT 2.8, you can customize which attributes are made available to the post*scripts, using the shipped mypostscript.tmpl file.
As of the xCAT 2.8 release, xCAT provides a way for the admin to customize the information that will be provided to the post*scripts when they run on the node. This is done by editing the mypostscript.tmpl file. The attributes that are provided in the shipped mypostscript.tmpl file should not be removed. They are needed by the default xCAT postscripts.
The mypostscript.tmpl file is shipped in the /opt/xcat/share/xcat/templates/mypostscript directory.
If the admin customizes the mypostscript.tmpl, they should copy it to /install/postscripts/mypostscript.tmpl and then edit that copy. The generated mypostscript for each node will be named mypostscript.<nodename> and will be put in the /tftpboot/mypostscripts directory.
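For example, to create your own copy of the template for editing:

mkdir -p /install/postscripts
cp /opt/xcat/share/xcat/templates/mypostscript/mypostscript.tmpl /install/postscripts/mypostscript.tmpl
vi /install/postscripts/mypostscript.tmpl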
As of xCAT 2.8.2, this attribute is supported on both AIX and Linux; in 2.8 and 2.8.1 it is only supported on Linux. If the site table precreatemypostscripts attribute is set to 1 or yes, it instructs xCAT at nodeset and updatenode time to query the database once for all of the nodes passed into the command, create the mypostscript file for each node, and put them in a directory in $TFTPDIR (for example /tftpboot). The created mypostscript.<nodename> file in the /tftpboot/mypostscripts directory will not be regenerated unless another nodeset or updatenode command is run for that node. This should be used when the system definition has stabilized. It saves time on updatenode or reboot by not regenerating the mypostscript file.
If the precreatemypostscripts attribute is yes, and a database change is made or the xCAT code is upgraded, then you should run nodeset or updatenode again to regenerate the /tftpboot/mypostscripts/mypostscript.<nodename> file so it picks up the latest database settings. The default for precreatemypostscripts is no/0.
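For example, to enable it on the site table (the clustersite object name is the same one used in the testing example later in this document):

chdef -t site -o clustersite precreatemypostscripts=1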
When you run nodeset or updatenode, it will look for /install/postscripts/mypostscript.tmpl first. If /install/postscripts/mypostscript.tmpl exists, it will use that template to generate the mypostscript for each node. Otherwise, it will use /opt/xcat/share/xcat/templates/mypostscript/mypostscript.tmpl.
The attributes that are defined in the shipped mypostscript.tmpl file should not be removed. The xCAT default postscripts rely on that information to run successfully. The following will explain the entries in the mypostscript.tmpl file.
The SITE_TABLE_ALL_ATTRIBS_EXPORT line in the file directs the code to export all attributes defined in the site table. Note the attributes are not always defined exactly as in the site table to avoid conflict with other table attributes of the same name. For example, the site table master attribute is named SITEMASTER in the generated mypostscript file.
#SITE_TABLE_ALL_ATTRIBS_EXPORT#
The following line exports ENABLESSHBETWEENNODES by running the internal xCAT routine (enablesshbetweennodes).
ENABLESSHBETWEENNODES=#Subroutine:xCAT::Template::enablesshbetweennodes:$NODE# export ENABLESSHBETWEENNODES
tabdump(<TABLENAME>) is used to get all the information in the <TABLENAME> table. For example:
tabdump(networks)
This line exports the node name based on its definition in the database.
NODE=$NODE export NODE
This line gets a comma-separated list of the groups to which the node belongs.
GROUP=#TABLE:nodelist:$NODE:groups# export GROUP
These lines read the given attributes (nfsserver, installnic, primarynic, xcatmaster, routenames) for the node ($NODE) from the noderes table and export them.
NFSSERVER=#TABLE:noderes:$NODE:nfsserver# export NFSSERVER
INSTALLNIC=#TABLE:noderes:$NODE:installnic# export INSTALLNIC
PRIMARYNIC=#TABLE:noderes:$NODE:primarynic# export PRIMARYNIC
MASTER=#TABLE:noderes:$NODE:xcatmaster# export MASTER
NODEROUTENAMES=#TABLE:noderes:$NODE:routenames# export NODEROUTENAMES
The following entry exports multiple variables from the routes table. These variables are not always set.
#ROUTES_VARS_EXPORT#
The following lines export nodetype table attributes.
OSVER=#TABLE:nodetype:$NODE:os# export OSVER
ARCH=#TABLE:nodetype:$NODE:arch# export ARCH
PROFILE=#TABLE:nodetype:$NODE:profile# export PROFILE
PROVMETHOD=#TABLE:nodetype:$NODE:provmethod# export PROVMETHOD
The following adds the current directory to the path for the postscripts.
PATH=`dirname $0`:$PATH export PATH
The following sets the NODESETSTATE by running the internal xCAT getnodesetstate script.
NODESETSTATE=#Subroutine:xCAT::Postage::getnodesetstate:$NODE# export NODESETSTATE
The following says the postscripts are not being run as a result of updatenode. This is changed to 1 when updatenode runs.
UPDATENODE=0 export UPDATENODE
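Because this variable is changed to 1 by updatenode, a postscript can test it to skip work that only makes sense at deployment time. A minimal hypothetical fragment:

#!/bin/sh
# hypothetical postscript fragment
if [ "$UPDATENODE" = "1" ]; then
    echo "re-run via updatenode; skipping one-time setup steps"
else
    echo "running during initial node deployment"
fi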
The following sets the NTYPE to compute, service, or MN.
NTYPE=$NTYPE export NTYPE
The following sets the MAC address.
MACADDRESS=#TABLE:mac:$NODE:mac# export MACADDRESS
If VLAN is set up, then the #VLAN_VARS_EXPORT# line will provide the following exports:
VMNODE='YES' export VMNODE
VLANID=vlan1... export VLANID
VLANHOSTNAME=.. ..
#VLAN_VARS_EXPORT#
If monitoring is set up, then the #MONITORING_VARS_EXPORT# line will provide:
MONSERVER=11.10.34.108 export MONSERVER
MONMASTER=11.10.34.108 export MONMASTER
#MONITORING_VARS_EXPORT#
The #OSIMAGE_VARS_EXPORT# line will provide, for example:
OSPKGDIR=/install/rhels6.2/ppc64 export OSPKGDIR
OSPKGS='bash,nfs-utils,openssl,dhclient,kernel,openssh-server,openssh-clients,busybox,wget,rsyslog,dash,vim-minimal,ntp,rsyslog,rpm,rsync,ppc64-utils,iputils,dracut,dracut-network,e2fsprogs,bc,lsvpd,irqbalance,procps,yum' export OSPKGS
#OSIMAGE_VARS_EXPORT#
The #NETWORK_FOR_DISKLESS_EXPORT# line will provide diskless network information, if defined:
NETMASK=255.255.255.0 export NETMASK
GATEWAY=8.112.34.108 export GATEWAY
..
#NETWORK_FOR_DISKLESS_EXPORT#
Note: the #INCLUDE_POSTSCRIPTS_LIST# and #INCLUDE_POSTBOOTSCRIPTS_LIST# sections in /tftpboot/mypostscript* on the management node will contain all the postscripts and postbootscripts defined for the node. When you run an updatenode command for only some of the scripts (for example, updatenode <nodename> -P syslog), you will see in the /xcatpost/mypostscript file on the node that the list has been redefined during the execution of updatenode to run only the requested scripts.
The #INCLUDE_POSTSCRIPTS_LIST# flag provides a list of postscripts defined for this $NODE.
#INCLUDE_POSTSCRIPTS_LIST#
For example, you will see in the generated file the following stanzas:
# postscripts-start-here
# defaults-postscripts-start-here
syslog
remoteshell
# defaults-postscripts-end-here
# node-postscripts-start-here
syncfiles
# node-postscripts-end-here
The #INCLUDE_POSTBOOTSCRIPTS_LIST# provides a list of postbootscripts defined for this $NODE.
#INCLUDE_POSTBOOTSCRIPTS_LIST#
For example, you will see in the generated file the following stanzas:
# postbootscripts-start-here
# defaults-postbootscripts-start-here
otherpkgs
# defaults-postbootscripts-end-here
# node-postbootscripts-end-here
# postbootscripts-end-here
Type 1: For a simple variable, the syntax is as follows. The mypostscript.tmpl has several examples of this. $NODE is filled in by the code. UPDATENODE is changed to 1 when the postscripts are run by updatenode. $NTYPE is filled in as either compute, service, or MN.
NODE=$NODE export NODE
UPDATENODE=0 export UPDATENODE
NTYPE=$NTYPE export NTYPE
Type 2: This syntax gets the value of one attribute from <tablename>, where the key is $NODE. It does not support tables with two keys. Some of the tables with two keys are litefile, prodkey, deps, monsetting, mpa, and networks.
VARNAME=#TABLE:tablename:$NODE:attribute#
For example, to get the new updatestatus attribute from the nodelist table:
UPDATESTATUS=#TABLE:nodelist:$NODE:updatestatus# export UPDATESTATUS
Type 3: The syntax is as follows:
VARNAME=#Subroutine:modulename::subroutinename:$NODE#
or
VARNAME=#Subroutine:modulename::subroutinename#
Examples in the mypostscript.tmpl are the following:
NODESETSTATE=#Subroutine:xCAT::Postage::getnodesetstate:$NODE# export NODESETSTATE
ENABLESSHBETWEENNODES=#Subroutine:xCAT::Template::enablesshbetweennodes:$NODE# export ENABLESSHBETWEENNODES
Note: Type 3 is not an open interface to add extensions to the template.
Type 4: The syntax is #FLAG#. When parsing the template, the code generates all entries defined by #FLAG#, if they are defined in the database. For example, to export all values of all attributes from the site table, the tag is:
#SITE_TABLE_ALL_ATTRIBS_EXPORT#
For the #SITE_TABLE_ALL_ATTRIBS_EXPORT# flag, the related subroutine gets the attribute values and handles the special cases. For example, site.master is exported as "SITEMASTER"; and if noderes.xcatmaster exists, it is exported as "MASTER", otherwise site.master is also exported as "MASTER".
Other examples are:
#VLAN_VARS_EXPORT# - gets all vlan related items
#MONITORING_VARS_EXPORT# - gets all monitoring configuration and setup data
#OSIMAGE_VARS_EXPORT# - gets osimage related variables, such as ospkgdir, ospkgs ...
#NETWORK_FOR_DISKLESS_EXPORT# - gets diskless network information
#INCLUDE_POSTSCRIPTS_LIST# - includes the list of all postscripts for the node
#INCLUDE_POSTBOOTSCRIPTS_LIST# - includes the list of all postbootscripts for the node
Note: Type 4 is not an open interface to add extensions to the template.
Type 5: Get all the data from the specified table. The <TABLENAME> should not be a node table, like nodelist; that should be handled with the Type 2 syntax to get specific attributes for the $NODE, because tabdump would return too much data for a node table. The auditlog and eventlog tables should also not be used with tabdump for the same reason. The site table should not be specified either; it is already provided by the #SITE_TABLE_ALL_ATTRIBS_EXPORT# flag. This syntax can be used to get the data from tables with two keys (such as switch).
The syntax is:
tabdump(<TABLENAME>)
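For example, the tabdump(networks) entry in the shipped template produces exports of the following form in the generated mypostscript: a NETWORKS_LINES count plus one NETWORKS_LINE<n> variable per row of the networks table, with the row's attributes joined by "||" (the attribute values are elided here; the complete form can be seen in the service node example later in this document):

NETWORKS_LINES=4 export NETWORKS_LINES
NETWORKS_LINE1='netname=public_net||net=8.112.154.64||mask=255.255.255.192||...' export NETWORKS_LINE1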
Add a new attribute to mypostscript.tmpl
When you add new attributes to the template, you should edit /install/postscripts/mypostscript.tmpl, which you created by copying /opt/xcat/share/xcat/templates/mypostscript/mypostscript.tmpl. Make all additions before the # postscripts-start-here section. xCAT will first look for /install/postscripts/mypostscript.tmpl and, if it is not found, will use the one in /opt/xcat/share/xcat/templates/mypostscript/mypostscript.tmpl.
For example:
UPDATESTATUS=#TABLE:nodelist:$NODE:updatestatus# export UPDATESTATUS
...
# postscripts-start-here
#INCLUDE_POSTSCRIPTS_LIST#
## The following flag postscripts-end-here must not be deleted.
# postscripts-end-here
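After you run nodeset or updatenode for the node again, you can confirm that the new export shows up in the generated file, for example:

grep UPDATESTATUS /tftpboot/mypostscripts/mypostscript.<nodename>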
Note: If you have a hierarchical cluster, you must copy your new mypostscript.tmpl to /install/postscripts/mypostscript.tmpl on the service nodes, unless /install/postscripts directory is mounted from the MN to the service node.
Remove an attribute from mypostscript.tmpl
If you want to remove an attribute that you have added, remove all the related lines or comment them out with ##. For example, to comment out the lines added above:
##UPDATESTATUS=#TABLE:nodelist:$NODE:updatestatus#
##export UPDATESTATUS
There are two quick ways to test the template.
If the node is up:
updatenode <nodename> -P syslog
Check your generated template
vi /tftpboot/mypostscripts/mypostscript.<nodename>
Another way is to set the precreatemypostscripts option:
chdef -t site -o clustersite precreatemypostscripts=1
Then run
nodeset <nodename> ....
Check your generated template
vi /tftpboot/mypostscripts/mypostscript.<nodename>
This is an example of the generated mypostscript for a service node install. It is found in /xcatpost/mypostscript on the node.
# global value to store the running status of the postbootscripts, the value
# is non-zero if one postbootscript failed
return_value=0
# subroutine used to run postscripts
run_ps () {
    local ret_local=0
    logdir="/var/log/xcat"
    mkdir -p $logdir
    logfile="/var/log/xcat/xcat.log"
    if [ -f $1 ]; then
        echo "`date` Running postscript: $@" | tee -a $logfile
        #./$@ 2>&1 1> /tmp/tmp4xcatlog
        #cat /tmp/tmp4xcatlog | tee -a $logfile
        ./$@ 2>&1 | tee -a $logfile
        ret_local=${PIPESTATUS[0]}
        if [ "$ret_local" -ne "0" ]; then
            return_value=$ret_local
        fi
        echo "Postscript: $@ exited with code $ret_local"
    else
        echo "`date` Postscript $1 does NOT exist." | tee -a $logfile
        return_value=-1
    fi
    return 0
}
# subroutine end
SHAREDTFTP='1' export SHAREDTFTP
TFTPDIR='/tftpboot' export TFTPDIR
CONSOLEONDEMAND='yes' export CONSOLEONDEMAND
PPCTIMEOUT='300' export PPCTIMEOUT
VSFTP='y' export VSFTP
DOMAIN='cluster.com' export DOMAIN
XCATIPORT='3002' export XCATIPORT
DHCPINTERFACES="'xcatmn2|eth1;service|eth1'" export DHCPINTERFACES
MAXSSH='10' export MAXSSH
SITEMASTER=10.2.0.100 export SITEMASTER
TIMEZONE='America/New_York' export TIMEZONE
INSTALLDIR='/install' export INSTALLDIR
NTPSERVERS='xcatmn2' export NTPSERVERS
EA_PRIMARY_HMC='c76v2hmc01' export EA_PRIMARY_HMC
NAMESERVERS='10.2.0.100' export NAMESERVERS
SNSYNCFILEDIR='/var/xcat/syncfiles' export SNSYNCFILEDIR
DISJOINTDHCPS='0' export DISJOINTDHCPS
FORWARDERS='8.112.8.1,8.112.8.2' export FORWARDERS
VLANNETS='|(\d+)|10.10.($1+0).0|' export VLANNETS
XCATDPORT='3001' export XCATDPORT
USENMAPFROMMN='no' export USENMAPFROMMN
DNSHANDLER='ddns' export DNSHANDLER
ROUTENAMES='r1,r2' export ROUTENAMES
INSTALLLOC='/install' export INSTALLLOC
ENABLESSHBETWEENNODES=YES export ENABLESSHBETWEENNODES
NETWORKS_LINES=4 export NETWORKS_LINES
NETWORKS_LINE1='netname=public_net||net=8.112.154.64||mask=255.255.255.192||mgtifname=eth0||gateway=8.112.154.126||dhcpserver=||tftpserver=8.112.154.69||nameservers=8.112.8.1||ntpservers=||logservers=||dynamicrange=||staticrange=||staticrangeincrement=||nodehostname=||ddnsdomain=||vlanid=||domain=||disable=||comments=' export NETWORKS_LINE2
NETWORKS_LINE3='netname=sn21_net||net=10.2.1.0||mask=255.255.255.0||mgtifname=eth1||gateway=<xcatmaster>||dhcpserver=||tftpserver=||nameservers=10.2.1.100,10.2.1.101||ntpservers=||logservers=||dynamicrange=||staticrange=||staticrangeincrement=||nodehostname=||ddnsdomain=||vlanid=||domain=||disable=||comments=' export NETWORKS_LINE3
NETWORKS_LINE4='netname=sn22_net||net=10.2.2.0||mask=255.255.255.0||mgtifname=eth1||gateway=10.2.2.100||dhcpserver=10.2.2.100||tftpserver=10.2.2.100||nameservers=10.2.2.100||ntpservers=||logservers=||dynamicrange=10.2.2.120-10.2.2.250||staticrange=||staticrangeincrement=||nodehostname=||ddnsdomain=||vlanid=||domain=||disable=||comments=' export NETWORKS_LINE4
NODE=xcatsn23 export NODE
NFSSERVER=10.2.0.100 export NFSSERVER
INSTALLNIC=eth0 export INSTALLNIC
PRIMARYNIC=eth0 export PRIMARYNIC
MASTER=10.2.0.100 export MASTER
OSVER=sles11 export OSVER
ARCH=ppc64 export ARCH
PROFILE=service-xcattest export PROFILE
PROVMETHOD=netboot export PROVMETHOD
PATH=`dirname $0`:$PATH export PATH
NODESETSTATE=netboot export NODESETSTATE
UPDATENODE=1 export UPDATENODE
NTYPE=service export NTYPE
MACADDRESS=16:3d:05:fa:4a:02 export MACADDRESS
NODEID=EA163d05fa4a02EA export NODEID
MONSERVER=8.112.154.69 export MONSERVER
MONMASTER=10.2.0.100 export MONMASTER
MS_NODEID=0360238fe61815e6 export MS_NODEID
OSPKGS='kernel-ppc64,udev,sysconfig,aaa_base,klogd,device-mapper,bash,openssl,nfs-utils,ksh,syslog-ng,openssh,openssh-askpass,busybox,vim,rpm,bind,bind-utils,dhcp,dhcpcd,dhcp-server,dhcp-client,dhcp-relay,bzip2,cron,wget,vsftpd,util-linux,module-init-tools,mkinitrd,apache2,apache2-prefork,perl-Bootloader,psmisc,procps,dbus-1,hal,timezone,rsync,powerpc-utils,bc,iputils,uuid-runtime,unixODBC,gcc,zypper,tar' export OSPKGS
OTHERPKGS1='xcat/xcat-core/xCAT-rmc,xcat/xcat-core/xCATsn,xcat/xcat-dep/sles11/ppc64/conserver,perl-DBD-mysql,nagios/nagios-nsca-client,nagios/nagios,nagios/nagios-plugins-nrpe,nagios/nagios-nrpe' export OTHERPKGS1
OTHERPKGS_INDEX=1 export OTHERPKGS_INDEX
## get the diskless networks information. There may be no information.
NETMASK=255.255.255.0 export NETMASK
GATEWAY=10.2.0.100 export GATEWAY
# NIC related attributes for the node for confignics postscript
NICIPS="" export NICIPS
NICHOSTNAMESUFFIXES="" export NICHOSTNAMESUFFIXES
NICTYPES="" export NICTYPES
NICCUSTOMSCRIPTS="" export NICCUSTOMSCRIPTS
NICNETWORKS="" export NICNETWORKS
NICCOMMENTS= export NICCOMMENTS
# postscripts-start-here
# defaults-postscripts-start-here
run_ps test1
run_ps syslog
run_ps remoteshell
run_ps syncfiles
run_ps confNagios
run_ps configrmcnode
# defaults-postscripts-end-here
# node-postscripts-start-here
run_ps servicenode
run_ps configeth_new
# node-postscripts-end-here
run_ps setbootfromnet
# postscripts-end-here
# postbootscripts-start-here
# defaults-postbootscripts-start-here
run_ps otherpkgs
# defaults-postbootscripts-end-here
# node-postbootscripts-start-here
run_ps test
# The following line node-postbootscripts-end-here must not be deleted.
# node-postbootscripts-end-here
# postbootscripts-end-here
exit $return_value
If the ssh keys or xCAT ssl credentials become corrupted after node deployment, xCAT provides a way to quickly fix the keys and credentials on your service and compute nodes:
updatenode <noderange> -K
Note: this option can't be used with any of the other updatenode options.
If after install, you would like to sync files to the nodes, use the instructions in the next section on "Setting up syncfile for updatenode" and then run:
updatenode <noderange> -F
The syncfiles postscript cannot be used with the updatenode command to sync files to the nodes. If you run updatenode <noderange> -P syncfiles, nothing will be done; a message will be logged stating that you must use updatenode <noderange> -F to sync files.
For Linux and AIX nodes, xCAT uses a different approach to determine the location of the common synclist file; the method for each platform is described below.
The sync file function is not supported for statelite installations. To sync files for statelite installations, use the read-only option for the files/directories listed in the litefile table, with the source location specified in the litetree table. For more information on setting up statelite installs, see [XCAT_Linux_Statelite].
During the installation process or the updatenode process, xCAT needs to figure out the location of the synclist file automatically, so the synclist must be put in the expected place with the proper name.
If the provisioning method for the node is an osimage name, then the path to the synclist will be read from the osimage definition synclists attribute. You can display this information by running the following command, supplying your osimage name.
lsdef -t osimage -l rhels6-x86_64-netboot-compute
Object name: rhels6-x86_64-netboot-compute
exlist=/opt/xcat/share/xcat/netboot/rhels6/compute.exlist
imagetype=linux
osarch=x86_64
osname=Linux
osvers=rhels6
otherpkgdir=/install/post/otherpkgs/rhels6/x86_64
pkgdir=/install/rhels6/x86_64
pkglist=/opt/xcat/share/xcat/netboot/rhels6/compute.pkglist
profile=compute
provmethod=netboot
rootimgdir=/install/netboot/rhels6/x86_64/compute
synclists=/install/custom/netboot/compute.synclist
You can set the synclist path using the following command:
chdef -t osimage -o rhels6-x86_64-netboot-compute synclists="/install/custom/netboot/compute.synclist"
If the provisioning method for the node is install or netboot, then the path to the synclist should be of the following format:
/install/custom/<inst_type>/<distro>/<profile>.<os>.<arch>.synclist
<inst_type>: "install", "netboot"
<distro>: "rh", "centos", "fedora", "sles"
<profile>, <os> and <arch> are what you set for the node
For example:
The location of the synclist file for the diskful installation of sles11 with 'compute' as the profile:
/install/custom/install/sles/compute.sles11.synclist
The location of the synclist file for the diskless netboot of sles11 with 'service' as the profile:
/install/custom/netboot/sles/service.sles11.synclist
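For reference, the synclist file itself contains lines of the form <source file on the management node> -> <destination path on the node>. A minimal sketch (the file names below are examples only; see the syncfiles/xdcp documentation for the full format):

/install/custom/sync/hosts -> /etc/hosts
/install/custom/sync/ntp.conf -> /etc/ntp.conf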
For the AIX platform, the common synclist file is created based on the definition of the NIM image. The NIM images are defined in the 'osimage' table, and the osimage.synclists attribute identifies the location of the common synclist for the nodes that use this NIM image to install or netboot the system.
For example:
If you want to sync files to the node 'node1', which uses the '61cosi' NIM image as its profile (the profile attribute is set to the osimage for an AIX node), you need to do the following to set the synclist.
Create a synclist file in any directory, for example /tmp/61cosi.AIX.synclist. Then set the full path of the synclist file in the osimage.synclists attribute for the NIM image '61cosi':
chdef -t osimage -o 61cosi synclists=/tmp/61cosi.AIX.synclist
The otherpkgs.pkglist file can contain the following types of entries:
* rpm name without version numbers
* otherpkgs subdirectory plus rpm name
* blank lines
* comment lines starting with #
* #INCLUDE: <full file path># to include other pkglist files
* #NEW_INSTALL_LIST# to signify that the following rpms will be installed with a new rpm install command (zypper, yum, or rpm as determined by the function using this file)
* #ENV:<variable list># to specify environment variable(s) for a separate rpm install command
* rpms to remove before installing marked with a "-"
* rpms to remove after installing marked with a "--"
These are described in more detail in the following sections.
A simple otherpkgs.pkglist file just contains the names of the rpm files without the version numbers.
For example, if you put the following three rpms under /install/post/otherpkgs/<os>/<arch>/ directory,
rsct.core-2.5.3.1-09120.ppc.rpm
rsct.core.utils-2.5.3.1-09118.ppc.rpm
src-1.3.0.4-09118.ppc.rpm
The otherpkgs.pkglist file will be like this:
src
rsct.core
rsct.core.utils
If you create a subdirectory under /install/post/otherpkgs/<os>/<arch>/, say rsct, the otherpkgs.pkglist file will be like this:
rsct/src
rsct/rsct.core
rsct/rsct.core.utils
You can group some rpms in a file and include that file in the otherpkgs.pkglist file using #INCLUDE:<file># format.
rsct/src
rsct/rsct.core
rsct/rsct.core.utils
#INCLUDE:/install/post/otherpkgs/myotherlist#
where /install/post/otherpkgs/myotherlist is another package list file that follows the same format.
Note the trailing "#" character at the end of the line. It is important to specify this character for correct pkglist parsing.
The #NEW_INSTALL_LIST# statement is supported in xCAT 2.4 and later.
You can specify that separate calls should be made to the rpm install program (zypper, yum, rpm) for groups of rpms by specifying the entry #NEW_INSTALL_LIST# on a line by itself as a separator in your pkglist file. All rpms listed up to this separator will be installed together. You can have as many separators as you wish in your pkglist file, and each sublist will be installed separately in the order they appear in the file.
For example:
compilers/vacpp.rte
compilers/vac.lib
compilers/vacpp.lib
compilers/vacpp.rte.lnk
#NEW_INSTALL_LIST#
pe/IBM_pe_license
The #ENV statement is supported on Redhat and SLES in xCAT 2.6.9 and later.
You can specify environment variable(s) for each rpm install call with a "#ENV:<variable list>#" entry. The environment variables also apply to the rpm removal call if there are rpms to be removed in the sublist.
For example:
#ENV:INUCLIENTS=1 INUBOSTYPE=1#
rsct/rsct.core
rsct/rsct.core.utils
rsct/src
This is the same as:
#ENV:INUCLIENTS=1#
#ENV:INUBOSTYPE=1#
rsct/rsct.core
rsct/rsct.core.utils
rsct/src
The "-" syntax is supported in xCAT 2.3 and later.
You can also specify in this file that certain rpms be removed before installing the new software. This is done by adding '-' before the names of the rpms you want to remove. For example:
rsct/src
rsct/rsct.core
rsct/rsct.core.utils
#INCLUDE:/install/post/otherpkgs/myotherlist#
-perl-doc
If you have #NEW_INSTALL_LIST# separators in your pkglist file, the rpms will be removed before the install of the sublist that the "-<rpmname>" appears in.
The "--" syntax is supported in xCAT 2.3 and later.
You can also specify in this file that certain rpms be removed after installing the new software. This is done by adding '--' before the names of the rpms you want to remove. For example:
pe/IBM_pe_license
--ibm-java2-ppc64-jre
If you have #NEW_INSTALL_LIST# separators in your pkglist file, the rpms will be removed after the install of the sublist that the "--<rpmname>" appears in.
The .pkglist file is used to specify the rpm names and the group/pattern names from the OS distro that will be installed on the nodes. It can contain the following types of entries:
* rpm name without version numbers
* group/pattern name marked with a '@' (for full install only)
* rpms to be removed after the installation, marked with a "-" (for full install only)
These are described in more detail in the following sections.
A simple .pkglist file just contains the names of the rpm files without the version numbers.
For example,
openssl
xntp
rsync
glibc-devel.i686
The #INCLUDE statement is supported in the pkglist file.
You can group some rpms in a file and include that file in the pkglist file using #INCLUDE:<file># format.
openssl
xntp
rsync
glibc-devel.i686
#INCLUDE:/install/post/custom/rh/myotherlist#
where /install/post/custom/rh/myotherlist is another package list file that follows the same format.
Note the trailing "#" character at the end of the line. It is important to specify this character for correct pkglist parsing.
It is only supported for stateful deployment.
In Linux, a set of rpms can be grouped together as a single named unit. It is called a group on RedHat, CentOS, Fedora and Scientific Linux. To get a list of the available groups, run
yum grouplist
On SLES, it is called a pattern. To list all the available patterns, run
zypper se -t pattern
You can specify in this file the group/pattern names by adding a '@' and a space before the group/pattern names. For example:
@ base
It is only supported for stateful deployment.
You can specify in this file that certain rpms be removed after installing the new software. This is done by adding '-' before the names of the rpms you want to remove. For example:
-wget
Internally, the updatenode command uses xdsh in the following ways:
Linux: xdsh <noderange> -e /install/postscripts/xcatdsklspost -m <server> <scripts>
AIX: xdsh <noderange> -e /install/postscripts/xcataixspost -m <server> -c 1 <scripts>
where <scripts> is a comma-separated list of postscripts, such as ospkgs,otherpkgs.
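For example, updating the ospkgs and otherpkgs scripts on a Linux node group named compute would result in a call similar to the following (10.2.0.100 stands in for your management or service node address):

xdsh compute -e /install/postscripts/xcatdsklspost -m 10.2.0.100 ospkgs,otherpkgs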
Wiki: Making_Images_Portable_in_xCAT
Wiki: Monitoring_an_xCAT_Cluster
Wiki: Postscripts_and_Prescripts
Wiki: Setting_Up_DB2_as_the_xCAT_DB
Wiki: Setting_Up_IBM_HPC_Products_on_a_Stateful_Login_Node
Wiki: Setting_Up_IBM_HPC_Products_on_a_Statelite_or_Stateless_Login_Node
Wiki: Setting_Up_MySQL_as_the_xCAT_DB
Wiki: Setting_Up_a_Linux_Hierarchical_Cluster
Wiki: Setting_up_ESSL_and_PESSL_in_a_Stateful_Cluster
Wiki: Setting_up_GPFS_in_a_Stateful_Cluster
Wiki: Setting_up_LoadLeveler_in_a_Stateful_Cluster
Wiki: Setting_up_PE_in_a_Stateful_Cluster
Wiki: Setting_up_RSCT_in_a_Stateful_Cluster
Wiki: Setting_up_all_IBM_HPC_products_in_a_Stateful_Cluster
Wiki: XCAT_BladeCenter_Linux_Cluster
Wiki: XCAT_Documentation
Wiki: XCAT_Linux_Statelite
Wiki: XCAT_NeXtScale_Clusters
Wiki: XCAT_iDataPlex_Cluster_Quick_Start
Wiki: XCAT_system_x_support_for_IBM_Flex