<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Recent changes to XCAT_zVM</title><link>https://sourceforge.net/p/xcat/wiki/XCAT_zVM/</link><description>Recent changes to XCAT_zVM</description><language>en</language><lastBuildDate>Tue, 02 Sep 2014 23:37:23 -0000</lastBuildDate><atom:link href="https://sourceforge.net/p/xcat/wiki/XCAT_zVM/feed" rel="self" type="application/rss+xml"/><item><title>XCAT_zVM modified by &lt;REDACTED&gt;</title><link>https://sourceforge.net/p/xcat/wiki/XCAT_zVM/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v561
+++ v562
@@ -66,7 +66,7 @@

-[[img src=architecture.png]] **Figure 1**. Shows the layout of xCAT on System z.
+[[img src=Architecture.png]] **Figure 1**. Shows the layout of xCAT on System z.

 xCAT can be used to manage virtual servers spanning multiple z/VM partitions. The xCAT management node (MN) runs on any Linux virtual server. It manages each z/VM partition through a System z hardware control point (zHCP) running on a privileged Linux virtual server. The zHCP interfaces with the z/VM systems management API (SMAPI), the directory manager (DirMaint), and the control program (CP) layer to manage the z/VM partition. It uses a C socket interface to communicate with the SMAPI layer and the VMCP Linux module to communicate with the CP layer. 

&lt;/pre&gt;
&lt;/div&gt;</description><pubDate>Tue, 02 Sep 2014 23:37:23 -0000</pubDate><guid>https://sourceforge.net87978a6d81ec7548353c963768226edcf5ac6f18</guid></item><item><title>XCAT_zVM modified by &lt;REDACTED&gt;</title><link>https://sourceforge.net/p/xcat/wiki/XCAT_zVM/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v560
+++ v561
@@ -1294,7 +1294,7 @@
         MASTER=$ms_ip
         gmond_conf="/etc/ganglia/gmond.conf"
         gmond_conf_old="/etc/gmond.conf"
-        if [ $OS != "AIX" ]; then
+        if [ $OS != "AIX" ]; then
             if [ -f  $gmond_conf ]; then
                 grep "xCAT gmond settings done" $gmond_conf
                 if [ $? -gt 0 ]; then
@@ -1310,7 +1310,7 @@
             fi
         fi

-        if [ $OS != "AIX" ]; then
+        if [ $OS != "AIX" ]; then
             if [ -f $gmond_conf_old ]; then
                 grep "xCAT gmond settings done" $gmond_conf_old
                 if [ $? -gt 0 ]; then
@@ -1551,9 +1551,9 @@
             Running Transaction Test
             Transaction Test Succeeded
             Running Transaction
-              Installing : ganglia-gmetad-3.1.1-1.s390x                                 1/3 
-              Installing : ganglia-web-3.1.1-1.s390x                                    2/3 
-              Installing : ganglia-gmond-3.1.1-1.s390x                                  3/3 
+              Installing : ganglia-gmetad-3.1.1-1.s390x                                 1/3 
+              Installing : ganglia-web-3.1.1-1.s390x                                    2/3 
+              Installing : ganglia-gmond-3.1.1-1.s390x                                  3/3 
             duration: 73(ms)
             Installed products updated.

@@ -2488,9 +2488,9 @@
   5. Add the disk and mount point to the template using the following format: 

-        clearpart --initlabel –drives=dasda,dasdb
-        part / --fstype ext3 --size=100 --grow –ondisk=dasda
-        part /usr --fstype ext3 --size=100 --grow –ondisk=dasdb
+        clearpart --initlabel --drives=dasda,dasdb
+        part / --fstype ext3 --size=100 --grow  --ondisk=dasda
+        part /usr --fstype ext3 --size=100 --grow  --ondisk=dasdb

 In the example above, a disk is added with a device name of _dasdb_. The disk will be mounted at _/usr_ and will have an _ext3_ file system. 
@@ -2662,7 +2662,7 @@

         lo        Link encap:Local Loopback  
                   inet addr:127.0.0.1  Mask:255.0.0.0
-                  inet6 addr: ::1/128 Scope:Host
+                  inet6 addr: ::1/128 Scope:Host
                   UP LOOPBACK RUNNING  MTU:16436  Metric:1
                   RX packets:29 errors:0 dropped:0 overruns:0 frame:0
                   TX packets:29 errors:0 dropped:0 overruns:0 carrier:0
@@ -2771,7 +2771,7 @@

         lo        Link encap:Local Loopback  
                   inet addr:127.0.0.1  Mask:255.0.0.0
-                  inet6 addr: ::1/128 Scope:Host
+                  inet6 addr: ::1/128 Scope:Host
                   UP LOOPBACK RUNNING  MTU:16436  Metric:1
                   RX packets:29 errors:0 dropped:0 overruns:0 frame:0
                   TX packets:29 errors:0 dropped:0 overruns:0 carrier:0
&lt;/pre&gt;
&lt;/div&gt;</description><pubDate>Mon, 18 Aug 2014 12:40:20 -0000</pubDate><guid>https://sourceforge.netb173735eb8d3e50662d1160692ee95bfabe02e35</guid></item><item><title>XCAT_zVM modified by &lt;REDACTED&gt;</title><link>https://sourceforge.net/p/xcat/wiki/XCAT_zVM/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v559
+++ v560
@@ -1,4 +1,4 @@
-[[img src=Official-xcat-doc.png]] 
+![](http://sourceforge.net/p/xcat/wiki/XCAT_Documentation/attachment/Official-xcat-doc.png)

 [TOC]

&lt;/pre&gt;
&lt;/div&gt;</description><pubDate>Tue, 12 Aug 2014 22:47:59 -0000</pubDate><guid>https://sourceforge.netb5d612972b7c9bfaef206f9f401e3ccf63551a35</guid></item><item><title>Discussion for XCAT_zVM page</title><link>https://sourceforge.net/p/xcat/wiki/XCAT_zVM/</link><description>&lt;div class="markdown_content"&gt;&lt;p&gt;{{:XCAT Discussion Page Header}} &lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Bruce</dc:creator><pubDate>Mon, 23 Jun 2014 19:18:21 -0000</pubDate><guid>https://sourceforge.net7b6bb38212b9a88e701bfc0e8d96fa27cd2b9d34</guid></item><item><title>XCAT_zVM modified by Thang Pham</title><link>https://sourceforge.net/p/xcat/wiki/XCAT_zVM/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v558
+++ v559
@@ -225,7 +225,7 @@

 Options supported are: 

-  * Add a disk to a disk pool defined in the EXTENT CONTROL. The disk has to already be attached to SYSTEM and formatted using CPFORMAT.  
+  * Add a disk to a disk pool defined in the EXTENT CONTROL. The disk has to already be attached to SYSTEM and formatted using CPFMTXA or CPFORMAT.  
 The syntax is: `chhypervisor &amp;lt;node&amp;gt; --adddisk2pool [function] [region] [volume] [group]`. Function type can be either: (4) Define region as full volume and add to group OR (5) Add existing region to group. If the volume already exists in the EXTENT CONTROL, use function 5. If the volume does not exist in the EXTENT CONTROL, but is attached to SYSTEM, use function 4.  

&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Thang Pham</dc:creator><pubDate>Mon, 23 Jun 2014 19:18:20 -0000</pubDate><guid>https://sourceforge.netd6887b717cd7c98fb8370a39d37defffe4d8357b</guid></item><item><title>XCAT_zVM modified by Thang Pham</title><link>https://sourceforge.net/p/xcat/wiki/XCAT_zVM/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v557
+++ v558
@@ -225,7 +225,7 @@

 Options supported are: 

-  * Add a disk to a disk pool defined in the EXTENT CONTROL. The disk has to already be attached to SYSTEM.  
+  * Add a disk to a disk pool defined in the EXTENT CONTROL. The disk has to already be attached to SYSTEM and formatted using CPFORMAT.  
 The syntax is: `chhypervisor &amp;lt;node&amp;gt; --adddisk2pool [function] [region] [volume] [group]`. Function type can be either: (4) Define region as full volume and add to group OR (5) Add existing region to group. If the volume already exists in the EXTENT CONTROL, use function 5. If the volume does not exist in the EXTENT CONTROL, but is attached to SYSTEM, use function 4.  

&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Thang Pham</dc:creator><pubDate>Mon, 23 Jun 2014 19:18:15 -0000</pubDate><guid>https://sourceforge.netc9e5345d2a3362b94120a9a22f3e58d98e309cbf</guid></item><item><title>XCAT_zVM modified by Thang Pham</title><link>https://sourceforge.net/p/xcat/wiki/XCAT_zVM/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v556
+++ v557
@@ -1033,7 +1033,7 @@

 ## Installing Linux Using SCSI/FCP

-This section provides details on the installation of Linux using SCSI/FCP. This feature is only available in the development build of xCAT at this time. 
+This section provides details on the installation of Linux using SCSI/FCP. This feature is only available in the development build of xCAT at this time. xCAT has limited support for SCSI/FCP devices. Features such as NPIV are not currently supported, but will be in subsequent releases. 

  1. Log on to the xCAT MN as root using a PuTTY terminal
  2. Create the z/VM hypervisor definition (if not already defined) 
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Thang Pham</dc:creator><pubDate>Mon, 23 Jun 2014 19:18:13 -0000</pubDate><guid>https://sourceforge.netb84cdd00416bbd8135f8e5466868c5bc06617f3a</guid></item><item><title>XCAT_zVM modified by Thang Pham</title><link>https://sourceforge.net/p/xcat/wiki/XCAT_zVM/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v555
+++ v556
@@ -291,7 +291,7 @@

   * Add a zFCP device to a device pool defined in xCAT. The device must have been carved up in the storage controller and configured with a WWPN/LUN before it can be added to the xCAT storage pool. z/VM does not have the ability to communicate directly with the storage controller to carve up disks dynamically.  
-The syntax is: `chhypervisor &amp;lt;node&amp;gt; --addzfcp2pool [pool] [state (free or used)] [wwpn] [lun] [size] [owner (optional)]`. Multiple WWPNs can be specified for the same LUN (multi-pathing), each separated with a semi-colon. 
+The syntax is: `chhypervisor &amp;lt;node&amp;gt; --addzfcp2pool [pool] [state (free or used)] [wwpn] [lun] [size] [range (optional)] [owner (optional)]`. Multiple WWPNs can be specified for the same LUN (multi-pathing), each separated with a semi-colon. 

         # chhypervisor pokdev61 --addzfcp2pool zfcp1 free 500501234567C890 4012345600000000 8G
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Thang Pham</dc:creator><pubDate>Mon, 23 Jun 2014 19:18:11 -0000</pubDate><guid>https://sourceforge.netb9aee7f1ece687381d5b995ffc23cfa62f0dd4dd</guid></item><item><title>XCAT_zVM modified by Thang Pham</title><link>https://sourceforge.net/p/xcat/wiki/XCAT_zVM/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v554
+++ v555
@@ -826,12 +826,12 @@
   * `action` can be: (MOVE) initiate a VMRELOCATE MOVE of the VM, (TEST) determine if VM is eligible to be relocated, or (CANCEL) stop the relocation of VM.
   * `force` can be: (ARCHITECTURE) attempt relocation even though hardware architecture facilities or CP features are not available on destination system, (DOMAIN) attempt relocation even though VM would be moved outside of its domain, or (STORAGE) relocation should proceed even if CP determines that there are insufficient storage resources on destination system.
   * `immediate` can be: (YES) VMRELOCATE command will do one early pass through virtual machine storage and then go directly to the quiesce stage, or (NO) specifies immediate processing.
-  * `max_total` is the maximum wait time for relocation to complete.
-  * `max_quiesce` is the maximum quiesce time a VM may be stopped during a relocation attempt.
-    
-    
-    # rmigrate gpok3 destination=poktst62
-    gpok3: Running VMRELOCATE against LNX3... Done
+  * `max_total` is the maximum wait time for relocation to complete. Optional.
+  * `max_quiesce` is the maximum quiesce time a VM may be stopped during a relocation attempt. Optional.
+    
+    
+    # rmigrate gpok3 destination=poktst62 action=MOVE immediate=NO force="ARCHITECTURE DOMAIN STORAGE"
+    gpok3: Running VMRELOCATE action=MOVE against LNX3... Done

 `xdsh` \- Concurrently runs commands on multiple nodes.  
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Thang Pham</dc:creator><pubDate>Mon, 23 Jun 2014 19:18:05 -0000</pubDate><guid>https://sourceforge.netd12829d26b375b4e0f64771de6367fab75e6783e</guid></item><item><title>XCAT_zVM modified by Thang Pham</title><link>https://sourceforge.net/p/xcat/wiki/XCAT_zVM/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v553
+++ v554
@@ -90,8 +90,6 @@
 Note: You should cleanly shut down the node by issuing `rpower &amp;lt;node&amp;gt; softoff`.

-
-
 `mkvm` \- Creates a new virtual server with the same profile/resources as the specified node (cloning). Alternatively, creates a new virtual server based on a directory entry.  
 The syntax is: `mkvm &amp;lt;new node&amp;gt; /tmp/&amp;lt;directory_entry_text_file&amp;gt;`

@@ -249,14 +247,15 @@
         gpok2: Adding ECKD disk to system... Done

-  * Dynamically add a SCSI disk to a running z/VM system.  
-The syntax is: `chhypervisor &amp;lt;node&amp;gt; --addscsi [scsi_device_number] [device_path] [option] [persist]`. 
+  * Dynamically add a SCSI disk to a running z/VM system. The SCSI disk is added to the system as an EDEV.  
+The syntax is: `chhypervisor &amp;lt;node&amp;gt; --addscsi [device_number] [device_path] [option] [persist]`. 
+    * `device_number` is the device number.
     * `device_path` is a comma separated string containing the FCP device number, WWPN, and LUN.
     * `option` can be: (1) add new SCSI (default), (2) add new path, or (3) delete path.
     * `persist` can be: (YES) SCSI device updated in active and configured system, or (NO) SCSI device updated only in active system.

-        # chhypervisor pokdev61 --addscsi 1 "1A89,500512345678c411,4012345100000000;1A89,500512345678c411,4012345200000000" 2 YES
+        # chhypervisor pokdev61 --addscsi 9000 "1A23,500512345678c411,4012345100000000;1A89,500512345678c411,4012345200000000" 2 YES
         gpok2: Adding a real SCSI disk to system... Done

@@ -330,12 +329,12 @@
         gpok2: Removing POOL1... Done

-  * Delete a real SCSI disk.  
+  * Delete a real SCSI disk (EDEV).  
 The syntax is: `chhypervisor &amp;lt;node&amp;gt; --removescsi [device number] [persist (YES or NO)]`. 
     * `persist` can be: (NO) SCSI device is deleted on the active system, or (YES) SCSI device is deleted from the active system and permanent configuration for the system.

-        # chhypervisor pokdev61 --removescsi 1B89 YES
+        # chhypervisor pokdev61 --removescsi 9000 YES
         pokdev61: Deleting a real SCSI disk for system... Done

&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Thang Pham</dc:creator><pubDate>Mon, 23 Jun 2014 19:18:03 -0000</pubDate><guid>https://sourceforge.netad782021eb66a41b50822277a64b48f31dbe7855</guid></item></channel></rss>