<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Recent changes to SpawningMaster</title><link>https://sourceforge.net/p/kluster/wiki/SpawningMaster/</link><description>Recent changes to SpawningMaster</description><atom:link href="https://sourceforge.net/p/kluster/wiki/SpawningMaster/feed" rel="self"/><language>en</language><lastBuildDate>Fri, 05 Jun 2015 02:51:48 -0000</lastBuildDate><atom:link href="https://sourceforge.net/p/kluster/wiki/SpawningMaster/feed" rel="self" type="application/rss+xml"/><item><title>SpawningMaster modified by Carl Pupa</title><link>https://sourceforge.net/p/kluster/wiki/SpawningMaster/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v7
+++ v8
@@ -10,7 +10,7 @@
 Now we will spawn the master node.  There is an easier way to do this, which we will demonstrate later, but for now we'll do it manually.  Using the AMI-ID that you created at the end of the last section and the cluster-name you chose ("mycluster" in this example), do as follows:

     :::text
-    ec2run ami-74128a1d -g mycluster -k mycluster -z us-east-1c -t m1.large
+    ec2run ami-74128a1d -g mycluster -k mycluster -z us-east-1c -t c3.large

 **It is important to tag this machine**, as some of the scripts we'll use later expect it to have a particular type of tag.  Type as follows, changing the instance-id to your instance-id and 'mycluster' to whatever name you chose for your cluster:
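
 For example (the instance-id here is illustrative; substitute the one reported for your own instance):

     :::text
     ec2tag i-41401821 --tag Name=mycluster-master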

&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Carl Pupa</dc:creator><pubDate>Fri, 05 Jun 2015 02:51:48 -0000</pubDate><guid>https://sourceforge.net3dfe3c0b846f4f42a55fc1f7cd97df799a62ba94</guid></item><item><title>SpawningMaster modified by Daniel Povey</title><link>https://sourceforge.net/p/kluster/wiki/SpawningMaster/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v6
+++ v7
@@ -44,7 +44,7 @@

 Check that it says 1% or a similarly small number.  If not, the start-up script that formats that ephemeral storage may still be running: type `ps -A | grep mkfs` to see.  
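
 While the format is still running you might see something like this (a sketch; the PID and filesystem type will differ):

     :::text
     ## illustrative output only
     # ps -A | grep mkfs
      2345 ?        00:00:03 mkfs.ext3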

-The disks associated with the running nodes will become available automatically as `/export/`, for example, `/export/n1`, if we have a node called `n1`.  But be careful because these will only exist as long as the nodes are running, and the data will be lost if you shut them down or even stop them with `ec2stop`.  Also note that these directories are automounted, so if you do `ls /export` you won't see anything there; you have to do `ls /export/master`, or use the name of some other node.  Later, in [we'll discuss how to attach more permanent storage.
+The disks associated with the running nodes will become available automatically as `/export/`, for example, `/export/n1`, if we have a node called `n1`.  But be careful because these will only exist as long as the nodes are running, and the data will be lost if you shut them down or even stop them with `ec2stop`.  Also note that these directories are automounted, so if you do `ls /export` you won't see anything there; you have to do `ls /export/master`, or use the name of some other node.  Later, in [Attaching EBS storage to nodes](AttachingEBS), we'll discuss how to attach more permanent storage.
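
 To see the automount behaviour concretely (a sketch; `master` is the node name used on this page):

     :::text
     ls /export            ## appears empty: nothing has been mounted yet
     ls /export/master     ## referencing the path triggers the automount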

 Previous: [Customizing your Image (Phase 2)](CustomizingImage2)
 Next: [Setting up your Kluster Config](SettingConfig)
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Daniel Povey</dc:creator><pubDate>Sun, 06 Oct 2013 21:44:17 -0000</pubDate><guid>https://sourceforge.netee625cefea92c18cb30c9a1589f7e0bf82c37fc8</guid></item><item><title>WikiPage SpawningMaster modified by Daniel Povey</title><link>https://sourceforge.net/p/kluster/wiki/SpawningMaster/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v5
+++ v6
@@ -15,7 +15,7 @@
 **It is important to tag this machine**, as some of the scripts we'll use later expect it to have a particular type of tag.  Type as follows, changing the instance-id to your instance-id and 'mycluster' to whatever name you chose for your cluster:

     :::text
-    ec2tag i-41401821 --tag Name=mycluster_master
+    ec2tag i-41401821 --tag Name=mycluster-master

 Now get the internet name of this machine using `ec2din` and ssh to it, as before.  Let's check that a few things are working.  It might be best to wait a few minutes after the machine comes up, as the startup scripts might take some time.  First check that NIS is working:

&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Daniel Povey</dc:creator><pubDate>Mon, 25 Mar 2013 04:28:10 -0000</pubDate><guid>https://sourceforge.net2932da79038cc6d57e5b6775fa6863c2f86dc0bb</guid></item><item><title>WikiPage SpawningMaster modified by Daniel Povey</title><link>https://sourceforge.net/p/kluster/wiki/SpawningMaster/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v4
+++ v5
@@ -2,20 +2,24 @@

 Let's suppose you have created an AMI as described in the previous sections and you want to set up a master node.  There are a few things you should probably do beforehand.  If you had *not* already created a security group and key with a suitable name (such as "mycluster"), you would do it now: from your local machine, in the kluster directory:

+    :::text
     ## you won't really have to do this as it was done earlier:
     kl-create-sg  mycluster
     kl-create-key mycluster

-Now we will spawn the master node.  Using the AMI-ID that you created at the end of the last section and the cluster-name you chose ("mycluster" in this example), do as follows:
+Now we will spawn the master node.  There is an easier way to do this, which we will demonstrate later, but for now we'll do it manually.  Using the AMI-ID that you created at the end of the last section and the cluster-name you chose ("mycluster" in this example), do as follows:

+    :::text
     ec2run ami-74128a1d -g mycluster -k mycluster -z us-east-1c -t m1.large
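
 `ec2run` responds with a RESERVATION line followed by an INSTANCE line; the instance-id you'll need in the next step is the `i-...` field of the INSTANCE line.  Abridged, illustrative output:

     :::text
     RESERVATION     r-12345678      123456789012    mycluster
     INSTANCE        i-41401821      ami-74128a1d    ...     pending ...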

 **It is important to tag this machine**, as some of the scripts we'll use later expect it to have a particular type of tag.  Type as follows, changing the instance-id to your instance-id and 'mycluster' to whatever name you chose for your cluster:

-    tag i-41401821 --tag Name=mycluster_master
+    :::text
+    ec2tag i-41401821 --tag Name=mycluster_master

 Now get the internet name of this machine using `ec2din` and ssh to it, as before.  Let's check that a few things are working.  It might be best to wait a few minutes after the machine comes up, as the startup scripts might take some time.  First check that NIS is working:
-    
+
+    :::text
     # ypcat -k auto.master
     /export auto.export     -rw,intr,rsize=8192,wsize=8192,timeo=1000,retrans=5,bg,retry=5,proto=tcp,actimeo=10
     /home auto.home       -rw,intr,rsize=8192,wsize=8192,timeo=1000,retrans=5,bg,retry=5,proto=tcp,actimeo=10
@@ -23,6 +27,7 @@

 Next check that GridEngine is working:

+    :::text
     # qhost -q
     HOSTNAME                ARCH         NCPU  LOAD  MEMTOT  MEMUSE  SWAPTO  SWAPUS
     -------------------------------------------------------------------------------
@@ -32,12 +37,15 @@

 Next check that the scripts which mount the ephemeral storage of the node and export it via NFS are working:

+    :::text
     # df /export/master
     Filesystem           1K-blocks      Used Available Use% Mounted on
     /mnt/local           433455904    203012 411234588   1% /export/master

-Check that it says 1% or a similarly small number.  If not, the start-up script that formats that ephemeral storage may still be running: type `ps -A | grep mkfs` to see.  The disks associated with the running nodes will become available automatically as `/export/`, for example, `/export/n1`, if we have a node called `n1`.  But be careful because these will only exist as long as the nodes are running, and the data will be lost if you shut them down or even stop them with `ec2stop`.  Also note that these directories are automounted, so if you do `ls /export` you won't see anything there; you have to do `ls /export/master`, or use the name of some other node.  In the next section we'll discuss how to attach more permanent storage.
+Check that it says 1% or a similarly small number.  If not, the start-up script that formats that ephemeral storage may still be running: type `ps -A | grep mkfs` to see.  

-Next: [Attaching EBS storage](AttachingEBS)
+The disks associated with the running nodes will become available automatically as `/export/`, for example, `/export/n1`, if we have a node called `n1`.  But be careful because these will only exist as long as the nodes are running, and the data will be lost if you shut them down or even stop them with `ec2stop`.  Also note that these directories are automounted, so if you do `ls /export` you won't see anything there; you have to do `ls /export/master`, or use the name of some other node.  Later, in [we'll discuss how to attach more permanent storage.
+
+Previous: [Customizing your Image (Phase 2)](CustomizingImage2)
+Next: [Setting up your Kluster Config](SettingConfig)
 Up: [Kluster Wiki](Home)
-
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Daniel Povey</dc:creator><pubDate>Mon, 25 Mar 2013 03:10:21 -0000</pubDate><guid>https://sourceforge.net3a2b7a8423db9cd7e5d64b7552bf87faa44216ca</guid></item><item><title>WikiPage SpawningMaster modified by Daniel Povey</title><link>https://sourceforge.net/p/kluster/wiki/SpawningMaster/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v3
+++ v4
@@ -1,4 +1,4 @@
-# Spawning the Master Node
+## Spawning the Master Node

 Let's suppose you have created an AMI as described in the previous sections and you want to set up a master node.  There are a few things you should probably do beforehand.  If you had *not* already created a security group and key with a suitable name (such as "mycluster"), you would do it now: from your local machine, in the kluster directory:

&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Daniel Povey</dc:creator><pubDate>Sun, 24 Mar 2013 00:16:00 -0000</pubDate><guid>https://sourceforge.netecbada29036c595787c92eaa2cd7695643d40c57</guid></item><item><title>WikiPage SpawningMaster modified by Daniel Povey</title><link>https://sourceforge.net/p/kluster/wiki/SpawningMaster/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v2
+++ v3
@@ -10,8 +10,11 @@

     ec2run ami-74128a1d -g mycluster -k mycluster -z us-east-1c -t m1.large

-Get the internet name of this machine and ssh to it, as before.  Let's check that a few things are working.  It might be best to wait a few minutes after the machine comes up, as the startup scripts might take some time.  First check that NIS is working:
+**It is important to tag this machine**, as some of the scripts we'll use later expect it to have a particular type of tag.  Type as follows, changing the instance-id to your instance-id and 'mycluster' to whatever name you chose for your cluster:

+    tag i-41401821 --tag Name=mycluster_master
+
+Now get the internet name of this machine using `ec2din` and ssh to it, as before.  Let's check that a few things are working.  It might be best to wait a few minutes after the machine comes up, as the startup scripts might take some time.  First check that NIS is working:

     # ypcat -k auto.master
     /export auto.export     -rw,intr,rsize=8192,wsize=8192,timeo=1000,retrans=5,bg,retry=5,proto=tcp,actimeo=10
@@ -33,9 +36,8 @@
     Filesystem           1K-blocks      Used Available Use% Mounted on
     /mnt/local           433455904    203012 411234588   1% /export/master

-Check that it says 1% or a similarly small number.  If not, the start-up script that formats that ephemeral storage may still be running: type `ps -A | grep mkfs` to see.  The disks associated with the running nodes will become available automatically as `/export/`, for example, `/export/n1`, if we have a node called `n1`.  But be careful because these will only exist as long as the nodes are running, and the data will be lost if you shut them down or even stop them with `ec2stop`.  In the next section we'll discuss how to attach more permanent storage.
+Check that it says 1% or a similarly small number.  If not, the start-up script that formats that ephemeral storage may still be running: type `ps -A | grep mkfs` to see.  The disks associated with the running nodes will become available automatically as `/export/`, for example, `/export/n1`, if we have a node called `n1`.  But be careful because these will only exist as long as the nodes are running, and the data will be lost if you shut them down or even stop them with `ec2stop`.  Also note that these directories are automounted, so if you do `ls /export` you won't see anything there; you have to do `ls /export/master`, or use the name of some other node.  In the next section we'll discuss how to attach more permanent storage.

 Next: [Attaching EBS storage](AttachingEBS)
 Up: [Kluster Wiki](Home)

-
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Daniel Povey</dc:creator><pubDate>Sat, 23 Mar 2013 21:53:10 -0000</pubDate><guid>https://sourceforge.net954ab9268d8cbf5882e09a2e4c1b9ed516a8f15f</guid></item><item><title>WikiPage SpawningMaster modified by Daniel Povey</title><link>https://sourceforge.net/p/kluster/wiki/SpawningMaster/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v1
+++ v2
@@ -2,8 +2,40 @@

 Let's suppose you have created an AMI as described in the previous sections and you want to set up a master node.  There are a few things you should probably do beforehand.  If you had *not* already created a security group and key with a suitable name (such as "mycluster"), you would do it now: from your local machine, in the kluster directory:

+    ## you won't really have to do this as it was done earlier:
     kl-create-sg  mycluster
     kl-create-key mycluster

+Now we will spawn the master node.  Using the AMI-ID that you created at the end of the last section and the cluster-name you chose ("mycluster" in this example), do as follows:

-**Unfinished**
+    ec2run ami-74128a1d -g mycluster -k mycluster -z us-east-1c -t m1.large
+
+Get the internet name of this machine and ssh to it, as before.  Let's check that a few things are working.  It might be best to wait a few minutes after the machine comes up, as the startup scripts might take some time.  First check that NIS is working:
+
+    
+    # ypcat -k auto.master
+    /export auto.export     -rw,intr,rsize=8192,wsize=8192,timeo=1000,retrans=5,bg,retry=5,proto=tcp,actimeo=10
+    /home auto.home       -rw,intr,rsize=8192,wsize=8192,timeo=1000,retrans=5,bg,retry=5,proto=tcp,actimeo=10
+
+
+Next check that GridEngine is working:
+
+    # qhost -q
+    HOSTNAME                ARCH         NCPU  LOAD  MEMTOT  MEMUSE  SWAPTO  SWAPUS
+    -------------------------------------------------------------------------------
+    global                  -               -     -       -       -       -       -
+    master                  lx26-amd64      2  0.00    7.5G  276.5M    2.9G     0.0
+    all.q                BIP   0/0/1
+
+Next check that the scripts which mount the ephemeral storage of the node and export it via NFS are working:
+
+    # df /export/master
+    Filesystem           1K-blocks      Used Available Use% Mounted on
+    /mnt/local           433455904    203012 411234588   1% /export/master
+
+Check that it says 1% or a similarly small number.  If not, the start-up script that formats that ephemeral storage may still be running: type `ps -A | grep mkfs` to see.  The disks associated with the running nodes will become available automatically as `/export/`, for example, `/export/n1`, if we have a node called `n1`.  But be careful because these will only exist as long as the nodes are running, and the data will be lost if you shut them down or even stop them with `ec2stop`.  In the next section we'll discuss how to attach more permanent storage.
+
+Next: [Attaching EBS storage](AttachingEBS)
+Up: [Kluster Wiki](Home)
+
+
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Daniel Povey</dc:creator><pubDate>Sat, 23 Mar 2013 06:01:13 -0000</pubDate><guid>https://sourceforge.net60122673916e29f3399a01a93956b58faae0a88e</guid></item><item><title>WikiPage SpawningMaster modified by Daniel Povey</title><link>https://sourceforge.net/p/kluster/wiki/SpawningMaster/</link><description>&lt;div class="markdown_content"&gt;&lt;h1 id="spawning-the-master-node"&gt;Spawning the Master Node&lt;/h1&gt;
&lt;p&gt;Let's suppose you have created an AMI as described in the previous sections and you want to set up a master node.  There are a few things you should probably do beforehand.  If you had &lt;em&gt;not&lt;/em&gt; already created a security group and key with a suitable name (such as "mycluster"), you would do it now: from your local machine, in the kluster directory:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;kl&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;create&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;sg&lt;/span&gt;  &lt;span class="n"&gt;mycluster&lt;/span&gt;
&lt;span class="n"&gt;kl&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;create&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt; &lt;span class="n"&gt;mycluster&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Unfinished&lt;/strong&gt;&lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Daniel Povey</dc:creator><pubDate>Fri, 22 Mar 2013 23:48:13 -0000</pubDate><guid>https://sourceforge.netec3eba0f23911424096cd688d0605c1bda4cdc3b</guid></item></channel></rss>