Let's suppose you have created an AMI as described in the previous sections and you want to set up a master node. There are a few things to do beforehand. If you have not already created a security group and key with a suitable name (such as "mycluster"), do so now from your local machine, in the kluster directory:
## you won't need to do this if you already did it earlier:
kl-create-sg mycluster
kl-create-key mycluster
Now we will spawn the master node. There is an easier way to do this, which we will demonstrate later, but for now we'll do it manually. Using the AMI-ID you created at the end of the last section and the cluster-name you chose ("mycluster" in this example), run:
ec2run ami-74128a1d -g mycluster -k mycluster -z us-east-1c -t c3.large
It is important to tag this machine, as some of the scripts we'll use later expect it to have a particular type of tag. Run the following, substituting your own instance-id and whatever name you chose for your cluster:
ec2tag i-41401821 --tag Name=mycluster-master
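If you prefer not to copy the instance-id by hand, you can capture it from the ec2run output. This is a sketch, assuming the classic EC2 API tools' output format, where the INSTANCE line carries the instance-id in its second field:

```shell
# Hypothetical helper: pull the instance-id from the INSTANCE line of
# ec2run output (second whitespace-separated field in the classic
# EC2 API tools format).
get_instance_id() {
  awk '/^INSTANCE/ {print $2; exit}'
}

# Usage sketch (same arguments as above):
#   iid=$(ec2run ami-74128a1d -g mycluster -k mycluster \
#         -z us-east-1c -t c3.large | get_instance_id)
#   ec2tag "$iid" --tag Name=mycluster-master
```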
Now get the internet name of this machine using ec2din and ssh to it, as before. Once logged in, check that a few things are working. It may be best to wait a few minutes after the machine comes up, as the startup scripts can take some time to finish. First, check that NIS is working:
# ypcat -k auto.master
/export auto.export -rw,intr,rsize=8192,wsize=8192,timeo=1000,retrans=5,bg,retry=5,proto=tcp,actimeo=10
/home auto.home -rw,intr,rsize=8192,wsize=8192,timeo=1000,retrans=5,bg,retry=5,proto=tcp,actimeo=10
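As an aside, the ec2din lookup and ssh step above can also be scripted. This is a sketch, assuming the classic EC2 API tools' output format, where the public DNS name is the fourth field of the INSTANCE line (the field layout can shift while an instance is still pending, so only use this once the instance is running):

```shell
# Hypothetical helper: extract the public DNS name (fourth field of the
# INSTANCE line in classic EC2 API tools output) from ec2din output.
get_public_dns() {
  awk '/^INSTANCE/ {print $4; exit}'
}

# Usage sketch (substitute your own instance-id and key file):
#   host=$(ec2din i-41401821 | get_public_dns)
#   ssh -i ~/.ssh/mycluster.pem root@"$host"
```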
Next check that GridEngine is working:
# qhost -q
HOSTNAME ARCH NCPU LOAD MEMTOT MEMUSE SWAPTO SWAPUS
-------------------------------------------------------------------------------
global - - - - - - -
master lx26-amd64 2 0.00 7.5G 276.5M 2.9G 0.0
all.q BIP 0/0/1
Next check that the scripts which mount the ephemeral storage of the node and export it via NFS are working:
# df /export/master
Filesystem 1K-blocks Used Available Use% Mounted on
/mnt/local 433455904 203012 411234588 1% /export/master
Check that Use% shows 1% or a similarly small number. If not, the start-up script that formats the ephemeral storage may still be running; type ps -A | grep mkfs to check.
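If you would rather wait for the format to finish than poll by hand, a small loop like the following works (a sketch; the ten-second interval is an arbitrary choice):

```shell
# Poll until no mkfs process is left; the bracket in '[m]kfs' keeps
# grep's own command line from matching itself.
wait_for_mkfs() {
  while ps -A | grep -q '[m]kfs'; do
    echo "mkfs still running on the ephemeral storage; waiting..."
    sleep 10
  done
}

# Usage:
#   wait_for_mkfs && df /export/master
```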
The disks associated with running nodes become available automatically as /export/<node-name>; for example, /export/n1 if we have a node called n1. Be careful: these directories exist only while the nodes are running, and the data is lost if you shut the nodes down or even stop them with ec2stop. Also note that these directories are automounted, so ls /export shows nothing; you have to run ls /export/master (or the name of some other node) to trigger the mount. Later, in Attaching EBS storage to nodes, we'll discuss how to attach more permanent storage.