AttachingEBS

Daniel Povey

Attaching EBS storage to nodes

In order to have relatively permanent storage attached to our cluster, we will use Amazon's "Elastic Block Store"-- these are basically virtual disks that you can attach to nodes. You can create these disks in any size you specify. Our experience for speech recognition applications is that the cost of CPU is orders of magnitude more than the cost of storage, so don't obsess too much over the size. I usually allocate them with a size of 100GB. We can give them an arbitrary name but try to make it unique:

# ec2addvol -s 100 -z us-east-1c
VOLUME  vol-995797ea    100     us-east-1c  creating    2013-03-23T06:10:10+0000    standard
# ec2tag vol-995797ea  --tag Name=mycluster_home1
TAG volume  vol-995797ea    Name    mycluster_home1

We will be making these disks available by attaching each one to a particular machine-- not necessarily the master-- and exporting them to the other machines via NFS. Be careful what you use these EBS volumes for, though-- Amazon will not give you fast access to "vanilla" EBS volumes except in short-term bursts. If you want faster access, there are two options:

  • Use the "iops-provisioned" EBS volumes which are a bit more expensive (add e.g. -t io1 -i 1000 to the command above), or
  • Use the "ephemeral storage" of the nodes, which our setup makes available by default as /export/<node-name>.

If you use the second option, be careful as it will be lost when you stop or terminate the node, or if it crashes. You will have to copy it to somewhere else before you shut the node down, if you want to keep it permanently.
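
As a sketch of that last step (the helper name and paths below are hypothetical-- substitute your own node name and an NFS-backed destination):

```shell
# Hypothetical helper: copy a node's ephemeral directory to permanent
# storage before stopping the node. Nothing here is kluster-specific.
backup_ephemeral() {
  local src=$1 dst=$2
  mkdir -p "$dst"
  cp -a "$src/." "$dst/"
}

# Example, using this page's path conventions (adjust to your cluster):
#   backup_ephemeral /export/m1-01 /export/home1/m1-01-backup
```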

The volume can be attached to any machine-- not necessarily the master. In fact it is not a good idea to have all volumes attached to the same machine, and I prefer to leave the master without NFS volumes so its network does not get overloaded. We'll attach this one to the node m1-01. From your local machine, do:

ec2-attach-volume vol-995797ea -i i-7705cd17 -d /dev/xvdz
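
Attachment is asynchronous, so the volume may only show up after a few seconds. If you are scripting this, a small polling helper can wait for it (this helper is just a sketch, not part of the kluster tools):

```shell
# Poll a command every few seconds until its output contains a word,
# e.g.:  wait_for attached ec2dvol vol-995797ea
wait_for() {
  local word=$1; shift
  until "$@" | grep -q "$word"; do sleep 5; done
}
```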

You can tell whether it attached correctly by doing ec2dvol vol-995797ea; it should say attached. Since the volume is newly created, you will have to format it as a file system. From m1-01, do:

device=/dev/xvdz
cmp <(head -c 1000 $device)  <(head -c 1000 /dev/zero) && mkfs -t ext3 $device

This command is the same as mkfs -t ext3 /dev/xvdz; the part with cmp is intended to make it harder for you to accidentally format an already formatted disk. If you attach more than one EBS volume to the same machine, you will have to use a different device name: choose /dev/xvdy, for example.
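
To see how the guard behaves, you can try it on a scratch file instead of a real device (the file names here are made up for illustration; note the real command only compares the first 1000 bytes of the device):

```shell
# Simulate a blank device: 1000 bytes of zeros.
zeros=$(mktemp); head -c 1000 /dev/zero > "$zeros"
scratch=$(mktemp); head -c 1000 /dev/zero > "$scratch"

# Blank "disk": cmp succeeds, so the && would let mkfs run.
if cmp -s "$scratch" "$zeros"; then blank=yes; else blank=no; fi

# Write something (as a filesystem superblock would): cmp now fails,
# so the && would stop mkfs from clobbering the data.
printf 'not zeros any more' > "$scratch"
if cmp -s "$scratch" "$zeros"; then formatted_blank=yes; else formatted_blank=no; fi

rm -f "$zeros" "$scratch"
```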

You can now mount the volume in /mnt: from m1-01, do:

mkdir /mnt/home1
mount /dev/xvdz /mnt/home1
chmod a+rwx /mnt/home1
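
Note that this mount will not survive a reboot of m1-01. If you want it re-mounted automatically, one option (a sketch, not something our setup does for you) is an /etc/fstab line like:

```
/dev/xvdz  /mnt/home1  ext3  defaults,nofail  0  2
```

The nofail option stops the boot from hanging if the EBS volume happens not to be attached at that time.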

You can check that it worked as follows: df should tell you it is device /dev/xvdz.

root@m1-01:~# df /mnt/home1
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvdz             10321208    154232   9642688   2% /mnt/home1

We'll want to export this via NFS, so on m1-01, do:

echo '/mnt/home1      *(rw,sync,no_root_squash)' >> /etc/exports
service nfs-kernel-server reload

The command service nfs-kernel-server reload will print some harmless warnings about 'subtree_check'. If we wanted this directory to be automounted as /export/home1, we could do so as follows: on the master,

cd /var/yp
echo "home1 m1-01:/mnt/home1" >> ypfiles/auto.export
make

If you did this, to verify that it worked you can do ls /export/home1:

# ls /export/home1
lost+found

Later we'll show you an easier way to do this using a kluster script.

Previous: Adding Nodes
Next: Adding a user
Up: Kluster Wiki


Related

Wiki: AddingNodes
Wiki: AddingUser
Wiki: Home
Wiki: SpawningMaster
