You aren't going to run all your computations as root, so you will need at least one normal user on the cluster. Because the user information is propagated via NIS, you only have to add the user on the master. But you can't put the user information in the normal place, i.e. in /etc/passwd, because we made /var/yp/ypfiles/ the home of this information. We did so in order to make it easier to share the same image between the NIS master and clients.
In order to add a user you have to be on the master node: either type kl-sshmaster from your local machine, or ssh master from one of the other machines.
For non-system users and their groups we will start from user-id 1001 (I'm skipping 1000 as newer Debian images seem to already have a user with that userid). Starting from around 1000 is both for traditional reasons and because NIS, as we have configured it, only transmits password information for users with userid at least 1000. Everyone on this cluster is a close collaborator, so we'll put them all in the same group, users. Add the following line to the currently empty file /var/yp/ypfiles/group:
users:!:2000:
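(If you'd rather append this from the shell than open an editor, something like the following should do it, using the file path given above:)
echo 'users:!:2000:' >> /var/yp/ypfiles/group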
I decided to start the GIDs from 2000, just to be different from the userids and reduce confusion. Now, suppose the user is dpovey. We'll add a line to the currently empty file /var/yp/ypfiles/passwd, reading something like the following:
dpovey::1001:2000:Daniel Povey, dpovey@gmail.com:/home/dpovey:/bin/bash
The format is Username:Password-hash:Userid:Group-id:Name,Office:Homedir:Shell. Next time you add a user you'll likely increment the user-id by one, e.g. you'll add a line like:
dvader::1002:2000:Darth Vader, dvader@empire.gov:/home/dvader:/bin/bash
This assumes Darth Vader prefers bash as his shell; since he is very old, this is not a sure bet, and he might prefer a C-based shell such as tcsh. But he can always change it later using the command ypchsh once his account is set up. You should also decide where to put the user's home directory. It has to be somewhere that is exported via NFS. I decided to put it in the directory /mnt/home1, on m1-01. So I added the following line to /var/yp/ypfiles/auto.home:
dpovey m1-01:/mnt/home1/dpovey
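If you later add the second user from the example above, the file simply gets one such line per user, e.g. something like the following (the dvader line is just an illustration; use whatever node and directory will actually hold that user's home):
dpovey m1-01:/mnt/home1/dpovey
dvader m1-01:/mnt/home1/dvader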
At this point none of the changes you made in /var/yp/ypfiles will have propagated: if you type id dpovey it will give an error, meaning the user does not exist, and if you do ypcat -k auto.home the output is empty. You have to do:
cd /var/yp
make
which will cause NIS to update its databases with the changed information. The user will likely need a password; since we are using NIS, this is set with the program yppasswd. That requires a root password to be in place, so just this first time, set a root password using the command
passwd
Next, change the user's password by typing yppasswd dpovey:
# yppasswd dpovey
Changing NIS account information for dpovey on master.
Please enter root password:
Changing NIS password for dpovey on master.
Please enter new password:
Please retype new password:
The NIS password has been changed on master.
Look carefully at the output: on failure it will say The NIS password has not been changed, which superficially looks similar. Next, you need to set up the user's home directory:
ssh m1-01
cp -r -T /etc/skel /mnt/home1/dpovey
chown -R dpovey:users /home/dpovey
If the chown command says something like no such user, then something went wrong setting or propagating the password or group information: check /var/yp/ypfiles/passwd and /var/yp/ypfiles/group, do make in /var/yp again, and check the NIS maps with ypcat -k passwd.byname and ypcat -k group. Also check that the files /etc/passwd and /etc/group have the crucial last lines with the "+" that pull in the NIS maps, and that your userid and group-id are no less than the MINUID and MINGID in /var/yp/Makefile.
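For reference, those trailing lines typically look something like the following, although the exact form may differ depending on how the image was set up: in /etc/passwd the last line is usually
+::::::
and in /etc/group it is usually
+:::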
To check that the user was added correctly and everything is working, do as follows:
su dpovey
cd
touch foo; rm foo
ssh master
and then enter the user's password that you just set up. It should accept it. You will now have to inform the user how to get to the master node. Use ec2din from your local machine to figure out your master node's public IP address. In our case it was 107.21.154.75. It is not the one that starts with 10.something, which is the private IP (the address ranges 10.* and 192.168.* are reserved for local networks). You can tell the user to add something like the following to their .ssh/config on their local machine (but changing the IP address and user-id to the relevant values):
Host master
  HostName 107.21.154.75
  ServerAliveInterval 60
  User dpovey
Then they should be able to just type ssh master to get there. You'll have to give users instructions about where they should put data if they have large quantities of it, e.g. in /export/<node-name> instead of their home directory.
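For example, to give dpovey a scratch directory on m1-01 you might do something like the following (this sketch assumes the /export directory on each node is named after that node; adjust the path to however your /export directories are actually laid out):
ssh m1-01
mkdir /export/m1-01/dpovey
chown dpovey:users /export/m1-01/dpovey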
If you want to give this user root access via the sudo command from the machine master, you can do as follows (I do it this way because I prefer to use emacs as my editor; by default it will use vi):
export EDITOR=emacs
visudo
## And edit the file /etc/sudoers by adding the line
dpovey ALL=(ALL) NOPASSWD:ALL
# note: you can also set it to something like:
# dpovey ALL=(ALL:ALL) ALL
# which will require a password to execute sudo commands.
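To verify that this worked, you can become the user and run a harmless command through sudo, e.g.:
su dpovey
sudo whoami
# should print root (and, with the NOPASSWD option above, should not ask for a password)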