High Performance Computing Service

Connecting to HPCS systems

Login to the HPCS

The HPCS has several login nodes, for reliability and to share the interactive workload. For security reasons it is necessary to use SSH to connect. This should not be a problem, since any decent Linux distribution includes the ssh client for interactive logins, together with scp, sftp and rsync, which use SSH natively for file transfers. If your Linux/MacOSX/UNIX machine does not have ssh, first check your distribution for it (making sure you get the most recent version available); alternatively, download and build OpenSSH yourself, or use PuTTY on Microsoft Windows.

From Linux/MacOSX/UNIX (or from within the Cygwin environment in Windows) do the following to connect:

ssh <username>@login.hpc.cam.ac.uk

The above connects to a random member of the set of default login nodes (at the time of writing these are the Sandy Bridge login nodes). It is also possible to specify a particular node by name: in the case of the Sandy Bridge nodes, one can specify simply login-sand, or explicitly one of login-sand1 to login-sand8. The login-sand nodes are appropriate for the development of code intended to run on the Sandy Bridge compute nodes (sand-[1-10]-[1-60]). Development work for code intended to run on the Nehalem nodes (i.e. the Tesla GPU subcluster, nodes tesla[01-32]) or the Westmere nodes (west-[11-18]-[1-16]) should be carried out on the Westmere-type login nodes (login-west, or explicitly one of login-west1 to login-west3), as shown below.
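
For example, to start a session on a Westmere login node in order to build code for the Tesla or Westmere compute nodes (abc123 here stands for your own username):

ssh abc123@login-west.hpc.cam.ac.uk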

Note that you may need to add the -X option to the above ssh command in order to enable transparent forwarding of X applications to your local screen (this assumes that you have an X server running on your local machine). The OpenSSH version of ssh sometimes requires -Y instead of -X (try -Y if applications fail to appear or die with errors). These options direct the X communication through an encrypted tunnel and should "just work". In the bad old days before ssh, the highly insecure method of persuading X windows to display on remote screens involved commands such as xauth and xhost - you do not, and never will, need to use either of these with ssh, and if you try to use them you may expose everything you type (including passwords) to being read, and even changed, by evil-doers.
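
For example, to log in with X forwarding enabled and verify that it works (xterm is simply a convenient test application; any X program will do):

ssh -X abc123@login.hpc.cam.ac.uk
xterm   # run on the login node; a terminal window should appear on your local screen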

The HPCS uses CRSids where these exist, and so the <username>@ part can usually be omitted if your local system (in Cambridge) also uses them. Note that although systems in cam.ac.uk should be able to connect immediately, other systems may require explicit permission - please see the FAQ and contact support@hpc.cam.ac.uk in case of difficulty.

Similarly, incoming scp, sftp or rsync connections to the HPCS should also be accepted. Doing the same in the outward direction should likewise work, provided the other system does not block SSH.

Password

It is possible to change your initial password using the usual UNIX command passwd on a login node. However, the security of both users' data and the service itself depends strongly on choosing a password sensibly, which in the age of automated cracking programs unfortunately means the following:

  • Use at least 8 characters (preferably 10)
  • Use a mixture of upper and lower case letters, numbers and at least one non-alphanumeric character
  • Do not use dictionary words, common proper nouns or simple rearrangements of these
  • Do not use family names, user identifiers, car registrations, media references,...
  • Do not re-use a password in use on another system (this is for damage limitation in case of a compromise somewhere).

Passwords should be treated like credit card numbers (and not left lying around, shared, etc.). The above rules are similar to those which apply to multiuser systems elsewhere.
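
For example, after logging in to any login node:

passwd   # prompts for the current password, then twice for the new one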

Remote desktops

It is also possible to connect to a remote VNC desktop session on a Darwin login node. Please see the page on Remote desktops & 3D visualization for details.

File transfers

Any method of file transfer that operates over SSH (e.g. scp, sftp, rsync) should work to or from Darwin, provided SSH access works in the same direction. Thus systems from which it is possible to log in should likewise have no difficulty using scp/sftp/rsync, and connections outwards from Darwin to remote machines should also work, provided the other system does not block SSH (unfortunately some sites do, and even more unfortunately, some even block outgoing SSH). In whichever direction the initial connection is made, files can then be transferred either way. Note that obsolete and insecure methods such as ftp and rcp will not work (nor should you wish to use such things).

Any UNIX-like system (such as a Linux or MacOSX machine) should already have scp, sftp and rsync (or be able to install them from native media). Similarly, these tools can be installed on Windows systems as part of the Cygwin environment. An alternative providing drag-and-drop operation under Windows is WinSCP, and in the same vein MacOSX or Windows users might consider Cyberduck.
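
As a quick illustration, the following commands copy a single file in each direction with scp (input.dat and output.dat are just example filenames, and abc123 again stands for your username):

scp input.dat abc123@login.hpc.cam.ac.uk:     # local file to your home directory on the HPCS
scp abc123@login.hpc.cam.ac.uk:output.dat .   # remote file to the current local directory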

Of the command-line tools mentioned here, rsync is possibly the fastest, the most sophisticated and also the most dangerous. The man page is extensive, but as an example the following command will copy a directory called results in your home directory on Darwin to the directory from_darwin/results on the local side (where rsync is being run on your local machine and your username is assumed to be abc123):

rsync -av abc123@login.hpc.cam.ac.uk:results from_darwin

Note that a final / on the source directory is significant for rsync - it indicates that only the contents of the directory should be transferred (so specifying results/ in the above example would result in the contents being copied straight into from_darwin instead of into from_darwin/results). A pleasant feature of rsync is that repeating the same command will transfer only those files which appear to have been updated (based on size and modification timestamp). Rsync also validates each actual transfer by comparing checksums.
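
For example, the following two commands differ only in the trailing slash, and therefore in where the copied files end up:

rsync -av abc123@login.hpc.cam.ac.uk:results from_darwin    # creates from_darwin/results
rsync -av abc123@login.hpc.cam.ac.uk:results/ from_darwin   # copies the contents directly into from_darwin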

On directories containing many files rsync can be slow (as it has to examine each file individually). A less sophisticated but faster way to transfer such trees may be to pipe tar through ssh, although the final copy should then be verified, either by explicitly computing and comparing checksums, or perhaps by running rsync -avc between the original and the copy (which performs the equivalent comparison and automatically re-transfers any files which fail it; see the example further below). For example, here is the same directory /home/abc123/results on Darwin copied to from_darwin/results on the local machine using this method:

cd from_darwin
ssh -e none abc123@login.hpc.cam.ac.uk 'cd /home/abc123 && tar -cf - results' | tar -xvBf -

In the above, the remote cd command is not actually necessary (an ssh session starts in the home directory anyway), but it serves to illustrate how to navigate to a transfer directory in a location other than the home directory.
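
To verify such a copy afterwards using the rsync -avc method mentioned above (checksum comparison, with automatic re-transfer of any files which fail it):

rsync -avc abc123@login.hpc.cam.ac.uk:results/ from_darwin/results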

More on login nodes

In terms of hardware a login node is equivalent to a cluster node, but it is connected to the external network and does not itself run jobs. It is intended for:

  • compiling code
  • developing applications
  • submitting applications to the cluster for execution
  • monitoring running applications
  • post-processing and managing data.

On Darwin there are multiple login nodes, representing each CPU type present in the cluster, in order to improve reliability, share the interactive workload and provide a suitable development platform for each architecture. The name login-sand.hpc.cam.ac.uk (alias login.hpc.cam.ac.uk) points automatically to one of the Sandy Bridge login nodes - the IP address to which it resolves will belong to one of:

login-sand1.hpc.cam.ac.uk
login-sand2.hpc.cam.ac.uk
login-sand3.hpc.cam.ac.uk
login-sand4.hpc.cam.ac.uk
login-sand5.hpc.cam.ac.uk
login-sand6.hpc.cam.ac.uk
login-sand7.hpc.cam.ac.uk
login-sand8.hpc.cam.ac.uk

The name login-west.hpc.cam.ac.uk points automatically to one of the Westmere login nodes - the IP address to which this resolves will belong to one of:

login-west1.hpc.cam.ac.uk
login-west2.hpc.cam.ac.uk
login-west3.hpc.cam.ac.uk

Rather than specifying one of the above nodes explicitly by name, it is usually better to use one of the login, login-sand or login-west aliases, as these are more likely to land on a relatively lightly loaded login node.
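
For example, connecting through the alias and then running the standard hostname command will show which physical login node you actually reached:

ssh abc123@login-sand.hpc.cam.ac.uk
hostname   # prints the name of the particular login-sand node you landed on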

Additionally, there are two (Nehalem) login nodes equipped with NVIDIA Quadro FX5800 graphics cards, for CUDA development and remote (3D) visualization:

login-gfx1.hpc.cam.ac.uk
login-gfx2.hpc.cam.ac.uk

If you have a VNC session running on one of these nodes (or on any of the non-graphical login nodes), you will of course need to connect to that node specifically, using its individual name, in order to reach the correct session.
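
As a minimal sketch of reaching such a session, you might tunnel the VNC port through SSH; this example assumes your session is display :1 on login-gfx1 (VNC display :1 conventionally corresponds to TCP port 5901), after which a VNC viewer on your local machine would connect to localhost:5901 - see the Remote desktops & 3D visualization page for the definitive procedure:

ssh -L 5901:localhost:5901 abc123@login-gfx1.hpc.cam.ac.uk   # forward local port 5901 to the VNC server on login-gfx1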