All users should connect to Darwin initially through the login nodes, via ssh.
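For example, assuming the generic login address is login.hpc.cam.ac.uk (an assumption based on the hpc.cam.ac.uk addresses used elsewhere in this document):

ssh <username>@login.hpc.cam.ac.uk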
Individual login nodes login-sand1 - login-sand8 may be explicitly accessed in the same way (the generic names login and login-sand select a random node from this set).
See this faq for an explanation of "Sandy Bridge", "Nehalem" and "Westmere". All types of login node have the same access to filesystems. In addition, the nodes login-gfx1 and login-gfx2 have NVIDIA Quadro FX5800 graphics cards and can be used (by both paying and non-paying users) for remote visualisation.
Note that compiling code with maximum optimisation requires the compiler to make different choices according to the type of CPU on which the final binary is intended to run. An incorrect choice of optimisation options may create a binary that, for example, runs on the Sandy Bridge nodes but fails to run on the older Westmere or Nehalem/Tesla nodes. In general, when compiling on the login-sand (Sandy Bridge) login nodes, the -xHOST Intel compiler flag optimises for the newer Sandy Bridge compute nodes exclusively. On the other hand, using the same flag on the login-gfx nodes generates a binary that runs everywhere but sub-optimally on the Sandy Bridge nodes. To generate a binary capable of running efficiently on all current types of CPU present in Darwin, use the -xSSE4.2 -axAVX Intel compiler options (without -xHOST).
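For example, a portable optimised build from a login node might look like the following (the source and output file names are illustrative, and -O3 is simply a typical optimisation level):

icc -O3 -xSSE4.2 -axAVX -o prog prog.c

or, for Fortran:

ifort -O3 -xSSE4.2 -axAVX -o prog prog.f90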
The HPCS uses CRSids where they exist, so the <username>@ part can usually be omitted if your local system also uses your CRSid as your username. Note that although systems within cam.ac.uk should be able to connect, other systems may not - please see this faq and contact support@hpc.cam.ac.uk in case of difficulty.
It is possible to change your initial password using the usual unix command passwd on a login node. However, the security of both users' data and the service itself depends strongly on choosing a password sensibly, which in the age of automated cracking programs unfortunately means the following:
- Use at least 8 characters
- Use a mixture of upper and lower case letters, numbers and at least one non-alphanumeric character
- Do not use dictionary words, common proper nouns or simple rearrangements of these
- Do not use family names, user identifiers, car registrations, media references,...
- Do not re-use a password in use on another system (this is for damage limitation in case of a compromise somewhere).
Passwords should be treated like credit card numbers (and not left around, emailed or shared etc). The above rules are similar to those which apply to systems elsewhere.
Please see here for a brief summary of available filesystems and the rules governing them.
We use the modules environment extensively. A module can, for instance, be associated with a particular version of the Intel compiler or with a different MPI library. Loading a module establishes the environment required to find the related include and library files at compile-time and run-time.
By default the environment is such that the most commonly required modules are already loaded. It is possible to see what modules are loaded by using the command 'module list':
[sjr20@login-sand1 ~]$ module list
Currently Loaded Modulefiles:
  1) dot                    5) java/jdk1.7.0_07        9) global                    13) default-impi
  2) torque                 6) turbovnc/1.1           10) intel/cce/126.96.36.1999
  3) scheduler              7) vgl/2.3.1/64           11) intel/fce/188.8.131.529
  4) gold/184.108.40.206    8) intel/impi/4.0.3.008   12) intel/mkl/10.3.10.319
The above shows that torque and the scheduler (the job queueing system software), as well as the Intel compilers and the Intel MPI environment, are loaded (these are actually loaded as a result of loading the default-impi module, which is loaded automatically on login).
To permanently change what modules are loaded by default, edit your ~/.bashrc file, e.g. adding
module load cfitsio/intel/3.300
will ensure that the next shell (or the current shell if you issue source ~/.bashrc) will also have the Intel compiler-compiled version of cfitsio 3.300 loaded into the environment.
module load <module>     -> load module
module unload <module>   -> unload module
module purge             -> unload all modules
module list              -> show currently loaded modules
module avail             -> show available modules
module whatis            -> show available modules with brief explanation
In some cases, alternative modules may exist with sandybridge in their name. These are the best choices for use on the new Sandy Bridge compute nodes, but they may not work on the Westmere/Tesla nodes (as they may contain instructions not supported by the Westmere/Nehalem CPUs). To list such modules one can do:
module av sandybridge
The default environment should be correctly established automatically via the modules system and the shell initialisation scripts. For example, essential system software for compilation, credit and quota management, job execution and scheduling, error-correcting wrappers and recommended MPI settings are all applied in this way. This works by setting the PATH and LD_LIBRARY_PATH environment variables, amongst others, to particular values. Please be careful if you edit your ~/.bashrc file, as incorrect changes can wreck the default settings and create many problems, potentially rendering the account unusable until an administrator intervenes. In particular, if you wish to modify PATH or LD_LIBRARY_PATH, please be sure to preserve the existing settings, e.g. do
export PATH=/your/custom/path/element:$PATH
export LD_LIBRARY_PATH=/your/custom/librarypath/element:$LD_LIBRARY_PATH
and don't simply overwrite the existing values, or you will have problems. If you are trying to add directories relating to centrally-installed software, please note that there is probably a module available which can be loaded to adjust the environment correctly.
Users who are returning to Darwin after some time should check their ~/.bashrc files and if necessary DELETE any pre-existing line
module load default-infinipath
as this will now interfere with the proper environment settings on the Sandy Bridge/Westmere nodes.
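A quick way to check for such a line (a simple grep, shown purely as an illustration):

grep default-infinipath ~/.bashrc

If this prints anything, edit ~/.bashrc and remove the offending line.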
Note that the default-impi module, which is loaded by default, arranges for mpicc, mpif90 etc to be found and to use the Intel compilers automatically when invoked. These wrapper commands supply the correct flags for compiling with Intel MPI (which is the recommended and default MPI implementation for all current compute nodes).
When compiling code, it is usually possible to remove any direct MPI library references from your Makefile, as mpicc and mpif90 will take care of these details. Simply set the compiler variables in the Makefile to the MPI wrapper commands, or define them in the environment before compilation.
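For example (an illustrative sketch; the variable names depend on your own Makefile):

CC = mpicc
FC = mpif90

or, equivalently, in the shell before running make:

export CC=mpicc
export FC=mpif90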
To generate a binary capable of running efficiently on all current types of CPU present in Darwin, use the -xSSE4.2 -axAVX Intel compiler options.
If some required libraries are missing, please let us know and we can try to install them centrally (as a module).
Please note that the following resource limits apply:
Please see the example job submission scripts /usr/local/Cluster-Docs/Torque/mpi_submit*. These are example scripts for launching an MPI application on the various types of node via the queueing system:
Copying this example file and then modifying the top section (where indicated) will create a script suitable for submission to the batch queueing system via the command qsub.
Darwin operates the Torque batch queueing system (a derivative of OpenPBS) with the Maui scheduler.
Some useful commands:
showq          -> show global cluster information
checkjob -v    -> check your job
qsub           -> submits an executable script to the queueing system
qdel           -> delete a job
mybalance -h   -> show current balance of core hour credits
Once your application is compiled, e.g. to a binary called prog, it can be submitted to the queueing system as follows (we assume it is destined for Sandy Bridge).
Firstly, copy the template PBS submission script:
cp /usr/local/Cluster-Docs/Torque/mpi_submit.sandybridge mpi_submit
(Note that for convenience newer users may have symbolic links to these template files in their home directories - these are read-only, so making a copy is still necessary.) Edit the copy mpi_submit, setting application to "prog" and workdir to the current directory. Set options to contain any desired command line options, e.g. ">outfile 2>errfile" would redirect stdout and stderr to files which could be monitored while the job runs. Note the comment line in the script:
#PBS -l nodes=8:ppn=16,mem=512000mb,walltime=02:00:00
This is a comment to bash, but is interpreted by PBS as a request to use 8 nodes with 16 CPU cores per node (which, since each Sandy Bridge node has 16 cores, fully utilises each node) and 512000MB of memory in total (i.e. 64000MB, all of the memory, on each node), for 2 hours of wall time (i.e. actual elapsed time as measured by a clock on the wall, rather than CPU time). Finally, the following command submits the job to the queueing system:
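qsub mpi_submit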
Please note that each Sandy Bridge node has 16 CPU cores, and each Westmere node has 12 cores, hence ppn should not be set larger than these numbers in each case (or the job will never run). It may however be set to less than the maximum if, for example, your MPI job requires more memory per task than there is available per core (and so you want to use fewer cores on each node).
Furthermore, the Sandy Bridge/Westmere nodes each have no more than 64000 MB/36000 MB of memory respectively available in practice. Thus mem should not be set to more than number of nodes x 64000 MB on Sandy Bridge (or x 36000 MB for Westmere). In general it is good practice to specify memory requirements as it enables the scheduler to make better decisions. Note however that we always allocate entire nodes exclusively to a job, therefore it is recommended that the full amount of memory across the desired number of nodes is always requested. This prevents Maui from incorrectly reducing the number of nodes if the job appears to fit into a smaller number (because, for example, you are using a ppn less than the maximum value).
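For example, a request for two Westmere nodes, using all cores and all memory on each, might look like this (an illustrative line following the limits above):

#PBS -l nodes=2:ppn=12,mem=72000mb,walltime=01:00:00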
The job can be monitored with qstat or qstat -an; alternatively use showq (e.g. showq -r, showq -i, showq -b for running/idle/blocked jobs, and add -u username to focus on a particular user's jobs).
The job can be deleted with qdel <job_id>.
When the job finishes (in error or correctly) there will normally be two files created in the submission directory: one normal output file with a name ending in .oNNNN, and an error file with a name ending in .eNNNN (where NNNN is the job id).