
High Performance Computing

Architecture

Nodes

As of November 2013, Darwin comprises two distinct flavours of compute node, each with its own CPU microarchitecture and Infiniband interconnect. GPU resources are now provided by the separate Wilkes cluster.

Sandy Bridge

The 9600 Sandy Bridge cores are provided by 600 nodes housed in quad-server Dell C6220 chassis (150 chassis in total). Each node consists of two 2.60 GHz eight-core Intel Sandy Bridge E5-2670 processors, giving sixteen cores in total and forming a single NUMA (Non-Uniform Memory Access) server with 64 GB of RAM (4 GB per core), 376 GB of local storage and a Mellanox FDR ConnectX-3 interconnect. Code run on these nodes should be optimised for CPUs supporting AVX instructions and should use Intel MPI (normally satisfied by compiling with mpicc/mpifort on a login-sand node with the -xHOST optimisation flag). Where available, modules with sandybridge in their name are the best choices on these nodes.
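
As an illustrative sketch only (the file name and exact flags here are assumptions; check the module list and current documentation on a login-sand node), a minimal MPI test program and the corresponding compile line might look like this:

/* hello_mpi.c -- minimal MPI test program (illustrative sketch).
 * Suggested build on a login-sand node, assuming the Intel MPI toolchain:
 *   mpicc -O2 -xHOST hello_mpi.c -o hello_mpi
 * -xHOST tells the Intel compiler to target the instruction set of the
 * build host, which on a login-sand node enables AVX.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}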

Westmere

The Westmere nodes were retired in 2014. The information below is now historical.
The 1536 Westmere cores are provided by 128 dual-socket Dell PowerEdge M610 half-height blade servers. Each node consists of two 2.67 GHz six-core Intel Westmere processors, giving twelve cores in total and forming a single NUMA (Non-Uniform Memory Access) unit with 36 GB of RAM (3 GB per core), 114 GB of local storage and a Mellanox ConnectX-2 interconnect. Code run on these nodes should be optimised for CPUs supporting SSE4.2 instructions and should use Intel MPI (normally satisfied by compiling with mpicc/mpifort on a login-west node with the -xHOST optimisation flag). Where available, modules with nehalem in their name are the best choices on these nodes.

Compute Racks

The cluster is arranged into 16 compute racks: 8 racks form the Sandy Bridge sector of the system, and 8 'racks' (really, blade chassis) make up the Westmere sector. All nodes within the Sandy Bridge sector, and all nodes within individual racks of the Westmere sector, are connected to a full bisectional bandwidth Infiniband network. Each Sandy Bridge rack consists of either 20 or 15 chassis, each chassis containing 4 servers and each server 16 cores. Each Westmere 'rack' consists of 16 twelve-core nodes (a single chassis), providing 192 cores. Between Sandy Bridge nodes there is a full bisectional bandwidth FDR Infiniband network (6000 MB/sec), and between Westmere racks a 75% bisectional bandwidth QDR Infiniband network (3200 MB/sec).
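
As a quick consistency check of these figures (the six/two split of 20- and 15-chassis Sandy Bridge racks is inferred, since it is the only way to reach 150 chassis across eight racks using those two sizes), the core counts can be reproduced as follows:

/* rack_count.c -- arithmetic check of the core counts described above.
 * The 6/2 split of 20- and 15-chassis Sandy Bridge racks is an inference:
 * it is the only split of 150 chassis across 8 racks with those sizes.
 */
#include <stdio.h>

int main(void)
{
    /* Sandy Bridge sector: 8 racks, quad-server chassis, 16 cores per server */
    int sb_chassis = 6 * 20 + 2 * 15;      /* 150 chassis */
    int sb_nodes   = sb_chassis * 4;       /* 600 nodes   */
    int sb_cores   = sb_nodes * 16;        /* 9600 cores  */

    /* Westmere sector: 8 chassis ("racks"), 16 blades each, 12 cores per blade */
    int wm_nodes = 8 * 16;                 /* 128 nodes   */
    int wm_cores = wm_nodes * 12;          /* 1536 cores  */

    printf("Sandy Bridge: %d chassis, %d nodes, %d cores\n",
           sb_chassis, sb_nodes, sb_cores);
    printf("Westmere:     %d nodes, %d cores\n", wm_nodes, wm_cores);
    return 0;
}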

In addition to the Infiniband networks, each compute rack has a full bisectional bandwidth gigabit Ethernet network for data and administration. Each node is also connected to a power/management network through which console functions for each box can be accessed remotely.