The history of high performance computing at the University of Cambridge 



  • Launch of Research Storage Services, a new offering from RCS providing petabyte-scale high performance storage to all University of Cambridge researchers.

  • RCS reaches multi-petaflop scale with the installation of the Cambridge Service for Data Driven Discovery (CSD3) facility, including CPU, GPU, many-core and big data analytics capability.

  • Peta4 achieves 1,696.7 TFlop/s and is positioned 75th in the November 2017 Top500 list, making it the fastest academic high performance computing system in the UK.



  • Launch of the Clinical Cloud service. The Wolfson Brain Imaging Centre and eligible research groups from the Clinical School are able to accelerate their research using cutting-edge secure computing and storage platforms.



  • Relocation of all research computing infrastructure to the West Cambridge Data Centre (WCDC). Built by the University at a cost of £20m, the WCDC is a state-of-the-art data centre designed to accommodate the rapid growth in demand for high performance computing. It provides the physical space, resilient power and cooling, and high level of security required of a facility delivering research computing services at a national level. This secures growth and enables RCS to enter the petascale supercomputing era.



  • The HPCS becomes part of the newly formed University Information Services (UIS). It is renamed Research Computing Services (RCS) and sits in a dedicated UIS division, led by Dr Calleja.



  • ‘Darwin III’ is relocated to a temporary data centre in order to accommodate increasing demand for data centre capacity.

  • The HPCS doubles its overall performance with the addition of the new GPU cluster ‘Wilkes I’, comprising 128 GPU nodes (256 Nvidia Tesla K20c GPUs).

  • Wilkes secures 2nd place on the Green500 list of the most energy efficient clusters in the world and is named the most energy efficient cluster in the world using conventional air cooling.



  • HPCS undergoes its next major upgrade, increasing the performance of its Darwin HPC system by a factor of 10. The ‘Darwin III’ system consists of 600 nodes (9,600 Intel Sandy Bridge cores) and attains a performance of 183.38 TFlop/s. It is positioned 93rd in the June 2012 Top500 list.

  • ‘Darwin III’ extends its research computing service to other UK universities as part of the DiRAC2 facility, funded by the Science and Technology Facilities Council (STFC).

  • HPCS joins the Square Kilometre Array (SKA) Science Data Processor (SDP) consortium.



  • The HPCS becomes part of the DiRAC facility for high performance computing.

  • With the support of DiRAC, the Darwin supercomputer is extended by an additional 128 nodes (1,536 Intel Westmere cores), creating ‘Darwin II’.

  • HPCS installs its first GPU cluster, consisting of 32 Nvidia Tesla GPU nodes.



  • HPCS migrates from DAMTP to the School of Physical Sciences.

  • HPCS supports Cambridge research groups working on the European Space Agency’s Planck satellite project.



  • Cambridge HPCS and Dell enter into partnership to form the Cambridge-Dell Solution Centre.



  • CCHPCF is renamed the High Performance Computing Service (HPCS) and extends its services to all University of Cambridge researchers.

  • With the arrival of the ‘Darwin I’ supercomputer, HPCS takes 20th position on the Top500 list of the fastest HPC systems in the world and becomes the fastest HPC system in the UK. ‘Darwin I’ is an 18.27 TFlop/s machine comprising 585 nodes and 2,340 Intel Woodcrest cores, funded by an SRIF3 grant.



  • Dr Paul Calleja joins as full-time director of the Cambridge-Cranfield High Performance Computing Facility (CCHPCF), based in the Department of Applied Mathematics and Theoretical Physics (DAMTP). The facility’s vision is to provide a high performance computing service for the University.


1996 – 2005 

  • The High Performance Computing Facility (HPCF) provides HPC to selected researchers within Cambridge through a variety of proprietary systems.