High Performance Computing

Complete cluster documentation, including detailed hardware specifications, can be found on the cluster documentation page. If you have already requested access to Turing and would like to learn how to use it, please see our Turing Basic User Guide.

Turing Research Cluster

Housed in the Fuller Laboratories server room, Turing is WPI's primary research cluster for computational science, serving 73 faculty members in 14 departments.

Turing was initially acquired through an NSF MRI grant (DMS-1337943) and consisted of a head node and 24 compute nodes connected by a high-speed Infiniband interconnect. The original system remains in place, but since July 2016 it has undergone a complete management and software overhaul and numerous expansions.

A full list of the departments using Turing for research:

  • Bioinformatics & Computational Biology
  • Biology
  • Biomedical Engineering
  • Civil Engineering
  • Chemical Engineering
  • Chemistry & Biochemistry
  • Computer Science
  • Data Science
  • Electrical & Computer Engineering
  • Fire Protection Engineering
  • Mathematics
  • Mechanical Engineering
  • Physics
  • Robotics Engineering

The following is a representative list of the computational science applications running on Turing; a brief illustrative example follows the list:

  • Abaqus
  • Ansys
  • Blast
  • C
  • Caffe
  • Canu
  • COMSOL
  • CP2K
  • Dealii
  • FDS
  • Fluent
  • Fortran
  • Java
  • Keras
  • LAMMPS
  • LS-DYNA
  • Matlab
  • MOPAC
  • NWChem
  • OpenFOAM
  • Petsc
  • Python
  • Quantum Espresso
  • R
  • Tensorflow
  • VASP
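
As a brief, hedged illustration of how such packages are typically exercised on a GPU node, the Python sketch below checks whether a GPU-enabled TensorFlow build can see any of the cluster's GPUs. It is an illustrative example only, not part of the official Turing documentation, and it assumes TensorFlow is already available in the user's environment.

    # Minimal, illustrative sanity check: confirm that a GPU-enabled
    # TensorFlow build can see the GPUs on the current node.
    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    print(f"Visible GPUs: {len(gpus)}")
    for gpu in gpus:
        print(gpu)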

Turing is managed using Bright Cluster Manager version 8 and runs Ubuntu 20.04.

Hardware Summary

Turing consists of a head node, login node, and 46 compute nodes (total of 48 servers).

Total CPU/RAM/GPU counts across all 46 compute nodes are as follows:

Primary Purpose    CPU Cores    RAM        GPUs
Compute            3492         40.5 TB    60

Cluster Network Connections

The entire cluster is connected through three internal networks:

  • In-band Gigabit Ethernet (provisioning, single 48-port switch)
  • Out-of-band management network (BMC, single 48-port switch)
  • FDR Infiniband (compute, three 36-port switches wired in a fat-tree configuration)

The Turing head node and login node are each connected to the WPI network through a single 10 Gb Ethernet connection.

Storage

The Turing cluster is connected to a 560 TB high-performance VAST scale-out storage system, providing home directories and scratch space. Additionally, VAST provides hourly snapshots of the home directories, enabling self-service file recovery.
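
As a rough sketch of what self-service recovery can look like, the Python example below assumes the hourly snapshots are exposed through a hidden .snapshot directory inside the home directory; the directory name, snapshot layout, and file paths are assumptions made for illustration, and the Turing Basic User Guide should be consulted for the actual procedure.

    # Illustrative sketch only: restore one file from the most recent hourly
    # snapshot, assuming snapshots are exposed under ~/.snapshot (an assumption).
    from pathlib import Path
    import shutil

    home = Path.home()
    snapshots = sorted((home / ".snapshot").iterdir())  # assumes names sort chronologically
    latest = snapshots[-1]                              # most recent snapshot

    # Hypothetical file path, used purely for illustration.
    shutil.copy2(latest / "project" / "results.csv", home / "project" / "results.csv")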

Last Updated on October 12, 2023