High Performance Computing

Complete cluster documentation, including detailed hardware specifications, can be found on the cluster documentation page. If you have already requested access to Turing and would like to learn how to use it, please see our Turing Basic User Guide.

Turing Research Cluster

Housed in the Fuller Laboratories server room, Turing is the primary research cluster for computational science across WPI, serving over 75 different faculty members across 14 departments.

Turing was initially acquired through an NSF MRI grant (DMS-1337943) and consisted of a head node and 24 compute nodes connected by a high-speed InfiniBand interconnect. The original system remains in place but has undergone a complete management and software overhaul, along with numerous expansions, since July 2016.

Examples of departments using Turing for research:

  • Bioinformatics & Computational Biology
  • Biology
  • Biomedical Engineering
  • Civil Engineering
  • Chemical Engineering
  • Chemistry & Biochemistry
  • Computer Science
  • Data Science
  • Electrical & Computer Engineering
  • Fire Protection Engineering
  • Mathematics
  • Mechanical Engineering
  • Physics
  • Robotics Engineering

Examples of software accessible on Turing include:

  • Languages: Python, MATLAB, R, Java
  • Compilers: GCC, LLVM, Intel, CUDA
  • Proprietary Simulation: Ansys/Fluent, COMSOL, Abaqus, VASP
  • Open Source Apps: OpenFOAM, NWChem, FDS, LAMMPS
  • AI Libraries: Keras, TensorFlow
  • Libraries: BLAS, PETSc, deal.II

Turing is managed with Bright Cluster Manager and runs Ubuntu 20.04.
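Job submission details are covered in the Turing Basic User Guide. Purely as an illustrative sketch, assuming the software above is exposed through environment modules and that the cluster uses the Slurm workload manager that Bright Cluster Manager commonly deploys, a minimal batch script might look like the following (the module name, resource values, and script name are hypothetical, not documented Turing settings):

```shell
#!/bin/bash
# Hypothetical Slurm batch script -- all directive values and the module
# name below are illustrative assumptions, not actual Turing settings.
#SBATCH --job-name=example        # job name shown in the queue
#SBATCH --ntasks=4                # number of parallel tasks
#SBATCH --mem=8G                  # memory request
#SBATCH --gres=gpu:1              # request one GPU
#SBATCH --time=01:00:00           # wall-clock time limit

module load python                # assumed module name
srun python my_script.py          # run the task under Slurm
```

Such a script would be submitted with `sbatch`; consult the Turing Basic User Guide for the partitions, module names, and limits that actually apply on Turing.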

Hardware Summary

Turing consists of a 4-node hyperconverged head that controls 79 compute nodes.

Total CPU/RAM/GPU counts across all 79 compute nodes are as follows:

CPU       RAM       GPU
5224      49 TB     84

The distribution of GPUs across the compute nodes is:

GPU Type    A100    A30    H100    V100
Count       28      36     10      10
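As a quick sanity check, the per-model counts in the distribution table sum to the cluster-wide GPU total given in the hardware summary:

```python
# Per-model GPU counts from the distribution table above.
gpus = {"A100": 28, "A30": 36, "H100": 10, "V100": 10}

# Summing the models reproduces the 84-GPU total from the hardware summary.
total = sum(gpus.values())
print(total)  # 84
```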

Cluster Network Connections

The cluster uses an internal 100 Gb Ethernet network for low-latency message passing and storage access.

The cluster is connected to the WPI network through eight aggregated 10 Gb Ethernet connections.

Storage

The Turing cluster has a 560 TB high-performance VAST scale-out storage system, providing home directories and scratch space. Additionally, VAST provides hourly snapshots of the home directories, enabling self-service file recovery.

Last Updated on June 17, 2024