Complete cluster documentation, including detailed hardware specifications, can be found on the cluster documentation page.
Turing Research Cluster
Housed in the Gateway Park server room, Turing is WPI's primary research cluster for computational science, serving 73 faculty members across 14 departments.
Turing was initially acquired through an NSF MRI grant (DMS-1337943) and consisted of the head node and 24 compute nodes connected with a high-speed InfiniBand interconnect. The main system remains in place, but has undergone a complete management and software overhaul, as well as three expansions, since July 2016.
A full list of the departments using Turing for research:
- Bioinformatics & Computational Biology
- Biology
- Biomedical Engineering
- Civil Engineering
- Chemical Engineering
- Chemistry & Biochemistry
- Computer Science
- Data Science
- Electrical & Computer Engineering
- Fire Protection Engineering
- Mathematics
- Mechanical Engineering
- Physics
- Robotics Engineering
A representative list of the computational science applications running on Turing includes:
- Abaqus
- Ansys
- Blast
- C
- Caffe
- Canu
- COMSOL
- CP2K
- Dealii
- FDS
- Fluent
- Fortran
- Java
- Keras
- LAMMPS
- LS-DYNA
- Matlab
- MOPAC
- NWChem
- OpenFOAM
- Petsc
- Python
- Quantum Espresso
- R
- Tensorflow
- VASP
Turing is managed using Bright Cluster Manager version 8 and runs Red Hat Enterprise Linux 7.2.
Hardware Summary
Turing consists of a head node, login node, and 46 compute nodes (total of 48 servers).
Total CPU/RAM/GPU counts across all 46 compute nodes are as follows:
| Primary Purpose | CPU | RAM | GPU |
|---|---|---|---|
| Compute | 1326 | 9.2 TB | 64 |
Cluster Network Connections
The entire cluster is connected through three internal networks:
- In-band Gigabit Ethernet (provisioning, single 48-port switch)
- Out-of-band management network (BMC, single 48-port switch)
- FDR InfiniBand (compute, three 36-port switches wired in a fat-tree configuration)
The Turing head node and login node are each connected to the WPI network through a single 10 Gb Ethernet connection.
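Compute traffic between nodes travels over the FDR InfiniBand fabric, which is what multi-node (distributed-memory) jobs rely on. As an illustration only, the following is a minimal MPI sketch in Python; it assumes mpi4py and an MPI library are available in the user's environment, which this page does not guarantee.

```python
# Minimal distributed-memory "hello world" using mpi4py (assumed available in
# the user's environment). Each MPI rank reports its hostname; in a multi-node
# run, rank-to-rank communication would traverse the InfiniBand fabric.
import socket
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's rank, 0 .. size-1
size = comm.Get_size()   # total number of ranks in the job

# Collect every rank's hostname on rank 0 and print a one-line summary there.
hosts = comm.gather(socket.gethostname(), root=0)
if rank == 0:
    print(f"{size} ranks ran on nodes: {sorted(set(hosts))}")
```

A job like this would typically be launched across nodes with an MPI launcher such as mpirun; the exact launcher and module setup depend on the cluster's configured software stack.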
Storage
The Turing head node houses a 28 TB disk array used for user storage of large data sets or scratch files from simulations. Users are advised that this storage space is not backed up in any way.
Turing shares users' home directories on a network storage array (a Qumulo storage cluster) with the Ace cluster, providing high-performance storage during simulations. The shared home directories allow for a seamless code development/testing/production simulation life cycle for computational science research.
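To make the intended split concrete, here is a small Python sketch of the convention described above: large, re-creatable intermediate files go to the head node's (not backed up) scratch array, and only results worth keeping are copied to the shared home directory. The /scratch path is a hypothetical mount point used purely for illustration.

```python
# Hypothetical illustration of the storage layout described above: scratch for
# large intermediate files (not backed up), the shared home directory for
# results worth keeping. The "/scratch" mount point is assumed, not documented.
import os
import shutil
from pathlib import Path

scratch = Path("/scratch") / os.environ["USER"] / "run_001"  # hypothetical scratch area
results = Path.home() / "results" / "run_001"                # shared home directory

scratch.mkdir(parents=True, exist_ok=True)
results.mkdir(parents=True, exist_ok=True)

# A simulation would write large intermediate output under `scratch` ...
(scratch / "checkpoint.dat").write_bytes(b"\x00" * 1024)     # stand-in for real output

# ... and copy only the final artifacts back to the home directory.
shutil.copy2(scratch / "checkpoint.dat", results / "checkpoint.dat")
```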
Ace Teaching & Development Cluster
Housed in the Atwater Kent server room, the Ace cluster is designed and maintained to serve a variety of purposes related to teaching and research throughout the academic year.
Ace consists of a heterogeneous collection of hardware and software that allows the cluster to effectively serve this variety of use cases. Ace also runs a Hadoop instance for Big Data analytics that can be used for teaching, development, or research (made possible by the shared storage infrastructure with Turing).
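As a sketch of the kind of job the Hadoop instance supports, the following Python script implements a word count that can serve as both the mapper and the reducer of a Hadoop Streaming job. It assumes Hadoop Streaming is enabled on Ace, which this page does not state; treat it as illustrative only.

```python
# wordcount_streaming.py - minimal Hadoop Streaming word count, usable as both
# mapper and reducer (illustrative only; assumes Hadoop Streaming is enabled).
import itertools
import sys

def mapper():
    # Emit "word<TAB>1" for every word on stdin; Hadoop sorts these pairs
    # by key between the map and reduce phases.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Input arrives sorted by key, so all counts for a word are contiguous.
    pairs = (line.rstrip("\n").split("\t", 1) for line in sys.stdin)
    for word, group in itertools.groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```

The same script can be tested locally without Hadoop, e.g. `cat input.txt | python wordcount_streaming.py map | sort | python wordcount_streaming.py reduce`.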
A representative list of current uses of Ace includes:
- Code development and testing in the batch queue
- Interactive CPU or GPU code development and testing (ace-devXX/ace-vizXX)
- Pre-flight test simulations (production runs moved to Turing)
- Big Data analytics (Hadoop) on data generated on Turing
- MQP projects
- Faculty courses that require HPC resources and specialty software stacks
- Training sessions related to HPC/parallel programming/Linux command line
- Faculty hardware maintained under a “condo model”
Ace is managed and maintained using Bright Cluster Manager version 7 and runs Red Hat Enterprise Linux 7.2.
Hardware Summary
Total CPU/RAM/GPU counts for the cluster are as follows:
| Primary Purpose | CPU | RAM | GPU |
|---|---|---|---|
| Interactive Login | 96 | 896 GB | 6x K20 |
| Compute | 192 | 896 GB | 14x K20 |
| Big Data | 44 | 776 GB | N/A |
| Faculty Condo | 128 | 320 GB | N/A |
Cluster Network Connections
The entire cluster is connected through a single internal in-band Gigabit Ethernet network. Compute jobs on the Ace cluster are currently restricted to single compute nodes (i.e., no distributed-memory applications).
The Ace head node is connected to the WPI network through a single 10 Gb Ethernet connection.
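Since Ace jobs are confined to one node, within-job parallelism is limited to the shared memory of that node. A minimal Python sketch of single-node parallelism using the standard multiprocessing module (no MPI, no inter-node traffic) follows; the work() function is a placeholder.

```python
# Single-node, shared-memory parallelism sketch for Ace: a process pool spread
# across the cores of one compute node. The work() function is a placeholder.
import os
from multiprocessing import Pool

def work(x):
    # Stand-in for a real per-task computation.
    return x * x

if __name__ == "__main__":
    with Pool(processes=os.cpu_count()) as pool:  # one worker per core on this node
        results = pool.map(work, range(1000))
    print(f"Processed {len(results)} tasks on a single node")
```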
Storage
The Ace head node houses a 28 TB disk array used for user storage of large data sets or scratch files from simulations. Users are advised that this storage space is not backed up in any way.
Ace shares users' home directories on a network storage array (a Qumulo storage cluster) with the Turing cluster, providing high-performance storage during simulations. The shared home directories allow for a seamless code development/testing/production simulation life cycle for computational science research.