Wilson HPC Computing Facility

The Accelerator Simulations Wilson cluster is a joint acquisition by the Accelerator Physics Center, Computing Sector, and Technical Division. The cluster is used for development and testing of accelerator and radio-frequency simulation codes. These calculations can only be done using tightly coupled parallel processing techniques. The nodes (described below) are all connected by a high-speed (double-data-rate) Infiniband network fabric. For maximum flexibility, the simulation codes use the Open MPI package to control parallel calculations, which can make use of any parallel network hardware.
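
As a minimal illustration of the kind of tightly coupled MPI program the cluster is built to run (not one of the facility's actual simulation codes), the following C sketch has each rank compute a local value and combine the results with a single collective operation; Open MPI maps such communication onto whatever interconnect is available, including the Infiniband fabric.

/* Illustrative MPI sketch only; rank counts and values are arbitrary. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */

    /* Each rank contributes a local value; MPI_Allreduce makes the
     * global sum available on every rank in one collective step. */
    double local = (double)(rank + 1);
    double global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum over %d ranks: %g\n", size, global);

    MPI_Finalize();
    return 0;
}

A program like this would typically be compiled with Open MPI's mpicc wrapper and launched with mpirun -np <ranks>; Open MPI selects the fastest transport it finds (such as the Infiniband fabric) at run time.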

2018: The cluster was upgraded with the addition of a single IBM Power9-based server with four NVIDIA V100 GPUs connected via NVLink. This server has a hardware configuration similar to that of the "Summit" supercomputer at Oak Ridge National Laboratory.
2017: The cluster was upgraded with the addition of a single Intel-based host with two NVIDIA P100 GPUs. During this same period the original Intel Xeon Phi 5110P accelerators were decommissioned and the Xeon Phi host servers were repurposed as CPU-based worker nodes. A single Intel-based host with eight NVIDIA P100 GPUs was also added, bringing the total GPU host count to four.
2016: The cluster was upgraded with the addition of a single Intel Knights Landing host. During late 2016 the entire Wilson cluster was relocated from the Lattice Computing Center to the Grid Computing Center.
2014: The cluster was upgraded with the addition of four Intel-based hosts with four Intel Xeon Phi 5110P accelerators per host and two Intel-based hosts with two NVIDIA Kepler GPUs per host.
2011: The current cluster (pictured below) was upgraded with the addition of 34 quad-socket, eight-core (32 cores/node, 1088 cores total) AMD Opteron CPU-based systems, which delivered 6.2 TFlop/s Linpack performance. After this upgrade, the original 20 dual-socket, single-core systems were decommissioned.
2010: The cluster was upgraded with the addition of 26 dual-socket, six-core (12 cores/node, 312 cores total) Intel Westmere CPU-based systems, which delivered 2.37 TFlop/s Linpack performance.
2005: The cluster started as 20 dual-socket, single-core (2 cores/node, 40 cores total) Intel Xeon CPU-based systems, which delivered 0.13 TFlop/s Linpack performance.
[Photo: the Wilson cluster, five racks]

Contact: Amitoj Singh
Last modified: Nov 6, 2017