The NCAR-Wyoming Supercomputing Center (NWSC) is a high-performance computing (HPC) and data archival facility located in Cheyenne, Wyoming, that provides advanced computing services to researchers in the Earth system sciences.[1]
NWSC meets researchers’ needs for computing, data analysis, and scientific visualization resources, combined with powerful data management capabilities, to support finer Earth system model resolution, increased model complexity, better statistics, greater predictive power, and longer simulation times.[2][3] The data storage and archival facility[4] at NWSC holds unique historical climate records and a wealth of other scientific data.[5] Scientists at U.S. universities and research institutions access NWSC resources remotely via the Internet from desktop or laptop computers.
The NWSC data center is funded by the National Science Foundation (NSF) and the State of Wyoming, and is operated by the National Center for Atmospheric Research. It was created through a partnership[6] of the University Corporation for Atmospheric Research (UCAR), the State of Wyoming, the University of Wyoming, Cheyenne LEADS,[7] the Wyoming Business Council, and the Cheyenne Light, Fuel and Power Company (now part of Black Hills Corporation). Consistent with NCAR’s mission, this supercomputing center is a leader in energy efficiency, incorporating the newest and most efficient designs and technologies available. Planning[8][9] for this data center began in 2003, groundbreaking[10] at the North Range Business Park in Cheyenne took place in June 2010, and computing operations began in October 2012.[11]
The facility design is based on modular and expandable spaces that can be adapted for computing system upgrades. Its sustainable design makes it 89% more efficient than a typical data center and up to 10% more efficient than state-of-the-art data centers operating in 2010.[12][13] Almost 92% of the energy it uses goes directly to its core purpose of powering supercomputers[14] to enable scientific discovery. Part of its efficiency comes from a regionally integrated design that uses Wyoming’s climate to provide natural cooling during 96% of the year and local wind energy[15] that supplies at least 10% of its power.[16] The main energy source, however, is coal.[17] The NWSC achieved LEED Gold certification[18][19] for its sustainable design and construction. In 2013 it won first place[20] for Facility Design Implementation in the Uptime Institute’s Green Enterprise IT awards.[21] This award recognizes pioneering projects and innovations that significantly improve energy productivity and resource use in information technology. In June 2013, the NWSC won the Datacenter Dynamics[22] North American ‘Green’ Data Center award[23][24] for demonstrated sustainability in the design and operation of facilities.[25]
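The "almost 92%" figure can be restated as a power usage effectiveness (PUE) value, the standard metric for data-center efficiency. The following back-of-the-envelope sketch assumes that figure represents the IT share of total facility energy:

```python
# Implied PUE (power usage effectiveness) from the "almost 92%" figure:
# PUE = total facility energy / IT equipment energy.
it_fraction = 0.92                # assumed share of energy powering the computers
pue = 1.0 / it_fraction
print(f"Implied PUE: {pue:.2f}")  # ~1.09; conventional data centers of the era ran far higher
```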
The center currently has a total of 153,000 square feet with 24,000 square feet of raised floor modules for supercomputing systems in its expandable design.[26] It incorporates numerous resource conservation features to reduce its environmental impact. Water consumption at the NWSC is reduced by about 40% compared to traditional designs by using innovative technologies, specialized cooling tower equipment, and low-flow plumbing fixtures. Waste heat from the systems is recycled to pre-heat components in the power plant, to heat the office spaces, and to melt snow and ice on outdoor walkways and rooftops. Windows supply natural light; combined with room occupancy sensors, the building saves 20-30% in lighting and electricity compared to typical office buildings. A building automation system saves energy by continuously optimizing pumps, fans, and controls that heat or cool only occupied areas of the facility. During construction, sustainable practices were used with emphasis on recycled and locally sourced materials.[27][28]
NSF grants for computing, data, and scientific visualization resources are allocated to researchers who investigate the Earth system through simulation. The current HPC environment includes two petascale supercomputers, data analysis and visualization servers, an operational weather forecasting system, an experimental supercomputing architecture platform, a centralized file system, a data storage resource, and an archive of historical research data.[29] All computing and support systems required for scientific workflows are attached to the shared, high-speed central file system, which improves scientific productivity and reduces costs by letting researchers analyze and visualize their data files in place at the NWSC.[30]
In 2012, the Yellowstone supercomputer[31] was installed in the NWSC as its inaugural HPC resource. Yellowstone is an IBM iDataPlex[32] cluster consisting of 72,288 Intel Sandy Bridge EP processor cores in 4,518 16-core nodes, each with 32 gigabytes of memory.[33] All nodes are interconnected with a full fat tree Mellanox FDR InfiniBand network.[34] Yellowstone has a peak performance of 1.504 petaflops and a demonstrated computational capability of 1.2576 petaflops as measured by the High-Performance LINPACK (HPL) benchmark.[35] It debuted as the world’s 13th fastest computer[36] in the November 2012 ranking by the TOP500 organization. Also in November 2012, Yellowstone debuted as the 58th most energy-efficient supercomputer in the world[37] by operating at 875.34 megaflops per watt as ranked by the Green500 organization.
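Yellowstone's quoted peak follows directly from its core count and per-core arithmetic rate. The sketch below assumes 2.6-GHz Sandy Bridge cores (the Xeon E5-2670 clock cited below for Pronghorn, which uses the same processor generation) and AVX's 8 double-precision floating-point operations per cycle:

```python
# Deriving Yellowstone's 1.504-petaflop peak from its configuration.
nodes, cores_per_node = 4_518, 16
cores = nodes * cores_per_node          # 72,288 cores, as stated above
clock_hz = 2.6e9                        # assumed Sandy Bridge clock rate
flops_per_cycle = 8                     # AVX: 4 adds + 4 multiplies per cycle, double precision

peak_pflops = cores * clock_hz * flops_per_cycle / 1e15
print(f"Peak: {peak_pflops:.3f} petaflops")               # ~1.504

hpl_pflops = 1.2576                     # measured HPL result cited above
print(f"HPL efficiency: {hpl_pflops / peak_pflops:.1%}")  # ~83.6%
```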
The 5.34-petaflop Cheyenne supercomputer became operational in 2017 and currently provides more than three times the computational capacity of Yellowstone. Cheyenne is an SGI ICE XA system with 4,032 dual-socket scientific computation nodes, each holding two 18-core 2.3-GHz Intel Xeon E5-2697v4 (Broadwell) processors, for a total of 145,152 processing cores and 313 terabytes of memory.[38] Interconnecting these nodes is a Mellanox EDR InfiniBand network with a 9-D enhanced hypercube topology that performs with a latency of only 0.5 microseconds.[39] Cheyenne runs the SUSE Linux Enterprise Server 12 SP1 operating system.[40] Like Yellowstone, Cheyenne’s design and configuration provide balanced I/O and exceptional computational capacity for the data-intensive needs of its user community.[41] Cheyenne debuted as the world's 20th most powerful computer in the November 2016 TOP500 ranking.[42]
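The hypercube topology is what keeps hop counts low and predictable: with switches at the corners of a 9-dimensional cube, the minimal route between any two positions is simply the number of address bits in which they differ, so no route exceeds nine hops. The sketch below models a plain 9-D hypercube; Cheyenne's "enhanced" variant adds extra links, so its actual routes can only be shorter:

```python
# Routing in a plain 9-D hypercube: 2**9 = 512 switch positions, each linked
# to the 9 positions whose address differs in exactly one bit.
DIMS = 9

def neighbors(addr: int) -> list[int]:
    """Positions reachable in one hop: flip each address bit in turn."""
    return [addr ^ (1 << d) for d in range(DIMS)]

def hops(a: int, b: int) -> int:
    """Minimal hop count between two positions: the Hamming distance."""
    return (a ^ b).bit_count()

assert len(neighbors(0)) == DIMS                # 9 direct links per switch
print(max(hops(0, b) for b in range(2**DIMS)))  # network diameter: 9 hops
```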
Cheyenne was scheduled to go offline on December 31, 2023.[43]
A new supercomputer, capable of 20 quadrillion calculations per second and said to be 3.5 times faster than the system then in operation at the facility, was announced on January 27, 2021. Originally scheduled to be operational in early 2022,[44] it officially launched on July 7, 2023.[45]
Derecho is a 19.87-petaflop HPE Cray system. It has 2,488 homogeneous (CPU) compute nodes, 82 heterogeneous (GPU) compute nodes with 328 NVIDIA A100 GPUs, and 692 terabytes of total memory.[46]
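A quick consistency check on the GPU partition, assuming the common configuration of four A100s per heterogeneous node (the per-node count is not stated above):

```python
gpu_nodes = 82
gpus_per_node = 4                  # assumption, not from the source
print(gpu_nodes * gpus_per_node)   # 328, matching the GPU total cited
```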
The Geyser and Caldera clusters are specialized data analysis and visualization resources within the data-centric Yellowstone environment. The Geyser data analysis server is a 640-core cluster of 16 nodes, each with 1 terabyte of memory. With its large per-node memory, Geyser is designed to facilitate large-scale data analysis and post-processing tasks, including 3D visualization, with applications that do not support distributed-memory parallelism.[47] The Caldera computational cluster has 256 cores in 16 nodes, each with 64 gigabytes of memory and two Graphics Processing Units (GPUs) for use as either computational processors or graphics accelerators. Caldera’s two NVIDIA Tesla GPUs[48] per node support parallel processing, visualization activities, and development and testing of general-purpose GPU (GPGPU) code.
The center also houses a separate, smaller IBM iDataPlex cluster named Erebus[49] to support the operational forecasts of the NSF Office of Polar Programs’[50] Antarctic Mesoscale Prediction System[51] (AMPS).[52] Erebus has 84 nodes similar to Yellowstone’s, an FDR-10 InfiniBand interconnect, and a dedicated 58-terabyte file system. If needed, Yellowstone will run Erebus’ daily weather forecasts for the Antarctic continent to ensure that the worldwide community of users receives these forecasts without interruption.[53]
Pronghorn’s architecture has promise for meeting the Earth system sciences’ demanding requirements for data analysis, visualization, and GPU-assisted computation. As part of a partnership between Intel, IBM, and NCAR, this exploratory system is being used to evaluate the effectiveness of the Xeon Phi coprocessor’s Many Integrated Core (MIC) architecture for running climate, weather, and other environmental applications. If these coprocessors prove beneficial to key NCAR applications, they can be easily added to the standard IBM iDataPlex nodes in Yellowstone as a cost-effective way to extend its capabilities.
Pronghorn has 16 dual-socket IBM x360 nodes[54] featuring Intel’s Xeon Phi 5110P coprocessors[55] and 2.6-gigahertz Intel Sandy Bridge (Xeon E5-2670) cores.[56] The system has 64 gigabytes of DDR3-1600 memory per node (63 GB usable memory per node) and is interconnected with a full fat tree Mellanox FDR InfiniBand network.
Geyser, Caldera, and Yellowstone all mount the central file system named GLobally Accessible Data Environment[57] (GLADE), which provides work spaces common to all HPC resources at NWSC for computation, analysis, and visualization. This allows users to analyze data files in place, without sending large amounts of data across a network or creating duplicate copies in multiple locations. GLADE provides centralized high-performance file systems spanning supercomputing, data post-processing, data analysis, visualization, and HPC-based data transfer services.[58] GLADE also hosts data from NCAR’s Research Data Archive,[59] NCAR’s Community Data Portal,[60] and the Earth System Grid,[61] which curates CMIP5/AR5 data. The GLADE central disk resource has a usable storage capacity of 36 petabytes as of February 2017.[62] GLADE has a sustained aggregate I/O bandwidth of more than 220 gigabytes per second.
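In practice, "analyzing data in place" means a job on any NWSC system opens its files at the same GLADE path the supercomputer wrote them to. A minimal sketch of such a workflow, with a hypothetical file path and variable name:

```python
# Post-processing model output directly on GLADE, with no copy step.
# The path and variable name are hypothetical, for illustration only.
import xarray as xr

ds = xr.open_dataset("/glade/scratch/user/run01/output.nc")   # hypothetical path
monthly_mean = ds["TS"].groupby("time.month").mean()          # hypothetical variable
monthly_mean.to_netcdf("/glade/scratch/user/run01/ts_monthly.nc")
```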
Archival data storage at the NWSC is provided by a High Performance Storage System (HPSS) with a total capacity of 320 petabytes. This scalable, robotic archive consists of six Oracle StorageTek SL8500 tape libraries using T10000C tape drives[63] that each transfer data at 240 megabytes per second.
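Those drive and capacity figures give a feel for tape's role as an archival, rather than interactive, medium. For example, streaming a 10-terabyte dataset (an arbitrary illustrative size) through a single T10000C drive at its rated speed takes roughly half a day:

```python
dataset_tb = 10                          # arbitrary example size
rate_mb_s = 240                          # T10000C transfer rate cited above
seconds = dataset_tb * 1e6 / rate_mb_s   # terabytes -> megabytes, then divide by rate
print(f"{seconds / 3600:.1f} hours")     # ~11.6 hours on one drive
```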
NWSC’s data-intensive computing strategy includes a full suite of community data services. NCAR develops data products and services that address the future challenges of data growth, preservation, and management. The Research Data Archive[64] (RDA) contains a large collection of meteorological and oceanographic datasets that support scientific studies in climate, weather, hydrology, Earth system modeling, and other related sciences.[65] It is an open resource used by the global research community at large.
The NWSC also serves an educational role.[66] Its public outreach program features the NWSC visitor center[67] that explains the science goals and the technology of NCAR and the University of Wyoming.[68][69] NCAR's higher education internship program[70] places two engineering interns at the NWSC each summer.
The facilities at NWSC are being used in a research collaboration[71][72] with Colorado State University, Oak Ridge National Laboratory, Lagrange Systems,[73] and NCAR to produce resilient resource management strategies for HPC environments, increase the number of researchers and scientific problems that can use HPC, and help achieve sustainable computing at extreme scales within realistic power budgets.[74]