| Active | 2022–present |
| --- | --- |
| Operators | Oak Ridge National Laboratory and U.S. Department of Energy |
| Location | Oak Ridge Leadership Computing Facility |
| Power | 24.6 MW[1] |
| Operating system | HPE Cray OS |
| Space | 680 m² (7,300 sq ft) |
| Speed | 1.353 exaFLOPS (Rmax) / 2.055 exaFLOPS (Rpeak)[1] |
| Cost | US$600 million (estimated) |
| Purpose | Scientific research and development |
| Website | www |
Hewlett Packard Enterprise Frontier, or OLCF-5, is the world's first exascale supercomputer. It is hosted at the Oak Ridge Leadership Computing Facility (OLCF) in Tennessee, United States, and became operational in 2022. As of November 2024, Frontier is the second fastest supercomputer in the world. It is based on the Cray EX architecture and is the successor to Summit (OLCF-4). Frontier achieved an Rmax of 1.102 exaFLOPS (1.102 quintillion floating-point operations per second) using AMD CPUs and GPUs.[2][3][4][5][6]
Measured at 62.68 gigaflops/watt, the smaller Frontier TDS (test and development system) topped the Green500 list of most efficient supercomputers[6] until it was displaced by the Flatiron Institute's Henri supercomputer in November 2022.[7]
Frontier was superseded as the fastest supercomputer in the world by El Capitan in November 2024.
Frontier uses 9,472 AMD Epyc 7A53 "Trento" 64-core 2 GHz CPUs (606,208 cores) and 37,888 Instinct MI250X GPUs (8,335,360 cores). The GPUs can perform double-precision operations at the same speed as single-precision operations.[8]
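As a sanity check, the published totals are internally consistent; the short sketch below multiplies them out and derives the per-GPU core count they imply (the 220 figure is inferred here, not quoted from the source):

```python
# Back-of-envelope check of Frontier's published processor counts.
cpus = 9_472
cores_per_cpu = 64
gpus = 37_888
gpu_cores = 8_335_360

assert cpus * cores_per_cpu == 606_208  # CPU cores, as stated
assert gpus == 4 * cpus                 # four GPUs per node, one CPU per node (see below)
print(gpu_cores / gpus)                 # implied cores per MI250X: 220.0
```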
"Trento" is an optimized third-generation EPYC CPU[9] ("Milan"), which is based on the Zen 3 microarchitecture.
Frontier occupies 74 19-inch (48 cm) rack cabinets.[10] Each cabinet hosts 64 blades, and each blade holds two nodes.
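That hierarchy multiplies out to exactly the CPU count above, since each node carries one CPU (as described below); a quick check:

```python
cabinets = 74
blades_per_cabinet = 64
nodes_per_blade = 2

nodes = cabinets * blades_per_cabinet * nodes_per_blade
print(nodes)  # 9472, matching the 9,472 CPUs (one per node)
```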
Blades are interconnected by HPE Slingshot 64-port switches, each providing 12.8 terabits/second of bandwidth. Groups of blades are linked in a dragonfly topology with at most three hops between any two nodes. Cabling is either optical or copper, customized to minimize cable length; total cabling runs 145 km (90 mi). Frontier is liquid-cooled by four 350-horsepower pumps, which circulate about 6,000 US gallons (22,700 liters) of non-prechilled water through the system each minute, allowing five times the density of air-cooled architectures.[8][11]
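For context, the switch bandwidth decomposes into a per-port link rate by simple division (the per-port number is derived here, not a quoted specification):

```python
switch_tbps = 12.8  # aggregate bandwidth of one 64-port Slingshot switch
ports = 64

print(switch_tbps * 1000 / ports)  # 200.0 Gb/s per port
```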
Each node consists of one CPU, four GPUs, and 4 terabytes of flash memory. Each GPU has 128 GB of RAM soldered onto it, and each CPU has 512 GB of local DDR4 memory.[8][12]
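Scaled across all 9,472 nodes, those per-node figures give the machine's approximate aggregate memory; the totals below are derived from the stated per-node numbers, not quoted from the source:

```python
nodes = 9_472

ddr4_pb = nodes * 512 / 1e6     # 512 GB of DDR4 per node
hbm_pb = nodes * 4 * 128 / 1e6  # 128 GB of GPU RAM x 4 GPUs per node
flash_pb = nodes * 4 / 1e3      # 4 TB of node-local flash

print(ddr4_pb, hbm_pb, flash_pb)  # ~4.85 PB, ~4.85 PB, ~37.9 PB
```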
Frontier has coherent interconnects between CPUs and GPUs, allowing GPU memory to be accessed coherently by code running on the Epyc CPUs.[13]
Frontier uses an internal flash storage system delivering 75 TB/s of read bandwidth, 35 TB/s of write bandwidth, and 15 billion IOPS, along with the 700 PB Orion site-wide Lustre filesystem.[14]
Frontier consumes around 21 megawatts (MW), roughly the power needed for 15,000 single-family homes, compared to its predecessor Summit's 13 MW.[11]
One of the largest challenges during development was power consumption: early projections suggested that hundreds of thousands of GPUs would be necessary to reach 1 exaFLOPS, at a total power draw of 150–500 MW. High efficiency was therefore a primary target of the project.[8]
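For perspective, the efficiency actually delivered follows from the figures already quoted (a derived comparison, not one made in the source):

```python
rmax_flops = 1.102e18  # initial Rmax from the June 2022 TOP500 listing
power_w = 21e6         # approximate system power draw

print(rmax_flops / power_w / 1e9)  # ~52.5 gigaflops/watt delivered
print(1e18 / 150e6 / 1e9)          # ~6.7 GF/W implied by a 150 MW, 1-exaFLOPS design
```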
Oak Ridge partnered with HPE Cray and AMD to build the system at a cost of US$600 million. Deployment began in 2021,[15] and the system reached full capability in 2022.[16] It recorded an Rmax of 1.1 exaFLOPS in May 2022, making it the world's fastest supercomputer in the June 2022 edition of the TOP500 list, replacing Fugaku.[1][17]
Upon its release, the supercomputer topped the Green500 list of most efficient supercomputers, measured at 62.68 gigaflops/watt.[6] ORNL Director Thomas Zacharia declared, "Frontier is ushering in a new era of exascale computing to solve the world's biggest scientific challenges." He added, "This milestone offers just a preview of Frontier's unmatched capability as a tool for scientific discovery. It is the result of more than a decade of collaboration among the national laboratories, academia and private industry, including DOE's Exascale Computing Project, which is deploying the applications, software technologies, hardware and integration necessary to ensure impact at the exascale."[14]