Several centers for supercomputing exist across Europe, and distributed access to them is coordinated by European initiatives to facilitate high-performance computing. One such initiative, the HPC Europa project, fits within the Distributed European Infrastructure for Supercomputing Applications (DEISA), which was formed in 2002 as a consortium of eleven supercomputing centers from seven European countries. Operating within the CORDIS framework, HPC Europa aims to provide access to supercomputers across Europe.[1]
As of November 2020, Germany's JUWELS (booster module)[2] was the fastest European supercomputer, ranked 7th in the world, followed by the supercomputer of the Italian energy company Eni. In October 2016, Switzerland's Piz Daint was the fastest European supercomputer, ranked 3rd in the world with a peak of over 25 petaflops.[3]
In June 2011, France's Tera 100 was certified as the fastest supercomputer in Europe, and ranked 9th in the world at the time (it has since dropped off the list).[4][5][6][7] It was the first petascale supercomputer designed and built in Europe.[8]
There are several efforts to coordinate European leadership in high-performance computing. The ETP4HPC Strategic Research Agenda (SRA) outlines a technology roadmap for exascale in Europe, with a key motivation being an increase in the global market share of the HPC technology developed in Europe.[9] The Eurolab4HPC Vision provides a long-term roadmap, covering the years 2023 to 2030, with the aim of fostering academic excellence in European HPC research.[10]
There have been several projects to organise supercomputing applications within Europe. The first was the Distributed European Infrastructure for Supercomputing Applications (DEISA), which ran from 2002 to 2011. The organisation of supercomputing has since been taken over by the Partnership for Advanced Computing in Europe (PRACE).
From 2018 to 2026, further supercomputer development is taking place under the European High-Performance Computing Joint Undertaking within the Horizon 2020 framework. Under Horizon 2020, European HPC Centres of Excellence are being funded to promote exascale capabilities and to scale up existing parallel codes in the domains of renewable energy, materials modelling and design, molecular and atomic modelling, climate change, global system science, and bio-molecular research.[11][12]
Advances are being shared with the HPC research community, such as the "Putting the Ocean into the Center" visualization[13][14] and progress on the "Digital Twin" that is already being used to run in silico clinical trials.[15][16] EU countries are also beginning to benefit directly from work done by the Centres of Excellence under Horizon 2020: in summer 2021, software from a European Centre of Excellence was used to forecast ash clouds from the La Palma volcano.[17] In addition, EU Centres of Excellence provided support throughout the COVID-19 pandemic, creating models to guide policy makers, expediting the discovery of possible treatments, and facilitating the sharing of research data during the race to understand the coronavirus.[18][19][20]
PRACE provides "access to leading-edge computing and data management resources and services for large-scale scientific and engineering applications at the highest performance level".[21] PRACE categorises European HPC facilities into three tiers: Tier-0 are European centres with petaflop machines, Tier-1 are national centres, and Tier-2 are regional centres.
PRACE has 8 Tier-0 systems.[22]
The Vienna Scientific Cluster is a collaboration between several Austrian universities. The current flagship of the VSC family is VSC-4, a Linux cluster with approximately 790 compute nodes, 37,920 cores and a theoretical peak performance of 3.7 PFlop/s.[25] The VSC-4 cluster was ranked 82nd in the Top-500 list in June 2019.[25] VSC-4 was installed in summer 2019 at the Arsenal TU building in Vienna.
On 25 October 2012, Ghent University (Belgium) inaugurated the first Tier 1 supercomputer of the Flemish Supercomputer Centre (VSC). The supercomputer is part of an initiative by the Flemish government to provide researchers in Flanders with a very powerful computing infrastructure. The new cluster was ranked 163rd in the worldwide TOP500 list of supercomputers in November 2012.[26][27] In 2014, a supercomputer started operating at Cenaero in Gosselies. In 2016, VSC started operating the BrENIAC supercomputer (NEC HPC1816Rg, Xeon E5-2680v4 14C 2.4 GHz, InfiniBand EDR) in Leuven. It has 16,128 cores providing 548,000 Gflops (Rmax) or 619,315 Gflops (Rpeak).[28]
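The quoted Rpeak can be checked directly from the figures above. As a worked example, and assuming that each Broadwell (E5-2680v4) core performs 16 double-precision floating-point operations per cycle (two 256-bit AVX2 fused multiply-add units), the theoretical peak is

\[
R_{\text{peak}} = 16{,}128\ \text{cores} \times 2.4\ \text{GHz} \times 16\ \tfrac{\text{FLOP}}{\text{cycle}} \approx 619{,}315\ \text{GFLOPS},
\]

which matches the quoted value; the measured Rmax of 548,000 Gflops is the fraction of this peak actually achieved on the Linpack benchmark.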
The National Center for Supercomputing Applications in Sofia operates an IBM Blue Gene/P supercomputer, which offers high-performance processing to the Bulgarian Academy of Sciences and Sofia University, among other organizations.[29] The system was on the TOP500 list until November 2009, when it ranked as number 379.[30]
A second supercomputer, "Discoverer", was installed in 2020 and ranked 91st in the TOP500 in 2021.[31] Officially launched on 21 October 2021, it was the third supercomputer launched under the EuroHPC JU programme. It is located at the Sofia Tech Park science and technology park in Sofia, Bulgaria. The system was built by Atos and co-financed by Bulgaria and EuroHPC JU, with a joint investment of €11.5 million. Discoverer has a sustained performance of 4.5 petaflops and a peak performance of 6 petaflops.[32][33][34][35][36][37]
A third supercomputer, "Hemus", owned by the Bulgarian Academy of Sciences and the Institute of Information and Communication Technologies, was launched on 19 October 2023. The supercomputer's performance of 3 petaflops will aid science research, data processing, application development and medical imaging. The project was completed by HP and is jointly financed by Bulgaria and the European Regional Development Fund for a total cost of €15 million.[38]
The Center for Advanced Computing and Modelling (CNRM) in Rijeka was established in 2010 and conducts multidisciplinary scientific research using advanced high-performance computing solutions based on CPU and GPGPU server technologies, together with technologies for data storage.[39] It operates the supercomputer "Bura", which consists of 288 computing nodes with a total of 6,912 CPU cores; its peak performance is 233.6 teraflops, and it was ranked 440th on the November 2015 TOP500 list.[40]
CSC – IT Center for Science operated a Cray XC30 system called "Sisu" with 244 TFlop/s.[41] In September 2014 the system was upgraded to Cray XC40, giving a theoretical peak of 1,688 TFLOPS. Sisu was ranked 37th in the November 2014 Top500 list,[42] but had dropped to 107th by November 2017.[43]
The Commissariat à l'énergie atomique et aux énergies alternatives (CEA) operates the Tera 100 machine in the Research and Technology Computing Center in Essonne, Île-de-France.[4] The Tera 100 had a peak processing speed of 1,050 teraflops, which made it the fastest supercomputer in Europe in 2011.[5] Built by Groupe Bull, it had 140,000 processors.[44]
The National Computer Center of Higher Education (French acronym: CINES) was established in Montpellier in 1999, and offers computer services for research and higher education.[45][46] In 2014, the Occigen system was installed, manufactured by Bull (Atos Group). It has 50,544 cores and a peak performance of 2.1 petaflops.[47][48]
In Germany, supercomputing is organized at two levels. The three national centers at Garching (LRZ), Jülich (JSC) and Stuttgart (HLRS) together form the Gauss Centre for Supercomputing, and provide both the European Tier 0 level of HPC and the German national Tier 1 level. A number of medium-sized centers are also organized in the Gauss Alliance.
The Jülich Supercomputing Centre (JSC) and the Gauss Centre for Supercomputing jointly owned the JUGENE computer at the Forschungszentrum Jülich in North Rhine-Westphalia. JUGENE was based on IBM's Blue Gene/P architecture, and in June 2011 was ranked the 12th fastest computer in the world by TOP500.[49] It was replaced by the Blue Gene/Q system JUQUEEN on 31 July 2012.[50]
The Leibniz-Rechenzentrum, a supercomputing center in Munich, houses the SuperMUC system, which began operations in 2012 at a processing speed of 3 petaflops. This was, at the time it entered service, the fastest supercomputer in Europe. The fastest computing system at the High Performance Computing Center Stuttgart is Hawk, with a peak performance of 26 petaflops,[51] which replaced Hazel Hen (peak performance of more than 7.4 petaflops). As of November 2015, Hazel Hen, which is based on Cray XC40 technology, was ranked the 8th fastest system worldwide.[52]
Greece's main supercomputing institution is GRNET SA, a Greek state-owned company that is supervised by the General Secretariat for Research and Technology of the Ministry of Education, Research and Religious Affairs. GRNET's high-performance computing system is called ARIS (Advanced Research Information System); on its introduction to the TOP500 list in June 2015, it was ranked 467th.[53] The ARIS infrastructure consists of four compute islands: thin nodes, fat nodes, GPU nodes and Phi nodes. GRNET is the Greek member of the Partnership for Advanced Computing in Europe[54] and ARIS is a Tier-1 PRACE node.
The Irish Centre for High-End Computing (ICHEC) is the national supercomputing centre and operates the "Kay" supercomputer, commissioned in August 2018. The system, which was provided by Intel, consists of a cluster of 336 high-performance servers with 13,440 CPU (central processing unit) cores and 64 terabytes of memory for general-purpose computations. Additional components aimed at more specialised requirements include 6 large-memory nodes with 1.5 terabytes of memory per server, plus 32 accelerator nodes divided between Intel Xeon Phi and Nvidia V100 GPUs (graphics processing units). The network linking these components together is Intel's 100 Gbit/s Omni-Path technology, and DataDirect Networks provides 1 petabyte of high-performance storage over a parallel file system. Penguin Computing integrated this hardware and provided the software management and user interface layers.[55]
The main supercomputing institution in Italy is CINECA, a consortium of many universities and research institutions scattered throughout the country. As of June 2023, the highest-ranked CINECA supercomputer on the TOP500 list (4th place) is Leonardo, an accelerated petascale cluster based on Xeon Platinum processors, NVIDIA A100 Tensor Core GPUs, and NVIDIA Mellanox HDR100 InfiniBand connectivity, with 1,824,768 total cores delivering 238.70 petaFLOPS (Rmax) at a power draw of 7,404 kW.[56]
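Dividing the listed Rmax by the listed power draw gives a rough energy-efficiency figure for the machine (this is only an estimate derived from the two numbers quoted above, not the tuned power measurement used for the official Green500 ranking):

\[
\frac{238.70\ \text{PFLOPS}}{7{,}404\ \text{kW}} \approx 32.2\ \tfrac{\text{GFLOPS}}{\text{W}}.
\]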
Due to the involvement of the National Institute for Nuclear Physics (INFN) in the main experiments taking place at CERN, Italy also hosts some of the largest nodes of the Worldwide LHC Computing Grid, including one Tier 1 facility and 11 Tier 2 facilities out of 151 total nodes.[57][58]
The Luxembourg supercomputer Meluxina was officially launched on 7 June 2021 and is part of the European High-Performance Computing Joint Undertaking (EuroHPC JU). It is located at the LuxProvide data center in Bissen, Luxembourg. Of the eight supercomputers planned by EuroHPC JU, it was the second to be launched, after Vega. The system was built by Atos. Luxembourg paid for two thirds of the project and the European Commission funded the other third, with 35% of the computing power to be made available to the 32 countries taking part in the EuroHPC joint venture. The value of the joint investment is €30.4 million. Meluxina has a sustained performance of 10 petaflops and a peak performance of 15 petaflops.[59][60][61][62]
The supercomputer Snellius is operated by the organization SURF (formerly known as SURFsara) and is hosted at the Amsterdam Science Park. Since 1984, the organization has operated the Dutch national supercomputing facilities for research.
The European Grid Infrastructure, a continent-wide distributed computing system, is also headquartered at the Science Park in Amsterdam.[63]
UNINETT Sigma2 AS maintains the national infrastructure for large-scale computational science in Norway and provides high-performance computing and data storage for all Norwegian universities and colleges, as well as other publicly funded organizations and projects. Sigma2 and its projects are financed by the Research Council of Norway and the Sigma2 consortium partners (the universities of Oslo, Bergen, and Tromsø, and the Norwegian University of Science and Technology in Trondheim). Its head office is in Trondheim.[64] Sigma2 operates three systems: Stallo and Fram (located in Tromsø) and Saga (in Trondheim).[65] An additional machine (named Betzy after Elizabeth Stephansen) was inaugurated on 7 December 2020.[66][67]
The Norwegian University of Science and Technology (NTNU) in Trondheim operates the "Vilje" supercomputer, owned by NTNU and the Norwegian Meteorological Institute. "Vilje" is operating at 275 teraflops.[68]
Decommissioned systems include Hexagon (2008 to 2017) at the University of Bergen; Gardar (2012 to 2015); and Abel (2012 to 2020) at the University of Oslo.[69] The "Abel" supercomputer was named after the famous Norwegian mathematician Niels Henrik Abel (1802–1829). It operated at 258 teraflops across over 650 nodes and over 10,000 CPU cores, with each node typically having 64 GiB of RAM.[70] It was ranked 96th in the TOP500 list in June 2012, when it was installed.[71]
Since 2015, the fastest supercomputer in Poland has been "Prometheus", which belongs to the AGH University of Science and Technology in Kraków.[72] It provides 2,399 teraflops of computing power and has 10 petabytes of storage.[73] It was ranked 21st in Europe and 77th in the world on the November 2017 TOP500 list.[74]
The Polish Grid Infrastructure PL-Grid was built between 2009 and 2011 as a nationwide computing infrastructure, and continued under the PLGrid Plus project until 2014. At the end of 2012, it provided 230 teraflops of computing power and 3,600 terabytes of storage for the Polish scientific community.
The Galera computer cluster at the Gdańsk University of Technology was ranked 299th on the TOP500 list in November 2010.[75][76] The Zeus computer cluster at the ACK Cyfronet AGH in Kraków was ranked 106th on the TOP500 list in November 2012, but had dropped to 386th by November 2015.[77]
In November 2011, the 33,072-processor Lomonosov supercomputer in Moscow was ranked the 18th-fastest supercomputer in the world, and the third-fastest in Europe. The system was designed by T-Platforms, and used Xeon 2.93 GHz processors, Nvidia 2070 GPUs, and an Infiniband interconnect.[78] In July 2011, the Russian government announced a plan to focus on constructing larger supercomputers by 2020.[79] In September 2011, T-Platforms stated that it would deliver a water-cooled supercomputer in 2013.[80]
Since 2016, Russia has reportedly operated the most powerful military supercomputer in the world, the NDMC Supercomputer, with a speed of 16 petaflops.[citation needed]
The Slovenian supercomputer Vega was officially launched on 20 April 2021 and is part of the European High-Performance Computing Joint Undertaking (EuroHPC JU). It is located at the Institute of Information Science (IZUM) in Maribor, Slovenia. It is the first of the eight supercomputers planned by EuroHPC JU. The system was built by Atos. Vega was jointly financed by EuroHPC JU, through EU funds, and the Institute of Information Science (IZUM); the value of the joint investment is €17.2 million. Vega has a sustained performance of 6.9 petaflops and a peak performance of 10.1 petaflops.[81][82]
The Slovenian National Grid Initiative (NGI) provides resources to the European Grid Initiative (EGI). It is represented in the EGI Council by ARNES. ARNES manages a cluster for testing computing technology where users can also submit jobs. The cluster consists of 2300 cores and is growing.[83]
Arctur, the only privately owned HPC provider in the region, also provides computing resources to the Slovenian NGI and to industry on its Arctur-2 supercomputer (and previously on Arctur-1).[84]
The Jožef Stefan Institute hosts most of the HPC installations in Slovenia. They are not, however, a single uniform HPC system, but several dispersed systems at separate research departments (F-1,[85] F-9[86] and R-4[87]).
The Barcelona Supercomputing Center is located at the Technical University of Catalonia and was established in 2005.[88] The center operates the Tier-0 11.1 petaflops MareNostrum 4 supercomputer and other supercomputing facilities, and manages the Red Española de Supercomputación (RES). The BSC is a hosting member of the Partnership for Advanced Computing in Europe (PRACE) HPC initiative. In Galicia, CESGA, established in 1993, operates FinisTerrae II, a 328 TFlops supercomputer, to be replaced by FinisTerrae III in 2021 with 1.9 PFlops. The Supercomputing and Visualization Center of Madrid (CeSViMa) at the Technical University of Madrid operates the 182.78 TFlops Magerit 3 supercomputer. The Spanish Supercomputing Network furthermore provides access to several supercomputers distributed across Spain.
The National Supercomputer Centre in Sweden (NSC) is located in Linköping and operates the Triolith supercomputer, which achieved 407.2 teraflop/s on the Linpack benchmark, placing it 79th on the November 2013 TOP500 list of the fastest supercomputers in the world.[89] In mid-2018, Triolith was to be superseded by "Tetralith", with an estimated maximum speed of just over 4 petaflops.[90]
Sweden's Royal Institute of Technology operates the Beskow supercomputer, which consists of 53,632 processors and has achieved a sustained 1.397 petaflop/s.[91]
The Swiss National Supercomputing Centre was founded in 1991 and is operated by ETH Zurich. It is based in Lugano, Ticino, and provides supercomputing services to national research institutions and Swiss universities, as well as the international CERN organisation and MeteoSchweiz, the Swiss weather service.[92] In February 2011, the center placed an order for a Cray XMT massively parallel supercomputer.[93]
The IBM Aquasar supercomputer became operational at ETH Zurich in 2010. It uses hot water cooling to achieve heat efficiency, with the computation-heated water used to heat the buildings of the university campus.[94][95]
The EPCC supercomputer center was established at the University of Edinburgh in 1990.[96] The HECToR project at the University of Edinburgh provided supercomputing services using a 360-teraflop Cray XE6 system, the fastest supercomputer in the UK at the time.[97] In 2013, HECToR was replaced by ARCHER, a Cray XC30 system.[98] In 2021, ARCHER was replaced by its successor ARCHER2, an HPE Cray EX system with an estimated peak performance of 28 petaflop/s.[99] ARCHER2 is the tier one national supercomputing service for the Engineering and Physical Sciences Research Council (EPSRC) and the Natural Environment Research Council.[100] The EPCC also provides the UK's connection to PRACE.[22]
In addition to the ARCHER2 tier one facility, EPSRC supports a number of tier two facilities.[100]
The DiRAC supercomputing facility is the Science and Technology Facilities Council's tier one facility for particle physics and astronomy research.[100] It comprises a data intensive service hosted by the universities of Cambridge and Leicester, a memory intensive service (1.37 PF Rmax; 1.9 PF peak performance with 360 nodes in phase 1 (2021), upgraded with a further 168 nodes in phase 2 (2023) giving 528 TB RAM)[102][103][104] hosted by the Institute for Computational Cosmology at Durham University, and an extreme scaling service hosted by EPCC at the University of Edinburgh.[105][106]
The European Centre for Medium-Range Weather Forecasts (ECMWF) in Reading, Berkshire, operates a 100-teraflop IBM pSeries-based system. The Met Office has a 14 PFlops computer.[107] The Atomic Weapons Establishment has two supercomputers, a 4.3 petaflop Bull Sequana X1000 supercomputer, and a 1.8 petaflop SGI IceX supercomputer.[108] Both these platforms are used for running nuclear weaponry simulations, required after the Comprehensive Nuclear-Test-Ban Treaty was signed by the UK.[108]
The University of Bristol was chosen in 2023 to host the UK's tier one Artificial Intelligence Research Resource (AIRR), Isambard-AI, building on the success of GW4's Isambard supercomputer.[109][110] The UK government has awarded £225 million to Bristol to develop the system, which will be installed at the National Composites Centre, in collaboration with the universities of Bath, Cardiff and Exeter.[111] The AIRR is also planned to take in the Dawn supercomputer at the University of Cambridge, which was launched in late 2023 with further development expected in 2024.[112][113]
The University of Edinburgh was also announced in 2023 as the host of the UK's first exascale supercomputer, which will build on experience with Isambard-AI.[114]