Partnership for Advanced Computing in Europe (PRACE)
Identification
Hosting Legal Entity
Self-standing RI
Legal Status
Association
Location
PRACE aisbl, Rue du Trône 98, 1050 Brussels (Belgium)
Structure
Type Of RI
Distributed
Coordinating Country
Belgium
Participating Countries
Netherlands
Belgium
Greece
Austria
Hungary
Switzerland
Cyprus
Portugal
Germany
Spain
Finland
Slovakia
Bulgaria
Norway
Poland
Israel
France
Denmark
Sweden
Turkey
Luxembourg
Italy
Czech Republic
Ireland
Slovenia
United Kingdom
Nodes
France: GENCI
Germany: GCS
Italy: CINECA
Spain: Barcelona Supercomputing Center
Switzerland: CSCS
Status
Current Status:
Operational since 2010
Scientific Description
The Partnership for Advanced Computing in Europe (PRACE) is an international non-profit association with its seat in Brussels. The PRACE Research Infrastructure provides a persistent, world-class high-performance computing service for scientists and researchers from academia and industry in Europe. The computer systems and their operations made accessible through PRACE are provided by five PRACE members (BSC representing Spain, CINECA representing Italy, ETH Zurich / CSCS representing Switzerland, GCS representing Germany and GENCI representing France). The Implementation Phase of PRACE receives funding from the EU's Horizon 2020 Research and Innovation Programme (2014-2020) under grant agreement 730913.

RI Keywords
HPC, High performance computing, RI, Research, Call for proposals, Science, HeC, Peer review, Research infrastructure, Industry, Engineering
Classifications
RI Category
Distributed Computing Facilities
Scientific Domain
Physics, Astronomy, Astrophysics and Mathematics
Chemistry and Material Sciences
Social Sciences
Biological and Medical Sciences
Earth and Environmental Sciences
Engineering and Energy
Humanities and Arts
Information Science and Technology
ESFRI Domain
E-Infrastructure
Services
PRACE Training Portal

http://www.training.prace-ri.eu/

PRACE Best Practice Guides

http://www.prace-ri.eu/Best-Practice-Guides

PRACE User Documentation

http://www.prace-ri.eu/User-Documentation

PRACE White Papers

http://www.prace-ri.eu/white-papers

PRACE Outreach to Universities

The Partnership for Advanced Computing in Europe (PRACE) Outreach to Universities programme is the one-stop shop for the latest student-centric information, educational opportunities and more. These activities are especially designed to encourage advanced HPC, Computational Science, Simulation and Data Science studies and to highlight their benefits and value.

http://www.prace-ri.eu/outreach-to-universities/

PRACE Documentation and User Support

http://www.prace-ri.eu/Documentation-and-User-Support

Equipment
Piz Daint

The Piz Daint supercomputer is a hybrid Cray XC50 system and the flagship system at CSCS, the Swiss National Supercomputing Centre in Lugano, with 4,400 nodes available to the User Lab. Each compute node is equipped with an Intel Xeon E5-2690 v3 processor at 2.60 GHz (12 cores, 64 GB RAM) and an NVIDIA Tesla P100 with 16 GB of memory. The nodes are connected by Cray's proprietary "Aries" interconnect in a dragonfly network topology. For further information, please visit the CSCS website. For technical questions: help(at)cscs.ch
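
As a rough illustration of the scale implied by these figures, the minimal sketch below tallies the aggregate CPU cores, host memory and GPU memory of the hybrid partition, assuming every one of the 4,400 User Lab nodes carries the per-node configuration quoted above; treat it as back-of-the-envelope arithmetic, not an official specification.

    # Back-of-the-envelope totals for the Piz Daint hybrid XC50 partition,
    # based solely on the per-node figures quoted above.
    NODES = 4_400             # hybrid nodes available to the User Lab
    CORES_PER_NODE = 12       # Intel Xeon E5-2690 v3
    RAM_PER_NODE_GB = 64      # host memory per node
    GPU_MEM_PER_NODE_GB = 16  # NVIDIA Tesla P100

    print(f"CPU cores : {NODES * CORES_PER_NODE:,}")                    # 52,800
    print(f"Host RAM  : {NODES * RAM_PER_NODE_GB / 1024:.1f} TiB")      # 275.0 TiB
    print(f"GPU memory: {NODES * GPU_MEM_PER_NODE_GB / 1024:.1f} TiB")  # 68.8 TiB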

SuperMUC

SuperMUC is the Tier-0 supercomputer at the Leibniz Supercomputing Centre (LRZ) in Garching, Germany. It provides resources to PRACE via the German Gauss Centre for Supercomputing (GCS). SuperMUC Phase 1 consists of 18 Thin Node Islands with Intel Sandy Bridge processors and one Fat Node Island with Intel Westmere processors. Each compute island contains 512 compute nodes with 16 physical cores per node, i.e. 8,192 cores available for user applications, and each core has approximately 1.6 GB of memory available to running applications. Peak performance is 3.1 Petaflops. All compute nodes within an individual island are connected via a fully non-blocking InfiniBand network (FDR10 for the Thin Nodes and QDR for the Fat Nodes); a pruned tree network connects the islands. SuperMUC Phase 2 consists of 6 islands based on Intel Haswell-EP processor technology (512 nodes per island, 28 physical cores per node, 2.0 GB of memory per core available to applications; 3,072 nodes in total, 3.6 Petaflops). All compute nodes within an individual island are connected via a fully non-blocking InfiniBand network (FDR14), again with a pruned tree network connecting the islands. Both system phases share the same parallel and home filesystems. For technical assistance: lrzpost@lrz.de or https://servicedesk.lrz.de/?lang=en
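
The island figures above imply the following per-phase core counts; this is a minimal sketch that simply multiplies the stated islands x nodes x cores values (the Phase 1 total covers the 18 Thin Node Islands only, as described above).

    # Core counts implied by the SuperMUC island layout described above.
    def island_cores(islands: int, nodes_per_island: int, cores_per_node: int) -> int:
        return islands * nodes_per_island * cores_per_node

    phase1_thin = island_cores(islands=18, nodes_per_island=512, cores_per_node=16)
    phase2      = island_cores(islands=6,  nodes_per_island=512, cores_per_node=28)

    print(f"Phase 1 Thin Node cores: {phase1_thin:,}")  # 147,456
    print(f"Phase 2 Haswell cores  : {phase2:,}")       # 86,016 (3,072 nodes x 28 cores)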

Marconi

Italian supercomputer systems have complemented the PRACE infrastructure since spring 2012. CINECA's Tier-0 system, MARCONI, has provided access to PRACE users since July 2016. The MARCONI system is equipped with the latest Intel Xeon processors and has two partitions. Marconi-Broadwell (A1) consists of ~7 Lenovo NeXtScale racks with 72 nodes per rack; each node contains 2 Broadwell processors with 18 cores each and 128 GB of DDR4 RAM. Marconi-KNL (A2) was deployed at the end of 2016 and consists of 3,600 Intel server nodes integrated by Lenovo; each node contains 1 Intel Knights Landing processor with 68 cores, 16 GB of MCDRAM and 96 GB of DDR4 RAM. The entire system is connected via the Intel Omni-Path network. The global peak performance of the Marconi system is 13 Petaflops. In Q3 2017 the MARCONI Broadwell partition will be replaced by a new one based on Intel Skylake processors and the Lenovo Stark architecture, bringing the total computational power to more than 20 Petaflops. For technical assistance: superc@cineca.it
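
As a quick cross-check of the partition figures above, the sketch below multiplies nodes by cores per node for the two partitions; the A1 count assumes exactly 7 racks of 72 nodes, since the description only says "~7 racks".

    # Approximate core counts for the two Marconi partitions described above.
    a1_nodes = 7 * 72             # ~7 Lenovo NeXtScale racks x 72 nodes (approximate)
    a1_cores = a1_nodes * 2 * 18  # 2 Broadwell CPUs x 18 cores per node

    a2_nodes = 3_600              # KNL partition
    a2_cores = a2_nodes * 68      # 1 Knights Landing CPU x 68 cores per node

    print(f"A1 (Broadwell) cores: ~{a1_cores:,}")  # ~18,144
    print(f"A2 (KNL) cores      : {a2_cores:,}")   # 244,800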

MareNostrum

MareNostrum is based on Intel's latest-generation general-purpose Xeon E5 processors running at 2.1 GHz (two CPUs with 24 cores each per node, i.e. 48 cores per node), with 2 GB of memory per core and 240 GB of local SSD disk acting as local /tmp. The system comprises 48 racks with 72 compute nodes each, for a total of 3,456 nodes. Slightly more than 200 nodes have 8 GB/core. All nodes are interconnected through an Intel Omni-Path 100 Gbit/s network with a non-blocking fat tree topology. MareNostrum has a peak performance of 11.14 Petaflops. For technical assistance: support@bsc.es
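
The rack, node and core counts above can be combined into a total core figure, and together with an assumed 32 double-precision FLOPs per cycle per core (a common value for AVX-512 Xeon parts, not stated in the description) they roughly reproduce the quoted 11.14 Petaflops peak. The sketch below is an estimate under that assumption, not an official figure.

    # Rough reconstruction of MareNostrum's core count and peak performance
    # from the figures above; FLOPS_PER_CYCLE is an assumption (AVX-512, 2 FMA units).
    RACKS = 48
    NODES_PER_RACK = 72
    CORES_PER_NODE = 48
    CLOCK_GHZ = 2.1
    FLOPS_PER_CYCLE = 32  # assumed, not part of the original description

    nodes = RACKS * NODES_PER_RACK  # 3,456 nodes
    cores = nodes * CORES_PER_NODE  # 165,888 cores
    peak_pflops = cores * CLOCK_GHZ * FLOPS_PER_CYCLE / 1e6

    print(f"Nodes: {nodes:,}  Cores: {cores:,}")
    print(f"Estimated peak: {peak_pflops:.2f} PFlop/s")  # ~11.15, vs 11.14 quoted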

JUWELS
Hawk
Joliot-Curie
Access
Access Type
Remote
Access Mode
Excellence Driven
Users
Number of Users
75 (2019)
Users Definition
Teams of individual researchers
User Demographics
European Users - 95.0% in 2019
Extra-European Users - 5.0% in 2019
Type of Users
Academic - 60.0%
Public research organisations - 35.0%
Industry/private companies - 5.0%
Collaborations
ESFRI
RI is an ESFRI landmark
Date of last update: 16/04/2019