Large Scale Multiprocessor Supercomputer (SGI UltraViolet 3000) (MACH-2)
Hosting Legal Entity
Johannes Kepler University of Linz
Johannes Kepler University Linz, Altenberger Straße 69, 4040 Linz, Upper Austria (Austria)
Current Status:
Operational since 2011
Scientific Description
MACH-2 is a massively parallel shared memory supercomputer (overview presentation). It is operated by the Scientific Computing Administration of the Johannes Kepler University (JKU) Linz, Austria, on behalf of a consortium consisting of JKU Linz, the University of Innsbruck, the Paris Lodron University of Salzburg, Technische Universität Wien, and the Johann Radon Institute for Computational and Applied Mathematics (RICAM). The machine was purchased for a cooperation project that runs from 2017 to 2021 and is supported by a grant of the Federal Ministry of Education, Science, and Research (BMBWF) in the frame of the HRSM 2016 call. The system was installed in October 2017 and has been fully operational since January 2018 (opening celebration). MACH-2 is named after the Austrian physicist and philosopher Ernst Mach and replaces the previous MACH supercomputer, which was operated jointly by JKU and the University of Innsbruck from 2011 to 2017.

MACH-2 is a machine of type SGI UV 3000 from the former company Silicon Graphics International (SGI), now Hewlett Packard Enterprise (HPE). It belongs to the class of cache-coherent Non-Uniform Memory Access (ccNUMA) architectures: massively parallel supercomputers that implement a global shared memory model on top of scalable distributed hardware.
MACH-2 is housed in three racks of this kind and has the following characteristics:
- 1728 processor cores with 20 TB of global shared memory distributed among 72 blades, where
  - each blade holds two 12-core Intel Xeon E5-4650 v3 processors operating at 2.1-2.8 GHz with 30 MB L3 cache,
  - 64 blades are equipped with 256 GB and 8 blades with 512 GB of memory, and
  - all blades are connected by a NUMAlink 6 network in a 7D enhanced hypercube topology, which yields a "full bandwidth" configuration;
- mass storage consists of 4 SSD drives with 400 GB each, 12 NVMe drives with 1.6 TB each, and 24 HDD drives with 10 TB each (260 TB in total);
- network connectivity is provided by 10 Gigabit Ethernet and 40 Gigabit InfiniBand.

MACH-2 provides a peak performance of 77.4 TeraFLOPS; it is the largest shared memory machine in Austria, the largest European installation of an SGI UV 3000 system, and the second largest installation of this type worldwide (see this press report and this press report for similar configurations that set benchmark records).

The machine runs a single GNU/Linux operating system image (SUSE Linux Enterprise Server 12) with SGI Foundation Software, the SLES XFS file system and XVM volume manager, the SGI Performance Suite with SGI MPI, the Intel Parallel Studio Composer Edition, and the workload management software PBS Pro (open source).

RI Keywords
Cache coherent non-uniform memory architecture (CcNUMA), Shared memory computer, Big data number cruncher, Large scale SMP, Single system image
RI Category
Centralised Computing Facilities
Mechanical Engineering Facilities
Micro- and Nanotechnology facilities
Distributed Computing Facilities
Conceptual Models
Software Service Facilities
Biomedical Imaging Facilities
Mathematics Centres of Competence
Bio-informatics Facilities
Environmental Health Research Facilities
Complex Data Facilities
Scientific Domain
Social Sciences
Engineering and Energy
Physics, Astronomy, Astrophysics and Mathematics
Chemistry and Material Sciences
Biological and Medical Sciences
ESFRI Domain
Physical Sciences and Engineering
Server Pool for Scientific Computing

Mach - a big SMP system with 2048 CPU cores and 16 TB RAM. Use this if you need massive parallelism or huge amounts of RAM for your jobs. Unfortunately, the size of the system also means that it is not well suited to lots of smaller jobs; for those you might want to use our cluster instead.

Alex - a cluster with 48 nodes of 8 CPU cores each and either 96 or 48 GB RAM. If this is enough for your jobs, you should run them here: it is much more robust and reliable, more readily available, and, thanks to being partitioned into many smaller units instead of a single huge one, it even has a slight performance edge. You can also run jobs that don't fit on a single node using MPI, but that may carry a perceptible performance penalty; please test on both systems and pick the one offering better performance and reliability. While Alex can still be used to good effect for existing projects, it is nevertheless a legacy system, and for new projects you should preferably use Lise.

Lise - like Alex, an Altix ICE 8200, but bigger and with a faster interconnect. There are 128 compute nodes available (with a total of 1024 CPU cores), and the interconnect is dual-rail InfiniBand, which can be used to full extent via a specifically configured MVAPICH2 MPI implementation. The software infrastructure on Lise is developed and implemented entirely in-house by the department for scientific computing and almost exclusively uses free software (Debian GNU/Linux as OS, Torque/Maui as batch system and scheduler, MVAPICH2 for MPI, etc.). If you need to choose between Alex and Lise, pick Lise.

Occasional cooperation activities (e.g. guest researchers) with other academic institutions in Austria, Germany, the Czech Republic, Italy, Spain, ...
Austrian Centre for Scientific Computing (ACSC)
Date of last update: 03/10/2018