High Performance Computing

With the commissioning of MaRC3a, a powerful compute cluster is available for research at the university. The cluster is continuously adapted to the requirements of modern high-performance computing (HPC).

To keep pace with current requirements and to provide the best possible support for top-level research in big data and artificial intelligence, the roughly ten-year-old HPC cluster MaRC2 was replaced by MaRC3a in April 2022. In addition to classical HPC tasks, MaRC3a supports massively parallel applications such as image processing, physical simulations and neural networks using high-performance GPU accelerators. More details about the cluster can be found under Worth knowing (see below).

Further expansion stages are planned. The cluster is expected to be extended in August 2022 by MaRC3b, which focuses on medical and psychological computations. Preparations are also underway for the third expansion stage, MaRC3c (MaRCQuant), which will be optimized for quantum physics calculations.

The HPC cluster is complemented by the highly available Marburg Storage Cluster (MaSC), which enables large data volumes to be handled efficiently. In addition to storage for big data processing, MaSC is also explicitly available as "hot" and "cold" storage for working groups at Philipps University; imaging and omics data, for example, can be written there directly from the measurement devices.
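
As an illustration only, the following minimal sketch shows how a large measurement file on shared storage might be processed in manageable chunks rather than being loaded into memory at once; the file path, data type and chunk size are hypothetical and not taken from the MaSC documentation.

    import numpy as np

    # Hypothetical path on the shared storage cluster; the actual mount point
    # and file layout depend on your working group's MaSC project.
    DATA_FILE = "/path/to/masc/project/imaging_run_001.raw"

    # Memory-map the raw file so that only the chunk currently being processed
    # is read from storage, instead of loading the whole dataset into RAM.
    data = np.memmap(DATA_FILE, dtype=np.float32, mode="r")

    chunk_size = 10_000_000  # elements per chunk
    total = 0.0
    count = 0

    for start in range(0, data.size, chunk_size):
        chunk = np.asarray(data[start:start + chunk_size])  # read one chunk
        total += chunk.sum(dtype=np.float64)
        count += chunk.size

    print("mean value:", total / count)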

The two clusters MaRC3a and MaSC are generally available to all researchers at the university for smaller projects. Philipps University pursues a consolidation approach for high-performance computing and compute-related storage: by sharing the HPC and storage clusters, the available capacities can be used optimally, and individual workgroups no longer have to administer and maintain their own systems. Optional buy-ins allow workgroups to contribute storage space (MaSC) or compute nodes and GPUs (MaRC3) and thereby specifically expand the resources available for their own projects. GPUs can usually be integrated promptly; requests for additional compute nodes are collected so that a joint procurement process can be initiated if necessary. Please feel free to contact our HPC team, see Help and Support (see below).

Target group

Scientists and researchers at the university

Requirements

  • Central User Account (Uni-Account / staff)
  • Declaration of use signed by working group leader (cf. Registration)
  • Registration

    Access to High Performance Computing (HPC) at Philipps-Universität Marburg is arranged centrally by the University Computer Center (HRZ). For this purpose, the workgroup leadership must first appoint one or more so-called HPC managers, who can then independently activate individual university accounts (staff) from their workgroup for HPC use. Web forms are available for both processes.

  • Manuals

    MaRC3a & MaSC Guide (Login required)

  • Help and support

    You can reach the HPC team at . Please include your university account and a meaningful subject line with your inquiry.

  • Worth knowing

    MaRC3a is located in the Synthetic Microbiology Research Building (ZSM2). The cluster was made possible in cooperation with various research groups at Philipps University and is jointly operated by the Center for Synthetic Microbiology (Synmikro) and the University Computer Center (HRZ). Currently, MaRC3a consists of 26 compute nodes, which together provide 1,664 processor cores; each node is equipped with 256 to 1,024 GB of RAM. In addition, a total of 45 data center GPUs of different performance classes (NVIDIA V100S, A40 and A100) are available for massively parallel applications such as AI, image processing and physical simulations. As shared storage, MaRC3 has 134 TB for user data and programs and another 134 TB for temporary data. The HPC cluster also has direct access to the MaSC storage cluster, which provides over 3 PB of storage capacity for the project data of individual workgroups.
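
    As a rough illustration of how the GPU nodes can be used for massively parallel workloads, the following Python sketch selects an available NVIDIA GPU and runs a small matrix multiplication on it. It assumes that PyTorch is available in your environment, which is not stated on this page; how jobs are actually submitted to the compute nodes is described in the MaRC3a & MaSC Guide (see Manuals above).

        import torch

        # Select a GPU if one is visible to the job, otherwise fall back to the CPU.
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        if device.type == "cuda":
            # e.g. reports one of the V100S, A40 or A100 accelerators
            print("Running on:", torch.cuda.get_device_name(device))
        else:
            print("No GPU visible, running on CPU")

        # A small dense matrix multiplication as a stand-in for a massively
        # parallel workload (AI, image processing, physical simulation).
        a = torch.randn(4096, 4096, device=device)
        b = torch.randn(4096, 4096, device=device)
        c = a @ b

        print("Result checksum:", float(c.sum()))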