• Parallel Programming Workshop (MPI, OpenMP)
  • This course offers lectures and hands-on training from 0 to 100, i.e., from the beginning of parallel programming up to the high end needed for efficient parallelization on current clusters of shared memory and ccNUMA nodes.
When

Oct 14, 2024 09:00 AM to Oct 18, 2024 10:00 AM
(Europe/Berlin / UTC+02:00)

Where

Online and HLRS, Room 0.439 / Rühle Saal University of Stuttgart Nobelstraße 19 70569 Stuttgart, Germany

Contact Phone

0711 685 65796

Distributed memory parallelization with the Message Passing Interface MPI (Mon, for beginners):
On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominant programming model. The course gives an introduction to MPI-1. Hands-on sessions (in C, Fortran, and Python) allow participants to immediately test and understand the basic constructs of MPI.
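
To give a flavor of what the beginner hands-on sessions build up to, here is a minimal MPI point-to-point sketch in C; the token value and printed messages are illustrative, not taken from the course material:

    #include <stdio.h>
    #include <mpi.h>

    /* run with at least two processes, e.g.: mpicc token.c && mpirun -n 2 ./a.out */
    int main(int argc, char *argv[])
    {
        int rank, size, token;

        MPI_Init(&argc, &argv);                  /* start the MPI runtime      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank        */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes  */

        if (rank == 0) {
            token = 42;
            /* blocking send of one int to rank 1, message tag 0 */
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            printf("Rank 0 of %d sent token %d\n", size, token);
        } else if (rank == 1) {
            /* blocking receive of one int from rank 0 */
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Rank 1 received token %d\n", token);
        }

        MPI_Finalize();                          /* shut down the runtime      */
        return 0;
    }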

Shared memory parallelization with OpenMP (Tue, for beginners):
The focus is on shared memory parallelization with OpenMP, the key concept for hyper-threading, dual-core, multi-core, shared memory, and ccNUMA platforms. Hands-on sessions (in C and Fortran) allow participants to immediately test and understand the directives and other interfaces of OpenMP. Tools for debugging race conditions are also presented.
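
A minimal OpenMP sketch in C of the kind of construct the hands-on sessions cover; the loop and its bounds are illustrative assumptions:

    #include <stdio.h>
    #include <omp.h>

    /* compile with OpenMP enabled, e.g.: gcc -fopenmp sum.c */
    int main(void)
    {
        const int n = 1000000;
        double sum = 0.0;
        int i;

        /* loop iterations are distributed across threads;
           reduction(+:sum) avoids a race condition on the shared sum */
        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < n; i++) {
            sum += 1.0 / (i + 1);
        }

        printf("Partial harmonic sum with up to %d threads: %f\n",
               omp_get_max_threads(), sum);
        return 0;
    }

Without the reduction clause, concurrent updates to sum would be exactly the kind of race condition the debugging tools mentioned above are designed to find.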

Intermediate and advanced topics in parallel programming (Wed-Fri):
Topics include the advanced use of communicators and virtual topologies, one-sided communication, derived datatypes, and MPI-2 parallel file I/O. MPI-3.0 introduced a new shared memory programming interface, which can be combined with MPI message passing and remote memory access on the cluster interconnect. It can be used for direct neighbor accesses similar to OpenMP or for direct halo copies, and it enables new hybrid programming models; a short sketch follows below. Several aspects of hybrid (mixed-model) MPI+OpenMP parallelization are discussed in the advanced MPI and OpenMP topics.
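
As a rough illustration of the MPI-3.0 shared memory interface mentioned above (segment size, neighbor pattern, and variable names are illustrative assumptions, not the course's exercises):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int provided, world_rank, node_rank, node_size, left_rank, disp_unit;
        MPI_Comm node_comm;
        MPI_Win  win;
        MPI_Aint seg_size;
        double  *my_segment, *left_segment;

        /* hybrid MPI+OpenMP codes should request a thread level explicitly */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        /* split MPI_COMM_WORLD into one communicator per shared memory node */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);
        MPI_Comm_rank(node_comm, &node_rank);
        MPI_Comm_size(node_comm, &node_size);

        /* each rank contributes 100 doubles to a node-local shared window */
        MPI_Win_allocate_shared(100 * sizeof(double), sizeof(double),
                                MPI_INFO_NULL, node_comm, &my_segment, &win);

        /* query the left node-neighbor's segment for direct load/store access */
        left_rank = (node_rank + node_size - 1) % node_size;
        MPI_Win_shared_query(win, left_rank, &seg_size, &disp_unit,
                             &left_segment);

        my_segment[0] = (double)world_rank;
        MPI_Win_fence(0, win);        /* synchronize writes before reading */
        printf("Rank %d reads %.0f from its left node-neighbor\n",
               world_rank, left_segment[0]);
        MPI_Win_fence(0, win);

        MPI_Win_free(&win);
        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }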

  • Language: English
  • Entry level: Basic
Prerequisites and content levels

  • Unix / C or Fortran (or Python for the MPI part)

Content levels
  • Basic: 14 hours
  • Intermediate: 12 hours
  • Advanced: 10.5 hours
  • Community: 0.75 hours

Learn more about course curricula and content levels.

Agenda

All times are local times in the Central European Summer Time zone (Berlin). See the link to the detailed (preliminary) program.

Handouts

Each participant will receive all slides as PDF and all exercises as tar.gz and zip archives. Most MPI exercises are also available, in addition to C and Fortran, for Python+mpi4py+numpy.

HLRS concept for on-site courses

Besides the content of the training itself, an important aspect of this event is the scientific exchange among the participants. We try to facilitate such communication by

  • a social event on the evening of the first course day,
  • offering common coffee and lunch breaks, and
  • working together in groups of two during the exercises.

Please note that the recommendations of the Occupational Safety and Health Measures of the University of Stuttgart valid at the time of the event, as well as additional rules, may apply.

Registration information

Register via the button at the top of this page (will be available soon). Registration closes on September 25, 2024.

Fees
  • Students without master’s degree or equivalent: 40 Euro
  • PhD students or employees at a German university or public research institute: 90 Euro
  • PhD students or employees at a university or public research institute in an EU, EU-associated or PRACE country other than Germany: 180 Euro
  • PhD students or employees at a university or public research institute outside of EU, EU-associated or PRACE countries: 360 Euro
  • Other participants, e.g., from industry, other public service providers, or government: 960 Euro

See the link to the list of EU, EU-associated (Horizon Europe), and PRACE countries. Our course fees include coffee breaks (in classroom courses only).

Train the Trainer - TtT

In conjunction with this course, a Train the Trainer Program is provided. Whereas this regular course teaches parallel programming, the Train the Trainer Program is an education for future trainers in parallel programming. For further details, see here.

HLRS Training Collaborations in HPC

HLRS is part of the Gauss Centre for Supercomputing (GCS), together with JSC in Jülich and LRZ in Garching near Munich. EuroCC@GCS is the German National Competence Centre (NCC) for High-Performance Computing. HLRS is also a member of the Baden-Württemberg initiative bwHPC.

This course is provided within the framework of the bwHPC training program.

Further courses

See the training overview and the Supercomputing Academy pages.