Parallel Programming with MPI

This flexible course covers the MPI part of our regular course on MPI and OpenMP. You will learn MPI from beginner to advanced level at your own pace over a few weeks. Virtual seminars will give you the opportunity to ask experts all your questions.

When

Nov 04, 2024 11:00 AM to Dec 06, 2024 12:00 PM
(Europe/Berlin / UTC+01:00)

Where

Online (flexible)

MPI is a communication protocol for parallel programming based on message exchange between individual processes. These processes can run on distributed-memory systems with multiple nodes, which makes MPI scalable beyond a single computer. MPI offers a range of tools for passing information between processes, so a parallel program can be divided into many smaller parts that work together. The communication this requires always introduces overhead, which normally limits the scalability of a parallel program. A well-optimized program, however, can run with satisfactory efficiency on a distributed-memory system (e.g., a cluster or supercomputer) using thousands or even tens of thousands of nodes. This course also offers the opportunity for intensive exchange with the instructors and other course participants.
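
To give a flavor of this message-passing model, the following minimal C sketch is included for illustration only (it is not an excerpt from the course material): every process reports its rank, and rank 1 sends an integer to rank 0 using point-to-point communication. It assumes a standard MPI installation with the usual mpicc and mpirun wrappers; the file name hello_mpi.c is made up for this example.

    /* Minimal MPI sketch: rank discovery plus one point-to-point message.
     * Illustrative example only, not part of the course material.          */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, value = 0;

        MPI_Init(&argc, &argv);                /* start the MPI runtime      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* id of this process         */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes  */

        printf("Hello from rank %d of %d\n", rank, size);

        if (size > 1) {
            if (rank == 1) {
                value = 42;
                MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
            } else if (rank == 0) {
                MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("Rank 0 received %d from rank 1\n", value);
            }
        }

        MPI_Finalize();                        /* shut down the MPI runtime  */
        return 0;
    }

A typical run on four processes would be: mpicc hello_mpi.c -o hello_mpi, then mpirun -np 4 ./hello_mpi.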

Location

Flexible online course: Combination of self-study and live seminars (HLRS Supercomputing Academy)

Prerequisites
  • Basic knowledge of C or Fortran
  • Basic knowledge of Linux
  • Basic understanding of computer hardware
Content levels
  • Beginners: 12 hours
  • Intermediate: 16 hours
  • Advanced: 12 hours
Target audience

This course is intended for, but is not limited to, the following groups:

  • Software developers
  • Software architects
  • Computer scientists
  • IT enthusiasts
  • Simulation engineers