- https://www.nat-esm.de/services/trainings/events/from-zero-to-multi-node-gpu-programming
- From Zero to Multi-Node GPU Programming
- 2025-03-12T09:00:00+01:00
- 2025-03-12T17:30:00+01:00
- Part 1 - Fundamentals of Accelerated Computing with CUDA C/C++
Date and Time
Mar 12, 2025 from 09:00 AM to 05:30 PM (Europe/Berlin / UTC+01:00)
The course will be held online on March 12 from 9:00 a.m. to 5:30 p.m. (CET).
Registered participants will receive the BBB participation link via email the day before the course begins.
This course is part one of the three-event series, "From Zero to Multi-Node GPU Programming". Please register individually for each day you wish to attend:
- Part 1: Fundamentals of Accelerated Computing with CUDA C/C++ (March 12)
- Part 2: Accelerating CUDA C++ Applications with Multiple GPUs (March 19)
- Part 3: Scaling CUDA C++ Applications to Multiple Nodes (March 26)
Prerequisites
A free NVIDIA developer account is required to access the course material. Please register before the training at https://learn.nvidia.com/join.
Participants should additionally meet the following requirements:
- Basic C/C++ competency, including familiarity with variable types, loops, conditional statements, functions, and array manipulations
- No previous knowledge of CUDA programming is assumed
Learning Objectives
At the conclusion of the workshop, participants will have a solid understanding of the fundamental tools and techniques for GPU-accelerating C/C++ applications with CUDA. Participants will be able to:
- Write code that can be executed by a GPU accelerator
- Identify and express data and instruction-level parallelism in C/C++ applications using CUDA
- Utilize CUDA-managed memory and optimize memory migration through asynchronous prefetching
- Use command-line and visual profilers to guide optimization efforts
- Leverage concurrent streams to achieve instruction-level parallelism
- Write GPU-accelerated CUDA C/C++ applications or refactor existing CPU-only applications using a profile-driven approach
Course Structure
Accelerating Applications with CUDA C/C++
- Writing, compiling, and running GPU code
- Controlling the parallel thread hierarchy
- Allocating and freeing memory for the GPU
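As an illustrative sketch (not official course material), the topics of this module come together in a minimal CUDA C++ program: a kernel, a thread-hierarchy index calculation, and GPU-accessible memory allocation. The kernel name `doubleElements` and the launch configuration are chosen for this example only.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each GPU thread doubles one element of the array.
__global__ void doubleElements(int *a, int n)
{
    // Map this thread to a unique global index in the grid.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)          // guard threads that fall past the end of the array
        a[i] *= 2;
}

int main()
{
    const int N = 1024;
    int *a;

    // Allocate memory that is accessible from both CPU and GPU.
    cudaMallocManaged(&a, N * sizeof(int));
    for (int i = 0; i < N; ++i) a[i] = i;

    // Launch enough 256-thread blocks to cover all N elements.
    int threads = 256;
    int blocks  = (N + threads - 1) / threads;
    doubleElements<<<blocks, threads>>>(a, N);

    cudaDeviceSynchronize();   // wait for the kernel to finish
    printf("a[10] = %d\n", a[10]);

    cudaFree(a);               // release the GPU-accessible allocation
    return 0;
}
```

A program like this is compiled and run with `nvcc example.cu -o example && ./example` on a machine with an NVIDIA GPU.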
Managing Accelerated Application Memory with CUDA C/C++
- Profiling CUDA code with the command-line profiler
- Details on unified memory
- Optimizing unified memory management
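A rough sketch of the kind of optimization this module covers: prefetching a unified-memory allocation to the device before kernel launch (and back to the host before CPU access) so the profiler no longer shows on-demand page migrations. The sizes and kernel here are illustrative assumptions.

```cuda
#include <cuda_runtime.h>

__global__ void init(float *a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] = 1.0f;
}

int main()
{
    const int N = 1 << 20;
    float *a;
    cudaMallocManaged(&a, N * sizeof(float));

    int device;
    cudaGetDevice(&device);

    // Migrate the unified-memory pages to the GPU ahead of the kernel,
    // instead of paying for on-demand page faults during execution.
    cudaMemPrefetchAsync(a, N * sizeof(float), device);

    init<<<(N + 255) / 256, 256>>>(a, N);

    // Prefetch back to the host before the CPU touches the data.
    cudaMemPrefetchAsync(a, N * sizeof(float), cudaCpuDeviceId);
    cudaDeviceSynchronize();

    float sum = 0.0f;
    for (int i = 0; i < N; ++i) sum += a[i];

    cudaFree(a);
    return 0;
}
```

The effect of such prefetching is typically verified with a command-line profiler run, e.g. `nsys profile ./app`, by comparing memory-migration activity before and after.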
Asynchronous Streaming and Visual Profiling for Accelerated Applications with CUDA C/C++
- Profiling CUDA code with NVIDIA Nsight Systems
- Using concurrent CUDA streams
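As a hedged sketch of the streams topic: kernels launched into different non-default streams may overlap on the device. The chunking scheme and kernel below are assumptions made for illustration, not course code.

```cuda
#include <cuda_runtime.h>

__global__ void work(float *a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] = a[i] * 2.0f + 1.0f;
}

int main()
{
    const int N = 1 << 22;
    const int numStreams = 4;
    const int chunk = N / numStreams;

    float *a;
    cudaMallocManaged(&a, N * sizeof(float));

    cudaStream_t streams[numStreams];
    for (int s = 0; s < numStreams; ++s)
        cudaStreamCreate(&streams[s]);

    // One kernel per stream, each on its own chunk of the array.
    // Kernels in different non-default streams may run concurrently.
    for (int s = 0; s < numStreams; ++s) {
        float *chunkPtr = a + s * chunk;
        work<<<(chunk + 255) / 256, 256, 0, streams[s]>>>(chunkPtr, chunk);
    }

    cudaDeviceSynchronize();   // wait for all streams to complete

    for (int s = 0; s < numStreams; ++s)
        cudaStreamDestroy(streams[s]);
    cudaFree(a);
    return 0;
}
```

Whether the kernels actually overlap is something NVIDIA Nsight Systems makes visible in its timeline view.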
Certification
Upon successfully completing the course assessments, participants will receive an NVIDIA DLI Certificate, recognizing their subject matter expertise and supporting their professional career growth.
Instructors
Markus Velten and Dr. Sebastian Kuckuk, both certified NVIDIA DLI Ambassadors.
The course is co-organised by NHR@FAU, NHR@TUD and the NVIDIA Deep Learning Institute (DLI).
Prices and Eligibility
This course is open and free of charge for participants affiliated with academic institutions in European Union (EU) member states and Horizon 2020-associated countries.