Join us in Frankfurt, Germany, for ISC 2019, the International Conference on High Performance Computing. First held in 1986, ISC High Performance is the world’s oldest and Europe’s most important conference and networking event for the HPC community. It offers a strong five-day technical program focusing on HPC technological development and its application in scientific fields, as well as its adoption in commercial environments.

This year we have the following OpenMP events taking place:

OpenMP BoF: OpenMP 5.0 is Here: What do I Need to Know About it and What’s Coming Next?

  • BoF Organizer/Speakers: James H. Cownie, Michael Klemm, Bronis R. de Supinski, Barbara Chapman, Simon McIntosh-Smith, Christian Terboven and James Beyer.
  • Event Type: Birds of a Feather
  • Date: Tuesday, June 18th
  • Time: 3:45pm – 4:45pm
  • Room: Analog 1,2
  • Description: OpenMP is the most popular way for HPC codes to exploit shared-memory parallelism. OpenMP 5.0 extends that support with a comprehensive, vendor-neutral way to describe offloading computation to attached accelerators, as well as enhancements to its support for CPUs. This BoF will provide the information you need to understand the new standard and give you the opportunity to question OpenMP experts, many of whom were involved in defining it. After short (three minutes maximum) presentations from the experts, most of the BoF time will be devoted to open discussion of questions from the audience. If you want to improve your knowledge of modern OpenMP and understand how its new features can be useful to you, you should attend this BoF.
  • Additional information and registration

Tutorial: OpenMP Common Core: Learning Parallelization of Real Applications from the Ground-Up

  • Tutorial Authors: Toni Collis, Manuel Arenaz, Barbara Chapman, Oscar Hernandez, Javier Novo Rodriguez
  • Event Type: Tutorial
  • Date: Sunday, June 16th
  • Time: 9am – 1pm
  • Room: Analog 1
  • Description: As HPC continues to move towards a model of multi-core and accelerator programming, a detailed understanding of shared-memory models and how best to use accelerators has never been more important. OpenMP is the de facto standard for writing multi-threaded code to take advantage of shared-memory platforms, but making optimal use of it can be complex. With a specification running to over 500 pages, OpenMP has grown into an intimidating API viewed by many as for “experts only”. This tutorial will focus on the 16 most widely used constructs that make up the ‘OpenMP common core’. We will present a unique, productivity-oriented approach, introducing each construct’s usage through common motifs in scientific code and showing how each motif is parallelized. This will enable attendees to focus on parallelizing components and on how components combine in real applications. Attendees will learn actively through a carefully selected set of exercises, building knowledge of how to parallelize key motifs (e.g., matrix multiplication, map-reduce) that recur across scientific codes in everything from CFD to molecular simulation. Attendees will need to bring their own laptop with an OpenMP compiler installed.
  • Target Audience:  HPC programmers with little/no formal software development training. Knowledge of sequential programming in C or Fortran is necessary. As C examples will be used, some knowledge of C programming is beneficial but not necessary. We also welcome HPC educators to attend and try out this new approach to HPC training.
  • Prerequisites:  Please bring a laptop that has an OpenMP compliant compiler installed. Necessary: familiarity with sequential programming in C or Fortran. Attendees will be presented with C example codes. Optional: An OpenACC compliant compiler to test conversion from OpenACC to OpenMP.
  • Additional information and registration

Tutorial: Advanced OpenMP: Performance and 5.0 Features

  • Tutorial Authors: Christian Terboven, Michael Klemm, Kelvin Li, Bronis R. de Supinski
  • Event Type: Tutorial
  • Date: Sunday, June 16th
  • Time: 2pm – 6pm
  • Room: Analog 1
  • Description: With the increasing prevalence of multi-core processors, shared-memory programming models are essential. OpenMP is a popular, portable, widely supported and easy-to-use shared-memory model. Developers usually find OpenMP easy to learn. However, they are often disappointed with the performance and scalability of the resulting code. This disappointment stems not from shortcomings of OpenMP but rather from the lack of depth with which it is employed. Our “Advanced OpenMP Programming” tutorial addresses this critical need by exploring the implications of possible OpenMP parallelization strategies, both in terms of correctness and performance. While we quickly review the basics of OpenMP programming, we assume attendees understand basic parallelization concepts and will easily grasp those basics. In two parts we discuss language features in depth, with emphasis on advanced features like vectorization and compute acceleration. In the first part, we focus on performance aspects, such as data and thread locality on NUMA architectures, and on exploiting the comparatively new language features. The second part presents the directives for attached compute accelerators.
  • Target Audience: Our primary target is HPC programmers with some knowledge of OpenMP that want to implement efficient shared-memory code.
  • Prerequisites: General knowledge of computer architecture concepts (e.g., SMT, multi-core, and NUMA), basic knowledge of OpenMP, and good knowledge of C, C++, or Fortran.
  • Additional information and registration

Panel Discussion: Why Does OpenMP Matter to Me?

  • Panelists: Bronis R. de Supinski, Barbara Chapman, Michael Klemm, Simon McIntosh-Smith and Martin Schulz
  • Moderator: Jim Cownie
  • Event Type: Panel discussion at the Intel booth
  • Date: Tuesday, June 18th
  • Time: 5:20pm
  • Location: Intel Booth | F-930
  • Description: All of the time will be devoted to audience questions about OpenMP, MPI, and open standards in general.