A number of compilers and tools from various vendors and open-source community initiatives implement the OpenMP API. If we are missing any, please Contact Us with your suggestions.

Vendor/Source Compiler/Language Information
Absoft Pro Fortran / Fortran
Versions 11.1 and later of the Absoft Fortran 95 compiler for Linux, Windows, and Mac OS X include integrated OpenMP 3.0 support. Version 21.0 supports OpenMP 3.1. Compile with -openmp.

More information

AMD / C/C++
AOMP is AMD’s LLVM/Clang-based compiler that supports OpenMP and offloading to multiple GPU acceleration targets (multi-target).

More Information

The AMD Optimizing C/C++ Compiler (AOCC) is a high-performance compiler suite supporting C/C++ and Fortran applications and providing advanced optimizations. It is a clang/LLVM- and flang-based compiler suite with complete OpenMP 4.5 and partial OpenMP 5.0 support for C/C++, and complete OpenMP 4.0 and partial OpenMP 4.5 support for Fortran.

More information

ARM C/C++/Fortran
Available on Linux
C/C++ – Support for OpenMP 3.1 and all non-offloading features of OpenMP 4.0/4.5. Offloading features are under development.
Fortran – Full support for OpenMP 3.1 and limited support for OpenMP 4.0/4.5.
Compile and link your code with -fopenmp.

More information

Barcelona Supercomputing Center Mercurium
C/C++/Fortran
Mercurium is a source-to-source research compiler, available for download at https://github.com/bsc-pm/mcxx. OpenMP 3.1 is almost fully supported for C, C++, and Fortran. In addition, almost all tasking features introduced in newer versions of OpenMP are also supported.
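For reference, the tasking features mentioned above cover code in the following style. This is a generic OpenMP tasking sketch in plain C, not tied to Mercurium specifically:

    #include <stdio.h>

    /* Naive task-parallel Fibonacci: each recursive call becomes an explicit task. */
    static int fib(int n) {
        int x, y;
        if (n < 2) return n;
        #pragma omp task shared(x)
        x = fib(n - 1);
        #pragma omp task shared(y)
        y = fib(n - 2);
        #pragma omp taskwait   /* wait for both child tasks before combining */
        return x + y;
    }

    int main(void) {
        int result;
        #pragma omp parallel
        #pragma omp single      /* one thread spawns the initial task tree */
        result = fib(10);
        printf("fib(10) = %d\n", result);
        return 0;
    }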

More Information

Classic Flang
Fortran
Classic Flang is a Fortran compiler for LLVM.

Classic Flang implements substantially all of OpenMP 4.5 on Linux/x86-64, Linux/ARM, and Linux/OpenPOWER, with limited target offload support on NVIDIA GPUs.

By default, TARGET regions are mapped to the multicore host CPU as the target, with DO and DISTRIBUTE loops parallelized across all OpenMP threads. SIMD works by passing vectorization metadata to LLVM. Known limitations: DECLARE SIMD has no effect on SIMD code generation; TASK DEPEND/PRIORITY, TASKLOOP FIRSTPRIVATE/LASTPRIVATE, DECLARE REDUCTION, and the LINEAR/SCHEDULE/ORDERED(N) clauses on the DO construct are not supported. The limited support for target offload to NVIDIA GPUs includes basic support for offloading !$omp target combined constructs.

Compile with -mp to enable OpenMP for multicore CPUs on all platforms.
Compile with -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda to enable target offload to NVIDIA GPUs.

More information.

Fujitsu / C/C++/Fortran
The compilers in the ‘Technical Computing Suite for the PRIMEHPC FX100’ software package support OpenMP 3.1.

More information.

GNU GCC

C/C++/Fortran

The free and open-source GNU Compiler Collection (GCC) supports, among others, Linux, Solaris, AIX, macOS, Windows, FreeBSD, NetBSD, OpenBSD, DragonFly BSD, HP-UX, and RTEMS, on architectures such as x86_64, PowerPC, and ARM, among many more.

Code offloading to NVIDIA GPUs (nvptx) and the AMD Radeon (GCN) GPUs Fiji and Vega is supported on Linux.

OpenMP 4.0 has been fully supported for C, C++, and Fortran since GCC 4.9; OpenMP 4.5 has been fully supported for C and C++ since GCC 6 and partially for Fortran since GCC 7. OpenMP 5.0 has been partially supported for C and C++ since GCC 9, with support extended in GCC 10. The next release, GCC 11, will fully support OpenMP 4.5 for Fortran and extend OpenMP 5.0 support for C, C++, and Fortran; the devel/omp/gcc-10 (og10) branch augments the GCC 10 release with OpenMP and offloading features, mostly from the GCC 11 development branch.

Compile with -fopenmp to enable OpenMP.
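For a quick check that OpenMP is enabled, a minimal program along the following lines can be built and run; the example and the file name are illustrative, not part of GCC’s documentation:

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        double sum = 0.0;
        /* Split the iterations across the thread team and combine the
           per-thread partial sums at the end of the loop. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < 1000; i++)
            sum += 0.5 * i;
        printf("max threads: %d, sum = %.1f\n", omp_get_max_threads(), sum);
        return 0;
    }

Compiled, for example, with: gcc -fopenmp sum.c -o sum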

GCC binary builds are provided by Linux distributions (often with offloading support in additional packages) and by multiple entities for other platforms; you can also build it from source.
Releases and release notes: https://gcc.gnu.org/
OpenMP documentation: https://gcc.gnu.org/onlinedocs/libgomp/
Building and using GCC for offloading: https://gcc.gnu.org/wiki/Offloading

HPE CCE
C/C++/Fortran
The Cray Compiling Environment (CCE) 11.0 (November 2020) supports OpenMP 4.5 for C, C++, and Fortran. Partial support for OpenMP 5.0 is also available (see links below). OpenMP is turned off by default for all languages.
For more information on OpenMP support in current and past versions of CCE, see:

IBM XL

C/C++/Fortran

XL C/C++ for Linux V16.1.1 and XL Fortran for Linux V16.1.1 fully support OpenMP 4.5 features including the target constructs.

Compile with -qsmp=omp to enable OpenMP directives and with -qoffload for offloading the target regions to GPUs.

For more information, please visit: IBM XL C/C++ for Linux and IBM XL Fortran for Linux

Intel / C/C++/Fortran
Available on Windows, Linux, and macOS.

  • OpenMP 3.1 C/C++/Fortran fully supported in the version 12.0, 13.0, and 14.0 compilers
  • OpenMP 4.0 C/C++/Fortran supported in the version 15.0 and 16.0 compilers
  • OpenMP 4.5 C/C++/Fortran supported in the version 17.0, 18.0, and 19.0 compilers
  • OpenMP 4.5 and a subset of OpenMP 5.0 supported in the Intel C/C++/Fortran Compiler Classic 2021.1
  • OpenMP 4.5 and a subset of OpenMP 5.1 supported in the Intel oneAPI DPC++/C++ Compiler 2021.1 under -fiopenmp -fopenmp-targets=spir64
  • OpenMP 4.5 and a subset of OpenMP 5.1 supported in the Intel oneAPI Fortran Compiler (Beta) under -fiopenmp -fopenmp-targets=spir64

Compile with -Qopenmp on Windows, or with -qopenmp or -fiopenmp on Linux or macOS.

Compile with -fiopenmp -fopenmp-targets=spir64 on Windows and Linux for offloading support.

More information

LLNL ROSE Research Compiler / C/C++/Fortran
ROSE is a source-to-source research compiler supporting OpenMP 3.0 and some OpenMP 4.0 accelerator features targeting NVIDIA GPUs.

More information

LLVM Clang
C/C++
Clang is an open-source (permissively licensed) C/C++ compiler that is available to download free of charge at http://llvm.org/releases/download.html.

Support for all non-offloading features of OpenMP 4.5 has been available since Clang 3.9. Support for offload constructs that run on the host has been available since Clang 7.0. Support for offloading to GPU devices, with some limitations, has been available since Clang 8.0.
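To illustrate what such offloading code looks like, here is a minimal sketch of a target region mapped to a GPU; the file name and the NVIDIA offload triple are assumptions for illustration, and the exact set of supported features depends on the Clang version:

    #include <stdio.h>

    int main(void) {
        double a[1024], b[1024];
        for (int i = 0; i < 1024; i++) { a[i] = i; b[i] = 0.0; }

        /* Copy a to the device, run the loop there, and copy b back. */
        #pragma omp target map(to: a) map(from: b)
        #pragma omp parallel for
        for (int i = 0; i < 1024; i++)
            b[i] = 2.0 * a[i];

        printf("b[42] = %.1f\n", b[42]);
        return 0;
    }

Compiled, for example, with: clang -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda offload.c -o offload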

Support for OpenMP 5.0 is under active development and is largely complete. Some OpenMP 5.1 features are available as well; see https://clang.llvm.org/docs/OpenMPSupport.html.

The most recent release, Clang 11.0, defaults to OpenMP 5.0 semantics.

For more details, usage modes, flags, FAQ, and more, please visit the documentation at: http://openmp.llvm.org/docs/

Flang / Fortran
Flang is the Fortran frontend of the LLVM compiler infrastructure. OpenMP support in Flang is a work in progress: Flang supports parsing of all OpenMP 4.5 constructs and a few OpenMP 5.0 constructs/clauses, while semantic checks and code generation for OpenMP 4.5 and 5.0 constructs are in progress.

For more information please see: Latest Flang release notes.

Mentor, a Siemens Business / Sourcery CodeBench Lite
C/C++/Fortran
Sourcery CodeBench (AMD GCN) Lite for x86_64 GNU/Linux is Mentor’s free GCC-based compiler, supporting OpenMP with offloading to AMD Radeon (Graphics Core Next, GCN) GPUs.

Sourcery CodeBench 2020.11, released November 16, 2020, has full support for OpenMP 4.5 for C/C++/Fortran and has limited support for OpenMP 5.0; it supports offloading to gfx803 Fiji (GCN3), gfx900 Vega 10 and gfx906 Vega 20 (GCN5) GPUs. The compiler is based on GCC’s devel/omp/gcc-10 (og10) branch and supports all GCC 10 features, enriched by OpenMP features from GCC’s development branch and OpenMP and AMD GCN improvements such as support for offloading debugging.

Free download: https://www.mentor.com/embedded-software/toolchain-services/codebench-lite-downloads

NAG Nagfor

Fortran

NAG Fortran Compiler 6.2 supports OpenMP 3.1 on x86 and x64, for Linux, Mac, and Windows. Compile with -openmp.

More Information

NVIDIA HPC Compilers / C/C++/Fortran
The NVIDIA HPC Compilers support a subset of OpenMP 5.0 in Fortran/C/C++ on Linux/x86-64, Linux/OpenPOWER, Linux/Arm, and NVIDIA GPUs, and fully support OpenMP 3.1 in Fortran/C/C++ on Linux/x86-64, Linux/OpenPOWER, and Linux/Arm.

  • Compile with -mp to enable OpenMP for multicore CPUs on all platforms.
  • Compile with -mp=gpu to enable target offload to NVIDIA GPUs (see the sketch below).
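For example, a GPU-offloaded loop using a combined construct might look like the following sketch; the nvc driver name and the file name are assumptions based on the NVIDIA HPC SDK:

    #include <stdio.h>

    #define N (1 << 20)
    static float x[N], y[N];

    int main(void) {
        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* Launch a league of teams on the device and distribute the loop
           iterations across the teams and their threads (SAXPY-style update). */
        #pragma omp target teams distribute parallel for map(to: x) map(tofrom: y)
        for (int i = 0; i < N; i++)
            y[i] = 2.0f * x[i] + y[i];

        printf("y[0] = %.1f\n", y[0]);
        return 0;
    }

Compile, for example, with nvc -mp=gpu saxpy.c for GPU offload, or nvc -mp saxpy.c for the multicore CPU path.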

More information

OpenUH Research Compiler / C/C++/Fortran
The OpenUH 3.x compiler has a full open-source implementation of OpenMP 2.5 and near-complete support for OpenMP 3.0 (including explicit task constructs) on Linux 32-bit or 64-bit platforms.

For more information or to download: https://github.com/uhhpctools/openuh

Oracle / C/C++/Fortran
Oracle Developer Studio 12.6 compilers (C, C++, and Fortran) support OpenMP 4.0 features.

Compile with -xopenmp to enable OpenMP in the compiler. For this to work, use at least optimization level -xO3, or the recommended -fast option, to generate the most efficient code.

To debug the code, compile without optimization options, add -g, and use -xopenmp=noopt. Use the -xvpara option for static correctness checking and the -xloopinfo option for loop-level messages. The latter is less comprehensive than the preferred er_src tool, which gives more detailed information on compiler optimizations; add the -g option to the compile options to enable this and run the command “er_src file.o” to extract the information.

More information

PGI / C/C++/Fortran
Refer to the NVIDIA HPC Compilers (https://developer.nvidia.com/hpc-sdk).
Texas Instruments / C
The TI cl6x compiler v8.x supports OpenMP 3.0 for multicore C66x on TI’s Keystone I family of multicore C667x/C665x Digital Signal Processor (DSP) SoCs using the Processor-SDK-RTOS.

The Linaro toolchain (gcc) 8.3.0 supports OpenMP 4.5 for multicore Cortex-A15 on TI’s AM572x and Keystone II family (K2H/K2K, K2E, K2L, K2G) SoCs using the Processor-SDK-Linux.

The TI clacc v1.x compiler supports OpenMP 3.0 and the device constructs from OpenMP 4.0 for heterogeneous multicore Cortex-A15 + C66x DSP on TI’s AM57x and Keystone II family (K2H/K2K, K2E, K2L, K2G) SoCs, using both the Processor-SDK-Linux (A15) and Processor-SDK-RTOS (C66x).

See here for the latest versions of the Processor-SDKs for various TI SoCs:

University of Auckland PARC Lab / Pyjama – research compiler
Java
Pyjama is a research compiler for OpenMP directives in Java, developed by the Parallel and Reconfigurable Computing lab at the University of Auckland. It supports most of the OpenMP 2.5 specification, corresponding to the Common Core. Beyond this, it supports advanced features, including GUI-aware directives, concepts and directives for asynchronous event-driven programming with OpenMP, and Java-specific features such as exception handling and loops over iterators. It consists of a source-to-source compiler and a runtime library, both published as open source.

The Pyjama website provides Pyjama, examples, documentation and more. The source code is hosted at: https://github.com/ParallelAndReconfigurableComputing/Pyjama.

(Last updated November 2020)

Vendor/Source Tools/Language Information
Appentra Parallelware Trainer
C, C++, Fortran
Learn parallel programming faster and at your own pace.
Learn parallelization concepts and techniques guided by parallel patterns used in real software. Use our integrated learning environment to experiment with different technologies for parallel programming. Make expert decisions for the development of multicore and GPU-accelerated software. More Information
Appentra Parallelware Analyzer
C, C++, Fortran
A static code analyzer specialized in parallelism.
Parallelware Analyzer helps developers create fast, correct parallel code in C/C++/Fortran, providing them with feedback in the form of objective and measurable metrics and seamlessly integrating into their development workflow and CI/CD tools. More Information
Arm Forge
(includes DDT, Map and Performance Reports)
C, C++, Fortran, Python
Arm Forge is a software development toolkit designed to help Linux developers write correct, scalable, and performant applications for a variety of hardware architectures, including x86, Power, Armv8, and accelerators such as NVIDIA GPUs. Forge includes three components (DDT, MAP, and Performance Reports) and can be used for serial or parallel applications relying on MPI and/or OpenMP.

Arm DDT is a powerful, easy-to-use graphical debugger. It includes static analysis that highlights potential problems in the source code, integrated memory debugging that can catch reads and writes outside of array bounds, integration with MPI message queues and much more. It provides a complete solution for finding and fixing problems whether on a single thread or thousands of threads.

Debug with Arm DDT: (https://developer.arm.com/products/software-development-tools/hpc/arm-forge/arm-ddt)

Arm MAP is a parallel profiler that shows you which lines of code took the most time and why. It supports both interactive and batch modes for gathering profile data, and supports MPI, OpenMP, and single-threaded programs. Syntax-highlighted source code with performance annotations lets you drill down to the performance of a single line, and a rich set of zero-configuration metrics shows memory usage, floating-point calculations, and MPI usage across processes.

Profile with Arm MAP: (https://developer.arm.com/products/software-development-tools/hpc/arm-forge/arm-map)

Arm Performance Reports is a lightweight performance analysis tool that generates easy-to-read reports on an application. The tool processes data from a wide range of sources (including CPU, memory, I/O, and even energy sensors) and provides actionable feedback to help end users improve the efficiency of their applications.

Analyze with Performance Reports:
https://developer.arm.com/tools-and-software/server-and-hpc/arm-architecture-tools/arm-performance-reports

BSC Extrae, Paraver / C, C++, Fortran, Java, Python
Extrae is an instrumentation package that collects performance data and saves it in the Paraver trace format. It supports the instrumentation of MPI, OpenMP, Pthreads, OmpSs, CUDA, and OpenCL, with C, C++, Fortran, Java, and Python. With respect to OpenMP, it recognizes the main runtime calls of the Intel and GNU compilers, allowing instrumentation at load time with the production binary. Support for GASPI and the latest standard of the OMPT interface is currently being tested and will be released shortly in the public version.

More information

Paraver is a trace-based performance analyzer with great flexibility for exploring the collected data. It was developed to respond to the need for a qualitative global perception of application behavior by visual inspection, and then to be able to focus on detailed quantitative analysis of the problems. The tool can be considered a data browser that can explore any information expressed in its trace format. Extrae is the main provider of Paraver traces, although the trace format is public and has been used to collect information on system behavior, power metrics, and user-customized metrics.

More information

HPE Code Parallelization Assistant
Reveal
HPE’s Code Parallelization Assistant, which is part of the HPE Cray Programming Environment, combines runtime performance statistics and program source code visualization with Cray Compiling Environment (CCE) compile-time optimization feedback to identify and exploit parallelism. This tool provides the ability to easily navigate through source code to highlighted dependencies or bottlenecks during the optimization phase of program development or porting.

Using the program library provided by CCE and the performance data collected by HPE’s Performance Analysis Tool, the user can navigate through his or her source code to understand which high-level loops could benefit from OpenMP parallelism, in addition to loop-level optimizations such as exposing vector parallelism. It provides dependency and variable-scoping information for those loops and assists the user with creating parallel directives.

HPE Performance Analysis Tool
(CrayPat, Apprentice2)
C/C++/Fortran
HPE’s Performance Analysis Tool, which is part of the HPE Cray Programming Environment, provides an integrated infrastructure for measurement, analysis, and visualization of computation, communication, I/O, and memory utilization to help users optimize programs for faster execution and more efficient computing resource usage. With both simple and advanced interfaces, HPE’s Performance Analysis Tool allows the user to easily extract performance information from applications and use the tools’ wealth of capability to profile large, complex codes at scale.

The toolset allows developers to perform sampling and tracing experiments on executables, collecting information at the whole-program, function, loop, and line level. Programs that use MPI, SHMEM, OpenMP (including target offload), CUDA, HIP, or a combination of these programming models are supported. Profiling of applications built with the CCE, Intel, Arm Allinea, AMD, and GNU compilers is supported.

More information

Intel VTune Amplifier
C, C++, C#, Fortran, Python, Go, Java, OpenCL
Intel VTune Amplifier is a low-overhead, high-resolution performance profiling and analysis tool that can be used to collect performance statistics for applications written in various languages, including C, C++, and Fortran, and using OpenMP and MPI. Intel VTune Amplifier includes various analysis types, such as Hotspots, Threading, HPC Performance Characterization, Memory Consumption, Memory Access, and Microarchitecture Exploration.

Intel VTune Amplifier’s Platform Profiler analysis helps users identify how well an application uses the underlying architecture and how users can optimize the hardware configuration of their system. It displays high-level system configuration such as processor, memory, storage layout, PCIe and network interfaces, as well as performance metrics observed on the system such as CPU and memory utilization, CPU frequency, cycles per instruction (CPI), memory and disk input/output (I/O) throughput, power consumption, cache miss rate per instruction, and so on.

More Information

Intel Advisor
C, C++, Fortran
Intel Advisor provides two tools to help ensure your Fortran, C and C++ applications realize full performance potential on modern Intel processors: Vectorization Advisor and Threading Advisor.

Vectorization Advisor is a vectorization optimization tool that lets you identify loops that will benefit most from vectorization, identify what is blocking effective vectorization, forecast the benefit of alternative data reorganizations, and increase confidence that vectorization is safe. Additionally, cache-aware Roofline Analysis visualizes actual performance against hardware-imposed performance ceilings (rooflines), such as memory bandwidth and compute capacity, to help you identify effective optimization strategies.

More information

Intel Inspector
C, C++, Fortran
Find errors early when they are less expensive to fix. Intel® Inspector is an easy-to-use memory and threading error debugger for C, C++, and Fortran applications that run on Windows* and Linux*. No special compilers or builds are required. Just use a normal debug or production build. Use the graphical user interface or automate regression testing with the command line. It has a stand-alone user interface on Windows and Linux or it can be integrated with Microsoft Visual Studio.

More information

Intel Trace Analyzer & Collector / C, C++, Fortran
Intel Trace Collector is a low-overhead tracing library that performs event-based tracing in applications at runtime. It collects data about the application’s MPI and serial or OpenMP* regions, and can trace a custom set of functions.

Intel Trace Analyzer is a GUI-based tool that provides a convenient way to monitor application activities gathered by the Intel Trace Collector. The tools can help you evaluate profiling statistics and load balancing, analyze the performance of subroutines or code blocks, learn about communication patterns, parameters, and performance data, check MPI correctness, and identify communication hotspots.

More information

Juelich Supercomputing Centre / Scalasca Trace Tools
The Scalasca Trace Tools are a collection of trace-based performance analysis tools that have been specifically designed for use on large-scale systems. A distinctive feature is the scalable automatic trace-analysis component, which provides the ability to identify wait states that occur, e.g., as a result of unevenly distributed workloads. Besides merely identifying wait states, the trace analyzer is also able to pinpoint their root causes and to identify the activities on the critical path of the target application, highlighting those routines which determine the length of the program execution and therefore constitute the best candidates for optimization. The Scalasca Trace Tools process traces generated by the Score-P measurement infrastructure and produce reports that can be explored with Cube or TAU ParaProf/PerfExplorer.

More information

ParaFormance Technologies / ParaFormance
ParaFormance is a software tool-chain that allows software developers to quickly and easily write multi-core software. ParaFormance enables software developers to find the sources of parallelism within their code, automatically insert the parallel business logic (using OpenMP and TBB) through user-controlled guidance, and check that the parallelized code is thread-safe.

More information

Perforce Software
(RogueWave)
TotalView for HPC
C/C++/Fortran/Python
The TotalView for HPC debugger was originally designed for debugging multi-threaded and multi-process code. Simultaneously debug many processes and threads in a single window to get complete control over program execution: running, stepping, and halting line-by-line through code within a single thread or arbitrary groups of processes or threads. Work backwards from failure through reverse debugging, isolating the root cause faster by eliminating repeated restarts of the application. Reproduce difficult problems that occur in concurrent programs that use threads, OpenMP, MPI, and CUDA. Use TotalView’s memory debugging to find memory leaks, API errors, and memory overruns in allocated memory. The new GUI extends TotalView’s mixed-language support to include Python wrappers and filters unwanted ‘glue’ routines out of the stack trace. TotalView contains early support for OMPD as defined in OpenMP 5.0.

More Information

Rice University / HPCToolkit
HPCToolkit is an integrated suite of tools for measurement and analysis of program performance on computers ranging from multicore desktop systems to the nation’s largest supercomputers. HPCToolkit provides accurate measurements of a program’s work, resource consumption, and inefficiency, correlates these metrics with the program’s source code, works with multilingual, fully optimized binaries, has very low measurement overhead, and scales to large parallel systems. HPCToolkit’s measurements provide support for analyzing a program’s execution cost, inefficiency, and scaling characteristics both within and across nodes of a parallel system.

More Information

Score-P Developer Community / Score-P
The Score-P measurement infrastructure is an extremely scalable and easy-to-use tool suite for call-path profiling, event tracing, and online analysis of applications written in C, C++, or Fortran. It supports a wide range of HPC platforms and programming models; besides OpenMP, Score-P can hook into other common models, including MPI, SHMEM, Pthreads, CUDA, OpenCL, OpenACC, and their valid combinations. Score-P is capable of gathering performance information through automatic instrumentation of functions, library interception/wrapping, source-to-source instrumentation, event- and interrupt-based sampling, and hardware performance counters. Score-P measurements are the primary input for a range of specialized analysis tools, such as Cube, Vampir, Scalasca Trace Tools, TAU, or Periscope.

More information

Signalogic / CIM Heterogeneous Programming / C, C++
CIM enables code generation for combined Intel x86 and Texas Instruments C66x platforms. Within C/C++ source code, OpenMP pragmas can be used to mark sections of code that should be compiled and built for C66x run-time. C66x I/O functions are supported, allowing the C66x to “front” incoming data for high-capacity media and streaming applications.

More Info: http://processors.wiki.ti.com/index.php/C66x_Heterogeneous_Programming

Technische Universität Dresden / Vampir
Vampir is an easy-to-use framework for performance analysis, which enables developers to quickly study program behavior at a fine-grained level of detail. Performance data obtained from a parallel program run can be analyzed with a collection of specialized performance views. Intuitive navigation and zooming are the key features of the tool, which help to quickly identify inefficient or faulty parts of a program code. Vampir allows analysis of load imbalances in OpenMP programs, visualizes the interplay of parallel APIs, such as MPI and OpenMP, and supports hardware performance counters to evaluate OpenMP code regions. Score-P is the primary code instrumentation and run-time measurement framework for Vampir.

More Information.

University of Oregon TAU
C, C++, Fortran, Java, Python, Spark
TAU is a performance evaluation tool that supports both profiling and tracing for programs written in C, C++, Fortran, Java, Python, and Spark. For instrumentation of OpenMP programs, TAU includes source-level instrumentation (Opari), a runtime “collector” API (called ORA) built into an OpenMP compiler (OpenUH), and an OpenMP runtime library supporting OMPT from the OpenMP 5.0 standard. View technical paper. TAU supports both direct probe-based measurement and event-based sampling modes for profiling. For tracing, TAU provides an open-source trace visualizer (Jumpshot) and can generate native OTF2 trace files that may be visualized in the Vampir trace visualizer. TAU Commander simplifies the TAU workflow and installation. TAU supports both the PAPI and LIKWID toolkits for accessing low-level, processor-specific hardware performance counter data and correlating it with OpenMP code regions. TAU ships with a BSD-style license.

More Information.

University of Oregon APEX
C/C++, Fortran
APEX is an introspection and runtime adaptation library for asynchronous multitasking runtime systems. However, APEX is not only useful for AMT/AMR runtimes; it can be used by any application wanting to perform runtime adaptation to deal with heterogeneous and/or variable environments. APEX provides an API for measuring actions within the OpenMP runtime, using the OpenMP 5.0 OMPT interface. APEX can generate TAU profiles, CSV files, task graphs, task scatterplots, OTF2 traces, or Google Trace Events traces. APEX also provides a policy engine for autotuning of OpenMP parameters such as thread count, scheduler, or chunk size, or for adaptation to a changing environment such as soft or hard power caps.
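For context, first-party tools of this kind attach to the OpenMP runtime through the OMPT interface roughly as follows. This is a minimal, generic OpenMP 5.0 OMPT skeleton, not APEX’s own API; the callback registration a real tool would perform is only hinted at in the comments:

    #include <omp-tools.h>
    #include <stdio.h>

    /* Called by the OpenMP runtime when the tool is activated. */
    static int tool_initialize(ompt_function_lookup_t lookup,
                               int initial_device_num, ompt_data_t *tool_data) {
        /* A real tool would use lookup("ompt_set_callback") here to register
           callbacks for events such as parallel-region begin/end or task creation. */
        printf("OMPT tool initialized\n");
        return 1;   /* non-zero keeps the tool active */
    }

    static void tool_finalize(ompt_data_t *tool_data) {
        printf("OMPT tool finalized\n");
    }

    /* The OpenMP runtime looks up this symbol at program start. */
    ompt_start_tool_result_t *ompt_start_tool(unsigned int omp_version,
                                              const char *runtime_version) {
        static ompt_start_tool_result_t result = { &tool_initialize,
                                                   &tool_finalize, { 0 } };
        return &result;
    }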

More Information and additional information.

(Last updated November 2020)