The definition of active parallel region was changed so that a parallel region is active if it is executed by a team that consists of more than one thread (see Section 1.2.2).
The concept of tasks was added to the execution model (see Section 1.2.5 and Section 1.3).
The OpenMP memory model was extended to cover atomicity of memory accesses (see Section 1.4.1). The description of the behavior of volatile in terms of flush was removed.
The definitions of the nest-var, dyn-var, nthreads-var and run-sched-var internal control variables (ICVs) were modified to provide one copy of these ICVs per task instead of one copy for the whole program (see Chapter 2). The omp_set_num_threads, omp_set_nested, and omp_set_dynamic runtime library routines were specified to support their use from inside a parallel region (see Section 18.2.1, Section 18.2.6 and Section 18.2.9).
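A minimal C++ sketch of the per-task ICV change, assuming an implementation that honors nested parallelism; the thread counts are illustrative only:

    #include <omp.h>
    #include <cstdio>

    int main() {
        omp_set_nested(1);                 // set the (per-task) nest-var ICV
        #pragma omp parallel num_threads(2)
        {
            // Each implicit task has its own nthreads-var copy, so this call,
            // now permitted inside a parallel region, affects only nested
            // regions created by this thread.
            omp_set_num_threads(2 + omp_get_thread_num());
            #pragma omp parallel
            {
                #pragma omp single
                printf("inner team size: %d\n", omp_get_num_threads());
            }
        }
        return 0;
    }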
The thread-limit-var ICV, the omp_get_thread_limit runtime library routine and the OMP_THREAD_LIMIT environment variable were added to support control of the maximum number of threads (see Section 2.1, Section 18.2.13 and Section 21.1.3).
The max-active-levels-var ICV, omp_set_max_active_levels and omp_get_max_active_levels runtime library routines, and OMP_MAX_ACTIVE_LEVELS environment variable were added to support control of the number of nested active parallel regions (see Section 2.1, Section 18.2.15, Section 18.2.16 and Section 21.1.4).
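A brief sketch of the previous two items, using only the routines named above; the printed values depend on the implementation and environment:

    #include <omp.h>
    #include <cstdio>

    int main() {
        // thread-limit-var, typically set via the OMP_THREAD_LIMIT
        // environment variable, bounds the threads used by the whole program.
        printf("thread limit: %d\n", omp_get_thread_limit());

        // Cap nesting at two active levels; more deeply nested parallel
        // regions then execute as inactive (single-thread) regions.
        omp_set_max_active_levels(2);
        printf("max active levels: %d\n", omp_get_max_active_levels());
        return 0;
    }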
The stacksize-var ICV and the OMP_STACKSIZE environment variable were added to support control of thread stack sizes (see Section 2.1 and Section 21.2.2).
The wait-policy-var ICV and the OMP_WAIT_POLICY environment variable were added to control the desired behavior of waiting threads (see Section 2.1 and Section 21.2.3).
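Neither of these two ICVs is queryable through the runtime library, so a sketch can only show them at program launch; the shell syntax and buffer size below are illustrative assumptions:

    // Hypothetical launch from a POSIX-style shell:
    //   OMP_STACKSIZE=16M OMP_WAIT_POLICY=passive ./a.out
    #include <omp.h>
    #include <cstdio>

    int main() {
        #pragma omp parallel
        {
            // A large per-thread stack buffer; without a sufficient
            // stacksize-var this could overflow the default thread stack.
            double scratch[1 << 20];       // about 8 MiB
            scratch[omp_get_thread_num()] = 0.0;
            #pragma omp single
            printf("ran with %d threads\n", omp_get_num_threads());
        }
        return 0;
    }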
Predetermined data-sharing attributes were defined for Fortran assumed-size arrays (see Section 5.1.1).
Static class member variables were allowed in threadprivate directives (see Section 5.2).
Invocations of constructors and destructors for private and threadprivate class type variables were clarified (see Section 5.2, Section 5.4.3, Section 5.4.4, Section 5.7.1 and Section 5.7.2).
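A minimal C++ sketch of the previous two items; the class names are invented for illustration:

    #include <omp.h>
    #include <cstdio>

    struct Widget {
        int v;
        Widget() : v(0) { }                  // runs for each private copy
        Widget(const Widget& w) : v(w.v) { } // runs for each firstprivate copy
        ~Widget() { }                        // runs when each copy is destroyed
    };

    struct Registry {
        static int uses;                     // static class member variable,
        #pragma omp threadprivate(uses)      // now allowed in threadprivate
    };
    int Registry::uses = 0;

    int main() {
        Widget w;
        w.v = 42;
        #pragma omp parallel firstprivate(w) // copy constructor per thread
        {
            Registry::uses++;                // each thread updates its own copy
            printf("thread %d: v=%d uses=%d\n",
                   omp_get_thread_num(), w.v, Registry::uses);
        }                                    // destructor runs per thread copy
        return 0;
    }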
The use of Fortran allocatable arrays was allowed in private, firstprivate, lastprivate, reduction, copyin and copyprivate clauses (see Section 5.2, Section 5.4.3, Section 5.4.4, Section 5.4.5, Section 5.5.8, Section 5.7.1 and Section 5.7.2).
Support for firstprivate was added to the default clause in Fortran (see Section 5.4.1).
For list items in the private clause, implementations were precluded from using the storage of the original list item to hold the new list item on the primary thread; if no attempt is made to reference the original list item inside the parallel region, its value is well defined on exit from the parallel region (see Section 5.4.3).
Data environment restrictions were changed to allow intent(in) and const-qualified types for the firstprivate clause (see Section 5.4.4).
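A short C++ sketch of the private-copy and const firstprivate behavior described in the previous two items:

    #include <omp.h>
    #include <cstdio>

    int main() {
        int x = 7;
        const int scale = 3;               // const-qualified: now permitted
        #pragma omp parallel private(x) firstprivate(scale)
        {
            x = omp_get_thread_num() * scale;  // x is a distinct copy here
            printf("thread copy: %d\n", x);
        }
        // The original x was never referenced inside the region, so its
        // value is well defined (still 7) on exit.
        printf("after region: %d\n", x);
        return 0;
    }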
Data environment restrictions were changed to allow Fortran pointers in firstprivate (see Section 5.4.4) and lastprivate (see Section 5.4.5).
New reduction operators min and max were added for C and C++ (see Section 5.5).
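For example, a single worksharing-loop can now compute a minimum and a maximum in one pass; the data values are arbitrary:

    #include <omp.h>
    #include <cstdio>

    int main() {
        int data[8] = {5, 3, 9, 1, 7, 2, 8, 6};
        int lo = data[0], hi = data[0];
        #pragma omp parallel for reduction(min:lo) reduction(max:hi)
        for (int i = 0; i < 8; i++) {
            if (data[i] < lo) lo = data[i];
            if (data[i] > hi) hi = data[i];
        }
        printf("min=%d max=%d\n", lo, hi);
        return 0;
    }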
Determination of the number of threads in parallel regions was updated (see Section 10.1.1).
The assignment of iterations to threads in a loop construct with a static schedule kind was made deterministic (see Section 11.5).
The worksharing-loop construct was extended to support association with more than one perfectly nested loop through the collapse clause (see Section 11.5).
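A brief sketch combining the previous two items; the loop bounds are illustrative:

    #include <omp.h>
    #include <cstdio>

    int main() {
        // collapse(2) forms one iteration space from both loops; with
        // schedule(static), the iteration-to-thread assignment is
        // deterministic, so identical loops partition identically.
        #pragma omp parallel for collapse(2) schedule(static)
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                printf("(%d,%d) on thread %d\n", i, j, omp_get_thread_num());
        return 0;
    }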
Iteration variables for worksharing-loops were allowed to be random access iterators or of unsigned integer type (see Section 11.5).
The schedule kind auto was added to allow the implementation to choose any possible mapping of iterations in a loop construct to threads in the team (see Section 11.5).
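A C++ sketch of both loop extensions; the container and sizes are illustrative:

    #include <omp.h>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> v(16, 1);
        // Random access iterator as the loop variable; schedule(auto)
        // leaves the mapping of iterations to threads to the implementation.
        #pragma omp parallel for schedule(auto)
        for (std::vector<int>::iterator it = v.begin(); it < v.end(); ++it)
            *it += 1;

        unsigned sum = 0;
        // Unsigned integer iteration variable, also newly permitted.
        #pragma omp parallel for reduction(+:sum)
        for (unsigned k = 0; k < 16u; k++)
            sum += v[k];
        printf("sum=%u\n", sum);
        return 0;
    }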
The task construct (see Chapter 12) was added to support explicit tasks.
The taskwait construct (see Section 15.5) was added to support task synchronization.
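A classic sketch of both constructs, computing a Fibonacci number with recursive child tasks; efficiency aside, it shows the synchronization point that taskwait provides:

    #include <omp.h>
    #include <cstdio>

    int fib(int n) {
        if (n < 2) return n;
        int a, b;
        #pragma omp task shared(a)
        a = fib(n - 1);
        #pragma omp task shared(b)
        b = fib(n - 2);
        #pragma omp taskwait        // wait for both child tasks to complete
        return a + b;
    }

    int main() {
        int r;
        #pragma omp parallel
        #pragma omp single          // one thread creates the root tasks
        r = fib(10);
        printf("fib(10)=%d\n", r);
        return 0;
    }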
The runtime library routines omp_set_schedule and omp_get_schedule were added to set and to retrieve the value of the run-sched-var ICV (see Section 18.2.11 and Section 18.2.12).
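A short sketch of the two routines together with schedule(runtime), which defers to the run-sched-var ICV; the chunk size is arbitrary:

    #include <omp.h>
    #include <cstdio>

    int main() {
        omp_set_schedule(omp_sched_dynamic, 4);  // set run-sched-var

        omp_sched_t kind;
        int chunk;
        omp_get_schedule(&kind, &chunk);         // read it back
        printf("kind=%d chunk=%d\n", (int)kind, chunk);

        #pragma omp parallel for schedule(runtime)
        for (int i = 0; i < 8; i++) { /* work */ }
        return 0;
    }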
The omp_get_level runtime library routine was added to return the number of nested parallel regions that enclose the task that contains the call (see Section 18.2.17).
The omp_get_ancestor_thread_num runtime library routine was added to return the thread number of the ancestor of the current thread (see Section 18.2.18).
The omp_get_team_size runtime library routine was added to return the size of the thread team to which the ancestor of the current thread belongs (see Section 18.2.19).
The omp_get_active_level runtime library routine was added to return the number of active, nested parallel regions that enclose the task that contains the call (see Section 18.2.20).
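A combined sketch of the four query routines above, assuming nested parallelism is available; the reported values depend on the implementation:

    #include <omp.h>
    #include <cstdio>

    int main() {
        omp_set_nested(1);
        omp_set_max_active_levels(2);
        #pragma omp parallel num_threads(2)
        #pragma omp parallel num_threads(2)
        {
            #pragma omp single
            printf("level=%d active=%d ancestor=%d team size=%d\n",
                   omp_get_level(),                // enclosing regions
                   omp_get_active_level(),         // of which active
                   omp_get_ancestor_thread_num(1), // thread number at level 1
                   omp_get_team_size(1));          // team size at level 1
        }
        return 0;
    }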