The definition of active parallel region was changed so that a parallel region is active if it is executed by a team that consists of more than one thread (see Section 1.2.2).
The concept of tasks was added to the OpenMP execution model (see Section 1.2.5 and Section 1.3).
The OpenMP memory model was extended to cover atomicity of memory accesses (see Section 1.4.1).
The description of the behavior of volatile in terms of flush was removed.
The definitions of the nest-var, dyn-var, nthreads-var and run-sched-var internal control variables (ICVs) were modified to provide one copy of these ICVs per task instead of one copy for the whole program (see Section 2.4).
The omp_set_num_threads, omp_set_nested and omp_set_dynamic runtime library routines were specified to support their use from inside a parallel region (see Section 3.2.1, Section 3.2.6 and Section 3.2.9).
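A minimal sketch of how this might look, assuming nested parallelism has been enabled; the thread counts are illustrative only:

```cpp
#include <omp.h>
#include <cstdio>

int main() {
    omp_set_nested(1);                    // enable nested parallelism
    #pragma omp parallel num_threads(2)
    {
        // Calling this routine inside a parallel region is now allowed;
        // it affects parallel regions nested inside the calling task.
        omp_set_num_threads(3);
        #pragma omp parallel
        {
            #pragma omp single
            std::printf("inner team size: %d\n", omp_get_num_threads());
        }
    }
    return 0;
}
```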
The thread-limit-var ICV, the omp_get_thread_limit runtime library routine and the OMP_THREAD_LIMIT environment variable were added to support control of the maximum number of threads that participate in the OpenMP program (see Section 2.4.1, Section 3.2.13 and Section 6.10).
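A minimal sketch, assuming the program was started with OMP_THREAD_LIMIT set in the environment (the value 8 below is only an example):

```cpp
#include <omp.h>
#include <cstdio>

int main() {
    // Reports the value of the thread-limit-var ICV, e.g. 8 if the program
    // was launched with OMP_THREAD_LIMIT=8 in the environment.
    std::printf("thread limit: %d\n", omp_get_thread_limit());
    return 0;
}
```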
The max-active-levels-var ICV, the omp_set_max_active_levels and omp_get_max_active_levels runtime library routines and the OMP_MAX_ACTIVE_LEVELS environment variable were added to support control of the number of nested active parallel regions (see Section 2.4.1, Section 3.2.15, Section 3.2.16 and Section 6.8).
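A minimal sketch of limiting the nesting depth; the limit of 2 and the thread counts are illustrative only:

```cpp
#include <omp.h>
#include <cstdio>

int main() {
    omp_set_nested(1);              // enable nested parallelism
    omp_set_max_active_levels(2);   // at most two nested levels may be active
    std::printf("max active levels: %d\n", omp_get_max_active_levels());

    #pragma omp parallel num_threads(2)          // level 1: active
    {
        #pragma omp parallel num_threads(2)      // level 2: active
        {
            #pragma omp parallel num_threads(2)  // level 3: serialized by the limit
            {
                #pragma omp single
                std::printf("active levels here: %d\n", omp_get_active_level());
            }
        }
    }
    return 0;
}
```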
The stacksize-var ICV and the OMP_STACKSIZE environment variable were added to support control of the stack size for threads that the OpenMP implementation creates (see Section 2.4.1 and Section 6.6).
The wait-policy-var ICV and the OMP_WAIT_POLICY environment variable were added to control the desired behavior of waiting threads (see Section 2.4.1 and Section 6.7).
The rules for determining the number of threads used in a parallel region were modified (see Section 2.6.1).
The assignment of iterations to threads in a loop construct with a static schedule kind was made deterministic (see Section 2.11.4).
The worksharing-loop construct was extended to support association with more than one perfectly nested loop through the collapse clause (see Section 2.11.4).
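A minimal sketch of the collapse clause; the array shape and the scale function are illustrative, not taken from the specification:

```cpp
#include <omp.h>

void scale(double a[100][200], double factor) {
    // collapse(2) merges the two perfectly nested loops into a single
    // iteration space of 100 * 200 iterations divided among the team.
    #pragma omp parallel for collapse(2)
    for (int i = 0; i < 100; i++)
        for (int j = 0; j < 200; j++)
            a[i][j] *= factor;
}
```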
Iteration variables for worksharing-loops were allowed to be random access iterators or of unsigned integer type (see Section 2.11.4).
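A minimal sketch of both new loop-variable forms in C++; the increment and zero functions are illustrative:

```cpp
#include <omp.h>
#include <vector>

void increment(std::vector<int>& v) {
    // A C++ random access iterator may now be the loop variable.
    #pragma omp parallel for
    for (std::vector<int>::iterator it = v.begin(); it != v.end(); ++it)
        *it += 1;
}

void zero(double* a, unsigned long n) {
    // Loop variables of unsigned integer type are also permitted.
    #pragma omp parallel for
    for (unsigned long i = 0; i < n; i++)
        a[i] = 0.0;
}
```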
The schedule kind auto was added to allow the implementation to choose any possible mapping of iterations in a loop construct to threads in the team (see Section 2.11.4).
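A minimal sketch of the auto schedule kind on a reduction loop; the sum function is illustrative:

```cpp
#include <omp.h>

double sum(const double* a, int n) {
    double s = 0.0;
    // schedule(auto) leaves the mapping of iterations to threads entirely
    // to the compiler and/or the runtime system.
    #pragma omp parallel for schedule(auto) reduction(+ : s)
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}
```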
The task construct (see Section 2.12) was added to support explicit tasks.
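A minimal sketch of explicit tasks traversing a linked list; the Node type and the process_list function are illustrative, not taken from the specification:

```cpp
#include <omp.h>
#include <cstdio>

struct Node { int value; Node* next; };

void process_list(Node* head) {
    #pragma omp parallel
    {
        #pragma omp single        // one thread creates the tasks
        {
            for (Node* p = head; p != 0; p = p->next) {
                #pragma omp task firstprivate(p)    // each node becomes a task
                std::printf("value: %d\n", p->value);
            }
        }
    }
}
```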
The taskwait construct (see Section 2.19.5) was added to support task synchronization.
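A minimal sketch of taskwait in a recursive computation; the fib function is illustrative and omits the usual task-creation cutoff:

```cpp
#include <omp.h>

int fib(int n) {
    if (n < 2) return n;
    int x, y;
    #pragma omp task shared(x)
    x = fib(n - 1);
    #pragma omp task shared(y)
    y = fib(n - 2);
    #pragma omp taskwait    // wait for both child tasks before using x and y
    return x + y;
}
```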
Predetermined data-sharing attributes were defined for Fortran assumed-size arrays (see Section 2.21.1.1).
Static class member variables were allowed to appear in a threadprivate directive (see Section 2.21.2).
The invocation of constructors and destructors for private and threadprivate class type variables was clarified (see Section 2.21.2, Section 2.21.4.3, Section 2.21.4.4, Section 2.21.6.1 and Section 2.21.6.2).
The use of Fortran allocatable arrays was allowed in private, firstprivate, lastprivate, reduction, copyin and copyprivate clauses (see Section 2.21.2, Section 2.21.4.3, Section 2.21.4.4, Section 2.21.4.5, Section 2.21.5.4, Section 2.21.6.1 and Section 2.21.6.2).
The firstprivate argument was added for the default clause in Fortran (see Section 2.21.4.1).
Implementations were precluded from using the storage of the original list item to hold the new list item on the primary thread for list items in the private clause, and the value of the original list item was made well defined on exit from the parallel region if no attempt is made to reference it inside the parallel region (see Section 2.21.4.3).
The runtime library routines omp_set_schedule and omp_get_schedule were added to set and to retrieve the value of the run-sched-var ICV (see Section 3.2.11 and Section 3.2.12).
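A minimal sketch of setting and querying the runtime schedule; the dynamic kind and the chunk size of 4 are illustrative:

```cpp
#include <omp.h>
#include <cstdio>

int main() {
    // Set the schedule applied to loops that specify schedule(runtime),
    // then read the run-sched-var ICV back.
    omp_set_schedule(omp_sched_dynamic, 4);   // dynamic schedule, chunk size 4

    omp_sched_t kind;
    int chunk;
    omp_get_schedule(&kind, &chunk);
    std::printf("kind=%d chunk=%d\n", (int)kind, chunk);
    return 0;
}
```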
The omp_get_level runtime library routine was added to return the number of nested parallel regions that enclose the task that contains the call (see Section 3.2.17).
The omp_get_ancestor_thread_num runtime library routine was added to return the thread number of the ancestor for a given nested level of the current thread (see Section 3.2.18).
The omp_get_team_size runtime library routine was added to return the size of the thread team to which the ancestor belongs for a given nested level of the current thread (see Section 3.2.19).
The omp_get_active_level runtime library routine was added to return the number of nested active parallel regions that enclose the task that contains the call (see Section 3.2.20).
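A minimal sketch that combines omp_get_level, omp_get_active_level, omp_get_ancestor_thread_num and omp_get_team_size inside a two-level nested region; the nesting and thread counts are illustrative:

```cpp
#include <omp.h>
#include <cstdio>

int main() {
    omp_set_nested(1);
    #pragma omp parallel num_threads(2)
    #pragma omp parallel num_threads(3)
    {
        #pragma omp single
        {
            std::printf("nested levels:        %d\n", omp_get_level());
            std::printf("active nested levels: %d\n", omp_get_active_level());
            // Thread number of this thread's ancestor at level 1, and the
            // size of the team that ancestor belongs to.
            std::printf("ancestor at level 1:  %d\n", omp_get_ancestor_thread_num(1));
            std::printf("team size at level 1: %d\n", omp_get_team_size(1));
        }
    }
    return 0;
}
```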
Lock ownership was defined in terms of tasks instead of threads (see Section 3.9).