An MPI program consists of autonomous processes, executing their own
code, in an MIMD style. The codes executed by each process need not be
identical. The processes communicate via calls to MPI communication
primitives. Typically, each process executes in its own address
space, although shared-memory implementations of MPI are possible.
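As a minimal illustration of this model (a sketch, assuming a standard MPI installation; the message values and tag used here are arbitrary), the program below is launched as several autonomous processes that all execute the same binary but branch on their rank, communicating only through MPI primitives:

```c
#include <mpi.h>
#include <stdio.h>

/* Each process runs this same program but takes a different path
 * depending on its rank -- the common way MIMD-style execution is
 * expressed with MPI. All communication goes through MPI calls. */
int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Process 0 sends a distinct value to every other process. */
        for (int dest = 1; dest < size; dest++) {
            int value = 100 + dest;   /* arbitrary payload */
            MPI_Send(&value, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
        }
    } else {
        /* Every other process receives one value from process 0. */
        int value;
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank %d received %d\n", rank, value);
    }

    MPI_Finalize();
    return 0;
}
```

A typical build and launch (implementation-dependent) would be `mpicc example.c -o example` followed by `mpirun -np 4 ./example`. Whether each process gets its own address space, as is typical, is left to the implementation.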
This document specifies the behavior of a parallel program assuming
that only MPI calls are used. The interaction of
an MPI program with other possible means of communication, I/O, and
process management is not specified.
Unless otherwise stated in the specification of the standard, MPI places
no requirements on the result of its interaction with external mechanisms
that provide similar or equivalent functionality. This includes, but is
not limited to, interactions with external mechanisms for process
control, shared and remote memory access, file system access and control,
interprocess communication, process signaling, and terminal I/O.
High quality implementations should strive to make the results of such
interactions intuitive to users, and attempt to document restrictions
where deemed necessary.
Advice to implementors.
Implementations that support such additional mechanisms for
functionality supported within MPI are expected to document how
these interact with MPI.
(End of advice to implementors.)

The interaction of MPI and threads is defined in
Section MPI and Threads.
MPI-2.0 of July 1, 2008