8.9.2. Models of Execution

In the loosely synchronous model, transfer of control to a parallel procedure is effected by having each executing MPI process invoke the procedure. The invocation is a collective operation: it is executed by all MPI processes in the execution group, and invocations are similarly ordered at all MPI processes. However, the invocation need not be synchronized.

We say that a parallel procedure is active in an MPI process if the MPI process belongs to a group that may collectively execute the procedure, and some member of that group is currently executing the procedure code. If a parallel procedure is active in an MPI process, then this MPI process may be receiving messages pertaining to this procedure, even if it does not currently execute the code of this procedure.


8.9.2.1. Static Communicator Allocation

This covers the case where, at any point in time, at most one invocation of a parallel procedure can be active at any MPI process, and the group of executing MPI processes is fixed. For example, all invocations of parallel procedures involve all MPI processes, MPI processes are single-threaded, and there are no recursive invocations.

In such a case, a communicator can be statically allocated to each procedure. The static allocation can be done in a preamble, as part of initialization code. If the parallel procedures can be organized into libraries, so that only one procedure of each library can be concurrently active in each MPI process, then it is sufficient to allocate one communicator per library.
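
For illustration (this sketch is not part of the standard, and the library name libA is hypothetical), a library might allocate its communicator once in an initialization preamble:

    #include <mpi.h>

    /* One statically allocated communicator per library ("libA" is a
       hypothetical name).  All MPI processes call the preamble once,
       before any routine of the library is invoked. */
    static MPI_Comm libA_comm = MPI_COMM_NULL;

    void libA_init(void)
    {
        /* Duplicating MPI_COMM_WORLD gives the library a communication
           context that cannot be confused with user communication. */
        MPI_Comm_dup(MPI_COMM_WORLD, &libA_comm);
    }

    void libA_finalize(void)
    {
        MPI_Comm_free(&libA_comm);
    }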


8.9.2.2. Dynamic Communicator Allocation

Calls of parallel procedures are well-nested if a new parallel procedure is always invoked in a subset of a group executing the same parallel procedure. Thus, MPI processes that execute the same parallel procedure have the same execution stack.

In such a case, a new communicator needs to be dynamically allocated for each new invocation of a parallel procedure. The allocation is done by the caller. A new communicator can be generated by a call to MPI_COMM_DUP, if the callee execution group is identical to the caller execution group, or by a call to MPI_COMM_SPLIT if the caller execution group is split into several subgroups executing distinct parallel routines. The new communicator is passed as an argument to the invoked routine.
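
A minimal sketch of this pattern, not taken from the standard (the procedure name pproc and the depth bound are assumptions): a well-nested procedure whose caller splits its own communicator into two halves and passes the new communicator to the recursive invocations:

    #include <mpi.h>

    /* Hypothetical well-nested parallel procedure: each invocation
       receives a communicator allocated by its caller. */
    void pproc(MPI_Comm comm, int depth)
    {
        if (depth == 0)
            return;

        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        /* The caller splits its execution group into two subgroups,
           each executing a distinct invocation on its own communicator. */
        MPI_Comm subcomm;
        MPI_Comm_split(comm, rank < size / 2 ? 0 : 1, rank, &subcomm);
        pproc(subcomm, depth - 1);
        MPI_Comm_free(&subcomm);
    }

At the top level, such a procedure would be invoked on a duplicate of MPI_COMM_WORLD obtained with MPI_COMM_DUP, so that the recursion never communicates on a communicator also used by the surrounding code.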

The need for generating a new communicator at each invocation can be alleviated or avoided altogether in some cases: if the execution group is not split, then one can allocate a stack of communicators in a preamble, and then manage the stack in a way that mimics the stack of recursive calls.
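
As a sketch of this technique (the bound MAX_DEPTH and the function names are assumptions, not part of the standard):

    #include <mpi.h>

    #define MAX_DEPTH 8    /* assumed bound on the nesting depth */

    static MPI_Comm comm_stack[MAX_DEPTH];
    static int comm_top = 0;

    /* Preamble: allocate one communicator per possible nesting level.
       MPI_Comm_dup is collective, so all MPI processes of 'base'
       call this together. */
    void comm_stack_init(MPI_Comm base)
    {
        for (int i = 0; i < MAX_DEPTH; i++)
            MPI_Comm_dup(base, &comm_stack[i]);
    }

    /* Each invocation pushes on entry and pops on return, mimicking the
       stack of recursive calls without any further collective calls. */
    MPI_Comm comm_push(void) { return comm_stack[comm_top++]; }
    void     comm_pop(void)  { comm_top--; }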

One can also take advantage of the well-ordering property of communication to avoid confusing caller and callee communication, even if both use the same communicator. To do so, one needs to abide by the following two rules:

- messages sent before a procedure call (or before a return from the procedure) are also received before the matching call (or return) at the receiving end;
- messages are always selected by source (no use of MPI_ANY_SOURCE).

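The following sketch, which is not part of the standard, illustrates both rules for a pair of MPI processes that exchange messages before entering a callee reusing the same communicator (the function names are hypothetical):

    #include <mpi.h>

    /* Hypothetical callee: reuses the caller's communicator, but every
       receive names an explicit source (rule 2), never MPI_ANY_SOURCE. */
    static void callee(MPI_Comm comm, int partner)
    {
        int in, out = 1;
        MPI_Sendrecv(&out, 1, MPI_INT, partner, 0,
                     &in,  1, MPI_INT, partner, 0,
                     comm, MPI_STATUS_IGNORE);
    }

    /* Caller: the message sent before the call is received before the
       matching call at the partner (rule 1), so the well-ordering of
       point-to-point communication keeps caller and callee traffic
       apart even on a shared communicator. */
    void caller(MPI_Comm comm, int rank, int partner)
    {
        int in;
        MPI_Sendrecv(&rank, 1, MPI_INT, partner, 0,
                     &in,   1, MPI_INT, partner, 0,
                     comm, MPI_STATUS_IGNORE);
        callee(comm, partner);
    }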

8.9.2.3. The General Case

In the general case, there may be multiple concurrently active invocations of the same parallel procedure within the same group; invocations may not be well-nested. A new communicator needs to be created for each invocation. It is the user's responsibility to make sure that, should two distinct parallel procedures be invoked concurrently on overlapping sets of MPI processes, communicator creation is properly coordinated.
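
As an illustration of such coordination (a sketch under assumed names, not standard text): every MPI process of the parent communicator participates in every creation, in the same order, passing MPI_UNDEFINED as the color when it does not join a given invocation:

    #include <mpi.h>

    /* Two parallel procedures are to run on overlapping subgroups of
       'parent'.  Every MPI process takes part in both MPI_Comm_split
       calls, in the same order, so the two creations cannot interleave. */
    void invoke_two(MPI_Comm parent)
    {
        int rank;
        MPI_Comm_rank(parent, &rank);

        MPI_Comm comm_a, comm_b;
        MPI_Comm_split(parent, rank % 2 == 0 ? 0 : MPI_UNDEFINED,
                       rank, &comm_a);
        MPI_Comm_split(parent, rank < 2     ? 0 : MPI_UNDEFINED,
                       rank, &comm_b);

        if (comm_a != MPI_COMM_NULL) {
            /* ... run the first procedure on comm_a ... */
            MPI_Comm_free(&comm_a);
        }
        if (comm_b != MPI_COMM_NULL) {
            /* ... run the second procedure on comm_b ... */
            MPI_Comm_free(&comm_b);
        }
    }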

