5.8.2. Models of Execution



In the loosely synchronous model, transfer of control to a parallel procedure is effected by having each executing process invoke the procedure. The invocation is a collective operation: it is executed by all processes in the execution group, and invocations are similarly ordered at all processes. However, the invocation need not be synchronized.

We say that a parallel procedure is active in a process if the process belongs to a group that may collectively execute the procedure, and some member of that group is currently executing the procedure code. If a parallel procedure is active in a process, then this process may be receiving messages pertaining to this procedure, even if it does not currently execute the code of this procedure.





5.8.2.1. Static communicator allocation



This covers the case where, at any point in time, at most one invocation of a parallel procedure can be active at any process, and the group of executing processes is fixed. For example, all invocations of parallel procedures involve all processes, processes are single-threaded, and there are no recursive invocations.

In such a case, a communicator can be statically allocated to each procedure. The static allocation can be done in a preamble, as part of initialization code. If the parallel procedures can be organized into libraries, so that only one procedure of each library can be concurrently active in each process, then it is sufficient to allocate one communicator per library.
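
As an illustration, here is a minimal C sketch of such a preamble for a hypothetical library (the names mylib_init, mylib_comm, and mylib_do_work are illustrative, not part of the standard): the library duplicates the communicator of the execution group once during initialization, and all of its internal communication then uses that duplicate.

#include <mpi.h>

/* Communicator statically allocated to the hypothetical library "mylib".
   All internal communication of the library uses this communicator. */
static MPI_Comm mylib_comm = MPI_COMM_NULL;

/* Preamble: called once by every process, e.g. right after MPI_Init.
   MPI_Comm_dup is collective over the group of comm, so all processes
   of the execution group must call it. */
void mylib_init(MPI_Comm comm)
{
    MPI_Comm_dup(comm, &mylib_comm);
}

/* A parallel procedure of the library: every communication it performs
   uses mylib_comm, so it cannot intercept user messages sent on comm. */
void mylib_do_work(void)
{
    int rank;
    MPI_Comm_rank(mylib_comm, &rank);
    /* ... library communication on mylib_comm ... */
}

void mylib_finalize(void)
{
    MPI_Comm_free(&mylib_comm);
}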





5.8.2.2. Dynamic communicator allocation



Calls of parallel procedures are well-nested if a new parallel procedure is always invoked in a subset of a group executing the same parallel procedure. Thus, processes that execute the same parallel procedure have the same execution stack.

In such a case, a new communicator needs to be dynamically allocated for each new invocation of a parallel procedure. The allocation is done by the caller. A new communicator can be generated by a call to MPI_COMM_DUP, if the callee execution group is identical to the caller execution group, or by a call to MPI_COMM_SPLIT if the caller execution group is split into several subgroups executing distinct parallel routines. The new communicator is passed as an argument to the invoked routine.
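
A possible C sketch of this scheme follows (the routine names and the particular split into two subgroups are assumptions made for illustration): the caller duplicates or splits its own communicator, passes the result to the callee, and frees it after the call returns.

#include <mpi.h>

/* Hypothetical parallel procedures; each communicates only on the
   communicator passed in by its caller. */
void parallel_procedure_A(MPI_Comm comm);
void parallel_procedure_B(MPI_Comm comm);

/* Case 1: the callee execution group equals the caller execution group,
   so a duplicate of the caller's communicator is sufficient. */
void invoke_same_group(MPI_Comm caller_comm)
{
    MPI_Comm callee_comm;
    MPI_Comm_dup(caller_comm, &callee_comm);
    parallel_procedure_A(callee_comm);
    MPI_Comm_free(&callee_comm);
}

/* Case 2: the caller execution group is split into subgroups that
   execute distinct parallel routines. */
void invoke_split_group(MPI_Comm caller_comm)
{
    int rank;
    MPI_Comm callee_comm;

    MPI_Comm_rank(caller_comm, &rank);
    int color = rank % 2;   /* assumed partition into two subgroups */
    MPI_Comm_split(caller_comm, color, rank, &callee_comm);

    if (color == 0)
        parallel_procedure_A(callee_comm);
    else
        parallel_procedure_B(callee_comm);

    MPI_Comm_free(&callee_comm);
}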

The need for generating a new communicator at each invocation can be alleviated or avoided altogether in some cases: if the execution group is not split, one can allocate a stack of communicators in a preamble, and then manage that stack in a way that mimics the stack of recursive calls.
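
One way such a preallocated stack might look in C is sketched below (MAX_DEPTH, the bound on nesting depth, and the routine names are illustrative assumptions); it is valid only if the execution group is never split, so that all processes push and pop in the same order.

#include <mpi.h>

#define MAX_DEPTH 16   /* assumed bound on the nesting depth */

static MPI_Comm comm_stack[MAX_DEPTH];
static int      comm_top = 0;

/* Preamble: collectively allocate one communicator per nesting level.
   MPI_Comm_dup is collective, so every process of the group calls it
   the same number of times. */
void comm_stack_init(MPI_Comm comm)
{
    for (int i = 0; i < MAX_DEPTH; i++)
        MPI_Comm_dup(comm, &comm_stack[i]);
}

/* On entry to a parallel procedure: take the next free communicator. */
MPI_Comm comm_stack_push(void) { return comm_stack[comm_top++]; }

/* On return from a parallel procedure: release it again. */
void comm_stack_pop(void) { comm_top--; }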

One can also take advantage of the well-ordering property of communication to avoid confusing caller and callee communication, even if both use the same communicator. To do so, one needs to abide by the following two rules:

- messages sent before a procedure call (or before a return from the procedure) are also received before the matching call (or return) at the receiving end;
- messages are always selected by source (no use of MPI_ANY_SOURCE).
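
As a rough illustration of why these rules suffice, the following C sketch (with illustrative names, assuming exactly two processes) lets a caller and a hypothetical library routine share one communicator: every pre-call message is received before the call, every receive names an explicit source, and MPI's pair-wise message ordering then prevents the library's receives from matching the caller's earlier sends.

#include <mpi.h>

/* Hypothetical callee: communicates on the caller's communicator but
   always receives with an explicit source (rule 2). */
void library_routine(MPI_Comm comm, int peer)
{
    int in, out = 1;
    MPI_Status status;
    MPI_Sendrecv(&out, 1, MPI_INT, peer, 0,
                 &in,  1, MPI_INT, peer, 0,
                 comm, &status);
}

void caller(MPI_Comm comm)
{
    int rank, peer, in, out = 0;
    MPI_Status status;

    MPI_Comm_rank(comm, &rank);
    peer = 1 - rank;   /* assumes exactly two processes in comm */

    /* Caller-level exchange: received before the call (rule 1) and
       selected by explicit source (rule 2). */
    MPI_Sendrecv(&out, 1, MPI_INT, peer, 0,
                 &in,  1, MPI_INT, peer, 0,
                 comm, &status);

    /* The callee reuses the same communicator; pair-wise ordering
       ensures its receives cannot match the caller's earlier sends. */
    library_routine(comm, peer);
}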






5.8.2.3. The General case



In the general case, there may be multiple concurrently active invocations of the same parallel procedure within the same group, and invocations may not be well-nested. A new communicator needs to be created for each invocation. It is the user's responsibility to make sure that, should two distinct parallel procedures be invoked concurrently on overlapping sets of processes, communicator creation is properly coordinated.




