13.5. Synchronization Calls

RMA communications fall into two categories:

active target communication,
where data is moved from the memory of one MPI process to the memory of another, and both are explicitly involved in the communication. This communication pattern is similar to message passing, except that all the data transfer arguments are provided by the origin process, and the target process only participates in the synchronization.
passive target communication,
where data is moved from the memory of one MPI process to the memory of another, and only the origin process is explicitly involved in the transfer. Thus, two origin processes may communicate by accessing the same location in a target window. The MPI process that owns the target window may be distinct from the two communicating MPI processes, in which case it does not participate explicitly in the communication. This communication paradigm is closest to a shared memory model, where shared data can be accessed by all MPI processes, irrespective of location.

RMA communication calls with argument win must occur at an origin process only within an access epoch for win. Such an epoch is opened with an RMA synchronization call on win; it proceeds with zero or more RMA communication calls (e.g., MPI_PUT, MPI_GET or MPI_ACCUMULATE) on win; and it is closed with another synchronization call on win. This allows users to amortize the cost of one synchronization over multiple data transfers and gives implementors more flexibility in the implementation of RMA operations.

Distinct access epochs for win at the same MPI process must be disjoint. On the other hand, epochs pertaining to different win arguments may overlap. Load/store accesses or other MPI calls may also occur during an epoch.

In active target communication, a target window can be accessed by RMA operations only within an exposure epoch. Such an epoch is opened and closed by RMA synchronization calls executed by the target process. Distinct exposure epochs at an MPI process on the same window must be disjoint, but such an exposure epoch may overlap with exposure epochs on other windows or with access epochs for the same or other window arguments. There is a one-to-one matching between access epochs at origin processes and exposure epochs on target processes: RMA operations issued by an origin process for a target window will access that target window during the same exposure epoch if and only if they were issued during the same access epoch.

In passive target communication the target process does not execute RMA synchronization calls, and there is no concept of an exposure epoch.

MPI provides three synchronization mechanisms:

    1. The MPI_WIN_FENCE collective synchronization call supports a simple synchronization pattern that is often used in parallel computations: namely a loosely-synchronous model, where global computation phases alternate with global communication phases. This mechanism is most useful for loosely synchronous algorithms where the graph of communicating MPI processes changes very frequently, or where each MPI process communicates with many others.

    This call is used for active target communication. An access epoch at an origin process or an exposure epoch at a target process is opened and closed by calls to MPI_WIN_FENCE. An origin process can access windows at all target processes in the group of win during such an access epoch, and the local window can be accessed by all MPI processes in the group of win during such an exposure epoch. (A minimal code sketch of this fence pattern follows the list.)
    2. The four functions MPI_WIN_START, MPI_WIN_COMPLETE, MPI_WIN_POST, and MPI_WIN_WAIT can be used to restrict synchronization to the minimum: only pairs of communicating MPI processes synchronize, and they do so only when a synchronization is needed to order RMA accesses to a window correctly with respect to local accesses to that same window. This mechanism may be more efficient when each MPI process communicates with few (logical) neighbors, and the communication graph is fixed or changes infrequently.

    These calls are used for active target communication. An access epoch is opened at the origin process with a call to MPI_WIN_START and is closed by a call to MPI_WIN_COMPLETE. The start call has a group argument that specifies the group of target processes for that epoch. An exposure epoch is opened at the target process by a call to MPI_WIN_POST and is closed by a call to MPI_WIN_WAIT. The post call has a group argument that specifies the set of origin processes for that epoch.
    3. Finally, shared lock access is provided by the functions MPI_WIN_LOCK, MPI_WIN_LOCK_ALL, MPI_WIN_UNLOCK, and MPI_WIN_UNLOCK_ALL. MPI_WIN_LOCK and MPI_WIN_UNLOCK also provide exclusive lock capability. Lock synchronization is useful for MPI applications that emulate a shared memory model via MPI calls; e.g., in a "bulletin board" model, where MPI processes can, at random times, access or update different parts of the bulletin board.

    These four calls provide passive target communication. An access epoch is opened by a call to MPI_WIN_LOCK or MPI_WIN_LOCK_ALL and closed by a call to MPI_WIN_UNLOCK or MPI_WIN_UNLOCK_ALL, respectively.
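
As an illustration of the fence mechanism described in item 1, the following is a minimal sketch, not taken from the standard: every process exposes one integer in a window and puts its rank into the window of its right neighbor. The variable names and the neighbor pattern are assumptions made for this example.

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, value, recv = -1;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process exposes one int ("recv") in the window. */
    MPI_Win_create(&recv, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    value = rank;

    MPI_Win_fence(0, win);                 /* opens access and exposure epochs */
    MPI_Put(&value, 1, MPI_INT,            /* RMA communication inside epoch   */
            (rank + 1) % size, 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);                 /* closes the epochs; "recv" now
                                              holds the left neighbor's rank   */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

Both fence calls are collective over the group of win; every process acts as both origin and target within the same pair of fences.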

Figure 28: Active target communication. Dashed arrows represent synchronizations (ordering of events).

Figure 28 illustrates the general synchronization pattern for active target communication. The synchronization between post and start ensures that the put operation of the origin process does not start until the target process exposes the window (with the post call); the target process exposes the window only after its preceding local accesses to the window have completed. The synchronization between complete and wait ensures that the put operation of the origin process is complete at both the origin and the target before the window is unexposed (with the wait call). The target process executes subsequent local accesses to the target window only after the wait call has returned.

Figure 29: Active target communication, with weak synchronization. Dashed arrows represent synchronizations (ordering of events).

Figure 28 shows operations occurring in the natural temporal order implied by the synchronizations: the post occurs before the matching start, and the complete occurs before the matching wait. However, such strong synchronization is more than is needed for the correct ordering of window accesses. The semantics of MPI calls allow weak synchronization, as illustrated in Figure 29. The access to the target window is delayed until the window is exposed, after the post. However, the start may return before the exposure epoch opens at the target. Similarly, the put and complete calls may also return before the exposure epoch opens at the target, if the put data is buffered by the implementation. The synchronization calls correctly order window accesses, but do not necessarily synchronize other operations. These weaker synchronization semantics allow for more efficient implementations.
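
The post/start/complete/wait pattern of Figures 28 and 29 might be coded as in the sketch below. This is not an example from the standard; it assumes rank 0 as the single origin and rank 1 as the single target, and the variable names are illustrative.

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, peer, value = 42, exposed = 0;
    MPI_Win win;
    MPI_Group world_group, peer_group;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);

    MPI_Win_create(&exposed, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    if (rank == 0) {                         /* origin process */
        peer = 1;
        MPI_Group_incl(world_group, 1, &peer, &peer_group);
        MPI_Win_start(peer_group, 0, win);   /* open access epoch toward rank 1 */
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        MPI_Win_complete(win);               /* close access epoch */
        MPI_Group_free(&peer_group);
    } else if (rank == 1) {                  /* target process */
        peer = 0;
        MPI_Group_incl(world_group, 1, &peer, &peer_group);
        MPI_Win_post(peer_group, 0, win);    /* open exposure epoch for rank 0  */
        MPI_Win_wait(win);                   /* close it; the put has completed */
        /* "exposed" holds 42 here; subsequent local accesses are safe. */
        MPI_Group_free(&peer_group);
    }

    MPI_Group_free(&world_group);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

Only ranks 0 and 1 synchronize with each other; any other processes in the group of win are not involved beyond the collective window creation and destruction.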

Figure 30: Passive target communication. Dashed arrows represent synchronizations (ordering of events).

Figure 30 illustrates the general synchronization pattern for passive target communication. The first origin process communicates data to the second origin process, through the memory of the target process; the target process is not explicitly involved in the communication. The lock and unlock calls ensure that the two RMA accesses do not occur concurrently. However, they do not ensure that the put by origin 1 will precede the get by origin 2.
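
A sketch of the pattern of Figure 30 is shown below. It is not an example from the standard; it assumes at least three processes, with rank 0 owning the target window and ranks 1 and 2 acting as the two origins, and the names and values are illustrative.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, board = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Rank 0 exposes one int; the other processes attach a zero-sized window. */
    MPI_Win_create(rank == 0 ? &board : NULL,
                   rank == 0 ? (MPI_Aint)sizeof(int) : 0,
                   sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    if (rank == 1) {                                   /* origin 1: put */
        int value = 7;
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);   /* open access epoch  */
        MPI_Put(&value, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
        MPI_Win_unlock(0, win);                        /* close access epoch */
    } else if (rank == 2) {                            /* origin 2: get */
        int value;
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
        MPI_Get(&value, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
        MPI_Win_unlock(0, win);
        /* The exclusive locks keep the two accesses from overlapping, but do
           not order them: "value" may be 0 or 7. */
        printf("origin 2 read %d\n", value);
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

Rank 0 issues no synchronization or communication calls between window creation and destruction, which is the sense in which the target is passive.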


Rationale.

RMA does not define fine-grained mutexes in memory (only logical coarse-grained window locks). MPI provides the primitives (compare and swap, accumulate, send/receive, etc.) needed to implement high-level synchronization operations. (End of rationale.)
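
For example, a simple spin-lock mutex of the kind the rationale refers to can be layered on top of these primitives. The fragment below is a sketch only, not part of the standard: it assumes a window win whose target process holds a single int lock word at displacement 0, initialized to 0, and the helper names mutex_acquire and mutex_release are hypothetical.

#include <mpi.h>

/* Hypothetical helper: acquire a mutex stored as one int (0 = free, 1 = held)
   at displacement 0 in the window memory of process "target". */
void mutex_acquire(MPI_Win win, int target)
{
    const int one = 1, zero = 0;
    int prev;

    MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);     /* passive target epoch */
    do {
        MPI_Compare_and_swap(&one, &zero, &prev, MPI_INT, target, 0, win);
        MPI_Win_flush(target, win);                    /* complete the CAS     */
    } while (prev != 0);                               /* retry until 0 -> 1   */
    MPI_Win_unlock(target, win);
}

/* Hypothetical helper: release the mutex by atomically writing 0. */
void mutex_release(MPI_Win win, int target)
{
    const int zero = 0;

    MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
    MPI_Accumulate(&zero, 1, MPI_INT, target, 0, 1, MPI_INT, MPI_REPLACE, win);
    MPI_Win_unlock(target, win);
}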

