13.1. Introduction


Remote Memory Access (RMA) extends the communication mechanisms of MPI by allowing one MPI process to specify all communication parameters, both for the sending side and for the receiving side. This mode of communication facilitates the coding of some applications with dynamically changing data access patterns where the data distribution is fixed or slowly changing. In such a case, each MPI process can compute what data it needs to access or to update at other MPI processes. However, the programmer may not be able to easily determine which data in an MPI process may need to be accessed or updated by operations initiated by a different MPI process, and may not even know which MPI processes may perform such updates. Thus, the transfer parameters are all available only on one side. Regular send/receive communication requires matching operations by sender and receiver. In order to issue the matching operations, an application needs to distribute the transfer parameters. This distribution may require all MPI processes to participate in a time-consuming global computation, or to periodically poll for potential communication requests to receive and act upon. The use of RMA communication operations avoids the need for global computations or explicit polling. A generic example of this nature is the execution of an assignment of the form A = B(map), where map is a permutation vector, and A, B, and map are distributed in the same manner.
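For illustration, the following sketch (not part of the standard text) shows one way a process might realize its share of A = B(map) with one RMA read per element. It assumes B is block-distributed with nlocal elements of type double per process and has been exposed through the window win (created elsewhere, e.g. with MPI_WIN_CREATE); map holds global indices. Only the reading side needs to know which elements it accesses.

#include <mpi.h>

void gather_permuted(double *A, const int *map, int nlocal, MPI_Win win)
{
    MPI_Win_fence(0, win);                /* open an access/exposure epoch */
    for (int i = 0; i < nlocal; i++) {
        int      owner = map[i] / nlocal; /* rank that owns B(map[i])      */
        MPI_Aint disp  = map[i] % nlocal; /* element offset at that rank   */
        /* Read B(map[i]) directly from the owner's window; the owner does
         * not need to know which of its elements are being read.          */
        MPI_Get(&A[i], 1, MPI_DOUBLE, owner, disp, 1, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);                /* all transfers complete here   */
}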

Message-passing communication achieves two effects: communication of data from sender to receiver and synchronization of sender with receiver. The RMA design separates these two functions. The following communication calls are provided: remote write (MPI_PUT, MPI_RPUT), remote read (MPI_GET, MPI_RGET), remote update (MPI_ACCUMULATE, MPI_RACCUMULATE), remote read and update (MPI_GET_ACCUMULATE, MPI_RGET_ACCUMULATE, and MPI_FETCH_AND_OP), and remote atomic swap (MPI_COMPARE_AND_SWAP).

This chapter refers to the operation set that includes all remote update, remote read and update, and remote atomic swap operations as "accumulate" operations.
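As a minimal sketch (not taken from the standard), the fragment below uses two of the accumulate operations on a window win that is assumed to expose one long counter per process; the window and the rank target are hypothetical and would be set up elsewhere.

#include <mpi.h>

void bump_remote_counter(int target, MPI_Win win)
{
    long one = 1, old;

    MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);   /* passive-target epoch */

    /* Remote update: atomically add 1 to the counter at displacement 0.     */
    MPI_Accumulate(&one, 1, MPI_LONG, target, 0, 1, MPI_LONG, MPI_SUM, win);

    /* Remote read and update: fetch the previous value and add 1 to it.     */
    MPI_Fetch_and_op(&one, &old, MPI_LONG, target, 0, MPI_SUM, win);

    MPI_Win_unlock(target, win);
}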

MPI supports two fundamentally different memory models: separate and unified. The separate model makes no assumption about memory consistency and is highly portable. This model is similar to that of weakly coherent memory systems: the user must impose correct ordering of memory accesses through synchronization calls. The unified model can exploit cache-coherent hardware and hardware-accelerated, one-sided operations that are commonly available in high-performance systems. The two models are discussed in detail in Section Memory Model. Both models provide several synchronization calls that support different synchronization styles.
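A program can query which model applies to a given window through the predefined window attribute MPI_WIN_MODEL. The following sketch, assuming win is an already created window, prints the model in use:

#include <mpi.h>
#include <stdio.h>

void print_memory_model(MPI_Win win)
{
    int *model, flag;

    /* The attribute value is a pointer to an int that holds either
     * MPI_WIN_SEPARATE or MPI_WIN_UNIFIED.                                  */
    MPI_Win_get_attr(win, MPI_WIN_MODEL, &model, &flag);
    if (flag)
        printf("memory model: %s\n",
               *model == MPI_WIN_UNIFIED ? "unified" : "separate");
}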

The design of the RMA functions allows implementors to take advantage of fast or asynchronous communication mechanisms provided by various platforms, such as coherent or noncoherent shared memory, DMA engines, hardware-supported put/get operations, and communication coprocessors. The most frequently used RMA communication mechanisms can be layered on top of message-passing. However, certain RMA functions might need support for asynchronous communication agents in software (handlers, threads, etc.) in a distributed memory environment.

We shall denote by origin or origin process the MPI process that calls an RMA procedure, and by target or target process the MPI process whose memory is accessed. Thus, in a put operation, source = origin and destination = target; in a get operation, source = target and destination = origin.
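The sketch below (illustrative only) marks these roles in code: the calling process is the origin in both calls, and target is the rank whose window memory is accessed; win, buf, and the displacement 0 are assumptions.

#include <mpi.h>

void origin_target_demo(double *buf, int target, MPI_Win win)
{
    MPI_Win_fence(0, win);
    /* Put: source = origin (this process), destination = target.            */
    MPI_Put(buf, 1, MPI_DOUBLE, target, 0, 1, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);
    /* Get: source = target, destination = origin (this process).            */
    MPI_Get(buf, 1, MPI_DOUBLE, target, 0, 1, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);
}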



