87. Applying Collective Operations to Intercommunicators



To understand how collective operations apply to intercommunicators, we can view most MPI intracommunicator collective operations as fitting one of the following categories (see, for instance, [43]):

All-To-All
All processes contribute to the result. All processes receive the result.
All-To-One
All processes contribute to the result. One process receives the result.
One-To-All
One process contributes to the result. All processes receive the result.
Other
Collective operations that do not fit into one of the above categories.

The MPI_BARRIER operation does not fit this classification, since no data is moved (other than the implicit fact that a barrier has been called). The data movement patterns of MPI_SCAN and MPI_EXSCAN also do not fit this taxonomy.

The application of collective communication to intercommunicators is best described in terms of two groups. For example, an all-to-all MPI_ALLGATHER operation can be described as collecting data from all members of one group with the result appearing in all members of the other group (see Figure 2). As another example, a one-to-all MPI_BCAST operation sends data from one member of one group to all members of the other group. Collective computation operations such as MPI_REDUCE_SCATTER have a similar interpretation (see Figure 3). For intracommunicators, these two groups are the same. For intercommunicators, these two groups are distinct. For the all-to-all operations, each such operation is described in two phases, so that it has a symmetric, full-duplex behavior.
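As an illustration of the one-to-all case, the following sketch shows how an intercommunicator broadcast is invoked in C. It assumes an intercommunicator (here called intercomm) has already been constructed, e.g. with MPI_INTERCOMM_CREATE, and that the root is rank 0 of one group; the function name and the in_root_group flag are illustrative, not part of the standard. In the root's group, the root passes MPI_ROOT and the other members pass MPI_PROC_NULL; in the remote group, every process passes the root's rank within the other group.

```c
#include <mpi.h>
#include <stdio.h>

/* Sketch: one-to-all broadcast across an intercommunicator.
 * "intercomm" and "in_root_group" are assumed to be set up by the
 * caller; they are illustrative names, not defined by the standard. */
void bcast_across_groups(MPI_Comm intercomm, int in_root_group)
{
    int buf = 0;
    int rank;
    MPI_Comm_rank(intercomm, &rank);   /* rank within the local group */

    if (in_root_group) {
        if (rank == 0) {
            /* The root itself passes MPI_ROOT as the root argument. */
            buf = 42;
            MPI_Bcast(&buf, 1, MPI_INT, MPI_ROOT, intercomm);
        } else {
            /* Other members of the root's group pass MPI_PROC_NULL. */
            MPI_Bcast(&buf, 1, MPI_INT, MPI_PROC_NULL, intercomm);
        }
    } else {
        /* Members of the remote group pass the rank of the root
         * within the other group (here, 0). */
        MPI_Bcast(&buf, 1, MPI_INT, 0, intercomm);
        printf("received %d\n", buf);
    }
}
```

Note the asymmetry: only the remote group receives data, which is what distinguishes the intercommunicator form from the familiar intracommunicator broadcast.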

The following collective operations also apply to intercommunicators:

MPI_BARRIER,
MPI_BCAST,
MPI_GATHER, MPI_GATHERV,
MPI_SCATTER, MPI_SCATTERV,
MPI_ALLGATHER, MPI_ALLGATHERV,
MPI_ALLTOALL, MPI_ALLTOALLV, MPI_ALLTOALLW,
MPI_ALLREDUCE, MPI_REDUCE,
MPI_REDUCE_SCATTER.

In C++, the bindings for these functions are in the MPI::Comm class. However, since the collective operations do not make sense on a C++ MPI::Comm (as it is neither an intercommunicator nor an intracommunicator), the functions are all pure virtual.


Figure 2: Intercommunicator allgather. The focus of data to one process is represented, not mandated by the semantics. The two phases do allgathers in both directions.
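The two-phase allgather of Figure 2 can be sketched as follows. This is a hedged illustration, not the standard's normative text: it assumes an existing intercommunicator (intercomm), and the function name is invented for the example. Each process contributes one integer and receives one integer from every member of the remote group, whose size is obtained with MPI_COMM_REMOTE_SIZE.

```c
#include <mpi.h>
#include <stdlib.h>

/* Sketch: intercommunicator MPI_Allgather.  Each process sends its
 * contribution to every process of the *remote* group and receives
 * the contributions of all remote processes; the two directions
 * correspond to the two phases described in the text.
 * "intercomm" is assumed to have been created by the caller. */
void allgather_across_groups(MPI_Comm intercomm)
{
    int rank, remote_size;
    MPI_Comm_rank(intercomm, &rank);              /* local-group rank */
    MPI_Comm_remote_size(intercomm, &remote_size);

    int sendval = rank;   /* this process's contribution */
    /* The receive buffer holds one value per remote-group member. */
    int *recvbuf = malloc(remote_size * sizeof(int));

    MPI_Allgather(&sendval, 1, MPI_INT,
                  recvbuf, 1, MPI_INT, intercomm);

    /* ... use recvbuf ... */
    free(recvbuf);
}
```

The receive count is per remote process, so the buffer is sized by the remote group, not the local one; this is the main point at which intercommunicator and intracommunicator calls differ in their buffer arguments.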


Figure 3: Intercommunicator reduce-scatter. The focus of data to one process is represented, not mandated by the semantics. The two phases do reduce-scatters in both directions.






MPI-2.0 of July 1, 2008
HTML Generated on July 6, 2008