7.2.2. Applying Collective Operations to Inter-Communicators


To understand how collective operations apply to inter-communicators, we can view most MPI intra-communicator collective operations as fitting one of the following categories (see, for instance, [64]):

All-To-All
All MPI processes contribute to the result. All MPI processes receive the result.
All-To-One
All MPI processes contribute to the result. One MPI process receives the result.
One-To-All
One MPI process contributes to the result. All MPI processes receive the result.
Other
Collective operations that do not fit into one of the above categories.

The data movement patterns of MPI_SCAN, MPI_ISCAN, MPI_SCAN_INIT, MPI_EXSCAN, MPI_IEXSCAN, and MPI_EXSCAN_INIT do not fit this taxonomy.

The application of collective communication to inter-communicators is best described in terms of two groups. For example, an all-to-all MPI_ALLGATHER operation can be described as collecting data from all members of one group with the result appearing in all members of the other group (see Figure 5). As another example, a one-to-all MPI_BCAST operation sends data from one member of one group to all members of the other group. Collective computation operations such as MPI_REDUCE_SCATTER have a similar interpretation (see Figure 6). For intra-communicators, these two groups are the same. For inter-communicators, these two groups are distinct. For the all-to-all operations, each such operation is described in two phases, so that it has a symmetric, full-duplex behavior.
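
As a concrete illustration of the one-to-all case, the following C sketch broadcasts an integer from one member of one group to all members of the other group over an inter-communicator. The communicator name intercomm, the membership flag in_group_a, and the choice of rank 0 as the root are assumptions of the example, not part of the standard text; the MPI_ROOT / MPI_PROC_NULL convention used in the root's group is described in the next section, Specifics for Inter-Communicator Collective Operations.

/* Hedged sketch: one-to-all MPI_Bcast over an inter-communicator.
 * "intercomm", "in_group_a", and the choice of root rank 0 are assumptions
 * of this example, not part of the standard text. */
#include <mpi.h>

void bcast_across_groups(MPI_Comm intercomm, int in_group_a, int my_rank_in_group)
{
    int data = 0;

    if (in_group_a) {
        if (my_rank_in_group == 0) {
            data = 42;  /* value contributed by the single root process */
            /* The root passes MPI_ROOT as the root argument. */
            MPI_Bcast(&data, 1, MPI_INT, MPI_ROOT, intercomm);
        } else {
            /* The other members of the root's group pass MPI_PROC_NULL
               and take no part in the data transfer. */
            MPI_Bcast(&data, 1, MPI_INT, MPI_PROC_NULL, intercomm);
        }
    } else {
        /* Every process in the other group receives; the root argument is
           the rank of the root within the remote group. */
        MPI_Bcast(&data, 1, MPI_INT, 0, intercomm);
    }
}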

The following collective operations also apply to inter-communicators:




Figure 5: Inter-communicator allgather. The focus of data to one MPI process is represented, not mandated by the semantics. The two phases do allgathers in both directions.



Figure 6: Inter-communicator reduce-scatter. The focus of data to one MPI process is represented, not mandated by the semantics. The two phases do reduce-scatters in both directions.
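
The symmetric, full-duplex behavior sketched in Figure 5 can also be shown in code. In the hedged C sketch below, each process contributes one integer and receives one integer from every member of the remote group; the name intercomm and the choice of a single MPI_INT per process are assumptions of the example, not part of the standard text.

/* Hedged sketch: all-to-all MPI_Allgather over an inter-communicator.
 * "intercomm" and the one-integer-per-process layout are assumptions. */
#include <mpi.h>
#include <stdlib.h>

void allgather_across_groups(MPI_Comm intercomm)
{
    int my_rank, remote_size;
    MPI_Comm_rank(intercomm, &my_rank);            /* rank within the local group */
    MPI_Comm_remote_size(intercomm, &remote_size); /* size of the other group */

    int sendval = my_rank;                          /* each process contributes one int */
    int *recvbuf = malloc(remote_size * sizeof(int));

    /* Each process sends its value to every member of the remote group and
       receives one value from every member of the remote group, so data
       moves in both directions in a single call. */
    MPI_Allgather(&sendval, 1, MPI_INT, recvbuf, 1, MPI_INT, intercomm);

    free(recvbuf);
}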



