To understand how collective operations apply to intercommunicators, we can view most MPI intracommunicator collective operations as fitting one of four categories: all-to-all, all-to-one, one-to-all, and other (see, for instance, [43]).
The application of collective communication to intercommunicators is best described in terms of two groups. For example, an all-to-all MPI_ALLGATHER operation can be described as collecting data from all members of one group, with the result appearing in all members of the other group (see Figure 2). As another example, a one-to-all MPI_BCAST operation sends data from one member of one group to all members of the other group. Collective computation operations such as MPI_REDUCE_SCATTER have a similar interpretation (see Figure 3).
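The one-to-all case above can be sketched in C. In an intercommunicator broadcast, the root process in the sending group passes MPI_ROOT as the root argument, the other members of that group pass MPI_PROC_NULL, and every process in the receiving group passes the root's rank as it appears in the remote group. The function name and the assumption that an intercommunicator already exists (e.g., from MPI_Intercomm_create) are illustrative, not from the standard text:

```c
/* Sketch: one-to-all MPI_Bcast across an intercommunicator.
   Assumes `intercomm` was created earlier (e.g., with
   MPI_Intercomm_create) and that rank 0 of group A is the root. */
#include <mpi.h>

void bcast_across_groups(MPI_Comm intercomm, int in_group_a, int *buf)
{
    int rank;
    MPI_Comm_rank(intercomm, &rank);   /* rank within the local group */

    if (in_group_a) {
        /* Sending group: exactly one process passes MPI_ROOT; the
           others pass MPI_PROC_NULL and contribute nothing. */
        int root = (rank == 0) ? MPI_ROOT : MPI_PROC_NULL;
        MPI_Bcast(buf, 1, MPI_INT, root, intercomm);
    } else {
        /* Receiving group: every process names the root's rank as
           it appears in the *remote* (sending) group. */
        MPI_Bcast(buf, 1, MPI_INT, 0, intercomm);
    }
}
```

Note the asymmetry with the intracommunicator call: the data moves strictly from one group to the other, so the two groups pass different root arguments.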
For intracommunicators, these two groups are the same; for intercommunicators, they are distinct. Each all-to-all operation is described in two phases, so that it has a symmetric, full-duplex behavior.
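The full-duplex behavior can be illustrated with a complete program, a sketch that splits MPI_COMM_WORLD into two halves, joins them with MPI_Intercomm_create, and performs an intercommunicator MPI_ALLGATHER; the data each process receives comes entirely from the other group. The color/leader choices and the tag value are illustrative assumptions (run with an even number of processes):

```c
/* Sketch: symmetric (full-duplex) MPI_Allgather across an
   intercommunicator built by splitting MPI_COMM_WORLD in half. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Comm local, inter;
    int wrank, wsize;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
    MPI_Comm_size(MPI_COMM_WORLD, &wsize);

    int color = (wrank < wsize / 2) ? 0 : 1;          /* two halves */
    MPI_Comm_split(MPI_COMM_WORLD, color, wrank, &local);

    /* The remote leader is rank 0 of the other half in MPI_COMM_WORLD. */
    int remote_leader = (color == 0) ? wsize / 2 : 0;
    MPI_Intercomm_create(local, 0, MPI_COMM_WORLD, remote_leader,
                         /* tag = */ 99, &inter);

    int remote_size;
    MPI_Comm_remote_size(inter, &remote_size);

    /* Each process contributes its world rank; the gathered result
       that arrives here comes from the *other* group's processes. */
    int *recv = malloc(remote_size * sizeof(int));
    MPI_Allgather(&wrank, 1, MPI_INT, recv, 1, MPI_INT, inter);

    free(recv);
    MPI_Comm_free(&inter);
    MPI_Comm_free(&local);
    MPI_Finalize();
    return 0;
}
```

Because both groups send and receive in the same call, the operation behaves symmetrically: each half gathers the contributions of the other half.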
The collective operations in these categories also apply to intercommunicators. The MPI_BARRIER operation does not fit into this classification, since no data is moved (other than the implicit fact that a barrier has been called). The data movement patterns of MPI_SCAN and MPI_EXSCAN do not fit this taxonomy.
In C++, the bindings for these functions are in the MPI::Comm class. However, since the collective operations do not make sense on a C++ MPI::Comm (as it is neither an intercommunicator nor an intracommunicator), the functions are all pure virtual.


MPI-2.0 of July 1, 2008
HTML Generated on July 6, 2008