7.10.2. MPI_REDUCE_SCATTER

MPI_REDUCE_SCATTER extends the functionality of MPI_REDUCE_SCATTER_BLOCK such that the scattered blocks can vary in size. Block sizes are determined by the recvcounts array, such that the i-th block contains recvcounts[i] elements.

MPI_REDUCE_SCATTER(sendbuf, recvbuf, recvcounts, datatype, op, comm)
IN  sendbuf     starting address of send buffer (choice)
OUT recvbuf     starting address of receive buffer (choice)
IN  recvcounts  nonnegative integer array (of length group size) specifying the number of elements of the result distributed to each MPI process
IN  datatype    datatype of elements of send and receive buffers (handle)
IN  op          operation (handle)
IN  comm        communicator (handle)
C binding
int MPI_Reduce_scatter(const void *sendbuf, void *recvbuf, const int recvcounts[], MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
int MPI_Reduce_scatter_c(const void *sendbuf, void *recvbuf, const MPI_Count recvcounts[], MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
Fortran 2008 binding
MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts, datatype, op, comm, ierror)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
TYPE(*), DIMENSION(..) :: recvbuf
INTEGER, INTENT(IN) :: recvcounts(*)
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Op), INTENT(IN) :: op
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts, datatype, op, comm, ierror) !(_c)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
TYPE(*), DIMENSION(..) :: recvbuf
INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: recvcounts(*)
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Op), INTENT(IN) :: op
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_REDUCE_SCATTER(SENDBUF, RECVBUF, RECVCOUNTS, DATATYPE, OP, COMM, IERROR)

<type> SENDBUF(*), RECVBUF(*)
INTEGER RECVCOUNTS(*), DATATYPE, OP, COMM, IERROR

If comm is an intra-communicator, MPI_REDUCE_SCATTER first performs a global, element-wise reduction on vectors of count = recvcounts[0] + recvcounts[1] + ... + recvcounts[n-1] elements in the send buffers defined by sendbuf, count and datatype, using the operation op, where n is the number of MPI processes in the group of comm. The routine is called by all group members using the same arguments for recvcounts, datatype, op and comm. The resulting vector is treated as n consecutive blocks where the number of elements of the i-th block is recvcounts[i]. The blocks are scattered to the MPI processes of the group. The i-th block is sent to MPI process i and stored in the receive buffer defined by recvbuf, recvcounts[i] and datatype.
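
For illustration (this example is not part of the standard text), the following C sketch calls MPI_REDUCE_SCATTER on MPI_COMM_WORLD with MPI_INT data and MPI_SUM; the choice of i+1 elements for block i is arbitrary.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* recvcounts must be identical on every process; block i has i+1 elements. */
    int *recvcounts = malloc(size * sizeof(int));
    int count = 0;
    for (int i = 0; i < size; i++) {
        recvcounts[i] = i + 1;
        count += recvcounts[i];
    }

    /* Every process contributes the vector 0, 1, ..., count-1, so the reduced
     * vector holds size * j at position j. */
    int *sendbuf = malloc(count * sizeof(int));
    for (int j = 0; j < count; j++)
        sendbuf[j] = j;

    /* Each process receives only its own block of recvcounts[rank] elements. */
    int *recvbuf = malloc(recvcounts[rank] * sizeof(int));
    MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts, MPI_INT, MPI_SUM,
                       MPI_COMM_WORLD);

    printf("rank %d received %d elements, first = %d\n",
           rank, recvcounts[rank], recvbuf[0]);

    free(sendbuf); free(recvbuf); free(recvcounts);
    MPI_Finalize();
    return 0;
}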


Advice to implementors.

The MPI_REDUCE_SCATTER routine is functionally equivalent to an MPI_REDUCE collective operation with count equal to the sum of recvcounts[i] followed by MPI_SCATTERV with sendcounts equal to recvcounts. However, a direct implementation may run faster. (End of advice to implementors.)
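
A C sketch of the equivalence described in the advice above, for illustration only (MPI_INT and MPI_SUM are fixed and the helper name reduce_scatter_equiv is invented here; a real MPI library will normally use a faster direct algorithm):

#include <mpi.h>
#include <stdlib.h>

/* Reduce the full vector onto rank 0, then scatter block i (recvcounts[i]
 * elements, displaced by the prefix sums of recvcounts) to rank i. */
void reduce_scatter_equiv(const int *sendbuf, int *recvbuf,
                          const int recvcounts[], MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    int count = 0;
    int *displs = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++) {
        displs[i] = count;            /* start of block i in the reduced vector */
        count += recvcounts[i];
    }

    int *tmp = (rank == 0) ? malloc(count * sizeof(int)) : NULL;

    /* Step 1: element-wise reduction of the whole count-element vector. */
    MPI_Reduce(sendbuf, tmp, count, MPI_INT, MPI_SUM, 0, comm);

    /* Step 2: MPI_SCATTERV with sendcounts equal to recvcounts. */
    MPI_Scatterv(tmp, recvcounts, displs, MPI_INT,
                 recvbuf, recvcounts[rank], MPI_INT, 0, comm);

    free(displs);
    free(tmp);
}
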
The "in place" option for intra-communicators is specified by passing MPI_IN_PLACE in the sendbuf argument. In this case, the input data is taken from the receive buffer. It is not required to specify the "in place" option on all MPI processes, since the MPI processes for which recvcounts[i] = 0 may not have allocated a receive buffer.
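
A minimal C sketch of the in-place calling convention (not from the standard text), assuming MPI_INT data, MPI_SUM and one element per block, so that count equals the group size; the full count-element input vector is supplied in the receive buffer.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* One element per block, so count equals the group size. */
    int *recvcounts = malloc(size * sizeof(int));
    int count = 0;
    for (int i = 0; i < size; i++) { recvcounts[i] = 1; count += 1; }

    /* With MPI_IN_PLACE the input data is taken from the receive buffer, so
     * buf holds the full count-element input vector, not just one block. */
    int *buf = malloc(count * sizeof(int));
    for (int j = 0; j < count; j++) buf[j] = rank + j;

    MPI_Reduce_scatter(MPI_IN_PLACE, buf, recvcounts, MPI_INT, MPI_SUM,
                       MPI_COMM_WORLD);

    free(buf); free(recvcounts);
    MPI_Finalize();
    return 0;
}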

If comm is an inter-communicator, then the result of the reduction of the data provided by MPI processes in one group (group A) is scattered among MPI processes in the other group (group B), and vice versa. Within each group, all MPI processes provide the same recvcounts argument, and provide input vectors of count = recvcounts[0] + recvcounts[1] + ... + recvcounts[n-1] elements stored in the send buffers, where n is the size of the group. The resulting vector from the other group is scattered in blocks of recvcounts[i] elements among the MPI processes in the group. The number of elements count must be the same for the two groups.


Rationale.

The last restriction is needed so that the length of the send buffer can be determined by the sum of the local recvcounts entries. Otherwise, communication is needed to figure out how many elements are reduced. (End of rationale.)
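
For illustration (not part of the standard text), the following C sketch exercises the inter-communicator case: it splits an even number of world processes into two equal groups with MPI_Comm_split, connects them with MPI_Intercomm_create, and then calls MPI_REDUCE_SCATTER on the resulting inter-communicator; equal group sizes guarantee that both groups provide the same count.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int wrank;
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);

    /* Group A = even world ranks, group B = odd world ranks; an even total
     * keeps the two groups the same size, so their counts match below. */
    int color = wrank % 2;
    MPI_Comm intra, inter;
    MPI_Comm_split(MPI_COMM_WORLD, color, wrank, &intra);
    MPI_Intercomm_create(intra, 0, MPI_COMM_WORLD, 1 - color, 0, &inter);

    int lrank, lsize;
    MPI_Comm_rank(inter, &lrank);   /* rank within the local group */
    MPI_Comm_size(inter, &lsize);   /* size of the local group (n) */

    /* Same recvcounts on every process of the group: two elements per block. */
    int *recvcounts = malloc(lsize * sizeof(int));
    int count = 0;
    for (int i = 0; i < lsize; i++) { recvcounts[i] = 2; count += 2; }

    int *sendbuf = malloc(count * sizeof(int));
    int *recvbuf = malloc(recvcounts[lrank] * sizeof(int));
    for (int j = 0; j < count; j++) sendbuf[j] = j;

    /* The reduction of the other group's send buffers is scattered over this
     * group in blocks of recvcounts[i] elements, and vice versa. */
    MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts, MPI_INT, MPI_SUM, inter);

    free(sendbuf); free(recvbuf); free(recvcounts);
    MPI_Comm_free(&inter);
    MPI_Comm_free(&intra);
    MPI_Finalize();
    return 0;
}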

