7.6. Scatter


MPI_SCATTER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm)
IN sendbuf: address of send buffer (choice, significant only at root)
IN sendcount: number of elements sent to each MPI process (non-negative integer, significant only at root)
IN sendtype: datatype of send buffer elements (handle, significant only at root)
OUT recvbuf: address of receive buffer (choice)
IN recvcount: number of elements in receive buffer (non-negative integer)
IN recvtype: datatype of receive buffer elements (handle)
IN root: rank of sending MPI process (integer)
IN comm: communicator (handle)
C binding
int MPI_Scatter(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
int MPI_Scatter_c(const void *sendbuf, MPI_Count sendcount, MPI_Datatype sendtype, void *recvbuf, MPI_Count recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
Fortran 2008 binding
MPI_Scatter(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, ierror)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
INTEGER, INTENT(IN) :: sendcount, recvcount, root
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(*), DIMENSION(..) :: recvbuf
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_Scatter(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, ierror) !(_c)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: sendcount, recvcount
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(*), DIMENSION(..) :: recvbuf
INTEGER, INTENT(IN) :: root
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_SCATTER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR)

<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR

MPI_SCATTER is the inverse operation to MPI_GATHER.

If comm is an intra-communicator, the outcome is as if the root executed n send operations,

MPI_Send(sendbuf + i · sendcount · extent(sendtype), sendcount, sendtype, i, ...),   i = 0, ..., n - 1,

and each MPI process executed a receive,

MPI_Recv(recvbuf, recvcount, recvtype, root, ...).

An alternative description is that the root sends a message with MPI_Send(sendbuf, sendcount·n, sendtype, ...). This message is split into n equal segments; the i-th segment is sent to the i-th MPI process in the group, and each MPI process receives this message as described above.

The send buffer is ignored for all nonroot MPI processes.

The type signature associated with sendcount, sendtype at the root must be equal to the type signature associated with recvcount, recvtype at all MPI processes (however, the type maps may be different). This implies that the amount of data sent must be equal to the amount of data received, pairwise between each MPI process and the root. Distinct type maps between sender and receiver are still allowed.

All arguments to the function are significant on the root, while on other MPI processes, only arguments recvbuf, recvcount, recvtype, root, and comm are significant. The arguments root and comm must have identical values on all MPI processes.

The specification of counts and types should not cause any location on the root to be read more than once.


Rationale.

Though not needed, the last restriction is imposed so as to achieve symmetry with MPI_GATHER, where the corresponding restriction (a multiple-write restriction) is necessary. (End of rationale.)
The "in place" option for intra-communicators is specified by passing MPI_IN_PLACE as the value of recvbuf at the root. In such a case, recvcount and recvtype are ignored, and the root "sends" no data to itself. The scattered vector is still assumed to contain n segments, where n is the group size; the root-th segment, which the root should "send to itself," is not moved.

If comm is an inter-communicator, then the call involves all MPI processes in the inter-communicator, but with one group (group A) defining the root. All MPI processes in the other group (group B) pass the same value in argument root, which is the rank of the root in group A. The root passes the value MPI_ROOT in root. All other MPI processes in group A pass the value MPI_PROC_NULL in root. Data is scattered from the root to all MPI processes in group B. The receive buffer arguments of the MPI processes in group B must be consistent with the send buffer argument of the root.

MPI_SCATTERV(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm)
IN sendbuf: address of send buffer (choice, significant only at root)
IN sendcounts: non-negative integer array (of length group size) specifying the number of elements to send to each rank (significant only at root)
IN displs: integer array (of length group size). Entry i specifies the displacement (relative to sendbuf) from which to take the outgoing data to MPI process i (significant only at root)
IN sendtype: datatype of send buffer elements (handle, significant only at root)
OUT recvbuf: address of receive buffer (choice)
IN recvcount: number of elements in receive buffer (non-negative integer)
IN recvtype: datatype of receive buffer elements (handle)
IN root: rank of sending MPI process (integer)
IN comm: communicator (handle)
C binding
int MPI_Scatterv(const void *sendbuf, const int sendcounts[], const int displs[], MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
int MPI_Scatterv_c(const void *sendbuf, const MPI_Count sendcounts[], const MPI_Aint displs[], MPI_Datatype sendtype, void *recvbuf, MPI_Count recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
Fortran 2008 binding
MPI_Scatterv(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm, ierror)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
INTEGER, INTENT(IN) :: sendcounts(*), displs(*), recvcount, root
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(*), DIMENSION(..) :: recvbuf
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_Scatterv(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm, ierror) !(_c)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: sendcounts(*), recvcount
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: displs(*)
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(*), DIMENSION(..) :: recvbuf
INTEGER, INTENT(IN) :: root
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_SCATTERV(SENDBUF, SENDCOUNTS, DISPLS, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR)

<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNTS(*), DISPLS(*), SENDTYPE, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR

MPI_SCATTERV is the inverse operation to MPI_GATHERV.

MPI_SCATTERV extends the functionality of MPI_SCATTER by allowing a varying count of data to be sent to each MPI process, since sendcounts is now an array. It also allows more flexibility as to where the data is taken from on the root, by providing an additional argument, displs.

If comm is an intra-communicator, the outcome is as if the root executed n send operations,

MPI_Send(sendbuf + displs[i] · extent(sendtype), sendcounts[i], sendtype, i, ...),   i = 0, ..., n - 1,

and each MPI process executed a receive,

MPI_Recv(recvbuf, recvcount, recvtype, root, ...).

The send buffer is ignored for all nonroot MPI processes.

The type signature implied by sendcounts[i], sendtype at the root must be equal to the type signature implied by recvcount, recvtype at MPI process i (however, the type maps may be different). This implies that the amount of data sent must be equal to the amount of data received, pairwise between each MPI process and the root. Distinct type maps between sender and receiver are still allowed.

All arguments to the function are significant on the root, while on other MPI processes, only arguments recvbuf, recvcount, recvtype, root, and comm are significant. The arguments root and comm must have identical values on all MPI processes.

The specification of counts, types, and displacements should not cause any location on the root to be read more than once.

The "in place" option for intra-communicators is specified by passing MPI_IN_PLACE as the value of recvbuf at the root. In such a case, recvcount and recvtype are ignored, and the root "sends" no data to itself. The scattered vector is still assumed to contain n segments, where n is the group size; the root-th segment, which the root should "send to itself," is not moved.

If comm is an inter-communicator, then the call involves all MPI processes in the inter-communicator, but with one group (group A) defining the root. All MPI processes in the other group (group B) pass the same value in argument root, which is the rank of the root in group A. The root MPI process passes the value MPI_ROOT in root. All other MPI processes in group A pass the value MPI_PROC_NULL in root. Data is scattered from the root to all MPI processes in group B. The receive buffer arguments of the MPI processes in group B must be consistent with the send buffer argument of the root.





(Unofficial) MPI-4.1 of November 2, 2023
HTML Generated on November 19, 2023