7.7. Gather-to-all


MPI_ALLGATHER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)
IN   sendbuf     starting address of send buffer (choice)
IN   sendcount   number of elements in send buffer (non-negative integer)
IN   sendtype    datatype of send buffer elements (handle)
OUT  recvbuf     address of receive buffer (choice)
IN   recvcount   number of elements received from any MPI process (non-negative integer)
IN   recvtype    datatype of receive buffer elements (handle)
IN   comm        communicator (handle)
C binding
int MPI_Allgather(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)
int MPI_Allgather_c(const void *sendbuf, MPI_Count sendcount, MPI_Datatype sendtype, void *recvbuf, MPI_Count recvcount, MPI_Datatype recvtype, MPI_Comm comm)
Fortran 2008 binding
MPI_Allgather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, ierror)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
INTEGER, INTENT(IN) :: sendcount, recvcount
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(*), DIMENSION(..) :: recvbuf
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_Allgather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, ierror) !(_c)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: sendcount, recvcount
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(*), DIMENSION(..) :: recvbuf
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_ALLGATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, COMM, IERROR)

<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, COMM, IERROR

MPI_ALLGATHER can be thought of as MPI_GATHER, but where all MPI processes receive the result, instead of just the root. The block of data sent from the j-th MPI process is received by every MPI process and placed in the j-th block of the buffer recvbuf.

The type signature associated with sendcount, sendtype at an MPI process must be equal to the type signature associated with recvcount, recvtype at any other MPI process.

If comm is an intra-communicator, the outcome of a call to MPI_ALLGATHER(...) is as if all MPI processes executed n calls to

   MPI_Gather(sendbuf, sendcount, sendtype, recvbuf, recvcount,
              recvtype, root, comm)
for root = 0, ..., n-1. The rules for correct usage of MPI_ALLGATHER can be found in the corresponding rules for MPI_GATHER (see Section Gather).
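As an illustration (not part of the standard text), the following minimal C program exercises this definition on MPI_COMM_WORLD. It assumes an MPI installation and a launch such as mpirun -n 4; the fixed buffer size of 64 is an arbitrary assumption for the sketch.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process contributes one int; afterwards every process holds
       the full vector {0, 1, ..., size-1} in recvbuf, with the block
       from process j stored in position j. */
    int sendval = rank;
    int recvbuf[64];                 /* assumes size <= 64 */
    MPI_Allgather(&sendval, 1, MPI_INT, recvbuf, 1, MPI_INT,
                  MPI_COMM_WORLD);

    printf("rank %d received:", rank);
    for (int j = 0; j < size; j++)
        printf(" %d", recvbuf[j]);   /* block j came from process j */
    printf("\n");

    MPI_Finalize();
    return 0;
}
```

Unlike a loop of n MPI_Gather calls, the single MPI_ALLGATHER call allows the implementation to use a more efficient communication pattern.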

The "in place" option for intra-communicators is specified by passing the value MPI_IN_PLACE to the argument sendbuf at all MPI processes. In this case, sendcount and sendtype are ignored, and the input data of each MPI process is assumed to be in the area where that MPI process would receive its own contribution to the receive buffer.
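A sketch of the "in place" idiom in C (illustrative, not normative; the buffer size 64 and the squared-rank values are arbitrary assumptions):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process deposits its contribution directly into its own
       block of recvbuf before the call; sendbuf is MPI_IN_PLACE, and
       the sendcount/sendtype arguments are ignored. */
    int recvbuf[64];                 /* assumes size <= 64 */
    recvbuf[rank] = rank * rank;

    MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                  recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    if (rank == 0)
        for (int j = 0; j < size; j++)
            printf("block %d: %d\n", j, recvbuf[j]);

    MPI_Finalize();
    return 0;
}
```

This avoids copying each process's own contribution from a separate send buffer into the receive buffer.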

If comm is an inter-communicator, then each MPI process of one group (group A) contributes sendcount data items; these data are concatenated and the result is stored at each MPI process in the other group (group B). Conversely the concatenation of the contributions of the MPI processes in group B is stored at each MPI process in group A. The send buffer arguments in group A must be consistent with the receive buffer arguments in group B, and vice versa.


Advice to users.

In the inter-communicator case, the communication pattern of MPI_ALLGATHER need not be symmetric. The number of items sent by MPI processes in group A (as specified by the arguments sendcount, sendtype in group A and the arguments recvcount, recvtype in group B), need not equal the number of items sent by MPI processes in group B (as specified by the arguments sendcount, sendtype in group B and the arguments recvcount, recvtype in group A). In particular, one can move data in only one direction by specifying sendcount = 0 for the communication in the reverse direction. (End of advice to users.)

MPI_ALLGATHERV(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm)
IN   sendbuf     starting address of send buffer (choice)
IN   sendcount   number of elements in send buffer (non-negative integer)
IN   sendtype    datatype of send buffer elements (handle)
OUT  recvbuf     address of receive buffer (choice)
IN   recvcounts  non-negative integer array (of length group size) containing the number of elements that are received from each MPI process
IN   displs      integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from MPI process i
IN   recvtype    datatype of receive buffer elements (handle)
IN   comm        communicator (handle)
C binding
int MPI_Allgatherv(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, const int recvcounts[], const int displs[], MPI_Datatype recvtype, MPI_Comm comm)
int MPI_Allgatherv_c(const void *sendbuf, MPI_Count sendcount, MPI_Datatype sendtype, void *recvbuf, const MPI_Count recvcounts[], const MPI_Aint displs[], MPI_Datatype recvtype, MPI_Comm comm)
Fortran 2008 binding
MPI_Allgatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, ierror)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
INTEGER, INTENT(IN) :: sendcount, recvcounts(*), displs(*)
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(*), DIMENSION(..) :: recvbuf
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_Allgatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, ierror) !(_c)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: sendcount, recvcounts(*)
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(*), DIMENSION(..) :: recvbuf
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: displs(*)
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_ALLGATHERV(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNTS, DISPLS, RECVTYPE, COMM, IERROR)

<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNT, SENDTYPE, RECVCOUNTS(*), DISPLS(*), RECVTYPE, COMM, IERROR

MPI_ALLGATHERV can be thought of as MPI_GATHERV, but where all MPI processes receive the result, instead of just the root. The block of data sent from the j-th MPI process is received by every MPI process and placed in the j-th block of the buffer recvbuf. These blocks need not all be the same size.

The type signature associated with sendcount, sendtype, at MPI process j must be equal to the type signature associated with recvcounts[j], recvtype at any other MPI process.

If comm is an intra-communicator, the outcome is as if all MPI processes executed n calls to

    MPI_Gatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts,
                displs, recvtype, root, comm)
for root = 0, ..., n-1. The rules for correct usage of MPI_ALLGATHERV can be found in the corresponding rules for MPI_GATHERV (see Section Gather).

The "in place" option for intra-communicators is specified by passing the value MPI_IN_PLACE to the argument sendbuf at all MPI processes. In such a case, sendcount and sendtype are ignored, and the input data of each MPI process is assumed to be in the area where that MPI process would receive its own contribution to the receive buffer.

If comm is an inter-communicator, then each MPI process of one group (group A) contributes sendcount data items; these data are concatenated and the result is stored at each MPI process in the other group (group B). Conversely the concatenation of the contributions of the MPI processes in group B is stored at each MPI process in group A. The send buffer arguments in group A must be consistent with the receive buffer arguments in group B, and vice versa.





(Unofficial) MPI-4.1 of November 2, 2023