9.6.2. Neighborhood Alltoall


In the neighborhood alltoall operation, each MPI process i receives data items from each MPI process j if an edge (j,i) exists in the topology graph or Cartesian topology. Similarly, each MPI process i sends data items to all MPI processes j where an edge (i,j) exists. This call is more general than MPI_NEIGHBOR_ALLGATHER in that different data items can be sent to each neighbor. The k-th block in the send buffer is sent to the k-th neighboring MPI process, and the l-th block in the receive buffer is received from the l-th neighbor.

MPI_NEIGHBOR_ALLTOALL(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)
IN sendbuf: starting address of send buffer (choice)
IN sendcount: number of elements sent to each neighbor (non-negative integer)
IN sendtype: datatype of send buffer elements (handle)
OUT recvbuf: starting address of receive buffer (choice)
IN recvcount: number of elements received from each neighbor (non-negative integer)
IN recvtype: datatype of receive buffer elements (handle)
IN comm: communicator with associated virtual topology (handle)
C binding
int MPI_Neighbor_alltoall(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)
int MPI_Neighbor_alltoall_c(const void *sendbuf, MPI_Count sendcount, MPI_Datatype sendtype, void *recvbuf, MPI_Count recvcount, MPI_Datatype recvtype, MPI_Comm comm)
Fortran 2008 binding
MPI_Neighbor_alltoall(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, ierror)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
INTEGER, INTENT(IN) :: sendcount, recvcount
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(*), DIMENSION(..) :: recvbuf
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_Neighbor_alltoall(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, ierror) !(_c)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: sendcount, recvcount
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(*), DIMENSION(..) :: recvbuf
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_NEIGHBOR_ALLTOALL(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, COMM, IERROR)

<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, COMM, IERROR

The MPI_NEIGHBOR_ALLTOALL procedure supports Cartesian communicators, graph communicators, and distributed graph communicators as described in Section Neighborhood Collective Communication on Virtual Topologies. If comm is a distributed graph communicator, the outcome is as if each MPI process executed sends to each of its outgoing neighbors and receives from each of its incoming neighbors:

[The formal, code-like definition appears as an image in the source document.]
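Since that formal code is not reproduced here, the following is a minimal C sketch of the same "as if" behavior, assuming a distributed graph communicator created without edge weights (so MPI_UNWEIGHTED can be passed to MPI_Dist_graph_neighbors); the helper name neighbor_alltoall_sketch and the use of tag 0 are illustrative choices, not part of the standard:

#include <mpi.h>
#include <stdlib.h>

/* Sketch only: mimics the "as if" semantics of MPI_Neighbor_alltoall on a
   distributed graph communicator created without weights. */
static void neighbor_alltoall_sketch(const void *sendbuf, int sendcount,
                                     MPI_Datatype sendtype, void *recvbuf,
                                     int recvcount, MPI_Datatype recvtype,
                                     MPI_Comm comm)
{
    int indegree, outdegree, weighted;
    MPI_Dist_graph_neighbors_count(comm, &indegree, &outdegree, &weighted);

    int *srcs = malloc(indegree * sizeof(int));
    int *dsts = malloc(outdegree * sizeof(int));
    MPI_Dist_graph_neighbors(comm, indegree, srcs, MPI_UNWEIGHTED,
                             outdegree, dsts, MPI_UNWEIGHTED);

    MPI_Aint lb, sendext, recvext;
    MPI_Type_get_extent(sendtype, &lb, &sendext);
    MPI_Type_get_extent(recvtype, &lb, &recvext);

    MPI_Request *reqs = malloc((indegree + outdegree) * sizeof(MPI_Request));

    /* Block l of recvbuf receives from the l-th incoming neighbor. */
    for (int l = 0; l < indegree; l++)
        MPI_Irecv((char *)recvbuf + (MPI_Aint)l * recvcount * recvext,
                  recvcount, recvtype, srcs[l], 0, comm, &reqs[l]);

    /* Block k of sendbuf is sent to the k-th outgoing neighbor. */
    for (int k = 0; k < outdegree; k++)
        MPI_Isend((const char *)sendbuf + (MPI_Aint)k * sendcount * sendext,
                  sendcount, sendtype, dsts[k], 0, comm, &reqs[indegree + k]);

    MPI_Waitall(indegree + outdegree, reqs, MPI_STATUSES_IGNORE);
    free(reqs); free(srcs); free(dsts);
}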

The type signature associated with sendcount, sendtype at an MPI process must be equal to the type signature associated with recvcount, recvtype at any other MPI process. This implies that the amount of data sent must be equal to the amount of data received, pairwise between every pair of communicating MPI processes. Distinct type maps between sender and receiver are still allowed.

The "in place" option is not meaningful for this operation.

All arguments are significant on all MPI processes and the argument comm must have identical values on all MPI processes.
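As a usage illustration (not part of the standard text), the following complete C program performs a one-element halo exchange in each direction of a 2-dimensional periodic Cartesian grid; the grid layout and the data values are illustrative assumptions:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Illustrative 2-D periodic grid; MPI chooses the dimension sizes. */
    int dims[2] = {0, 0}, periods[2] = {1, 1};
    int nprocs, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Dims_create(nprocs, 2, dims);

    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart);
    MPI_Comm_rank(cart, &rank);

    /* For a Cartesian communicator, neighbors are ordered dimension by
       dimension, negative direction first, then positive direction,
       giving 2*ndims = 4 blocks here. */
    double sendbuf[4] = {rank, rank, rank, rank};
    double recvbuf[4];

    MPI_Neighbor_alltoall(sendbuf, 1, MPI_DOUBLE,
                          recvbuf, 1, MPI_DOUBLE, cart);

    printf("rank %d received halos: %g %g %g %g\n",
           rank, recvbuf[0], recvbuf[1], recvbuf[2], recvbuf[3]);

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}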


Example: Buffer usage of MPI_NEIGHBOR_ALLTOALL in the case of a Cartesian virtual topology.

For a halo communication on a Cartesian grid, Figure 23 illustrates the buffer usage in a given direction d for the two cases dims[d]=3 and dims[d]=1, as specified during creation of the communicator.

The figure may apply to any (or multiple) directions in the Cartesian topology. The grey buffers are required in all cases but are only accessed if, during creation of the communicator, periods[d] was defined as nonzero (in C) or .TRUE. (in Fortran).

If sendbuf and recvbuf are declared as (char *) and contain a sequence of buffers, each described by sendcount, sendtype and recvcount, recvtype, respectively, then after MPI_NEIGHBOR_ALLTOALL has returned on a Cartesian communicator, the content of recvbuf is as if the following code were executed:

[The example code appears as an image in the source document.]

The first call to MPI_Sendrecv implements the solid arrows' communication pattern in each diagram of Figure 23, whereas the second call is for the dashed arrows' pattern.

[Figure 23 is rendered as an image in the source document; only its caption is reproduced below.]


Figure 23: Cartesian neighborhood alltoall example for 3 and 1 MPI processes in a dimension
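
The standard's example code is likewise rendered as an image in the source. The following is a hedged C reconstruction of the buffer usage just described, assuming that sendbuf and recvbuf are declared as char * and that sendcount, sendtype, recvcount, recvtype, and comm are in scope as in the call above; the tag values 2*d and 2*d+1 are illustrative choices (they also satisfy the tag requirement discussed in the advice to implementors below):

int ndims, rank_source, rank_dest;
MPI_Aint lb, sendext, recvext;

MPI_Cartdim_get(comm, &ndims);
MPI_Type_get_extent(sendtype, &lb, &sendext);
MPI_Type_get_extent(recvtype, &lb, &recvext);

for (int d = 0; d < ndims; d++) {
    /* rank_source is the neighbor in the negative direction of dimension d,
       rank_dest the neighbor in the positive direction (MPI_PROC_NULL at a
       non-periodic boundary, so the corresponding transfer is skipped). */
    MPI_Cart_shift(comm, d, 1, &rank_source, &rank_dest);

    /* Send block 2*d to the negative neighbor and receive block 2*d+1
       from the positive neighbor. */
    MPI_Sendrecv(sendbuf + 2*d*sendcount*sendext, sendcount, sendtype,
                 rank_source, 2*d,
                 recvbuf + (2*d+1)*recvcount*recvext, recvcount, recvtype,
                 rank_dest, 2*d,
                 comm, MPI_STATUS_IGNORE);

    /* Send block 2*d+1 to the positive neighbor and receive block 2*d
       from the negative neighbor. */
    MPI_Sendrecv(sendbuf + (2*d+1)*sendcount*sendext, sendcount, sendtype,
                 rank_dest, 2*d+1,
                 recvbuf + 2*d*recvcount*recvext, recvcount, recvtype,
                 rank_source, 2*d+1,
                 comm, MPI_STATUS_IGNORE);
}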


Advice to implementors.

For a Cartesian topology, if the grid in a direction d is periodic and dims[d] is equal to 1 or 2, then rank_source and rank_dest are identical, but still all ndims send and ndims receive operations use different buffers. If, in this case, the two send and receive operations per direction, or those of all directions, are internally parallelized, then the several send and receive operations for the same sender-receiver MPI process pair shall be initiated in the same sequence on the sender and receiver side, or they shall be distinguished by different tags. The code above shows a valid sequence of operations and tags. (End of advice to implementors.)

The vector variant, MPI_NEIGHBOR_ALLTOALLV, allows sending and receiving different numbers of elements to and from each neighbor.

MPI_NEIGHBOR_ALLTOALLV(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm)
IN sendbuf: starting address of send buffer (choice)
IN sendcounts: nonnegative integer array (of length outdegree) specifying the number of elements to send to each neighbor
IN sdispls: integer array (of length outdegree). Entry j specifies the displacement (relative to sendbuf) from which to send the outgoing data to neighbor j
IN sendtype: datatype of send buffer elements (handle)
OUT recvbuf: starting address of receive buffer (choice)
IN recvcounts: nonnegative integer array (of length indegree) specifying the number of elements that are received from each neighbor
IN rdispls: integer array (of length indegree). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from neighbor i
IN recvtype: datatype of receive buffer elements (handle)
IN comm: communicator with associated virtual topology (handle)
C binding
int MPI_Neighbor_alltoallv(const void *sendbuf, const int sendcounts[], const int sdispls[], MPI_Datatype sendtype, void *recvbuf, const int recvcounts[], const int rdispls[], MPI_Datatype recvtype, MPI_Comm comm)
int MPI_Neighbor_alltoallv_c(const void *sendbuf, const MPI_Count sendcounts[], const MPI_Aint sdispls[], MPI_Datatype sendtype, void *recvbuf, const MPI_Count recvcounts[], const MPI_Aint rdispls[], MPI_Datatype recvtype, MPI_Comm comm)
Fortran 2008 binding
MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, ierror)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
INTEGER, INTENT(IN) :: sendcounts(*), sdispls(*), recvcounts(*), rdispls(*)
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(*), DIMENSION(..) :: recvbuf
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, ierror) !(_c)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: sendcounts(*), recvcounts(*)
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: sdispls(*), rdispls(*)
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(*), DIMENSION(..) :: recvbuf
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_NEIGHBOR_ALLTOALLV(SENDBUF, SENDCOUNTS, SDISPLS, SENDTYPE, RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPE, COMM, IERROR)

<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNTS(*), SDISPLS(*), SENDTYPE, RECVCOUNTS(*), RDISPLS(*), RECVTYPE, COMM, IERROR

The MPI_NEIGHBOR_ALLTOALLV procedure supports Cartesian communicators, graph communicators, and distributed graph communicators as described in Section Neighborhood Collective Communication on Virtual Topologies. If comm is a distributed graph communicator, the outcome is as if each MPI process executed sends to each of its outgoing neighbors and receives from each of its incoming neighbors:

[The formal, code-like definition appears as an image in the source document.]
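As with MPI_NEIGHBOR_ALLTOALL above, a minimal C sketch of this "as if" behavior (assuming a distributed graph communicator created without edge weights; the helper name and tag 0 are illustrative) could be:

#include <mpi.h>
#include <stdlib.h>

/* Sketch only: mimics the "as if" semantics of MPI_Neighbor_alltoallv;
   displacements are counted in elements of sendtype/recvtype. */
static void neighbor_alltoallv_sketch(const void *sendbuf, const int sendcounts[],
                                      const int sdispls[], MPI_Datatype sendtype,
                                      void *recvbuf, const int recvcounts[],
                                      const int rdispls[], MPI_Datatype recvtype,
                                      MPI_Comm comm)
{
    int indegree, outdegree, weighted;
    MPI_Dist_graph_neighbors_count(comm, &indegree, &outdegree, &weighted);

    int *srcs = malloc(indegree * sizeof(int));
    int *dsts = malloc(outdegree * sizeof(int));
    MPI_Dist_graph_neighbors(comm, indegree, srcs, MPI_UNWEIGHTED,
                             outdegree, dsts, MPI_UNWEIGHTED);

    MPI_Aint lb, sendext, recvext;
    MPI_Type_get_extent(sendtype, &lb, &sendext);
    MPI_Type_get_extent(recvtype, &lb, &recvext);

    MPI_Request *reqs = malloc((indegree + outdegree) * sizeof(MPI_Request));

    /* Data from the l-th incoming neighbor lands at element offset rdispls[l]. */
    for (int l = 0; l < indegree; l++)
        MPI_Irecv((char *)recvbuf + (MPI_Aint)rdispls[l] * recvext,
                  recvcounts[l], recvtype, srcs[l], 0, comm, &reqs[l]);

    /* Data for the k-th outgoing neighbor starts at element offset sdispls[k]. */
    for (int k = 0; k < outdegree; k++)
        MPI_Isend((const char *)sendbuf + (MPI_Aint)sdispls[k] * sendext,
                  sendcounts[k], sendtype, dsts[k], 0, comm, &reqs[indegree + k]);

    MPI_Waitall(indegree + outdegree, reqs, MPI_STATUSES_IGNORE);
    free(reqs); free(srcs); free(dsts);
}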

The type signature associated with sendcounts[k], sendtype with dsts[k]=j at MPI process i must be equal to the type signature associated with recvcounts[l], recvtype with srcs[l]=i at MPI process j. This implies that the amount of data sent must be equal to the amount of data received, pairwise between every pair of communicating MPI processes. Distinct type maps between sender and receiver are still allowed. The data in the sendbuf beginning at offset sdispls[k] elements (in terms of the sendtype) is sent to the k-th outgoing neighbor. The data received from the l-th incoming neighbor is placed into recvbuf beginning at offset rdispls[l] elements (in terms of the recvtype).

The "in place" option is not meaningful for this operation.

All arguments are significant on all MPI processes and the argument comm must have identical values on all MPI processes.

MPI_NEIGHBOR_ALLTOALLW allows one to send and receive with different datatypes to and from each neighbor.

MPI_NEIGHBOR_ALLTOALLW(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm)
IN sendbuf: starting address of send buffer (choice)
IN sendcounts: nonnegative integer array (of length outdegree) specifying the number of elements to send to each neighbor
IN sdispls: integer array (of length outdegree). Entry j specifies the displacement in bytes (relative to sendbuf) from which to take the outgoing data destined for neighbor j (array of integers)
IN sendtypes: array of datatypes (of length outdegree). Entry j specifies the type of data to send to neighbor j (array of handles)
OUT recvbuf: starting address of receive buffer (choice)
IN recvcounts: nonnegative integer array (of length indegree) specifying the number of elements that are received from each neighbor
IN rdispls: integer array (of length indegree). Entry i specifies the displacement in bytes (relative to recvbuf) at which to place the incoming data from neighbor i (array of integers)
IN recvtypes: array of datatypes (of length indegree). Entry i specifies the type of data received from neighbor i (array of handles)
IN comm: communicator with associated virtual topology (handle)
C binding
int MPI_Neighbor_alltoallw(const void *sendbuf, const int sendcounts[], const MPI_Aint sdispls[], const MPI_Datatype sendtypes[], void *recvbuf, const int recvcounts[], const MPI_Aint rdispls[], const MPI_Datatype recvtypes[], MPI_Comm comm)
int MPI_Neighbor_alltoallw_c(const void *sendbuf, const MPI_Count sendcounts[], const MPI_Aint sdispls[], const MPI_Datatype sendtypes[], void *recvbuf, const MPI_Count recvcounts[], const MPI_Aint rdispls[], const MPI_Datatype recvtypes[], MPI_Comm comm)
Fortran 2008 binding
MPI_Neighbor_alltoallw(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, ierror)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
INTEGER, INTENT(IN) :: sendcounts(*), recvcounts(*)
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: sdispls(*), rdispls(*)
TYPE(MPI_Datatype), INTENT(IN) :: sendtypes(*), recvtypes(*)
TYPE(*), DIMENSION(..) :: recvbuf
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_Neighbor_alltoallw(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, ierror) !(_c)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: sendcounts(*), recvcounts(*)
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: sdispls(*), rdispls(*)
TYPE(MPI_Datatype), INTENT(IN) :: sendtypes(*), recvtypes(*)
TYPE(*), DIMENSION(..) :: recvbuf
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_NEIGHBOR_ALLTOALLW(SENDBUF, SENDCOUNTS, SDISPLS, SENDTYPES, RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPES, COMM, IERROR)

<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNTS(*), SENDTYPES(*), RECVCOUNTS(*), RECVTYPES(*), COMM, IERROR
INTEGER(KIND=MPI_ADDRESS_KIND) SDISPLS(*), RDISPLS(*)

The MPI_NEIGHBOR_ALLTOALLW procedure supports Cartesian communicators, graph communicators, and distributed graph communicators as described in Section Neighborhood Collective Communication on Virtual Topologies. If comm is a distributed graph communicator, the outcome is as if each MPI process executed sends to each of its outgoing neighbors and receives from each of its incoming neighbors:

[The formal, code-like definition appears as an image in the source document.]
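Again as a non-normative illustration, a minimal C sketch of this behavior for a distributed graph communicator created without edge weights might be (note that here the displacements are byte offsets and each neighbor has its own datatype; the helper name and tag 0 are illustrative):

#include <mpi.h>
#include <stdlib.h>

/* Sketch only: mimics the "as if" semantics of MPI_Neighbor_alltoallw. */
static void neighbor_alltoallw_sketch(const void *sendbuf, const int sendcounts[],
                                      const MPI_Aint sdispls[],
                                      const MPI_Datatype sendtypes[],
                                      void *recvbuf, const int recvcounts[],
                                      const MPI_Aint rdispls[],
                                      const MPI_Datatype recvtypes[],
                                      MPI_Comm comm)
{
    int indegree, outdegree, weighted;
    MPI_Dist_graph_neighbors_count(comm, &indegree, &outdegree, &weighted);

    int *srcs = malloc(indegree * sizeof(int));
    int *dsts = malloc(outdegree * sizeof(int));
    MPI_Dist_graph_neighbors(comm, indegree, srcs, MPI_UNWEIGHTED,
                             outdegree, dsts, MPI_UNWEIGHTED);

    MPI_Request *reqs = malloc((indegree + outdegree) * sizeof(MPI_Request));

    /* rdispls[l] and sdispls[k] are byte offsets into recvbuf and sendbuf. */
    for (int l = 0; l < indegree; l++)
        MPI_Irecv((char *)recvbuf + rdispls[l], recvcounts[l], recvtypes[l],
                  srcs[l], 0, comm, &reqs[l]);

    for (int k = 0; k < outdegree; k++)
        MPI_Isend((const char *)sendbuf + sdispls[k], sendcounts[k], sendtypes[k],
                  dsts[k], 0, comm, &reqs[indegree + k]);

    MPI_Waitall(indegree + outdegree, reqs, MPI_STATUSES_IGNORE);
    free(reqs); free(srcs); free(dsts);
}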

The type signature associated with sendcounts[k], sendtypes[k] with dsts[k]=j at MPI process i must be equal to the type signature associated with recvcounts[l], recvtypes[l] with srcs[l]=i at MPI process j. This implies that the amount of data sent must be equal to the amount of data received, pairwise between every pair of communicating MPI processes. Distinct type maps between sender and receiver are still allowed.

The "in place" option is not meaningful for this operation.

All arguments are significant on all MPI processes and the argument comm must have identical values on all MPI processes.

