190. Neighbor Alltoall


In this function, each process i receives data items from each process j if an edge (j,i) exists in the topology graph or Cartesian topology. Similarly, each process i sends data items to all processes j where an edge (i,j) exists. This call is more general than MPI_NEIGHBOR_ALLGATHER in that different data items can be sent to each neighbor. The k-th block of the send buffer is sent to the k-th neighboring process, and the l-th block of the receive buffer is received from the l-th neighbor.

MPI_NEIGHBOR_ALLTOALL(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)
IN sendbuf: starting address of send buffer (choice)
IN sendcount: number of elements sent to each neighbor (non-negative integer)
IN sendtype: data type of send buffer elements (handle)
OUT recvbuf: starting address of receive buffer (choice)
IN recvcount: number of elements received from each neighbor (non-negative integer)
IN recvtype: data type of receive buffer elements (handle)
IN comm: communicator with topology structure (handle)

int MPI_Neighbor_alltoall(const void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)

MPI_Neighbor_alltoall(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, ierror)
TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
TYPE(*), DIMENSION(..) :: recvbuf
INTEGER, INTENT(IN) :: sendcount, recvcount
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_NEIGHBOR_ALLTOALL(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, COMM, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, COMM, IERROR

This function supports Cartesian communicators, graph communicators, and distributed graph communicators as described in Section . If comm is a distributed graph communicator, the outcome is as if each process executed sends to each of its outgoing neighbors and receives from each of its incoming neighbors:


MPI_Dist_graph_neighbors_count(comm,&indegree,&outdegree,&weighted); 
int *srcs=(int*)malloc(indegree*sizeof(int)); 
int *dsts=(int*)malloc(outdegree*sizeof(int)); 
MPI_Dist_graph_neighbors(comm,indegree,srcs,MPI_UNWEIGHTED, 
                         outdegree,dsts,MPI_UNWEIGHTED); 
int k,l; 
 
/* assume sendbuf and recvbuf are of type (char*) */ 
for(k=0; k<outdegree; ++k) 
  MPI_Isend(sendbuf+k*sendcount*extent(sendtype),sendcount,sendtype, 
            dsts[k],...);  
 
for(l=0; l<indegree; ++l)  
  MPI_Irecv(recvbuf+l*recvcount*extent(recvtype),recvcount,recvtype, 
            srcs[l],...);  
 
MPI_Waitall(...); 
The type signature associated with sendcount, sendtype, at a process must be equal to the type signature associated with recvcount, recvtype at any other process. This implies that the amount of data sent must be equal to the amount of data received, pairwise between every pair of communicating processes. Distinct type maps between sender and receiver are still allowed.
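
As an illustration of the last point, the following sketch (not part of the standard text) exchanges four integers around a directed ring built with MPI_DIST_GRAPH_CREATE_ADJACENT; each sender describes its data as four contiguous MPI_INTs while each receiver scatters them with a stride-two vector type, so the type maps differ although the type signatures match. The ring construction and buffer sizes are assumptions made for the example.

/* Illustrative sketch (not part of the standard text): distinct type
 * maps with matching type signatures on a directed ring. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int size, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int left  = (rank - 1 + size) % size;   /* single incoming neighbor */
    int right = (rank + 1) % size;          /* single outgoing neighbor */

    MPI_Comm ring;
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   1, &left,  MPI_UNWEIGHTED,
                                   1, &right, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0, &ring);

    /* Receive type: 4 ints placed at stride 2 -- same type signature
     * (4 ints) as the contiguous send, but a different type map. */
    MPI_Datatype strided4;
    MPI_Type_vector(4, 1, 2, MPI_INT, &strided4);
    MPI_Type_commit(&strided4);

    int sendbuf[4] = {rank, rank, rank, rank};
    int recvbuf[8] = {0};                   /* large enough for the strided layout */

    MPI_Neighbor_alltoall(sendbuf, 4, MPI_INT,
                          recvbuf, 1, strided4, ring);

    printf("rank %d: recvbuf[0]=%d recvbuf[2]=%d (from rank %d)\n",
           rank, recvbuf[0], recvbuf[2], left);

    MPI_Type_free(&strided4);
    MPI_Comm_free(&ring);
    MPI_Finalize();
    return 0;
}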

The "in place" option is not meaningful for this operation.

All arguments are significant on all processes and the argument comm must have identical values on all processes.
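
For a Cartesian communicator, the neighbor sequence in the send and receive buffers follows the order of the dimensions, with the neighbor in the negative direction preceding the one in the positive direction. The following complete program is an illustrative sketch (not part of the standard text) that performs a one-element exchange with all four neighbors on a periodic two-dimensional grid; the grid shape and buffer contents are assumptions chosen for the example.

/* Illustrative sketch (not part of the standard text): one-element
 * neighbor exchange on a periodic 2-D Cartesian grid. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int size, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int dims[2] = {0, 0}, periods[2] = {1, 1};
    MPI_Dims_create(size, 2, dims);          /* factor size into a 2-D grid */

    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart);
    MPI_Comm_rank(cart, &rank);

    /* Neighbor order for a 2-D Cartesian communicator:
     * (dim0 -, dim0 +, dim1 -, dim1 +); one int per neighbor. */
    int sendbuf[4] = {rank, rank, rank, rank};
    int recvbuf[4];

    MPI_Neighbor_alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, cart);

    printf("rank %d received %d %d %d %d\n",
           rank, recvbuf[0], recvbuf[1], recvbuf[2], recvbuf[3]);

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}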

The vector variant of MPI_NEIGHBOR_ALLTOALL allows sending/receiving different numbers of elements to and from each neighbor.

MPI_NEIGHBOR_ALLTOALLV(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm)
IN sendbuf: starting address of send buffer (choice)
IN sendcounts: non-negative integer array (of length outdegree) specifying the number of elements to send to each neighbor
IN sdispls: integer array (of length outdegree). Entry j specifies the displacement (relative to sendbuf) from which to send the outgoing data to neighbor j
IN sendtype: data type of send buffer elements (handle)
OUT recvbuf: starting address of receive buffer (choice)
IN recvcounts: non-negative integer array (of length indegree) specifying the number of elements that are received from each neighbor
IN rdispls: integer array (of length indegree). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from neighbor i
IN recvtype: data type of receive buffer elements (handle)
IN comm: communicator with topology structure (handle)

int MPI_Neighbor_alltoallv(const void* sendbuf, const int sendcounts[], const int sdispls[], MPI_Datatype sendtype, void* recvbuf, const int recvcounts[], const int rdispls[], MPI_Datatype recvtype, MPI_Comm comm)

MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, ierror)
TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
TYPE(*), DIMENSION(..) :: recvbuf
INTEGER, INTENT(IN) :: sendcounts(*), sdispls(*), recvcounts(*), rdispls(*)
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_NEIGHBOR_ALLTOALLV(SENDBUF, SENDCOUNTS, SDISPLS, SENDTYPE, RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPE, COMM, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNTS(*), SDISPLS(*), SENDTYPE, RECVCOUNTS(*), RDISPLS(*), RECVTYPE, COMM, IERROR

This function supports Cartesian communicators, graph communicators, and distributed graph communicators as described in Section . If comm is a distributed graph communicator, the outcome is as if each process executed sends to each of its outgoing neighbors and receives from each of its incoming neighbors:


MPI_Dist_graph_neighbors_count(comm,&indegree,&outdegree,&weighted); 
int *srcs=(int*)malloc(indegree*sizeof(int)); 
int *dsts=(int*)malloc(outdegree*sizeof(int)); 
MPI_Dist_graph_neighbors(comm,indegree,srcs,MPI_UNWEIGHTED, 
                         outdegree,dsts,MPI_UNWEIGHTED); 
int k,l; 
 
/* assume sendbuf and recvbuf are of type (char*) */ 
for(k=0; k<outdegree; ++k)  
  MPI_Isend(sendbuf+sdispls[k]*extent(sendtype),sendcounts[k],sendtype, 
            dsts[k],...);  
 
for(l=0; l<indegree; ++l)  
  MPI_Irecv(recvbuf+rdispls[l]*extent(recvtype),recvcounts[l],recvtype, 
            srcs[l],...);  
 
MPI_Waitall(...); 
The type signature associated with sendcounts[k], sendtype with dsts[k]==j at process i must be equal to the type signature associated with recvcounts[l], recvtype with srcs[l]==i at process j. This implies that the amount of data sent must be equal to the amount of data received, pairwise between every pair of communicating processes. Distinct type maps between sender and receiver are still allowed. The data in the sendbuf beginning at offset sdispls[k] elements (in terms of the sendtype) is sent to the k-th outgoing neighbor. The data received from the l-th incoming neighbor is placed into recvbuf beginning at offset rdispls[l] elements (in terms of the recvtype).

The "in place" option is not meaningful for this operation.

All arguments are significant on all processes and the argument comm must have identical values on all processes.
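
The following sketch (not part of the standard text) shows a variable-sized exchange around a directed ring created with MPI_DIST_GRAPH_CREATE_ADJACENT: each process sends rank+1 integers to its single outgoing neighbor and sizes its receive count to match its incoming neighbor. The ring layout and message sizes are assumptions made for the example.

/* Illustrative sketch (not part of the standard text): variable-sized
 * exchange around a directed ring with MPI_Neighbor_alltoallv. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int size, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int left  = (rank - 1 + size) % size;   /* incoming neighbor */
    int right = (rank + 1) % size;          /* outgoing neighbor */

    MPI_Comm ring;
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   1, &left,  MPI_UNWEIGHTED,
                                   1, &right, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0, &ring);

    /* Each process sends rank+1 ints to its right neighbor and therefore
     * receives left+1 ints from its left neighbor. */
    int sendcounts[1] = { rank + 1 };
    int sdispls[1]    = { 0 };
    int recvcounts[1] = { left + 1 };
    int rdispls[1]    = { 0 };

    int sendbuf[rank + 1], recvbuf[left + 1];    /* C99 VLAs sized per rank */
    for (int i = 0; i < rank + 1; ++i)
        sendbuf[i] = rank;

    MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, MPI_INT,
                           recvbuf, recvcounts, rdispls, MPI_INT, ring);

    printf("rank %d received %d ints with value %d\n",
           rank, recvcounts[0], recvbuf[0]);

    MPI_Comm_free(&ring);
    MPI_Finalize();
    return 0;
}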

MPI_NEIGHBOR_ALLTOALLW allows one to send and receive with different datatypes to and from each neighbor.

MPI_NEIGHBOR_ALLTOALLW(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm)
IN sendbuf: starting address of send buffer (choice)
IN sendcounts: non-negative integer array (of length outdegree) specifying the number of elements to send to each neighbor
IN sdispls: integer array (of length outdegree). Entry j specifies the displacement in bytes (relative to sendbuf) from which to take the outgoing data destined for neighbor j
IN sendtypes: array of datatypes (of length outdegree). Entry j specifies the type of data to send to neighbor j (array of handles)
OUT recvbuf: starting address of receive buffer (choice)
IN recvcounts: non-negative integer array (of length indegree) specifying the number of elements that are received from each neighbor
IN rdispls: integer array (of length indegree). Entry i specifies the displacement in bytes (relative to recvbuf) at which to place the incoming data from neighbor i
IN recvtypes: array of datatypes (of length indegree). Entry i specifies the type of data received from neighbor i (array of handles)
IN comm: communicator with topology structure (handle)

int MPI_Neighbor_alltoallw(const void* sendbuf, const int sendcounts[], const MPI_Aint sdispls[], const MPI_Datatype sendtypes[], void* recvbuf, const int recvcounts[], const MPI_Aint rdispls[], const MPI_Datatype recvtypes[], MPI_Comm comm)

MPI_Neighbor_alltoallw(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, ierror)
TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
TYPE(*), DIMENSION(..) :: recvbuf
INTEGER, INTENT(IN) :: sendcounts(*), recvcounts(*)
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: sdispls(*), rdispls(*)
TYPE(MPI_Datatype), INTENT(IN) :: sendtypes(*), recvtypes(*)
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_NEIGHBOR_ALLTOALLW(SENDBUF, SENDCOUNTS, SDISPLS, SENDTYPES, RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPES, COMM, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER(KIND=MPI_ADDRESS_KIND) SDISPLS(*), RDISPLS(*)
INTEGER SENDCOUNTS(*), SENDTYPES(*), RECVCOUNTS(*), RECVTYPES(*), COMM, IERROR

This function supports Cartesian communicators, graph communicators, and distributed graph communicators as described in Section . If comm is a distributed graph communicator, the outcome is as if each process executed sends to each of its outgoing neighbors and receives from each of its incoming neighbors:


MPI_Dist_graph_neighbors_count(comm,&indegree,&outdegree,&weighted); 
int *srcs=(int*)malloc(indegree*sizeof(int)); 
int *dsts=(int*)malloc(outdegree*sizeof(int)); 
MPI_Dist_graph_neighbors(comm,indegree,srcs,MPI_UNWEIGHTED, 
                         outdegree,dsts,MPI_UNWEIGHTED); 
int k,l; 
 
/* assume sendbuf and recvbuf are of type (char*) */ 
for(k=0; k<outdegree; ++k)  
  MPI_Isend(sendbuf+sdispls[k],sendcounts[k], sendtypes[k],dsts[k],...);  
 
for(l=0; l<indegree; ++l)  
  MPI_Irecv(recvbuf+rdispls[l],recvcounts[l], recvtypes[l],srcs[l],...);  
 
MPI_Waitall(...); 
The type signature associated with sendcounts[k], sendtypes[k] with dsts[k]==j at process i must be equal to the type signature associated with recvcounts[l], recvtypes[l] with srcs[l]==i at process j. This implies that the amount of data sent must be equal to the amount of data received, pairwise between every pair of communicating processes. Distinct type maps between sender and receiver are still allowed.

The "in place" option is not meaningful for this operation.

All arguments are significant on all processes and the argument comm must have identical values on all processes.
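
The following sketch (not part of the standard text) uses MPI_NEIGHBOR_ALLTOALLW on a periodic two-dimensional Cartesian grid to send boundary rows to the neighbors in the first dimension and boundary columns, described with a strided vector type, to the neighbors in the second dimension; all incoming data is received as contiguous rows. The block size N and the layout of the receive buffer are assumptions chosen for the example.

/* Illustrative sketch (not part of the standard text): per-neighbor
 * datatypes and byte displacements with MPI_Neighbor_alltoallw. */
#include <mpi.h>
#include <stdio.h>

#define N 4   /* local block is an N x N array of doubles (assumption) */

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int size, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int dims[2] = {0, 0}, periods[2] = {1, 1};
    MPI_Dims_create(size, 2, dims);
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart);
    MPI_Comm_rank(cart, &rank);

    double a[N][N];                       /* local block, filled with own rank */
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            a[i][j] = (double)rank;

    /* Row: N contiguous doubles.  Column: N doubles strided by N. */
    MPI_Datatype row, col;
    MPI_Type_contiguous(N, MPI_DOUBLE, &row);
    MPI_Type_vector(N, 1, N, MPI_DOUBLE, &col);
    MPI_Type_commit(&row);
    MPI_Type_commit(&col);

    /* Cartesian neighbor order: (dim0 -, dim0 +, dim1 -, dim1 +).
     * Send the first/last row to the dim0 neighbors and the first/last
     * column to the dim1 neighbors; receive everything as contiguous rows. */
    int          sendcounts[4] = {1, 1, 1, 1}, recvcounts[4] = {1, 1, 1, 1};
    MPI_Datatype sendtypes[4]  = {row, row, col, col};
    MPI_Datatype recvtypes[4]  = {row, row, row, row};
    MPI_Aint     sdispls[4]    = {0, (MPI_Aint)((N - 1) * N * sizeof(double)),
                                  0, (MPI_Aint)((N - 1) * sizeof(double))};
    double       ghost[4][N];             /* one row of incoming values per neighbor */
    MPI_Aint     rdispls[4];
    for (int k = 0; k < 4; ++k)
        rdispls[k] = (MPI_Aint)(k * N * sizeof(double));

    MPI_Neighbor_alltoallw(a, sendcounts, sdispls, sendtypes,
                           ghost, recvcounts, rdispls, recvtypes, cart);

    printf("rank %d: ghost row from dim0- neighbor starts with %g\n",
           rank, ghost[0][0]);

    MPI_Type_free(&row);
    MPI_Type_free(&col);
    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}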

