In the neighborhood alltoall operation, each MPI process *i* receives data items from each MPI process *j* for which an edge *(j,i)* exists in the associated graph or Cartesian topology. Similarly, each MPI process *i* sends data items to all MPI processes *j* for which an edge *(i,j)* exists. This call is more general than MPI_NEIGHBOR_ALLGATHER in that different data items can be sent to each neighbor. The *k*-th block of the send buffer is sent to the *k*-th neighboring MPI process, and the *l*-th block of the receive buffer is received from the *l*-th neighbor.

MPI_NEIGHBOR_ALLTOALL(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm) | |

IN sendbuf | starting address of send buffer (choice) |

IN sendcount | number of elements sent to each neighbor (non-negative integer) |

IN sendtype | datatype of send buffer elements (handle) |

OUT recvbuf | starting address of receive buffer (choice) |

IN recvcount | number of elements received from each neighbor (non-negative integer) |

IN recvtype | datatype of receive buffer elements (handle) |

IN comm | communicator with associated virtual topology (handle) |

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf

INTEGER, INTENT(IN) :: sendcount, recvcount

TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype

TYPE(*), DIMENSION(..) :: recvbuf

TYPE(MPI_Comm), INTENT(IN) :: comm

INTEGER, OPTIONAL, INTENT(OUT) :: ierror

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf

INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: sendcount, recvcount

TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype

TYPE(*), DIMENSION(..) :: recvbuf

TYPE(MPI_Comm), INTENT(IN) :: comm

INTEGER, OPTIONAL, INTENT(OUT) :: ierror

<type> SENDBUF(*), RECVBUF(*)

INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, COMM, IERROR

The MPI_NEIGHBOR_ALLTOALL procedure supports Cartesian communicators, graph communicators, and distributed graph communicators as described in Section Neighborhood Collective Communication on Virtual Topologies. If comm is a distributed graph communicator, the outcome is as if each MPI process executed sends to each of its outgoing neighbors and receives from each of its incoming neighbors:
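As a sketch of these semantics (in pseudocode: `...` stands for omitted arguments such as tags, request objects, and statuses, and extent(type) abbreviates the extent of a datatype):

```c
/* Sketch only: "..." stands for tags, requests, and statuses;
   extent(type) abbreviates the extent of the datatype.
   sendbuf and recvbuf are treated as (char *). */
MPI_Dist_graph_neighbors_count(comm, &indegree, &outdegree, &weighted);
int *srcs = (int *) malloc(indegree  * sizeof(int));
int *dsts = (int *) malloc(outdegree * sizeof(int));
MPI_Dist_graph_neighbors(comm, indegree, srcs, MPI_UNWEIGHTED,
                               outdegree, dsts, MPI_UNWEIGHTED);

for (k = 0; k < outdegree; ++k)
  MPI_Isend(sendbuf + k * sendcount * extent(sendtype),
            sendcount, sendtype, dsts[k], ..., comm, ...);
for (l = 0; l < indegree; ++l)
  MPI_Irecv(recvbuf + l * recvcount * extent(recvtype),
            recvcount, recvtype, srcs[l], ..., comm, ...);
MPI_Waitall(...);
```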

The type signature associated with sendcount, sendtype at an MPI process must be equal to the type signature associated with recvcount, recvtype at any other MPI process. This implies that the amount of data sent must be equal to the amount of data received, pairwise between every pair of communicating MPI processes. Distinct type maps between sender and receiver are still allowed.

The "in place" option is not meaningful for this operation.

All arguments are significant on all MPI processes and the argument comm must have identical values on all MPI processes.

**Example:** Buffer usage of MPI_NEIGHBOR_ALLTOALL in the case of a Cartesian virtual topology.

For a halo communication on a Cartesian grid, the buffer usage in a given direction d, with dims[d]=3 and dims[d]=1, respectively, during creation of the communicator is described in Figure 23.

The figure may apply to any (or multiple) directions in the Cartesian topology. The grey buffers are required in all cases, but they are accessed only if, during creation of the communicator, periods[d] was defined as nonzero (in C) or .TRUE. (in Fortran).

If sendbuf and recvbuf are declared as (char *) and contain a sequence of buffers each described by sendcount, sendtype and recvcount, recvtype, respectively, then after MPI_NEIGHBOR_ALLTOALL on a Cartesian communicator has returned, the content of recvbuf is as if the following code had been executed:
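A sketch of such code (pseudocode: `...` stands for omitted arguments such as statuses, and extent(type) abbreviates the datatype extent; the pairing of buffer blocks 2*d and 2*d+1 with the negative and positive neighbor follows the neighbor ordering of Cartesian topologies, but the exact pairing and tag choices here are an assumption):

```c
/* Sketch only: halo exchange in each Cartesian direction d.
   Assumption: block 2*d of the buffers corresponds to the neighbor
   in the negative direction, block 2*d+1 to the positive direction. */
MPI_Cartdim_get(comm, &ndims);
for (d = 0; d < ndims; ++d) {
  MPI_Cart_shift(comm, d, 1, &rank_source, &rank_dest);
  /* first exchange per direction, tag d */
  MPI_Sendrecv(sendbuf + 2*d     * sendcount * extent(sendtype),
               sendcount, sendtype, rank_source, /* tag */ d,
               recvbuf + (2*d+1) * recvcount * extent(recvtype),
               recvcount, recvtype, rank_dest,   /* tag */ d, comm, ...);
  /* second exchange per direction, tag ndims+d */
  MPI_Sendrecv(sendbuf + (2*d+1) * sendcount * extent(sendtype),
               sendcount, sendtype, rank_dest,   /* tag */ ndims+d,
               recvbuf + 2*d     * recvcount * extent(recvtype),
               recvcount, recvtype, rank_source, /* tag */ ndims+d, comm, ...);
}
```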

The first call to MPI_Sendrecv implements the solid arrows' communication pattern in each diagram of Figure 23, whereas the second call is for the dashed arrows' pattern.

*Advice to implementors.*

For a Cartesian topology, if the grid in a direction d is periodic and dims[d] is equal to 1 or 2, then rank_source and rank_dest are identical, but still all ndims send and ndims receive operations use different buffers. If, in this case, the two send and receive operations per direction, or those of all directions, are internally parallelized, then the several send and receive operations for the same sender-receiver MPI process pair shall be initiated in the same sequence on the sender and receiver side, or they shall be distinguished by different tags. The code above shows a valid sequence of operations and tags.
(*End of advice to implementors.*)

The vector variant of MPI_NEIGHBOR_ALLTOALL allows
sending/receiving different numbers of elements to and from each neighbor.

MPI_NEIGHBOR_ALLTOALLV(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm) | |

IN sendbuf | starting address of send buffer (choice) |

IN sendcounts | nonnegative integer array (of length outdegree) specifying the number of elements to send to each neighbor |

IN sdispls | integer array (of length outdegree). Entry j specifies the displacement (relative to sendbuf) from which to send the outgoing data to neighbor j |

IN sendtype | datatype of send buffer elements (handle) |

OUT recvbuf | starting address of receive buffer (choice) |

IN recvcounts | nonnegative integer array (of length indegree) specifying the number of elements that are received from each neighbor |

IN rdispls | integer array (of length indegree). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from neighbor i |

IN recvtype | datatype of receive buffer elements (handle) |

IN comm | communicator with associated virtual topology (handle) |

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf

INTEGER, INTENT(IN) :: sendcounts(*), sdispls(*), recvcounts(*), rdispls(*)

TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype

TYPE(*), DIMENSION(..) :: recvbuf

TYPE(MPI_Comm), INTENT(IN) :: comm

INTEGER, OPTIONAL, INTENT(OUT) :: ierror

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf

INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: sendcounts(*), recvcounts(*)

INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: sdispls(*), rdispls(*)

TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype

TYPE(*), DIMENSION(..) :: recvbuf

TYPE(MPI_Comm), INTENT(IN) :: comm

INTEGER, OPTIONAL, INTENT(OUT) :: ierror

<type> SENDBUF(*), RECVBUF(*)

INTEGER SENDCOUNTS(*), SDISPLS(*), SENDTYPE, RECVCOUNTS(*), RDISPLS(*), RECVTYPE, COMM, IERROR

The MPI_NEIGHBOR_ALLTOALLV procedure supports Cartesian communicators, graph communicators, and distributed graph communicators as described in Section Neighborhood Collective Communication on Virtual Topologies. If comm is a distributed graph communicator, the outcome is as if each MPI process executed sends to each of its outgoing neighbors and receives from each of its incoming neighbors:
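As a sketch of these semantics (pseudocode: `...` stands for tags, requests, and statuses, and extent(type) abbreviates the extent of a datatype; displacements here are counted in elements of the respective datatype):

```c
/* Sketch only: per-neighbor counts and element displacements. */
MPI_Dist_graph_neighbors_count(comm, &indegree, &outdegree, &weighted);
int *srcs = (int *) malloc(indegree  * sizeof(int));
int *dsts = (int *) malloc(outdegree * sizeof(int));
MPI_Dist_graph_neighbors(comm, indegree, srcs, MPI_UNWEIGHTED,
                               outdegree, dsts, MPI_UNWEIGHTED);

for (k = 0; k < outdegree; ++k)
  MPI_Isend(sendbuf + sdispls[k] * extent(sendtype),
            sendcounts[k], sendtype, dsts[k], ..., comm, ...);
for (l = 0; l < indegree; ++l)
  MPI_Irecv(recvbuf + rdispls[l] * extent(recvtype),
            recvcounts[l], recvtype, srcs[l], ..., comm, ...);
MPI_Waitall(...);
```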

The type signature associated with sendcounts[k], sendtype with dsts[k]=*j* at MPI process *i* must be equal to the type signature associated with recvcounts[l], recvtype with srcs[l]=*i* at MPI process *j*. This implies that the amount of data sent must be equal to the amount of data received, pairwise between every pair of communicating MPI processes. Distinct type maps between sender and receiver are still allowed. The data in the sendbuf beginning at offset sdispls[k] elements (in terms of the sendtype) is sent to the k-th outgoing neighbor. The data received from the l-th incoming neighbor is placed into recvbuf beginning at offset rdispls[l] elements (in terms of the recvtype).

The "in place" option is not meaningful for this operation.

All arguments are significant on all MPI processes and the argument comm must have identical values on all MPI processes.

MPI_NEIGHBOR_ALLTOALLW allows one to send and receive with different datatypes to and from each neighbor.

MPI_NEIGHBOR_ALLTOALLW(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm) | |

IN sendbuf | starting address of send buffer (choice) |

IN sendcounts | nonnegative integer array (of length outdegree) specifying the number of elements to send to each neighbor |

IN sdispls | integer array (of length outdegree). Entry j specifies the displacement in bytes (relative to sendbuf) from which to take the outgoing data destined for neighbor j (array of integers) |

IN sendtypes | array of datatypes (of length outdegree). Entry j specifies the type of data to send to neighbor j (array of handles) |

OUT recvbuf | starting address of receive buffer (choice) |

IN recvcounts | nonnegative integer array (of length indegree) specifying the number of elements that are received from each neighbor |

IN rdispls | integer array (of length indegree). Entry i specifies the displacement in bytes (relative to recvbuf) at which to place the incoming data from neighbor i (array of integers) |

IN recvtypes | array of datatypes (of length indegree). Entry i specifies the type of data received from neighbor i (array of handles) |

IN comm | communicator with associated virtual topology (handle) |

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf

INTEGER, INTENT(IN) :: sendcounts(*), recvcounts(*)

INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: sdispls(*), rdispls(*)

TYPE(MPI_Datatype), INTENT(IN) :: sendtypes(*), recvtypes(*)

TYPE(*), DIMENSION(..) :: recvbuf

TYPE(MPI_Comm), INTENT(IN) :: comm

INTEGER, OPTIONAL, INTENT(OUT) :: ierror

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf

INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: sendcounts(*), recvcounts(*)

INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: sdispls(*), rdispls(*)

TYPE(MPI_Datatype), INTENT(IN) :: sendtypes(*), recvtypes(*)

TYPE(*), DIMENSION(..) :: recvbuf

TYPE(MPI_Comm), INTENT(IN) :: comm

INTEGER, OPTIONAL, INTENT(OUT) :: ierror

<type> SENDBUF(*), RECVBUF(*)

INTEGER SENDCOUNTS(*), SENDTYPES(*), RECVCOUNTS(*), RECVTYPES(*), COMM, IERROR

INTEGER(KIND=MPI_ADDRESS_KIND) SDISPLS(*), RDISPLS(*)

The MPI_NEIGHBOR_ALLTOALLW procedure supports Cartesian communicators, graph communicators, and distributed graph communicators as described in Section Neighborhood Collective Communication on Virtual Topologies. If comm is a distributed graph communicator, the outcome is as if each MPI process executed sends to each of its outgoing neighbors and receives from each of its incoming neighbors:
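As a sketch of these semantics (pseudocode: `...` stands for tags, requests, and statuses; unlike the vector variant, the displacements are byte offsets and each neighbor has its own datatype):

```c
/* Sketch only: byte displacements and per-neighbor datatypes. */
MPI_Dist_graph_neighbors_count(comm, &indegree, &outdegree, &weighted);
int *srcs = (int *) malloc(indegree  * sizeof(int));
int *dsts = (int *) malloc(outdegree * sizeof(int));
MPI_Dist_graph_neighbors(comm, indegree, srcs, MPI_UNWEIGHTED,
                               outdegree, dsts, MPI_UNWEIGHTED);

for (k = 0; k < outdegree; ++k)
  MPI_Isend((char *) sendbuf + sdispls[k],
            sendcounts[k], sendtypes[k], dsts[k], ..., comm, ...);
for (l = 0; l < indegree; ++l)
  MPI_Irecv((char *) recvbuf + rdispls[l],
            recvcounts[l], recvtypes[l], srcs[l], ..., comm, ...);
MPI_Waitall(...);
```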

The type signature associated with sendcounts[k], sendtypes[k] with dsts[k]=*j* at MPI process *i* must be equal to the type signature associated with recvcounts[l], recvtypes[l] with srcs[l]=*i* at MPI process *j*. This implies that the amount of data sent must be equal to the amount of data received, pairwise between every pair of communicating MPI processes. Distinct type maps between sender and receiver are still allowed.

The "in place" option is not meaningful for this operation.

All arguments are significant on all MPI processes and the argument comm must have identical values on all MPI processes.


(Unofficial) MPI-4.1 of November 2, 2023

HTML Generated on November 19, 2023