9.6.1. Neighborhood Gather


In the neighborhood gather operation, each MPI process i gathers data items from each MPI process j if an edge (j,i) exists in the topology graph, and each MPI process i sends the same data items to all MPI processes j where an edge (i,j) exists. The send buffer is sent to each neighboring MPI process and the l-th block in the receive buffer is received from the l-th neighbor.
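The data movement can be modeled in plain Python (a sketch only, with no MPI involved; the topology graph, buffer contents, and the helper name `neighbor_allgather` are illustrative, not part of the MPI API):

```python
# Pure-Python model of the neighborhood gather data movement.
# A directed edge (j, i) means: process i receives from process j.
edges = [(0, 1), (2, 1), (1, 0)]          # illustrative topology graph

def neighbor_allgather(sendbufs, edges, nprocs):
    """Each process sends its whole send buffer to every outgoing
    neighbor; the l-th block of its receive buffer comes from its
    l-th incoming neighbor, in neighbor order."""
    recvbufs = {}
    for i in range(nprocs):
        # incoming neighbors of i, in the order the edges list them
        srcs = [j for (j, k) in edges if k == i]
        recvbufs[i] = [sendbufs[j] for j in srcs]  # l-th block from l-th source
    return recvbufs

sendbufs = {0: "A", 1: "B", 2: "C"}
result = neighbor_allgather(sendbufs, edges, 3)
# process 1 has incoming neighbors 0 and 2, so it receives their buffers
```

Note that process 2 has no incoming edges in this toy graph, so its receive buffer stays empty, while it still sends to process 1.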

MPI_NEIGHBOR_ALLGATHER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)
IN   sendbuf     starting address of send buffer (choice)
IN   sendcount   number of elements sent to each neighbor (non-negative integer)
IN   sendtype    datatype of send buffer elements (handle)
OUT  recvbuf     starting address of receive buffer (choice)
IN   recvcount   number of elements received from each neighbor (non-negative integer)
IN   recvtype    datatype of receive buffer elements (handle)
IN   comm        communicator with associated virtual topology (handle)
C binding
int MPI_Neighbor_allgather(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)
int MPI_Neighbor_allgather_c(const void *sendbuf, MPI_Count sendcount, MPI_Datatype sendtype, void *recvbuf, MPI_Count recvcount, MPI_Datatype recvtype, MPI_Comm comm)
Fortran 2008 binding
MPI_Neighbor_allgather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, ierror)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
INTEGER, INTENT(IN) :: sendcount, recvcount
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(*), DIMENSION(..) :: recvbuf
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_Neighbor_allgather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, ierror) !(_c)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: sendcount, recvcount
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(*), DIMENSION(..) :: recvbuf
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_NEIGHBOR_ALLGATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, COMM, IERROR)

<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, COMM, IERROR

The MPI_NEIGHBOR_ALLGATHER procedure supports Cartesian communicators, graph communicators, and distributed graph communicators as described in Section Neighborhood Collective Communication on Virtual Topologies. If comm is a distributed graph communicator, the outcome is as if each MPI process executed sends to each of its outgoing neighbors and receives from each of its incoming neighbors:

[Figure omitted: equivalent point-to-point sends and receives]
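The formal equivalence is rendered as an image in this version. A hedged C-style pseudocode reconstruction, following the pattern the MPI standard uses for neighborhood collectives (tags, request objects, and error handling abbreviated with `...`, and `extent(recvtype)` standing informally for the extent of recvtype):

```
MPI_Dist_graph_neighbors_count(comm, &indegree, &outdegree, &weighted);
int *srcs = malloc(indegree * sizeof(int));
int *dsts = malloc(outdegree * sizeof(int));
MPI_Dist_graph_neighbors(comm, indegree, srcs, MPI_UNWEIGHTED,
                         outdegree, dsts, MPI_UNWEIGHTED);

/* the same send buffer goes to every outgoing neighbor */
for (k = 0; k < outdegree; ++k)
  MPI_Isend(sendbuf, sendcount, sendtype, dsts[k], ..., comm, ...);

/* the l-th block of recvbuf is filled from the l-th incoming neighbor */
for (l = 0; l < indegree; ++l)
  MPI_Irecv((char *)recvbuf + l * recvcount * extent(recvtype),
            recvcount, recvtype, srcs[l], ..., comm, ...);

MPI_Waitall(...);
```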

Figure Neighborhood Gather shows the neighborhood gather communication of one MPI process with outgoing neighbors d0, ..., d3 and incoming neighbors s0, ..., s5. The MPI process sends its sendbuf to all four destinations (outgoing neighbors) and receives the contribution from all six sources (incoming neighbors) into separate locations of its receive buffer.

[Figure omitted: Neighborhood gather communication example]

All arguments are significant on all MPI processes and the argument comm must have identical values on all MPI processes.

The type signature associated with sendcount, sendtype at an MPI process must be equal to the type signature associated with recvcount, recvtype at all other MPI processes. This implies that the amount of data sent must be equal to the amount of data received, pairwise between every pair of communicating MPI processes. Distinct type maps between sender and receiver are still allowed.


Rationale.

For optimization reasons, the same type signature is required independently of whether the topology graph is connected or not. (End of rationale.)
The ``in place'' option is not meaningful for this operation.


Example: Buffer usage of MPI_NEIGHBOR_ALLGATHER in the case of a Cartesian virtual topology.

On a Cartesian virtual topology, Figure 22 describes the buffer usage in a given direction d for which dims[d] was set to 3 and to 1, respectively, during creation of the communicator.

The figure may apply to any (or multiple) directions in the Cartesian topology. The grey buffers are required in all cases but are only accessed if, during creation of the communicator, periods[d] was defined as nonzero (in C) or .TRUE. (in Fortran).

[Figure 22 omitted: Cartesian neighborhood allgather example for 3 and 1 processes in a dimension]
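The buffer usage in Figure 22 can be modeled in plain Python (a sketch only, not MPI; the helper name `cart_neighbor_allgather_1d` is invented for illustration). Along one Cartesian direction each process has two neighbor blocks, ordered displacement -1 then +1; at a non-periodic boundary the neighbor is MPI_PROC_NULL and, because communication with MPI_PROC_NULL is a no-op, the corresponding receive block is left untouched:

```python
PROC_NULL = -1  # stands in for MPI_PROC_NULL

def cart_neighbor_allgather_1d(rank, dim, periodic, sendbufs):
    """Model of MPI_Neighbor_allgather along one Cartesian direction.
    Returns [block_from_left, block_from_right]; None marks a block
    that stays untouched because the neighbor is MPI_PROC_NULL."""
    def shift(r, disp):
        t = r + disp
        if periodic:
            return t % dim
        return t if 0 <= t < dim else PROC_NULL

    left, right = shift(rank, -1), shift(rank, +1)
    return [sendbufs[nb] if nb != PROC_NULL else None
            for nb in (left, right)]

sendbufs = {0: "P0", 1: "P1", 2: "P2"}
# dims[d] = 3, non-periodic: rank 0 has no left neighbor
# dims[d] = 3, periodic:     rank 0 receives from rank 2 on the left
# dims[d] = 1, periodic:     the single rank receives its own buffer twice
```

The dims[d] = 1 periodic case shows why the standard keeps both neighbor blocks even in a one-process dimension: both wrap around to the process itself.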

The vector variant of MPI_NEIGHBOR_ALLGATHER allows one to gather different numbers of elements from each neighbor.

MPI_NEIGHBOR_ALLGATHERV(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm)
IN   sendbuf     starting address of send buffer (choice)
IN   sendcount   number of elements sent to each neighbor (non-negative integer)
IN   sendtype    datatype of send buffer elements (handle)
OUT  recvbuf     starting address of receive buffer (choice)
IN   recvcounts  non-negative integer array (of length indegree) containing the number of elements that are received from each neighbor
IN   displs      integer array (of length indegree); entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from neighbor i
IN   recvtype    datatype of receive buffer elements (handle)
IN   comm        communicator with associated virtual topology (handle)
C binding
int MPI_Neighbor_allgatherv(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, const int recvcounts[], const int displs[], MPI_Datatype recvtype, MPI_Comm comm)
int MPI_Neighbor_allgatherv_c(const void *sendbuf, MPI_Count sendcount, MPI_Datatype sendtype, void *recvbuf, const MPI_Count recvcounts[], const MPI_Aint displs[], MPI_Datatype recvtype, MPI_Comm comm)
Fortran 2008 binding
MPI_Neighbor_allgatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, ierror)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
INTEGER, INTENT(IN) :: sendcount, recvcounts(*), displs(*)
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(*), DIMENSION(..) :: recvbuf
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_Neighbor_allgatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, ierror) !(_c)

TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: sendcount, recvcounts(*)
TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
TYPE(*), DIMENSION(..) :: recvbuf
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: displs(*)
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_NEIGHBOR_ALLGATHERV(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNTS, DISPLS, RECVTYPE, COMM, IERROR)

<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNT, SENDTYPE, RECVCOUNTS(*), DISPLS(*), RECVTYPE, COMM, IERROR

The MPI_NEIGHBOR_ALLGATHERV procedure supports Cartesian communicators, graph communicators, and distributed graph communicators as described in Section Neighborhood Collective Communication on Virtual Topologies. If comm is a distributed graph communicator, the outcome is as if each MPI process executed sends to each of its outgoing neighbors and receives from each of its incoming neighbors:

[Figure omitted: equivalent point-to-point sends and receives]
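The formal description is rendered as an image in this version. It differs from the MPI_NEIGHBOR_ALLGATHER case only on the receive side; a hedged C-style pseudocode sketch (tags, request objects, and error handling abbreviated with `...`, and `extent(recvtype)` standing informally for the extent of recvtype):

```
/* sends: identical to MPI_NEIGHBOR_ALLGATHER, the same sendbuf
   goes to every outgoing neighbor dsts[k] */

/* receives: the l-th incoming neighbor's data lands at displs[l] */
for (l = 0; l < indegree; ++l)
  MPI_Irecv((char *)recvbuf + displs[l] * extent(recvtype),
            recvcounts[l], recvtype, srcs[l], ..., comm, ...);

MPI_Waitall(...);
```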

The type signature associated with sendcount, sendtype at MPI process j must be equal to the type signature associated with recvcounts[l], recvtype at any other MPI process with srcs[l]=j. This implies that the amount of data sent must be equal to the amount of data received, pairwise between every pair of communicating MPI processes. Distinct type maps between sender and receiver are still allowed. The data received from the l-th neighbor is placed into recvbuf beginning at offset displs[l] elements (in terms of the recvtype).
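How recvcounts and displs interact can be illustrated in plain Python (a model only; the helper name is invented, and packing the blocks back to back is one common choice for displs, not a requirement of the standard):

```python
def neighbor_allgatherv_recv(recvbuf, blocks, recvcounts, displs):
    """Place the l-th incoming neighbor's data at offset displs[l],
    counted in elements relative to the start of recvbuf."""
    for l, data in enumerate(blocks):
        assert len(data) == recvcounts[l]          # matching type signatures
        recvbuf[displs[l]:displs[l] + recvcounts[l]] = data
    return recvbuf

recvcounts = [2, 3, 1]                             # e.g. indegree == 3
# back-to-back packing: displs[l] = sum of the earlier neighbors' counts
displs = [sum(recvcounts[:l]) for l in range(len(recvcounts))]
buf = neighbor_allgatherv_recv([0] * 6,
                               [[10, 11], [20, 21, 22], [30]],
                               recvcounts, displs)
```

Because displs is explicit, a caller may also leave gaps between blocks or reorder where each neighbor's data lands, as long as the blocks do not overlap.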

The ``in place'' option is not meaningful for this operation.

All arguments are significant on all MPI processes and the argument comm must have identical values on all MPI processes.





(Unofficial) MPI-4.1 of November 2, 2023
HTML Generated on November 19, 2023