In this function, each process *i* gathers data items from each process
*j* for which an edge *(j,i)* exists in the topology graph, and each
process *i* sends the same data items to all processes *j* for which an edge *(i,j)*
exists. The same send buffer is sent to every neighboring process, and the
*l*-th block of the receive buffer is received from the *l*-th neighbor.

MPI_NEIGHBOR_ALLGATHER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)

IN sendbuf | starting address of send buffer (choice) |

IN sendcount | number of elements sent to each neighbor (non-negative integer) |

IN sendtype | data type of send buffer elements (handle) |

OUT recvbuf | starting address of receive buffer (choice) |

IN recvcount | number of elements received from each neighbor (non-negative integer) |

IN recvtype | data type of receive buffer elements (handle) |

IN comm | communicator with topology structure (handle) |

` int MPI_Neighbor_allgather(const void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm) `

` MPI_Neighbor_allgather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, ierror)
    TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
    TYPE(*), DIMENSION(..) :: recvbuf
    INTEGER, INTENT(IN) :: sendcount, recvcount
    TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
    TYPE(MPI_Comm), INTENT(IN) :: comm
    INTEGER, OPTIONAL, INTENT(OUT) :: ierror `

MPI_NEIGHBOR_ALLGATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, COMM, IERROR)
    <type> SENDBUF(*), RECVBUF(*)
    INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, COMM, IERROR

This function supports Cartesian communicators, graph communicators, and distributed graph communicators as described in Section . If comm is a distributed graph communicator, the outcome is as if each process executed sends to each of its outgoing neighbors and receives from each of its incoming neighbors:

MPI_Dist_graph_neighbors_count(comm, &indegree, &outdegree, &weighted);
int *srcs = (int*)malloc(indegree*sizeof(int));
int *dsts = (int*)malloc(outdegree*sizeof(int));
MPI_Dist_graph_neighbors(comm, indegree, srcs, MPI_UNWEIGHTED,
                         outdegree, dsts, MPI_UNWEIGHTED);
int k, l;

/* assume sendbuf and recvbuf are of type (char*) */
for (k = 0; k < outdegree; ++k)
    MPI_Isend(sendbuf, sendcount, sendtype, dsts[k], ...);

for (l = 0; l < indegree; ++l)
    MPI_Irecv(recvbuf + l*recvcount*extent(recvtype), recvcount, recvtype,
              srcs[l], ...);

MPI_Waitall(...);

Figure Neighborhood Gather shows the neighborhood gather communication of one process with its outgoing and incoming neighbors.

Neighborhood gather communication example.

All arguments are significant on all processes and the argument comm must have identical values on all processes.

The type signature associated with sendcount, sendtype at a process must be equal to the type signature associated with recvcount, recvtype at all other processes. This implies that the amount of data sent must be equal to the amount of data received, pairwise between every pair of communicating processes. Distinct type maps between sender and receiver are still allowed.

*Rationale.* For optimization reasons, the same type signature is required
independently of whether the topology graph is connected or not.
(*End of rationale.*)

The "in place" option is not meaningful for this operation.

The vector variant of MPI_NEIGHBOR_ALLGATHER allows one to gather different numbers of elements from each neighbor.

MPI_NEIGHBOR_ALLGATHERV(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm)

IN sendbuf | starting address of send buffer (choice) |

IN sendcount | number of elements sent to each neighbor (non-negative integer) |

IN sendtype | data type of send buffer elements (handle) |

OUT recvbuf | starting address of receive buffer (choice) |

IN recvcounts | non-negative integer array (of length indegree) containing the number of elements that are received from each neighbor |

IN displs | integer array (of length indegree). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from neighbor i |

IN recvtype | data type of receive buffer elements (handle) |

IN comm | communicator with topology structure (handle) |

` int MPI_Neighbor_allgatherv(const void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, const int recvcounts[], const int displs[], MPI_Datatype recvtype, MPI_Comm comm) `

` MPI_Neighbor_allgatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, ierror)
    TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
    TYPE(*), DIMENSION(..) :: recvbuf
    INTEGER, INTENT(IN) :: sendcount, recvcounts(*), displs(*)
    TYPE(MPI_Datatype), INTENT(IN) :: sendtype, recvtype
    TYPE(MPI_Comm), INTENT(IN) :: comm
    INTEGER, OPTIONAL, INTENT(OUT) :: ierror `

MPI_NEIGHBOR_ALLGATHERV(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNTS, DISPLS, RECVTYPE, COMM, IERROR)
    <type> SENDBUF(*), RECVBUF(*)
    INTEGER SENDCOUNT, SENDTYPE, RECVCOUNTS(*), DISPLS(*), RECVTYPE, COMM, IERROR

This function supports Cartesian communicators, graph communicators, and distributed graph communicators as described in Section . If comm is a distributed graph communicator, the outcome is as if each process executed sends to each of its outgoing neighbors and receives from each of its incoming neighbors:

MPI_Dist_graph_neighbors_count(comm, &indegree, &outdegree, &weighted);
int *srcs = (int*)malloc(indegree*sizeof(int));
int *dsts = (int*)malloc(outdegree*sizeof(int));
MPI_Dist_graph_neighbors(comm, indegree, srcs, MPI_UNWEIGHTED,
                         outdegree, dsts, MPI_UNWEIGHTED);
int k, l;

/* assume sendbuf and recvbuf are of type (char*) */
for (k = 0; k < outdegree; ++k)
    MPI_Isend(sendbuf, sendcount, sendtype, dsts[k], ...);

for (l = 0; l < indegree; ++l)
    MPI_Irecv(recvbuf + displs[l]*extent(recvtype), recvcounts[l], recvtype,
              srcs[l], ...);

MPI_Waitall(...);

The type signature associated with sendcount, sendtype at process j must be equal to the type signature associated with recvcounts[l], recvtype at any other process with srcs[l]==j. This implies that the amount of data sent must be equal to the amount of data received, pairwise between every pair of communicating processes. Distinct type maps between sender and receiver are still allowed. The data received from the l-th neighbor is placed into recvbuf beginning at offset displs[l] elements (in terms of the recvtype).

The "in place" option is not meaningful for this operation.

All arguments are significant on all processes and the argument comm must have identical values on all processes.


(Unofficial) MPI-3.1 of June 4, 2015
