MPI_ALLGATHER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)

| IN  | sendbuf   | starting address of send buffer (choice) |
| IN  | sendcount | number of elements in send buffer (non-negative integer) |
| IN  | sendtype  | data type of send buffer elements (handle) |
| OUT | recvbuf   | address of receive buffer (choice) |
| IN  | recvcount | number of elements received from any process (non-negative integer) |
| IN  | recvtype  | data type of receive buffer elements (handle) |
| IN  | comm      | communicator (handle) |
int MPI_Allgather(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)

MPI_ALLGATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, COMM, IERROR)
    <type> SENDBUF(*), RECVBUF(*)
    INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, COMM, IERROR

void MPI::Comm::Allgather(const void* sendbuf, int sendcount, const MPI::Datatype& sendtype, void* recvbuf, int recvcount, const MPI::Datatype& recvtype) const = 0

MPI_ALLGATHER can be thought of as MPI_GATHER, but where all processes receive the result, instead of just the root. The block of data sent from the j-th process is received by every process and placed in the j-th block of the buffer recvbuf.

The type signature associated with sendcount, sendtype at a process must be equal to the type signature associated with recvcount, recvtype at any other process.

If comm is an intracommunicator, the outcome of a call to MPI_ALLGATHER(...) is as if all processes executed n calls to

MPI_GATHER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm),

for root = 0, ..., n-1. The rules for correct usage of MPI_ALLGATHER are easily found from the corresponding rules for MPI_GATHER.

The ``in place'' option for intracommunicators is specified by passing the value MPI_IN_PLACE to the argument sendbuf at all processes; sendcount and sendtype are then ignored. The input data of each process is assumed to be in the area where that process would receive its own contribution to the receive buffer.

If comm is an intercommunicator, then each process in group A contributes a data item; these items are concatenated and the result is stored at each process in group B. Conversely, the concatenation of the contributions of the processes in group B is stored at each process in group A. The send buffer arguments in group A must be consistent with the receive buffer arguments in group B, and vice versa.

Advice to users. The communication pattern of MPI_ALLGATHER executed on an intercommunication domain need not be symmetric. The number of items sent by processes in group A (as specified by the arguments sendcount, sendtype in group A and the arguments recvcount, recvtype in group B) need not equal the number of items sent by processes in group B (as specified by the arguments sendcount, sendtype in group B and the arguments recvcount, recvtype in group A). In particular, one can move data in only one direction by specifying sendcount = 0 for the communication in the reverse direction. (End of advice to users.)
MPI_ALLGATHERV(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm)

| IN  | sendbuf    | starting address of send buffer (choice) |
| IN  | sendcount  | number of elements in send buffer (non-negative integer) |
| IN  | sendtype   | data type of send buffer elements (handle) |
| OUT | recvbuf    | address of receive buffer (choice) |
| IN  | recvcounts | non-negative integer array (of length group size) containing the number of elements that are received from each process |
| IN  | displs     | integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from process i |
| IN  | recvtype   | data type of receive buffer elements (handle) |
| IN  | comm       | communicator (handle) |
int MPI_Allgatherv(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int *recvcounts, int *displs, MPI_Datatype recvtype, MPI_Comm comm)

MPI_ALLGATHERV(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNTS, DISPLS, RECVTYPE, COMM, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNT, SENDTYPE, RECVCOUNTS(*), DISPLS(*), RECVTYPE, COMM, IERROR
void MPI::Comm::Allgatherv(const void* sendbuf, int sendcount, const MPI::Datatype& sendtype, void* recvbuf, const int recvcounts[], const int displs[], const MPI::Datatype& recvtype) const = 0
MPI_ALLGATHERV can be thought of as MPI_GATHERV, but where all processes receive the result, instead of just the root. The block of data sent from the j-th process is received by every process and placed in the j-th block of the buffer recvbuf. These blocks need not all be the same size.
The type signature associated with sendcount, sendtype at process j must be equal to the type signature associated with recvcounts[j], recvtype at any other process.

If comm is an intracommunicator, the outcome is as if all processes executed calls to
MPI_GATHERV(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm),

for root = 0, ..., n-1. The rules for correct usage of MPI_ALLGATHERV are easily found from the corresponding rules for MPI_GATHERV.
The ``in place'' option for intracommunicators is specified by passing the value MPI_IN_PLACE to the argument sendbuf at all processes; sendcount and sendtype are then ignored. The input data of each process is assumed to be in the area where that process would receive its own contribution to the receive buffer.
If comm is an intercommunicator, then each process in group A contributes a data item; these items are concatenated and the result is stored at each process in group B. Conversely, the concatenation of the contributions of the processes in group B is stored at each process in group A. The send buffer arguments in group A must be consistent with the receive buffer arguments in group B, and vice versa.
MPI-2.0 of July 1, 2008