If a *virtual topology* has been defined with one of the above functions, then the topology
information can be looked up using inquiry functions. All of these are local
calls.

MPI_TOPO_TEST(comm, status) | |

IN comm | communicator (handle) |

OUT status | topology type of communicator comm (state) |

TYPE(MPI_Comm), INTENT(IN) :: comm

INTEGER, INTENT(OUT) :: status

INTEGER, OPTIONAL, INTENT(OUT) :: ierror

INTEGER COMM, STATUS, IERROR

The function MPI_TOPO_TEST returns the type of topology that is associated with a communicator.

The output value status is one of the following: MPI_CART (Cartesian topology), MPI_GRAPH (graph topology), MPI_DIST_GRAPH (distributed graph topology), or MPI_UNDEFINED (no topology associated with comm).

MPI_GRAPHDIMS_GET(comm, nnodes, nedges) | |

IN comm | communicator with associated graph topology (handle) |

OUT nnodes | number of nodes in graph (same as number of MPI processes in the group of comm) (integer) |

OUT nedges | number of edges in graph (integer) |

TYPE(MPI_Comm), INTENT(IN) :: comm

INTEGER, INTENT(OUT) :: nnodes, nedges

INTEGER, OPTIONAL, INTENT(OUT) :: ierror

INTEGER COMM, NNODES, NEDGES, IERROR

The functions MPI_GRAPHDIMS_GET and MPI_GRAPH_GET retrieve the graph topology information that is associated with the communicator. The information provided by MPI_GRAPHDIMS_GET can be used to dimension the vectors index and edges correctly for the following call to MPI_GRAPH_GET.

MPI_GRAPH_GET(comm, maxindex, maxedges, index, edges) | |

IN comm | communicator with associated graph topology (handle) |

IN maxindex | length of vector index in the calling program (integer) |

IN maxedges | length of vector edges in the calling program (integer) |

OUT index | array of integers containing the graph structure (for details see the definition of MPI_GRAPH_CREATE) |

OUT edges | array of integers containing the graph structure |

TYPE(MPI_Comm), INTENT(IN) :: comm

INTEGER, INTENT(IN) :: maxindex, maxedges

INTEGER, INTENT(OUT) :: index(maxindex), edges(maxedges)

INTEGER, OPTIONAL, INTENT(OUT) :: ierror

INTEGER COMM, MAXINDEX, MAXEDGES, INDEX(*), EDGES(*), IERROR

MPI_CARTDIM_GET(comm, ndims) | |

IN comm | communicator with associated Cartesian topology (handle) |

OUT ndims | number of dimensions of the Cartesian structure (integer) |

TYPE(MPI_Comm), INTENT(IN) :: comm

INTEGER, INTENT(OUT) :: ndims

INTEGER, OPTIONAL, INTENT(OUT) :: ierror

INTEGER COMM, NDIMS, IERROR

The functions MPI_CARTDIM_GET and MPI_CART_GET return the Cartesian topology information that is associated with the communicator. If comm is associated with a zero-dimensional Cartesian topology, MPI_CARTDIM_GET returns ndims = 0 and MPI_CART_GET will keep all output arguments unchanged.

MPI_CART_GET(comm, maxdims, dims, periods, coords) | |

IN comm | communicator with associated Cartesian topology (handle) |

IN maxdims | length of vectors dims, periods, and coords in the calling program (integer) |

OUT dims | number of MPI processes for each Cartesian dimension (array of integers) |

OUT periods | periodicity ( true/ false) for each Cartesian dimension (array of logicals) |

OUT coords | coordinates of calling MPI process in Cartesian structure (array of integers) |

TYPE(MPI_Comm), INTENT(IN) :: comm

INTEGER, INTENT(IN) :: maxdims

INTEGER, INTENT(OUT) :: dims(maxdims), coords(maxdims)

LOGICAL, INTENT(OUT) :: periods(maxdims)

INTEGER, OPTIONAL, INTENT(OUT) :: ierror

INTEGER COMM, MAXDIMS, DIMS(*), COORDS(*), IERROR

LOGICAL PERIODS(*)

MPI_CART_RANK(comm, coords, rank) | |

IN comm | communicator with associated Cartesian topology (handle) |

IN coords | integer array (of size ndims) specifying the Cartesian coordinates of an MPI process |

OUT rank | rank of specified MPI process within group of comm (integer) |

TYPE(MPI_Comm), INTENT(IN) :: comm

INTEGER, INTENT(IN) :: coords(*)

INTEGER, INTENT(OUT) :: rank

INTEGER, OPTIONAL, INTENT(OUT) :: ierror

INTEGER COMM, COORDS(*), RANK, IERROR

For a communicator with an associated Cartesian topology, the function
MPI_CART_RANK translates the logical coordinates of an MPI process to
the corresponding rank in the group of the communicator.
For dimension i with periods(i) = true, if the coordinate,
coords(i), is out of range, that is, coords(i) < 0 or
coords(i) ≥ dims(i), it is shifted back to the interval
0 ≤ coords(i) < dims(i) automatically. Out-of-range
coordinates are erroneous for nonperiodic dimensions.

If comm is associated with a zero-dimensional Cartesian topology, coords is not significant and 0 is returned in rank.
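The wrap-around rule can be illustrated with a short sketch in plain C. Note that cart_rank below is a hypothetical helper written for this illustration, not an MPI function; it assumes the row-major rank ordering that MPI uses for Cartesian topologies:

```c
#include <assert.h>

/* Hypothetical helper mirroring the coordinate-to-rank mapping of
 * MPI_CART_RANK: periodic dimensions wrap out-of-range coordinates
 * back into [0, dims[i]); ranks follow row-major ordering. Returns
 * -1 for an out-of-range coordinate in a nonperiodic dimension
 * (a case MPI_CART_RANK treats as erroneous). */
static int cart_rank(int ndims, const int dims[], const int periods[],
                     const int coords[])
{
    int rank = 0;
    for (int i = 0; i < ndims; i++) {
        int c = coords[i];
        if (periods[i]) {
            c %= dims[i];              /* wrap periodic coordinate */
            if (c < 0)
                c += dims[i];          /* C's % can yield negatives */
        } else if (c < 0 || c >= dims[i]) {
            return -1;                 /* erroneous for nonperiodic dims */
        }
        rank = rank * dims[i] + c;     /* row-major accumulation */
    }
    return rank;
}
```

For example, on a 2x3 grid whose second dimension is periodic, coordinates (1, -1) wrap to (1, 2) and map to rank 5.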

MPI_CART_COORDS(comm, rank, maxdims, coords) | |

IN comm | communicator with associated Cartesian topology (handle) |

IN rank | rank of an MPI process within group of comm (integer) |

IN maxdims | length of vector coords in the calling program (integer) |

OUT coords | coordinates of the MPI process with the rank rank in Cartesian structure (array of integers) |

TYPE(MPI_Comm), INTENT(IN) :: comm

INTEGER, INTENT(IN) :: rank, maxdims

INTEGER, INTENT(OUT) :: coords(maxdims)

INTEGER, OPTIONAL, INTENT(OUT) :: ierror

INTEGER COMM, RANK, MAXDIMS, COORDS(*), IERROR

The inverse mapping, rank-to-coordinates translation, is provided by MPI_CART_COORDS. If comm is associated with a zero-dimensional Cartesian topology, coords will be unchanged. If maxdims is less than the number of dimensions of the Cartesian topology associated with the communicator comm, the outcome is unspecified.
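The inverse mapping can be sketched in the same way; cart_coords below is again a hypothetical helper, not an MPI function, assuming row-major rank ordering:

```c
#include <assert.h>

/* Hypothetical helper mirroring the rank-to-coordinates mapping of
 * MPI_CART_COORDS under row-major rank ordering: the last dimension
 * varies fastest, so we decode from the last dimension backwards. */
static void cart_coords(int ndims, const int dims[], int rank,
                        int coords[])
{
    for (int i = ndims - 1; i >= 0; i--) {
        coords[i] = rank % dims[i];
        rank /= dims[i];
    }
}
```

On the 2x3 grid used above, rank 5 decodes back to coordinates (1, 2), the inverse of the coordinate-to-rank translation.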

MPI_GRAPH_NEIGHBORS_COUNT(comm, rank, nneighbors) | |

IN comm | communicator with associated graph topology (handle) |

IN rank | rank of MPI process in group of comm (integer) |

OUT nneighbors | number of neighbors of specified MPI process (integer) |

TYPE(MPI_Comm), INTENT(IN) :: comm

INTEGER, INTENT(IN) :: rank

INTEGER, INTENT(OUT) :: nneighbors

INTEGER, OPTIONAL, INTENT(OUT) :: ierror

INTEGER COMM, RANK, NNEIGHBORS, IERROR

MPI_GRAPH_NEIGHBORS(comm, rank, maxneighbors, neighbors) | |

IN comm | communicator with associated graph topology (handle) |

IN rank | rank of MPI process in group of comm (integer) |

IN maxneighbors | size of array neighbors (integer) |

OUT neighbors | ranks of MPI processes that are neighbors to specified MPI process (array of integers) |

TYPE(MPI_Comm), INTENT(IN) :: comm

INTEGER, INTENT(IN) :: rank, maxneighbors

INTEGER, INTENT(OUT) :: neighbors(maxneighbors)

INTEGER, OPTIONAL, INTENT(OUT) :: ierror

INTEGER COMM, RANK, MAXNEIGHBORS, NEIGHBORS(*), IERROR

MPI_GRAPH_NEIGHBORS_COUNT and MPI_GRAPH_NEIGHBORS provide
adjacency information for a graph topology.
The returned count and array of neighbors for the queried rank will
both include *all* neighbors and reflect the same edge ordering as
was specified by the original call to MPI_GRAPH_CREATE.
Specifically, MPI_GRAPH_NEIGHBORS_COUNT and
MPI_GRAPH_NEIGHBORS will return values based on the original
index and edges array passed to MPI_GRAPH_CREATE
(for the purpose of this example, we assume that index[-1] is zero):

- The number of neighbors (nneighbors) returned from MPI_GRAPH_NEIGHBORS_COUNT will be (index[rank] - index[rank-1]).
- The neighbors array returned from MPI_GRAPH_NEIGHBORS will be edges[index[rank-1]] through edges[index[rank]-1].

Assume there are four MPI processes with ranks 0, 1, 2, 3 in the input communicator with the following adjacency matrix (note that some neighbors are listed multiple times):

MPI process | neighbors |

0 | 1, 1, 3 |

1 | 0, 0 |

2 | 3 |

3 | 0, 2, 2 |

Thus, the input arguments to MPI_GRAPH_CREATE are:

nnodes = | 4 |

index = | 3, 5, 6, 9 |

edges = | 1, 1, 3, 0, 0, 3, 0, 2, 2 |

Therefore, calling MPI_GRAPH_NEIGHBORS_COUNT and MPI_GRAPH_NEIGHBORS for each of the four MPI processes will return:

Input rank | Count | Neighbors | |||

0 | 3 | 1, 1, 3 | |||

1 | 2 | 0, 0 | |||

2 | 1 | 3 | |||

3 | 3 | 0, 2, 2 | |||
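The lookup described above can be sketched in plain C. The function below is an illustration of how the results derive from the index and edges arrays, not an MPI API; it takes index[rank-1] to be 0 when rank is 0, matching the convention stated in the text:

```c
#include <assert.h>

/* Illustrative sketch of how MPI_GRAPH_NEIGHBORS_COUNT and
 * MPI_GRAPH_NEIGHBORS derive their results from the index and
 * edges arrays passed to MPI_GRAPH_CREATE. Returns the neighbor
 * count and fills neighbors[] in the original edge order. */
static int graph_neighbors(const int index[], const int edges[],
                           int rank, int maxneighbors, int neighbors[])
{
    int first = (rank == 0) ? 0 : index[rank - 1];
    int count = index[rank] - first;           /* nneighbors */
    for (int i = 0; i < count && i < maxneighbors; i++)
        neighbors[i] = edges[first + i];       /* preserves edge order */
    return count;
}
```

Applied to the four-process example above (index = {3, 5, 6, 9}, edges = {1, 1, 3, 0, 0, 3, 0, 2, 2}), rank 0 yields count 3 and neighbors 1, 1, 3, including the duplicated edge.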

**Example**
Using a communicator with an associated graph topology that represents a shuffle-exchange network.

Suppose that comm is a communicator with a
shuffle-exchange topology. The group has 2^n members.
Each MPI process is labeled by an n-bit binary string and has three
neighbors: exchange (flip the last bit), shuffle (cyclic left shift
of the bits), and unshuffle (cyclic right shift of the bits). The
adjacency lists are illustrated below for n = 3:

node | exchange neighbors(1) | shuffle neighbors(2) | unshuffle neighbors(3) |

0 | (000) | 1 | 0 | 0 | |

1 | (001) | 0 | 2 | 4 | |

2 | (010) | 3 | 4 | 1 | |

3 | (011) | 2 | 6 | 5 | |

4 | (100) | 5 | 1 | 2 | |

5 | (101) | 4 | 3 | 6 | |

6 | (110) | 7 | 5 | 3 | |

7 | (111) | 6 | 7 | 7 | |

Suppose that the communicator comm has this topology associated with it. A code fragment can cycle through the three types of neighbors, retrieved with MPI_GRAPH_NEIGHBORS, and perform an appropriate permutation for each, for example with MPI_SENDRECV_REPLACE.
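The neighbor labeling in the table can be reproduced with a short sketch in plain C. The three helper functions are assumptions introduced for this illustration (they are not MPI functions); n = 3 reproduces the table above:

```c
#include <assert.h>

/* Illustrative helpers computing the three shuffle-exchange
 * neighbors of a node labeled by an n-bit binary string:
 * exchange flips the low-order bit, shuffle rotates the bits
 * left by one position, unshuffle rotates them right by one. */
static int exchange_nbr(int node, int n)
{
    (void)n;
    return node ^ 1;                           /* flip last bit */
}

static int shuffle_nbr(int node, int n)        /* cyclic left shift */
{
    int high = (node >> (n - 1)) & 1;
    return ((node << 1) | high) & ((1 << n) - 1);
}

static int unshuffle_nbr(int node, int n)      /* cyclic right shift */
{
    int low = node & 1;
    return (node >> 1) | (low << (n - 1));
}
```

For node 5 (101) with n = 3 this gives exchange 4, shuffle 3, and unshuffle 6, matching the corresponding row of the table.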

MPI_DIST_GRAPH_NEIGHBORS_COUNT and MPI_DIST_GRAPH_NEIGHBORS provide adjacency information for a distributed graph topology.

MPI_DIST_GRAPH_NEIGHBORS_COUNT(comm, indegree, outdegree, weighted) | |

IN comm | communicator with associated distributed graph topology (handle) |

OUT indegree | number of edges into this MPI process (non-negative integer) |

OUT outdegree | number of edges out of this MPI process (non-negative integer) |

OUT weighted | false if MPI_UNWEIGHTED was supplied during creation, true otherwise (logical) |

TYPE(MPI_Comm), INTENT(IN) :: comm

INTEGER, INTENT(OUT) :: indegree, outdegree

LOGICAL, INTENT(OUT) :: weighted

INTEGER, OPTIONAL, INTENT(OUT) :: ierror

INTEGER COMM, INDEGREE, OUTDEGREE, IERROR

LOGICAL WEIGHTED

MPI_DIST_GRAPH_NEIGHBORS(comm, maxindegree, sources, sourceweights, maxoutdegree, destinations, destweights) | |

IN comm | communicator with associated distributed graph topology (handle) |

IN maxindegree | size of sources and sourceweights arrays (non-negative integer) |

OUT sources | ranks of MPI processes for which the calling MPI process is a destination (array of non-negative integers) |

OUT sourceweights | weights of the edges into the calling MPI process (array of non-negative integers) |

IN maxoutdegree | size of destinations and destweights arrays (non-negative integer) |

OUT destinations | ranks of MPI processes for which the calling MPI process is a source (array of non-negative integers) |

OUT destweights | weights of the edges out of the calling MPI process (array of non-negative integers) |

TYPE(MPI_Comm), INTENT(IN) :: comm

INTEGER, INTENT(IN) :: maxindegree, maxoutdegree

INTEGER, INTENT(OUT) :: sources(maxindegree), destinations(maxoutdegree)

INTEGER :: sourceweights(*), destweights(*)

INTEGER, OPTIONAL, INTENT(OUT) :: ierror

INTEGER COMM, MAXINDEGREE, SOURCES(*), SOURCEWEIGHTS(*), MAXOUTDEGREE, DESTINATIONS(*), DESTWEIGHTS(*), IERROR

These calls are local. The number of edges into and out of the MPI process returned by MPI_DIST_GRAPH_NEIGHBORS_COUNT are the total number of such edges given in the call to MPI_DIST_GRAPH_CREATE_ADJACENT or MPI_DIST_GRAPH_CREATE (potentially by MPI processes other than the calling MPI process in the case of MPI_DIST_GRAPH_CREATE). Multiply-defined edges are all counted and returned by MPI_DIST_GRAPH_NEIGHBORS in some order.

If MPI_UNWEIGHTED is supplied for sourceweights or destweights or both, or if MPI_UNWEIGHTED was supplied during the construction of the graph, then no weight information is returned in that array or those arrays.

If the communicator was created with MPI_DIST_GRAPH_CREATE_ADJACENT, then for each MPI process in comm, the order of the values in sources and destinations is identical to the input that was used by the MPI process with the same rank in comm_old in the creation call. If the communicator was created with MPI_DIST_GRAPH_CREATE, then the only requirement on the order of values in sources and destinations is that two calls to the routine with the same input argument comm will return the same sequence of edges. If maxindegree or maxoutdegree is smaller than the numbers returned by MPI_DIST_GRAPH_NEIGHBORS_COUNT, then only the first part of the full list is returned.

*Advice to implementors.*

Since the query calls are defined to be local, each MPI process needs to
store the list of its neighbors with incoming and outgoing
edges. Communication is required at the collective
MPI_DIST_GRAPH_CREATE call in order to compute the neighbor
lists for each MPI process from the distributed graph specification.
(*End of advice to implementors.*)


(Unofficial) MPI-4.1 of November 2, 2023
