9.5.5. Topology Inquiry Functions


If a virtual topology has been defined with one of the above functions, then the topology information can be looked up using inquiry functions. They are all local calls.

MPI_TOPO_TEST(comm, status)
IN comm    communicator (handle)
OUT status    topology type of communicator comm (state)
C binding
int MPI_Topo_test(MPI_Comm comm, int *status)
Fortran 2008 binding
MPI_Topo_test(comm, status, ierror)
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, INTENT(OUT) :: status
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_TOPO_TEST(COMM, STATUS, IERROR)

INTEGER COMM, STATUS, IERROR

The function MPI_TOPO_TEST returns the type of topology that is associated with a communicator.

The output value status is one of the following:

MPI_GRAPH          graph topology
MPI_CART           Cartesian topology
MPI_DIST_GRAPH     distributed graph topology
MPI_UNDEFINED      no topology
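
For example, the following C sketch prints the topology type; the helper name report_topology is illustrative, and comm is assumed to be a valid communicator created elsewhere:

    #include <mpi.h>
    #include <stdio.h>

    /* Print the type of virtual topology attached to comm, if any. */
    void report_topology(MPI_Comm comm)
    {
        int status;
        MPI_Topo_test(comm, &status);
        switch (status) {
        case MPI_GRAPH:      printf("graph topology\n");             break;
        case MPI_CART:       printf("Cartesian topology\n");         break;
        case MPI_DIST_GRAPH: printf("distributed graph topology\n"); break;
        case MPI_UNDEFINED:  printf("no topology\n");                break;
        }
    }
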
    MPI_GRAPHDIMS_GET(comm, nnodes, nedges)
    IN comm    communicator with associated graph topology (handle)
    OUT nnodes    number of nodes in graph (same as number of MPI processes in the group of comm) (integer)
    OUT nedges    number of edges in graph (integer)
    C binding
    int MPI_Graphdims_get(MPI_Comm comm, int *nnodes, int *nedges)
    Fortran 2008 binding
    MPI_Graphdims_get(comm, nnodes, nedges, ierror)
    TYPE(MPI_Comm), INTENT(IN) :: comm
    INTEGER, INTENT(OUT) :: nnodes, nedges
    INTEGER, OPTIONAL, INTENT(OUT) :: ierror
    Fortran binding
    MPI_GRAPHDIMS_GET(COMM, NNODES, NEDGES, IERROR)

    INTEGER COMM, NNODES, NEDGES, IERROR
    The functions MPI_GRAPHDIMS_GET and MPI_GRAPH_GET retrieve the graph topology information that is associated with the communicator. The information provided by MPI_GRAPHDIMS_GET can be used to dimension the vectors index and edges correctly for the following call to MPI_GRAPH_GET.

    MPI_GRAPH_GET(comm, maxindex, maxedges, index, edges)
    IN comm    communicator with associated graph topology (handle)
    IN maxindex    length of vector index in the calling program (integer)
    IN maxedges    length of vector edges in the calling program (integer)
    OUT index    array of integers containing the graph structure (for details see the definition of MPI_GRAPH_CREATE)
    OUT edges    array of integers containing the graph structure
    C binding
    int MPI_Graph_get(MPI_Comm comm, int maxindex, int maxedges, int index[], int edges[])
    Fortran 2008 binding
    MPI_Graph_get(comm, maxindex, maxedges, index, edges, ierror)
    TYPE(MPI_Comm), INTENT(IN) :: comm
    INTEGER, INTENT(IN) :: maxindex, maxedges
    INTEGER, INTENT(OUT) :: index(maxindex), edges(maxedges)
    INTEGER, OPTIONAL, INTENT(OUT) :: ierror
    Fortran binding
    MPI_GRAPH_GET(COMM, MAXINDEX, MAXEDGES, INDEX, EDGES, IERROR)

    INTEGER COMM, MAXINDEX, MAXEDGES, INDEX(*), EDGES(*), IERROR
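
    As a usage sketch of the two calls above (the helper name fetch_graph is illustrative; comm is assumed to carry a graph topology), MPI_GRAPHDIMS_GET supplies the sizes needed to allocate index and edges before MPI_GRAPH_GET is called:

    #include <mpi.h>
    #include <stdlib.h>

    /* Retrieve the complete graph structure attached to comm. */
    void fetch_graph(MPI_Comm comm)
    {
        int nnodes, nedges;
        MPI_Graphdims_get(comm, &nnodes, &nedges);

        int *index = (int *) malloc(nnodes * sizeof(int));
        int *edges = (int *) malloc(nedges * sizeof(int));
        MPI_Graph_get(comm, nnodes, nedges, index, edges);

        /* ... use index[] and edges[] here ... */
        free(index);
        free(edges);
    }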

    MPI_CARTDIM_GET(comm, ndims)
    IN comm    communicator with associated Cartesian topology (handle)
    OUT ndims    number of dimensions of the Cartesian structure (integer)
    C binding
    int MPI_Cartdim_get(MPI_Comm comm, int *ndims)
    Fortran 2008 binding
    MPI_Cartdim_get(comm, ndims, ierror)
    TYPE(MPI_Comm), INTENT(IN) :: comm
    INTEGER, INTENT(OUT) :: ndims
    INTEGER, OPTIONAL, INTENT(OUT) :: ierror
    Fortran binding
    MPI_CARTDIM_GET(COMM, NDIMS, IERROR)

    INTEGER COMM, NDIMS, IERROR

    The functions MPI_CARTDIM_GET and MPI_CART_GET return the Cartesian topology information that is associated with the communicator. If comm is associated with a zero-dimensional Cartesian topology, MPI_CARTDIM_GET returns ndims = 0 and MPI_CART_GET will keep all output arguments unchanged.

    MPI_CART_GET(comm, maxdims, dims, periods, coords)
    IN comm    communicator with associated Cartesian topology (handle)
    IN maxdims    length of vectors dims, periods, and coords in the calling program (integer)
    OUT dims    number of MPI processes for each Cartesian dimension (array of integers)
    OUT periods    periodicity (true/false) for each Cartesian dimension (array of logicals)
    OUT coords    coordinates of calling MPI process in Cartesian structure (array of integers)
    C binding
    int MPI_Cart_get(MPI_Comm comm, int maxdims, int dims[], int periods[], int coords[])
    Fortran 2008 binding
    MPI_Cart_get(comm, maxdims, dims, periods, coords, ierror)
    TYPE(MPI_Comm), INTENT(IN) :: comm
    INTEGER, INTENT(IN) :: maxdims
    INTEGER, INTENT(OUT) :: dims(maxdims), coords(maxdims)
    LOGICAL, INTENT(OUT) :: periods(maxdims)
    INTEGER, OPTIONAL, INTENT(OUT) :: ierror
    Fortran binding
    MPI_CART_GET(COMM, MAXDIMS, DIMS, PERIODS, COORDS, IERROR)
    INTEGER COMM, MAXDIMS, DIMS(*), COORDS(*), IERROR
    LOGICAL PERIODS(*)
    If maxdims in a call to MPI_CART_GET is less than the number of dimensions of the Cartesian topology associated with the communicator comm, the outcome is unspecified.
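
    A minimal C sketch of the same pattern for Cartesian topologies (the helper name fetch_cartesian is illustrative; comm is assumed to carry a Cartesian topology):

    #include <mpi.h>
    #include <stdlib.h>

    /* Recover the Cartesian layout attached to comm. */
    void fetch_cartesian(MPI_Comm comm)
    {
        int ndims;
        MPI_Cartdim_get(comm, &ndims);

        int *dims    = (int *) malloc(ndims * sizeof(int));
        int *periods = (int *) malloc(ndims * sizeof(int));
        int *coords  = (int *) malloc(ndims * sizeof(int));
        MPI_Cart_get(comm, ndims, dims, periods, coords);

        /* dims[i], periods[i], and coords[i] describe dimension i */
        free(dims);
        free(periods);
        free(coords);
    }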

    MPI_CART_RANK(comm, coords, rank)
    IN comm    communicator with associated Cartesian topology (handle)
    IN coords    integer array (of size ndims) specifying the Cartesian coordinates of an MPI process
    OUT rank    rank of specified MPI process within group of comm (integer)
    C binding
    int MPI_Cart_rank(MPI_Comm comm, const int coords[], int *rank)
    Fortran 2008 binding
    MPI_Cart_rank(comm, coords, rank, ierror)
    TYPE(MPI_Comm), INTENT(IN) :: comm
    INTEGER, INTENT(IN) :: coords(*)
    INTEGER, INTENT(OUT) :: rank
    INTEGER, OPTIONAL, INTENT(OUT) :: ierror
    Fortran binding
    MPI_CART_RANK(COMM, COORDS, RANK, IERROR)

    INTEGER COMM, COORDS(*), RANK, IERROR

    For a communicator with an associated Cartesian topology, the function MPI_CART_RANK translates the logical coordinates of an MPI process to the corresponding rank in the group of the communicator. For dimension i with periods(i) = true, if the coordinate coords(i) is out of range, that is, coords(i) < 0 or coords(i) ≥ dims(i), it is shifted back into the interval 0 ≤ coords(i) < dims(i) automatically. Out-of-range coordinates are erroneous for nonperiodic dimensions.

    If comm is associated with a zero-dimensional Cartesian topology, coords is not significant and 0 is returned in rank.
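
    For instance, under the assumption that comm2d is a 4 x 4 Cartesian communicator with both dimensions periodic, the following C sketch maps the out-of-range coordinates (-1, 5) to the same rank as (3, 1):

    #include <mpi.h>

    /* coords is out of range in both dimensions; with periods = (true, true)
       and dims = (4, 4) it wraps around to (3, 1). */
    int wrapped_rank(MPI_Comm comm2d)
    {
        int coords[2] = {-1, 5};
        int rank;
        MPI_Cart_rank(comm2d, coords, &rank);
        return rank;
    }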

    MPI_CART_COORDS(comm, rank, maxdims, coords)
    IN comm    communicator with associated Cartesian topology (handle)
    IN rank    rank of an MPI process within group of comm (integer)
    IN maxdims    length of vector coords in the calling program (integer)
    OUT coords    coordinates of the MPI process with the rank rank in Cartesian structure (array of integers)
    C binding
    int MPI_Cart_coords(MPI_Comm comm, int rank, int maxdims, int coords[])
    Fortran 2008 binding
    MPI_Cart_coords(comm, rank, maxdims, coords, ierror)
    TYPE(MPI_Comm), INTENT(IN) :: comm
    INTEGER, INTENT(IN) :: rank, maxdims
    INTEGER, INTENT(OUT) :: coords(maxdims)
    INTEGER, OPTIONAL, INTENT(OUT) :: ierror
    Fortran binding
    MPI_CART_COORDS(COMM, RANK, MAXDIMS, COORDS, IERROR)

    INTEGER COMM, RANK, MAXDIMS, COORDS(*), IERROR

    The inverse mapping, rank-to-coordinates translation, is provided by MPI_CART_COORDS. If comm is associated with a zero-dimensional Cartesian topology, coords will be unchanged. If maxdims is less than the number of dimensions of the Cartesian topology associated with the communicator comm, the outcome is unspecified.
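
    A small C sketch of the round trip between the two translations (comm2d is assumed to carry a two-dimensional Cartesian topology; the helper name round_trip is illustrative):

    #include <mpi.h>

    /* Translate the calling MPI process's rank to coordinates and back. */
    void round_trip(MPI_Comm comm2d)
    {
        int myrank, back, coords[2];
        MPI_Comm_rank(comm2d, &myrank);
        MPI_Cart_coords(comm2d, myrank, 2, coords);
        MPI_Cart_rank(comm2d, coords, &back);   /* back == myrank */
    }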

    MPI_GRAPH_NEIGHBORS_COUNT(comm, rank, nneighbors)
    IN comm    communicator with associated graph topology (handle)
    IN rank    rank of MPI process in group of comm (integer)
    OUT nneighbors    number of neighbors of specified MPI process (integer)
    C binding
    int MPI_Graph_neighbors_count(MPI_Comm comm, int rank, int *nneighbors)
    Fortran 2008 binding
    MPI_Graph_neighbors_count(comm, rank, nneighbors, ierror)
    TYPE(MPI_Comm), INTENT(IN) :: comm
    INTEGER, INTENT(IN) :: rank
    INTEGER, INTENT(OUT) :: nneighbors
    INTEGER, OPTIONAL, INTENT(OUT) :: ierror
    Fortran binding
    MPI_GRAPH_NEIGHBORS_COUNT(COMM, RANK, NNEIGHBORS, IERROR)

    INTEGER COMM, RANK, NNEIGHBORS, IERROR

    MPI_GRAPH_NEIGHBORS(comm, rank, maxneighbors, neighbors)
    IN comm    communicator with associated graph topology (handle)
    IN rank    rank of MPI process in group of comm (integer)
    IN maxneighbors    size of array neighbors (integer)
    OUT neighbors    ranks of MPI processes that are neighbors to specified MPI process (array of integers)
    C binding
    int MPI_Graph_neighbors(MPI_Comm comm, int rank, int maxneighbors, int neighbors[])
    Fortran 2008 binding
    MPI_Graph_neighbors(comm, rank, maxneighbors, neighbors, ierror)
    TYPE(MPI_Comm), INTENT(IN) :: comm
    INTEGER, INTENT(IN) :: rank, maxneighbors
    INTEGER, INTENT(OUT) :: neighbors(maxneighbors)
    INTEGER, OPTIONAL, INTENT(OUT) :: ierror
    Fortran binding
    MPI_GRAPH_NEIGHBORS(COMM, RANK, MAXNEIGHBORS, NEIGHBORS, IERROR)

    INTEGER COMM, RANK, MAXNEIGHBORS, NEIGHBORS(*), IERROR

    MPI_GRAPH_NEIGHBORS_COUNT and MPI_GRAPH_NEIGHBORS provide adjacency information for a graph topology. The returned count and array of neighbors for the queried rank will both include all neighbors and reflect the same edge ordering as was specified by the original call to MPI_GRAPH_CREATE. Specifically, MPI_GRAPH_NEIGHBORS_COUNT and MPI_GRAPH_NEIGHBORS will return values based on the original index and edges arrays passed to MPI_GRAPH_CREATE (for the purpose of the following example, we assume that index[-1] is zero):



    Example Inquiry of graph topology information.

    Assume there are four MPI processes with ranks 0, 1, 2, 3 in the input communicator with the following adjacency matrix (note that some neighbors are listed multiple times):

    MPI process    neighbors
    0              1, 1, 3
    1              0, 0
    2              3
    3              0, 2, 2

    Thus, the input arguments to MPI_GRAPH_CREATE are:

    nnodes = 4
    index = 3, 5, 6, 9
    edges = 1, 1, 3, 0, 0, 3, 0, 2, 2

    Therefore, calling MPI_GRAPH_NEIGHBORS_COUNT and MPI_GRAPH_NEIGHBORS for each of the four MPI processes will return:

    Input rank    Count    Neighbors
    0             3        1, 1, 3
    1             2        0, 0
    2             1        3
    3             3        0, 2, 2
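
    The same example can be written as a short C program. This is a sketch; it assumes it is run with exactly four MPI processes:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int index[4] = {3, 5, 6, 9};
        int edges[9] = {1, 1, 3, 0, 0, 3, 0, 2, 2};
        int myrank, count, neighbors[3];
        MPI_Comm graph_comm;

        MPI_Init(&argc, &argv);
        MPI_Graph_create(MPI_COMM_WORLD, 4, index, edges, 0, &graph_comm);

        MPI_Comm_rank(graph_comm, &myrank);
        MPI_Graph_neighbors_count(graph_comm, myrank, &count);
        MPI_Graph_neighbors(graph_comm, myrank, count, neighbors);

        printf("rank %d has %d neighbors\n", myrank, count);

        MPI_Comm_free(&graph_comm);
        MPI_Finalize();
        return 0;
    }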


    Example Using a communicator with an associated graph topology that represents a shuffle-exchange network.

    Suppose that comm is a communicator with a shuffle-exchange topology. The group has 2^n members. Each MPI process is labeled by a1, ..., an with ai ∈ {0,1}, and has three neighbors: exchange(a1, ..., an) = a1, ..., an-1, ān (where ā = 1 − a), shuffle(a1, ..., an) = a2, ..., an, a1, and unshuffle(a1, ..., an) = an, a1, ..., an-1. The graph adjacency list is illustrated below for n = 3.

    node       exchange        shuffle         unshuffle
               neighbors(1)    neighbors(2)    neighbors(3)
    0 (000)    1               0               0
    1 (001)    0               2               4
    2 (010)    3               4               1
    3 (011)    2               6               5
    4 (100)    5               1               2
    5 (101)    4               3               6
    6 (110)    7               5               3
    7 (111)    6               7               7

    Suppose that the communicator comm has this topology associated with it. The following code fragment cycles through the three types of neighbors and performs an appropriate permutation for each.


    !  assume: each MPI process has stored a real number A. 
    !  extract neighborhood information 
    CALL MPI_COMM_RANK(comm, myrank, ierr) 
    CALL MPI_GRAPH_NEIGHBORS(comm, myrank, 3, neighbors, ierr) 
    !  perform exchange permutation 
    CALL MPI_SENDRECV_REPLACE(A, 1, MPI_REAL, neighbors(1), 0, & 
                              neighbors(1), 0, comm, status, ierr) 
    !  perform shuffle permutation 
    CALL MPI_SENDRECV_REPLACE(A, 1, MPI_REAL, neighbors(2), 0, & 
                              neighbors(3), 0, comm, status, ierr) 
    !  perform unshuffle permutation 
    CALL MPI_SENDRECV_REPLACE(A, 1, MPI_REAL, neighbors(3), 0, & 
                              neighbors(2), 0, comm, status, ierr) 
    

    MPI_DIST_GRAPH_NEIGHBORS_COUNT and MPI_DIST_GRAPH_NEIGHBORS provide adjacency information for a distributed graph topology.

    MPI_DIST_GRAPH_NEIGHBORS_COUNT(comm, indegree, outdegree, weighted)
    IN comm    communicator with associated distributed graph topology (handle)
    OUT indegree    number of edges into this MPI process (nonnegative integer)
    OUT outdegree    number of edges out of this MPI process (nonnegative integer)
    OUT weighted    false if MPI_UNWEIGHTED was supplied during creation, true otherwise (logical)
    C binding
    int MPI_Dist_graph_neighbors_count(MPI_Comm comm, int *indegree, int *outdegree, int *weighted)
    Fortran 2008 binding
    MPI_Dist_graph_neighbors_count(comm, indegree, outdegree, weighted, ierror)
    TYPE(MPI_Comm), INTENT(IN) :: comm
    INTEGER, INTENT(OUT) :: indegree, outdegree
    LOGICAL, INTENT(OUT) :: weighted
    INTEGER, OPTIONAL, INTENT(OUT) :: ierror
    Fortran binding
    MPI_DIST_GRAPH_NEIGHBORS_COUNT(COMM, INDEGREE, OUTDEGREE, WEIGHTED, IERROR)
    INTEGER COMM, INDEGREE, OUTDEGREE, IERROR
    LOGICAL WEIGHTED

    MPI_DIST_GRAPH_NEIGHBORS(comm, maxindegree, sources, sourceweights, maxoutdegree, destinations, destweights)
    IN comm    communicator with associated distributed graph topology (handle)
    IN maxindegree    size of sources and sourceweights arrays (nonnegative integer)
    OUT sources    ranks of MPI processes for which the calling MPI process is a destination (array of nonnegative integers)
    OUT sourceweights    weights of the edges into the calling MPI process (array of nonnegative integers)
    IN maxoutdegree    size of destinations and destweights arrays (nonnegative integer)
    OUT destinations    ranks of MPI processes for which the calling MPI process is a source (array of nonnegative integers)
    OUT destweights    weights of the edges out of the calling MPI process (array of nonnegative integers)
    C binding
    int MPI_Dist_graph_neighbors(MPI_Comm comm, int maxindegree, int sources[], int sourceweights[], int maxoutdegree, int destinations[], int destweights[])
    Fortran 2008 binding
    MPI_Dist_graph_neighbors(comm, maxindegree, sources, sourceweights, maxoutdegree, destinations, destweights, ierror)
    TYPE(MPI_Comm), INTENT(IN) :: comm
    INTEGER, INTENT(IN) :: maxindegree, maxoutdegree
    INTEGER, INTENT(OUT) :: sources(maxindegree), destinations(maxoutdegree)
    INTEGER :: sourceweights(*), destweights(*)
    INTEGER, OPTIONAL, INTENT(OUT) :: ierror
    Fortran binding
    MPI_DIST_GRAPH_NEIGHBORS(COMM, MAXINDEGREE, SOURCES, SOURCEWEIGHTS, MAXOUTDEGREE, DESTINATIONS, DESTWEIGHTS, IERROR)

    INTEGER COMM, MAXINDEGREE, SOURCES(*), SOURCEWEIGHTS(*), MAXOUTDEGREE, DESTINATIONS(*), DESTWEIGHTS(*), IERROR

    These calls are local. The numbers of edges into and out of the MPI process returned by MPI_DIST_GRAPH_NEIGHBORS_COUNT are the total numbers of such edges given in the call to MPI_DIST_GRAPH_CREATE_ADJACENT or MPI_DIST_GRAPH_CREATE (potentially supplied by MPI processes other than the calling MPI process in the case of MPI_DIST_GRAPH_CREATE). Multiply-defined edges are all counted and are returned by MPI_DIST_GRAPH_NEIGHBORS in some order. If MPI_UNWEIGHTED is supplied for sourceweights or destweights or both, or if MPI_UNWEIGHTED was supplied during the construction of the graph, then no weight information is returned in that array or those arrays. If the communicator was created with MPI_DIST_GRAPH_CREATE_ADJACENT, then for each MPI process in comm the order of the values in sources and destinations is identical to the input that was used by the MPI process with the same rank in comm_old in the creation call. If the communicator was created with MPI_DIST_GRAPH_CREATE, then the only requirement on the order of values in sources and destinations is that two calls to the routine with the same input argument comm will return the same sequence of edges. If maxindegree or maxoutdegree is smaller than the numbers returned by MPI_DIST_GRAPH_NEIGHBORS_COUNT, then only the first part of the full list is returned.
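
    A minimal C sketch of this query sequence (the helper name fetch_dist_neighbors is illustrative; comm is assumed to carry a distributed graph topology):

    #include <mpi.h>
    #include <stdlib.h>

    void fetch_dist_neighbors(MPI_Comm comm)
    {
        int indegree, outdegree, weighted;
        MPI_Dist_graph_neighbors_count(comm, &indegree, &outdegree, &weighted);

        int *sources       = (int *) malloc(indegree  * sizeof(int));
        int *sourceweights = (int *) malloc(indegree  * sizeof(int));
        int *destinations  = (int *) malloc(outdegree * sizeof(int));
        int *destweights   = (int *) malloc(outdegree * sizeof(int));

        MPI_Dist_graph_neighbors(comm, indegree, sources, sourceweights,
                                 outdegree, destinations, destweights);
        /* if weighted is false, sourceweights and destweights are not filled */

        /* ... use the adjacency lists here ... */
        free(sources);
        free(sourceweights);
        free(destinations);
        free(destweights);
    }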


    Advice to implementors.

    Since the query calls are defined to be local, each MPI process needs to store the list of its neighbors with incoming and outgoing edges. Communication is required at the collective MPI_DIST_GRAPH_CREATE call in order to compute the neighbor lists for each MPI process from the distributed graph specification. (End of advice to implementors.)

