9.5.8. Low-Level Topology Functions


The two additional functions introduced in this section can be used to implement all other topology functions. In general they will not be called by the user directly, except when creating additional virtual topology capabilities other than those provided by MPI. The two calls are both local.

MPI_CART_MAP(comm, ndims, dims, periods, newrank)
IN    comm       input communicator (handle)
IN    ndims      number of dimensions of Cartesian structure (integer)
IN    dims       integer array of size ndims specifying the number of processes in each coordinate direction
IN    periods    logical array of size ndims specifying the periodicity specification in each coordinate direction
OUT   newrank    reordered rank of the calling MPI process; MPI_UNDEFINED if the calling MPI process does not belong to grid (integer)
C binding
int MPI_Cart_map(MPI_Comm comm, int ndims, const int dims[], const int periods[], int *newrank)
Fortran 2008 binding
MPI_Cart_map(comm, ndims, dims, periods, newrank, ierror)

TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, INTENT(IN) :: ndims, dims(ndims)
LOGICAL, INTENT(IN) :: periods(ndims)
INTEGER, INTENT(OUT) :: newrank
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_CART_MAP(COMM, NDIMS, DIMS, PERIODS, NEWRANK, IERROR)

INTEGER COMM, NDIMS, DIMS(*), NEWRANK, IERROR
LOGICAL PERIODS(*)

MPI_CART_MAP computes an "optimal" placement for the calling MPI process on the physical machine. A possible implementation of this function is to always return the rank of the calling MPI process, that is, not to perform any reordering.


Advice to implementors.

The function MPI_CART_CREATE(comm, ndims, dims, periods, reorder, comm_cart), with reorder = true, can be implemented by calling MPI_CART_MAP(comm, ndims, dims, periods, newrank), then calling MPI_COMM_SPLIT(comm, color, key, comm_cart), with color = 0 if newrank ≠ MPI_UNDEFINED, color = MPI_UNDEFINED otherwise, and key = newrank. If ndims is zero then a zero-dimensional Cartesian topology is created.

The function MPI_CART_SUB(comm, remain_dims, comm_new) can be implemented by a call to MPI_COMM_SPLIT(comm, color, key, comm_new), using a single number encoding of the lost dimensions as color and a single number encoding of the preserved dimensions as key.

All other Cartesian topology functions can be implemented locally, using the topology information that is cached with the communicator. (End of advice to implementors.)
The corresponding function for graph structures is as follows.

MPI_GRAPH_MAP(comm, nnodes, index, edges, newrank)
IN    comm       input communicator (handle)
IN    nnodes     number of graph nodes (integer)
IN    index      integer array specifying the graph structure (for details see the definition of MPI_GRAPH_CREATE)
IN    edges      integer array specifying the graph structure
OUT   newrank    reordered rank of the calling MPI process; MPI_UNDEFINED if the calling MPI process does not belong to graph (integer)
C binding
int MPI_Graph_map(MPI_Comm comm, int nnodes, const int index[], const int edges[], int *newrank)
Fortran 2008 binding
MPI_Graph_map(comm, nnodes, index, edges, newrank, ierror)

TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, INTENT(IN) :: nnodes, index(nnodes), edges(*)
INTEGER, INTENT(OUT) :: newrank
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_GRAPH_MAP(COMM, NNODES, INDEX, EDGES, NEWRANK, IERROR)

INTEGER COMM, NNODES, INDEX(*), EDGES(*), NEWRANK, IERROR


Advice to implementors.

The function MPI_GRAPH_CREATE(comm, nnodes, index, edges, reorder, comm_graph), with reorder = true, can be implemented by calling MPI_GRAPH_MAP(comm, nnodes, index, edges, newrank), then calling MPI_COMM_SPLIT(comm, color, key, comm_graph), with color = 0 if newrank ≠ MPI_UNDEFINED, color = MPI_UNDEFINED otherwise, and key = newrank.

All other graph topology functions can be implemented locally, using the topology information that is cached with the communicator. (End of advice to implementors.)




(Unofficial) MPI-4.1 of November 2, 2023
HTML Generated on November 19, 2023