9.5.3. Graph Constructor


MPI_GRAPH_CREATE(comm_old, nnodes, index, edges, reorder, comm_graph)
IN   comm_old     input communicator (handle)
IN   nnodes       number of nodes in graph (integer)
IN   index        array of integers describing node degrees (see below)
IN   edges        array of integers describing graph edges (see below)
IN   reorder      ranks may be reordered (true) or not (false) (logical)
OUT  comm_graph   new communicator with associated graph topology (handle)
C binding
int MPI_Graph_create(MPI_Comm comm_old, int nnodes, const int index[], const int edges[], int reorder, MPI_Comm *comm_graph)
Fortran 2008 binding
MPI_Graph_create(comm_old, nnodes, index, edges, reorder, comm_graph, ierror)

TYPE(MPI_Comm), INTENT(IN) :: comm_old
INTEGER, INTENT(IN) :: nnodes, index(nnodes), edges(*)
LOGICAL, INTENT(IN) :: reorder
TYPE(MPI_Comm), INTENT(OUT) :: comm_graph
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_GRAPH_CREATE(COMM_OLD, NNODES, INDEX, EDGES, REORDER, COMM_GRAPH, IERROR)

INTEGER COMM_OLD, NNODES, INDEX(*), EDGES(*), COMM_GRAPH, IERROR
LOGICAL REORDER

MPI_GRAPH_CREATE returns a handle to a new communicator to which the graph topology information is attached. If reorder = false then the rank of each MPI process in the group of the new communicator is identical to its rank in the group of the old communicator. If reorder = true then the procedure may reorder the ranks of the MPI processes. If the number of nodes in the graph (nnodes) is smaller than the size of the group of comm_old, then MPI_COMM_NULL is returned by some MPI processes, in analogy to MPI_CART_CREATE and MPI_COMM_SPLIT. If the graph is empty, i.e., nnodes = 0, then MPI_COMM_NULL is returned in all MPI processes. The call is erroneous if it specifies a graph that is larger than the group size of the input communicator.

The three parameters nnodes, index, and edges define the graph structure. nnodes is the number of nodes of the graph. The nodes are numbered from 0 to nnodes-1. The i-th entry of array index stores the total number of neighbors of the first i graph nodes. The lists of neighbors of nodes 0, 1, ..., nnodes-1 are stored in consecutive locations in array edges. The array edges is a flattened representation of the edge lists. The total number of entries in index is nnodes and the total number of entries in edges is equal to the number of graph edges.

The definitions of the arguments nnodes, index, and edges are illustrated with the following simple example.


Example: Specification of the adjacency matrix for MPI_GRAPH_CREATE.

Assume there are four MPI processes with ranks 0, 1, 2, 3 in the input communicator with the following adjacency matrix:

MPI process   neighbors
0             1, 3
1             0
2             3
3             0, 2
Then, the input arguments are:

nnodes = 4
index = 2, 3, 4, 6
edges = 1, 3, 0, 3, 0, 2

Thus, in C, index[0] is the degree of node zero, and index[i] - index[i-1] is the degree of node i, i=1, ..., nnodes-1; the list of neighbors of node zero is stored in edges[j], for 0 <= j <= index[0]-1, and the list of neighbors of node i, i > 0, is stored in edges[j], index[i-1] <= j <= index[i]-1.

In Fortran, index(1) is the degree of node zero, and index(i+1) - index(i) is the degree of node i, i=1, ..., nnodes-1; the list of neighbors of node zero is stored in edges(j), for 1 <= j <= index(1), and the list of neighbors of node i, i > 0, is stored in edges(j), index(i)+1 <= j <= index(i+1).

A single MPI process is allowed to be defined multiple times in the list of neighbors of an MPI process (i.e., there may be multiple edges between two MPI processes). An MPI process is also allowed to be a neighbor to itself (i.e., a self loop in the graph). The adjacency matrix is allowed to be nonsymmetric.
Advice to users.

Performance implications of using multiple edges or a nonsymmetric adjacency matrix are not defined. The definition of a node-neighbor edge does not imply a direction of the communication. (End of advice to users.)

Advice to implementors.

The following topology information is likely to be stored with a communicator:


For a graph structure the number of nodes is equal to the number of MPI processes in the group. Therefore, the number of nodes does not have to be stored explicitly. An additional zero entry at the start of array index simplifies access to the topology information. (End of advice to implementors.)



(Unofficial) MPI-4.1 of November 2, 2023