MPI_GRAPH_CREATE requires that each process passes the full (global) communication graph to the call. This limits the scalability of this constructor. With the distributed graph interface, the communication graph is specified in a fully distributed fashion. Each process specifies only the part of the communication graph of which it is aware. Typically, this could be the set of processes from which the process will eventually receive or get data, or the set of processes to which the process will send or put data, or some combination of such edges. Two different interfaces can be used to create a distributed graph topology. MPI_DIST_GRAPH_CREATE_ADJACENT creates a distributed graph communicator with each process specifying each of its incoming and outgoing (adjacent) edges in the logical communication graph and thus requires minimal communication during creation. MPI_DIST_GRAPH_CREATE provides full flexibility such that any process can indicate that communication will occur between any pair of processes in the graph.
To provide better possibilities for optimization by the MPI library, the distributed graph constructors permit weighted communication edges and take an info argument that can further influence process reordering or other optimizations performed by the MPI library. For example, hints can be provided on how edge weights are to be interpreted, the quality of the reordering, and/or the time permitted for the MPI library to process the graph.

  int MPI_Dist_graph_create_adjacent(MPI_Comm comm_old, int indegree, const int sources[], const int sourceweights[], int outdegree, const int destinations[], const int destweights[], MPI_Info info, int reorder, MPI_Comm *comm_dist_graph) 
  
 
  MPI_Dist_graph_create_adjacent(comm_old, indegree, sources, sourceweights, outdegree, destinations, destweights, info, reorder, comm_dist_graph, ierror) 
 TYPE(MPI_Comm), INTENT(IN) :: comm_old 
INTEGER, INTENT(IN) :: indegree, sources(indegree), outdegree, 
    destinations(outdegree) 
INTEGER, INTENT(IN) :: sourceweights(*), destweights(*) 
TYPE(MPI_Info), INTENT(IN) :: info 
LOGICAL, INTENT(IN) :: reorder 
TYPE(MPI_Comm), INTENT(OUT) :: comm_dist_graph 
INTEGER, OPTIONAL, INTENT(OUT) :: ierror 
  
  MPI_DIST_GRAPH_CREATE_ADJACENT(COMM_OLD, INDEGREE, SOURCES, SOURCEWEIGHTS, OUTDEGREE, DESTINATIONS, DESTWEIGHTS, INFO, REORDER, COMM_DIST_GRAPH, IERROR)
 INTEGER COMM_OLD, INDEGREE, SOURCES(*), SOURCEWEIGHTS(*), OUTDEGREE, 
    DESTINATIONS(*), DESTWEIGHTS(*), INFO, COMM_DIST_GRAPH, IERROR 
LOGICAL REORDER 
  
MPI_DIST_GRAPH_CREATE_ADJACENT returns a handle to a new communicator to which the distributed graph topology information is attached. Each process passes all information about its incoming and outgoing edges in the virtual distributed graph topology. The calling processes must ensure that each edge of the graph is described in the source and in the destination process with the same weights. If there are multiple edges for a given (source,dest) pair, then the sequence of the weights of these edges does not matter. The complete communication topology is the combination of all edges shown in the sources arrays of all processes in comm_old, which must be identical to the combination of all edges shown in the destinations arrays. Source and destination ranks must be process ranks of comm_old. This allows a fully distributed specification of the communication graph. Isolated processes (i.e., processes with no outgoing or incoming edges, that is, processes that have specified indegree and outdegree as zero and thus do not occur as source or destination rank in the graph specification) are allowed.
The call creates a new communicator comm_dist_graph of distributed graph topology type to which topology information has been attached. The number of processes in comm_dist_graph is identical to the number of processes in comm_old. The call to MPI_DIST_GRAPH_CREATE_ADJACENT is collective.
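As an illustration only (not part of the standard text), the following sketch builds a directed ring with MPI_DIST_GRAPH_CREATE_ADJACENT: each process declares one incoming edge from its left neighbor and one outgoing edge to its right neighbor, so every edge is described with the same weight at both its source and its destination. The unit weights and the choice not to reorder are arbitrary for this example.

#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Comm ring_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* one incoming edge (from the left neighbor) ... */
    int sources[1]       = { (rank - 1 + size) % size };
    int sourceweights[1] = { 1 };
    /* ... and one outgoing edge (to the right neighbor) */
    int destinations[1]  = { (rank + 1) % size };
    int destweights[1]   = { 1 };

    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD, 1, sources, sourceweights,
                                   1, destinations, destweights,
                                   MPI_INFO_NULL, 0, &ring_comm);

    MPI_Comm_free(&ring_comm);
    MPI_Finalize();
    return 0;
}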
 
Weights are specified as non-negative integers and can be used to influence the process remapping strategy and other internal MPI optimizations. For instance, approximate count arguments of later communication calls along specific edges could be used as their edge weights. Multiplicity of edges can likewise indicate more intense communication between pairs of processes. However, the exact meaning of edge weights is not specified by the MPI standard and is left to the implementation. In C or Fortran, an application can supply the special value MPI_UNWEIGHTED for the weight array to indicate that all edges have the same (effectively no) weight. It is erroneous to supply MPI_UNWEIGHTED for some but not all processes of comm_old. If the graph is weighted but indegree or outdegree is zero, then MPI_WEIGHTS_EMPTY or any arbitrary array may be passed to sourceweights or destweights respectively. Note that MPI_UNWEIGHTED and MPI_WEIGHTS_EMPTY are not special weight values; rather they are special values for the total array argument. In Fortran, MPI_UNWEIGHTED and MPI_WEIGHTS_EMPTY are objects like MPI_BOTTOM (not usable for initialization or assignment). See Section Named Constants.
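For illustration (not normative text, and assuming comm_old, comm_dist_graph, sources, destinations, sourceweights, and destweights are declared as in the binding above), the two cases look as follows in C: in the first call every process treats the graph as unweighted; in the second, a process with indegree zero in a weighted graph passes MPI_WEIGHTS_EMPTY for the empty sourceweights array.

/* Case 1 (illustrative): all processes agree the graph is unweighted
   and pass MPI_UNWEIGHTED for both weight arrays. */
MPI_Dist_graph_create_adjacent(comm_old, indegree, sources, MPI_UNWEIGHTED,
                               outdegree, destinations, MPI_UNWEIGHTED,
                               MPI_INFO_NULL, 0, &comm_dist_graph);

/* Case 2 (illustrative): in a weighted graph, this process has no incoming
   edges, so no entry of sources is significant and MPI_WEIGHTS_EMPTY is
   passed for sourceweights. */
MPI_Dist_graph_create_adjacent(comm_old, 0, sources, MPI_WEIGHTS_EMPTY,
                               outdegree, destinations, destweights,
                               MPI_INFO_NULL, 0, &comm_dist_graph);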
 
 
Advice to users.
In the case of an empty weights array argument passed while constructing a weighted graph, one should not pass NULL because the value of MPI_UNWEIGHTED may be equal to NULL. The value of this argument would then be indistinguishable from MPI_UNWEIGHTED to the implementation. In this case MPI_WEIGHTS_EMPTY should be used instead.
(End of advice to users.)
 
 
 
Advice to implementors.
It is recommended that MPI_UNWEIGHTED not be implemented as NULL.
(End of advice to implementors.)
 
 
 
Rationale.
To ensure backward compatibility, MPI_UNWEIGHTED may still be implemented as NULL. See subsec:22to30.
(End of rationale.)
 
The meaning of the info and reorder arguments is defined in the description of the following routine.

  int MPI_Dist_graph_create(MPI_Comm comm_old, int n, const int sources[], const int degrees[], const int destinations[], const int weights[], MPI_Info info, int reorder, MPI_Comm *comm_dist_graph) 
  
 
  MPI_Dist_graph_create(comm_old, n, sources, degrees, destinations, weights, info, reorder, comm_dist_graph, ierror) 
 TYPE(MPI_Comm), INTENT(IN) :: comm_old 
INTEGER, INTENT(IN) :: n, sources(n), degrees(n), destinations(*) 
INTEGER, INTENT(IN) :: weights(*) 
TYPE(MPI_Info), INTENT(IN) :: info 
LOGICAL, INTENT(IN) :: reorder 
TYPE(MPI_Comm), INTENT(OUT) :: comm_dist_graph 
INTEGER, OPTIONAL, INTENT(OUT) :: ierror 
  
  MPI_DIST_GRAPH_CREATE(COMM_OLD, N, SOURCES, DEGREES, DESTINATIONS, WEIGHTS, INFO, REORDER, COMM_DIST_GRAPH, IERROR)
 INTEGER COMM_OLD, N, SOURCES(*), DEGREES(*), DESTINATIONS(*),
    WEIGHTS(*), INFO, COMM_DIST_GRAPH, IERROR 
LOGICAL REORDER 
  
MPI_DIST_GRAPH_CREATE returns a handle to a new communicator to which the distributed graph topology information is attached. Concretely, each process calls the constructor with a set of directed (source,destination) communication edges as described below. Every process passes an array of n source nodes in the sources array. For each source node, a non-negative number of destination nodes is specified in the degrees array. The destination nodes are stored in the corresponding consecutive segment of the destinations array. More precisely, if the i-th node in sources is s, this specifies degrees[i] edges (s,d) with d of the j-th such edge stored in destinations[degrees[0]+...+degrees[i-1]+j]. The weight of this edge is stored in weights[degrees[0]+...+degrees[i-1]+j]. Both the sources and the destinations arrays may contain the same node more than once, and the order in which nodes are listed as destinations or sources is not significant. Similarly, different processes may specify edges with the same source and destination nodes. Source and destination nodes must be process ranks of comm_old. Different processes may specify different numbers of source and destination nodes, as well as different source to destination edges. This allows a fully distributed specification of the communication graph. Isolated processes (i.e., processes with no outgoing or incoming edges, that is, processes that do not occur as source or destination node in the graph specification) are allowed.
The call creates a new communicator comm_dist_graph of distributed graph topology type to which topology information has been attached. The number of processes in comm_dist_graph is identical to the number of processes in comm_old. The call to MPI_DIST_GRAPH_CREATE is collective.
If reorder = false, all processes will have the same rank in comm_dist_graph as in comm_old. If reorder = true then the MPI library is free to remap to other processes (of comm_old) in order to improve communication on the edges of the communication graph. The weight associated with each edge is a hint to the MPI library about the amount or intensity of communication on that edge, and may be used to compute a ``best'' reordering.
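To make the indexing concrete, here is a small non-normative helper that enumerates the edges a process would pass to MPI_DIST_GRAPH_CREATE; the edges of the i-th source start in destinations and weights at offset degrees[0]+...+degrees[i-1].

#include <stdio.h>

/* Print the (source,destination,weight) triples encoded by the four arrays
   that one process passes to MPI_Dist_graph_create. */
static void print_edges(int n, const int sources[], const int degrees[],
                        const int destinations[], const int weights[])
{
    int offset = 0;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < degrees[i]; j++)
            printf("edge (%d,%d) with weight %d\n",
                   sources[i], destinations[offset + j], weights[offset + j]);
        offset += degrees[i];    /* advance past the edges of sources[i] */
    }
}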
 
Weights are specified as non-negative integers and can be used to influence the process remapping strategy and other internal MPI optimizations. For instance, approximate count arguments of later communication calls along specific edges could be used as their edge weights. Multiplicity of edges can likewise indicate more intense communication between pairs of processes. However, the exact meaning of edge weights is not specified by the MPI standard and is left to the implementation. In C or Fortran, an application can supply the special value MPI_UNWEIGHTED for the weight array to indicate that all edges have the same (effectively no) weight. It is erroneous to supply MPI_UNWEIGHTED for some but not all processes of comm_old. If the graph is weighted but n = 0, then MPI_WEIGHTS_EMPTY or any arbitrary array may be passed to weights. Note that MPI_UNWEIGHTED and MPI_WEIGHTS_EMPTY are not special weight values; rather they are special values for the total array argument. In Fortran, MPI_UNWEIGHTED and MPI_WEIGHTS_EMPTY are objects like MPI_BOTTOM (not usable for initialization or assignment). See Section Named Constants.
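As a non-normative sketch (assuming comm_old and comm_dist_graph are declared as in the binding above), a process that contributes no edges to a weighted graph passes n = 0; the node and degree arrays are then not examined, and MPI_WEIGHTS_EMPTY (or any array) may be given for weights:

/* Illustrative only: this process contributes no edges to the weighted
   graph.  With n = 0 the contents of dummy are never read. */
int dummy[1];
MPI_Dist_graph_create(comm_old, 0, dummy, dummy, dummy, MPI_WEIGHTS_EMPTY,
                      MPI_INFO_NULL, 1, &comm_dist_graph);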
 
 
Advice to users.
In the case of an empty weights array argument passed while constructing a weighted graph, one should not pass NULL because the value of MPI_UNWEIGHTED may be equal to NULL. The value of this argument would then be indistinguishable from MPI_UNWEIGHTED to the implementation. MPI_WEIGHTS_EMPTY should be used instead.
(End of advice to users.)
 
 
 
Advice to implementors.
It is recommended that MPI_UNWEIGHTED not be implemented as NULL.
(End of advice to implementors.)
 
 
 
Rationale.
To ensure backward compatibility, MPI_UNWEIGHTED may still be implemented as NULL. See subsec:22to30.
(End of rationale.)
 
The meaning of the weights argument can be influenced by the info argument. Info arguments can be used to guide the mapping; possible options include minimizing the maximum number of edges between processes on different SMP nodes, or minimizing the sum of all such edges. An MPI implementation is not obliged to follow specific hints, and it is valid for an MPI implementation not to do any reordering. An MPI implementation may specify more info key-value pairs. All processes must specify the same set of key-value info pairs.
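Info keys for these constructors are implementation-defined, so the key used in the following non-normative sketch is invented for illustration; only MPI_INFO_NULL is guaranteed to be understood by every implementation, and all processes must pass the same info.

/* Hypothetical sketch: the key "dist_graph_optimization" and its value are
   made up for this example; real keys, if any, are implementation-specific
   and must be documented by the MPI library. */
MPI_Info info;
MPI_Info_create(&info);
MPI_Info_set(info, "dist_graph_optimization", "minimize_inter_node_edges");
MPI_Dist_graph_create(comm_old, n, sources, degrees, destinations, weights,
                      info, 1, &comm_dist_graph);
MPI_Info_free(&info);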
 
 
 
Advice to implementors.
MPI implementations must document any additionally supported key-value info pairs. MPI_INFO_NULL is always valid, and may indicate the default creation of the distributed graph topology to the MPI library.

An implementation does not explicitly need to construct the topology from its distributed parts. However, all processes can construct the full topology from the distributed specification and use this in a call to MPI_GRAPH_CREATE to create the topology. This may serve as a reference implementation of the functionality, and may be acceptable for small communicators. However, a scalable high-quality implementation would save the topology graph in a distributed way.
(End of advice to implementors.)
 
 Example  
  
As for Example Graph Constructor, assume there are four processes 0, 1, 2, 3 with the following adjacency matrix and unit edge weights:
| process | neighbors | 
| 0 | 1, 3 | 
| 1 | 0 | 
| 2 | 3 | 
| 3 | 0, 2 | 
With MPI_DIST_GRAPH_CREATE, this graph could be constructed in many different ways. One way is for each process to specify only its outgoing edges, using the following arguments:
| process | n | sources | degrees | destinations | weights | 
| 0 | 1 | 0 | 2 | 1,3 | 1,1 | 
| 1 | 1 | 1 | 1 | 0 | 1 | 
| 2 | 1 | 2 | 1 | 3 | 1 | 
| 3 | 1 | 3 | 2 | 0,2 | 1,1 | 
Another way would be to pass the whole graph on process 0, which could be done with the following arguments per process:
| process | n | sources | degrees | destinations | weights | 
| 0 | 4 | 0,1,2,3 | 2,1,1,2 | 1,3,0,3,0,2 | 1,1,1,1,1,1 | 
| 1 | 0 | - | - | - | - | 
| 2 | 0 | - | - | - | - | 
| 3 | 0 | - | - | - | - |
In both cases above, the application could supply MPI_UNWEIGHTED instead of explicitly providing identical weights.
MPI_DIST_GRAPH_CREATE_ADJACENT could be used to specify this graph using the following arguments:
| process | indegree | sources | sourceweights | outdegree | destinations | destweights | 
| 0 | 2 | 1,3 | 1,1 | 2 | 1,3 | 1,1 | 
| 1 | 1 | 0 | 1 | 1 | 0 | 1 | 
| 2 | 1 | 3 | 1 | 1 | 3 | 1 | 
| 3 | 2 | 0,2 | 1,1 | 2 | 0,2 | 1,1 | 
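A non-normative sketch of how each process could fill these arguments in C (the adjacency lists are hard-coded from the table above, and the same lists serve as sources and destinations because the example graph is symmetric; it assumes exactly four processes in MPI_COMM_WORLD):

#include <mpi.h>

/* Adjacency lists taken from the table above. */
static const int nbrs_0[] = { 1, 3 };
static const int nbrs_1[] = { 0 };
static const int nbrs_2[] = { 3 };
static const int nbrs_3[] = { 0, 2 };

void build_example_graph(MPI_Comm *comm_dist_graph)
{
    static const int *const nbrs[4] = { nbrs_0, nbrs_1, nbrs_2, nbrs_3 };
    static const int degree[4]      = { 2, 1, 1, 2 };
    static const int weights[2]     = { 1, 1 };   /* unit edge weights */
    int rank;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   degree[rank], nbrs[rank], weights,
                                   degree[rank], nbrs[rank], weights,
                                   MPI_INFO_NULL, 0, comm_dist_graph);
}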
 
 Example  
  
A two-dimensional PxQ torus where all processes communicate along the dimensions and along the diagonal edges. This cannot be modeled with Cartesian topologies, but can easily be captured with MPI_DIST_GRAPH_CREATE as shown in the following code. In this example, the communication along the dimensions is twice as heavy as the communication along the diagonals:
 
 
/* 
Input:     dimensions P, Q 
Condition: number of processes equal to P*Q; otherwise only  
           ranks smaller than P*Q participate 
*/ 
int rank, x, y; 
int sources[1], degrees[1]; 
int destinations[8], weights[8]; 
MPI_Comm comm_dist_graph; 
 
MPI_Comm_rank(MPI_COMM_WORLD, &rank); 
 
/* get x and y dimension */ 
y=rank/P; x=rank%P; 
 
/* get my communication partners along x dimension */ 
destinations[0] = P*y+(x+1)%P; weights[0] = 2; 
destinations[1] = P*y+(P+x-1)%P; weights[1] = 2; 
 
/* get my communication partners along y dimension */ 
destinations[2] = P*((y+1)%Q)+x; weights[2] = 2; 
destinations[3] = P*((Q+y-1)%Q)+x; weights[3] = 2; 
 
/* get my communication partners along diagonals */ 
destinations[4] = P*((y+1)%Q)+(x+1)%P; weights[4] = 1; 
destinations[5] = P*((Q+y-1)%Q)+(x+1)%P; weights[5] = 1; 
destinations[6] = P*((y+1)%Q)+(P+x-1)%P; weights[6] = 1; 
destinations[7] = P*((Q+y-1)%Q)+(P+x-1)%P; weights[7] = 1; 
 
sources[0] = rank; 
degrees[0] = 8; 
MPI_Dist_graph_create(MPI_COMM_WORLD, 1, sources, degrees, destinations, 
                      weights, MPI_INFO_NULL, 1, &comm_dist_graph);
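As a non-normative usage note, the topology attached to comm_dist_graph can be queried afterwards with MPI_DIST_GRAPH_NEIGHBORS_COUNT and MPI_DIST_GRAPH_NEIGHBORS; for the torus example above each process should see an indegree and outdegree of 8 and a weighted graph:

int indegree, outdegree, weighted;
int srcs[8], srcw[8], dsts[8], dstw[8];

/* Query the degrees and whether the graph carries weights ... */
MPI_Dist_graph_neighbors_count(comm_dist_graph, &indegree, &outdegree,
                               &weighted);
/* ... and retrieve the adjacency lists of the calling process. */
MPI_Dist_graph_neighbors(comm_dist_graph, indegree, srcs, srcw,
                         outdegree, dsts, dstw);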