186. Partitioning of Cartesian Structures


MPI_CART_SUB(comm, remain_dims, newcomm)
IN comm communicator with Cartesian structure (handle)
IN remain_dims the i-th entry of remain_dims specifies whether the i-th dimension is kept in the subgrid (true) or is dropped (false) (logical vector)
OUT newcomm communicator containing the subgrid that includes the calling process (handle)

int MPI_Cart_sub(MPI_Comm comm, const int remain_dims[], MPI_Comm *newcomm)

MPI_Cart_sub(comm, remain_dims, newcomm, ierror)
    TYPE(MPI_Comm), INTENT(IN) :: comm
    LOGICAL, INTENT(IN) :: remain_dims(*)
    TYPE(MPI_Comm), INTENT(OUT) :: newcomm
    INTEGER, OPTIONAL, INTENT(OUT) :: ierror

MPI_CART_SUB(COMM, REMAIN_DIMS, NEWCOMM, IERROR)
    INTEGER COMM, NEWCOMM, IERROR
    LOGICAL REMAIN_DIMS(*)

If a Cartesian topology has been created with MPI_CART_CREATE, the function MPI_CART_SUB can be used to partition the communicator group into subgroups that form lower-dimensional Cartesian subgrids, and to build for each subgroup a communicator with the associated subgrid Cartesian topology. If all entries in remain_dims are false, or if comm is already associated with a zero-dimensional Cartesian topology, then newcomm is associated with a zero-dimensional Cartesian topology. (This function is closely related to MPI_COMM_SPLIT.)
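
To make the relationship to MPI_COMM_SPLIT concrete, the following C sketch (not part of the standard; the helper name split_like_cart_sub is illustrative) emulates the grouping that MPI_CART_SUB performs, using a process's coordinates along the dropped dimensions as the split color. The resulting communicator contains the same group of processes, but, unlike the one returned by MPI_CART_SUB, it carries no Cartesian topology.

    #include <mpi.h>

    /* Emulate the grouping done by MPI_Cart_sub on a 3-dimensional
       Cartesian communicator comm with remain_dims = (true, false, true):
       dimension 1 is dropped, so its coordinate serves as the color. */
    void split_like_cart_sub(MPI_Comm comm, MPI_Comm *newcomm)
    {
        int rank, coords[3];
        MPI_Comm_rank(comm, &rank);
        MPI_Cart_coords(comm, rank, 3, coords);

        /* Processes that share the same coordinate in every dropped
           dimension belong to the same subgroup. */
        int color = coords[1];

        /* key = 0 keeps the relative rank order of comm. */
        MPI_Comm_split(comm, color, 0, newcomm);

        /* Unlike MPI_Cart_sub, *newcomm has no subgrid Cartesian
           topology attached to it. */
    }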


Example. Assume that MPI_CART_CREATE(..., comm) has defined a (2 × 3 × 4) grid. Let remain_dims = (true, false, true). Then a call to

     MPI_CART_SUB(comm, remain_dims, comm_new)
will create three communicators, each with eight processes, in a 2 × 4 Cartesian topology. If remain_dims = (false, false, true), then the call to MPI_CART_SUB(comm, remain_dims, comm_new) will create six non-overlapping communicators, each with four processes, in a one-dimensional Cartesian topology.
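
This example can be reproduced with the following self-contained C program, a minimal sketch assuming it is launched on exactly 24 processes so that the full 2 × 3 × 4 grid is populated (with more processes, those outside the grid receive MPI_COMM_NULL and are skipped):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int dims[3]    = {2, 3, 4};   /* the 2 x 3 x 4 grid */
        int periods[3] = {0, 0, 0};   /* no periodic dimensions */
        MPI_Comm comm;
        MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 0, &comm);

        if (comm != MPI_COMM_NULL) {
            int remain_dims[3] = {1, 0, 1};   /* keep dims 0 and 2 */
            MPI_Comm comm_new;
            MPI_Cart_sub(comm, remain_dims, &comm_new);

            int sub_size;
            MPI_Comm_size(comm_new, &sub_size);
            printf("subgrid communicator has %d processes\n", sub_size);

            MPI_Comm_free(&comm_new);
            MPI_Comm_free(&comm);
        }

        MPI_Finalize();
        return 0;
    }

Run with, e.g., mpiexec -n 24: every process should report a subgrid communicator of 8 processes, and three such communicators exist in total.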

