7.9.2. Predefined Reduction Operations

Up: Global Reduction Operations Next: Signed Characters and Reductions Previous: Reduce

The following predefined operations are supplied for MPI_REDUCE and the related functions MPI_ALLREDUCE, MPI_REDUCE_SCATTER_BLOCK, MPI_REDUCE_SCATTER, MPI_SCAN, MPI_EXSCAN, all nonblocking variants of those (see Section Nonblocking Collective Operations), and MPI_REDUCE_LOCAL. An operation is selected by passing one of the following handles as the op argument.

Name          Meaning
MPI_MAX       maximum
MPI_MIN       minimum
MPI_SUM       sum
MPI_PROD      product
MPI_LAND      logical and
MPI_BAND      bit-wise and
MPI_LOR       logical or
MPI_BOR       bit-wise or
MPI_LXOR      logical exclusive or (xor)
MPI_BXOR      bit-wise exclusive or (xor)
MPI_MAXLOC    max value and location
MPI_MINLOC    min value and location

The two operations MPI_MINLOC and MPI_MAXLOC are discussed separately in Section MINLOC and MAXLOC. For the other predefined operations, we enumerate below the allowed combinations of op and datatype arguments. First, define groups of MPI basic datatypes in the following way.

C integer: MPI_INT, MPI_LONG, MPI_SHORT, MPI_UNSIGNED_SHORT, MPI_UNSIGNED, MPI_UNSIGNED_LONG, MPI_LONG_LONG_INT, MPI_LONG_LONG (as synonym), MPI_UNSIGNED_LONG_LONG, MPI_SIGNED_CHAR, MPI_UNSIGNED_CHAR, MPI_INT8_T, MPI_INT16_T, MPI_INT32_T, MPI_INT64_T, MPI_UINT8_T, MPI_UINT16_T, MPI_UINT32_T, and MPI_UINT64_T

Fortran integer: MPI_INTEGER and handles returned from MPI_TYPE_CREATE_F90_INTEGER and, if available, MPI_INTEGER1, MPI_INTEGER2, MPI_INTEGER4, MPI_INTEGER8, and MPI_INTEGER16

Floating point: MPI_FLOAT, MPI_DOUBLE, MPI_REAL, MPI_DOUBLE_PRECISION, MPI_LONG_DOUBLE, and handles returned from MPI_TYPE_CREATE_F90_REAL and, if available, MPI_REAL2, MPI_REAL4, MPI_REAL8, and MPI_REAL16

Logical: MPI_LOGICAL, MPI_C_BOOL, MPI_CXX_BOOL, and, if available, MPI_LOGICAL1, MPI_LOGICAL2, MPI_LOGICAL4, MPI_LOGICAL8, and MPI_LOGICAL16

Complex: MPI_COMPLEX, MPI_C_COMPLEX, MPI_C_FLOAT_COMPLEX (as synonym), MPI_C_DOUBLE_COMPLEX, MPI_C_LONG_DOUBLE_COMPLEX, MPI_CXX_FLOAT_COMPLEX, MPI_CXX_DOUBLE_COMPLEX, MPI_CXX_LONG_DOUBLE_COMPLEX, and handles returned from MPI_TYPE_CREATE_F90_COMPLEX and, if available, MPI_DOUBLE_COMPLEX, MPI_COMPLEX4, MPI_COMPLEX8, MPI_COMPLEX16, and MPI_COMPLEX32

Byte: MPI_BYTE

Multi-language types: MPI_AINT, MPI_OFFSET, and MPI_COUNT
Now, the valid datatypes for each operation are specified below.

Op                             Allowed Types
MPI_MAX, MPI_MIN               C integer, Fortran integer, Floating point, Multi-language types
MPI_SUM, MPI_PROD              C integer, Fortran integer, Floating point, Complex, Multi-language types
MPI_LAND, MPI_LOR, MPI_LXOR    C integer, Logical
MPI_BAND, MPI_BOR, MPI_BXOR    C integer, Fortran integer, Byte, Multi-language types

These operations together with all listed datatypes are valid in all supported programming languages; see also Reduce Operations in Section MPI Opaque Objects.

    The following examples use intra-communicators.


    Example. A routine that computes the dot product of two vectors distributed across a group of MPI processes, returning the answer at node zero.


    SUBROUTINE PAR_BLAS1(m, a, b, c, comm) 
    USE MPI 
    REAL a(m), b(m)       ! local slice of array 
    REAL c                ! result (at node zero) 
    REAL sum 
    INTEGER m, comm, i, ierr 
     
    ! local sum 
    sum = 0.0 
    DO i = 1, m 
       sum = sum + a(i)*b(i) 
    END DO 
     
    ! global sum 
    CALL MPI_REDUCE(sum, c, 1, MPI_REAL, MPI_SUM, 0, comm, ierr) 
    RETURN 
    END 
    


    Example. A routine that computes the product of a vector and an array distributed across a group of MPI processes, returning the answer at node zero.


    SUBROUTINE PAR_BLAS2(m, n, a, b, c, comm) 
    USE MPI 
    REAL a(m), b(m,n)    ! local slice of array 
    REAL c(n)            ! result 
    REAL sum(n) 
    INTEGER m, n, comm, i, j, ierr 
     
    ! local sum 
    DO j=1,n 
       sum(j) = 0.0 
       DO i=1,m 
          sum(j) = sum(j) + a(i)*b(i,j) 
       END DO 
    END DO 
     
    ! global sum 
    CALL MPI_REDUCE(sum, c, n, MPI_REAL, MPI_SUM, 0, comm, ierr) 
     
    ! return result at node zero (and garbage at the other nodes) 
    RETURN 
    END 
    





    (Unofficial) MPI-5.0 of June 9, 2025
    HTML Generated on March 2, 2025