7.12.1. Nonblocking Barrier Synchronization


MPI_IBARRIER(comm, request)
IN    comm       communicator (handle)
OUT   request    communication request (handle)
C binding
int MPI_Ibarrier(MPI_Comm comm, MPI_Request *request)
Fortran 2008 binding
MPI_Ibarrier(comm, request, ierror)
    TYPE(MPI_Comm), INTENT(IN) :: comm
    TYPE(MPI_Request), INTENT(OUT) :: request
    INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_IBARRIER(COMM, REQUEST, IERROR)
    INTEGER COMM, REQUEST, IERROR

MPI_IBARRIER is a nonblocking version of MPI_BARRIER. By calling MPI_IBARRIER, an MPI process notifies the other MPI processes that it has reached the barrier. The call returns immediately, independent of whether other MPI processes have called MPI_IBARRIER. The usual barrier semantics are enforced at the corresponding completion operation (test or wait): in the intra-communicator case, the operation completes only after all other MPI processes in the communicator have called MPI_IBARRIER; in the inter-communicator case, it completes when all MPI processes in the remote group have called MPI_IBARRIER.
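
As an illustration only (not part of the standard text), the following C sketch shows the basic call sequence: MPI_Ibarrier returns immediately, and the barrier semantics are enforced by the completion call (here MPI_Wait).

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Request request;
    /* Returns immediately; merely announces that this process has
       reached the barrier. */
    MPI_Ibarrier(MPI_COMM_WORLD, &request);

    /* ... computation that does not depend on the other processes ... */

    /* Completes only after all processes in MPI_COMM_WORLD have
       called MPI_Ibarrier. */
    MPI_Wait(&request, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}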


Advice to users.

A nonblocking barrier can be used to hide latency. Placing independent computation between the MPI_IBARRIER call and the subsequent completion call allows that computation to overlap with the barrier latency and can therefore shorten possible waiting times. The semantic properties are also useful when mixing collective operations and point-to-point messages. (End of advice to users.)
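
One possible shape of this latency-hiding pattern is sketched below. It is not taken from the standard; do_independent_work() and work_remains() are hypothetical placeholders for application code on a communicator comm.

MPI_Request request;
int done = 0;

MPI_Ibarrier(comm, &request);                  /* announce arrival, return at once */
while (!done && work_remains()) {
    do_independent_work();                     /* overlaps with the barrier latency */
    MPI_Test(&request, &done, MPI_STATUS_IGNORE);
}
if (!done)
    MPI_Wait(&request, MPI_STATUS_IGNORE);     /* enforce the barrier semantics */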


