4.7.3. Communication Completion


The functions MPI_WAIT and MPI_TEST are used to complete a nonblocking communication. The completion of a send operation indicates that the sender is now free to update the send buffer (the send operation itself leaves the content of the send buffer unchanged). It does not indicate that the message has been received; rather, it may have been buffered by the communication subsystem. However, if a synchronous mode send was used, the completion of the send operation indicates that a matching receive was initiated, and that the message will eventually be received by this matching receive.

The completion of a receive operation indicates that the receive buffer contains the received message, that the receiver is now free to access it, and that the status object is set. It does not indicate that the matching send operation has completed (but it does indicate, of course, that the send was initiated).

We shall use the following terminology: A null handle is a handle with value MPI_REQUEST_NULL. A persistent communication request and the handle to it are inactive if the request is not associated with any ongoing communication (see Section Persistent Communication Requests). A handle is active if it is neither null nor inactive. An empty status is a status that is set to return tag = MPI_ANY_TAG, source = MPI_ANY_SOURCE, error = MPI_SUCCESS, and is also internally configured so that calls to MPI_GET_COUNT and MPI_GET_ELEMENTS return count = 0 and MPI_TEST_CANCELLED returns false. We set a status variable to empty when the value returned in it is not significant. Status is set in this way so as to prevent errors due to accesses of stale information.
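The definition of an empty status can be made concrete with a short sketch. The fragment below (a minimal illustration, relying on the fact, stated later in this section, that completing a null request returns immediately with an empty status) checks each field named in the definition above:

```c
#include <mpi.h>
#include <assert.h>

/* Minimal sketch: completing a null request yields an empty status,
 * as defined above. */
int main(int argc, char **argv)
{
    MPI_Request req = MPI_REQUEST_NULL;
    MPI_Status status;
    int count, cancelled;

    MPI_Init(&argc, &argv);
    MPI_Wait(&req, &status);              /* returns immediately */

    assert(status.MPI_TAG == MPI_ANY_TAG);
    assert(status.MPI_SOURCE == MPI_ANY_SOURCE);
    assert(status.MPI_ERROR == MPI_SUCCESS);
    MPI_Get_count(&status, MPI_INT, &count);
    assert(count == 0);
    MPI_Test_cancelled(&status, &cancelled);
    assert(!cancelled);

    MPI_Finalize();
    return 0;
}
```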

The fields in a status object returned by a call to MPI_WAIT, MPI_TEST, or any of the other derived functions (MPI_{TEST|WAIT}{ALL|SOME|ANY}), where the request corresponds to a send call, are undefined, with two exceptions: The error status field will contain valid information if the wait or test call returned with MPI_ERR_IN_STATUS; and the returned status can be queried by the call MPI_TEST_CANCELLED.

Error codes belonging to the error class MPI_ERR_IN_STATUS should be returned only by the MPI completion functions that take arrays of MPI_Status. For the functions that take a single MPI_Status argument, the error code is returned by the function, and the value of the MPI_ERROR field in the MPI_Status argument is undefined (see Return Status).
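This distinction can be sketched as follows. The fragment assumes `req` holds active requests and that errors are returned to the caller rather than aborting (e.g., MPI_ERRORS_RETURN is set on the communicator); `handle_failure` is a hypothetical application routine:

```c
/* Sketch: per-request error inspection after an array completion call.
 * The MPI_ERROR field of each status is defined only because the
 * array version returned MPI_ERR_IN_STATUS. */
MPI_Status stats[2];
int n = 2;
int err = MPI_Waitall(n, req, stats);
if (err == MPI_ERR_IN_STATUS) {
    for (int i = 0; i < n; i++) {
        if (stats[i].MPI_ERROR != MPI_SUCCESS) {
            handle_failure(i, stats[i].MPI_ERROR);  /* hypothetical */
        }
    }
}
```

For the single-request variants (MPI_WAIT, MPI_TEST), only the function's return value carries the error code; `status.MPI_ERROR` would be undefined.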

MPI_WAIT(request, status)
INOUT request: request (handle)
OUT status: status object (status)
C binding
int MPI_Wait(MPI_Request *request, MPI_Status *status)
Fortran 2008 binding
MPI_Wait(request, status, ierror)

TYPE(MPI_Request), INTENT(INOUT) :: request
TYPE(MPI_Status) :: status
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_WAIT(REQUEST, STATUS, IERROR)

INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR

A call to MPI_WAIT returns when the operation identified by request is complete. If the request is an active persistent communication request, it is marked inactive. Any other type of request is deallocated and the request handle is set to MPI_REQUEST_NULL. MPI_WAIT is in general a nonlocal procedure. When the operation represented by the request is enabled, a call to MPI_WAIT is a local procedure call.

The call returns, in status, information on the completed operation. The content of the status object for a receive operation can be accessed as described in Section Return Status. The status object for a send operation may be queried by a call to MPI_TEST_CANCELLED (see Section Probe and Cancel).

One is allowed to call MPI_WAIT with a null or inactive request argument. In this case the procedure returns immediately with empty status.


Advice to users.

Successful return of MPI_WAIT after an MPI_IBSEND implies that the user send buffer can be reused---i.e., data has been sent out or copied into a buffer attached with MPI_BUFFER_ATTACH, MPI_COMM_ATTACH_BUFFER, or MPI_SESSION_ATTACH_BUFFER. Further, at this point, we can no longer cancel the send (see Section Probe and Cancel). If a matching receive is never started, then the buffer cannot be freed. This runs somewhat counter to the stated goal of MPI_CANCEL (always being able to free program space that was committed to the communication subsystem). (End of advice to users.)

Advice to implementors.

In a multithreaded environment, a call to MPI_WAIT should block only the calling thread, allowing the thread scheduler to schedule another thread for execution. (End of advice to implementors.)

MPI_TEST(request, flag, status)
INOUT request: communication request (handle)
OUT flag: true if operation completed (logical)
OUT status: status object (status)
C binding
int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)
Fortran 2008 binding
MPI_Test(request, flag, status, ierror)

TYPE(MPI_Request), INTENT(INOUT) :: request
LOGICAL, INTENT(OUT) :: flag
TYPE(MPI_Status) :: status
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_TEST(REQUEST, FLAG, STATUS, IERROR)

INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR
LOGICAL FLAG

A call to MPI_TEST returns flag = true if the operation identified by request is complete. In such a case, the status object is set to contain information on the completed operation. If the request is an active persistent communication request, it is marked as inactive. Any other type of request is deallocated and the request handle is set to MPI_REQUEST_NULL. The call returns flag = false if the operation identified by request is not complete. In this case, the value of the status object is undefined. MPI_TEST is a local procedure.

The return status object for a receive operation carries information that can be accessed as described in Section Return Status. The status object for a send operation carries information that can be accessed by a call to MPI_TEST_CANCELLED (see Section Probe and Cancel).

One is allowed to call MPI_TEST with a null or inactive request argument. In such a case the procedure returns with flag = true and empty status.

The procedures MPI_WAIT and MPI_TEST can be used to complete any request-based nonblocking or persistent operation.


Advice to users.

The use of the nonblocking MPI_TEST call allows the user to schedule alternative activities within a single thread of execution. An event-driven thread scheduler can be emulated with periodic calls to MPI_TEST. (End of advice to users.)
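A minimal sketch of this polling pattern is shown below. To keep the program self-contained it runs on a single rank, which posts a receive and a matching send to itself, and then polls the receive request with MPI_Test while other work could be scheduled:

```c
#include <mpi.h>
#include <stdio.h>

/* Sketch: interleave other work with completion checking via MPI_Test. */
int main(int argc, char **argv)
{
    int in = 0, out = 42, flag = 0;
    MPI_Request rreq, sreq;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Irecv(&in, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &rreq);
    MPI_Isend(&out, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &sreq);

    while (!flag) {
        MPI_Test(&rreq, &flag, &status);
        if (!flag) {
            /* ... schedule alternative work here ... */
        }
    }
    MPI_Wait(&sreq, MPI_STATUS_IGNORE);
    printf("received %d\n", in);

    MPI_Finalize();
    return 0;
}
```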

Example Simple usage of nonblocking operations and MPI_WAIT.

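The example code is rendered as an image in this version. A C sketch of the pattern it illustrates (variable names, counts, and the tag value are illustrative) is:

```c
#include <mpi.h>

/* Sketch: initiate communication, overlap it with computation, then
 * complete with MPI_Wait.  Assumes at least two ranks in MPI_COMM_WORLD. */
int main(int argc, char **argv)
{
    int rank, i;
    float a[15];
    MPI_Request request;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (i = 0; i < 15; i++) a[i] = (float)i;

    if (rank == 0) {
        MPI_Isend(a, 10, MPI_FLOAT, 1, 0, MPI_COMM_WORLD, &request);
        /* ... do some computation to mask communication latency ... */
        MPI_Wait(&request, &status);   /* sender may now reuse a */
    } else if (rank == 1) {
        MPI_Irecv(a, 15, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &request);
        /* ... do some computation to mask communication latency ... */
        MPI_Wait(&request, &status);   /* message is now in a */
    }

    MPI_Finalize();
    return 0;
}
```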

A request object can be freed using the following MPI procedure.

MPI_REQUEST_FREE(request)
INOUT request: communication request (handle)
C binding
int MPI_Request_free(MPI_Request *request)
Fortran 2008 binding
MPI_Request_free(request, ierror)

TYPE(MPI_Request), INTENT(INOUT) :: request
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_REQUEST_FREE(REQUEST, IERROR)

INTEGER REQUEST, IERROR

MPI_REQUEST_FREE is a local procedure. Upon successful return, MPI_REQUEST_FREE sets request to MPI_REQUEST_NULL. For an inactive request representing any type of MPI operation, MPI_REQUEST_FREE shall do the freeing stage of the associated operation during its execution.

For a request representing a nonblocking point-to-point or a persistent point-to-point operation, it is permitted (although strongly discouraged) to call MPI_REQUEST_FREE when the request is active. In this special case, MPI_REQUEST_FREE will only mark the request for freeing and MPI will actually do the freeing stage of the operation associated with the request later.

The use of this procedure for generalized requests is described in Section Generalized Requests.

Calling MPI_REQUEST_FREE with an active request representing any other type of MPI operation (e.g., any partitioned operation (see Chapter Partitioned Point-to-Point Communication), any collective operation (see Chapter Collective Communication), any I/O operation (see Chapter I/O), or any request-based RMA operation (see Chapter One-Sided Communications)) is erroneous.


Rationale.

For point-to-point operations, the MPI_REQUEST_FREE mechanism is provided for reasons of performance and convenience on the sending side. (End of rationale.)

Advice to users.

Once a request is freed by a call to MPI_REQUEST_FREE, it is not possible to check for the successful completion of the associated communication with calls to MPI_WAIT or MPI_TEST. Also, if an error occurs subsequently during the communication, an error code cannot be returned to the user---such an error must be treated as fatal. An active receive request should never be freed, as the receiver will have no way to verify that the receive has completed and that the receive buffer can be reused. (End of advice to users.)

Example An example using MPI_REQUEST_FREE.

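The example code is rendered as an image in this version. A C sketch of the kind of pattern it illustrates (variable names and the iteration count are illustrative, and the program assumes two ranks) is shown below: send requests are freed immediately after initiation, and correctness relies on message ordering rather than on completing the requests.

```c
#include <mpi.h>

/* Sketch: free send requests without waiting on them.  Rank 0's receipt
 * of the reply implies its earlier send completed, so its buffer may be
 * reused; similarly, rank 1's next receive can only match a send that
 * rank 0 posts after receiving the reply, so rank 1's reply buffer is
 * safe to overwrite. */
int main(int argc, char **argv)
{
    int rank, i, n = 100;
    float inval = 0.0f, outval = 1.0f;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        for (i = 0; i < n; i++) {
            MPI_Isend(&outval, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD, &req);
            MPI_Request_free(&req);           /* never waited on */
            MPI_Recv(&inval, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);      /* reply */
            outval = inval;
        }
    } else if (rank == 1) {
        for (i = 0; i < n; i++) {
            MPI_Recv(&inval, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Isend(&inval, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &req);
            MPI_Request_free(&req);           /* never waited on */
        }
    }

    MPI_Finalize();
    return 0;
}
```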





(Unofficial) MPI-4.1 of November 2, 2023