3.2.2. Clarification of MPI_FINALIZE


This routine cleans up all MPI state. Each process must call MPI_FINALIZE before it exits. Unless there has been a call to MPI_ABORT, each process must ensure that all pending non-blocking communications are (locally) complete before calling MPI_FINALIZE. Further, at the instant at which the last process calls MPI_FINALIZE, all pending sends must be matched by a receive, and all pending receives must be matched by a send.

For example, the following program is correct:

        Process 0                Process 1 
        ---------                --------- 
        MPI_Init();              MPI_Init(); 
        MPI_Send(dest=1);        MPI_Recv(src=0); 
        MPI_Finalize();          MPI_Finalize(); 
Without the matching receive, the program is erroneous:
        Process 0                Process 1 
        ---------                --------- 
        MPI_Init();              MPI_Init(); 
        MPI_Send(dest=1); 
        MPI_Finalize();          MPI_Finalize(); 
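
Rendered as compilable C, the correct program might look as follows. This is a minimal sketch; the payload variable, count, and tag are illustrative and not part of the standard's example.

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, buf = 42;              /* illustrative payload */
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Matched by the receive on process 1, so the send is
               complete before either process calls MPI_Finalize. */
            MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        }

        MPI_Finalize();
        return 0;
    }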

A successful return from a blocking communication operation or from MPI_WAIT or MPI_TEST tells the user that the buffer can be reused and that the communication is completed by the user, but does not guarantee that the local process has no more work to do. A successful return from MPI_REQUEST_FREE with a request handle generated by an MPI_ISEND nullifies the handle but provides no assurance of operation completion. The MPI_ISEND is complete only when it is known by some means that a matching receive has completed. MPI_FINALIZE guarantees that all local actions required by communications the user has completed will, in fact, occur before it returns.

MPI_FINALIZE guarantees nothing about pending communications that have not been completed (completion is assured only by MPI_WAIT, MPI_TEST, or MPI_REQUEST_FREE combined with some other verification of completion).


Example This program is correct:

rank 0                          rank 1 
===================================================== 
...                             ... 
MPI_Isend();                    MPI_Recv(); 
MPI_Request_free();             MPI_Barrier(); 
MPI_Barrier();                  MPI_Finalize(); 
MPI_Finalize();                 exit(); 
exit();                         


Example This program is erroneous and its behavior is undefined:

rank 0                          rank 1 
===================================================== 
...                             ... 
MPI_Isend();                    MPI_Recv(); 
MPI_Request_free();             MPI_Finalize(); 
MPI_Finalize();                 exit(); 
exit();                         
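
For concreteness, the correct version above might be written in C as follows; this is a sketch, and the buffer contents and tag are illustrative. The barrier is what makes the pattern safe: process 1 reaches it only after its receive has completed, so process 0 knows the freed send has been matched before it finalizes.

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, buf = 0;
        MPI_Request req;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            MPI_Isend(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            MPI_Request_free(&req);   /* handle nullified; completion not assured */
        } else if (rank == 1) {
            MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        }

        /* Process 1 enters the barrier only after its receive has
           completed, so the Isend is known to be complete. */
        MPI_Barrier(MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }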

If no MPI_BUFFER_DETACH occurs between an MPI_BSEND (or other buffered send) and MPI_FINALIZE, the MPI_FINALIZE implicitly supplies the MPI_BUFFER_DETACH.


Example This program is correct, and after the MPI_Finalize, it is as if the buffer had been detached.

rank 0                          rank 1 
===================================================== 
...                             ... 
buffer = malloc(1000000);       MPI_Recv(); 
MPI_Buffer_attach();            MPI_Finalize(); 
MPI_Bsend();                    exit();               
MPI_Finalize(); 
free(buffer); 
exit();                         
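
As a compilable C sketch of the same pattern (the 1000000-byte buffer follows the example above; the message payload is illustrative):

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        int rank, msg = 1;
        char *buffer = NULL;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            buffer = malloc(1000000);
            MPI_Buffer_attach(buffer, 1000000);
            MPI_Bsend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        }

        /* No explicit MPI_Buffer_detach on process 0: MPI_Finalize
           implicitly supplies it, so freeing afterwards is safe. */
        MPI_Finalize();
        if (rank == 0)
            free(buffer);
        return 0;
    }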


Example In this example, MPI_Iprobe() must return a FALSE flag. MPI_Test_cancelled() must return a TRUE flag, independent of the relative order of execution of MPI_Cancel() in process 0 and MPI_Finalize() in process 1.

The MPI_Iprobe() call is there to make sure the implementation knows that the "tag1" message exists at the destination, without being able to claim that the user knows about it.


rank 0                          rank 1 
======================================================== 
MPI_Init();                     MPI_Init(); 
MPI_Isend(tag1); 
MPI_Barrier();                  MPI_Barrier(); 
                                MPI_Iprobe(tag2); 
MPI_Barrier();                  MPI_Barrier(); 
                                MPI_Finalize(); 
                                exit(); 
MPI_Cancel(); 
MPI_Wait(); 
MPI_Test_cancelled(); 
MPI_Finalize(); 
exit(); 
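
A compilable C rendering of this example might look as follows; this is a sketch, with the concrete tag values 1 and 2 standing in for tag1 and tag2.

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, flag, buf = 0;
        MPI_Request req;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0)
            MPI_Isend(&buf, 1, MPI_INT, 1, 1 /* tag1 */, MPI_COMM_WORLD, &req);

        MPI_Barrier(MPI_COMM_WORLD);

        if (rank == 1)
            /* Probes for tag2, so flag must come back false; the call
               merely lets the implementation learn that the tag1
               message has arrived. */
            MPI_Iprobe(0, 2 /* tag2 */, MPI_COMM_WORLD, &flag, &status);

        MPI_Barrier(MPI_COMM_WORLD);

        if (rank == 0) {
            MPI_Cancel(&req);
            MPI_Wait(&req, &status);
            MPI_Test_cancelled(&status, &flag);   /* flag must be true */
        }

        MPI_Finalize();
        return 0;
    }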
 

Advice to implementors.

An implementation may need to delay the return from MPI_FINALIZE until all potential future message cancellations have been processed. One possible solution is to place a barrier inside MPI_FINALIZE. (End of advice to implementors.)

Once MPI_FINALIZE returns, no MPI routine (not even MPI_INIT) may be called, except for MPI_GET_VERSION, MPI_INITIALIZED, and the MPI-2 function MPI_FINALIZED. Each process must complete any pending communication it initiated before it calls MPI_FINALIZE. If the call returns, each process may continue local computations, or exit, without participating in further MPI communication with other processes. MPI_FINALIZE is collective on MPI_COMM_WORLD.
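
For instance, the following minimal sketch uses the MPI-2 function MPI_FINALIZED mentioned above to confirm, after MPI_FINALIZE has returned, that MPI is finished:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int flag;

        MPI_Init(&argc, &argv);
        /* ... communication ... */
        MPI_Finalize();

        /* After MPI_Finalize, only MPI_Get_version, MPI_Initialized,
           and MPI_Finalized may be called. */
        MPI_Finalized(&flag);          /* flag is now true */
        if (flag)
            printf("MPI has been finalized\n");
        return 0;
    }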


Advice to implementors.

Even though a process has completed all the communication it initiated, such communication may not yet be completed from the viewpoint of the underlying MPI system. E.g., a blocking send may have completed, even though the data is still buffered at the sender. The MPI implementation must ensure that a process has completed any involvement in MPI communication before MPI_FINALIZE returns. Thus, if a process exits after the call to MPI_FINALIZE, this will not cause an ongoing communication to fail. (End of advice to implementors.)

Although it is not required that all processes return from MPI_FINALIZE, it is required that at least process 0 in MPI_COMM_WORLD return, so that users can know that the MPI portion of the computation is over. In addition, in a POSIX environment, users may wish to supply an exit code for each process that returns from MPI_FINALIZE.


Example The following illustrates why it is required that at least one process return, and that process 0 be known to be among the processes that return. One wants code like the following to work no matter how many processes return.


    ... 
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank); 
    ... 
    MPI_Finalize(); 
    if (myrank == 0) { 
        resultfile = fopen("outfile","w"); 
        dump_results(resultfile); 
        fclose(resultfile); 
    } 
    exit(0); 


