The MPI 4.0 standardization efforts aim at adding new techniques, approaches, and concepts to the MPI standard that will help MPI address the needs of current and next-generation applications and architectures. In particular, the following additions are currently being proposed and worked on:

  • Extensions to better support hybrid programming models
  • Support for fault tolerance in MPI applications
  • Persistent collectives
  • Performance Assertions and Hints
  • RMA/One-sided communication

Additionally, several working groups are working on new ideas and concepts, including:

  • Active messages
  • Stream messaging
  • Rework of the MPI profiling interface
  • Extensions to MPI_T
  • Generalized requests
  • Hybrid MPI+X concerns (esp. MPI+CAF)
  • Send cancelation
  • Attribute callback
  • Large count

Further, the Tools WG is discussing additional third-party tool interfaces, which are generally published as side documents:

  • Handle introspection from debuggers
  • Debug DLL detection and identification

Note, though, that all of these efforts and new concepts are currently only being discussed or proposed; there is no guarantee that any particular one will be included in any upcoming MPI version.

Process for proposing new ideas for MPI 4.0

The forum encourages new items to be brought forward through the respective working group; all working groups are listed below. The working group is the place for discussion and for creating a preliminary proposal, and it drives the socialization of the idea within the forum once a certain level of maturity has been reached. Once the idea is mature enough, the working group helps develop a formal proposal, which includes the proposed text as well as an entry in the MPI ticket management system linked from the Wiki. Once complete and deemed ready by the working group, the proposal goes through the MPI Forum voting process, which is detailed in the next section.

Link to the MPI-Forum GitHub Issue/Ticket System

Voting Rules

On June 30, 2020, the MPI Forum adopted version 3.3 of these voting rules, effective the same day.

Active Working Groups

The following working groups are currently participating in the MPI 4.0 efforts. For more information on each working group, current topics, and meeting schedules, please follow the links to the respective Wiki pages.

Collective, Communicators, Context, Persistent, Partitioned, Groups, Topologies

  • Leads: Torsten Hoefler, Andrew Lumsdaine, Anthony Skjellum
  • Scope: This working group considers cross-cutting issues of groups, context, communicators, and collective operations as well as features such as persistence, partitioning, topologies, and operational semantics (e.g., blocking, nonblocking, local, synchronizing) thereof.

Fault Tolerance

  • Leads: Wesley Bland, Aurélien Bouteiller and Rich Graham
  • Scope: To define any additional support needed in the MPI standard to enable implementation of portable fault-tolerant solutions for MPI-based applications.


Hardware Topologies

  • Lead: Guillaume Mercier
  • Scope: Address questions such as: how can hardware resources (I/O, cores, caches, I/O proxies, etc.) be discovered, queried, and distributed among execution flows? Define portable primitives inside MPI to explore and take advantage of the hardware topology at either the node or the process level.

Hybrid & Accelerator

  • Leads: Pavan Balaji and Jim Dinan
  • Scope: Ensure that MPI has the features necessary to facilitate efficient hybrid programming and investigate what changes are needed in MPI to better support traditional thread interfaces (e.g., Pthreads, OpenMP), emerging interfaces (like TBB, OpenCL, CUDA, and Ct), and PGAS (UPC, CAF, etc.).


Languages

  • Leads: Martin Ruefenacht, Tony Skjellum
  • Scope: Ensure that MPI has robust support for present and future language expressions, and introduce new languages encapsulating the MPI concepts.

Remote Memory Access

  • Leads: Bill Gropp and Rajeev Thakur
  • Scope: To re-examine the MPI RMA interface and consider additions and/or changes needed to better support the one-sided programming model within MPI.

Semantic Terms

  • Leads: Purushotham Bangalore and Rolf Rabenseifner
  • Scope: Review and update semantic terms used throughout the MPI Standard.


Sessions

  • Leads: Dan Holmes, Howard Pritchard
  • Scope: Explore alternate concepts to MPI_Init and MPI_Finalize.


Tools

  • Leads: Kathryn Mohror and Marc-Andre Hermanns
  • Scope: Definition of interfaces for debugging and performance tools.

Working Groups on Hold

In addition to the active working groups, several working groups exist that are currently on hold.


Fortran

  • Scope: To investigate a modernisation of the Fortran language bindings beyond Fortran 2008.

Generalized Requests

  • Scope: Redefine the generalized requests interface. A more flexible interface between user-defined requests and the MPI library is required to allow the provider of a generalized request to integrate a progress function inside the MPI library. The ultimate goal is to allow progress on generalized requests without a special test or wait function.


I/O

  • Scope: Definition of API extensions for I/O operations.

Point to Point Communication

  • Scope: To re-examine the MPI peer communication semantics and interface, and consider additions and/or changes needed to better support point-to-point data movement within MPI.


Persistence

  • Scope: Work on definitions and specifications of operations that support higher-performance forms of existing MPI operations when there is an ability to “plan once” and “amortize costs”. Current work focuses on collective operations and neighborhood collectives.

Large Counts

  • Scope: Understanding and fixing the issues associated with integer counts and displacements.

Chapter Committees

#  | Chapter                                  | Chair                  | Members
   | Front Matter                             | Bill Gropp             | Rolf Rabenseifner, Martin Schulz
1  | Introduction                             | Bill Gropp             | Rolf Rabenseifner, Martin Schulz
2  | MPI Terms and Conventions                | Claudia Blaas-Schenner | Rolf Rabenseifner, Bill Gropp, Tony Skjellum, Puri Bangalore, Guillaume Mercier, Dan Holmes, Julien Jaeger
3  | Point-to-Point Communication             | Dan Holmes             | Ken Raffenetti, Ryan Grant, Bill Gropp, Brian Smith
4  | Partitioned Communication                | Ryan Grant             | Tony Skjellum, Puri Bangalore, Dan Holmes, Matthew Dosanjh
5  | Datatypes                                | George Bosilca         | Bill Gropp, Martin Ruefenacht, Dan Holmes
6  | Collective Communication                 | Tony Skjellum          | Torsten Hoefler, Brian Smith, Wesley Bland, Martin Schulz, Julien Jaeger
7  | Groups, Contexts, Communicators, Caching | Guillaume Mercier      | Bill Gropp, Tony Skjellum, Pavan Balaji
8  | Process Topologies                       | Rolf Rabenseifner      | Guillaume Mercier, Claudia Blaas-Schenner, Torsten Hoefler, Tony Skjellum, Mahdieh Ghazimirsaeed, Christoph Niethammer
9  | MPI Environmental Management             | George Bosilca         | Ken Raffenetti, Wesley Bland, Thomas Naughton
10 | The Info Object                          | Martin Schulz          | Ryan Grant, Guillaume Mercier, Wesley Bland
11 | Process Creation and Management          | Howard Pritchard       | Ken Raffenetti, Dan Holmes, Martin Schulz, Thomas Naughton
12 | One-Sided Communication                  | Bill Gropp             | Pavan Balaji, Joseph Schuchart, Nathan Hjelm, Artem Polyakov
13 | External Interfaces                      | Martin Schulz          | Pavan Balaji, Brian Smith, Tony Skjellum
14 | I/O                                      | Tony Skjellum          | Quincey Koziol, Shinji Sumimoto
15 | Tool Support                             | Marc-Andre Hermanns    | Martin Schulz, Kathryn Mohror
16 | Deprecated Functions                     | Rolf Rabenseifner      | Martin Schulz, Wesley Bland
17 | Removed Interfaces                       | Rolf Rabenseifner      | Martin Schulz, Wesley Bland
18 | Backward Incompatibilities               | Wesley Bland           | Martin Schulz
19 | Language Bindings                        | Puri Bangalore         | Tony Skjellum, Rolf Rabenseifner
A  | Language Bindings Summary                | Rolf Rabenseifner      | Puri Bangalore, Tony Skjellum, Hubert Ritzdorf
B  | Change-Log                               | Rolf Rabenseifner      | Marc-Andre Hermanns, Hubert Ritzdorf