Meeting Agenda

June 8th - 10th, 2009


Agenda

All times US Central

Mon, June 8, 2009

Time Title Call Info Recording
1:00pm - 1:30pm Reports from the working groups Zoom Info
1:30pm - 2:00pm MPI 3.0 process and timeline discussion Zoom Info
2:00pm - 5:00pm MPI 2.2 Voting Zoom Info
5:00pm - 6:30pm Hybrid Programming Working Group Zoom Info
6:30pm - 7:00pm Nonblocking Collectives - Second Vote Zoom Info

Tue, June 9, 2009

Time Title Call Info Recording
9:00am - 10:45am MPI 3.0 Plenary session - Corrections Zoom Info
10:45am - 11:00am Break
11:00am - 1:00pm Working Lunch - RMA Working Group; Fortran Working Group Zoom Info
1:00pm - 3:00pm Collectives Working Group Zoom Info
3:00pm - 5:00pm Active Messages Working Group Zoom Info
5:00pm - 5:15pm Break
5:15pm - 7:00pm MPI 3.0 Plenary session - Corrections, cont'd Zoom Info

Wed, June 10, 2009

Time Title Call Info Recording
9:00am - 11:30am Fault Tolerance Working Group; Tools Working Group Zoom Info
11:30am - 12:00pm Wrap up Zoom Info

Votes

First Vote

Issue #  Topic
7 Function pointer typedefs: '_function' vs. '_fn'
13 Small clarifications in chapter 10
18 New Predefined Datatypes
19 Inconsistent comments about intercommunicators
24 Add a local Reduction Function
27 Regular (non-vector) version of MPI_Reduce_scatter
31 Add MPI_IN_PLACE option to Alltoall
33 Fix Scalability Issues in Graph Topology Interface
37 Clarify semantics of one-sided operations when changing synchronization mode
50 Ibsend and Irsend Advice to Users misleading
51 Inconsistent use of MPI_ANY_SOURCE in argument description
55 MPI-2.1 Cross-language attribute example is wrong
57 Fortran specific length types are not consistently listed
59 Clarification on MPI::FILE_NULL, MPI::WIN_NULL and MPI::COMM_NULL
64 Parameterized and optional named predefined datatypes in reduction operations
65 Predefined handles before MPI_Init and constants in general, Clarification Solution 1
66 Extending MPI_COMM_CREATE to create several disjoint sub-communicators from an intracommunicator
70 Misleading rationale for MPI_Test (and MPI_Win_test)
71 Specify order of attribute delete callbacks on MPI_COMM_SELF at MPI_FINALIZE
72 Convenience function: MPI_CART_SHIFT_VECTOR
77 Version 2.2 intro and text
80 Misleading discussion of thread ordering
94 Add MPI_IN_PLACE option to Exscan
98 Additional change required for changes to Send Buffer restriction (#45)
99 Change-Log is also used for important clarifications for the MPI users.
100 Change-Log from Version 2.1 to Version 2.2
103 Fortran in this document refers to Fortran 90
105 Matching arguments and collective routines.
107 Types of predefined constants in Appendix A
116 Data type chapter example corrections
121 Filling out list of CHAR types in Section 5.9.3
122 Typo in MPI_CART_SHIFT example 7.4
124 Slightly changed description of MPI_REDUCE_SCATTER, explanation on 'in place'
127 Add C++ versions of Fortran COMPLEX8 etc.
128 Verify and correct example 'Building Name Service for Intercommunications'
132 Change 'a data item' to 'data' in allgather on intercommunicators
135 Define matching semantics of collective operations in threaded environments
136 Advice to users about associativity in reduction operations
137 MPI_REQUEST_GET_STATUS should allow inactive and NULL request arguments
141 MPI_Aint/MPI_ADDRESS_KIND and MPI_Offset/MPI_OFFSET_KIND equality
142 Fix incorrect mentions of 3 routines
143 MPI_Request_free bad advice to users
146 Fix MPI_INIT description text
148 Missing entries in the Index pages.
149 Obsolete reference to deprecated function MPI_Attr_get
150 Deprecate the C++ bindings
151 Predefined handles before MPI_Init and constants in general, Enhancement Solution 2

Second Vote

Issue #  Topic
109 MPI-3: Nonblocking Collective Operations
1 Fortran MPI_*_ERRHANDLER callback functions are varargs
3 Repeating a Neighbor in a Graph Communicator Ambiguity with MPI_GRAPH_NEIGHBOR[_COUNT]
4 Remove MPI-2.1 A.1.1 p494:31-32 table
8 Text Updates to Language Bindings Chapter
30 Clarification to intercomm MPI_Barrier
40 MPI-2.1 Errata MPI::F_DOUBLE_COMPLEX (page 495 line 11)
43 MPI_REPLACE in MPI_Accumulate
44 Non blocking versus non-blocking versus nonblocking
53 Explicitly encourage routines for 'good' one-sided memory for all memory types
60 Modernize example on p 279
61 MPI-2.1 Change-Log: Version number modified to 2.1
63 MPI_CHAR for printable characters - C & C++ consistency
67 MPI-2.1 Errata: Error in name of participating institution for MPI 2.1
74 Nonnegative vs. non-negative
87 Wording changes to collective chapter for consistency
89 Philosophical difference with current classification of barriers
90 Minor grammar corrections for collectives chapter
91 Undefined term in description of Reduce-Scatter
92 Need to fix MPI_EXSCAN advice to users
93 Inconsistent description of arrays in collective operations
97 Small bug in RMA example
101 Version number changed to (2,2)
104 MPI_ARGV_NULL and MPI_ARGVS_NULL missing on p465:13-15.
113 Minor typo in MPI_ALLTOALLV
115 Corrections to point-to-point chapter examples
118 Corrections to collectives chapter: fix examples and remove deprecated functions
120 Typos in Collectives Chapter
123 Fix errors in example in profiling chapter