After thinking about this for quite some time, and after some
helpful but not fully satisfying discussion on comp.parallel.mpi, I now
venture to use some bandwidth in this discussion group.
According to the MPI 1.1 standard
(sections 2.4.5 and 2.9 of "MPI: The Complete Reference"),
a receive with MPI_ANY_SOURCE does
not guarantee fairness, while MPI_Waitsome does. My concern is that
if the number of processors P involved is large (I use machines with P
up to 1024), MPI_Waitsome must inevitably become quite slow, because at
least every field of the array_of_requests argument must be inspected,
since any of them might have completed.
On the other hand, I have used and developed a number of algorithms
that depend critically on an efficient, fair receive operation
accepting messages from any processor. (I can give details if anyone
is interested.) Several of these have the property that message
traffic is randomized: for a particular receive, only a small number
of sends will have completed. The penalty for traversing a large
array on every call therefore appears quite large here. (Furthermore,
I would bet that for random traffic most MPI implementations actually
do behave fairly with MPI_ANY_SOURCE. Any comments?)
There are several possible additions to MPI which would help:
1. Always require fairness.
2. Offer a fair variant of Receive (e.g. a fair counterpart of
   MPI_Recv).
3. Introduce variants of MPI_Waitsome, ... which are
   optimized for a small number of successful requests and
   an invariant array_of_requests parameter.
While the first alternative would be quite convenient from the user's
point of view, it might cause severe problems for some
implementations. The third alternative offers some additional
optimization opportunities unrelated to fairness, but it is also the
most complicated. The middle one seems quite sensible to me.
Now, why did I also send a carbon copy to mpi-1sided?
- In section 4.9.3 of the MPI-2 draft of Jan 12, 1996,
  it is stated that remote memory access is fair.
  On many distributed-memory machines this appears to
  require a fair server which performs the memory access.
  Why is the same functionality not offered for receive?
- I see a possible workaround for my problem:
  post a receive for every possible sender and
  associate a handler routine with each request
  which performs the actions I would otherwise perform
  after an MPI_Recv(...MPI_ANY_SOURCE...).
  Unfortunately, the draft standard does not
  specify whether the handlers are called in a fair order.
  The analogy to the remote memory access functions suggests
  yes. But if the answer is no, the workaround will fail.
Any general comments?
Has nobody reported this kind of problem before?
Are there any other workarounds (perhaps in MPI-2)?
Are there many implementations which cause problems
in the case of random communication?
University of Karlsruhe