The case of MPI_SPAWN_MULTIPLE_INDEPENDENT will be worse,
since the parent can't directly tell the children.
However, the situation is no worse than MPI-1. For instance,
if you start up an MPI-1 program with 3 of appA and 4 of appB,
the application knows only that there are 7 processes. It
doesn't automatically know how many of each type there
are. At NAS, we have this MPIRUN library in which the
application calls MPIRUN_INIT to find out how many
processes of each type there are. MPIRUN_INIT must coordinate
with the process launching mechanism (in our case, the NAS
version of "mpirun"). This is all handled externally:
mpirun creates a file and MPIRUN_INIT reads it.
There is no reason why this same approach can't be
taken in MPI-2. A library MPIRUN_SPAWN could be a
wrapper around MPI_SPAWN_INDEPENDENT that gets the information to
the children. In fact, the easiest way to implement
it would be to call MPI_SPAWN under the covers, send
information to the children through the intercommunicator,
and free the intercommunicator (though you'd really like
MPI_PARENT to subsequently return MPI_COMM_NULL...).
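The wrapper might look roughly like the pseudocode below. Since
MPI_SPAWN_INDEPENDENT is only a draft interface, its signature here is
a guess, as are the struct and tag names; the point-to-point calls over
the intercommunicator are ordinary MPI-1 ones, since MPI-1 allows
point-to-point (but not collective) communication on intercommunicators.

```c
/* Pseudocode sketch of MPIRUN_SPAWN.  The MPI_SPAWN signature,
   group_info struct, and GROUP_INFO_TAG are assumptions. */
struct group_info { char name[64]; int nprocs; };

int MPIRUN_SPAWN(char *command, char **argv, int nprocs,
                 struct group_info *groups, int ngroups)
{
    MPI_Comm children;
    int nchildren, i;

    /* Spawn under the covers, keeping the intercommunicator
       just long enough to hand over the group table. */
    MPI_SPAWN(command, argv, nprocs, /* ... */, &children);

    /* Send the table to every child individually -- MPI-1 has no
       intercommunicator collectives. */
    MPI_Comm_remote_size(children, &nchildren);
    for (i = 0; i < nchildren; i++) {
        MPI_Send(&ngroups, 1, MPI_INT, i, GROUP_INFO_TAG, children);
        MPI_Send(groups, ngroups * sizeof(*groups), MPI_BYTE,
                 i, GROUP_INFO_TAG, children);
    }

    /* Done with it -- though, as noted, you'd really want MPI_PARENT
       to return MPI_COMM_NULL in the children after this. */
    MPI_Comm_free(&children);
    return 0;
}
```

The child-side counterpart would do the matching MPI_Recv's on the
communicator returned by MPI_PARENT before the wrapper frees it.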
> 4) If we go back to the old definition, without the min and max, there
> is still no way for the spawned program to know its own size. It only
> knows the size of all the programs together, since there is only one
> MPI_COMM_WORLD. Some outside help will be needed. One way is to pass
> this information back in argv when the program calls MPI_Init.
> Whatever mechanism is chosen, it needs to be defined.
I think the problem here really exists for MPI-1 applications
as well. There is no standard way for the application
to find out about logical groups of processes (where groups
may correspond to different binaries, different disciplines,
different grids, etc.) that are determined outside the
application. The method described above will work, but
it can get a bit clumsy.