You're right. There has been some work on this in mpi-io and mpi-external
on MPI Datatype encoding/decoding. The remaining issue was canonical
machine format or metadata for heterogeneous transfers (as you are
discussing) and for persistent stores. Leslie Hart and I met in Boulder a
few weeks before SC'96 and discussed this issue. We arrived at a set of
solutions that meet the needs of io, external, and impl. I say "set"
because we defined the functionality and constraints, but decided to let the
Forum hammer out which of the religious implementation options to pursue.
O.K., so where's the write-up? Right underneath this 400-page manual
I've been writing for my ARPA project! I'll try to crank it out this
On Thu, 12 Dec 1996, James Cownie wrote:
> I've sent this to mpi-io as well, since they're also interested in
> canonical data representations.
> Bill Gropp has been arguing (see below) that the solution to MPI
> interoperability is most easily provided by *not* defining a limited
> MPI interface between disjoint MPI implementations, but rather defining
> a standard data representation so that users can pack data and
> transfer it via sockets, pipes, http protocol, ...
> It seems to me that this is also what is needed (and maybe already
> exists ?) in the IO chapter to handle writing files which will later
> be read by unspecified machines.
> Am I right ?
> Do we already have this ?
> Would having it help IO ?
> -- Jim
> James Cownie
> Dolphin Interconnect Solutions
> Phone : +44 117 9071438
> E-Mail: email@example.com
> > What I meant is that the only capability not (easily) provided by
> > well-documented Unix or Windows routines is the common data
> > format, particularly in translating from MPI datatypes on one system
> > to MPI datatypes on another. By exposing the communication layer
> > instead of providing an unimplementable abstract model (receiver
> > always has enough space), the problem is defined away. Note that
> > this fixes Rolf's example - interoperability is provided by using
> > preexisting mechanisms for inter-system communication, not MPI. All
> > that is needed in this case is a way for the user to import and
> > export data to and from the local MPI layers. This isn't as
> > convenient for the student examples, but more closely matches the
> > needs we've seen for such examples (where the special properties and
> > requirements of the interoperability link don't cleanly fit the MPI
> > model). A radical option, I admit, but the more I think about it,
> > the more I believe it is the correct one.
> > Bill