263. Data Access with Shared File Pointers



MPI maintains exactly one shared file pointer per collective MPI_FILE_OPEN (shared among the processes in the communicator group). The current value of this pointer implicitly specifies the offset in the data access routines described in this section. These routines use and update only the shared file pointer maintained by MPI; the individual file pointers are neither used nor updated.

The shared file pointer routines have the same semantics as the data access with explicit offset routines described in Section Data Access with Explicit Offsets, with the following modifications:


  • For the noncollective shared file pointer routines, the order in which the accesses are serialized is not deterministic. If a specific order is required, the user must enforce it with other synchronization means.
  • After a shared file pointer operation is initiated, the shared file pointer is updated to point to the next etype after the last one that will be accessed. The file pointer is updated relative to the current view of the file.





263.1. Noncollective Operations



MPI_FILE_READ_SHARED(fh, buf, count, datatype, status)
INOUT  fh        file handle (handle)
OUT    buf       initial address of buffer (choice)
IN     count     number of elements in buffer (integer)
IN     datatype  datatype of each buffer element (handle)
OUT    status    status object (Status)

int MPI_File_read_shared(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
MPI_FILE_READ_SHARED(FH, BUF, COUNT, DATATYPE, STATUS, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR
{ void MPI::File::Read_shared(void* buf, int count, const MPI::Datatype& datatype, MPI::Status& status) (binding deprecated, see Section Deprecated since MPI-2.2 ) }
{ void MPI::File::Read_shared(void* buf, int count, const MPI::Datatype& datatype) (binding deprecated, see Section Deprecated since MPI-2.2 ) }

MPI_FILE_READ_SHARED reads a file using the shared file pointer.
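As an illustration (a sketch, not standard text), the program below has each process pull the next record from a common input file on a first-come, first-served basis; the file name tasks.dat and its layout as a stream of MPI_INT records are assumptions of the example.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_File fh;
    MPI_Status status;
    int task, count;

    MPI_Init(&argc, &argv);
    MPI_File_open(MPI_COMM_WORLD, "tasks.dat",
                  MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);

    /* Each call advances the single shared pointer, so every record
     * is consumed by exactly one process, in arrival order. */
    for (;;) {
        MPI_File_read_shared(fh, &task, 1, MPI_INT, &status);
        MPI_Get_count(&status, MPI_INT, &count);
        if (count == 0)          /* nothing left: end of file reached */
            break;
        printf("got task %d\n", task);
    }

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}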

MPI_FILE_WRITE_SHARED(fh, buf, count, datatype, status)
INOUT  fh        file handle (handle)
IN     buf       initial address of buffer (choice)
IN     count     number of elements in buffer (integer)
IN     datatype  datatype of each buffer element (handle)
OUT    status    status object (Status)

int MPI_File_write_shared(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
MPI_FILE_WRITE_SHARED(FH, BUF, COUNT, DATATYPE, STATUS, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR
{ void MPI::File::Write_shared(const void* buf, int count, const MPI::Datatype& datatype, MPI::Status& status) (binding deprecated, see Section Deprecated since MPI-2.2 ) }
{ void MPI::File::Write_shared(const void* buf, int count, const MPI::Datatype& datatype) (binding deprecated, see Section Deprecated since MPI-2.2 ) }

MPI_FILE_WRITE_SHARED writes a file using the shared file pointer.
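For example, the following sketch (not standard text; the file name shared.log is an assumption) has every process append one line to a common log file. The records never interleave, but their order in the file is whatever serialization the implementation happens to choose.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_File fh;
    MPI_Status status;
    char line[64];
    int rank, len;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    len = snprintf(line, sizeof line, "hello from rank %d\n", rank);

    MPI_File_open(MPI_COMM_WORLD, "shared.log",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_shared(fh, line, len, MPI_CHAR, &status);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}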

MPI_FILE_IREAD_SHARED(fh, buf, count, datatype, request)
INOUT  fh        file handle (handle)
OUT    buf       initial address of buffer (choice)
IN     count     number of elements in buffer (integer)
IN     datatype  datatype of each buffer element (handle)
OUT    request   request object (handle)

int MPI_File_iread_shared(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Request *request)
MPI_FILE_IREAD_SHARED(FH, BUF, COUNT, DATATYPE, REQUEST, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, REQUEST, IERROR
{ MPI::Request MPI::File::Iread_shared(void* buf, int count, const MPI::Datatype& datatype) (binding deprecated, see Section Deprecated since MPI-2.2 ) }

MPI_FILE_IREAD_SHARED is a nonblocking version of the MPI_FILE_READ_SHARED interface.
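A minimal sketch of its use, assuming fh is already open for reading on the communicator: because the shared pointer is updated when the operation is initiated, other processes can post their own shared accesses while this one is in flight.

MPI_Request request;
MPI_Status  status;
double      buf[1024];

MPI_File_iread_shared(fh, buf, 1024, MPI_DOUBLE, &request);
/* ... computation unrelated to buf may overlap the I/O here ... */
MPI_Wait(&request, &status);   /* buf is valid only after completion */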

MPI_FILE_IWRITE_SHARED(fh, buf, count, datatype, request)
INOUT  fh        file handle (handle)
IN     buf       initial address of buffer (choice)
IN     count     number of elements in buffer (integer)
IN     datatype  datatype of each buffer element (handle)
OUT    request   request object (handle)

int MPI_File_iwrite_shared(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Request *request)
MPI_FILE_IWRITE_SHARED(FH, BUF, COUNT, DATATYPE, REQUEST, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, REQUEST, IERROR
{ MPI::Request MPI::File::Iwrite_shared(const void* buf, int count, const MPI::Datatype& datatype) (binding deprecated, see Section Deprecated since MPI-2.2 ) }

MPI_FILE_IWRITE_SHARED is a nonblocking version of the MPI_FILE_WRITE_SHARED interface.
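A corresponding write-side sketch, assuming fh is open for writing; fill_results is a hypothetical producer of the data. The buffer must not be modified until MPI_Wait returns.

MPI_Request request;
MPI_Status  status;
double      result[256];

fill_results(result);          /* hypothetical: compute the data to write */
MPI_File_iwrite_shared(fh, result, 256, MPI_DOUBLE, &request);
/* ... work that does not touch result[] may overlap the I/O ... */
MPI_Wait(&request, &status);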





263.2. Collective Operations



The semantics of a collective access using a shared file pointer are that the accesses to the file occur in the order determined by the ranks of the processes within the group. For each process, the location in the file at which data is accessed is the position at which the shared file pointer would be after all processes whose ranks within the group are less than that of this process had accessed their data. In addition, to prevent subsequent shared offset accesses by the same processes from interfering with this collective access, the call may return only after all the processes within the group have initiated their accesses. When the call returns, the shared file pointer points to the next etype accessible, according to the file view used by all processes, after the last etype requested.


Advice to users.

There may be some programs in which all processes in the group need to access the file using the shared file pointer, but the program may not require that data be accessed in order of process rank. In such programs, using the shared ordered routines (e.g., MPI_FILE_WRITE_ORDERED rather than MPI_FILE_WRITE_SHARED) may enable an implementation to optimize access, improving performance. (End of advice to users.)

Advice to implementors.

Accesses to the data requested by all processes do not have to be serialized. Once all processes have issued their requests, locations within the file for all accesses can be computed, and accesses can proceed independently from each other, possibly in parallel. (End of advice to implementors.)

MPI_FILE_READ_ORDERED(fh, buf, count, datatype, status)
INOUT  fh        file handle (handle)
OUT    buf       initial address of buffer (choice)
IN     count     number of elements in buffer (integer)
IN     datatype  datatype of each buffer element (handle)
OUT    status    status object (Status)

int MPI_File_read_ordered(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
MPI_FILE_READ_ORDERED(FH, BUF, COUNT, DATATYPE, STATUS, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR
{ void MPI::File::Read_ordered(void* buf, int count, const MPI::Datatype& datatype, MPI::Status& status) (binding deprecated, see Section Deprecated since MPI-2.2 ) }
{ void MPI::File::Read_ordered(void* buf, int count, const MPI::Datatype& datatype) (binding deprecated, see Section Deprecated since MPI-2.2 ) }

MPI_FILE_READ_ORDERED is a collective version of the MPI_FILE_READ_SHARED interface.
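Illustrative sketch (not standard text), assuming fh is open on the communicator and the view's etype matches MPI_DOUBLE:

#define CHUNK 256
double part[CHUNK];
MPI_Status status;

/* Collective: rank 0 receives the first CHUNK etypes starting at the
 * shared pointer's current position, rank 1 the next CHUNK, and so
 * on; the shared pointer ends up after the last chunk read. */
MPI_File_read_ordered(fh, part, CHUNK, MPI_DOUBLE, &status);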

MPI_FILE_WRITE_ORDERED(fh, buf, count, datatype, status)
INOUT  fh        file handle (handle)
IN     buf       initial address of buffer (choice)
IN     count     number of elements in buffer (integer)
IN     datatype  datatype of each buffer element (handle)
OUT    status    status object (Status)

int MPI_File_write_ordered(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
MPI_FILE_WRITE_ORDERED(FH, BUF, COUNT, DATATYPE, STATUS, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR
{ void MPI::File::Write_ordered(const void* buf, int count, const MPI::Datatype& datatype, MPI::Status& status) (binding deprecated, see Section Deprecated since MPI-2.2 ) }
{ void MPI::File::Write_ordered(const void* buf, int count, const MPI::Datatype& datatype) (binding deprecated, see Section Deprecated since MPI-2.2 ) }

MPI_FILE_WRITE_ORDERED is a collective version of the MPI_FILE_WRITE_SHARED interface.
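Contrast this with the MPI_FILE_WRITE_SHARED sketch earlier: the collective form below produces a deterministic, rank-ordered layout even when the contributions differ in size per process. It assumes fh is open for writing with an MPI_INT-compatible view.

int rank, n, vals[8];
MPI_Status status;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
n = rank % 8 + 1;                  /* a different element count per process */
for (int i = 0; i < n; i++)
    vals[i] = rank;

/* Collective: rank 0's n values land first, then rank 1's, and so on. */
MPI_File_write_ordered(fh, vals, n, MPI_INT, &status);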





263.3. Seek



If MPI_MODE_SEQUENTIAL mode was specified when the file was opened, it is erroneous to call the following two routines (MPI_FILE_SEEK_SHARED and MPI_FILE_GET_POSITION_SHARED).

MPI_FILE_SEEK_SHARED(fh, offset, whence)
INOUT  fh      file handle (handle)
IN     offset  file offset (integer)
IN     whence  update mode (state)

int MPI_File_seek_shared(MPI_File fh, MPI_Offset offset, int whence)
MPI_FILE_SEEK_SHARED(FH, OFFSET, WHENCE, IERROR)
INTEGER FH, WHENCE, IERROR
INTEGER(KIND=MPI_OFFSET_KIND) OFFSET
{ void MPI::File::Seek_shared(MPI::Offset offset, int whence) (binding deprecated, see Section Deprecated since MPI-2.2 ) }

MPI_FILE_SEEK_SHARED updates the shared file pointer according to whence, which has the following possible values:

  • MPI_SEEK_SET: the pointer is set to offset
  • MPI_SEEK_CUR: the pointer is set to the current pointer position plus offset
  • MPI_SEEK_END: the pointer is set to the end of file plus offset

MPI_FILE_SEEK_SHARED is collective; all the processes in the communicator group associated with the file handle fh must call MPI_FILE_SEEK_SHARED with the same values for offset and whence.

The offset can be negative, which allows seeking backwards. It is erroneous to seek to a negative position in the view.
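For instance (a sketch, not standard text; HEADER_ETYPES is a hypothetical constant giving the header length, in etypes of the current view):

const MPI_Offset HEADER_ETYPES = 16;   /* hypothetical header length */

/* Both calls are collective; every process passes identical arguments. */
MPI_File_seek_shared(fh, 0, MPI_SEEK_SET);              /* rewind to start of view */
MPI_File_seek_shared(fh, HEADER_ETYPES, MPI_SEEK_CUR);  /* then skip the header */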

MPI_FILE_GET_POSITION_SHARED(fh, offset)
IN   fh      file handle (handle)
OUT  offset  offset of shared pointer (integer)

int MPI_File_get_position_shared(MPI_File fh, MPI_Offset *offset)
MPI_FILE_GET_POSITION_SHARED(FH, OFFSET, IERROR)
INTEGER FH, IERROR
INTEGER(KIND=MPI_OFFSET_KIND) OFFSET
{ MPI::Offset MPI::File::Get_position_shared() const (binding deprecated, see Section Deprecated since MPI-2.2 ) }

MPI_FILE_GET_POSITION_SHARED returns, in offset, the current position of the shared file pointer in etype units relative to the current view.


Advice to users.

The offset can be used in a future call to MPI_FILE_SEEK_SHARED using whence = MPI_SEEK_SET to return to the current position. To set the displacement to the current file pointer position, first convert offset into an absolute byte position using MPI_FILE_GET_BYTE_OFFSET, then call MPI_FILE_SET_VIEW with the resulting displacement. (End of advice to users.)
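A sketch of both uses described in the advice above; etype and filetype are placeholders for whatever view fh currently has.

MPI_Offset pos, disp;
MPI_Datatype etype = MPI_DOUBLE, filetype = MPI_DOUBLE;  /* placeholders */

MPI_File_get_position_shared(fh, &pos);        /* position, in etype units */

/* ... intervening shared file pointer accesses ... */

/* Return to the remembered position (collective; all ranks pass pos). */
MPI_File_seek_shared(fh, pos, MPI_SEEK_SET);

/* Or anchor a new view at that position: convert it to an absolute
 * byte displacement first, then install the view. */
MPI_File_get_byte_offset(fh, pos, &disp);
MPI_File_set_view(fh, disp, etype, filetype, "native", MPI_INFO_NULL);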




