Languages and message passing



Data parallel programming languages


Fortran-90

This is a superset of Fortran-77. Fortran-90 provides significant new facilities, some of which, such as array syntax, make it easier for a compiler to determine that operations may be carried out concurrently.

Compilers are available from NAG, IBM, DEC, Cray Research, and others.

Array Features

The array features are among the most significant new aspects of Fortran-90.

Array section references

INTEGER, DIMENSION(1:100) :: array1
INTEGER, DIMENSION(1:50)  :: array2
INTEGER, DIMENSION(1:100) :: array3, array4

array1(1:50)    = 5                ! set the first 50 elements to 5
array1(51:100)  = 25               ! set the last 50 elements to 25
array2          = array1(1:50)     ! copy a section into a whole array
array3(51:100)  = array2           ! copy a whole array into a section
array3(1:50)    = array1(51:100)   ! copy a section into a section
array4(1:100:2) = array2           ! strided section: every other element

Array Intrinsic Functions
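Fortran-90 provides intrinsic functions that operate on whole arrays, such as SUM, MAXVAL, DOT_PRODUCT, and MATMUL, and the usual numeric intrinsics can be applied element-wise to arrays. A minimal sketch (the array names are invented for illustration):

REAL, DIMENSION(1:100) :: A, B
REAL, DIMENSION(1:10,1:10) :: M, N, P
REAL :: TOTAL, BIGGEST

A = 2.0                   ! whole-array assignment
M = 1.0
N = 3.0
TOTAL   = SUM(A)          ! sum of all elements of A (here 200.0)
BIGGEST = MAXVAL(A)       ! largest element of A
B       = SQRT(A)         ! elemental intrinsic applied to every element
P       = MATMUL(M, N)    ! matrix product of M and N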

Other features include modules, derived data types, pointers, dynamic storage allocation, and recursive procedures.


High Performance Fortran (HPF)

HPF is the latest set of extensions to the Fortran language and is still under development. Although the published specification has not yet been adopted by a standards body, almost all parallel machine vendors have announced implementation efforts.

Subset HPF

A minimal starting language, defined to encourage early releases of compilers with HPF features.

Goals of HPF

Some goals of HPF, as laid down by the HPF Forum, are:
	Support for data parallel programming.
	Top performance on MIMD and SIMD machines with non-uniform memory access costs.
	Compatibility with the Fortran-90 standard.

Data Parallelism

This means executing the same operations (either synchronously or asynchronously) in parallel on different sets of data. HPF expresses data parallelism in several ways. There are also constructs to describe operations that can be performed in parallel if the computer has the resources:
	The FORALL statement and construct, for example:

	  FORALL (I = 0:9)
	    X(I+1) = X(I)
	  END FORALL

All right-hand sides are evaluated before any assignment takes place, so this shifts ten elements of X up by one position, without the sequential dependence an ordinary DO loop would impose.
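Another such construct is the INDEPENDENT directive, which asserts that the iterations of a loop may be executed in any order, or concurrently. A minimal sketch (the arrays X, Y and the index array IDX are invented for illustration; the assertion is valid only if IDX contains no repeated values):

	!HPF$ INDEPENDENT
	DO I = 1, N
	  Y(IDX(I)) = X(I)
	END DO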

Data Mapping

HPF directives can describe how data is to be divided among the processors of a parallel machine.
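A minimal sketch using the PROCESSORS, DISTRIBUTE, and ALIGN directives: P declares an arrangement of four abstract processors, A is divided among them in contiguous blocks, and each B(I) is kept on the same processor as A(I) (the array names and processor arrangement are invented for illustration):

	REAL, DIMENSION(1:1000) :: A, B
	!HPF$ PROCESSORS P(4)
	!HPF$ DISTRIBUTE A(BLOCK) ONTO P
	!HPF$ ALIGN B(I) WITH A(I)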

Message Passing


In the message passing model, parallel processes do not have direct access to each other's local memory. Communication is achieved through the exchange of data, using SEND and RECEIVE primitives.

PVM

Parallel Virtual Machine (PVM) is a software package that allows a heterogeneous network of computers (parallel, vector, and serial) to appear as a single concurrent computational resource: a virtual machine.

Point-to-point communication

PVM uses system buffers at both the SEND and RECEIVE ends to pack and unpack data, although some recent implementations (e.g. on the Cray T3D) avoid this.
	CALL PVMFSEND(TID, MSGTAG, INFO)
This is an asynchronous send; there is no synchronous version.
	CALL PVMFRECV(TID, MSGTAG, BUFID)
This is a blocking receive.
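Putting these together, a minimal sketch of a PVM 3 exchange using the Fortran interface, assuming the task was spawned by a parent whose reply is expected (the message tags and buffer size are invented for illustration):

      PROGRAM PVMEX
      INCLUDE 'fpvm3.h'
      INTEGER MYTID, PTID, BUFID, INFO
      INTEGER NUMS(10)
C     Enrol this task in PVM and find the task id of the parent
      CALL PVMFMYTID(MYTID)
      CALL PVMFPARENT(PTID)
C     Pack an integer array into the active send buffer
      CALL PVMFINITSEND(PVMDEFAULT, BUFID)
      CALL PVMFPACK(INTEGER4, NUMS, 10, 1, INFO)
C     Asynchronous send to the parent, message tag 99
      CALL PVMFSEND(PTID, 99, INFO)
C     Blocking receive of a reply with tag 100, then unpack it
      CALL PVMFRECV(PTID, 100, BUFID)
      CALL PVMFUNPACK(INTEGER4, NUMS, 10, 1, INFO)
C     Leave the virtual machine
      CALL PVMFEXIT(INFO)
      END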

Collective communication

Tasks can broadcast a message to all members of a group, whether or not the sender belongs to that group.
	CALL PVMFBCAST(GROUP, MSGTAG, INFO)
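A minimal sketch, assuming the receivers have already joined a group (the group name 'workers', the tag, and the data are invented for illustration):

C     Each receiving task joins the group once at start-up
      CALL PVMFJOINGROUP('workers', INUM)
C     The sender packs data and broadcasts it to the whole group
      CALL PVMFINITSEND(PVMDEFAULT, BUFID)
      CALL PVMFPACK(INTEGER4, NUMS, 10, 1, INFO)
      CALL PVMFBCAST('workers', 42, INFO)
C     Group members receive it like any other message (-1 = any sender)
      CALL PVMFRECV(-1, 42, BUFID)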

Availability of PVM

PVM is freely available in source form; the software and documentation are distributed through netlib.

MPI

Message Passing Interface (MPI) is a new library specification for message passing, proposed as a standard by a broadly based committee of vendors, implementors, and users. MPI provides source-code portability for message-passing programs while allowing efficient vendor implementations. Its features include point-to-point and collective communication, process groups and communicators, derived datatypes, and virtual topologies.

Point-to-point communication

There are four send modes: standard, buffered, synchronous, and ready.

Standard Send

	CALL MPI_SEND( BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)

Receive

	CALL MPI_RECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS, IERROR)
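A minimal sketch of a two-process exchange using the standard send and the blocking receive (the message tag and payload are invented for illustration):

      PROGRAM MPIEX
      INCLUDE 'mpif.h'
      INTEGER RANK, N, IERROR
      INTEGER STATUS(MPI_STATUS_SIZE)
      CALL MPI_INIT(IERROR)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, RANK, IERROR)
      IF (RANK .EQ. 0) THEN
C        Process 0 sends one integer to process 1 with tag 7
         N = 42
         CALL MPI_SEND(N, 1, MPI_INTEGER, 1, 7,
     &                 MPI_COMM_WORLD, IERROR)
      ELSE IF (RANK .EQ. 1) THEN
C        Process 1 blocks until the matching message arrives
         CALL MPI_RECV(N, 1, MPI_INTEGER, 0, 7,
     &                 MPI_COMM_WORLD, STATUS, IERROR)
      END IF
      CALL MPI_FINALIZE(IERROR)
      END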

Collective communication

MPI also provides collective operations, such as broadcast, scatter and gather, and global reductions, in which all processes in a communicator participate.
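For example, a broadcast of N integers from process 0 to every process in MPI_COMM_WORLD (the buffer BUF and the count N are invented for illustration):

	CALL MPI_BCAST(BUF, N, MPI_INTEGER, 0, MPI_COMM_WORLD, IERROR)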

Availability of MPI

Several implementations exist; for example, the Argonne National Laboratory / Mississippi State University implementation (MPICH) is available by anonymous ftp from
info.mcs.anl.gov in pub/mpi



Submitted by Mark Johnston,
last updated on 21 February 1995.