An Introduction to HPC


Definition of HPC

There is no strict definition, but a common working one is any compute system delivering over 1 Gigaflop/s; alternatively, any computer whose performance is an order of magnitude or more greater than that of current top-end workstations.


The need for HPC

Areas suited to HPC are computationally intensive problems in science and engineering, such as weather forecasting, fluid dynamics, and molecular modelling.

Why the parallel approach?

Getting higher performance from contemporary sequential computers requires increasingly sophisticated chip design, with complex memory hierarchies and multiple functional units.

This is expensive!!


Economic issues

There are at most about 200 supercomputer sales worldwide per year.

Vendors must therefore leverage the chip designs produced in volume for PCs and workstations.


Parallel Processing

Technological and economic constraints on single processor computers have driven the approach towards parallel computation. This is more cost-effective - in terms of hardware - since commodity state-of-the-art chips may be used. The major features of parallel computers are outlined in the sections that follow.

Machine Classification

Here we classify machines not by their structure, but by how they relate instructions to the data being processed.

MIMD (Multiple Instruction, Multiple Data)

SIMD (Single Instruction, Multiple Data)

Examples: MasPar MP-2, Thinking Machines' CM-2, and others.


MIMD systems

In these systems, each processor executes its own instruction stream on its own data, and the processors run asynchronously of one another.

MIMD - tightly coupled

In shared memory systems, all processors access a single global address space and communicate through shared variables; contention for memory limits how far such systems scale.

Examples: SGI Power Challenge, Sun SPARCserver 1000.
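The shared memory model can be sketched in Go, with goroutines standing in for processors and an ordinary slice playing the role of the shared address space (the function and variable names are illustrative, not from any particular machine's API):

```go
package main

import (
	"fmt"
	"sync"
)

// parallelSum divides the shared slice among nworkers goroutines.
// Every worker reads the same underlying array (shared memory); each
// writes its partial result into its own slot to avoid contention.
func parallelSum(data []int, nworkers int) int {
	partial := make([]int, nworkers)
	var wg sync.WaitGroup
	chunk := (len(data) + nworkers - 1) / nworkers
	for w := 0; w < nworkers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			lo, hi := w*chunk, (w+1)*chunk
			if lo > len(data) {
				lo = len(data)
			}
			if hi > len(data) {
				hi = len(data)
			}
			for _, v := range data[lo:hi] {
				partial[w] += v
			}
		}(w)
	}
	wg.Wait()
	total := 0
	for _, p := range partial {
		total += p
	}
	return total
}

func main() {
	data := make([]int, 100)
	for i := range data {
		data[i] = i + 1
	}
	fmt.Println(parallelSum(data, 4)) // prints 5050, the sum of 1..100
}
```

Note that no data is copied between workers: communication happens implicitly through the shared slice, which is exactly the convenience - and the scalability hazard - of this class of machine.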


MIMD - loosely coupled

In distributed memory systems, each processor has its own private local memory, and processors communicate by passing messages over the interconnect.

Examples: Meiko CS-2, IBM SP-2, Thinking Machines' CM-5, and Cray T3D.
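The message-passing style can be sketched in Go with channels standing in for the physical interconnect (the ring-of-nodes arrangement and all names here are illustrative; real machines of this class would be programmed with a message-passing library such as PVM or MPI):

```go
package main

import "fmt"

// message carries a value from one node to another; channels stand in
// for the links of a distributed-memory machine's interconnect.
type message struct {
	from  int
	value int
}

// ringSum connects len(values) "nodes" in a ring. Node 0 injects its
// value; each node adds its own local value and forwards the running
// total, so the grand total arrives back at node 0 after one circuit.
func ringSum(values []int) int {
	n := len(values)
	links := make([]chan message, n) // links[i] is the channel into node i
	for i := range links {
		links[i] = make(chan message, 1)
	}
	done := make(chan int)
	for i := 1; i < n; i++ {
		go func(i int) {
			m := <-links[i]     // receive from left neighbour
			m.value += values[i]
			m.from = i
			links[(i+1)%n] <- m // send to right neighbour
		}(i)
	}
	links[1%n] <- message{from: 0, value: values[0]} // node 0 starts
	go func() {
		m := <-links[0] // total returns to node 0
		done <- m.value
	}()
	return <-done
}

func main() {
	fmt.Println(ringSum([]int{1, 2, 3, 4})) // prints 10
}
```

In contrast to the shared memory sketch, every piece of data a node needs from elsewhere must be sent explicitly; nothing is visible to a node except its own `values[i]` and the messages it receives.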


MIMD - workstation cluster

In these distributed memory systems, the nodes are complete workstations connected by a local area network; communication is again by message passing, but with higher latency and lower bandwidth than a dedicated interconnect.

Example: CERN HP workstation cluster for High Energy Physics event processing.
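Event processing of this kind is a task farm: the events are independent, so each is handed to whichever node is free. A sketch of the pattern (worker goroutines stand in for cluster nodes, and `processEvent` is a placeholder for the real physics computation):

```go
package main

import (
	"fmt"
	"sync"
)

// processEvent is a placeholder for the real per-event computation;
// here it simply squares the event's datum.
func processEvent(e int) int { return e * e }

// farm distributes independent events over nworkers; each worker pulls
// the next event index as soon as it is free, giving automatic load
// balance even when events take unequal time.
func farm(events []int, nworkers int) []int {
	tasks := make(chan int, len(events))
	results := make([]int, len(events))
	var wg sync.WaitGroup
	for w := 0; w < nworkers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range tasks {
				results[i] = processEvent(events[i])
			}
		}()
	}
	for i := range events {
		tasks <- i
	}
	close(tasks)
	wg.Wait()
	return results
}

func main() {
	fmt.Println(farm([]int{1, 2, 3, 4}, 2)) // prints [1 4 9 16]
}
```

Because the events share no data, this style tolerates the high latency of a workstation network: the only communication is handing out work and collecting answers.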


SIMD systems

In these systems, a single control unit broadcasts each instruction to many simple processing elements, which all execute it in lockstep, each on its own data element.

Examples include Thinking Machines' CM-2 and MasPar MP-2.
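The SIMD idea - one instruction applied simultaneously across a whole data array - can be mimicked in software as a single operation mapped over every element (a sketch only: on a real SIMD machine the processing elements execute this in hardware lockstep rather than in a loop):

```go
package main

import "fmt"

// simdAdd applies the single "instruction" a[i] + b[i] at every
// element position, mimicking lockstep execution over a data array.
func simdAdd(a, b []int) []int {
	out := make([]int, len(a))
	for i := range a { // conceptually, all positions i at once
		out[i] = a[i] + b[i]
	}
	return out
}

func main() {
	fmt.Println(simdAdd([]int{1, 2, 3}, []int{10, 20, 30})) // prints [11 22 33]
}
```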


Interconnections

Mechanisms for communication between processors are provided by the interconnection network, which may be a shared bus, a crossbar switch, or a network of point-to-point links.

Physical Node topologies

Processors, or nodes, may be connected in a variety of topologies. These include the ring, mesh, torus, tree, and hypercube.
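The hypercube illustrates why topology matters: with the conventional binary labelling of nodes, two nodes are neighbours exactly when their labels differ in one bit, so a d-dimensional cube links 2^d nodes with only d links per node. A sketch of the neighbour computation:

```go
package main

import "fmt"

// hypercubeNeighbours returns the d neighbours of a node in a
// d-dimensional hypercube: flip each of the d label bits in turn.
func hypercubeNeighbours(node, d int) []int {
	nbrs := make([]int, d)
	for bit := 0; bit < d; bit++ {
		nbrs[bit] = node ^ (1 << bit)
	}
	return nbrs
}

func main() {
	// Node 0 of a 3-cube (8 nodes) is linked to nodes 1, 2 and 4.
	fmt.Println(hypercubeNeighbours(0, 3)) // prints [1 2 4]
	fmt.Println(hypercubeNeighbours(5, 3)) // prints [4 7 1]
}
```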

A non-computing example of High Performance processing

Construction of a wall by a team of bricklayers. This illustrates how a large job can be divided among workers, and the coordination needed where their work meets.

Summary of different types of parallelism in constructing the wall

The construction of the wall also illustrates different forms of parallelism found in computing, such as data parallelism (each bricklayer builds a separate section of the wall) and pipeline parallelism (each bricklayer lays a successive course as the one below advances).
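One such form, pipeline parallelism, can be sketched in Go as channel-connected stages, each passing finished work to the next like bricklayers on successive courses (the stage functions here are purely illustrative):

```go
package main

import "fmt"

// stage applies f to every value flowing through, like one bricklayer
// in a pipeline handing finished work on to the next.
func stage(f func(int) int, in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		for v := range in {
			out <- f(v)
		}
		close(out)
	}()
	return out
}

func main() {
	src := make(chan int)
	go func() {
		for i := 1; i <= 4; i++ {
			src <- i
		}
		close(src)
	}()
	// Two pipeline stages: double, then add one. Once the pipeline
	// fills, both stages are working on different items at once.
	final := stage(func(v int) int { return v + 1 },
		stage(func(v int) int { return v * 2 }, src))
	for v := range final {
		fmt.Println(v) // prints 3, 5, 7, 9 on successive lines
	}
}
```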



Submitted by Mark Johnston,
last updated on 9 November 1994.