High Performance Computing, University of Southampton

Numerical Analysis II

An overview of advanced numerical methods, such as solution of ordinary and partial differential equations, and optimization methods (such as genetic algorithms).


High Performance Computing

High Performance Computing (HPC) is an engineering discipline which develops the technology (hardware, algorithms, and software) to deliver cost-effective computational results.

Numerical Analysis

Numerical analysis allows you to choose a solution technique for your problem which is well suited to it: accurate, stable, and efficient.

The question "why bother?" often arises, to which the answer is "to avoid investing time and effort in solving the wrong problem."


Reference names used in the text are shown in square brackets after the book name, e.g. [NR] for Numerical Recipes.

See also Ken Thomas’ multigrid bibliography.




Differential Equations

Most physical phenomena in science and engineering can be described in terms of an underlying differential equation (or set of differential equations).

The general strategy will always be to re-write differentials in terms of finite differences:


dy/dx ≈ (y(x + h) − y(x)) / h    ( 1 )

There are many different choices for finite difference schemes, and appropriate use of the correct scheme is at the heart of ensuring that the solution method is stable.
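The idea can be sketched numerically: the forward difference (f(x + h) − f(x))/h approximates f′(x) with an error that shrinks linearly in h. The function and step size below are illustrative choices:

```python
import math

def forward_difference(f, x, h):
    """Approximate f'(x) by the forward difference (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

# Approximate the derivative of sin at x = 1; the exact answer is cos(1).
h = 1e-5
approx = forward_difference(math.sin, 1.0, h)
exact = math.cos(1.0)
print(abs(approx - exact))  # error is O(h): roughly h * |f''| / 2
```

Higher-order schemes (e.g. the centred difference) trade extra function evaluations for faster error decay.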

Ordinary Differential Equations

e.g. y’ = f(t, y)

Initial value problems

We know the values yi at some starting point t0, and we would like to know them at later times. Conceptually we start at the initial value and march out cautiously along the function. Methods to use are the Runge-Kutta technique (see Numerical Recipes 16.1), or adaptive techniques, which attempt to control the errors in the solution by taking smaller steps where necessary (see NR 16.2).
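The marching idea can be sketched with a hand-written classical fourth-order Runge-Kutta step (a minimal illustration, not NR's adaptive routine; the test problem y' = y is an arbitrary choice):

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t_end, n):
    """March from t0 to t_end in n equal RK4 steps."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Test problem: y' = y, y(0) = 1, whose exact solution is e^t.
result = integrate(lambda t, y: y, 0.0, 1.0, 1.0, 100)
print(result)  # very close to e = 2.71828...
```

An adaptive method would adjust h on the fly by estimating the local error at each step.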

Two-point boundary value problems

Here the value of y is fixed at (usually) the beginning and the end of the region of interest. These are much harder than initial value problems, since we must only consider functions which obey the boundary conditions. Conceptually we start at the beginning of the region and "shoot out" to the end of the region: on successive iterations we aim to match the boundary conditions.
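The shooting idea can be sketched as follows. This hedged example solves y'' = −y with y(0) = 0 and y(π/2) = 1 (whose exact solution is sin x), bisecting on the unknown initial slope; the problem, step counts, and tolerances are illustrative choices:

```python
import math

def rk4_system(f, t, y, h):
    """Classical RK4 step for a first-order system; y is a tuple."""
    def step(y0, k, c):
        return tuple(yi + c * ki for yi, ki in zip(y0, k))
    k1 = f(t, y)
    k2 = f(t + h / 2, step(y, k1, h / 2))
    k3 = f(t + h / 2, step(y, k2, h / 2))
    k4 = f(t + h, step(y, k3, h))
    return tuple(yi + h / 6 * (a + 2 * b + 2 * c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

def shoot(slope, n=200):
    """Integrate y'' = -y from x = 0 with y(0) = 0, y'(0) = slope,
    and return the value of y at x = pi/2."""
    f = lambda t, y: (y[1], -y[0])      # u' = v, v' = -u
    h = (math.pi / 2) / n
    t, y = 0.0, (0.0, slope)
    for _ in range(n):
        y = rk4_system(f, t, y, h)
        t += h
    return y[0]

# Bisect on the unknown initial slope until the far boundary value
# y(pi/2) = 1 is matched (shoot() increases with the slope here).
lo, hi = 0.0, 2.0
for _ in range(50):
    mid = (lo + hi) / 2.0
    if shoot(mid) < 1.0:
        lo = mid
    else:
        hi = mid
print(mid)  # converges to 1.0: the exact solution is y = sin(x)
```

In general the endpoint mismatch is a nonlinear function of the initial slope, and a root-finder such as Newton's method replaces the bisection used here.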


If the ratio of the largest to smallest eigenvalues of the system matrix is large, then the system of equations will be stiff. This ratio is measured by the condition number of the matrix. In this case, great care must be taken: solving stiff problems can lead to a dramatic loss of precision. See NR 16.6, or Stoer and Bulirsch 7.2.16. Consider using an implicit method (see below).

Implicit vs Explicit methods

In explicit methods, it is possible to solve directly (point by point) for the value of all unknowns in the finite difference scheme.

In an implicit scheme, there is no explicit formula at each point, only a set of simultaneous equations which must be solved to give the solution over the whole grid. Implicit methods are typically stable for all step sizes, at the cost of solving a system of equations at every step.
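The difference matters most for stiff problems. A minimal sketch, using the standard test equation y' = −50y (an illustrative choice): with step size h = 0.1 the explicit Euler update diverges, while the implicit Euler update, which has a closed form for this linear problem, stays stable:

```python
# Stiff test problem: y' = -lam * y with lam = 50, y(0) = 1.  With h = 0.1:
#   Explicit Euler: y_{n+1} = y_n + h * f(y_n)      -> direct update
#   Implicit Euler: y_{n+1} = y_n + h * f(y_{n+1})  -> an equation to solve;
# for this linear problem it reduces to y_{n+1} = y_n / (1 + lam * h).
lam, h, steps = 50.0, 0.1, 20
y_exp = 1.0
y_imp = 1.0
for _ in range(steps):
    y_exp = y_exp + h * (-lam * y_exp)   # explicit: multiplies by (1 - 5)
    y_imp = y_imp / (1.0 + lam * h)      # implicit: divides by 6
print(y_exp)   # (-4)**20, about 1.1e12: the explicit method has blown up
print(y_imp)   # about 2.7e-16: the implicit method decays, as it should
```

The exact solution e^(−50t) decays rapidly; only the implicit update reproduces that behaviour at this step size.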


Partial Differential Equations

In contrast to ODEs, which involve a single independent variable, a partial differential equation relates a function of several independent variables to its partial derivatives.


Hyperbolic Equations

Wave equation


∂²u/∂t² = c² ∂²u/∂x²    ( 2 )


Parabolic Equations

Diffusion Equation


∂u/∂t = D ∂²u/∂x²    ( 3 )


Elliptic Equations

Poisson equation


∂²u/∂x² + ∂²u/∂y² = ρ(x, y)    ( 4 )


Initial Value Problems

Here we wish to propagate the solution forward in time.
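For the diffusion equation ( 3 ), the simplest marching scheme is forward-time, centred-space (FTCS), which is stable only when D·Δt/Δx² ≤ 1/2. The grid sizes below are illustrative choices:

```python
import math

# FTCS (forward-time, centred-space) scheme for u_t = D u_xx on [0, 1],
# with u = 0 at both ends. Stable only when r = D*dt/dx**2 <= 1/2.
N = 50                          # number of interior grid points
dx = 1.0 / (N + 1)
D = 1.0
dt = 0.4 * dx ** 2 / D          # chosen so r = 0.4, inside the limit
r = D * dt / dx ** 2

x = [(i + 1) * dx for i in range(N)]
u = [math.sin(math.pi * xi) for xi in x]     # initial condition

steps = 200
for _ in range(steps):
    left = [0.0] + u[:-1]
    right = u[1:] + [0.0]
    u = [ui + r * (l - 2.0 * ui + rt) for ui, l, rt in zip(u, left, right)]

# Exact solution for this initial condition: sin(pi x) exp(-pi^2 D t).
t = steps * dt
exact = [math.sin(math.pi * xi) * math.exp(-math.pi ** 2 * D * t) for xi in x]
err = max(abs(a - b) for a, b in zip(u, exact))
print(err)  # small: the scheme tracks the exact decay
```

Taking r above 1/2 makes the highest-frequency grid mode grow without bound, which is exactly the instability the stability analysis predicts.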

Boundary Value Problems

The solution is static: we must find a function which satisfies the equation together with conditions given on the whole boundary of the region of interest.


The Multigrid method

This is a most important method for the solution of linear (and non-linear) elliptic equations. It is O(N), where N is the number of grid points. The key idea is to introduce a hierarchy of grids. Relaxation methods such as Gauss-Seidel tend to stall, since they attenuate oscillatory error modes quickly but smooth error modes only very slowly. On a coarser grid, however, those smooth modes appear oscillatory, so by alternating between fine and coarse grids the system relaxes much more quickly.
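The key idea can be sketched as a two-grid cycle for the 1-D model problem −u'' = f with zero boundary values. This is a minimal illustration using NumPy; the grid sizes and sweep counts are arbitrary choices, and a full multigrid method would recurse to still coarser grids rather than solve the coarse problem directly:

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps):
    """Gauss-Seidel relaxation sweeps for -u'' = f, u = 0 at both ends."""
    n = len(u)
    for _ in range(sweeps):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            u[i] = (left + right + h * h * f[i]) / 2.0
    return u

def residual(u, f, h):
    """r = f - A u for the standard three-point discretization."""
    n = len(u)
    r = np.empty(n)
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r[i] = f[i] - (2.0 * u[i] - left - right) / (h * h)
    return r

def two_grid_cycle(u, f, h):
    """Smooth on the fine grid, solve the residual equation on a grid of
    twice the spacing, interpolate the correction back, smooth again."""
    u = gauss_seidel(u, f, h, 3)                       # pre-smoothing
    r = residual(u, f, h)
    # Restrict the residual by full weighting (coarse point j sits at
    # fine point 2j + 1).
    rc = 0.25 * (r[:-2:2] + 2.0 * r[1::2] + r[2::2])
    nc = len(rc)
    H = 2.0 * h
    # Solve the coarse tridiagonal problem exactly.
    A = (2.0 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / (H * H)
    ec = np.linalg.solve(A, rc)
    # Prolong the coarse correction by linear interpolation.
    e = np.zeros_like(u)
    e[1::2] = ec
    e[0] = ec[0] / 2.0
    e[2:-1:2] = (ec[:-1] + ec[1:]) / 2.0
    e[-1] = ec[-1] / 2.0
    u = u + e
    return gauss_seidel(u, f, h, 3)                    # post-smoothing

# Model problem: -u'' = pi^2 sin(pi x), exact solution u = sin(pi x).
n = 31                                   # fine-grid interior points
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid_cycle(u, f, h)
res_norm = np.max(np.abs(residual(u, f, h)))
print(res_norm)  # the residual has been driven down to near round-off
```

Full weighting and linear interpolation are the standard transfer operators; each cycle reduces the error by a roughly constant factor, which is what yields the O(N) cost.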


Finite Element Methods

Used extensively in engineering for elliptic PDE solution over irregular bodies. There is extensive literature on these techniques and where they are useful. In principle a finite element calculation consists of meshing the domain into elements, assembling the element contributions into a global system of equations, applying the boundary conditions, and solving the resulting (usually sparse) system.



Optimization Methods

The general optimization problem can be stated as follows:

"Minimize F(x) subject to the constraints g(x)"

F(x) is the objective function, and the set of functions g(x) are the constraints, which may be linear or non-linear.

General Optimization Algorithm

  1. Specify an initial guess, x0, for the solution.
  2. For i = 0, 1, …
  3. If xi is optimal, then stop; otherwise form xi+1 and repeat step (3).

With derivatives or without?

In the simplest case where we wish to minimize F(x), the most important question is whether you can compute the derivatives of F(x).

With Derivatives

  1. Conjugate gradient methods, such as Fletcher-Reeves. These rely on a sequence of successive line minimizations to perform the optimization.
  2. Quasi-Newton methods. These rely on building up an approximation to the underlying Hessian matrix.
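For a quadratic objective, the Fletcher-Reeves iteration with exact line searches can be sketched in a few lines. The objective F(x, y) = (x − 1)² + 10(y − 2)² is an illustrative choice; for a 2-D quadratic the method reaches the minimum in two iterations:

```python
def grad(x):
    # Gradient of F(x, y) = (x - 1)**2 + 10 * (y - 2)**2.
    return [2.0 * (x[0] - 1.0), 20.0 * (x[1] - 2.0)]

def hess_vec(d):
    # Hessian of F applied to a direction; F is quadratic, H = diag(2, 20).
    return [2.0 * d[0], 20.0 * d[1]]

x = [0.0, 0.0]
g = grad(x)
d = [-g[0], -g[1]]          # first direction: steepest descent
for _ in range(2):          # a 2-D quadratic is minimized in two CG steps
    Hd = hess_vec(d)
    # Exact line minimization along d.
    alpha = -(g[0] * d[0] + g[1] * d[1]) / (d[0] * Hd[0] + d[1] * Hd[1])
    x = [x[0] + alpha * d[0], x[1] + alpha * d[1]]
    g_new = grad(x)
    # Fletcher-Reeves update for the next search direction.
    beta = (g_new[0] ** 2 + g_new[1] ** 2) / (g[0] ** 2 + g[1] ** 2)
    d = [-g_new[0] + beta * d[0], -g_new[1] + beta * d[1]]
    g = g_new
print(x)  # [1.0, 2.0] to rounding error: the minimizer
```

For a general (non-quadratic) function the exact line search is replaced by a one-dimensional minimization routine.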

No derivatives

  1. The simplex method (the Nelder-Mead simplex, not the linear programming simplex). This is a robust method to "get something working", and should not be overlooked.
  2. Powell’s method, which builds up information about the local behaviour of the function through a set of search directions, and is quadratically convergent.
  3. If the function is very complicated, consider a probabilistic method such as simulated annealing or a genetic algorithm (see below)
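A minimal simulated annealing sketch for a 1-D function with several local minima (the objective, cooling schedule, and step size are all illustrative choices):

```python
import math, random

def objective(x):
    # An illustrative 1-D function with several local minima;
    # its global minimum is near x = -0.51.
    return x * x + 10 * math.sin(3 * x)

random.seed(42)                 # fixed seed so the run is repeatable
x = 5.0                         # deliberately poor starting point
fx = objective(x)
best, best_f = x, fx
T = 10.0                        # initial "temperature"
while T > 1e-4:
    x_new = x + random.gauss(0.0, 0.5)          # random trial move
    f_new = objective(x_new)
    # Always accept improvements; accept uphill moves with probability
    # exp(-dF / T), which shrinks as the temperature falls.
    if f_new < fx or random.random() < math.exp(-(f_new - fx) / T):
        x, fx = x_new, f_new
        if fx < best_f:
            best, best_f = x, fx
    T *= 0.999                                  # geometric cooling
print(best, best_f)
```

Accepting occasional uphill moves is what lets the method escape local minima that would trap a pure descent method.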


Genetic Algorithms


These are a powerful optimization technique inspired by natural selection. There are many variants, and this is an area of active research. We will be holding a mini-workshop on these methods with the CEDC in February (keep an eye on our seminar page for details: comp_model/seminars.html).


  1. Set up a population of individuals, by encoding the variables in the problem as a set of genes.
  2. Evaluate the fitness of the individuals, according to the objective function.
  3. Breed the solutions, using a genetic operator.
  4. Perform some random mutations in the gene pool.
  5. Repeat from (2).

Again, constraints may be incorporated by a repair algorithm applied after breeding/mutation (to ensure that the solutions remain feasible), or by using a penalty function.
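The loop above can be sketched as a minimal real-valued genetic algorithm. The objective x² + y², tournament selection, blend crossover, and the mutation rate are all illustrative choices:

```python
import random

random.seed(1)                  # fixed seed so the run is repeatable

def fitness(ind):
    # Objective: minimize x^2 + y^2, so fitness is its negative.
    x, y = ind
    return -(x * x + y * y)

def tournament(pop):
    """Pick two individuals at random; the fitter one breeds."""
    a, b = random.sample(pop, 2)
    return a if fitness(a) > fitness(b) else b

POP, GENS = 40, 60
pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(POP)]
best = max(pop, key=fitness)
for _ in range(GENS):
    new_pop = []
    while len(new_pop) < POP:
        p1, p2 = tournament(pop), tournament(pop)
        w = random.random()                          # blend crossover
        child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
        if random.random() < 0.2:                    # random mutation
            child[random.randrange(2)] += random.gauss(0.0, 0.3)
        new_pop.append(child)
    pop = new_pop
    gen_best = max(pop, key=fitness)
    if fitness(gen_best) > fitness(best):
        best = gen_best
print(best)  # should lie close to [0, 0], the true minimizer
```

In a binary-encoded variant the genes would be bit strings, with crossover and mutation acting on the bits instead of on real values.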


Advantages of genetic algorithms

They require no derivative information, cope with discrete, mixed, and highly multimodal search spaces, and are naturally parallel across the population.

Disadvantages of genetic algorithms

They typically require many objective function evaluations, give no guarantee of finding the optimum, and are sensitive to the choice of encoding and of tuning parameters (population size, mutation rate, and so on).




Before applying a numerical method to a problem, understand carefully any possible problems which may arise, and ensure that the implementation you have chosen is well suited to the task: accurate, stable, and efficient.

Certain problems are intrinsically "hard", but well understood: you should recognise them and choose the appropriate solution technique.

The techniques discussed and the problems which arise are exemplars for other areas of numerical analysis.




Last updated 3-Dec-97. Maintained by SJ Cox