Numerical Analysis I
Some basic aspects of numerical computation, including errors, interpolation, polynomials, integration, and some of the pitfalls: badly-conditioned problems and poor scaling. We use Maple and MATLAB to demonstrate these techniques.
High Performance Computing
High Performance Computing (HPC) is an engineering discipline which develops the technology (hardware, algorithms, and software) to deliver cost-effective computational results.
HPC allows you to choose a solution technique for your problem which is well matched to the computing resources available.
The question "why bother?" often arises, to which the answer is "to avoid investing time and effort in solving the wrong problem."
Reference names used in the text are shown in square brackets after the book name, e.g. [NR] for Numerical Recipes.
There are several kinds of error:
Roundoff Errors and Floating Point Arithmetic
As with decimal (e.g. 1/3 = 0.333...), it is not possible to represent every number exactly in binary, because only a finite number of bits is available.
A floating point number is represented in the form ± mantissa × 2^exponent, with a fixed number of bits for each part.
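As a concrete illustration (in Python here, rather than MATLAB, though the underlying IEEE-754 representation is the same):

```python
import math

# Decompose a float into mantissa m and exponent e, with x = m * 2**e
# and 0.5 <= |m| < 1 (Python's math.frexp convention).
m, e = math.frexp(6.5)
print(m, e)               # 0.8125 3, since 6.5 = 0.8125 * 2**3

# Not every decimal fraction is representable: 0.1 is stored as the
# nearest 53-bit binary fraction, so small errors creep in.
print(0.1 + 0.2 == 0.3)   # False on IEEE-754 hardware
```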
A simple example
In MATLAB try the following:
EDU» a = 1.000000001^10
Raising a number to a power, n, and then taking the nth root may not return the same number!
In just three calculations, we were more than 30 orders of magnitude out. Even though an equation can be simplified algebraically, the corresponding terms may not cancel numerically.
What goes wrong?
In the first case the computer was unable to represent our 10th root sufficiently accurately. In the second case, we chose a function with a singularity, so any small rounding errors in the computation would be magnified.
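A small Python illustration of the second effect, a singularity magnifying a tiny representation error (the example function here is ours, not the one used in the lecture):

```python
import math

# math.pi/2 differs from the true pi/2 by roughly 6e-17.  Near the
# singularity of tan, that tiny representation error is magnified
# into an enormous (but finite) result instead of infinity.
x = math.pi / 2
print(math.tan(x))   # about 1.6e16, not infinity
```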
x + (y + z) ≠ (x + y) + z (on a finite precision machine)
This is because the least significant digits of the smaller numbers are lost. It can be addressed by adding the numbers starting with the smallest first, or by adding them in pairs.
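Both effects are easy to demonstrate in Python:

```python
# Floating-point addition is not associative:
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)   # False: 0.6000000000000001 vs 0.6

# Adding smallest-first preserves small contributions that would
# otherwise be swallowed by a large partial sum:
xs = [1.0] + [1e-16] * 10
largest_first = 0.0
for v in xs:            # the big term comes first: every 1e-16 vanishes
    largest_first += v
smallest_first = 0.0
for v in sorted(xs):    # the small terms accumulate before the big one
    smallest_first += v
print(largest_first, smallest_first)   # 1.0 vs slightly more than 1.0
```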
Numerical Integration (Quadrature)
I = ∫_a^b f(x) dx    ( 1 )
These rely on discretizing the integral and approximating the area under the curve by finite sums. In effect we replace the function by a suitable interpolating polynomial and integrate this.
∫_a^b f(x) dx ≈ Σ_{i=1}^{N} w_i f(x_i)    ( 2 )
Open formulas do not use the value of the function at its endpoints (a and b). An open formula can be useful if the function has an integrable singularity at one end.
Closed formulas use the value of the function at the endpoints. The most widely known of these is the trapezium rule, which is equivalent to performing linear interpolation between the points.
EDU» x = linspace(-1, 2, 30);
EDU» y = humps(x);
EDU» plot(x, y)
EDU» hold on
EDU» title('Trapezium Rule for Humps (30 points)')
EDU» hold off
MATLAB allows the area to be found using the built-in trapz function:
EDU» area = trapz(x, y)
To find the total area between the curve and the x axis, work with the absolute value of the function (replace y = abs(humps(x)); above). The enclosed area is ≈ 37.5159.
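The same computation can be sketched in Python with a hand-written composite trapezium rule. The humps definition below follows MATLAB's documented formula for that demo function; a fine grid reveals area that the 30-point grid's linear interpolation misses:

```python
def humps(x):
    # MATLAB's humps demo function (its documented definition)
    return 1.0 / ((x - 0.3)**2 + 0.01) + 1.0 / ((x - 0.9)**2 + 0.04) - 6.0

def trapezium(f, a, b, n):
    # Composite trapezium rule with n equal panels on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

# 29 panels matches the 30-point linspace grid used above;
# 20000 panels is effectively converged.
area30 = trapezium(lambda x: abs(humps(x)), -1.0, 2.0, 29)
area_fine = trapezium(lambda x: abs(humps(x)), -1.0, 2.0, 20000)
print(area30, area_fine)
```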
Notes about the trapezium rule
The error in the trapezium rule depends upon the value of the derivatives at the end points, so if these derivatives vanish, or if the function is being integrated over one period, the trapezium rule gives good value for money.
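A quick Python check of the claim about periodic integrands: integrating cos²x over one full period with only 8 panels already gives π essentially to machine precision, because the endpoint derivative terms in the trapezium rule's error expansion cancel.

```python
import math

def trapezium(f, a, b, n):
    # Composite trapezium rule with n equal panels on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

# Exact value of the integral of cos(x)^2 over [0, 2*pi] is pi.
approx = trapezium(lambda x: math.cos(x)**2, 0.0, 2.0 * math.pi, 8)
print(approx, math.pi)
```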
From the plot, it is clear that some of the area has been missed. We could add more points along the range of interest until the approximation was good enough; however, this would result in a considerable amount of extra work. There are two solutions to this:
In the previous cases, we picked equally spaced abscissae. In the case of the peak in our function, simple linear interpolation missed some of the area. Why not exploit our knowledge of the function and pick unequally spaced abscissae that better reflect where the function has most of its area? We must then weight each abscissa correctly:
∫_a^b W(x) f(x) dx ≈ Σ_{j=1}^{N} w_j f(x_j)    ( 3 )
The derivation of the formula and weights relies on the expansion of a function in terms of suitable orthogonal polynomials. There are many sources of information on Gaussian integration weights and abscissae (see section 4.5 of [NR] and Abramowitz and Stegun, Chapter 25, for more details). Naturally, you can use Maple to derive weights and abscissae not listed in tables; efficient procedures exist to do this (see [GH] for details).
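For example, for the Gauss-Legendre rule on [-1, 1] (weight function W(x) = 1), NumPy can supply the abscissae and weights directly; an n-point rule integrates polynomials up to degree 2n - 1 exactly:

```python
import numpy as np

# Abscissae x and weights w for the 3-point Gauss-Legendre rule.
x, w = np.polynomial.legendre.leggauss(3)

# Integral of x^4 over [-1, 1] is 2/5; three points integrate a
# quartic exactly (degree 4 <= 2*3 - 1 = 5).
gauss_val = np.sum(w * x**4)
print(gauss_val)   # 0.4 to machine precision
```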
Multidimensional integrals are harder, due to the large number of samples of the function required to achieve a given accuracy.
Evaluation of a polynomial should be performed using a Horner scheme. Consider the following example in Maple:
> y := a*x^4 + b*x^3 + c*x^2 + d*x + e;
> convert(y, horner);
          e + (d + (c + (b + a x) x) x) x
Why? The nested Horner representation takes only O(N) multiplications, whereas the conventional form requires O(N^2). By storing the coefficients of the polynomial in an array, the Horner scheme gives a succinct way to evaluate the polynomial. It also allows one to evaluate a polynomial and its derivative in a single loop (such as one might require for a Newton-Raphson iteration to find a root).
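A minimal Python sketch of such a single loop (the function name is ours, not from the notes):

```python
def horner_with_derivative(coeffs, x):
    """Evaluate p(x) and p'(x) in one pass over the coefficients.

    coeffs holds the coefficients from the highest power down,
    e.g. [a, b, c, d, e] for a*x^4 + b*x^3 + c*x^2 + d*x + e.
    """
    p = 0.0
    dp = 0.0
    for c in coeffs:
        dp = dp * x + p    # the derivative accumulates before p is updated
        p = p * x + c
    return p, dp

# (x + 1)^4 = x^4 + 4x^3 + 6x^2 + 4x + 1:  p(1) = 16, p'(1) = 4 * 2^3 = 32
p, dp = horner_with_derivative([1, 4, 6, 4, 1], 1.0)
print(p, dp)   # 16.0 32.0
```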
There are clever ways to:
There is a considerable literature on finding the roots of polynomials. The problems involved in such an innocuous-looking task usefully motivate the study of numerical analysis. Problems include:
For example, the roots of x^4 + 4x^3 + 6x^2 + 4x + 1 = (x + 1)^4 are:
-1, -1, -1, -1
Now change the coefficient in front of the x term:
-.9750019523 - .3172265943 I,
-.9750019523 + .3172265943 I,
-0.9750 + 0.3172i
-0.9750 - 0.3172i
So beware. In fact this example is pathologically ill-conditioned, since the original function has a quadruple root at x = -1.
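The effect is easy to reproduce in Python with NumPy. The perturbed coefficient 4.01 below is our assumption, chosen so that the computed roots match the values quoted above; for a root of multiplicity 4, a coefficient perturbation δ moves the roots by roughly δ^(1/4), which is why a 0.25% change produces a shift of about 0.3.

```python
import numpy as np

# (x + 1)^4 has an exact quadruple root at x = -1.
r_exact = np.roots([1, 4, 6, 4, 1])
print(r_exact)   # clustered near -1, but not exactly -1

# Perturb the coefficient of x from 4 to 4.01 (assumed perturbation):
r_pert = np.roots([1, 4, 6, 4.01, 1])
print(r_pert)    # the roots scatter by about 0.3 in the complex plane
```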
Numerical Linear Algebra: Some Pitfalls
Unfortunately, at the heart of most interesting problems in engineering and science are "badly-conditioned" matrices. If we attempt to solve the set of linear equations Ax = b when the matrix A is badly conditioned, a small change in b will result in a large change in the solution x. Remember, b represents the data we have, A represents some physical knowledge about the system, and x represents the quantities we are trying to estimate. There can easily be small random errors in b. The following 3 by 3 matrix, from the MATLAB gallery, shows the dangers.
EDU» A = gallery(3)
-149 -50 -154
537 180 546
-27 -9 -25
The system of equations will have a solution, since det(A) = 6. So this all looks fine. Now let us find the solution for two different right-hand sides, using the numerically stable LU decomposition:
EDU» rh1 = [-711; 2535; -120], x = A \ rh1
EDU» rh2 = rh1 + [1; 1; 0.1], x = A \ rh2
What has gone wrong? We perturbed the right-hand side by about 0.1%, well within the bounds of experimental error, but one component of the solution changed by two orders of magnitude! The matrix is very badly conditioned. We can measure this using:
EDU» cond(A)
This number is a measure of how badly conditioned the matrix is. The larger the number, the worse the conditioning, which in turn means that even small errors in the data b will be amplified into huge changes in our estimates of x. There are other ways to measure, or estimate, how badly conditioned a matrix is; in MATLAB, see help cond for details. Using MATLAB, it is a simple matter to determine the condition number of a given matrix before attempting to work with it.
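The same experiment can be repeated in Python with NumPy, using the gallery(3) matrix printed above:

```python
import numpy as np

# The gallery(3) matrix from the MATLAB session above.
A = np.array([[-149.0, -50.0, -154.0],
              [ 537.0, 180.0,  546.0],
              [ -27.0,  -9.0,  -25.0]])

print(np.linalg.cond(A))       # large: A is badly conditioned

rh1 = np.array([-711.0, 2535.0, -120.0])
x1 = np.linalg.solve(A, rh1)   # the clean solution [1, 2, 3]

rh2 = rh1 + np.array([1.0, 1.0, 0.1])
x2 = np.linalg.solve(A, rh2)   # a ~0.1% change in the data...

print(x1)
print(x2)                      # ...produces a wildly different x
```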
A different problem that can arise is "poor scaling", in which matrix elements vary over several orders of magnitude. Numerical rounding errors cause a loss of accuracy in the final solution. Fortunately MATLAB's built-in functions use techniques such as pivoting to reduce the effects of poor scaling. If you find a difference between MATLAB's results and those from a numerical FORTRAN or C routine, you should check to see whether the coefficients of the problem are poorly scaled. MATLAB will have attempted to control any loss of accuracy.
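A standard 2 by 2 example (hypothetical, not from the notes) shows why pivoting matters when the coefficients differ wildly in size:

```python
import numpy as np

# The (1,1) entry is 20 orders of magnitude smaller than the rest.
A = np.array([[1e-20, 1.0],
              [1.0,   1.0]])
b = np.array([1.0, 2.0])

# Naive Gaussian elimination WITHOUT pivoting: the huge multiplier
# 1e20 swamps the information in the second equation.
mult = A[1, 0] / A[0, 0]              # 1e20
u22 = A[1, 1] - mult * A[0, 1]        # 1 - 1e20 rounds to -1e20
c2 = b[1] - mult * b[0]               # 2 - 1e20 rounds to -1e20
naive_x2 = c2 / u22                   # 1.0
naive_x1 = (b[0] - A[0, 1] * naive_x2) / A[0, 0]   # (1 - 1)/1e-20 = 0: wrong!

# LAPACK (used by numpy) pivots, and recovers the true answer, ~ [1, 1].
x_good = np.linalg.solve(A, b)
print([naive_x1, naive_x2], x_good)
```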
Before applying a numerical method to a problem, understand carefully any problems which may arise, and ensure that the implementation you have chosen is appropriate.
In the next numerical analysis lectures we will consider the problems involved in more advanced numerical methods, such as the solution of ordinary and partial differential equations, finding eigenvalues, and optimization methods (such as genetic algorithms). A number of specific numerical problems arise in parallel programming, including use of halos in computation, parallel linear algebra, long range forces, and random number generation.
Last updated 12-Nov-97. Maintained by SJ Cox