Numerical Analysis I
Some basic aspects of numerical computation, including errors, interpolation, polynomials, integration, and some of the pitfalls: badly-conditioned problems and poor scaling. We use Maple and MATLAB to demonstrate these techniques.
High Performance Computing
High Performance Computing (HPC) is an engineering discipline which develops the technology (hardware, algorithms, and software) to deliver cost-effective computational results.
Numerical Analysis
Allows you to choose a solution technique for your problem which is accurate, efficient, and robust.
The question "why bother?" often arises, to which the answer is "to avoid investing time and effort in solving the wrong problem."
Bibliography
Reference names used in the text are shown in square brackets after the book title, e.g. [NR] for Numerical Recipes.
Errors
There are several kinds of error: rounding errors, truncation (discretization) errors, and errors in the input data.
Round-off Errors and Floating Point Arithmetic
As with decimal (e.g. 1/3), it is not possible to represent every number exactly in binary, because only a finite number of bits is available.
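A quick illustration (sketched here in Python, though any language using IEEE double precision behaves the same way): 0.1 has no exact binary representation, so adding it to itself ten times does not give exactly 1.0.

```python
# 0.1 cannot be stored exactly in binary floating point,
# so repeated addition accumulates a small representation error.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)         # not exactly 1.0
print(total == 1.0)  # False
```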
A floating point number is represented in the form

sign × mantissa × base^exponent

For example, a mantissa of 0.12634 with an exponent of 4 represents 0.12634 × 10^4 = 1263.4.
A simple example
In MATLAB try the following:
EDU» a = 1.000000001^10
a =
1.00000001000000
EDU» b=a^(1/10)
b =
1.00000000100000
EDU» a-b
ans =
9.000000744663339e-009
Raising a number to a power, n, and then taking the nth root may not return the same number!
Similarly
EDU» tan(tan(tan(atan(atan(atan(1e50))))))
ans =
1.633177872838384e+016
In just six function evaluations, which should cancel exactly in pairs, we ended up more than 30 orders of magnitude from the correct answer of 1e50. Just because an equation can be simplified algebraically, it does not follow that it will cancel numerically.
What goes wrong?
In the first case the computer was unable to represent our 10th root sufficiently accurately. In the second case, we chose a function with a singularity, so any small rounding errors in the computation would be magnified.
x + (y + z) ≠ (x + y) + z (on a finite precision machine)
This is because the least significant digits are lost when numbers of very different magnitudes are added. It can be addressed by adding the numbers smallest first, or by adding them in pairs of similar magnitude.
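Both effects are easy to reproduce (a Python sketch; any IEEE double precision language gives the same behaviour):

```python
# Associativity fails in floating point:
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)  # False

# Adding smallest-first rescues low-order digits.
values = [1e16, 1.0, 1.0]
naive = 0.0
for v in values:             # large number first: the 1.0s vanish
    naive += v
smallest_first = 0.0
for v in sorted(values):     # small numbers accumulate before the big one
    smallest_first += v

print(naive)            # the two 1.0s were lost
print(smallest_first)   # 1e16 + 2.0
```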
Lessons
Interpolation
Numerical Integration (Quadrature)
I = ∫_a^b f(x) dx ( 1 )
Newton-Cotes Methods
These rely on discretizing the integral and approximating the area under the curve by finite sums. In effect we replace the function by a suitable interpolating polynomial and integrate this.
∫_a^b f(x) dx ≈ Σ_{i=0}^{N} w_i f(x_i), where x_i = a + ih and h = (b - a)/N ( 2 )
Open formulas do not use the value of the function at its endpoints (a and b). An open formula can be useful if the function has an integrable singularity at one end.
Closed formulas use the value of the function at the endpoints. The most widely known of these is the trapezium rule, which is equivalent to performing linear interpolation between the points.
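The trapezium rule is easy to write down directly. A Python sketch of the composite rule (for a generic function f, not MATLAB's humps):

```python
def trapezium(f, a, b, n):
    """Composite trapezium rule with n equal intervals on [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))   # endpoints carry half weight
    for i in range(1, n):
        total += f(a + i * h)     # interior points carry full weight
    return h * total

# Example: integrate x^2 on [0, 1]; the exact answer is 1/3.
approx = trapezium(lambda x: x * x, 0.0, 1.0, 1000)
print(approx)  # close to 0.3333...
```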
EDU» fplot('humps',[-1,2],'r')
EDU» x = linspace(-1, 2, 30);
EDU» y = humps(x);
EDU» hold on
EDU» stem(x,y,':')
EDU» plot(x,y,':')
EDU» title('Trapezium Rule for Humps (30 points)')
EDU» hold off
MATLAB allows the area to be found using:
trapz(x,y)
ans =
26.2102
To find the total area between the curve and the x axis, work with the absolute value of the function (replace the line above with y = abs(humps(x));). The enclosed area is then ≈ 37.5159.
Notes about the trapezium rule
The error in the trapezium rule depends upon the value of the derivatives at the end points, so if these derivatives vanish, or if the function is being integrated over one period, the trapezium rule gives good value for money.
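The remarkable accuracy over a full period is easy to verify. A self-contained Python sketch, integrating cos²x over [0, 2π] (exact value π):

```python
import math

def trapezium(f, a, b, n):
    """Composite trapezium rule with n equal intervals on [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

# cos^2 is periodic; integrated over a whole period the trapezium
# rule converges far faster than its usual O(h^2) behaviour.
val = trapezium(lambda x: math.cos(x) ** 2, 0.0, 2.0 * math.pi, 8)
print(val)  # essentially pi, even with only 8 intervals
```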
From the plot, it is clear that some of the area has been missed. We could add more points along the range of interest until the approximation was good enough; however, this would result in a considerable amount of extra work. There are two solutions to this: place extra points only where they are needed (adaptive quadrature), or choose the locations of the points more carefully in the first place (Gaussian integration).
Gaussian Integration
In the previous cases, we picked equally spaced abscissas. In the case of the peak in our function, simple linear interpolation missed some of the area. Why not exploit our knowledge of the function to pick unequally spaced abscissas which better reflect where the function has most of its area? We must then weight each abscissa correctly.
∫_a^b W(x) f(x) dx ≈ Σ_{i=1}^{N} w_i f(x_i) ( 3 )
The derivation of the formula and weights relies on the expansion of a function in terms of suitable orthogonal polynomials. However, there are many sources of information on Gaussian integration weights and abscissas (e.g. section 4.5 of [NR] for more details, and Abramowitz and Stegun chapter 25). Naturally, you can use Maple to derive weights and abscissas not listed in tables; efficient procedures exist to do this (see [GH] for details).
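As a concrete illustration (a Python sketch using the standard two-point Gauss-Legendre rule, whose nodes ±1/√3 and unit weights appear in the tables cited above), just two function evaluations integrate any cubic exactly on [-1, 1]:

```python
import math

# Two-point Gauss-Legendre rule on [-1, 1]: nodes +-1/sqrt(3), weights 1.
nodes = (-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0))
weights = (1.0, 1.0)

def gauss2(f):
    return sum(w * f(x) for w, x in zip(weights, nodes))

# Exact for polynomials up to degree 3:
# the integral of x^3 + x^2 over [-1, 1] is 0 + 2/3.
approx = gauss2(lambda x: x**3 + x**2)
print(approx)  # 2/3, up to rounding
```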
Multidimensional Integrals
These are harder, due to the large number of samples of the function required in order to achieve a given accuracy.
Polynomials
Horner Scheme
Evaluation of a polynomial should be performed using a Horner scheme. Consider the following example in Maple:
> y := a*x^4 + b*x^3 + c*x^2 + d*x + e;
> convert(y,'horner',x);
>
e + (d + (c + (b + a x) x) x) x
Why? The nested Horner representation takes only O(N) multiplications, whereas evaluating each power separately requires O(N^2) multiplications. By storing the coefficients of the polynomial in an array, the Horner scheme gives a succinct way to evaluate the polynomial. This method also allows one to evaluate a polynomial and its derivatives in a single loop (such as one might require for a Newton-Raphson iteration to find a root).
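A Python sketch of that single loop, evaluating a polynomial and its first derivative together (the function name and coefficient ordering are illustrative):

```python
def horner_with_derivative(coeffs, x):
    """Evaluate p(x) and p'(x) in one pass.

    coeffs holds the coefficients highest power first,
    e.g. [1, 0, -2, 5] is x^3 - 2x + 5.
    """
    p = coeffs[0]
    dp = 0.0
    for c in coeffs[1:]:
        dp = dp * x + p   # derivative picks up the current partial value
        p = p * x + c     # standard Horner step
    return p, dp

# x^3 - 2x + 5 at x = 2: p = 9, p' = 3x^2 - 2 = 10.
print(horner_with_derivative([1, 0, -2, 5], 2.0))  # (9.0, 10.0)
```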
There are clever ways to:
Polynomial Roots
There is considerable literature about finding the roots of polynomials. The problems involved in such an innocuous-looking task help motivate the study of numerical analysis. Chief among them is the extreme sensitivity of the roots to small changes in the coefficients:
>solve(x^4+4*x^3+6*x^2+4*x+1,x);
-1, -1, -1, -1
Now change the coefficient in front of the x term:
>solve(x^4+4*x^3+6*x^2+4.01*x+1,x);
-1.340248595,
-.9750019523 - .3172265943 I,
-.9750019523 + .3172265943 I,
-.7097475007
and MATLAB:
EDU» roots([1,4,6,4.01,1])
ans =
-1.3402
-0.9750 + 0.3172i
-0.9750 - 0.3172i
-0.7097
So beware: this example is pathologically ill-conditioned, since the original function has a quadruple root at x = -1.
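The underlying difficulty near a multiple root is cancellation: close to x = -1 the expanded form of (x+1)^4 is a sum of O(1) terms that should cancel almost completely, and rounding destroys the cancellation. A Python sketch (the point x = -1 + 2^-20 is chosen to be exactly representable in binary):

```python
# p(x) = x^4 + 4x^3 + 6x^2 + 4x + 1 = (x + 1)^4
x = -1.0 + 2.0**-20          # exactly representable in binary

expanded = 0.0
for c in [1.0, 4.0, 6.0, 4.0, 1.0]:   # Horner on the expanded form
    expanded = expanded * x + c

factored = (x + 1.0) ** 4    # the well-conditioned form

print(factored)  # 2^-80, about 8.3e-25
print(expanded)  # rounding noise: the true value is completely lost
```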
Numerical Linear Algebra: Some Pitfalls
Unfortunately, at the heart of most interesting problems in engineering and science are "badly-conditioned" matrices. If we attempt to solve the set of linear equations Ax = b when the matrix A is badly conditioned, a small change in b will result in a large change in the solution x. Remember, b represents the data we have, A represents some physical knowledge about the system, and x represents the quantities we are trying to estimate. There can easily be small random errors in b. The following 3 by 3 matrix shows the dangers. This matrix is from the MATLAB gallery.
EDU» A = gallery(3)
A =
-149 -50 -154
537 180 546
-27 -9 -25
The system of equations will have a solution since det(A)=6. So this all looks fine. Now let us find the solution for two different right hand sides, using the numerically stable LU decomposition:
EDU» rh1 = [-711; 2535; -120], x = A \ rh1
rh1 =
-711
2535
-120
x =
1.0000
2.0000
3.0000
EDU» rh2 = rh1 + [1; 1; 0.1], x = A \ rh2
rh2 =
-710.0000
2536.0000
-119.9000
x =
99.6667
-312.0667
9.5000
What has gone wrong? We perturbed the right hand side by 0.1%, well within the bounds of experimental error, but the solution changed by two orders of magnitude in one case! The matrix is very badly conditioned. We can measure this using:
EDU» cond(A)
ans =
2.7585e+005
This number is a measure of how badly conditioned the matrix is. The larger the number, the worse the conditioning, which in turn means that even small errors in the data b will be amplified into huge changes in our estimates of x. There are other ways to measure or estimate how badly conditioned a matrix is; in MATLAB see help cond for details. Using MATLAB, it is a simple matter to determine the condition number of a given matrix before attempting to work with it.
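The experiment can be reproduced outside MATLAB. A Python sketch (a small Gaussian elimination with partial pivoting; note the signs: MATLAB's gallery(3) matrix has negative entries) shows the amplification directly:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting on an n x n system."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]   # augmented matrix
    for k in range(n):
        pivot = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[pivot] = M[pivot], M[k]              # partial pivoting
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                   # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

A = [[-149.0, -50.0, -154.0],
     [537.0, 180.0, 546.0],
     [-27.0, -9.0, -25.0]]

print(solve(A, [-711.0, 2535.0, -120.0]))   # close to [1, 2, 3]
print(solve(A, [-710.0, 2536.0, -119.9]))   # a wildly different solution
```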
A different problem that can arise is "poor scaling", in which matrix elements vary over several orders of magnitude, and numerical rounding errors cause a loss of accuracy in the final solution. Fortunately MATLAB's built-in functions use techniques such as pivoting to reduce the effects of poor scaling. If you find a difference between MATLAB's results and those from a numerical FORTRAN or C routine, check whether the coefficients of the problem are poorly scaled: MATLAB will have attempted to control any loss of accuracy.
Summary
Before applying a numerical method to a problem, understand carefully any possible problems which may arise, and ensure that the implementation you have chosen is accurate, stable, and efficient.
In the next numerical analysis lectures we will consider the problems involved in more advanced numerical methods, such as the solution of ordinary and partial differential equations, finding eigenvalues, and optimization methods (such as genetic algorithms). A number of specific numerical problems arise in parallel programming, including use of halos in computation, parallel linear algebra, long range forces, and random number generation.
Last updated 12-Nov-97. Maintained by SJ Cox