UECM2623/UCCM2623 NUMERICAL METHODS & STATISTICS
UECM1693 MATHEMATICS FOR PHYSICS II
UNIVERSITI TUNKU ABDUL RAHMAN
LECTURER: YAP LEE KEN


    Contents

1 Preliminaries
  1.1 Introduction
  1.2 Error Analysis

2 Numerical Differentiation

3 Numerical Integration
  3.1 The Trapezoidal Rule
  3.2 Simpson's Rule

4 Roots Of Equations
  4.1 Introduction
  4.2 Bracketing Methods
    4.2.1 The Bisection Method
    4.2.2 The Method of False-Position
  4.3 Open Methods
    4.3.1 Fixed-Point Method
    4.3.2 Newton-Raphson Method
    4.3.3 Finite Difference Method

5 Some Topics In Linear Algebra
  5.1 Iterative Methods For Solving Linear Systems
  5.2 A Review On Eigenvalues
  5.3 Approximation of Eigenvalues
    5.3.1 The Power Method
    5.3.2 Power Method with Scaling
    5.3.3 Inverse Power Method with Scaling
    5.3.4 Shifted Inverse Power Method with Scaling

6 Optimization
  6.1 Direct Search Methods
    6.1.1 Golden Section Search (Maximum)
    6.1.2 Gradient Method

7 Numerical Methods For Ordinary Differential Equations
  7.1 First-Order Initial-Value Problems
    7.1.1 Euler's Method
    7.1.2 Heun's Method/Improved Euler's Method
    7.1.3 Taylor Series Method of Order p
    7.1.4 Runge-Kutta Method of Order p
    7.1.5 Multi-step Methods
    7.1.6 Adams-Bashforth/Adams-Moulton Method
    7.1.7 Summary: Orders Of Errors For Different Methods
  7.2 Higher-Order Initial Value Problems
    7.2.1 The Linear Shooting Method
    7.2.2 Finite Difference Method

8 Numerical Methods For Partial Differential Equations
  8.1 Second Order Linear Partial Differential Equations
  8.2 Numerical Approximation To Derivatives: 1-Variable Functions
  8.3 Numerical Approximation To Derivatives: 2-Variable Functions
  8.4 Methods for Parabolic Equations
    8.4.1 FTCS Explicit Scheme
    8.4.2 Crank-Nicolson Method
  8.5 A Numerical Method for Elliptic Equations
  8.6 CTCS Scheme for Hyperbolic Equations

  • Chapter 1

    Preliminaries

    1.1 Introduction

Numerical methods are methods for solving problems on a computer or a pocket calculator. Such methods are needed for many real-life problems that do not have analytic solutions, or whose analytic solutions are practically useless.

    1.2 Error Analysis

    Definition 1.2.1.

(a) The error in a computed quantity is defined as

Error = True value − Approximation.

(b) The absolute error is defined as

Absolute error = |True value − Approximation|.

(c) The relative error is defined as

Relative error = (True value − Approximation) / True value.

(d) The percentage relative error is defined as

Percentage relative error = (True value − Approximation) / True value × 100%.


  • Definition 1.2.2. (Errors In Numerical Methods) There are two major sources of errors in numerical computations.

    (a) Round-off error occurs when a computer or calculator is used to perform real-number calculations.

    Remark. The error arises because, for machine computation, each number must be represented by a number with a finite number of digits. These errors become important as the number of computations gets very large. To understand the nature of round-off errors, it is necessary to learn the ways numbers are stored and arithmetic operations are performed in a computer. The effect of round-off errors can be illustrated by the following example.

    Example 1. (The effect of round-off error)

    Let f(x) = (x^2 − 1/9)/(x − 1/3).

    (i) If a 4-digit calculator is used to find f(0.3334), we will obtain (0.1112 − 0.1111)/(0.3334 − 0.3333) ≈ 1.

    (ii) On the other hand, by using the equivalent formula f(x) = x + 1/3, we will obtain 0.3334 + 0.3333 ≈ 0.6667.

    (b) Truncation errors are those that result from using an approximation in place of an exact mathematical procedure.
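The 4-digit behaviour in Example 1 can be imitated in code by rounding every intermediate result to 4 significant digits; the `round_sig` helper below is our own, introduced purely for illustration:

```python
# Imitate a 4-digit calculator evaluating f(x) = (x^2 - 1/9)/(x - 1/3),
# which suffers catastrophic cancellation, versus the equivalent x + 1/3.
from math import floor, log10

def round_sig(v, digits=4):
    """Round v to the given number of significant digits."""
    if v == 0:
        return 0.0
    return round(v, digits - 1 - floor(log10(abs(v))))

x = round_sig(0.3334)
third = round_sig(1 / 3)                              # 0.3333
num = round_sig(round_sig(x * x) - round_sig(1 / 9))  # 0.1112 - 0.1111
den = round_sig(x - third)                            # 0.0001
print(num / den)      # ~1, far from the true value
print(x + third)      # ~0.6667, from the equivalent form x + 1/3
```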

    Example 2. Recall that for all x,

    e^x = 1 + x + x^2/2! + x^3/3! + ...

    In particular, if we let x = 1, then

    e = 1 + 1 + 1/2! + 1/3! + ...

    If we use 1 + 1 + 1/2! + 1/3! + ... + 1/10! to approximate e, then the truncation error is 1/11! + 1/12! + ....

    Remark. (Truncation Errors) Many numerical schemes are derived from the Taylor series

    y(x + h) = y(x) + h y'(x) + (h^2/2!) y''(x) + ....

    If the truncated Taylor series used to approximate y(x + h) is the order-n Taylor polynomial

    P_n(x) = y(x) + h y'(x) + (h^2/2!) y''(x) + ... + (h^n/n!) y^(n)(x),

    then the approximation is called an nth-order method, since it is accurate to the terms of order h^n. The neglected remainder term

    (h^(n+1)/(n+1)!) y^(n+1)(x) + (h^(n+2)/(n+2)!) y^(n+2)(x) + ...

    is called the (local) truncation error (TE). We say the TE per step is of order h^(n+1), i.e. TE = O(h^(n+1)).


  • Chapter 2

    Numerical Differentiation

    We replace the derivatives of a function f by their corresponding difference quotients based on the Taylor series:

    (i) f(x + h) = f(x) + h f'(x) + (h^2/2!) f''(x) + (h^3/3!) f'''(x) + ...

    (ii) f(x − h) = f(x) − h f'(x) + (h^2/2!) f''(x) − (h^3/3!) f'''(x) + ...

    (iii) (i) ⇒ f'(x) = [f(x + h) − f(x)]/h − (h/2!) f''(x) − ..., i.e.

    f'(x) ≈ [f(x + h) − f(x)]/h + O(h)   (the forward difference formula for f')

    (iv) (ii) ⇒ f'(x) ≈ [f(x) − f(x − h)]/h + O(h)   (the backward difference formula for f')

    (v) (i) − (ii) ⇒ f'(x) ≈ [f(x + h) − f(x − h)]/(2h) + O(h^2)   (the central difference formula for f')

    (vi) (i) + (ii) ⇒ f''(x) ≈ [f(x + h) − 2f(x) + f(x − h)]/h^2 + O(h^2)   (the central difference formula for f'')


  • Example 3. Given the following table of data:

    x      1.00   1.01   1.02   1.03
    f(x)   5      6.01   7.04   8.09

    (a) Use forward and backward difference approximations of O(h) and a central difference approximation of O(h^2) to estimate f'(1.02) using a step size h = 0.01.

    (b) Calculate the percentage errors for the approximations in part (a) if the actual value is f'(1.02) = 104.

    Reading Assignment 2.0.1. Given the following table of data:

    x      1.00   1.25          1.50       1.75          2.00
    f(x)   -8     -9.93359375   -11.4375   -11.99609375  -11.0000

    (a) Use forward and backward difference approximations of O(h) and a central difference approximation of O(h^2) to estimate the first derivative of f(x) at x = 1.5 using a step size h = 0.25.

    (b) Calculate the percentage errors for the approximations in part (a) if the actual value is f'(1.5) = −4.5.

    Answer.

    (a) For h = 0.25:

    (i) using the forward difference formula,
    f'(1.5) ≈ [f(1.5 + h) − f(1.5)]/h = [f(1.75) − f(1.5)]/0.25 = [−11.99609375 − (−11.4375)]/0.25 = −2.234375

    (ii) using the backward difference formula,
    f'(1.5) ≈ [f(1.5) − f(1.5 − h)]/h = [f(1.5) − f(1.25)]/0.25 = [−11.4375 − (−9.93359375)]/0.25 = −6.015625

    (iii) using the central difference formula,
    f'(1.5) ≈ [f(1.5 + h) − f(1.5 − h)]/(2h) = [f(1.75) − f(1.25)]/0.5 = [−11.99609375 − (−9.93359375)]/0.5 = −4.125

    (b) percentage error = (actual value − approximation)/(actual value) × 100%

    (i) For the forward difference approximation:
    percentage error = [−4.5 − (−2.234375)]/(−4.5) × 100% ≈ 50.35%

    (ii) For the backward difference approximation:
    percentage error = [−4.5 − (−6.015625)]/(−4.5) × 100% ≈ −33.68%

    (iii) For the central difference approximation:
    percentage error = [−4.5 − (−4.125)]/(−4.5) × 100% ≈ 8.33%
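The three estimates and their percentage errors in the answer above can be reproduced in a few lines:

```python
# Forward/backward differences of O(h) and central difference of O(h^2)
# at x = 1.5 with h = 0.25, using the tabulated values of f.
f = {1.25: -9.93359375, 1.50: -11.4375, 1.75: -11.99609375}
h = 0.25
forward = (f[1.75] - f[1.50]) / h
backward = (f[1.50] - f[1.25]) / h
central = (f[1.75] - f[1.25]) / (2 * h)
print(forward, backward, central)   # -2.234375, -6.015625, -4.125

true = -4.5
for approx in (forward, backward, central):
    print((true - approx) / true * 100)   # 50.35%, -33.68%, 8.33%
```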


  • Chapter 3

    Numerical Integration

    The ideal way to evaluate a definite integral ∫_a^b f(x) dx is, of course, to find a formula F(x) for an anti-derivative of f. But some anti-derivatives are difficult or impossible to find. For example, there are no elementary formulas for the anti-derivatives of (sin x)/x, √(1 + x^4) and e^(x^2). When we cannot evaluate a definite integral with an anti-derivative, we turn to numerical methods such as the Trapezoidal Rule and Simpson's Rule. The problem of numerical integration is the numerical evaluation of integrals

    I = ∫_a^b f(x) dx

    where a and b are constants and f is a function given analytically or empirically by a table of values. Geometrically, if f(x) ≥ 0 for all x ∈ [a, b], then I is equal to the area under the curve of f between a and b.

    3.1 The Trapezoidal Rule

    Used to approximate a definite integral by adding up the areas of trapezoids. Recall the area formula for a trapezoid. Partition [a, b] into n subintervals of equal length Δx = (b − a)/n, with x_i = a + iΔx. The area of the ith trapezoid is

    A_i = (Δx/2)[f(x_{i−1}) + f(x_i)].

    An approximation for the area under the curve y = f(x) from x = a to x = b is

    T = Σ_{i=1}^n A_i = Σ_{i=1}^n (Δx/2)[f(x_{i−1}) + f(x_i)] = (Δx/2)[f(x_0) + 2f(x_1) + ... + 2f(x_{n−1}) + f(x_n)].

    That is,

    ∫_a^b f(x) dx ≈ (h/2)[f(x_0) + 2f(x_1) + ... + 2f(x_{n−1}) + f(x_n)], where h = (b − a)/n.

    Remark. The absolute error incurred by the Trapezoidal approximation is E_T = |∫_a^b f(x) dx − T|. This error will decrease as the step size Δx decreases, because the trapezoids fit the curve better as their number increases.


  • Theorem 3.1.1 (Error Bound for Trapezoidal Rule). If f'' is continuous and |f''(x)| ≤ M for all x ∈ [a, b], then |E_T| ≤ (b − a)^3/(12n^2) · M.

    Example 4. Estimate ∫_1^2 x^2 dx by the Trapezoidal Rule with n = 4.
    Answer. f(x) = x^2, Δx = 1/4, x_0 = 1, x_1 = 5/4, x_2 = 3/2, x_3 = 7/4, x_4 = 2.
    ∴ T = (Δx/2)[f(x_0) + 2f(x_1) + 2f(x_2) + 2f(x_3) + f(x_4)]
    = (1/8)[(x_0)^2 + 2(x_1)^2 + 2(x_2)^2 + 2(x_3)^2 + (x_4)^2]
    = (1/8)[1 + 2(25/16) + 2(9/4) + 2(49/16) + 4] = (1/8)(18.75) = 2.34375.
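A compact sketch of the composite formula above, reproducing Example 4 (the function name `trapezoid` is ours):

```python
# Composite Trapezoidal Rule: h/2 [f(x0) + 2f(x1) + ... + 2f(x_{n-1}) + f(xn)]
def trapezoid(f, a, b, n):
    h = (b - a) / n
    total = f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n))
    return h / 2 * total

T = trapezoid(lambda x: x**2, 1.0, 2.0, 4)
print(T)   # 2.34375; the exact value is 7/3 ≈ 2.33333
```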

  • Theorem 3.2.1 (Error Bound for Simpson's Rule). If f^(4) is continuous and |f^(4)(x)| ≤ M for all x ∈ [a, b], then |E_S| ≤ (b − a)^5/(180n^4) · M.
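The composite Simpson's Rule that this error bound refers to (Section 3.2 in the contents) can be sketched as follows; this is the standard textbook formulation, stated here as an assumption rather than reproduced from the notes. It requires an even number of subintervals, and the interior weights alternate 4, 2:

```python
# Composite Simpson's Rule:
#   h/3 [f(x0) + 4f(x1) + 2f(x2) + ... + 4f(x_{n-1}) + f(xn)],  n even
def simpson(f, a, b, n):
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return h / 3 * total

S = simpson(lambda x: x**2, 1.0, 2.0, 4)
print(S)   # Simpson's Rule is exact for polynomials up to degree 3: 7/3
```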
  • Chapter 4

    Roots Of Equations

    4.1 Introduction

    Definition 4.1.1. Any number r for which f(r) = 0 is called a solution or a root of that equation, or a zero of f.

    Example 8. The root of the linear equation ax + b = 0 is x = −b/a, a ≠ 0.

    Example 9. The roots of the quadratic equation ax^2 + bx + c = 0 are given by

    x = (−b ± √(b^2 − 4ac)) / (2a).

    Example 10. Find the roots of the equation x^3 − 4x = 0.

    4.2 Bracketing M ethods

    These methods are used to solve the equation f(x) = 0 where f is a continuous function. They are based on the Intermediate Value Theorem, which says that if f is a continuous function on [a, b] that has values of opposite signs at a and b, then f has at least one root in the interval (a, b). They are called bracketing methods because two initial guesses "bracketing" the root are required to start the procedure. The solution is found by systematically reducing the width of the bracket. Two examples of bracketing methods are: (a) The Bisection Method (b) The Method of False-Position

    Example 11. Show that the equation x = cos x has at least one solution in the interval (0, π/2).


  • Reading Assignment 4.2.1. Show that the equation x^3 − 4x + 1 = 0 has at least one solution in the interval (1, 2).

    Answer. (a) Let f(x) = x^3 − 4x + 1. Being a polynomial, f is continuous on [1, 2]. (b) f(1) = −2, f(2) = 1 ⇒ f(1)f(2) is negative. Hence, by the Intermediate Value Theorem, the equation has a solution in (1, 2).

    4.2.1 The Bisection Method

    The method calls for a repeated halving of subintervals of [a, b] and, at each step, picking the half where f changes sign. The basic algorithm is as follows:
    Step 1. Choose lower x_l and upper x_u guesses for the root such that f(x_l)f(x_u) < 0.
    Step 2. An estimate of the root is determined by

    x_r = (x_l + x_u)/2.

    Step 3. Determine the subinterval in which the root lies as follows:
    (a) If f(x_l)f(x_r) < 0, the root lies in (x_l, x_r); ∴ set x_u = x_r, and return to Step 2.
    (b) If f(x_l)f(x_r) > 0, the root lies in (x_r, x_u); ∴ set x_l = x_r, and return to Step 2.
    (c) If f(x_l)f(x_r) = 0, then the root equals x_r. Stop.

    Termination Criteria and Error Estimates

    We must have a stopping criterion ε_s to terminate the computation. In practice, we require an error estimate that is not contingent on foreknowledge of the root. In particular, an approximate percentage error ε_a can be calculated as

    ε_a = |present approximation − previous approximation| / |present approximation| × 100% = |x_r^new − x_r^old| / |x_r^new| × 100%.


  • Example 12. Use bisection to find the root of f(x) = x^10 − 1. Employ initial guesses of x_l = 0 and x_u = 1.3 and iterate until the estimated percentage error ε_a falls below a stopping criterion of ε_s = 8%.
    Answer.

    (a) f(0)f(1.3) < 0 ⇒ the initial estimate of the root is

    x_r^1 = (0 + 1.3)/2 = 0.65.

    (b) f(0)f(0.65) > 0 ⇒ the root is in (0.65, 1.3). So set x_l = 0.65, x_u = 1.3. Hence

    x_r^2 = (0.65 + 1.3)/2 = 0.975.

    The estimated error is

    ε_a = |x_r^new − x_r^old|/|x_r^new| × 100% = |0.975 − 0.65|/0.975 × 100% = 33.3%.

    (c) f(0.65)f(0.975) > 0 ⇒ the root is in (0.975, 1.3). So set x_l = 0.975, x_u = 1.3. Hence

    x_r^3 = (0.975 + 1.3)/2 = 1.1375.

    The estimated error is

    ε_a = |1.1375 − 0.975|/1.1375 × 100% = 14.3%.

    (d) f(0.975)f(1.1375) < 0 ⇒ the root is in (0.975, 1.1375). So set x_l = 0.975, x_u = 1.1375. Hence

    x_r^4 = (0.975 + 1.1375)/2 = 1.05625.

    The estimated error is

    ε_a = |1.05625 − 1.1375|/1.05625 × 100% = 7.7%.

    After 4 iterations, the estimated percentage error is reduced to less than 8%.
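Example 12's procedure can be written out as a short routine (a sketch; `bisect` here is our own helper):

```python
# Bisection with the estimated-percentage-error stopping criterion above.
def bisect(f, xl, xu, eps_s):
    """Halve [xl, xu] until the estimated percentage error falls below eps_s."""
    assert f(xl) * f(xu) < 0, "initial guesses must bracket the root"
    xr_old = None
    iterations = 0
    while True:
        xr = (xl + xu) / 2
        iterations += 1
        if xr_old is not None:
            ea = abs((xr - xr_old) / xr) * 100   # estimated percentage error
            if ea < eps_s:
                return xr, iterations
        if f(xl) * f(xr) < 0:
            xu = xr      # root lies in (xl, xr)
        elif f(xl) * f(xr) > 0:
            xl = xr      # root lies in (xr, xu)
        else:
            return xr, iterations   # f(xr) == 0: exact root found
        xr_old = xr

root, n = bisect(lambda x: x**10 - 1, 0.0, 1.3, 8.0)
print(root, n)   # 1.05625 after 4 iterations, matching Example 12
```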

    Advantages & Disadvantages of Bisection Method

    (a) Advantages
    (i) This method is guaranteed to converge to the root.
    (ii) The error bound is guaranteed to decrease by one half with each iteration.

    (b) Disadvantages
    (i) It generally converges more slowly than most other methods.
    (ii) It requires two initial estimates at which f has opposite signs to start the procedure.
    (iii) If f is not continuous, the method may converge to a wrong point. One should check the values of f(x).
    (iv) If f does not change sign over any interval, then the method will not work.


  • 4.2.2 The Method of False-Position

    In this method, we use the graphical insight that "if f(c) is closer to zero than f(d), then c is closer to the root than d" (which is not true in general). By using this idea, we obtain

    x_r = x_u − f(x_u)(x_l − x_u) / (f(x_l) − f(x_u)).

    By replacing the formula in Step 2 of the Bisection Method with this one, we obtain the algorithm for the Method of False-Position as follows:
    Step 1. Choose lower x_l and upper x_u guesses for the root such that f(x_l)f(x_u) < 0.
    Step 2. An estimate of the root is determined by

    x_r = x_u − f(x_u)(x_l − x_u) / (f(x_l) − f(x_u)).

    Step 3. Determine the subinterval in which the root lies as follows:
    (a) If f(x_l)f(x_r) < 0, the root lies in (x_l, x_r); ∴ set x_u = x_r and return to Step 2.
    (b) If f(x_l)f(x_r) > 0, the root lies in (x_r, x_u); ∴ set x_l = x_r and return to Step 2.
    (c) If f(x_l)f(x_r) = 0, then the root equals x_r. Stop.


  • Example 13. Use the Method of False-Position to find the zero of f(x) = x − e^(−x). Use initial guesses of 0 and 1.

    Answer.

    (i) First iteration: x_l = 0, f(x_l) = −1; x_u = 1, f(x_u) = 0.63212.

    x_r = 1 − 0.63212 × (0 − 1)/(−1 − 0.63212) = 0.61270, f(x_r) = 0.07081.
    f(x_l)f(x_r) < 0 ⇒ r ∈ (x_l, x_r).

    (ii) Second iteration: x_l = 0, f(x_l) = −1; x_u = 0.61270, f(x_u) = 0.07081.

    x_r = 0.61270 − 0.07081 × (0 − 0.61270)/(−1 − 0.07081) = 0.57218, f(x_r) = 0.00789.

    (iii) The calculations are summarized in the following table:

    x_l       x_u       x_r       f(x_l)     f(x_r)    f(x_l)f(x_r)
    0.00000   1.00000   0.61270   -1.00000   0.07081   -0.07081
    0.00000   0.61270   0.57218   -1.00000   0.00789   -0.00789
    0.00000   0.57218   0.56770   -1.00000   0.00087   -0.00087
    0.00000   0.56770   0.56721   -1.00000   0.00010   -0.00010
    0.00000   0.56721   0.56715   -1.00000   0.00001   -0.00001
    0.00000   0.56715   0.56714   -1.00000   0.00000    0.00000

    Hence, the approximate root is 0.56714.
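The iteration of Example 13 can be sketched as follows (the tolerance 1e-5 is chosen here for illustration):

```python
# Method of False-Position for f(x) = x - e^{-x} with guesses 0 and 1,
# stopping when successive root estimates differ by less than tol.
from math import exp

def false_position(f, xl, xu, tol):
    xr_old = None
    while True:
        # Step 2: root of the secant line through (xl, f(xl)) and (xu, f(xu))
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))
        if xr_old is not None and abs(xr - xr_old) < tol:
            return xr
        # Step 3: keep the subinterval where f changes sign
        if f(xl) * f(xr) < 0:
            xu = xr
        else:
            xl = xr
        xr_old = xr

root = false_position(lambda x: x - exp(-x), 0.0, 1.0, 1e-5)
print(root)   # ≈ 0.56714, matching the table above
```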

  • Reading Assignment 4.2.2. Use the Method of False-Position to find the zero of f(x) = x − e^(−x). Use initial guesses of 0 and 1. Iterate until two successive approximations differ by less than 0.01.
    Answer. Let ε_a = |x_r^new − x_r^old|.

    (i) First iteration: x_l = 0, f(x_l) = −1; x_u = 1, f(x_u) = 0.63212.
    x_r = 1 − 0.63212 × (0 − 1)/(−1 − 0.63212) = 0.61270.
    f(x_l)f(x_r) < 0 ⇒ r ∈ (x_l, x_r).

    (ii) Second iteration: x_l = 0, f(x_l) = −1; x_u = 0.61270, f(x_u) = 0.07081.
    x_r = 0.61270 − 0.07081 × (0 − 0.61270)/(−1 − 0.07081) = 0.57218.
    ε_a = |0.57218 − 0.61270| = 0.04052.

    (iii) Third iteration: x_l = 0, f(x_l) = −1; x_u = 0.57218, f(x_u) = 0.00789.
    x_r = x_u − f(x_u)(x_l − x_u)/(f(x_l) − f(x_u)) = 0.57218 − 0.00789 × (0 − 0.57218)/(−1 − 0.00789) = 0.56770.
    ε_a = |0.56770 − 0.57218| = 0.00448 < 0.01 (tolerance satisfied).
    Hence, the approximate root is 0.56770.


  • 4.3 Open Methods

    In contrast to the bracketing methods, the open methods are based on formulas that require a single starting value or two starting values that do not necessarily bracket the root. Hence, they sometimes diverge from the true root. However, when they converge, they tend to converge much faster than the bracketing methods. Examples of open methods: (a) Fixed-Point Method (b) Newton-Raphson Method (c) Secant Method

    4.3.1 Fixed-Point Method

    To solve f(x) = 0, rearrange f(x) = 0 into the form x = g(x). Then the scheme is given by x_{n+1} = g(x_n), n = 0, 1, 2, ...

    Remark. This method is also called the successive substitution method , or one-point iteration.

    Example 14. The function f(x) = x^2 − 3x + e^x − 2 is known to have two roots, one negative and one positive. Find the smaller root by using the fixed-point method.

    Answer. The smaller root is the negative root. f(−1) = 2 + e^(−1) > 0 and f(0) = −1 < 0 ⇒ f(−1)f(0) < 0 ⇒ the negative root lies in (−1, 0).

    f(x) = 0 can be written as x = (x^2 + e^x − 2)/3 = g(x), so the algorithm is

    x_{n+1} = g(x_n) = (x_n^2 + e^(x_n) − 2)/3, n = 0, 1, 2, ...

    If we choose the initial guess x_0 = −0.5, we obtain

    x_1 = (x_0^2 + e^(x_0) − 2)/3 = ((−0.5)^2 + e^(−0.5) − 2)/3 ≈ −0.3811564468

    and so on. The results are summarized in the following table:

    k    x_k
    0    -0.5 (initial guess)
    1    -0.381156446
    2    -0.390549582
    3    -0.390262048
    4    -0.390272019
    5    -0.390271674
    6    -0.390271686
    7    -0.390271686

    Hence the root is approximately −0.390271686.
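A minimal sketch of the scheme in Example 14 (the tolerance 1e-9 is our choice for illustration):

```python
# Fixed-point iteration x_{n+1} = g(x_n) with g(x) = (x^2 + e^x - 2)/3,
# starting from x_0 = -0.5.
from math import exp

def fixed_point(g, x0, tol, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

g = lambda x: (x**2 + exp(x) - 2) / 3
root = fixed_point(g, -0.5, 1e-9)
print(root)   # ≈ -0.390271686, the negative root of x^2 - 3x + e^x - 2
```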


  • Note 1:

    (a) There are many ways to change the equation f(x) = 0 to the form x = g(x), and the speed of convergence of the corresponding iterative sequences {x_n} may differ accordingly. For instance, if we use the rearrangement x = −√(3x − e^x + 2) = g(x) in the above example, the sequence might not converge at all.

    (b) The method is simple to implement, but in this case slow to converge.

    (c) Even in the case where convergence is possible, divergence can occur if the initial guess is not sufficiently close to the root.

    Example 15. If we solve x^3 + 6x − 3 = 0 using the fixed-point algorithm

    x_{n+1} = (3 − x_n^3)/6,

    we obtain the following results with x_0 = 0.5, x_0 = 1.5, x_0 = 2.5 and x_0 = 5.5:

    x_0 = 0.5:  0.5, 0.4791667, 0.4816638, 0.4813757, 0.4814091, 0.4814052, 0.4814057, ..., settling at 0.4814056 by n = 8.

    x_0 = 1.5:  1.5, −0.0625, 0.5000407, 0.4791616, ..., settling at the same value.

    x_0 = 2.5:  2.5, −2.1041667, 2.0527057, ..., 0.4814013 (n = 13), 0.4814055 (n = 14), settling at 0.4814056 by n = 16.

    x_0 = 5.5:  5.5, −27.2291667, 3365.242242, −6351813600, 0.4271122072 × 10^29, ... DIVERGES!

    Note that when the initial guess x_0 is close enough to the fixed point r, the method will converge, but if it is too far away from r, the method will diverge.

    Theorem 4.3.1 (Convergence of Fixed-Point Method). Let g be a continuous function on [a, b] with a ≤ g(x) ≤ b for all x ∈ [a, b]. Then g has at least one fixed point r in (a, b). If, in addition, g is differentiable and satisfies |g'(x)| ≤ M < 1 for all x in [a, b], M a constant, then the fixed point is unique and the method converges for any choice of initial point x_0 in (a, b).


  • 4.3.2 Newton-Raphson Method

    This is a method used to approximate a root of an equation f(x) = 0, assuming that f has a continuous derivative f'. It consists of the following steps:
    Step 1. Guess a first approximation x_0 to the root. (A graph may be helpful.)
    Step 2. Use the first approximation to get the second, the second to get the third, and so on, using the formula

    x_{n+1} = x_n − f(x_n)/f'(x_n), n = 0, 1, 2, 3, ...

    where x_n is the nth approximation. Stop when |x_{n+1} − x_n| < ε_s, the pre-specified stopping criterion.
    Remark. You may also stop when |x_{n+1} − x_n|/|x_{n+1}| × 100% < ε_s%.

    Note 2: The underlying idea is that we approximate the graph of f by suitable tangents. If you are writing a program for this method, don't forget to include an upper limit on the number of iterations in the procedure.

    Example 16. Use the Newton-Raphson Method to approximate the root of f(x) = x − e^(−x) = 0 that lies between 0 and 2. Continue the iterations until two successive approximations differ by less than 10^(−8).

    Answer. The iteration is given by

    x_{n+1} = x_n − (x_n − e^(−x_n))/(1 + e^(−x_n)) = (x_n + 1)/(e^(x_n) + 1).

    Starting with x_0 = 1, we obtain

    x_1 = (x_0 + 1)/(e^(x_0) + 1) = 2/(e + 1) ≈ 0.537882842
    x_2 = (x_1 + 1)/(e^(x_1) + 1) ≈ 0.566986991
    x_3 ≈ 0.567143286
    x_4 ≈ 0.56714329

    Since |x_4 − x_3| = 0.000000004 = 0.4 × 10^(−8) < 10^(−8), we stop the process and take x_4 ≈ 0.56714329 as the required root.
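Example 16's iteration, written generically, is only a few lines (a sketch; `newton` is our own helper, not a library routine):

```python
# Newton-Raphson for f(x) = x - e^{-x}, f'(x) = 1 + e^{-x}, from x_0 = 1,
# stopping when successive iterates differ by less than 1e-8.
from math import exp

def newton(f, fprime, x0, tol, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

root = newton(lambda x: x - exp(-x), lambda x: 1 + exp(-x), 1.0, 1e-8)
print(root)   # ≈ 0.56714329
```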


  • Reading Assignment 4.3.2. The equation 2x^3 + x^2 − x + 1 = 0 has only one real root. Use the Newton-Raphson Method to find this root. Continue the iterations until two successive approximations differ by less than 0.0001.
    Answer. Let f(x) = 2x^3 + x^2 − x + 1.
    (a) First, use the Intermediate Value Theorem to locate the root. Since f(−2)f(−1) = (−9)(1) = −9 < 0, the root is in (−2, −1).
    (b) Using the iterative formula

    x_{n+1} = x_n − f(x_n)/f'(x_n) = x_n − (2x_n^3 + x_n^2 − x_n + 1)/(6x_n^2 + 2x_n − 1)

    with the initial approximation x_0 = [(−2) + (−1)]/2 = −1.5, we obtain

    x_1 = −1.289473684, |x_1 − x_0| > 0.0001
    x_2 = −1.236967446, |x_2 − x_1| > 0.0001
    x_3 = −1.233763552, |x_3 − x_2| > 0.0001
    x_4 = −1.233751929

    Since |x_4 − x_3| = 0.000011623 < 0.0001, the required approximation to the root is x_4 = −1.233751929.

    Exercise 3. Use the Newton-Raphson method to approximate 22^(1/4) correct to six decimal places. Answer. 2.165737

    Advantages

    (a) It needs only 1 initial guess. (b) It converges very rapidly.

    Disadvantages

    (a) It may not converge if the initial guess is not sufficiently close to the true root.
    (b) The calculation of f'(x_n) may be very complicated.
    (c) Difficulties may arise if |f'(x_n)| is very small near a solution of f(x) = 0.


  • 4.3.3 Finite Difference Method

    Consider a second-order boundary value problem (BVP)

    y'' + P(x)y' + Q(x)y = f(x), y(a) = α, y(b) = β.

    Suppose a = x_0 < x_1 < ... < x_{n−1} < x_n = b with x_i − x_{i−1} = h for all i = 1, 2, ..., n. Let y_i = y(x_i), P_i = P(x_i), Q_i = Q(x_i) and f_i = f(x_i). Then by replacing y' and y'' with their central difference approximations in the BVP, we get

    (y_{i+1} − 2y_i + y_{i−1})/h^2 + P_i (y_{i+1} − y_{i−1})/(2h) + Q_i y_i = f_i, i = 1, 2, ..., n − 1,

    or, after simplifying,

    (1 + (h/2)P_i) y_{i+1} + (h^2 Q_i − 2) y_i + (1 − (h/2)P_i) y_{i−1} = h^2 f_i.

    The last equation, known as a finite difference equation, is an approximation to the differential equation. It enables us to approximate the solution at x_1, ..., x_{n−1}.

    Example 17. Solving BVPs Using the Finite Difference Method
    Use the above difference equation with n = 4 to approximate the solution of the BVP

    y'' − 4y = 0, y(0) = 0, y(1) = 5.

    Answer. Here h = (1 − 0)/4 = 0.25 and x_i = 0.25i, i = 0, 1, 2, 3, 4. With P_i = 0, Q_i = −4 and f_i = 0, the difference equation is

    y_{i+1} + ((0.25)^2(−4) − 2) y_i + y_{i−1} = 0,

    that is,

    y_{i+1} − 2.25y_i + y_{i−1} = 0, i = 1, 2, 3.

    Writing this out:

    y_2 − 2.25y_1 + y_0 = 0
    y_3 − 2.25y_2 + y_1 = 0
    y_4 − 2.25y_3 + y_2 = 0

    With the BCs y_0 = 0 and y_4 = 5, the above system becomes

    y_2 − 2.25y_1 = 0
    y_3 − 2.25y_2 + y_1 = 0
    −2.25y_3 + y_2 = −5

    Solving the system gives y_1 = 0.7256, y_2 = 1.6327, and y_3 = 2.9479.

    Notes: We can improve the accuracy by using a smaller h. But for that we have to pay a price, i.e. we have to solve a larger system of equations.
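The tridiagonal system of Example 17 can be solved with the Thomas algorithm; this sketch (helper names ours) reproduces the values above:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a is the sub-diagonal (a[0] unused),
    b the main diagonal, c the super-diagonal (c[-1] unused), d the RHS."""
    n = len(b)
    b, d = b[:], d[:]                 # work on copies
    for i in range(1, n):             # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):    # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# y_{i+1} - 2.25 y_i + y_{i-1} = 0 for i = 1, 2, 3, with the boundary
# values y0 = 0 and y4 = 5 moved to the right-hand side:
sub  = [0.0, 1.0, 1.0]          # coefficient of y_{i-1}
main = [-2.25, -2.25, -2.25]    # coefficient of y_i
sup  = [1.0, 1.0, 0.0]          # coefficient of y_{i+1}
rhs  = [0.0, 0.0, -5.0]
y1, y2, y3 = thomas(sub, main, sup, rhs)
print(y1, y2, y3)               # ≈ 0.7256, 1.6327, 2.9479
```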


  • Chapter 5

    Some Topics In Linear Algebra

    5.1 Iterative Methods For Solving Linear Systems

    The iterative methods start with an initial approximation to a solution and then generate a succession of better and better approximations that (may) tend toward an exact solution. We shall study the following two iterative methods:

    (a) Jacobi iteration: The order in which the equations are examined is irrelevant, since the Jacobi method treats them independently. For this reason, the Jacobi method is also known as the method of simultaneous corrections, since the updates could in principle be done simultaneously.

    (b) Gauss-Seidel iteration: this is a method of successive corrections. It is very similar to the Jacobi technique except that it replaces approximations by corresponding new ones as soon as the latter are available.

    Definition 5.1.1. An n × n matrix A = [a_ij] is strictly diagonally dominant if

    |a_kk| > Σ_{j≠k} |a_kj|, for all k = 1, 2, ..., n.

    That is, A is strictly diagonally dominant if the absolute value of each diagonal entry is greater than the sum of the absolute values of the remaining entries in the same row.


  • Example 18. Show that

    A = [  2  -7   4 ]
        [  8   1   6 ]
        [ -3   5   9 ]

    is not strictly diagonally dominant.

    Answer. Since in the first row, |a_11| = 2 < |a_12| + |a_13| = 7 + 4 = 11, and in the second row, |a_22| = 1 < |a_21| + |a_23| = 8 + 6 = 14, matrix A is not strictly diagonally dominant.

    However, if we interchange the first and second rows, the resulting matrix

    B = [  8   1   6 ]
        [  2  -7   4 ]
        [ -3   5   9 ]

    is strictly diagonally dominant since

    |a_11| = 8 > |a_12| + |a_13| = 1 + 6 = 7,
    |a_22| = 7 > |a_21| + |a_23| = 2 + 4 = 6,
    |a_33| = 9 > |a_31| + |a_32| = 3 + 5 = 8.

    Theorem 5.1.1 (Convergence of the Iterative Methods). If the square matrix A is strictly diagonally dominant, then the Gauss-Seidel and Jacobi approximations to the solution of the linear system Ax = b both converge to the exact solution for all choices of the initial approximation.

    Remark (Termination Criterion). We can stop the computation when

    |x_i^(p+1) − x_i^(p)| < ε

    for all i, where ε is the pre-specified stopping criterion.

    Example 19. Use the Jacobi Iteration Technique to solve

    x_1 − 10x_2 + x_3 = 13
    20x_1 + x_2 − x_3 = 17
    −x_1 + x_2 + 10x_3 = 18

    Iterate until |x_i^(p+1) − x_i^(p)| < 0.0002 for all i. Carry all the computations to 5 decimal places.
    Answer.

    (i) To ensure the convergence of this method, we rearrange the equations to obtain a strictly diagonally dominant system:

    20x_1 + x_2 − x_3 = 17
    x_1 − 10x_2 + x_3 = 13
    −x_1 + x_2 + 10x_3 = 18


    (ii) Make each diagonal element the subject of its equation:

    x_1 = (1/20)(17 − x_2 + x_3)
    x_2 = (1/10)(−13 + x_1 + x_3)     (*)
    x_3 = (1/10)(18 + x_1 − x_2)

    (iii) Start with a reasonable initial approximation to the solution: e.g. x_1^(0) = 0, x_2^(0) = 0, x_3^(0) = 0.

    (iv) Substitute this initial approximation into the RHS of (*), and calculate the new approximation

    x_1^(p+1) = (1/20)(17 − x_2^(p) + x_3^(p))
    x_2^(p+1) = (1/10)(−13 + x_1^(p) + x_3^(p))
    x_3^(p+1) = (1/10)(18 + x_1^(p) − x_2^(p))

    where x_i^(p) is the pth iterate of the approximation to x_i. That is, the new approximation is

    x_1^(1) = (1/20)(17 − x_2^(0) + x_3^(0)) = 0.850
    x_2^(1) = (1/10)(−13 + x_1^(0) + x_3^(0)) = −1.3
    x_3^(1) = (1/10)(18 + x_1^(0) − x_2^(0)) = 1.8

    (v) To improve the approximation, we can repeat the substitution process. The next approximation is

    x_1^(2) = (1/20)(17 − (−1.3) + 1.8) = 1.005
    x_2^(2) = (1/10)(−13 + 0.85 + 1.8) = −1.035
    x_3^(2) = (1/10)(18 + 0.85 − (−1.3)) = 2.015

    (vi) As |x_i^(6) − x_i^(5)| < 0.0002 for all i, we stop the computation. The results obtained are summarized in the following table:

    m          0   1       2       3        4         5         6
    x_1^(m)    0   0.850   1.005   1.0025   1.0001    0.99997   1.00000
    x_2^(m)    0   -1.3    -1.035  -0.9980  -0.99935  -0.99999  -1.00000
    x_3^(m)    0   1.8     2.015   2.004    2.00005   1.99995   2.00000
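The Jacobi sweep (*) above is a direct translation into code; a sketch reproducing the table:

```python
# Jacobi iteration for the rearranged strictly diagonally dominant system,
# starting from (0, 0, 0) and stopping when every component changes by
# less than 0.0002.
def jacobi(x1, x2, x3, tol=0.0002, max_iter=100):
    for _ in range(max_iter):
        x1_new = (17 - x2 + x3) / 20
        x2_new = (-13 + x1 + x3) / 10
        x3_new = (18 + x1 - x2) / 10
        if max(abs(x1_new - x1), abs(x2_new - x2), abs(x3_new - x3)) < tol:
            return x1_new, x2_new, x3_new
        x1, x2, x3 = x1_new, x2_new, x3_new
    raise RuntimeError("did not converge")

print(jacobi(0.0, 0.0, 0.0))   # ≈ (1, -1, 2), as in the table above
```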


  • Example 20. Use the Gauss-Seidel Method to solve

    x_1 − 10x_2 + x_3 = 13
    20x_1 + x_2 − x_3 = 17
    −x_1 + x_2 + 10x_3 = 18

    Iterate until |x_i^(p+1) − x_i^(p)| < 0.0002 for all i. Carry all the computations to 5 decimal places.
    Answer.

    (i) Make sure the matrix is diagonally dominant (rearrange it if necessary). Rearranging the equations leads to

    20x_1 + x_2 − x_3 = 17
    x_1 − 10x_2 + x_3 = 13
    −x_1 + x_2 + 10x_3 = 18

    which is strictly diagonally dominant.

    (ii) Make each diagonal element the subject of its equation:

    x_1 = (1/20)(17 − x_2 + x_3)
    x_2 = (1/10)(−13 + x_1 + x_3)     (*)
    x_3 = (1/10)(18 + x_1 − x_2)

    (iii) Start with a reasonable initial approximation to the solution: e.g. x_1^(0) = 0, x_2^(0) = 0, x_3^(0) = 0.

    (iv) Substitute this initial approximation into the RHS of (*), and calculate the new approximation

    x_1^(p+1) = (1/20)(17 − x_2^(p) + x_3^(p))
    x_2^(p+1) = (1/10)(−13 + x_1^(p+1) + x_3^(p))
    x_3^(p+1) = (1/10)(18 + x_1^(p+1) − x_2^(p+1))

    where x_i^(p) is the pth iterate of the approximation to x_i. That is, the new approximation is

    x_1^(1) = (1/20)(17 − x_2^(0) + x_3^(0)) = 0.850
    x_2^(1) = (1/10)(−13 + x_1^(1) + x_3^(0)) = −1.215
    x_3^(1) = (1/10)(18 + x_1^(1) − x_2^(1)) = 2.0065

    (v) To improve the approximation, we can repeat the substitution process. The next approximation is

    x_1^(2) = (1/20)(17 − (−1.215) + 2.0065) = 1.01108
    x_2^(2) = (1/10)(−13 + 1.01108 + 2.0065) = −0.99824
    x_3^(2) = (1/10)(18 + 1.01108 − (−0.99824)) = 2.00093

    We summarize the results obtained in the following table:

    m          0   1        2         3         4
    x_1^(m)    0   0.850    1.01108   0.99996   1.00000
    x_2^(m)    0   -1.215   -0.99824  -0.99991  -1.00000
    x_3^(m)    0   2.0065   2.00093   1.99999   2.00000

    Note 3:

    (a) The Gauss-Seidel technique is not appropriate for use on vector computers, as the set of equations must be solved in series.

    (b) The Gauss-Seidel technique requires less storage than the Jacobi technique and leads to a convergent solution almost twice as fast.
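A sketch of the Gauss-Seidel sweep of Example 20: it is identical to the Jacobi sweep except that each new value is used immediately within the same sweep.

```python
# Gauss-Seidel iteration for the rearranged system of Example 20,
# starting from (0, 0, 0) with stopping criterion 0.0002.
def gauss_seidel(x1, x2, x3, tol=0.0002, max_iter=100):
    for _ in range(max_iter):
        x1_new = (17 - x2 + x3) / 20
        x2_new = (-13 + x1_new + x3) / 10      # uses x1_new straight away
        x3_new = (18 + x1_new - x2_new) / 10   # uses x1_new and x2_new
        if max(abs(x1_new - x1), abs(x2_new - x2), abs(x3_new - x3)) < tol:
            return x1_new, x2_new, x3_new
        x1, x2, x3 = x1_new, x2_new, x3_new
    raise RuntimeError("did not converge")

print(gauss_seidel(0.0, 0.0, 0.0))   # ≈ (1, -1, 2) in fewer sweeps than Jacobi
```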


  • 5.2 A Review On Eigenvalues

    Definition 5.2.1. Let A be an n × n matrix. A scalar λ is called an eigenvalue of A if there exists a nonzero vector x ∈ ℝ^n such that

    Ax = λx.

    In this case, the nonzero vector x is called an eigenvector of A corresponding to λ.

    Example 21. Let

    A = [ 2  3 ]
        [ 1  4 ]

    Verify that v_1 = [1, 1]^T is an eigenvector of A and find the corresponding eigenvalue.

    Answer. Av_1 = [2 + 3, 1 + 4]^T = [5, 5]^T = 5v_1 ⇒ v_1 = [1, 1]^T is an eigenvector of A corresponding to the eigenvalue λ = 5.

    Note 4:

    (a) To find the eigenvalues of A, we solve the characteristic equation

    |A − λI| = 0

    for λ.
    (b) To find the eigenvectors corresponding to λ, we solve the linear system

    (A − λI)x = 0

    for nonzero x.

    Exercise 4. Find the eigenvalues and eigenvectors of

    A = [ 2  3 ]
        [ 1  4 ]

    Answer. λ_1 = 1, λ_2 = 5; v_1 = [−3, 1]^T, v_2 = [1, 1]^T.

    Definition 5.2.2. A square matrix A is called diagonalizable if there is an invertible matrix P such that P^(−1)AP is a diagonal matrix; the matrix P is said to diagonalize A.

    Exercise 5. Let A and P be the given 2 × 2 matrices. Verify that P diagonalizes A.
    Answer. Computing the product P^(−1)AP gives a diagonal matrix, so P diagonalizes A.

    Theorem 5.2.1. Let A be an n × n matrix.
    (a) A is diagonalizable if and only if it has n linearly independent eigenvectors.
    (b) These n linearly independent eigenvectors form a basis of ℝ^n.


  • 5.3 Approximation of Eigenvalues

    Definition 5.3.1. An eigenvalue of an n × n matrix A is called the dominant eigenvalue of A if its absolute value is larger than the absolute values of the remaining n − 1 eigenvalues. An eigenvector corresponding to the dominant eigenvalue is called a dominant eigenvector of A.

    Example 22.

    (a) If a 4 × 4 matrix A has eigenvalues λ_1 = −5, λ_2, λ_3, λ_4 with

    |λ_1| > |λ_i| for all i ≠ 1,

    then λ_1 = −5 is the dominant eigenvalue.

    (b) A 3 × 3 matrix B with eigenvalues

    λ_1 = 2, λ_2 = −5, λ_3 = 5

    has no dominant eigenvalue.

    5.3.1 The Power Method

Theorem 5.3.1 (The Power Method (or Direct Iteration Method)). Let A be a diagonalizable n × n matrix with eigenvalues satisfying |λ₁| > |λ₂| ≥ … ≥ |λₙ|. Assume that v₁, …, vₙ are unit eigenvectors of A associated with λ₁, λ₂, …, λₙ respectively. Let x₀ be a nonzero vector in ℝⁿ that is an initial guess for the dominant eigenvector v₁. Then the vector xₖ = Aᵏx₀ is a good approximation to a dominant eigenvector of A when the exponent k is sufficiently large.

Note 5:

(a) x₀ is an initial guess or approximation for the dominant eigenvector.

(b) If xₖ = Aᵏx₀ is an approximation to the dominant eigenvector, then the dominant eigenvalue λ₁ can be approximated by the Rayleigh quotient

λ₁ ≈ (Axₖ · xₖ)/(xₖ · xₖ).

(c) The rate of convergence depends on the ratios |λ₂/λ₁|, …, |λₙ/λ₁|. The smaller the ratios, the faster the rate of convergence.


Example 23. Let A = [4 -2; 3 -1]. Use the power method to approximate the dominant eigenvalue and a corresponding eigenvector with x₀ = [1; 0]. Perform 6 iterations.

Answer. We compute xₘ₊₁ = Axₘ for m = 0, 1, 2, ….

x₁ = Ax₀ = [4; 3] = 4[1; 0.75], x₂ = Ax₁ = [10; 9] = 10[1; 0.9], x₃ = Ax₂ = [22; 21] ≈ 22[1; 0.9545], x₄ = Ax₃ = [46; 45] ≈ 46[1; 0.9783], x₅ = Ax₄ = [94; 93] ≈ 94[1; 0.9894], x₆ = Ax₅ = [190; 189] ≈ 190[1; 0.9947].

So x₆ = [190; 189] is an approximation to a dominant eigenvector, and the dominant eigenvalue

λ₁ ≈ (Ax₆ · x₆)/(x₆ · x₆) = ([382; 381] · [190; 189])/([190; 189] · [190; 189]) = 144589/71821 ≈ 2.0132.

Remark.

(a) From the above calculations, it is clear that the vectors xₘ are getting closer and closer to scalar multiples of [1; 1], which is a dominant eigenvector of A. It can also be checked that λ₁ = 2.

(b) The convergence is rather slow.

(c) Termination Criteria. Let λ(i) denote the approximation to the eigenvalue λ at the ith step. We can stop the computation once the relative error

|λ(i) - λ| / |λ|

is less than the pre-specified error criterion ε. Unfortunately, the actual value λ is usually unknown. So instead, we will stop the computation at the ith step if the estimated relative error satisfies

|λ(i) - λ(i-1)| / |λ(i)| < ε.

The value obtained by multiplying the estimated relative error by 100% is called the estimated percentage error.


5.3.2 Power Method with Scaling

Theorem 5.3.2. The power method often generates a sequence of vectors {Aᵏx₀} that have inconveniently large entries. This can be avoided by scaling the iterative vector at each step. That is, we multiply Ax₀ = [x₁, …, xₙ]ᵀ by 1/max{|x₁|, |x₂|, …, |xₙ|} and label the resulting vector x₁. We repeat the process with the vector Ax₁ to obtain the scaled-down vector x₂, and so on.

The algorithm for the Power Method with Scaling is as follows:

1. Compute yₖ = Axₖ₋₁.

2. Set xₖ = yₖ / max{|yₖ|}.

3. Dominant eigenvalue of matrix A ≈ (Axₖ · xₖ)/(xₖ · xₖ).

Example 24. Repeat the iterations of Example 23 using the power method with scaling.

Answer.

(i) y₁ = Ax₀ = [4; 3]. We define x₁ = (1/4)y₁ = (1/4)[4; 3] = [1; 0.75].

(ii) y₂ = Ax₁ = [4 -2; 3 -1][1; 0.75] = [2.5; 2.25]. We define x₂ = (1/2.5)y₂ = (1/2.5)[2.5; 2.25] = [1; 0.9].

Continuing in this manner, we obtain x₆ = [1; 0.9947].

Finally, the dominant eigenvalue

λ₁ ≈ (Ax₆ · x₆)/(x₆ · x₆) = ([2.0106; 2.0053] · [1; 0.9947])/([1; 0.9947] · [1; 0.9947]) = 4.0053/1.9894 ≈ 2.0133.

Observe that the sequence of vectors {xₖ} is approaching the eigenvector [1; 1].
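The scaled iteration of Examples 23-24 can be sketched in Python (a minimal sketch; NumPy and the helper's name are our own choices, not part of the notes):

```python
import numpy as np

def power_method_scaled(A, x0, iters):
    """Power method with scaling: multiply by A, then rescale so the
    largest entry (in absolute value) of the iterate is 1."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        y = A @ x
        x = y / np.max(np.abs(y))        # step 2: scale by max{|y_k|}
    # step 3: the Rayleigh quotient approximates the dominant eigenvalue
    lam = (A @ x) @ x / (x @ x)
    return lam, x

A = np.array([[4.0, -2.0], [3.0, -1.0]])   # the matrix of Example 24
lam, v = power_method_scaled(A, [1.0, 0.0], 6)
# lam ≈ 2.013 and v ≈ [1, 0.9947] after 6 iterations, as in Example 24
```

The scaling keeps every iterate's entries in [-1, 1], so overflow cannot occur no matter how many iterations are run.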


5.3.3 Inverse Power Method with Scaling

Theorem 5.3.3. If, in addition, A is an invertible matrix, then |λ₁| > |λ₂| ≥ |λ₃| ≥ … ≥ |λₙ| > 0. Hence 1/λₙ will be the dominant eigenvalue of A⁻¹. We could apply the power method to A⁻¹ to find 1/λₙ and hence λₙ. This is called the inverse power method.

The algorithm for the Inverse Power Method with Scaling is as follows:

1. Compute B = A⁻¹.

2. Compute yₖ = Bxₖ₋₁.

3. Set xₖ = yₖ / max{|yₖ|}.

4. Dominant eigenvalue of matrix B = A⁻¹ ≈ (Bxₖ · xₖ)/(xₖ · xₖ) = μ.

5. Smallest eigenvalue (in absolute value) of A ≈ 1/μ.

Example 25. Let λ₁ and λ₂ be the eigenvalues of A = [2 3; 4 1] such that |λ₁| > |λ₂|.

(a) Use an appropriate power method with scaling to approximate an eigenvector corresponding to λ₂. Start with the initial approximation x₀ = [-0.7; 1]. Round off all computations to four decimal places, and stop after two iterations.

(b) Use the result of part (a) to approximate λ₂.

(c) Find the estimated percentage error in the approximation of λ₂.

Answer.

(a) We use the inverse power method with scaling to find λ₂, the smallest eigenvalue of A in absolute value.

B = A⁻¹ = [-0.1 0.3; 0.4 -0.2]

Iteration 1: y₁ = Bx₀ = [0.37; -0.48], x₁ = y₁/0.48 = [0.7708; -1].

Iteration 2: y₂ = Bx₁ = [-0.3771; 0.5083], x₂ = y₂/0.5083 = [-0.7419; 1] is an approximation to the required eigenvector.

(b) An approximation to λ₂ is the second approximation λ₂(2) = (x₂ · x₂)/(Bx₂ · x₂) = -2.0020.

(c) The first approximation of λ₂ is λ₂(1) = (x₁ · x₁)/(Bx₁ · x₁) = -1.9952.

The estimated percentage error is |λ₂(2) - λ₂(1)|/|λ₂(2)| × 100% ≈ 0.3397%.
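The five algorithmic steps can be sketched as follows (for brevity the sketch inverts A with `np.linalg.inv`; a production code would factor A once and back-substitute instead). The 2 × 2 matrix below has eigenvalues 5 and -2, consistent with the inverse B used in Example 25:

```python
import numpy as np

def inverse_power_scaled(A, x0, iters):
    """Inverse power method with scaling: scaled power iteration on
    B = A^{-1}; 1/mu estimates the eigenvalue of A of smallest
    absolute value."""
    B = np.linalg.inv(A)                 # step 1
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        y = B @ x                        # step 2
        x = y / np.max(np.abs(y))        # step 3
    mu = (B @ x) @ x / (x @ x)           # step 4: dominant eigenvalue of B
    return 1.0 / mu, x                   # step 5

A = np.array([[2.0, 3.0], [4.0, 1.0]])   # eigenvalues 5 and -2
lam, v = inverse_power_scaled(A, [-0.7, 1.0], 2)
# lam ≈ -2.002 after two iterations (cf. Example 25)
```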


5.3.4 Shifted Inverse Power Method with Scaling

Theorem 5.3.4. Let A be a diagonalizable n × n matrix with eigenvalues λ₁, …, λₙ and corresponding eigenvectors v₁, …, vₙ.

This method can be used to find any eigenvalue and eigenvector of A. Let α be any number and let λₖ be the eigenvalue of A closest to α. The inverse power iteration with A - αI will converge to (λₖ - α)⁻¹ and a multiple of vₖ.

The algorithm for the Shifted Inverse Power Method with Scaling is as follows:

1. Compute C = (A - αI)⁻¹.

2. Compute yₖ = Cxₖ₋₁.

3. Set xₖ = yₖ / max{|yₖ|}.

4. Dominant eigenvalue of matrix C = (A - αI)⁻¹ ≈ (Cxₖ · xₖ)/(xₖ · xₖ) = β.

5. Eigenvalue of A closest to α ≈ 1/β + α.

Example 26. Apply the shifted inverse power method with scaling (2 steps) to A = [7 -2; 3 -1] to find the eigenvalue nearest to α = 6. Start with the initial guess x₀ = [1; 0].

Answer. A - αI = A - 6I = [1 -2; 3 -7]. C = (A - 6I)⁻¹ = [7 -2; 3 -1].

Iteration 1: y₁ = Cx₀ = [7; 3], x₁ = (1/7)y₁ = [1; 0.4286].

Iteration 2: y₂ = Cx₁ = [6.1428; 2.5714], x₂ = (1/6.1428)y₂ = [1; 0.4186] is an approximation to an eigenvector corresponding to the required eigenvalue, say λ.

Let β be the dominant eigenvalue of C. Then β ≈ (Cx₂ · x₂)/(x₂ · x₂) = 7.2433/1.1752 ≈ 6.1634, since Cx₂ = [6.1628; 2.5814].

Hence λ ≈ α + 1/β = 6 + 1/6.1634 ≈ 6.1622.
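A sketch of the shifted algorithm (the 2 × 2 matrix below has eigenvalues 3 ± √10, consistent with the numbers in Example 26):

```python
import numpy as np

def shifted_inverse_power(A, alpha, x0, iters):
    """Shifted inverse power method with scaling: scaled power
    iteration on C = (A - alpha I)^{-1} converges toward the
    eigenvalue of A closest to alpha."""
    C = np.linalg.inv(A - alpha * np.eye(A.shape[0]))   # step 1
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        y = C @ x                        # step 2
        x = y / np.max(np.abs(y))        # step 3
    beta = (C @ x) @ x / (x @ x)         # step 4
    return 1.0 / beta + alpha, x         # step 5

A = np.array([[7.0, -2.0], [3.0, -1.0]])
lam, v = shifted_inverse_power(A, 6.0, [1.0, 0.0], 2)
# lam ≈ 6.1622, close to the exact 3 + sqrt(10) ≈ 6.16228 (cf. Example 26)
```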


Remark.

(a) The Power Method converges rather slowly. Shifting can improve the rate of convergence.

(b) If we have some knowledge of what the eigenvalues of A are, then this method can be used to find any eigenvalue and eigenvector of A.

(c) We can estimate the eigenvalues of A by using Gerschgorin's Theorem.

Theorem 5.3.5 (Gerschgorin's Theorem). Let λ be an eigenvalue of an n × n matrix A. Then for some integer i (1 ≤ i ≤ n), we have

|λ - aᵢᵢ| ≤ rᵢ = |aᵢ₁| + … + |aᵢ,ᵢ₋₁| + |aᵢ,ᵢ₊₁| + … + |aᵢₙ|.

That is, the eigenvalues of A lie in the union of the n discs with radius rᵢ centered at aᵢᵢ. Furthermore, if a union of k of these n discs forms a connected region that is disjoint from all the remaining n - k discs, then there are precisely k eigenvalues of A in this region.

Example 27. Draw the Gerschgorin discs corresponding to

A = [7 4 -4; 4 -8 -1; -4 -1 -8]

What can be concluded about the eigenvalues of A?

Answer. The Gerschgorin discs are

D₁: center 7, radius 8

D₂ = D₃: center -8, radius 5

(i) The eigenvalues of A must lie in D₁ ∪ D₂.

(ii) A is symmetric, so all the eigenvalues are real numbers.

(iii) Hence -1 ≤ λ₁ ≤ 15 and -13 ≤ λ₂, λ₃ ≤ -3.
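The disc centres and radii are cheap to compute (a small sketch; the symmetric matrix below reproduces the discs of Example 27, with the signs of the off-diagonal 4s being our assumption, since they do not affect the radii):

```python
import numpy as np

def gerschgorin_discs(A):
    """Return a list of (centre, radius) pairs, one disc per row:
    centre a_ii, radius = sum of |off-diagonal entries| in row i."""
    return [(A[i, i], np.sum(np.abs(A[i])) - abs(A[i, i]))
            for i in range(A.shape[0])]

A = np.array([[ 7.0,  4.0, -4.0],
              [ 4.0, -8.0, -1.0],
              [-4.0, -1.0, -8.0]])
discs = gerschgorin_discs(A)
# discs: (7, 8) and the repeated disc (-8, 5), as in Example 27
```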


Chapter 6

    Optimization

    6.1 Direct Search Methods

Direct search methods apply primarily to strictly unimodal 1-variable functions. The idea of these methods is to identify the interval of uncertainty that contains the optimal solution point. The procedure locates the optimum by iteratively narrowing the interval of uncertainty to any desired level of accuracy. We will discuss only the golden section method. This method is used to find the maximum value of a unimodal function f(x) over a given interval [a, b].

Definition 6.1.1. A function f(x) is unimodal on [a, b] if it has exactly one maximum (or minimum) on [a, b].

6.1.1 Golden Section Search (Maximum)

The general steps for this method are as follows. Let f(x) be a unimodal function over a given interval [a, b] with the optimal point x*. Let Iₖ₋₁ = (x_L, x_R) be the current interval of uncertainty, with I₀ = [a, b]. Define

x₁ = x_R - r(x_R - x_L), x₂ = x_L + r(x_R - x_L),

where r = (√5 - 1)/2, the golden ratio. (Clearly, x_L < x₁ < x₂ < x_R.) The next interval of uncertainty Iₖ is determined in the following way:

(i) If f(x₁) > f(x₂), then x_L < x* < x₂. Set x_R = x₂ and Iₖ = [x_L, x₂].

(ii) If f(x₁) < f(x₂), then x₁ < x* < x_R. Set x_L = x₁ and Iₖ = [x₁, x_R].

(iii) If f(x₁) = f(x₂), then x₁ < x* < x₂. Set x_L = x₁, x_R = x₂, and Iₖ = [x₁, x₂].

Remark.

(a) The choice of x₁ and x₂ ensures that Iₖ ⊂ Iₖ₋₁.

(b) Let Lₖ = ‖Iₖ‖, the length of Iₖ. Then the algorithm terminates at iteration k if Lₖ < ε, the user-specified level of accuracy.

(c) It can be seen that Lₖ = rLₖ₋₁ and Lₖ = rᵏ(b - a). Thus, the algorithm will terminate after k iterations, where Lₖ = rᵏ(b - a) < ε.


Example 28. Find the maximum value of f(x) = 3 + 6x - 4x² on the interval [0, 1] using the golden section search method, with the final interval of uncertainty having a length less than 0.25.

Answer. Solving

rᵏ(b - a) < 0.25

for the number k of iterations that must be performed, we obtain k > 2.88. Thus 3 iterations of golden section search must be performed.

Iteration 1: x_L = 0, x_R = 1

⇒ x₁ = x_R - r(x_R - x_L) = 0.381966, x₂ = x_L + r(x_R - x_L) = 0.618034

f(x₁) = 4.708204 < f(x₂) = 5.180340 ⇒ take x_L = x₁ = 0.381966 with the same x_R.

Iteration 2: x_L = 0.381966, x_R = 1

⇒ x₁ = 0.618034, x₂ = 0.763932

f(x₁) = 5.180340 < f(x₂) = 5.249224 ⇒ take x_L = x₁ = 0.618034 with the same x_R, i.e., x_R = 1.

Iteration 3: x_L = 0.618034, x_R = 1

⇒ x₁ = 0.763932, x₂ = 0.854102

f(x₁) = 5.249224 > f(x₂) = 5.206651 ⇒ x_L remains the same and x_R = x₂ = 0.854102, i.e., x_L = 0.618034, x_R = 0.854102.

Thus I₃ = [0.618034, 0.854102] and L₃ = 0.854102 - 0.618034 = 0.236068 < 0.25 as wanted. So the required maximum point lies within I₃ = [0.618034, 0.854102].
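The search loop can be sketched in Python (a minimal version; it re-evaluates f at both interior points each pass, whereas a careful implementation reuses one of the two function values per iteration):

```python
import math

def golden_section_max(f, a, b, eps):
    """Shrink [a, b] by the golden ratio until its length is below
    eps; returns the final interval of uncertainty."""
    r = (math.sqrt(5) - 1) / 2           # golden ratio, r ≈ 0.618034
    xL, xR = a, b
    while xR - xL >= eps:
        x1 = xR - r * (xR - xL)
        x2 = xL + r * (xR - xL)
        if f(x1) > f(x2):
            xR = x2                      # case (i): maximum in [xL, x2]
        else:
            xL = x1                      # cases (ii)/(iii): maximum in [x1, xR]
    return xL, xR

f = lambda x: 3 + 6 * x - 4 * x ** 2     # the function of Example 28
a, b = golden_section_max(f, 0.0, 1.0, 0.25)
# (a, b) ≈ (0.618034, 0.854102), the interval I3 found in Example 28
```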


6.1.2 Gradient Method

Review. Recall that the gradient vector of a function f at a point x,

∇f(x),

points in the direction of maximum increase of f (the direction of steepest ascent), and

-∇f(x)

points in the direction of maximum decrease (the direction of steepest descent).

Theorem 6.1.1 (Method of Steepest Ascent / Gradient Method). This is an algorithm for finding the nearest local maximum of a twice continuously differentiable function f(x), which presupposes that the gradient of the function can be computed. The method of steepest ascent, also called the gradient method, starts at a point x₀ and, as many times as needed, moves from xₖ to xₖ₊₁ by maximizing along the line extending from xₖ in the direction of ∇f(xₖ), the local uphill gradient. That is, setting xₖ₊₁(t) = xₖ + t∇f(xₖ), we determine the value of t, and the corresponding point, at which the function

g(t) = f(xₖ₊₁(t))

has a maximum. We take xₖ₊₁ as the next approximation after xₖ.

Remark: This method has the severe drawback of requiring a great many iterations (hence slow convergence) for functions which have long, narrow valley structures.

Example 29. Use the method of steepest ascent to determine a maximum of f(x, y) = -x² - y², starting from the point x₀ = (1, 1).

Answer. ∇f(x) = (-2x, -2y).

(i) At x₀ = (1, 1), ∇f = (-2, -2). The new approximation is x₁ = x₀ + t∇f(x₀), where t is a value that will maximize the function

g(t) = f(x₀ + t∇f(x₀)) = f(1 - 2t, 1 - 2t) = -(1 - 2t)² - (1 - 2t)².

Solving g′(t) = 0, we obtain 4(1 - 2t) + 4(1 - 2t) = 0 ⇒ t = 1/2.

Hence x₁ = (1, 1) + (1/2)(-2, -2) = (0, 0). Now ∇f(x₁) = ∇f(0, 0) = (0, 0), and we terminate the algorithm. So the maximum point is x₁ = (0, 0), with maximum value f(0, 0) = 0.
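A sketch of the gradient method, with the one-dimensional maximization over t done numerically by a golden-section search on [0, 1] (that bracket is our assumption; Example 29 solved g′(t) = 0 exactly instead):

```python
import numpy as np

def steepest_ascent(f, grad, x0, steps=25, tol=1e-8):
    """Gradient method: from x_k, maximize f along x_k + t*grad f(x_k).
    The 1-D maximization over t in [0, 1] uses a golden-section search."""
    r = (5 ** 0.5 - 1) / 2
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:      # gradient ≈ 0: stationary point
            break
        lo, hi = 0.0, 1.0
        while hi - lo > 1e-12:           # golden-section search for t
            t1, t2 = hi - r * (hi - lo), lo + r * (hi - lo)
            if f(x + t1 * g) > f(x + t2 * g):
                hi = t2
            else:
                lo = t1
        x = x + 0.5 * (lo + hi) * g
    return x

f = lambda p: -p[0] ** 2 - p[1] ** 2     # Example 29
grad = lambda p: np.array([-2.0 * p[0], -2.0 * p[1]])
x = steepest_ascent(f, grad, [1.0, 1.0])   # converges to (0, 0)
```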


Reading Assignment 6.1.2. Starting at the point (0, 0), use one iteration of the steepest-descent algorithm to approximate the minimum value of the function

f(x, y) = 2(x + y)² + (x - y)² + 3x + 2y.

Answer.

fₓ = 4(x + y) + 2(x - y) + 3 = 6x + 2y + 3

f_y = 4(x + y) - 2(x - y) + 2 = 2x + 6y + 2

∇f(x, y) = (fₓ, f_y) = (6x + 2y + 3, 2x + 6y + 2)

Starting with the initial approximation x₀ = (0, 0), we obtain the next approximation x₁ = x₀ + t∇f(x₀) = (0, 0) + t(3, 2) = (3t, 2t), since ∇f(x₀) = ∇f(0, 0) = (3, 2).

Find t that minimizes g(t) = f(x₁) = f(3t, 2t) = 2(5t)² + (t)² + 3(3t) + 2(2t) = 51t² + 13t.

Solve g′(t) = 102t + 13 = 0 ⇒ t = -13/102 ≈ -0.127.

Hence x₁ = (-0.382, -0.255) and f(x₁) = f(-0.382, -0.255) = -0.828 ≈ minimum value of f.


Chapter 7

Numerical Methods For Ordinary Differential Equations

7.1 First-Order Initial-Value Problems

7.1.1 Euler's Method

Given the initial-value problem

dy/dx = f(x, y), y(x₀) = y₀,

Euler's method with step size h consists in using the iterative formula

yₙ₊₁ = yₙ + h f(xₙ, yₙ)

to approximate the solution y(x) of the IVP at the mesh points

xₙ₊₁ = xₙ + h = x₀ + (n + 1)h, n = 0, 1, 2, ….

Note 6 (Truncation error in Euler's Method):

(a) As the truncation is performed after the first derivative term of the Taylor series of y(x), Euler's method is a first order method with local TE = O(h²). This error is a local error, as it applies at each and every step as the solution develops.

(b) The global error, which applies to the final solution, is O(h), since the number of steps would be proportional to 1/h.


Example 30. Consider the IVP y′ = x + y, y(0) = 1.

(a) Use Euler's method to obtain a five-decimal approximation to y(0.5) using the step size h = 0.1.

(b) (i) Estimate the truncation error in your approximation to y(0.1) using the next two terms in the corresponding Taylor series.

(ii) The exact value of y(0.1) is 1.11034 (to 5 decimal places). Calculate the error between the actual value y(0.1) and your approximation y₁. How does this error compare with the truncation error you obtained in (i)?

(c) The exact value of y(0.5) is 1.79744 (to 5 decimal places). Calculate the absolute error between the actual value y(0.5) and your approximation y₅.

Answer. Here f(x, y) = x + y, x₀ = 0 and y₀ = 1, so that the Euler scheme becomes

yₙ₊₁ = yₙ + 0.1(xₙ + yₙ) = 0.1xₙ + 1.1yₙ.

(a) (i) y₁ = 0.1x₀ + 1.1y₀ = 0.1(0) + 1.1(1) = 1.1

(ii) y₂ = 0.1x₁ + 1.1y₁ = 0.1(0.1) + 1.1(1.1) = 1.22

(iii) Rounding values to 5 decimal places, the remaining values obtained in this manner are y₃ = 1.36200, y₄ = 1.52820, y₅ = 1.72102.

(iv) Therefore y(0.5) ≈ y₅ = 1.72102.

(b) (i) The truncation error is 0.01033.

(ii) The actual error is 0.01034.

(c) 0.07642
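Euler's scheme is a few lines of code (a minimal sketch, applied to the IVP of Example 30):

```python
def euler(f, x0, y0, h, n):
    """Euler's method: y_{n+1} = y_n + h f(x_n, y_n), n steps from x0."""
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)
        x = x + h
    return y

# IVP of Example 30: y' = x + y, y(0) = 1; five steps of size h = 0.1
y5 = euler(lambda x, y: x + y, 0.0, 1.0, 0.1, 5)
# y5 = 1.72102, the approximation to y(0.5) found in part (a)
```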


Example 31. Given that the exact solution of the initial-value problem

y′ = x + y, y(0) = 1

is y(x) = 2eˣ - x - 1, the following table shows (a) the approximate values obtained by using Euler's method with step size h = 0.1, (b) the approximate values with h = 0.05, and the actual values of the solution at the points x = 0.1, 0.2, 0.3, 0.4, 0.5:

xₙ  | yₙ (h = 0.1) | yₙ (h = 0.05) | exact value | abs. error (h = 0.1) | abs. error (h = 0.05)
0.0 | 1.00000 | 1.00000 | 1.00000 | 0.00000 | 0.00000
0.1 | 1.10000 | 1.10500 | 1.11034 | 0.01034 | 0.00534
0.2 | 1.22000 | 1.23101 | 1.24281 | 0.02281 | 0.01180
0.3 | 1.36200 | 1.38019 | 1.39972 | 0.03772 | 0.01953
0.4 | 1.52820 | 1.55491 | 1.58365 | 0.05545 | 0.02874
0.5 | 1.72102 | 1.75779 | 1.79744 | 0.07642 | 0.03965

Based on the above information, give some comments.

Answer. Comments:

(a) By comparing the absolute errors, we see that Euler's method is more accurate for the smaller h.

(b) The column of data for h = 0.1 requires only 5 steps, whereas 10 steps are required to reach x = 0.5 with h = 0.05. In general, more calculations are required for smaller h. As a consequence, a larger roundoff error is expected.

(c) The global error for Euler's method is O(h). It follows that if the step size h is halved, this error would be approximately halved as well. This can be seen from the above table: when the step size is halved from h = 0.1 to h = 0.05, the global error at x = 0.5 is reduced from 0.07642 to 0.03965, a reduction of approximately one half.

7.1.2 Heun's Method / Improved Euler's Method

In each step of this method, we compute first the auxiliary value

y*ₙ₊₁ = yₙ + h f(xₙ, yₙ)    (1)

and then the new value

yₙ₊₁ = yₙ + (h/2)[f(xₙ, yₙ) + f(xₙ₊₁, y*ₙ₊₁)].    (2)

Note 7:

(a) This is an example of a predictor-corrector method: it uses (1) to predict a value of y(xₙ₊₁) and then uses (2) to correct this value.

(b) The local TE for Heun's method is O(h³).

(c) The global TE for Heun's method is O(h²).


Example 32. Use the Improved Euler method with step size h = 0.1 to approximate the solution of the IVP

y′ = x + y, y(0) = 1

on the interval [0, 1]. The exact solution is y(x) = 2eˣ - x - 1. Make a table showing the approximate values and the actual values, together with the absolute errors.

Answer. Here f(x, y) = x + y, x₀ = 0 and y₀ = 1, so that

(a) y*ₙ₊₁ = yₙ + 0.1(xₙ + yₙ) = 0.1xₙ + 1.1yₙ

yₙ₊₁ = yₙ + 0.05[(xₙ + yₙ) + (xₙ₊₁ + y*ₙ₊₁)]

(b) y₁* = 0.1x₀ + 1.1y₀ = 0.1(0) + 1.1(1) = 1.1

y₁ = y₀ + 0.05[(x₀ + y₀) + (x₁ + y₁*)] = 1 + 0.05[1 + (0.1 + 1.1)] = 1.11

(c) y₂* = 1.11 + 0.1(0.1 + 1.11) = 1.231

y₂ = 1.11 + 0.05[(0.1 + 1.11) + (0.2 + 1.231)] = 1.24205

The remaining calculations are summarized in the following table:

n  | xₙ  | yₙ      | exact   | absolute error
0  | 0.0 | 1.00000 | 1.00000 | 0.00000
1  | 0.1 | 1.11000 | 1.11034 | 0.00034
2  | 0.2 | 1.24205 | 1.24281 | 0.00076
3  | 0.3 | 1.39847 | 1.39972 | 0.00125
4  | 0.4 | 1.58181 | 1.58365 | 0.00184
5  | 0.5 | 1.79490 | 1.79744 | 0.00254
6  | 0.6 | 2.04086 | 2.04424 | 0.00338
7  | 0.7 | 2.32315 | 2.32751 | 0.00436
8  | 0.8 | 2.64558 | 2.65108 | 0.00550
9  | 0.9 | 3.01237 | 3.01921 | 0.00684
10 | 1.0 | 3.42817 | 3.43656 | 0.00839

Remark.

(a) Considerable improvement in accuracy over Euler's method.

(b) For the case h = 0.1, the TE in the approximation to y(0.1) is ≈ (h³/3!)y′′′ = (0.1)³(2)/6 ≈ 0.00033, very close to the actual error 0.00034.
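The predictor-corrector pair (1)-(2) can be sketched as:

```python
def heun(f, x0, y0, h, n):
    """Heun's (improved Euler) method: Euler predictor (1),
    trapezoidal corrector (2)."""
    x, y = x0, y0
    for _ in range(n):
        y_star = y + h * f(x, y)                          # predictor (1)
        y = y + (h / 2) * (f(x, y) + f(x + h, y_star))    # corrector (2)
        x = x + h
    return y

# y' = x + y, y(0) = 1 (Example 32): two steps of h = 0.1
y2 = heun(lambda x, y: x + y, 0.0, 1.0, 0.1, 2)
# y2 = 1.24205, matching the table in Example 32
```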


7.1.3 Taylor Series Method of Order p

yₙ₊₁ = yₙ + h y′ₙ + (h²/2!) y′′ₙ + (h³/3!) y⁽³⁾ₙ + … + (hᵖ/p!) y⁽ᵖ⁾ₙ

Example 33. Consider the IVP

y′ = 2xy, y(1) = 1.

Use the Taylor series method of order 2 to approximate y(1.2) using the step size h = 0.1.

Answer. Here f(x, y) = 2xy, x₀ = 1, and y₀ = 1. The method is

yₙ₊₁ = yₙ + 0.1 y′ₙ + 0.005 y′′ₙ, n = 0, 1, 2, …, with xₙ₊₁ = xₙ + h = xₙ + 0.1.

(a) When n = 0: x₀ = 1

(i) y′ = 2xy ⇒ y′₀ = 2x₀y₀ = 2(1)(1) = 2

(ii) y′′ = 2y + 2xy′ ⇒ y′′₀ = 2y₀ + 2x₀y′₀ = 2(1) + 2(1)(2) = 6

∴ y₁ = y₀ + 0.1y′₀ + 0.005y′′₀ = 1 + 0.1(2) + 0.005(6) = 1.23

(b) When n = 1: x₁ = x₀ + 0.1 = 1.1

(i) y′ = 2xy ⇒ y′₁ = 2x₁y₁ = 2(1.1)(1.23) = 2.706

(ii) y′′ = 2y + 2xy′ ⇒ y′′₁ = 2y₁ + 2x₁y′₁ = 2(1.23) + 2(1.1)(2.706) = 8.4132

∴ y₂ = y₁ + 0.1y′₁ + 0.005y′′₁ = 1.23 + 0.1(2.706) + 0.005(8.4132) = 1.542666 ≈ y(x₂) = y(1.2)
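A sketch of the order-2 Taylor method specialized to this equation (each new right-hand side f needs its own hand-derived y′′, here y′′ = 2y + 2xy′ by the product rule):

```python
def taylor2_2xy(x0, y0, h, n):
    """Order-2 Taylor method specialized to y' = 2xy."""
    x, y = x0, y0
    for _ in range(n):
        dy = 2 * x * y                   # y'  at (x_n, y_n)
        d2y = 2 * y + 2 * x * dy         # y'' at (x_n, y_n)
        y = y + h * dy + (h ** 2 / 2) * d2y
        x = x + h
    return y

y12 = taylor2_2xy(1.0, 1.0, 0.1, 2)      # approximates y(1.2)
# y12 = 1.542666, as computed in Example 33
```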

7.1.4 Runge-Kutta Methods of Order p

(a) These methods are all of the form

yₙ₊₁ = yₙ + (a₁k₁ + a₂k₂ + … + aₘkₘ), where Σ aᵢ = 1.

(b) The Fourth-Order Runge-Kutta Method is

yₙ₊₁ = yₙ + (1/6)(k₁ + 2k₂ + 2k₃ + k₄)

where

k₁ = h f(xₙ, yₙ)
k₂ = h f(xₙ + h/2, yₙ + k₁/2)
k₃ = h f(xₙ + h/2, yₙ + k₂/2)
k₄ = h f(xₙ + h, yₙ + k₃)

This method is a 4th order method, so its local TE is O(h⁵) and its global TE is O(h⁴).


Example 34. Consider y′ = x + y, y(0) = 1. Use the Runge-Kutta method of order 4 to obtain an approximation to y(0.2) using the step size h = 0.1.

Answer. Here f(x, y) = x + y, x₀ = 0, y₀ = 1, h = 0.1, xₙ₊₁ = xₙ + 0.1.

(a) k₁ = 0.1(xₙ + yₙ)
(b) k₂ = 0.1(xₙ + 0.05 + yₙ + k₁/2)
(c) k₃ = 0.1(xₙ + 0.05 + yₙ + k₂/2)
(d) k₄ = 0.1(xₙ + 0.1 + yₙ + k₃)

(a) For y₁:

(i) k₁ = 0.1(x₀ + y₀) = … = 0.1
(ii) k₂ = 0.1(x₀ + 0.05 + y₀ + k₁/2) = 0.1(0 + 0.05 + 1 + 0.05) = 0.11
(iii) k₃ = 0.1(x₀ + 0.05 + y₀ + k₂/2) = 0.1(0 + 0.05 + 1 + 0.055) = 0.1105
(iv) k₄ = 0.1(x₀ + 0.1 + y₀ + k₃) = 0.1(0 + 0.1 + 1 + 0.1105) = 0.12105

∴ y₁ = y₀ + (1/6)(k₁ + 2k₂ + 2k₃ + k₄) = … = 1.110341667

(b) For y₂:

(i) k₁ = 0.1(x₁ + y₁) = 0.1(0.1 + 1.110341667) = 0.121034167
(ii) k₂ = 0.1(x₁ + 0.05 + y₁ + k₁/2) = 0.132085875
(iii) k₃ = 0.1(x₁ + 0.05 + y₁ + k₂/2) = 0.132638460
(iv) k₄ = 0.1(x₁ + 0.1 + y₁ + k₃) = 0.144298012

∴ y₂ = y₁ + (1/6)(k₁ + 2k₂ + 2k₃ + k₄) = 1.242805142 ≈ y(x₂) = y(0.2)
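The four-stage formula can be sketched directly:

```python
def rk4(f, x0, y0, h, n):
    """Classical fourth-order Runge-Kutta method."""
    x, y = x0, y0
    for _ in range(n):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y = y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x = x + h
    return y

# y' = x + y, y(0) = 1: two steps of h = 0.1 give y(0.2) (Example 34)
y02 = rk4(lambda x, y: x + y, 0.0, 1.0, 0.1, 2)
# y02 ≈ 1.242805142
```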


7.1.5 Multi-step Methods

The methods of Euler, Heun, and Runge-Kutta are called one-step methods because the approximation for the mesh point xₙ₊₁ involves information from only the previous mesh point xₙ. That is, only the initial point (x₀, y₀) is used to compute (x₁, y₁), and in general, yₙ is needed to compute yₙ₊₁.

Methods which use more than one of the previous mesh points to find the next approximation are called multi-step methods.

7.1.6 Adams-Bashforth / Adams-Moulton Method

This method uses the Adams-Bashforth formula

y*ₙ₊₁ = yₙ + (h/24)(55fₙ - 59fₙ₋₁ + 37fₙ₋₂ - 9fₙ₋₃), n ≥ 3,

where fⱼ = f(xⱼ, yⱼ), as a predictor, and the Adams-Moulton formula

yₙ₊₁ = yₙ + (h/24)(9f*ₙ₊₁ + 19fₙ - 5fₙ₋₁ + fₙ₋₂),

where f*ₙ₊₁ = f(xₙ₊₁, y*ₙ₊₁), as a corrector.

Note 8:

(a) This is an order 4 method with local TE = O(h⁵).

(b) It is not self-starting. To get started, y₁, y₂, y₃ must be computed using a method of the same accuracy or better, such as the Runge-Kutta order 4 method.

Example 35. Use the Adams-Bashforth / Adams-Moulton method with h = 0.2 to obtain an approximation to y(0.8) for the solution of

y′ = x + y, y(0) = 1.

Answer. With h = 0.2, y(0.8) ≈ y₄. We use the RK4 method as the starter method, with x₀ = 0, y₀ = 1, to obtain

(a) y₁ = 1.242800000
(b) y₂ = 1.583635920
(c) y₃ = 2.044212913
(d) fₙ = f(xₙ, yₙ) = xₙ + yₙ

⇒ f₀ = x₀ + y₀ = 0 + 1 = 1
⇒ f₁ = x₁ + y₁ = 0.2 + 1.242800000 = 1.442800000
⇒ f₂ = 0.4 + 1.583635920 = 1.983635920
⇒ f₃ = 0.6 + 2.044212913 = 2.644212913

(e) y₄* = y₃ + (h/24)(55f₃ - 59f₂ + 37f₁ - 9f₀) = 2.044212913 + (0.2/24)(55(2.644212913) - 59(1.983635920) + 37(1.442800000) - 9(1)) = 2.650719504

(f) f₄* = f(x₄, y₄*) = x₄ + y₄* = 0.8 + 2.650719504 = 3.450719504

(g) y₄ = y₃ + (h/24)(9f₄* + 19f₃ - 5f₂ + f₁) = 2.651055757 ≈ y(0.8)


7.1.7 Summary: Orders of Errors for Different Methods

Method | Order | Local TE | Global TE
Euler | 1 | O(h²) | O(h)
Heun | 2 | O(h³) | O(h²)
Second-Order Taylor Series | 2 | O(h³) | O(h²)
Third-Order Taylor Series | 3 | O(h⁴) | O(h³)
Fourth-Order Runge-Kutta | 4 | O(h⁵) | O(h⁴)
Adams-Bashforth/Adams-Moulton | 4 | O(h⁵) | O(h⁴)


7.2 Higher-Order Initial Value Problems

The methods used to solve first-order IVPs can be applied to higher-order IVPs, because a higher-order IVP can be replaced by a system of first-order IVPs. For example, the second-order initial value problem

y″ = f(x, y, y′), y(x₀) = y₀, y′(x₀) = y′₀

can be decomposed into a system of two first-order initial value problems by using the substitution u = y′:

y′ = u, y(x₀) = y₀
u′ = f(x, y, u), u(x₀) = y′₀

Each equation can be solved by the numerical techniques presented earlier. For example, the Euler method for this system would be

yₙ₊₁ = yₙ + h uₙ
uₙ₊₁ = uₙ + h f(xₙ, yₙ, uₙ)

Remark. The Euler method for a general system of two first-order differential equations,

u′ = f(x, y, u), u(x₀) = u₀
y′ = g(x, y, u), y(x₀) = y₀,

is given as follows:

uₙ₊₁ = uₙ + h f(xₙ, yₙ, uₙ)
yₙ₊₁ = yₙ + h g(xₙ, yₙ, uₙ)

Reading Assignment 7.2.1. Use the Euler method to approximate y(0.2) and y′(0.2) in two steps, where y(x) is the solution of the IVP

y″ + xy′ + y = 0, y(0) = 1, y′(0) = 2.

Answer. Let u = y′. Then the equation is equivalent to the system

y′ = u
u′ = -xu - y

Using the step size h = 0.1, the Euler method is given by

yₙ₊₁ = yₙ + 0.1uₙ
uₙ₊₁ = uₙ + 0.1(-xₙuₙ - yₙ)

With x₀ = 0, y₀ = 1, u₀ = y′₀ = 2, we get

(a) y₁ = y₀ + 0.1u₀ = 1 + 0.1(2) = 1.2

u₁ = u₀ + 0.1(-x₀u₀ - y₀) = 2 + 0.1[-(0)(2) - 1] = 1.9

(b) y₂ = y₁ + 0.1u₁ = 1.2 + 0.1(1.9) = 1.39

u₂ = u₁ + 0.1(-x₁u₁ - y₁) = 1.9 + 0.1[-(0.1)(1.9) - 1.2] = 1.761

That is, y(0.2) ≈ y₂ = 1.39 and y′(0.2) ≈ u₂ = 1.761.
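The system form of Euler's method can be sketched as follows; note the simultaneous update, which must use the old (y, u) pair on both right-hand sides:

```python
def euler_second_order(f, x0, y0, u0, h, n):
    """Euler's method for y'' = f(x, y, y'), written as the system
    y' = u, u' = f(x, y, u)."""
    x, y, u = x0, y0, u0
    for _ in range(n):
        # tuple assignment evaluates both right-hand sides with the
        # old (y, u) before either variable is updated
        y, u = y + h * u, u + h * f(x, y, u)
        x = x + h
    return y, u

# Reading Assignment 7.2.1: y'' + xy' + y = 0, y(0) = 1, y'(0) = 2
f = lambda x, y, u: -x * u - y
y2, u2 = euler_second_order(f, 0.0, 1.0, 2.0, 0.1, 2)
# y2 = 1.39 ≈ y(0.2), u2 = 1.761 ≈ y'(0.2)
```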


7.2.1 The Linear Shooting Method

Consider the linear second-order BVP

y″ = p(x)y′ + q(x)y + r(x), a ≤ x ≤ b,

with boundary conditions y(a) = α, y(b) = β. The shooting method replaces the BVP by two IVPs in the following way:

(a) Let y₁ be the solution of the IVP

y″ = p(x)y′ + q(x)y + r(x), y(a) = α, y′(a) = 0.

Replace this second-order IVP by a system of two first-order IVPs. Then solve each of the two first-order IVPs using, say, the fourth-order Runge-Kutta method to obtain y₁.

(b) Let y₂ be the solution of the IVP

y″ = p(x)y′ + q(x)y, y(a) = 0, y′(a) = 1.

Find y₂ as in Step (a).

(c) Find c₁ and c₂ so that y = c₁y₁ + c₂y₂ is the solution of the BVP:

(i) y″ = c₁y₁″ + c₂y₂″ = c₁(py₁′ + qy₁ + r) + c₂(py₂′ + qy₂) = p(c₁y₁′ + c₂y₂′) + q(c₁y₁ + c₂y₂) + c₁r = py′ + qy + c₁r.

(ii) Comparing with y″ = py′ + qy + r gives c₁r = r, so c₁ = 1.

(iii) y(a) = c₁y₁(a) + c₂y₂(a) = 1 · y₁(a) + c₂ · 0 = α, so the left boundary condition holds automatically.

(iv) Choose c₂ so that y(b) = β:

y(b) = y₁(b) + c₂y₂(b) = β ⇒ c₂ = (β - y₁(b))/y₂(b), if y₂(b) ≠ 0.

(d) Therefore

y(x) = y₁(x) + ((β - y₁(b))/y₂(b)) y₂(x)

is the solution to the BVP, provided y₂(b) ≠ 0.


Example 36. Use the shooting method (together with the fourth-order Runge-Kutta method with h = 1/3) to solve the BVP

y″ = 4(y - x), 0 ≤ x ≤ 1, y(0) = 0, y(1) = 2.

Answer.

(a) y″ = 4(y - x), y(0) = 0, y′(0) = 0 ⇒ y′ = u, y(0) = 0 [1]; u′ = 4(y - x), u(0) = 0 [2]

(i) Use RK4 to solve [1] and [2]: we obtain y₁(1/3) = -0.02469136, y₁(2/3) = -0.21439821 and y₁(1) = -0.80973178.

(b) y″ = 4y, y(0) = 0, y′(0) = 1 ⇒ y′ = u, y(0) = 0 [3]; u′ = 4y, u(0) = 1 [4]

(i) Use RK4 to solve [3] and [4]: we obtain y₂(1/3) = 0.35802469, y₂(2/3) = 0.88106488 and y₂(1) = 1.80973178.

(c) y(x) = y₁(x) + ((2 - y₁(1))/y₂(1)) y₂(x) = y₁(x) + ((2 + 0.80973178)/1.80973178) y₂(x)

⇒ y(1/3) = 0.53116634, y(2/3) = 1.15351499, y(1) = 2.

Note 9:

An advantage of the shooting method is that existing programs for initial value problems may be used.

However, the shooting method is sensitive to round-off errors, and it becomes rather difficult to use when there are more than two boundary conditions. For these reasons, we may want to use alternative methods. This brings us to the next topic.


7.2.2 Finite Difference Method

Consider a second-order BVP

y″ + P(x)y′ + Q(x)y = f(x), y(a) = α, y(b) = β.

Suppose a = x₀ < x₁ < … < xₙ₋₁ < xₙ = b with xᵢ - xᵢ₋₁ = h for all i = 1, 2, …, n. Let yᵢ = y(xᵢ), Pᵢ = P(xᵢ), Qᵢ = Q(xᵢ), and fᵢ = f(xᵢ). Then, replacing y′ and y″ with their central difference approximations in the BVP, we get

(yᵢ₊₁ - 2yᵢ + yᵢ₋₁)/h² + Pᵢ (yᵢ₊₁ - yᵢ₋₁)/(2h) + Qᵢyᵢ = fᵢ, i = 1, 2, …, n - 1,

or, after simplifying,

(1 + (h/2)Pᵢ) yᵢ₊₁ + (h²Qᵢ - 2) yᵢ + (1 - (h/2)Pᵢ) yᵢ₋₁ = h²fᵢ.

The last equation, known as a finite difference equation, is an approximation to the DE. It enables us to approximate the solution at x₁, …, xₙ₋₁.

Example 37 (Solving BVPs Using the Finite Difference Method). Use the finite difference method with h = 1 to approximate the solution of the BVP

y″ - (1 - x/5)y = x, y(1) = 2, y(3) = -1.

Answer. Here Pᵢ = 0, Qᵢ = -1 + xᵢ/5, fᵢ = xᵢ. Hence the difference equation is

yᵢ₊₁ + (-3 + xᵢ/5)yᵢ + yᵢ₋₁ = xᵢ.

With i = 1, x₁ = 2 and the boundary conditions y₀ = 2 and y₂ = -1, solving the above equation gives y₁ = -0.3846.

Notes: We can improve the accuracy by using a smaller h. But for that we have to pay a price, i.e. we have to solve a larger system of equations.
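Assembling and solving the system can be sketched as follows (the sketch builds a dense matrix and calls `np.linalg.solve` for brevity; a tridiagonal solver would be used in practice, and the function name is ours):

```python
import numpy as np

def fd_linear_bvp(P, Q, f, a, b, alpha, beta, n):
    """Finite difference solution of y'' + P(x)y' + Q(x)y = f(x),
    y(a) = alpha, y(b) = beta, on n equal subintervals. Returns the
    approximate y at the interior points x_1, ..., x_{n-1}."""
    h = (b - a) / n
    xs = a + h * np.arange(1, n)              # interior mesh points
    A = np.zeros((n - 1, n - 1))
    rhs = np.array([h ** 2 * f(x) for x in xs])
    for i, x in enumerate(xs):
        A[i, i] = h ** 2 * Q(x) - 2
        if i > 0:
            A[i, i - 1] = 1 - h / 2 * P(x)
        if i < n - 2:
            A[i, i + 1] = 1 + h / 2 * P(x)
    # known boundary values move to the right-hand side
    rhs[0] -= (1 - h / 2 * P(xs[0])) * alpha
    rhs[-1] -= (1 + h / 2 * P(xs[-1])) * beta
    return np.linalg.solve(A, rhs)

# Example 37: y'' - (1 - x/5) y = x, y(1) = 2, y(3) = -1, h = 1
y_int = fd_linear_bvp(lambda x: 0.0, lambda x: -1 + x / 5, lambda x: x,
                      1.0, 3.0, 2.0, -1.0, 2)
# y_int[0] ≈ -0.3846, the value of y at x = 2
```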


Chapter 8

    Numerical Methods For Partial Differential Equations

    8.1 Second Order Linear Partial Differential Equations

Definition 8.1.1 (Three Basic Types of Second Order Linear Equations). The linear PDE

Auₓₓ + Buₓᵧ + Cuᵧᵧ + Duₓ + Euᵧ + Fu = G

(where A, B, C, D, E, F, G are given functions of x and y) is called

(a) parabolic if B² - 4AC = 0. Parabolic equations often describe heat flow and diffusion phenomena, such as heat flow through the earth's surface.

(b) hyperbolic if B² - 4AC > 0. Hyperbolic equations often describe wave motion and vibrating phenomena, such as violin strings and drum heads.

(c) elliptic if B² - 4AC < 0. Elliptic equations are often used to describe steady state phenomena and thus do not depend on time. Elliptic equations are important in the study of electricity and magnetism.

Example 38. Some classical examples of PDEs.

(a) The 1-D wave equation u_tt = c²uₓₓ is a hyperbolic equation since

B² - 4AC = 0² - 4(-c²)(1) = 4c² > 0.

(b) The 1-D heat equation u_t = c²uₓₓ is a parabolic equation since

B² - 4AC = 0² - 4(c²)(0) = 0.

(c) The 2-D Laplace equation uₓₓ + uᵧᵧ = 0 is an elliptic equation since

B² - 4AC = 0² - 4(1)(1) = -4 < 0.


8.2 Numerical Approximation To Derivatives: 1-Variable Functions

We replace derivatives by their corresponding difference quotients, based on the Taylor series:

(i) u(x + h) = u(x) + hu′(x) + (1/2!)h²u″(x) + (1/3!)h³u‴(x) + …

(ii) u(x - h) = u(x) - hu′(x) + (1/2!)h²u″(x) - (1/3!)h³u‴(x) + …

(iii) (i) ⇒ u′(x) = (u(x + h) - u(x))/h + O(h) (the forward difference formula for u′)

(iv) (ii) ⇒ u′(x) = (u(x) - u(x - h))/h + O(h) (the backward difference formula for u′)

(v) (i) - (ii) ⇒ u′(x) = (u(x + h) - u(x - h))/(2h) + O(h²) (the central difference formula for u′)

(vi) (i) + (ii) ⇒ u″(x) = (u(x + h) - 2u(x) + u(x - h))/h² + O(h²) (the central difference formula for u″)

8.3 Numerical Approximation To Derivatives: 2-Variable Functions

Consider a function u(x, y) over a 2-dimensional grid with Δx = h and Δy = k. At a typical point P(x, y), where x = ih and y = jk, write

u_P = u(ih, jk) = uᵢ,ⱼ. Then

∂u/∂x at P = (uᵢ₊₁,ⱼ - uᵢ,ⱼ)/h + O(h) (the FD for uₓ at P)

∂u/∂y at P = (uᵢ,ⱼ₊₁ - uᵢ,ⱼ)/k + O(k) (the FD for uᵧ at P)

∂u/∂x at P = (uᵢ₊₁,ⱼ - uᵢ₋₁,ⱼ)/(2h) + O(h²) (the CD for uₓ at P)

∂²u/∂x² at P = (uᵢ₊₁,ⱼ - 2uᵢ,ⱼ + uᵢ₋₁,ⱼ)/h² + O(h²) (the CD for uₓₓ at P)


8.4 Methods for Parabolic Equations

8.4.1 FTCS Explicit Scheme

Example 39. Using the F.D. (forward difference) for u_t and the C.D. (central difference) for uₓₓ, find the finite difference approximation for the 1-D heat equation

∂u/∂t = ∂²u/∂x², 0 < x < a, t > 0.

Answer.

Let uᵢ,ⱼ = u(ih, jk) = u(xᵢ, tⱼ). Then the finite difference approximation is

(uᵢ,ⱼ₊₁ - uᵢ,ⱼ)/k = (uᵢ₊₁,ⱼ - 2uᵢ,ⱼ + uᵢ₋₁,ⱼ)/h².

Solving for uᵢ,ⱼ₊₁, we obtain

uᵢ,ⱼ₊₁ = (1 - 2r)uᵢ,ⱼ + r(uᵢ₊₁,ⱼ + uᵢ₋₁,ⱼ), i = 1, …, M, j = 0, …, N,

where r = k/h².

Note 10: We use the above formula to estimate the values of u at time level j + 1 using the values at level j. This is known as the FTCS (forward-time, central-space) explicit finite difference method.

Example 40. Consider the one-dimensional heat equation

∂u/∂t = ∂²u/∂x², t ≥ 0,

subject to the initial condition

u(x, 0) = sin πx, 0 ≤ x ≤ 1,

and boundary conditions

u(0, t) = u(1, t) = 0, t ≥ 0.

Using an x step of h = 0.1 and a t step of k = 0.0005, use the FTCS scheme to estimate u(0.1, 0.001).

Answer.

r = k/h² = 0.0005/(0.1)² = 0.05

(a) u(0, t) = u(1, t) = 0 ⇒ u₀,ⱼ = u₁₀,ⱼ = 0

(b) u(x, 0) = sin πx ⇒ uᵢ,₀ = sin 0.1πi

(c) uᵢ,ⱼ₊₁ = (1 - 2r)uᵢ,ⱼ + r(uᵢ₊₁,ⱼ + uᵢ₋₁,ⱼ) = 0.9uᵢ,ⱼ + 0.05(uᵢ₊₁,ⱼ + uᵢ₋₁,ⱼ)


(d) $j = 0$: $u_{i,1} = 0.9u_{i,0} + 0.05(u_{i+1,0} + u_{i-1,0})$
(i) $i = 1$: $u_{1,1} = 0.9u_{1,0} + 0.05(u_{2,0} + u_{0,0}) = 0.3075$,
since $u_{i,0} = \sin 0.1\pi i \Rightarrow u_{0,0} = 0$, $u_{1,0} = \sin 0.1\pi = 0.3090$, $u_{2,0} = \sin 0.2\pi = 0.5878$.
(ii) $i = 2$: $u_{2,1} = 0.9u_{2,0} + 0.05(u_{3,0} + u_{1,0}) = 0.5849$,
since $u_{3,0} = \sin 0.3\pi = 0.8090$.
(e) $j = 1$: $u_{i,2} = 0.9u_{i,1} + 0.05(u_{i+1,1} + u_{i-1,1})$
(i) $i = 1$: $u_{1,2} = 0.9u_{1,1} + 0.05(u_{2,1} + u_{0,1}) = 0.3060 \approx u(0.1, 0.001)$.
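The two FTCS time steps above can be reproduced with a short program (an illustrative sketch, not part of the original notes; the grid and step sizes are those of the example):

```python
import math

h, k = 0.1, 0.0005           # space and time steps from the example
r = k / h**2                 # r = 0.05
M = int(round(1 / h))        # grid points i = 0..10 on [0, 1]

# initial condition u(x, 0) = sin(pi x)
u = [math.sin(math.pi * i * h) for i in range(M + 1)]

for j in range(2):           # two time steps reach t = 0.001
    new = [0.0] * (M + 1)    # boundary values stay 0
    for i in range(1, M):
        new[i] = (1 - 2 * r) * u[i] + r * (u[i + 1] + u[i - 1])
    u = new

print(round(u[1], 4))        # u(0.1, 0.001) is approximately 0.3060
```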

    Definition 8.4.1 (Stability of Numerical Methods). A method is unstable if round-off errors or any other errors grow too rapidly as the computations proceed.

Note 11: The FTCS scheme is stable for a given space step h provided the time step k is restricted by the condition

\[ r = \frac{k}{h^2} \le \frac{1}{2}, \qquad \text{i.e.} \quad k \le \frac{h^2}{2}. \]

The restriction means that we must not move too fast in the t-direction.
Note 12: Suppose there is an initial condition $u(x, 0) = f(x)$. Then the scheme starts with $(i, j) = (0, 0)$, the left-hand corner of the 2-D grid. Along the horizontal line $j = 0$, where $t = 0$, we have

\[ u_{i,0} = u(x_i, 0) = f(x_i). \]

Note 13: Suppose we include the boundary conditions $u(0, t) = u(a, t) = 0$. Then we have $u_{0,j} = u_{M,j} = 0$, $j > 0$.
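As a small numerical experiment (not part of the original notes), the stability restriction $r \le 1/2$ can be observed directly: with $r = 1/2$ the FTCS iterates stay bounded, while for $r > 1/2$ a tiny high-frequency perturbation of the initial data is amplified at every step.

```python
import math

def ftcs_max(r, h=0.1, steps=100):
    """Run FTCS on u_t = u_xx with u(0,t) = u(1,t) = 0 and a slightly
    perturbed sin(pi x) profile; return max |u| after `steps` steps."""
    M = int(round(1 / h))
    u = [math.sin(math.pi * i * h) for i in range(M + 1)]
    for i in range(1, M):            # tiny high-frequency perturbation
        u[i] += 1e-6 * (-1) ** i
    for _ in range(steps):
        new = [0.0] * (M + 1)
        for i in range(1, M):
            new[i] = (1 - 2 * r) * u[i] + r * (u[i + 1] + u[i - 1])
        u = new
    return max(abs(v) for v in u)

print(ftcs_max(0.5) <= 1.0)   # r = 1/2: solution stays bounded (decays)
print(ftcs_max(0.6) > 1e3)    # r > 1/2: the perturbation explodes
```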

    8.4.2 Crank-Nicolson Method

Example 41 (Crank-Nicolson Implicit Scheme for the Heat Equation). The CNI scheme replaces $u_{xx}$ by an average of two CD quotients, one at time level $j$ and another at $j + 1$:

\[ \frac{u_{i,j+1} - u_{i,j}}{k} + O(k) = \frac{1}{2}\left[\frac{u_{i+1,j+1} - 2u_{i,j+1} + u_{i-1,j+1}}{h^2} + \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2}\right] + O(h^2). \]

After simplification, we obtain

\[ -ru_{i-1,j+1} + 2(1 + r)u_{i,j+1} - ru_{i+1,j+1} = ru_{i-1,j} + 2(1 - r)u_{i,j} + ru_{i+1,j} + O(k^2 + kh^2), \]

where $r = \dfrac{k}{h^2}$.

Note 14: For each time level $j$, we obtain an $(M - 1) \times (M - 1)$ tridiagonal system which can be solved using iterative methods.
Note 15: The CNI scheme has no stability restriction. It is more accurate than the explicit scheme.
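Besides iterative methods, a tridiagonal system of this form can also be solved directly by forward elimination and back substitution (the Thomas algorithm). The following is a minimal sketch, not part of the original notes:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (a[0] unused),
    b = main diagonal, c = super-diagonal (c[-1] unused), d = right-hand
    side. Forward elimination followed by back substitution."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# small check: [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8] has solution (1, 2, 3)
print(thomas([0, 1, 1], [2, 2, 2], [1, 1, 0], [4, 8, 8]))
```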


Example 42. Consider the one-dimensional heat equation

\[ \frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}, \qquad t \ge 0, \]

subject to the initial condition

\[ u(x, 0) = \sin \pi x, \qquad 0 \le x \le 1, \]

and boundary conditions

\[ u(0, t) = u(1, t) = 0, \qquad t \ge 0. \]

Using an x step of h = 0.2 and a t step of k = 0.001, use the Crank-Nicolson method to estimate u(0.4, 0.001).

    Answer.

Since the initial temperature distribution is symmetric with respect to $x = 0.5$, we only need to consider the grid points over $0 \le x \le 0.5$; note $u(0.4, 0.001) \approx u_{2,1}$.

\[ r = \frac{k}{h^2} = \frac{0.001}{(0.2)^2} = 0.025 \]

(a) $u(0,t) = u(1,t) = 0 \Rightarrow u_{0,j} = u_{5,j} = 0$
(b) $u(x,0) = \sin \pi x \Rightarrow u_{i,0} = \sin 0.2\pi i$
(c) $-ru_{i-1,j+1} + 2(1+r)u_{i,j+1} - ru_{i+1,j+1} = ru_{i-1,j} + 2(1-r)u_{i,j} + ru_{i+1,j}$, i.e.

\[ -0.025u_{i-1,j+1} + 2.05u_{i,j+1} - 0.025u_{i+1,j+1} = 0.025u_{i-1,j} + 1.95u_{i,j} + 0.025u_{i+1,j} \]

(d) $j = 0$: $-0.025u_{i-1,1} + 2.05u_{i,1} - 0.025u_{i+1,1} = 0.025u_{i-1,0} + 1.95u_{i,0} + 0.025u_{i+1,0}$

(i) $i = 1$: $-0.025u_{0,1} + 2.05u_{1,1} - 0.025u_{2,1} = 0.025u_{0,0} + 1.95u_{1,0} + 0.025u_{2,0}$.
Since $u_{0,j} = 0$ and $u_{i,0} = \sin 0.2\pi i$, we have $u_{0,0} = 0$, $u_{1,0} = \sin 0.2\pi = 0.58778525$, $u_{2,0} = \sin 0.4\pi = 0.95105652$, so

\[ 2.05u_{1,1} - 0.025u_{2,1} = 1.95(0.58778525) + 0.025(0.95105652) = 1.1699576505 \quad \text{(A)} \]

(ii) $i = 2$: $-0.025u_{1,1} + 2.05u_{2,1} - 0.025u_{3,1} = 0.025u_{1,0} + 1.95u_{2,0} + 0.025u_{3,0}$.
Since $u_{3,0} = \sin 0.6\pi = 0.95105652$,

\[ -0.025u_{1,1} + 2.05u_{2,1} - 0.025u_{3,1} = 0.025(0.58778525) + 1.95(0.95105652) + 0.025(0.95105652) = 1.89303126, \]

and since $u_{3,1} = u_{2,1}$ by symmetry,

\[ -0.025u_{1,1} + 2.025u_{2,1} = 1.89303126 \quad \text{(B)} \]

(iii) Solving (A) and (B) by Gaussian elimination, Cramer's rule, or the Gauss-Seidel method, we obtain $u_{1,1} = 0.58219907$, $u_{2,1} = 0.94201789$.

Therefore $u(0.4, 0.001) \approx u_{2,1} = 0.94201789$.
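Steps (d)(i)-(iii) can be verified with a few lines of code (an illustrative sketch, not from the notes), solving the 2x2 system by Cramer's rule:

```python
import math

u10 = math.sin(0.2 * math.pi)        # approx 0.58778525
u20 = math.sin(0.4 * math.pi)        # approx 0.95105652
u30 = math.sin(0.6 * math.pi)        # equals u20 by symmetry

# right-hand sides of equations (A) and (B) from the example
A_rhs = 1.95 * u10 + 0.025 * u20
B_rhs = 0.025 * u10 + 1.95 * u20 + 0.025 * u30

# Cramer's rule on  2.05 u11 - 0.025 u21 = A_rhs
#                  -0.025 u11 + 2.025 u21 = B_rhs
det = 2.05 * 2.025 - (-0.025) * (-0.025)
u11 = (A_rhs * 2.025 - (-0.025) * B_rhs) / det
u21 = (2.05 * B_rhs - (-0.025) * A_rhs) / det
print(u11, u21)   # approximately 0.58219907 and 0.94201789
```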


Example 43. The exact solution of the 1-D heat equation

\[ u_t = u_{xx}, \qquad 0 < x < 1, \; t > 0, \]
\[ u(0, t) = u(1, t) = 0, \]
\[ u(x, 0) = f(x) = \sin \pi x, \]

is $u(x, t) = e^{-\pi^2 t} \sin(\pi x)$.
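Evaluating this exact solution at the point estimated by the FTCS scheme in Example 40 gives a useful accuracy check (illustrative code, not part of the original notes):

```python
import math

# exact solution u(x,t) = exp(-pi^2 t) sin(pi x) at x = 0.1, t = 0.001,
# the point estimated by the FTCS example
exact = math.exp(-math.pi**2 * 0.001) * math.sin(math.pi * 0.1)
print(round(exact, 4))   # 0.306, agreeing with the FTCS estimate 0.3060
```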

8.5 A Numerical Method for Elliptic Equations

Example 44 (Difference Equation for the Laplace Equation). Using the CD for both $u_{xx}$ and $u_{yy}$, the finite difference approximation for the 2-D Laplace equation $u_{xx} + u_{yy} = 0$ is

\[ \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2} + \frac{u_{i,j+1} - 2u_{i,j} + u_{i,j-1}}{k^2} = 0. \]

For a uniform space grid with $h = k$, this becomes

\[ u_{i,j} = \frac{1}{4}(u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1}) = \frac{1}{4}(u_E + u_W + u_N + u_S), \]

i.e. $u_{i,j}$ is the average of its nearest neighbors. Equivalently, $u_E + u_N + u_W + u_S - 4u_{i,j} = 0$.

Example 45. The four sides of a square plate of side 12 cm made of uniform material are kept at constant temperature such that

LHS = RHS = Bottom = 100°C and Top = 0°C.

Using a mesh size of 4 cm, calculate the temperature $u(x, y)$ at the internal mesh points $A(4,4)$, $B(8,4)$, $C(8,8)$ and $D(4,8)$ by the Gauss-Seidel method, given that $u(x, y)$ satisfies the Laplace equation $u_{xx} + u_{yy} = 0$. Start with the initial guess $u_A = u_B = 80$, $u_C = u_D = 50$, and continue iterating until $|u_i^{(p+1)} - u_i^{(p)}| < 10^{-3}$ for all $i$, where $u_i^{(p)}$ are the $p$th iterates for $u_i$, $i = A, B, C, D$. Answer.

Applying the equation

\[ u_{i,j} = \frac{1}{4}(u_E + u_N + u_W + u_S) \]

to the 4 internal mesh points, we obtain

\[ u_A = \frac{1}{4}(u_B + u_D + 200) \]
\[ u_B = \frac{1}{4}(u_A + u_C + 200) \]
\[ u_C = \frac{1}{4}(u_B + u_D + 100) \]
\[ u_D = \frac{1}{4}(u_A + u_C + 100) \]

This system is strictly diagonally dominant, so the Gauss-Seidel iteration converges. Starting with the initial guess $u_A = u_B = 80$, $u_C = u_D = 50$ and iterating until all $|u_i^{(p+1)} - u_i^{(p)}| < 10^{-3}$, we obtain $u_A = u_B = 87.5$, $u_C = u_D = 62.5$.
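The Gauss-Seidel sweep is easy to program; the following sketch (not part of the original notes) iterates the four equations above, using each newly computed value immediately, until the stopping criterion is met:

```python
# Gauss-Seidel iteration for the four mesh-point equations,
# starting from the initial guess in the example
uA, uB, uC, uD = 80.0, 80.0, 50.0, 50.0
while True:
    pA, pB, pC, pD = uA, uB, uC, uD          # previous iterates
    uA = (uB + uD + 200) / 4                 # newest values used at once
    uB = (uA + uC + 200) / 4
    uC = (uB + uD + 100) / 4
    uD = (uA + uC + 100) / 4
    if max(abs(uA - pA), abs(uB - pB), abs(uC - pC), abs(uD - pD)) < 1e-3:
        break
print(uA, uB, uC, uD)   # approximately 87.5, 87.5, 62.5, 62.5
```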


8.6 CTCS Scheme for Hyperbolic Equations

Example 46 (CTCS Scheme for Hyperbolic Equations). Consider the BVP involving the 1-D wave equation

\[ \frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2}, \]
\[ u(0, t) = u(1, t) = 0, \quad t > 0, \]
\[ u(x, 0) = f(x), \quad 0 \le x \le 1, \]
\[ u_t(x, 0) = g(x), \quad 0 \le x \le 1. \]

Use a 2-D grid with $\Delta x = h$ and $\Delta t = k$. Replacing both $u_{xx}$ and $u_{tt}$ with central difference quotients, we obtain the CTCS scheme

\[ \frac{u_{i,j+1} - 2u_{i,j} + u_{i,j-1}}{k^2} = \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2}, \]

i.e. $u_{i,j+1} = \rho u_{i+1,j} + 2(1 - \rho)u_{i,j} + \rho u_{i-1,j} - u_{i,j-1}$, where $\rho = (k/h)^2$.

Note 16:

(a) This scheme is stable for $0 \le \rho \le 1$, i.e. $k \le h$.
(b) $u(x, 0) = f(x) \Rightarrow u(ih, 0) = f(ih)$, i.e. $u_{i,0} = f_i$.
(c) Approximating $u_t$ with a CD quotient, the IC $u_t(x, 0) = g(x)$ becomes

\[ \frac{u_{i,1} - u_{i,-1}}{2k} = g_i \quad \Rightarrow \quad u_{i,-1} = u_{i,1} - 2kg_i, \]

where $(i, -1)$ is a fictitious grid point.
(d) To calculate $u_{i,1}$:

\[ u_{i,1} = \rho u_{i+1,0} + 2(1 - \rho)u_{i,0} + \rho u_{i-1,0} - u_{i,-1} = \rho u_{i+1,0} + 2(1 - \rho)u_{i,0} + \rho u_{i-1,0} - u_{i,1} + 2kg_i \]

\[ \Rightarrow \quad u_{i,1} = \frac{\rho}{2}(f_{i+1} + f_{i-1}) + (1 - \rho)f_i + kg_i. \]


Example 47. A string with fixed ends at $x = 0$ and $x = 1$ is governed by the hyperbolic partial differential equation

\[ u_{tt} = u_{xx}, \qquad 0 \le x \le 1, \; t \ge 0, \]

with boundary conditions

\[ u(0, t) = u(1, t) = 0. \]

It starts from its equilibrium position, with initial displacement $u(x, 0) = 0$ and initial velocity

\[ g(x) = \sin \pi x. \]

What is its displacement u at time t = 0.4 and x = 0.2, 0.4, 0.6, 0.8? (Use the CTCS explicit scheme with $\Delta x = \Delta t = 0.2$.) Answer.

(a) $u(0,t) = u(1,t) = 0 \Rightarrow u_{0,j} = u_{5,j} = 0$
(b) $u(x,0) = f(x) = 0 \Rightarrow u_{i,0} = f_i = 0$
(c) $g(x) = \sin \pi x \Rightarrow g_i = \sin 0.2\pi i$
(d) With $h = \Delta x = \Delta t = k = 0.2$, so that $\rho = (k/h)^2 = 1$, we have the following CTCS scheme:

\[ u_{i,j+1} = u_{i+1,j} + u_{i-1,j} - u_{i,j-1}, \qquad \text{with } u_{i,1} = 0.2g_i = 0.2\sin 0.2\pi i. \]

(e) $u_{i,1} = 0.2g_i = 0.2\sin 0.2\pi i$:
(i) $i = 1$: $u_{1,1} = 0.2\sin 0.2\pi = 0.1176$
(ii) $i = 2$: $u_{2,1} = 0.2\sin 0.4\pi = 0.1902$
(iii) $i = 3$: $u_{3,1} = 0.2\sin 0.6\pi = 0.1902$
(iv) $i = 4$: $u_{4,1} = u_{1,1} = 0.1176$
(v) $u_{5,1} = u_{0,1} = 0$
(f) $u_{i,j+1} = u_{i+1,j} + u_{i-1,j} - u_{i,j-1}$:
(i) $i = 1, j = 1$: $u_{1,2} = u_{2,1} + u_{0,1} - u_{1,0} = 0.1902 + 0 - 0 = 0.1902$
(ii) $i = 2, j = 1$: $u_{2,2} = u_{3,1} + u_{1,1} - u_{2,0} = 0.1902 + 0.1176 - 0 = 0.3078$
(iii) $i = 3, j = 1$: $u_{3,2} = u_{4,1} + u_{2,1} - u_{3,0} = 0.1176 + 0.1902 - 0 = 0.3078$
(iv) $i = 4, j = 1$: $u_{4,2} = u_{5,1} + u_{3,1} - u_{4,0} = 0 + 0.1902 - 0 = 0.1902$
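The starting step (e) and the CTCS step (f) can be reproduced as follows (an illustrative sketch, not part of the original notes):

```python
import math

h = 0.2                                  # h = k, so rho = (k/h)^2 = 1
M = 5                                    # grid points i = 0..5 on [0, 1]

f = [0.0] * (M + 1)                      # initial displacement u(x,0) = 0
g = [math.sin(math.pi * i * h) for i in range(M + 1)]   # initial velocity

# starting step: u_{i,1} = (rho/2)(f_{i+1}+f_{i-1}) + (1-rho) f_i + k g_i,
# which reduces to u_{i,1} = k g_i here since f = 0 and rho = 1
u_prev = f[:]                            # level j = 0
u_curr = [0.0] * (M + 1)
for i in range(1, M):
    u_curr[i] = h * g[i]                 # k = h = 0.2

# one CTCS step to level j = 2 (t = 0.4), boundaries stay 0
u_next = [0.0] * (M + 1)
for i in range(1, M):
    u_next[i] = u_curr[i + 1] + u_curr[i - 1] - u_prev[i]

print([round(v, 4) for v in u_next[1:M]])   # [0.1902, 0.3078, 0.3078, 0.1902]
```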


