Linear Multistep Methods (LMMs) - Boston...

Post on 27-Mar-2018


Linear Multistep Methods (LMMs)

Review of Methods to Solve ODE IVPs

For the ODE initial value problem dy/dx = f(x, y), y(x_0) = y_0:

(1) Euler's forward (explicit) method:
    y_{i+1} = y_i + h f(x_i, y_i)

(2) Euler's backward (implicit) method:
    y_{i+1} = y_i + h f(x_{i+1}, y_{i+1})

(3) Heun's method
    Predictor: y^0_{i+1} = y_i + h f(x_i, y_i)
    Corrector: y_{i+1} = y_i + (h/2) [f(x_i, y_i) + f(x_{i+1}, y^0_{i+1})]

(4)-(6) Second-order Runge-Kutta methods:
    y_{i+1} = y_i + h (a_1 k_1 + a_2 k_2)
with different choices of the weights (e.g., a_2 = 1/2 gives Heun's form, a_2 = 1 the midpoint method, a_2 = 2/3 Ralston's method).

(7) Third-order Runge-Kutta method

(8) (Classical) fourth-order Runge-Kutta method:
    k_1 = f(x_i, y_i)
    k_2 = f(x_i + h/2, y_i + h k_1/2)
    k_3 = f(x_i + h/2, y_i + h k_2/2)
    k_4 = f(x_i + h, y_i + h k_3)
    y_{i+1} = y_i + (h/6) (k_1 + 2 k_2 + 2 k_3 + k_4)

Notice that for ODEs whose right-hand side is a function of x alone, the classical fourth-order RK method reduces to Simpson's 1/3 rule.
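As a concrete sketch (not part of the original slides), the classical fourth-order RK step can be coded as follows; the function names and the test problem y' = y are illustrative choices:

```python
def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4_solve(f, x0, y0, h, n):
    """Advance n steps from (x0, y0); returns the lists of x and y values."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        ys.append(rk4_step(f, xs[-1], ys[-1], h))
        xs.append(xs[-1] + h)
    return xs, ys

# Example: y' = y, y(0) = 1 on [0, 1]; the exact solution is e^x.
xs, ys = rk4_solve(lambda x, y: y, 0.0, 1.0, 0.1, 10)
```

With h = 0.1 the final value agrees with e to roughly six digits, consistent with the O(h^4) global accuracy of the method.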

(9) Runge-Kutta-Fehlberg (RKF45) method: a fourth-order Runge-Kutta estimate and a fifth-order estimate are computed from the same set of function evaluations, and their difference gives a local error estimate that can be used to adapt the step size.

(10) Butcher's fifth-order Runge-Kutta method

Multistep Methods

The methods of Euler, Heun, and Runge-Kutta presented so far are called single-step methods, because they use only the information from one previous point to compute the successive point; that is, only the initial point (x_0, y_0) is used to compute (x_1, y_1), and, in general, y_i is needed to compute y_{i+1}. After several points have been found, it is feasible to use several prior points in the calculation. This is the basis of multistep methods. One example is the Adams-Bashforth four-step method, in which y_{i-3}, y_{i-2}, y_{i-1}, and y_i are required to calculate y_{i+1}:

    y_{i+1} = y_i + (h/24) (55 f_i - 59 f_{i-1} + 37 f_{i-2} - 9 f_{i-3})

This method is not self-starting; four initial points (x_0, y_0), (x_1, y_1), (x_2, y_2), and (x_3, y_3) must be given in advance in order to generate the points {(x_i, y_i): i ≥ 4}. A desirable feature of multistep methods is that the local truncation error (LTE) can be determined and a correction term can be included, which improves the accuracy of the answer at each step. It is also possible to determine whether the step size is small enough to obtain an accurate value for y_{i+1}, yet large enough to avoid unnecessary and time-consuming calculations. Using the combination of a predictor and a corrector requires only two function evaluations of f(x, y) per step.
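To make the non-self-starting point concrete, here is an illustrative sketch (not from the slides) that bootstraps the Adams-Bashforth four-step method with classical RK4 starting values; all function names are our own:

```python
def ab4_solve(f, x0, y0, h, n):
    """Fourth-order Adams-Bashforth for y' = f(x, y). The first three
    steps are generated with classical RK4, since AB4 needs four prior
    points and the IVP supplies only (x0, y0)."""
    xs, ys = [x0], [y0]
    for _ in range(3):                      # starting values y1, y2, y3
        x, y = xs[-1], ys[-1]
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        ys.append(y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4))
        xs.append(x + h)
    fs = [f(x, y) for x, y in zip(xs, ys)]  # holds f_{i-3}, ..., f_i
    for _ in range(n - 3):
        # y_{i+1} = y_i + (h/24)(55 f_i - 59 f_{i-1} + 37 f_{i-2} - 9 f_{i-3})
        ys.append(ys[-1] + (h / 24) * (55 * fs[-1] - 59 * fs[-2]
                                       + 37 * fs[-3] - 9 * fs[-4]))
        xs.append(xs[-1] + h)
        fs.append(f(xs[-1], ys[-1]))
    return xs, ys

# Example: y' = y, y(0) = 1 on [0, 1]
xs, ys = ab4_solve(lambda x, y: y, 0.0, 1.0, 0.1, 10)
```

Note that each AB4 step reuses the stored f values and costs only one new function evaluation, in contrast to the four evaluations per RK4 step.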

Derivation of a Multistep Method

Integrate the differential equation

    dy/dx = f(x, y)    (10.1)

from x_{i-1} to x_{i+1}:

    ∫_{x_{i-1}}^{x_{i+1}} (dy/dx) dx = ∫_{x_{i-1}}^{x_{i+1}} f(x, y) dx    (10.2)

By the integral limits, the left-hand side evaluates to y(x_{i+1}) - y(x_{i-1}).

Now, the step size is h = x_{i+1} - x_i = x_i - x_{i-1}. Back to equation (10.2): if we approximate the integral by Simpson's 1/3 rule,

    ∫_{x_{i-1}}^{x_{i+1}} f(x, y) dx ≈ (h/3) (f_{i-1} + 4 f_i + f_{i+1})

then, putting things together, we get

    y_{i+1} = y_{i-1} + (h/3) (f_{i-1} + 4 f_i + f_{i+1})    (10.3)

In equation (10.3) above, we require y_{i-1} as well as y_i, so this is a two-step method rather than a one-step method.
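A minimal sketch of the two-step scheme (10.3), assuming a simple fixed-point iteration to handle the implicit f_{i+1} term and an exact starting value y_1 for illustration (neither choice comes from the slides):

```python
import math

def simpson_two_step(f, x0, y0, y1, h, n, sweeps=3):
    """Two-step method y_{i+1} = y_{i-1} + (h/3)(f_{i-1} + 4 f_i + f_{i+1}).
    The implicit f_{i+1} term is resolved with a few fixed-point sweeps,
    started from a forward-Euler predictor."""
    xs, ys = [x0, x0 + h], [y0, y1]
    for _ in range(n - 1):
        x_new = xs[-1] + h
        y_new = ys[-1] + h * f(xs[-1], ys[-1])   # Euler predictor
        for _ in range(sweeps):                  # fixed-point correction
            y_new = ys[-2] + (h / 3) * (f(xs[-2], ys[-2])
                                        + 4 * f(xs[-1], ys[-1])
                                        + f(x_new, y_new))
        xs.append(x_new)
        ys.append(y_new)
    return xs, ys

# Example: y' = y, y(0) = 1; y1 = e^h supplies the second starting value
h = 0.1
xs, ys = simpson_two_step(lambda x, y: y, 0.0, 1.0, math.exp(h), h, 10)
```

The fixed-point sweeps converge here because the contraction factor is roughly h/3 per sweep; a stiff problem would call for a Newton solve instead.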

General Form of Linear Multistep Methods (LMMs)

Given a sequence of equally spaced step levels x_n = x_0 + n h with step size h, the general k-step LMM can be written as

    Σ_{j=0}^{k} α_j y_{n+j} = h Σ_{j=0}^{k} β_j f_{n+j}    (10.4)

where f_{n+j} = f(x_{n+j}, y_{n+j}).

These schemes are called "linear" because they involve linear combinations of the y's and f's, and "multistep" because (usually) more than one step is involved.

The method is defined through the parameters α_0, ..., α_k and β_0, ..., β_k (normalized so that α_k = 1). Given the approximate solution up to step level x_{n+k-1}, we obtain the approximate solution at the new step level x_{n+k} from equation (10.4) as

    y_{n+k} = h Σ_{j=0}^{k} β_j f_{n+j} - Σ_{j=0}^{k-1} α_j y_{n+j}    (10.5)

If β_k = 0, then the scheme is explicit, since y_{n+k} can be evaluated directly without the need to solve an equation. If β_k ≠ 0, the scheme is implicit, since we need to solve a (generally nonlinear) equation for y_{n+k} at each step.

Note that to get started, the k-step LMM needs the first k step levels of the approximate solution, y_0, y_1, ..., y_{k-1}, to be specified. The ODE IVP only gives y_0, so something extra has to be done. Standard approaches include:

- using a one-step method to get y_1, ..., y_{k-1}; or
- using a one-step method to get y_1, then a two-step method to get y_2, ..., then a (k-1)-step method to get y_{k-1},

and then continuing with the k-step method.

Newton-Cotes Open Formulas

The open formulas can be expressed in the form of a solution of an ODE over n equally spaced data points:

    y_{i+1} = y_{i-n} + ∫_{x_{i-n}}^{x_{i+1}} f_n(x) dx    (10.6)

where f_n(x) is an nth-order interpolating polynomial.

If n = 1:  y_{i+1} = y_{i-1} + 2 h f_i
If n = 2:  y_{i+1} = y_{i-2} + (3h/2) (f_i + f_{i-1})
If n = 3:  y_{i+1} = y_{i-3} + (4h/3) (2 f_i - f_{i-1} + 2 f_{i-2})
If n = 4:  y_{i+1} = y_{i-4} + (5h/24) (11 f_i + f_{i-1} + f_{i-2} + 11 f_{i-3})
If n = 5:  y_{i+1} = y_{i-5} + (3h/10) (11 f_i - 14 f_{i-1} + 26 f_{i-2} - 14 f_{i-3} + 11 f_{i-4})

where f_i = f(x_i, y_i), f_{i-1} = f(x_{i-1}, y_{i-1}), etc.

Newton-Cotes Closed Formulas

The general expression of the closed form:

    y_{i+1} = y_{i+1-n} + ∫_{x_{i+1-n}}^{x_{i+1}} f_n(x) dx    (10.7)

where the integral is approximated by an nth-order Newton-Cotes closed integration formula.

If n = 1:  y_{i+1} = y_i + (h/2) (f_i + f_{i+1})                                  (trapezoidal rule)
If n = 2:  y_{i+1} = y_{i-1} + (h/3) (f_{i-1} + 4 f_i + f_{i+1})                  (Simpson's 1/3 rule)
If n = 3:  y_{i+1} = y_{i-2} + (3h/8) (f_{i-2} + 3 f_{i-1} + 3 f_i + f_{i+1})     (Simpson's 3/8 rule)
If n = 4:  y_{i+1} = y_{i-3} + (2h/45) (7 f_{i-3} + 32 f_{i-2} + 12 f_{i-1} + 32 f_i + 7 f_{i+1})

Adams-Bashforth Formulas

Rewrite a forward Taylor series expansion:

    y_{i+1} = y_i + h f_i + (h^2/2) f'_i + (h^3/6) f''_i + ...    (10.8)

A 2nd-order backward difference can be used to approximate the derivative:

    f'_i = (f_i - f_{i-1})/h + (h/2) f''_i + O(h^2)    (10.9)

Substituting eqn. (10.9) into eqn. (10.8), we get the 2nd-order Adams-Bashforth formula:

    y_{i+1} = y_i + h [(3/2) f_i - (1/2) f_{i-1}] + (5/12) h^3 f''_i + ...    (10.10)

Higher-order Adams-Bashforth formulas can be developed by substituting higher-order difference approximations into eqn. (10.8). The nth-order formula is generally represented as

    y_{i+1} = y_i + h Σ_{j=0}^{n-1} β_j f_{i-j} + O(h^{n+1})    (10.11)

Adams-Bashforth Formulas

Coefficients for Adams-Bashforth predictors:

Order   β0          β1           β2          β3           β4          β5
1       1
2       3/2         -1/2
3       23/12       -16/12       5/12
4       55/24       -59/24       37/24       -9/24
5       1901/720    -2774/720    2616/720    -1274/720    251/720
6       4277/1440   -7923/1440   9982/1440   -7298/1440   2877/1440   -475/1440

Adams-Moulton Formulas

Rewrite a backward Taylor series expansion around x_{i+1}:

    y_i = y_{i+1} - h f_{i+1} + (h^2/2) f'_{i+1} - (h^3/6) f''_{i+1} + ...    (10.12)

Solving for y_{i+1} gives

    y_{i+1} = y_i + h f_{i+1} - (h^2/2) f'_{i+1} + (h^3/6) f''_{i+1} - ...    (10.13)

Using the same technique as for Adams-Bashforth (a difference approximation for the derivative) yields the 2nd-order Adams-Moulton formula:

    y_{i+1} = y_i + h [(1/2) f_{i+1} + (1/2) f_i] - (h^3/12) f''_i + ...    (10.14)

The nth-order Adams-Moulton formula can be generally written as

    y_{i+1} = y_i + h Σ_{j=0}^{n-1} β_j f_{i+1-j} + O(h^{n+1})    (10.15)

Adams-Moulton Formulas

Coefficients for Adams-Moulton correctors:

Order   β0          β1           β2          β3          β4          β5
2       1/2         1/2
3       5/12        8/12         -1/12
4       9/24        19/24        -5/24       1/24
5       251/720     646/720      -264/720    106/720     -19/720
6       475/1440    1427/1440    -798/1440   482/1440    -173/1440   27/1440
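A quick plain-Python sanity check (not in the slides): for both families, the β's of each order must sum to 1, because the scheme must integrate y' = 1 exactly over one step of size h. Exact rationals avoid floating-point noise:

```python
from fractions import Fraction as F

# β coefficients transcribed from the two tables above
adams_bashforth = {
    1: [F(1)],
    2: [F(3, 2), F(-1, 2)],
    3: [F(23, 12), F(-16, 12), F(5, 12)],
    4: [F(55, 24), F(-59, 24), F(37, 24), F(-9, 24)],
    5: [F(1901, 720), F(-2774, 720), F(2616, 720), F(-1274, 720), F(251, 720)],
    6: [F(4277, 1440), F(-7923, 1440), F(9982, 1440),
        F(-7298, 1440), F(2877, 1440), F(-475, 1440)],
}
adams_moulton = {
    2: [F(1, 2), F(1, 2)],
    3: [F(5, 12), F(8, 12), F(-1, 12)],
    4: [F(9, 24), F(19, 24), F(-5, 24), F(1, 24)],
    5: [F(251, 720), F(646, 720), F(-264, 720), F(106, 720), F(-19, 720)],
    6: [F(475, 1440), F(1427, 1440), F(-798, 1440),
        F(482, 1440), F(-173, 1440), F(27, 1440)],
}

for table in (adams_bashforth, adams_moulton):
    for order, betas in table.items():
        assert sum(betas) == 1   # consistency: exact for y' = 1
```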

Milne's Method

Milne's method is based on Newton-Cotes integration formulas. It uses the three-point Newton-Cotes open formula as a predictor:

    y^0_{i+1} = y^m_{i-3} + (4h/3) (2 f^m_i - f^m_{i-1} + 2 f^m_{i-2})    (10.16)

and the three-point Newton-Cotes closed formula (Simpson's 1/3 rule) as a corrector:

    y^j_{i+1} = y^m_{i-1} + (h/3) (f^m_{i-1} + 4 f^m_i + f(x_{i+1}, y^{j-1}_{i+1}))    (10.17)

where j is an index representing the number of iterations of the corrector and the superscript m denotes a final (modified) value from a previous step. The predictor error is given by

    E_p = (28/29) (y^m_i - y^0_i)    (10.18)

and the corrector error is given by

    E_c ≈ -(1/29) (y^m_{i+1} - y^0_{i+1})    (10.19)
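An illustrative sketch of Milne's predictor-corrector pair (one corrector pass per step); the exact starting values and the test problem y' = y are our own choices, not part of the slides:

```python
import math

def milne_pc(f, xs, ys, h, n):
    """Milne predictor-corrector, eqns. (10.16)-(10.17). The lists xs, ys
    must hold four starting points; one corrector pass is used per step."""
    fs = [f(x, y) for x, y in zip(xs, ys)]
    for _ in range(n):
        x_new = xs[-1] + h
        # predictor: three-point open Newton-Cotes formula
        y_pred = ys[-4] + (4 * h / 3) * (2 * fs[-1] - fs[-2] + 2 * fs[-3])
        # corrector: Simpson's 1/3 rule (closed Newton-Cotes)
        y_corr = ys[-2] + (h / 3) * (fs[-2] + 4 * fs[-1] + f(x_new, y_pred))
        xs.append(x_new)
        ys.append(y_corr)
        fs.append(f(x_new, y_corr))
    return xs, ys

# y' = y, y(0) = 1, with exact starting values for illustration
h = 0.1
xs = [i * h for i in range(4)]
ys = [math.exp(x) for x in xs]
xs, ys = milne_pc(lambda x, y: y, xs, ys, h, 7)   # advance to x = 1.0
```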

Adams-Bashforth-Moulton Method

This popular multistep method uses the 4th-order Adams-Bashforth formula as the predictor:

    y^0_{i+1} = y^m_i + (h/24) (55 f^m_i - 59 f^m_{i-1} + 37 f^m_{i-2} - 9 f^m_{i-3})    (10.20)

and the 4th-order Adams-Moulton formula as the corrector:

    y^j_{i+1} = y^m_i + (h/24) (9 f(x_{i+1}, y^{j-1}_{i+1}) + 19 f^m_i - 5 f^m_{i-1} + f^m_{i-2})    (10.21)

The error estimates are given as

    E_p = (251/270) (y^m_i - y^0_i)    (10.22)

    E_c ≈ -(19/270) (y^m_{i+1} - y^0_{i+1})    (10.23)
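The scheme above can be sketched as follows (an illustrative implementation with one corrector pass; the names and the test problem y' = y are our own), including the corrector-error estimate of eqn. (10.23):

```python
import math

def abm4_step(f, xs, ys, fs, h):
    """One Adams-Bashforth-Moulton step, eqns. (10.20)-(10.21), with the
    corrector-error estimate of (10.23); needs four prior points."""
    x_new = xs[-1] + h
    # 4th-order Adams-Bashforth predictor
    y_pred = ys[-1] + (h / 24) * (55 * fs[-1] - 59 * fs[-2]
                                  + 37 * fs[-3] - 9 * fs[-4])
    # 4th-order Adams-Moulton corrector (single pass)
    y_corr = ys[-1] + (h / 24) * (9 * f(x_new, y_pred) + 19 * fs[-1]
                                  - 5 * fs[-2] + fs[-3])
    err_est = -(19 / 270) * (y_corr - y_pred)   # corrector error estimate
    return x_new, y_corr, err_est

# y' = y, y(0) = 1, exact starting values for illustration
h = 0.1
xs = [i * h for i in range(4)]
ys = [math.exp(x) for x in xs]
fs = list(ys)                      # for y' = y, f = y
errs = []
for _ in range(7):                 # advance from x = 0.3 to x = 1.0
    x_new, y_new, e = abm4_step(lambda x, y: y, xs, ys, fs, h)
    xs.append(x_new); ys.append(y_new); fs.append(y_new)
    errs.append(e)
```

In a production code, err_est would drive step-size control: halve h when it is too large, double h when it is comfortably small.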

Why Bother with All These Schemes?

Example 10.1

Approximate the solution with y(0) = 1 over the interval [0, 10].

Example 10.2

Approximate the solution with y(0) = 1, step size h = 1/8, over the interval [0, 3].

Local Truncation Error of LMMs

For the general LMM (10.4), the Local Truncation Error (LTE) is defined as

    LTE(h) = (1/h) [ Σ_{j=0}^{k} α_j y(x_{n+j}) - h Σ_{j=0}^{k} β_j y'(x_{n+j}) ]    (10.24)

where y(x) is an exact solution of the ODE. It measures how well the exact solution satisfies the difference scheme.

Zero-Stability

A starting point for establishing whether a numerical method for approximating ODEs is any good is to see if it can solve the trivial problem

    dy/dx = 0,  y(0) = y_0    (10.25)

The solution of this ODE is y(x) = y_0. Applying either Euler's forward or backward method to it yields

    y_{i+1} = y_i = y_0    (10.26)

i.e., the exact solution. This is the case for all Runge-Kutta methods. The property related to solving dy/dx = 0 that is required for k-step LMMs is actually less demanding than getting the right answer. It is called zero-stability.

Zero-Stability and Root Condition

For the general LMM in (10.4), we define the first and second characteristic polynomials:

    ρ(z) = Σ_{j=0}^{k} α_j z^j,    σ(z) = Σ_{j=0}^{k} β_j z^j

Applying the general LMM equation (10.4) to dy/dx = 0 gives Σ_{j=0}^{k} α_j y_{n+j} = 0; seeking solutions of the form y_n = z^n leads to the characteristic equation

    ρ(z) = 0    (10.27)

A Linear Multistep Method is zero-stable if and only if all the roots of the characteristic polynomial ρ(z) satisfy |z| ≤ 1, and any root with |z| = 1 is simple.
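The root condition is easy to check mechanically. Here is an illustrative sketch (not from the slides) for two-step methods; for simplicity it tests only |z| ≤ 1 and assumes the roots are distinct, leaving the simple-root requirement at |z| = 1 aside:

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a z^2 + b z + c = 0."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def zero_stable(roots, tol=1e-12):
    """Root condition, assuming distinct roots: all |z| <= 1."""
    return all(abs(z) <= 1 + tol for z in roots)

# AB2: y_{n+2} - y_{n+1} = h(...)  =>  rho(z) = z^2 - z, roots 0 and 1
assert zero_stable(quadratic_roots(1, -1, 0))

# Classic counterexample: y_{n+2} + 4 y_{n+1} - 5 y_n = h(4 f_{n+1} + 2 f_n)
# has rho(z) = z^2 + 4z - 5 with roots 1 and -5, so it is NOT zero-stable,
# even though it is consistent and formally third-order accurate.
assert not zero_stable(quadratic_roots(1, 4, -5))
```

The second example shows why zero-stability is a separate requirement from accuracy: the parasitic root -5 amplifies rounding and starting errors by a factor of 5 per step.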

Consistency and Convergence

Consistency: the LMM approximation scheme is consistent with the ODE if

    LTE → 0 as h → 0

Convergence: for all x in the interval of integration,

    exact - approx → 0 as h → 0

For most well-behaved ODE systems, LMMs with sensible initial data satisfy:

    zero-stability + consistency ⟹ convergence

    LTE = O(h^p) ⟹ exact - approx = O(h^p)
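The statement "LTE = O(h^p) implies a global error of O(h^p)" can be observed numerically. An illustrative check (our own, not from the slides) with forward Euler, where p = 1, so halving h should roughly halve the error:

```python
import math

def euler_error(h):
    """Global error of forward Euler for y' = y, y(0) = 1, at x = 1."""
    n = round(1.0 / h)
    y = 1.0
    for _ in range(n):
        y += h * y            # forward Euler step
    return abs(y - math.e)

# Error ratios between successive halvings of h; each should be near 2^p = 2
ratios = [euler_error(h) / euler_error(h / 2) for h in (0.1, 0.05, 0.025)]
```

The same experiment with the RK4 or AB4 solvers sketched earlier yields ratios near 2^4 = 16, confirming their fourth-order convergence.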