RK Tutorial
Transcript of RK Tutorial
8/3/2019

John Butcher's tutorials: Introduction to Runge–Kutta methods

Sections: Introduction · Formulation · Taylor series: exact solution · Approximation · Order conditions
Introduction

It will be convenient to consider only autonomous initial value problems

y′(x) = f(y(x)),  y(x_0) = y_0,  f : R^N → R^N.

The Euler method is the simplest way of obtaining numerical approximations at

x_1 = x_0 + h, x_2 = x_1 + h, ...

using the formula

y_n = y_{n−1} + h f(y_{n−1}),  h = x_n − x_{n−1},  n = 1, 2, ....
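The Euler formula above can be sketched in Python (a minimal illustration, not part of the original slides; the test problem y′ = y with y(0) = 1 is an assumed example with exact solution e^x):

```python
import math

def euler(f, y0, x0, h, n):
    """Take n Euler steps y_n = y_{n-1} + h*f(y_{n-1}) starting from (x0, y0)."""
    y = y0
    for _ in range(n):
        y = y + h * f(y)
    return y

# Assumed test problem: y' = y, y(0) = 1, exact solution y(x) = exp(x).
approx = euler(lambda y: y, 1.0, 0.0, 0.01, 100)  # integrate to x = 1
print(abs(approx - math.e))  # small, and shrinks roughly in proportion to h
```

Halving h (and doubling n) roughly halves the error, reflecting the first-order accuracy discussed below.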
This method can be made more accurate by using either the mid-point quadrature formula

y_n = y_{n−1} + h f( y_{n−1} + ½ h f(y_{n−1}) ),

or the trapezoidal rule quadrature formula:

y_n = y_{n−1} + ½ h f(y_{n−1}) + ½ h f( y_{n−1} + h f(y_{n−1}) ).

These methods from Runge's 1895 paper are second order because the error in a single step behaves like O(h^3).

This is in contrast to the first order Euler method where the order behaviour is O(h^2).
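Both of Runge's formulas translate directly into code (an illustration, not from the slides; the test problem y′ = y is an assumption):

```python
import math

def midpoint_step(f, y, h):
    # y_n = y_{n-1} + h f(y_{n-1} + (1/2) h f(y_{n-1}))
    return y + h * f(y + 0.5 * h * f(y))

def trapezoidal_step(f, y, h):
    # y_n = y_{n-1} + (1/2) h f(y_{n-1}) + (1/2) h f(y_{n-1} + h f(y_{n-1}))
    k1 = f(y)
    return y + 0.5 * h * k1 + 0.5 * h * f(y + h * k1)

# One step of size h on y' = y from y = 1: both errors are O(h^3),
# versus O(h^2) for a single Euler step.
h = 0.1
exact = math.exp(h)
print(abs(midpoint_step(lambda y: y, 1.0, h) - exact))
print(abs(trapezoidal_step(lambda y: y, 1.0, h) - exact))
print(abs((1.0 + h) - exact))  # Euler, for comparison: visibly larger
```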
A few years later, Heun gave a full explanation of order 3 methods.

Shortly afterwards Kutta gave a detailed analysis of order 4 methods.

In the early days of Runge–Kutta methods the aim seemed to be to find explicit methods of higher and higher order.

Later the aim shifted to finding methods that seemed to be optimal in terms of local truncation error and to finding built-in error estimators.

With the emergence of stiff problems as an important application area, attention moved to implicit methods.
Formulation of Runge–Kutta methods

In carrying out a step we evaluate s stage values

Y_1, Y_2, ..., Y_s

and s stage derivatives

F_1, F_2, ..., F_s,

using the formula F_i = f(Y_i). Each Y_i is defined as a linear combination of the F_j added on to y_0:

Y_i = y_0 + h ∑_{j=1}^{s} a_{ij} F_j,  i = 1, 2, ..., s,

and the approximation at x_1 = x_0 + h is found from

y_1 = y_0 + h ∑_{i=1}^{s} b_i F_i.
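The two displayed formulas can be sketched directly in code. The version below (an assumed implementation; only the formulas themselves come from the slides) handles explicit methods, where A is strictly lower triangular so the stages can be computed in order:

```python
def rk_step(A, b, f, y0, h):
    """One explicit Runge-Kutta step:
    Y_i = y0 + h * sum_j a_ij F_j,  F_i = f(Y_i),  y1 = y0 + h * sum_i b_i F_i."""
    s = len(b)
    F = []
    for i in range(s):
        Yi = y0 + h * sum(A[i][j] * F[j] for j in range(i))  # explicit: j < i
        F.append(f(Yi))
    return y0 + h * sum(bi * Fi for bi, Fi in zip(b, F))

# The mid-point method seen earlier, written as a tableau: c = (0, 1/2), b = (0, 1).
A = [[0.0, 0.0],
     [0.5, 0.0]]
b = [0.0, 1.0]
print(rk_step(A, b, lambda y: y, 1.0, 0.1))  # equals y0 + h*f(y0 + (h/2)*f(y0))
```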
We represent the method by a tableau:

c_1 | a_11  a_12  ...  a_1s
c_2 | a_21  a_22  ...  a_2s
 ⋮  |  ⋮     ⋮          ⋮
c_s | a_s1  a_s2  ...  a_ss
    | b_1   b_2   ...  b_s

or, if the method is explicit, by the simplified tableau

0   |
c_2 | a_21
 ⋮  |  ⋮     ⋱
c_s | a_s1  a_s2  ...  a_{s,s−1}
    | b_1   b_2   ...  b_{s−1}  b_s

In each case, c_i (i = 1, 2, ...) is defined as ∑_{j=1}^{s} a_{ij}. The value of c_i indicates the point X_i = x_0 + h c_i for which Y_i is a good approximation to y(X_i).
Examples:

y_1 = y_0 + 0·h f(y_0) + 1·h f( y_0 + ½ h f(y_0) )

0   |
1/2 | 1/2
    | 0    1
y_1 = y_0 + ½ h f(y_0) + ½ h f( y_0 + 1·h f(y_0) )

0 |
1 | 1
  | 1/2  1/2
Taylor series of exact solution

We need formulae for the second, third, ..., derivatives.

y′(x) = f(y(x))

y″(x) = f′(y(x)) y′(x)
      = f′(y(x)) f(y(x))

y‴(x) = f″(y(x))( f(y(x)), y′(x) ) + f′(y(x)) f′(y(x)) y′(x)
      = f″(y(x))( f(y(x)), f(y(x)) ) + f′(y(x)) f′(y(x)) f(y(x))

This will become increasingly complicated as we evaluate higher derivatives. Hence we look for a systematic pattern.

Write f = f(y(x)), f′ = f′(y(x)), f″ = f″(y(x)), ....
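These identities can be checked on a concrete scalar problem. The sketch below uses f(y) = y² (an assumed example, chosen because y′ = y² has the closed-form solution y(x) = y_0/(1 − y_0 x)); for this f, f′(y) = 2y and f″(y) = 2:

```python
from fractions import Fraction

y0, x = Fraction(1, 3), Fraction(1, 2)
y = y0 / (1 - y0 * x)                        # exact solution value at x

fv = y * y                                   # y'(x)  = f
ypp = (2 * y) * fv                           # y''(x) = f' f
yppp = 2 * fv * fv + (2 * y) * (2 * y) * fv  # y'''(x) = f''(f, f) + f' f' f

# Differentiating y(x) = y0/(1 - y0*x) directly gives the same values:
assert fv == y0**2 / (1 - y0 * x) ** 2
assert ypp == 2 * y0**3 / (1 - y0 * x) ** 3
assert yppp == 6 * y0**4 / (1 - y0 * x) ** 4
print("derivative formulas confirmed for f(y) = y^2")
```

Rational arithmetic makes the comparison exact rather than approximate.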
y″(x) = f′ f

y‴(x) = f″(f, f) + f′ f′ f

y⁗(x) = f‴(f, f, f) + 3 f″(f′ f, f) + f′ f″(f, f) + f′ f′ f′ f

The various terms have a structure related to rooted trees. Hence, we introduce the set of all rooted trees and some functions on this set.
Let T denote the set of rooted trees:

T = { τ, [the further tree diagrams are not reproduced in this transcript], ... }

We identify the following functions on T. In this table, t will denote a typical tree:

r(t)       order of t = number of vertices
σ(t)       symmetry of t = order of the automorphism group
γ(t)       density of t
α(t)       number of ways of labelling with an ordered set
β(t)       number of ways of labelling with an unordered set
F(t)(y_0)  elementary differential

We will give examples of these functions based on an example tree t of order 7.
t = [diagram: a root carrying two subtrees, each a vertex with two terminal children; vertices labelled 1–7]

r(t) = 7

σ(t) = 8

γ(t) = 63 = 7 · 3 · 3 · 1 · 1 · 1 · 1

α(t) = r(t)! / (σ(t) γ(t)) = 10

β(t) = r(t)! / σ(t) = 630

F(t) = f″( f″(f, f), f″(f, f) )
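All five combinatorial functions can be computed recursively. The sketch below uses an assumed encoding (not from the slides): () is the single vertex τ, and a tuple of subtrees is a root carrying them. It reproduces the values just listed for the example tree:

```python
from math import factorial
from collections import Counter

def r(t):                                # order: number of vertices
    return 1 + sum(r(c) for c in t)

def gamma(t):                            # density
    g = r(t)
    for c in t:
        g *= gamma(c)
    return g

def canon(t):                            # canonical form, to detect equal subtrees
    return tuple(sorted(canon(c) for c in t))

def sigma(t):                            # symmetry: order of the automorphism group
    s = 1
    for c in t:
        s *= sigma(c)
    for m in Counter(canon(c) for c in t).values():
        s *= factorial(m)                # identical subtrees may be permuted
    return s

def alpha(t):                            # labellings with an ordered set
    return factorial(r(t)) // (sigma(t) * gamma(t))

def beta(t):                             # labellings with an unordered set
    return factorial(r(t)) // sigma(t)

# The example tree: a root with two subtrees, each a vertex with two leaves.
cherry = ((), ())
t = (cherry, cherry)
print(r(t), sigma(t), gamma(t), alpha(t), beta(t))  # -> 7 8 63 10 630
```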
These functions are easy to compute up to order-4 trees (trees written here in bracket notation: τ is the single vertex and [t_1 ... t_m] is the root carrying subtrees t_1, ..., t_m):

t      τ    [τ]    [τ²]      [[τ]]    [τ³]        [[τ]τ]       [[τ²]]      [[[τ]]]
r(t)   1    2      3         3        4           4            4           4
σ(t)   1    1      2         1        6           1            2           1
γ(t)   1    2      3         6        4           8            12          24
α(t)   1    1      1         1        1           3            1           1
β(t)   1    2      3         6        4           24           12          24
F(t)   f    f′f    f″(f,f)   f′f′f    f‴(f,f,f)   f″(f, f′f)   f′f″(f,f)   f′f′f′f
The formal Taylor expansion of the solution at x_0 + h is

y(x_0 + h) = y_0 + ∑_{t∈T} α(t) h^{r(t)} / r(t)! · F(t)(y_0)

Using the known formula for α(t), we can write this as

y(x_0 + h) = y_0 + ∑_{t∈T} h^{r(t)} / (σ(t) γ(t)) · F(t)(y_0)

Our aim will now be to find a corresponding formula for the result computed by one step of a Runge–Kutta method.

By comparing these formulae term by term, we will be able to obtain conditions for a specific order of accuracy.
Taylor series of approximation

We need to evaluate various expressions which depend on the tableau for a particular method. These are known as elementary weights.

We use the example tree we have already considered to illustrate the construction of the elementary weight Φ(t).

t = [the example tree, with the root labelled i, its children labelled j and k, the children of j labelled l and m, and the children of k labelled n and o]

Φ(t) = ∑_{i,j,k,l,m,n,o=1}^{s} b_i a_{ij} a_{ik} a_{jl} a_{jm} a_{kn} a_{ko}

Simplify by summing over l, m, n, o:

Φ(t) = ∑_{i,j,k=1}^{s} b_i a_{ij} c_j² a_{ik} c_k²
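The elementary weight admits the same kind of recursion as the tree functions: form an inner weight per stage, multiply across subtrees, and finish at the root with b. This sketch is an assumed implementation consistent with the slide's formula; the tableau used for illustration is the classical fourth-order tableau appearing later in the deck:

```python
def phi(A, row, t):
    """Inner weight at one stage: for t = (t1, ..., tm),
    phi_row(t) = prod_k sum_j A[row][j] * phi_j(t_k); empty product for a leaf."""
    p = 1.0
    for c in t:
        p *= sum(A[row][j] * phi(A, j, c) for j in range(len(A)))
    return p

def Phi(A, b, t):
    """Elementary weight of tree t for the tableau (A, b)."""
    return sum(b[i] * phi(A, i, t) for i in range(len(b)))

# The example tree: this reproduces
# Phi(t) = sum_i b_i (sum_j a_ij c_j^2)(sum_k a_ik c_k^2).
cherry = ((), ())
t = (cherry, cherry)

# Classical fourth-order tableau, used purely for illustration.
A = [[0, 0, 0, 0],
     [0.5, 0, 0, 0],
     [0, 0.5, 0, 0],
     [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
print(Phi(A, b, t))
```

For the single-vertex tree the recursion reduces to Φ(τ) = ∑ b_i, the first entry of the Φ(t) table.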
Now add Φ(t) to the table of functions:

t      τ      [τ]        [τ²]        [[τ]]
r(t)   1      2          3           3
α(t)   1      1          1           1
γ(t)   1      2          3           6
Φ(t)   ∑b_i   ∑b_i c_i   ∑b_i c_i²   ∑b_i a_{ij} c_j

t      [τ³]        [[τ]τ]                [[τ²]]            [[[τ]]]
r(t)   4           4                     4                 4
α(t)   1           3                     1                 1
γ(t)   4           8                     12                24
Φ(t)   ∑b_i c_i³   ∑b_i c_i a_{ij} c_j   ∑b_i a_{ij} c_j²   ∑b_i a_{ij} a_{jk} c_k
The formal Taylor expansion of the numerical approximation to the solution at x_0 + h is

y_1 = y_0 + ∑_{t∈T} α(t) h^{r(t)} / r(t)! · γ(t) Φ(t) F(t)(y_0)

Using the known formula for α(t), we can write this as

y_1 = y_0 + ∑_{t∈T} h^{r(t)} / σ(t) · Φ(t) F(t)(y_0)
Order conditions
To match the Taylor series

y(x_0 + h) = y_0 + ∑_{t∈T} h^{r(t)} / (σ(t) γ(t)) · F(t)(y_0)

y_1 = y_0 + ∑_{t∈T} h^{r(t)} / σ(t) · Φ(t) F(t)(y_0)

up to h^p terms we need to ensure that

Φ(t) = 1 / γ(t),

for all trees such that r(t) ≤ p.

These are the order conditions.
The order conditions will be illustrated in the case of explicit 4-stage methods with order 4.

t:  Φ(t) = 1/γ(t)

b_1 + b_2 + b_3 + b_4 = 1
b_2 c_2 + b_3 c_3 + b_4 c_4 = 1/2
b_2 c_2² + b_3 c_3² + b_4 c_4² = 1/3
b_3 a_32 c_2 + b_4 a_42 c_2 + b_4 a_43 c_3 = 1/6
b_2 c_2³ + b_3 c_3³ + b_4 c_4³ = 1/4
b_3 c_3 a_32 c_2 + b_4 c_4 a_42 c_2 + b_4 c_4 a_43 c_3 = 1/8
b_3 a_32 c_2² + b_4 a_42 c_2² + b_4 a_43 c_3² = 1/12
b_4 a_43 a_32 c_2 = 1/24
Review order conditions · Quadrature connections · Order 2 · Order 3 · Order 4

John Butcher's tutorials: Low order Runge–Kutta methods

The classical fourth-order method:

0    |
1/2  | 1/2
1/2  | 0    1/2
1    | 0    0    1
     | 1/6  1/3  1/3  1/6
Review of order conditions
Recall the order-4 conditions for a 4-stage method:
t:  Φ(t) = 1/γ(t)

b_1 + b_2 + b_3 + b_4 = 1
b_2 c_2 + b_3 c_3 + b_4 c_4 = 1/2
b_2 c_2² + b_3 c_3² + b_4 c_4² = 1/3
b_3 a_32 c_2 + b_4 a_42 c_2 + b_4 a_43 c_3 = 1/6
b_2 c_2³ + b_3 c_3³ + b_4 c_4³ = 1/4
b_3 c_3 a_32 c_2 + b_4 c_4 a_42 c_2 + b_4 c_4 a_43 c_3 = 1/8
b_3 a_32 c_2² + b_4 a_42 c_2² + b_4 a_43 c_3² = 1/12
b_4 a_43 a_32 c_2 = 1/24
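These eight equations can be verified numerically for the classical fourth-order tableau (a direct transcription of the conditions above; the tolerance is an implementation detail):

```python
A = [[0, 0, 0, 0],
     [0.5, 0, 0, 0],
     [0, 0.5, 0, 0],
     [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [sum(row) for row in A]              # c_i = sum_j a_ij
S = range(4)

conditions = [
    (sum(b[i] for i in S), 1),
    (sum(b[i] * c[i] for i in S), 1 / 2),
    (sum(b[i] * c[i] ** 2 for i in S), 1 / 3),
    (sum(b[i] * A[i][j] * c[j] for i in S for j in S), 1 / 6),
    (sum(b[i] * c[i] ** 3 for i in S), 1 / 4),
    (sum(b[i] * c[i] * A[i][j] * c[j] for i in S for j in S), 1 / 8),
    (sum(b[i] * A[i][j] * c[j] ** 2 for i in S for j in S), 1 / 12),
    (sum(b[i] * A[i][j] * A[j][k] * c[k] for i in S for j in S for k in S), 1 / 24),
]
assert all(abs(lhs - rhs) < 1e-12 for lhs, rhs in conditions)
print("all eight order-4 conditions hold for the classical method")
```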
For order p (p ≤ 4), no more than p stages are required. The conditions for these can be found, when p < 4, by omitting conditions corresponding to trees with more than p vertices (that is, by omitting trees with order greater than p) and by omitting all terms in Φ(t) with subscripts greater than p.

Order 2:
b_1 + b_2 = 1
b_2 c_2 = 1/2

Order 3:
b_1 + b_2 + b_3 = 1
b_2 c_2 + b_3 c_3 = 1/2
b_2 c_2² + b_3 c_3² = 1/3
b_3 a_32 c_2 = 1/6
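The order-2 conditions leave one free parameter: choosing c_2 determines b_2 = 1/(2c_2) and b_1 = 1 − b_2. This parametrization is a standard observation rather than something stated on the slide:

```python
def order2_method(c2):
    """Solve b1 + b2 = 1, b2*c2 = 1/2 for a chosen c2 != 0."""
    b2 = 1 / (2 * c2)
    return (1 - b2, b2, c2)

print(order2_method(0.5))  # (0.0, 1.0, 0.5): the mid-point method
print(order2_method(1.0))  # (0.5, 0.5, 1.0): the trapezoidal-rule method
```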
Connection with quadrature
In the special case of a differential equation of the form

dy/dx = φ(x),

integration over a single step using an s-stage Runge–Kutta method is equivalent to the approximation

∫_{x_0}^{x_1} φ(x) dx ≈ h ∑_{i=1}^{s} b_i φ(x_0 + h c_i)    (*)

We will examine how well this approximation works for the s special choices of φ given by

φ(x) = (x − x_0)^{k−1},  k = 1, 2, ..., s.

The error in (*) is equal to

( ∑_{i=1}^{s} b_i c_i^{k−1} − 1/k ) h^k    (**)

To obtain order s, the coefficient of h^k in (**) must be zero for all k = 1, 2, ..., s.
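The coefficients ∑_i b_i c_i^{k−1} − 1/k can be checked exactly with rational arithmetic. The sketch below (an illustration; the choice of methods to test is an assumption) applies them to the mid-point weights and to the (b, c) pair of the classical fourth-order method, which amounts to Simpson's rule with the middle node repeated:

```python
from fractions import Fraction

def quad_errors(b, c, s):
    """Errors sum_i b_i c_i^(k-1) - 1/k of the induced quadrature rule, k = 1..s."""
    return [sum(bi * ci ** (k - 1) for bi, ci in zip(b, c)) - Fraction(1, k)
            for k in range(1, s + 1)]

half = Fraction(1, 2)
# Mid-point method: b = (0, 1), c = (0, 1/2); exact for k = 1, 2.
print(quad_errors([Fraction(0), Fraction(1)], [Fraction(0), half], 2))
# Classical RK4 weights: b = (1/6, 1/3, 1/3, 1/6), c = (0, 1/2, 1/2, 1);
# exact for k = 1, 2, 3, 4.
b4 = [Fraction(1, 6), Fraction(1, 3), Fraction(1, 3), Fraction(1, 6)]
c4 = [Fraction(0), half, half, Fraction(1)]
print(quad_errors(b4, c4, 4))
```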
From a consideration of special quadrature problems, we have seen that necessary conditions for order s are that
  ∑_{i=1}^{s} b_i c_i^{k−1} = 1/k,  k = 1, 2, ..., s.   (*)
These are equivalent to those Runge-Kutta order conditions which correspond to the trees [tree diagrams omitted in this transcript].
It will usually be convenient to choose a quadrature formula as the first step in deriving a Runge-Kutta method. Once this is done, the b_i and the c_i are known and the final task will be to satisfy the remaining order conditions by choosing suitable values of the a_ij.
In the remaining sections of this tutorial, these ideas will be applied to finding methods up to order 4.
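The first step, finding the b_i from chosen c_i, is a Vandermonde linear system. A sketch of that step in exact arithmetic (the function name is ours, not from the slides):

```python
from fractions import Fraction as F

def quadrature_weights(c):
    """Solve sum_i b_i c_i^(k-1) = 1/k, k = 1..s, by Gaussian
    elimination on the Vandermonde system (exact rationals).
    Assumes the c_i are distinct."""
    s = len(c)
    # augmented rows: [c_1^(k-1), ..., c_s^(k-1) | 1/k]
    M = [[ci**k for ci in c] + [F(1, k + 1)] for k in range(s)]
    for col in range(s):
        piv = next(r for r in range(col, s) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(s):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [M[r][s] for r in range(s)]

# Distinct nodes 0, 1/2, 1 give Simpson's rule: weights 1/6, 2/3, 1/6.
print(quadrature_weights([F(0), F(1, 2), F(1)]))
```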
Methods with order 2
The two quadrature conditions are
  b_1 + b_2 = 1,
  b_2 c_2 = 1/2,
corresponding to the trees [diagrams omitted].
There are no additional trees (or additional order conditions) to satisfy, so all we have to do is choose c_2 ≠ 0, and immediately we find that b_2 = 1/(2c_2), b_1 = 1 − 1/(2c_2).
Here are three methods based on convenient choices of c_2. Note that the first two methods are due to Runge.

  c_2 = 1/2:              c_2 = 1:                c_2 = 2/3:
    0   |                   0 |                     0   |
    1/2 | 1/2               1 |  1                  2/3 | 2/3
        |  0    1             | 1/2  1/2                | 1/4  3/4
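A minimal sketch of this one-parameter family in code (the test problem and step counts are our illustrative choices): halving h should divide the global error by about four, confirming order 2 for any admissible c_2.

```python
import math

def rk2_step(f, y, h, c2):
    """One step of the generic two-stage order-2 family derived above:
    b2 = 1/(2*c2), b1 = 1 - b2, for any c2 != 0."""
    b2 = 1.0 / (2.0 * c2)
    b1 = 1.0 - b2
    k1 = f(y)
    k2 = f(y + h * c2 * k1)
    return y + h * (b1 * k1 + b2 * k2)

# Integrate y' = y, y(0) = 1 to x = 1 and compare against e.
for c2 in (0.5, 1.0, 2.0 / 3.0):
    errs = []
    for n in (50, 100):
        y, h = 1.0, 1.0 / n
        for _ in range(n):
            y = rk2_step(lambda u: u, y, h, c2)
        errs.append(abs(y - math.e))
    print(f"c2 = {c2:.3f}: error ratio {errs[0] / errs[1]:.2f}")
```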
Methods with order 3
There are now three quadrature conditions, corresponding to the trees [diagrams omitted]. These are
  b_1 + b_2 + b_3 = 1,   (a)
  b_2 c_2 + b_3 c_3 = 1/2,   (b)
  b_2 c_2^2 + b_3 c_3^2 = 1/3.   (c)
There is also an additional condition, corresponding to a fourth tree [diagram omitted]:
  b_3 a_32 c_2 = 1/6.   (d)
The steps in finding a method are to choose suitable c_i, solve (a), (b) and (c) for the b_i, and finally solve (d) for a_32.
Note that in the choice of the c_i and the evaluation of the b_i, the value b_3 = 0 must be avoided, otherwise the solution of (d) becomes impossible.
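These steps can be carried out mechanically. A sketch in exact arithmetic (the function name and the example choice c_2 = 1/2, c_3 = 1 are ours, not from the slides):

```python
from fractions import Fraction as F

def order3_method(c2, c3):
    """Choose c2, c3, solve (a)-(c) for the b_i, then (d) for a32.
    Assumes c2 != c3, both nonzero, and that the resulting b3 != 0."""
    det = c2 * c3**2 - c3 * c2**2
    b2 = (F(1, 2) * c3**2 - F(1, 3) * c3) / det   # Cramer's rule on (b), (c)
    b3 = (F(1, 3) * c2 - F(1, 2) * c2**2) / det
    b1 = 1 - b2 - b3                              # (a)
    a32 = F(1, 6) / (b3 * c2)                     # (d)
    return b1, b2, b3, a32

# c2 = 1/2, c3 = 1 gives b = (1/6, 2/3, 1/6) and a32 = 2,
# the classical third-order method of Kutta.
print(order3_method(F(1, 2), F(1)))
```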
Methods with order 4
Recall the conditions for order 4, but ordered differently:
  b_1 + b_2 + b_3 + b_4 = 1,   (1)
  b_2 c_2 + b_3 c_3 + b_4 c_4 = 1/2,   (2)
  b_2 c_2^2 + b_3 c_3^2 + b_4 c_4^2 = 1/3,   (3)
  b_2 c_2^3 + b_3 c_3^3 + b_4 c_4^3 = 1/4,   (4)
  b_3 a_32 c_2 + b_4 a_42 c_2 + b_4 a_43 c_3 = 1/6,   (5)
  b_3 c_3 a_32 c_2 + b_4 c_4 a_42 c_2 + b_4 c_4 a_43 c_3 = 1/8,   (6)
  b_3 a_32 c_2^2 + b_4 a_42 c_2^2 + b_4 a_43 c_3^2 = 1/12,   (7)
  b_4 a_43 a_32 c_2 = 1/24.   (8)
Given c_2, c_3, c_4, carry out the three steps:
  1. solve for b_1, b_2, b_3, b_4 from (1), (2), (3), (4),
  2. solve for a_32, a_42, a_43 from (5), (6), (7),
  3. substitute the results found in steps 1 and 2 into (8) and check for consistency.
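As a sanity check, the classical fourth-order method (a standard example, not derived in the slides) satisfies all eight conditions; in exact arithmetic:

```python
from fractions import Fraction as F

# Classical RK4: c = (0, 1/2, 1/2, 1), b = (1/6, 1/3, 1/3, 1/6).
b = [F(1, 6), F(1, 3), F(1, 3), F(1, 6)]
c = [F(0), F(1, 2), F(1, 2), F(1)]
a32, a42, a43 = F(1, 2), F(0), F(1)

checks = [
    (sum(b), F(1)),                                                  # (1)
    (sum(bi * ci for bi, ci in zip(b, c)), F(1, 2)),                 # (2)
    (sum(bi * ci**2 for bi, ci in zip(b, c)), F(1, 3)),              # (3)
    (sum(bi * ci**3 for bi, ci in zip(b, c)), F(1, 4)),              # (4)
    (b[2]*a32*c[1] + b[3]*a42*c[1] + b[3]*a43*c[2], F(1, 6)),        # (5)
    (b[2]*c[2]*a32*c[1] + b[3]*c[3]*(a42*c[1] + a43*c[2]), F(1, 8)), # (6)
    (b[2]*a32*c[1]**2 + b[3]*(a42*c[1]**2 + a43*c[2]**2), F(1, 12)), # (7)
    (b[3]*a43*a32*c[1], F(1, 24)),                                   # (8)
]
assert all(lhs == rhs for lhs, rhs in checks)
print("classical RK4 satisfies conditions (1)-(8)")
```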
If c_2, c_3 and c_4 are treated as parameters, and these steps are carried out, it is found that the consistency condition yielded in Step 3 is surprisingly simple. This condition is:
  c_4 = 1.
For specific choices of c_2 and c_3 to be used with c_4 = 1, it sometimes happens that some step of the process cannot be carried out, for example because of a vanishing denominator, but fortunately, many cases exist when there is no trouble.
We conclude this tutorial by presenting a number of examples of order 4 methods.
John Butcher's tutorials
Implicit Runge-Kutta methods
(An example of an implicit method: the two-stage Gauss method, of order 4.)

  c_1 = 1/2 − √3/6:  a_11 = 1/4,         a_12 = 1/4 − √3/6
  c_2 = 1/2 + √3/6:  a_21 = 1/4 + √3/6,  a_22 = 1/4
  b_1 = 1/2,  b_2 = 1/2
Since we have an order barrier, which says that order p RK methods require more than p stages if p > 4, we might ask how to get around this barrier.
For explicit methods, solving the order conditions becomes increasingly difficult as the order increases, but everything becomes simpler for implicit methods.
For example, the following method has order 5:

  c_1 = 0
  c_2 = 1/4:   a_21 = 1/8,     a_22 = 1/8
  c_3 = 7/10:  a_31 = −1/100,  a_32 = 14/25,  a_33 = 3/20
  c_4 = 1:     a_41 = 2/7,     a_42 = 0,      a_43 = 5/7
  b = (1/14, 32/81, 250/567, 5/54)
We could check the order of this method by verifying the 17 order conditions, but there is an easier way: a method has order 5 if it satisfies the B(5), C(2) and D(2) conditions.
A method satisfies B(k), C(k), D(k) and E(k, ℓ) if

  ∑_{i=1}^{s} b_i c_i^{j−1} = 1/j,  j = 1, 2, ..., k,   B(k)
  ∑_{j=1}^{s} a_ij c_j^{ℓ−1} = c_i^ℓ / ℓ,  i = 1, 2, ..., s,  ℓ = 1, 2, ..., k,   C(k)
  ∑_{i=1}^{s} b_i c_i^{ℓ−1} a_ij = b_j (1 − c_j^ℓ) / ℓ,  j = 1, 2, ..., s,  ℓ = 1, 2, ..., k,   D(k)
  ∑_{i,j=1}^{s} b_i c_i^{m−1} a_ij c_j^{n−1} = 1/((m + n) n),  m = 1, 2, ..., k,  n = 1, 2, ..., ℓ,   E(k, ℓ)

and B(5), C(2) and D(2) are easy to check for this method.
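A sketch of this check in exact arithmetic, using the coefficients of the order-5 method given earlier (the helper function names are ours):

```python
from fractions import Fraction as F

# Coefficients of the order-5 method quoted above.
c = [F(0), F(1, 4), F(7, 10), F(1)]
A = [[F(0)] * 4,
     [F(1, 8), F(1, 8), F(0), F(0)],
     [F(-1, 100), F(14, 25), F(3, 20), F(0)],
     [F(2, 7), F(0), F(5, 7), F(0)]]
b = [F(1, 14), F(32, 81), F(250, 567), F(5, 54)]
s = 4

def B(k):
    return all(sum(b[i] * c[i]**(j - 1) for i in range(s)) == F(1, j)
               for j in range(1, k + 1))

def C(k):
    return all(sum(A[i][j] * c[j]**(l - 1) for j in range(s)) == c[i]**l / l
               for i in range(s) for l in range(1, k + 1))

def D(k):
    return all(sum(b[i] * c[i]**(l - 1) * A[i][j] for i in range(s))
               == b[j] * (1 - c[j]**l) / l
               for j in range(s) for l in range(1, k + 1))

assert B(5) and C(2) and D(2)
print("B(5), C(2), D(2) hold, so the method has order 5")
```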
The most important types of fully implicit methods (that is, A can have any structure) are:

  Gauss methods, of order 2s, characterized by B(2s) and C(s). To satisfy B(2s), the c_i must be the zeros of P_s(2x − 1), where P_s is the Legendre polynomial of degree s.

  Radau IIA methods, of order 2s − 1, characterized by c_s = 1, B(2s − 1) and C(s). The c_i are the zeros of P_s(2x − 1) − P_{s−1}(2x − 1).

Both these families of methods are A-stable, but both are very expensive to implement and both can suffer from order reduction.
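The Gauss nodes are easy to generate numerically: NumPy's Gauss-Legendre routine gives nodes and weights on [−1, 1], which shift to [0, 1]. A sketch for s = 3, checking B(6) in floating point (the choice s = 3 is ours):

```python
import numpy as np

s = 3
# Zeros of P_s(2x - 1): shift the standard Gauss-Legendre nodes.
nodes, weights = np.polynomial.legendre.leggauss(s)
c = (nodes + 1) / 2
b = weights / 2     # quadrature weights scaled to the unit interval

# B(2s): sum_i b_i c_i^(j-1) = 1/j for j = 1, ..., 2s.
for j in range(1, 2 * s + 1):
    assert abs(np.sum(b * c**(j - 1)) - 1 / j) < 1e-13
print("3-stage Gauss nodes satisfy B(6)")
```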
Outline proof that Gauss methods have order 2s:
[Flow diagram: C(s) and B(2s) together imply D(s); C(s), D(s) and B(2s) together imply E(s, s); and these conditions combined give order p = 2s.]
This idea of choosing A as a lower triangular matrix can be taken further by avoiding diagonal zeros.
If all the diagonal elements are equal, we get the diagonally-implicit methods of R. Alexander and the semi-explicit methods of S. P. Nørsett (referred to as semi-implicit by J. C. Butcher in 1965).
The following third order L-stable method illustrates what is possible for DIRK methods:

  c_1 = λ:          a_11 = λ
  c_2 = (1 + λ)/2:  a_21 = (1 − λ)/2,  a_22 = λ
  c_3 = 1:          a_31 = (−6λ² + 16λ − 1)/4,  a_32 = (6λ² − 20λ + 5)/4,  a_33 = λ
  b_1 = (−6λ² + 16λ − 1)/4,  b_2 = (6λ² − 20λ + 5)/4,  b_3 = λ

where λ ≈ 0.4358665215 satisfies λ³ − 3λ² + (3/2)λ − 1/6 = 0.
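A floating-point sketch (the tolerances and root-selection logic are our choices): compute λ from the cubic, then confirm the four third-order conditions b'e = 1, b'c = 1/2, b'c² = 1/3 and b'Ac = 1/6.

```python
import numpy as np

# lambda is the root of x^3 - 3x^2 + (3/2)x - 1/6 lying in (0.3, 0.5).
roots = np.roots([1.0, -3.0, 1.5, -1.0 / 6.0])
lam = float(next(r.real for r in roots
                 if abs(r.imag) < 1e-12 and 0.3 < r.real < 0.5))

c = np.array([lam, (1 + lam) / 2, 1.0])
b = np.array([(-6*lam**2 + 16*lam - 1) / 4, (6*lam**2 - 20*lam + 5) / 4, lam])
A = np.array([[lam, 0.0, 0.0],
              [(1 - lam) / 2, lam, 0.0],
              [b[0], b[1], lam]])

# Third-order conditions.
for lhs, rhs in [(b.sum(), 1.0), (b @ c, 1/2),
                 (b @ c**2, 1/3), (b @ (A @ c), 1/6)]:
    assert abs(lhs - rhs) < 1e-9
print(f"lambda = {lam:.10f}; all third-order conditions hold")
```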
A SIRK method is characterised by the equation
  σ(A) = {λ},
that is, A has a one-point spectrum.
For DIRK methods the stages can be computed independently and sequentially from equations of the form
  Y_i − hλ f(Y_i) = a known quantity.
Each stage requires the same factorised matrix I − hλJ to permit solution by a modified Newton iteration process (where J ≈ ∂f/∂y).
How then is it possible to implement SIRK methods in a similarly efficient manner?
The answer lies in the inclusion of a transformation to Jordan canonical form into the computation.
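The DIRK stage computation can be sketched as follows (the test problem and names are hypothetical; for a linear problem with exact J, the modified Newton iteration converges in a single step):

```python
import numpy as np

def dirk_stage(f, J, known, h, lam, tol=1e-12):
    """Solve Y - h*lam*f(Y) = known by modified Newton iteration.
    M = I - h*lam*J is the same for every stage, so in practice its
    LU factors are computed once per step and reused."""
    n = len(known)
    M = np.eye(n) - h * lam * J
    Y = known.copy()                      # starting guess
    for _ in range(50):
        residual = Y - h * lam * f(Y) - known
        delta = np.linalg.solve(M, residual)
        Y = Y - delta
        if np.linalg.norm(delta) < tol:
            return Y
    raise RuntimeError("Newton iteration did not converge")

# Hypothetical stiff linear test problem y' = Qy:
Q = np.array([[-100.0, 1.0], [0.0, -2.0]])
lam = 0.4358665215
Y = dirk_stage(lambda y: Q @ y, Q, np.array([1.0, 1.0]), h=0.1, lam=lam)
# For linear f with exact J, the stage equation is (I - h*lam*Q) Y = known.
assert np.allclose((np.eye(2) - 0.1 * lam * Q) @ Y, [1.0, 1.0])
print("stage solved:", Y)
```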
Suppose the matrix T transforms A to canonical form as follows:
  T⁻¹ A T = Â,
where Â = λ(I − J̃), with J̃ the nilpotent lower shift matrix, so that

        [  1   0   0  ···  0   0 ]
        [ −1   1   0  ···  0   0 ]
  Â = λ [  0  −1   1  ···  0   0 ]
        [  ⋮             ⋱     ⋮ ]
        [  0   0   0  ···  1   0 ]
        [  0   0   0  ··· −1   1 ]
Consider a single Newton iteration, simplified by the use of the same approximate Jacobian J for each stage.
Assume the incoming approximation is y_0 and that we are attempting to evaluate
  y_1 = y_0 + h (b^T ⊗ I) F,
where F is made up from the s subvectors F_i = f(Y_i), i = 1, 2, ..., s.
The implicit equations to be solved are
  Y = e ⊗ y_0 + h (A ⊗ I) F,
where e is the vector in R^s with every component equal to 1 and Y has subvectors Y_i, i = 1, 2, ..., s.
The Newton process consists of solving the linear system
  (I_s ⊗ I − h A ⊗ J) D = Y − e ⊗ y_0 − h (A ⊗ I) F
and updating
  Y → Y − D.
To benefit from the SI property, write
  Ȳ = (T⁻¹ ⊗ I) Y,  F̄ = (T⁻¹ ⊗ I) F,  D̄ = (T⁻¹ ⊗ I) D,
so that
  (I_s ⊗ I − h Â ⊗ J) D̄ = Ȳ − (T⁻¹ e) ⊗ y_0 − h (Â ⊗ I) F̄.
The following table summarises the costs.
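The structure of the transformed system can be illustrated directly. In the sketch below (random data, names ours) we take the coefficient matrix already in lower triangular form with diagonal λ, i.e. we pretend the transformation by T has been done, so the block system is solved stage by stage with a single factorised matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
s, N = 3, 4
lam = 0.43586652
h = 0.01
# "Already transformed" coefficient matrix: lower triangular, every
# diagonal entry equal to lam (one-point spectrum).
A = np.tril(rng.standard_normal((s, s)), -1) + lam * np.eye(s)
J = rng.standard_normal((N, N))
rhs = rng.standard_normal(s * N)

# Untransformed view: one big (sN x sN) solve.
big = np.eye(s * N) - h * np.kron(A, J)
D_direct = np.linalg.solve(big, rhs)

# Stage-by-stage forward substitution: every stage reuses the same
# factorised matrix M = I - h*lam*J.
M = np.eye(N) - h * lam * J
R = rhs.reshape(s, N)
D = np.zeros((s, N))
for i in range(s):
    coupling = sum(A[i, j] * (J @ D[j]) for j in range(i))
    D[i] = np.linalg.solve(M, R[i] + h * coupling)

assert np.allclose(D.ravel(), D_direct)
print("stagewise solves with one factorised matrix match the big solve")
```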
                    without transformation   with transformation
  LU factorisation  s³N³                     N³
  Transformation    (none)                   s²N
  Backsolves        s²N²                     sN²

In summary, we reduce the very high LU factorisation cost to a level comparable to BDF methods. Also we reduce the back-substitution cost to the same work per stage as for DIRK or BDF methods. By comparison, the additional transformation costs are insignificant for large problems.
Stage order s means that

$\sum_{j=1}^{s} a_{ij}\,\phi(c_j) = \int_0^{c_i} \phi(t)\,dt,$

for any polynomial $\phi$ of degree $s-1$. This implies that

$Ac^{k-1} = \tfrac{1}{k}c^k, \quad k = 1, 2, \ldots, s,$

where the vector powers are interpreted component by component.

This is equivalent to

$A^k c^0 = \tfrac{1}{k!}c^k, \quad k = 1, 2, \ldots, s. \qquad (*)$
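These conditions are easy to check numerically. As an illustration (the choice of method is ours, not from the slides), the 2-stage Gauss method has stage order 2, so both forms of the condition hold for k = 1, 2:

```python
import math
import numpy as np

# 2-stage Gauss method: stage order 2 (standard coefficients)
r = math.sqrt(3) / 6
c = np.array([0.5 - r, 0.5 + r])
A = np.array([[0.25, 0.25 - r],
              [0.25 + r, 0.25]])
for k in (1, 2):
    # A c^{k-1} = c^k / k, powers taken componentwise
    assert np.allclose(A @ c**(k - 1), c**k / k)
    # equivalent form (*): A^k c^0 = c^k / k!
    assert np.allclose(np.linalg.matrix_power(A, k) @ np.ones(2),
                       c**k / math.factorial(k))
```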
From the Cayley-Hamilton theorem

$(A - \lambda I)^s c^0 = 0$

and hence

$\sum_{i=0}^{s} \binom{s}{i} (-\lambda)^{s-i} A^i c^0 = 0.$

Substitute from $(*)$ and it is found that

$\sum_{i=0}^{s} \frac{1}{i!} \binom{s}{i} (-\lambda)^{s-i} c^i = 0.$
Hence each component of c satisfies

$\sum_{i=0}^{s} \frac{1}{i!} \binom{s}{i} (-\lambda)^{s-i} x^i = 0.$

That is,

$L_s\!\left(\frac{x}{\lambda}\right) = 0,$

where $L_s$ denotes the Laguerre polynomial of degree s.

Let $\xi_1, \xi_2, \ldots, \xi_s$ denote the zeros of $L_s$, so that

$c_i = \lambda \xi_i, \quad i = 1, 2, \ldots, s.$

The question now is, how should $\lambda$ be chosen?
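A small NumPy check of this construction (s = 3 and λ = 1 are arbitrary choices for illustration):

```python
import numpy as np
from numpy.polynomial import laguerre

s, lam = 3, 1.0
Ls = [0] * s + [1]                        # coefficient vector of L_s
xi = laguerre.lagroots(Ls)                # zeros xi_1, ..., xi_s of L_s
c = lam * xi                              # abscissae c_i = lam * xi_i
# each component of c is a root of L_s(x / lam)
assert np.allclose(laguerre.lagval(c / lam, Ls), 0, atol=1e-10)
```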
Unfortunately, to obtain A-stability, at least for orders p > 2, $\lambda$ has to be chosen so that some of the $c_i$ are outside the interval $[0, 1]$.

This effect becomes more severe for increasingly high orders and can be seen as a major disadvantage of these methods.

We will look at two approaches for overcoming this disadvantage.

However, we first look at the transformation matrix T for efficient implementation.
Define the matrix T as follows:

$T = \begin{pmatrix} L_0(\xi_1) & L_1(\xi_1) & L_2(\xi_1) & \cdots & L_{s-1}(\xi_1) \\ L_0(\xi_2) & L_1(\xi_2) & L_2(\xi_2) & \cdots & L_{s-1}(\xi_2) \\ L_0(\xi_3) & L_1(\xi_3) & L_2(\xi_3) & \cdots & L_{s-1}(\xi_3) \\ \vdots & \vdots & \vdots & & \vdots \\ L_0(\xi_s) & L_1(\xi_s) & L_2(\xi_s) & \cdots & L_{s-1}(\xi_s) \end{pmatrix}$

It can be shown that for a SIRK method

$T^{-1}AT = \lambda(I - J).$
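This can be confirmed numerically. In the sketch below (our construction, with s = 3 and λ = 1 chosen for illustration), A is recovered from the stage-order conditions $Ac^{k-1} = c^k/k$, and J is taken to be the nilpotent matrix with ones on its subdiagonal and zeros elsewhere:

```python
import numpy as np
from numpy.polynomial import laguerre

s, lam = 3, 1.0
Ls = [0] * s + [1]
xi = laguerre.lagroots(Ls)
c = lam * xi                                  # SIRK abscissae
# build A from the stage-order conditions A c^{k-1} = c^k / k
V = np.vander(c, s, increasing=True)          # columns c^0, ..., c^{s-1}
W = np.column_stack([c**k / k for k in range(1, s + 1)])
A = W @ np.linalg.inv(V)
# T_{ij} = L_{j-1}(xi_i)
T = np.column_stack([laguerre.lagval(xi, [0] * j + [1]) for j in range(s)])
Jshift = np.diag(np.ones(s - 1), -1)          # ones on the subdiagonal
assert np.allclose(np.linalg.inv(T) @ A @ T, lam * (np.eye(s) - Jshift))
```

Since $\lambda(I - J)$ is lower triangular with constant diagonal $\lambda$, the transformed Newton system decouples into s sequential solves with the one matrix $I - h\lambda J$, which is exactly the cost reduction tabulated earlier.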
There are two ways in which SIRK methods can be generalized.

In the first of these we add extra diagonally implicit stages so that the coefficient matrix looks like this:

$\begin{pmatrix} A & 0 \\ W & \lambda I \end{pmatrix},$

where the spectrum of the $p \times p$ submatrix A is $\sigma(A) = \{\lambda\}$. For $s - p = 1, 2, 3, \ldots$ we get improvements to the behaviour of the methods.
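A quick check, with stand-in matrices (p = 3, two extra stages, and λ = 0.5 are arbitrary illustrative choices), that the extended coefficient matrix keeps the single eigenvalue λ:

```python
import numpy as np

p, extra, lam = 3, 2, 0.5
rng = np.random.default_rng(1)
S = rng.standard_normal((p, p))
# stand-in for the SIRK submatrix: any A with sigma(A) = {lam}
A = S @ (lam * np.eye(p)
         + np.tril(rng.standard_normal((p, p)), -1)) @ np.linalg.inv(S)
W = rng.standard_normal((extra, p))           # arbitrary coupling block
M = np.block([[A, np.zeros((p, extra))],
              [W, lam * np.eye(extra)]])
# block lower triangular, so sigma(M) = sigma(A) U {lam} = {lam}
# (loose tolerance: defective eigenvalues are computed inexactly)
assert np.allclose(np.linalg.eigvals(M), lam, atol=1e-3)
```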
A second generalization is to replace order by effective order.

This allows us to locate the abscissae where we wish.

In DESIRE methods:

Diagonally Extended Singly Implicit Runge-Kutta methods using Effective order

these two generalizations are combined.

This seems to be as far as we can go in constructing efficient and accurate singly-implicit Runge-Kutta methods.