The Easy Lectures on Nonlinear Dynamical Systems - …users.ntua.gr/antkar/EDUCATION/nlds.pdf
The Easy Lectures on Nonlinear Dynamical Systems
with emphasis on physicochemical phenomena

Antonis Karantonis
Chemist, PhD
Saitama, 2000
Preface
This collection of notes on Nonlinear Dynamical Systems emerged through a half-semester seminar in the Department of Chemistry, Saitama University.

The main scope of the lectures (as well as of the present notes) was to present the basic concepts of the theory of nonlinear dynamical systems with emphasis on related physical phenomena. Since the theory of dynamical systems is mainly a mathematical theory, most of the material concentrates on mathematical definitions, theorems and techniques, but in a very applied manner. At the end of each chapter I have tried to introduce some examples from the physical world, in order to present some simplified applications. Additionally, a whole chapter is dedicated to numerical techniques suitable for studying nonlinear dynamical systems, in order to give the students a chance to try their own examples or problems.

Obviously, this collection of notes does not give the complete picture of the theory of dynamical systems and its applications. The interested reader should consult the bibliography for more complete and accurate information.

Finally, readers should be careful to notice that this book is an uncorrected manuscript (even though, at this point, I should thank Masao Gohdo for finding many mistakes in the text). Nevertheless, I hope the basic material still conserves its value.
Typeset with LaTeX.
Contents

1 Introduction and Definitions
  1.1 Introduction
  1.2 Background
  1.3 Some additional definitions
  1.4 Types of dynamical response
  1.5 Stability
  1.6 Examples

2 Linear Systems
  2.1 Concepts from linear algebra
    2.1.1 Generalities
    2.1.2 Eigenvalues and eigenvectors
  2.2 Solution of linear ODEs
  2.3 Plane autonomous systems
  2.4 Examples

3 Linear Stability Analysis
  3.1 Linearization
  3.2 Linearized stability
  3.3 Examples

4 Elementary Bifurcations
  4.1 Definitions
  4.2 Center manifold theory
  4.3 Static bifurcations
    4.3.1 The saddle-node bifurcation
    4.3.2 The transcritical bifurcation
    4.3.3 The pitchfork bifurcation
  4.4 Normal forms
  4.5 The Hopf bifurcation
    4.5.1 The normal form for a pair of pure imaginary eigenvalues
    4.5.2 The normal form in various coordinate systems
    4.5.3 Simplified analysis of the Hopf bifurcation
  4.6 Examples

5 Numerical Methods and Tools
  5.1 Aim of this chapter
  5.2 Steady states: The Newton-Raphson method
    5.2.1 Gauss elimination
    5.2.2 LU decomposition
  5.3 Use of libraries
  5.4 Integration of ODEs: Runge-Kutta method
  5.5 The AUTO 97 package
  5.6 Examples

6 Chaos through examples
  6.1 Introduction
  6.2 The logistic map
  6.3 More fun with maps
  6.4 The Rossler equations
Chapter 1
Introduction and Definitions
1.1 Introduction
In the physical world, with the term dynamical system we refer to any physical phenomenon that evolves in time. Since a physical system can be described by different physical variables, we can say that a dynamical system is a system in which some (or all) variables evolve in time. Since most of the phenomena we observe in nature progress in time, it seems clear that the study of dynamical systems is of great interest. In the present series of lectures we will mainly concentrate on dissipative dynamical systems, i.e. systems that exchange energy and/or mass with their environment (most physicochemical systems belong to this category). In other words, we will study systems which are kept away from thermodynamic equilibrium.
Dynamical systems have been studied systematically by mathematicians, physicists and, less commonly, by chemists for more than 150 years. So, why do we usually hear that "the theory of dynamical systems is modern"? Probably this is because of the "discovery" of the strange or chaotic attractor. We will come back to this issue later, but for the moment we can say that chaotic response is a dynamical behavior which is very sensitive to the initial conditions (the system evolves very differently if we just change slightly the initial state of the system), is unpredictable in the long term, but is given by completely deterministic laws.
1.2 Background
Throughout this text we will deal mainly with physical systems which can be described by systems of equations of the form,
dx1/dt = f1(x1, x2, ..., xn; μ1, μ2, ..., μp),
dx2/dt = f2(x1, x2, ..., xn; μ1, μ2, ..., μp),
...
dxn/dt = fn(x1, x2, ..., xn; μ1, μ2, ..., μp).    (1.1)
Obviously, Eq. (1.1) can be written in a more compact form,
ẋ = f(x; μ),    (1.2)

with x ∈ ℝⁿ, t ∈ ℝ and μ ∈ ℝᵖ. In Eq. (1.2), the over-dot means d/dt and bold symbols are used for vectors. We call x the dynamical variables, μ the parameters of the system and t the time. In the physical world, x are the variables of the system that we observe experimentally, either directly or via a response function, and μ are the parameters of the system that we keep constant. Obviously, t is also a variable, but since we cannot affect its course it is considered as an independent variable. If we take as an example the evolution of a homogeneous chemical reaction, then x is the concentration of the reactants or the products, and μ might be the temperature, pressure or volume. Equation (1.2) is often called an ordinary differential equation, a vector field or simply a dynamical system. In the case when the
right hand side of Eq. (1.2) is a nonlinear function, we call it a nonlinear dynamical system. The dimension of the dynamical system is defined as the number of variables which are required to describe the system. In Eq. (1.2) the dimension is n. Equation (1.2) together with some initial conditions is often called an initial value problem. Finally, since the right hand side of Eq. (1.2) doesn't depend explicitly on time, the dynamical system is called autonomous.
Dynamical systems of the form of Eq. (1.2) can describe evolutionary processes with specific properties which have special interest for physical chemistry. These properties are [Arnold, 1973]:
• Determinacy: The entire future course and the entire past are uniquely determined by its state at the present instant of time. (The system is said to be deterministic.)

• Finite-dimensionality: The number of variables required to describe the system is finite.

• Differentiability: The change of state with time is described by differentiable functions.
Some examples of physical systems which are not described by equations of the form considered here are the motion of a quantum particle (it is not deterministic), the motion of fluids (it is not finite-dimensional), and the motion of shock waves (it is not differentiable). We will try to deal with infinite-dimensional dynamical systems in the last chapter of these notes.
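Although the text gives no code, a system of the form of Eq. (1.2) translates naturally into a function of the state vector and the parameters. The sketch below is purely illustrative (the vector field, the names f and mu, and the parameter values are all invented for the example; they do not appear in the notes):

```python
import numpy as np

def f(x, mu):
    """Right-hand side of an illustrative two-variable autonomous system,
    dx1/dt = a*x1 - x1*x2
    dx2/dt = -b*x2 + x1*x2
    (a Lotka-Volterra-type vector field, chosen only as an example)."""
    x1, x2 = x
    a, b = mu
    return np.array([a * x1 - x1 * x2, -b * x2 + x1 * x2])

# The system is autonomous: f depends on the state x and the parameters mu,
# but not explicitly on the time t.
state = np.array([1.0, 0.5])
params = (1.0, 0.8)
velocity = f(state, params)  # the vector field evaluated at one point
```

Evaluating f at a point of phase space gives the instantaneous velocity of the trajectory through that point, which is exactly the geometric content of the term "vector field".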
1.3 Some additional definitions
Some additional definitions are also useful. Hence, the space defined by the dynamical variables of the system is called phase space. Obviously, the dimension of this space is n. The solution of Eq. (1.2) under a specified
Figure 1.1: (a) Trajectories to a stable fixed point and (b) projection of the integral curve.
initial condition x0 ≡ x(t = t0) is written x(t, t0, x0) and is called the trajectory or phase curve through the point x0 at t = t0. The graph of x(t, t0, x0) versus t is called an integral curve. Finally, the set of points in phase space that lie on a trajectory passing through x0 is called an orbit through x0.
1.4 Types of dynamical response
In the following chapters we will give several examples of different types of dynamical response. Here we will present just some definitions and geometric representations.
• Fixed points. A fixed point of the dynamical system Eq. (1.2) is a point x ∈ ℝⁿ such that,

f(x) = 0.    (1.3)
Figure 1.2: (a) Trajectories on a torus, and (b) the Lorenz chaotic attractor.
From Eq. (1.3) we see that the fixed point is a solution which does not change in time. Fixed points are also called equilibrium points or steady states. There are different types of fixed points, but we will come back to this later. In Fig. 1.1 we present an example of a fixed point in ℝ² and the integral curve.
• Limit cycles. Concerning limit cycle behavior we will only give a very rough definition. Consider Eq. (1.2) where x ∈ ℝ². A periodic solution of this dynamical system is called a limit cycle if another solution of the system approaches this periodic solution for t → ±∞. A graphic interpretation is given in Fig. 1.5.
(A limit cycle can be defined also as an isolated periodic orbit.)
• Tori. Once again, we will deal with trajectories on a torus later. At this point, we can say that the response on a torus can be non-periodic, and more specifically, quasiperiodic. Quasiperiodic response is defined as a solution consisting of at least two incommensurate frequencies, i.e.,

θ̇/φ̇ = α ≠ p/q,    (1.4)
where p ∈ ℤ* and q ∈ ℤ*.
An example of quasiperiodicity on a torus is shown in Fig. 1.2(a).
• Chaos. Roughly speaking, the chaotic response is a non-periodic response, sensitive to the initial conditions. A more accurate definition will be given later. Here we just present the famous Lorenz chaotic attractor (Fig. 1.2(b)).
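The sensitivity to initial conditions behind Fig. 1.2(b) is easy to witness numerically. The sketch below assumes the classical Lorenz parameter values σ = 10, ρ = 28, β = 8/3 (the text does not list them) and integrates two almost identical initial states with a fourth-order Runge-Kutta step; their separation grows to the size of the attractor itself:

```python
import numpy as np

def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz vector field (classical parameter values assumed)."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def rk4_step(f, x, h):
    # One classical fourth-order Runge-Kutta step.
    k1 = f(x); k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2); k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

h, n = 0.01, 5000                       # integrate to t = 50
x = np.array([1.0, 1.0, 1.0])
y = x + np.array([1e-8, 0.0, 0.0])      # a tiny perturbation of the initial state
for _ in range(n):
    x, y = rk4_step(lorenz, x, h), rk4_step(lorenz, y, h)

separation = np.linalg.norm(x - y)      # grows by many orders of magnitude
```

The two trajectories remain bounded (the attractor is a bounded set), yet end up macroscopically far apart: unpredictability in the long term from completely deterministic laws.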
1.5 Stability
Consider the autonomous dynamical system Eq. (1.2). Let x(t) be a solution of the system. Then, the solution x(t) is stable if solutions starting close to x(t) at a given time remain close to x(t) for all later times. It is asymptotically stable if nearby solutions converge to x(t) as t → ∞. In a formal way these definitions can be written [Wiggins, 1990]:
Definition 1.1 (Lyapunov stability) The solution x(t) is said to be stable (Lyapunov stable) if, given ε > 0, there exists δ = δ(ε) such that, for any other solution y(t) satisfying |x(t0) − y(t0)| < δ, we have |x(t) − y(t)| < ε for t > t0.
A geometrical interpretation of Lyapunov stability is shown in Fig.1.3a.
Definition 1.2 (Asymptotic stability) The solution x(t) is said to be asymptotically stable if it is Lyapunov stable and if there exists a constant β > 0 such that, if |x(t0) − y(t0)| < β, then limt→∞ |x(t) − y(t)| = 0.
A geometrical interpretation of asymptotic stability is shown in Fig.1.3b.
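The distinction between the two definitions can be illustrated numerically (a sketch, not part of the text): the scalar system ẋ = −x is asymptotically stable at the origin, while the harmonic oscillator ẋ1 = x2, ẋ2 = −x1 is only Lyapunov stable; the radius of a nearby orbit is conserved, so solutions stay close but never converge to the origin.

```python
import numpy as np

def rk4(f, x, h, n):
    # Integrate x' = f(x) with n fourth-order Runge-Kutta steps of size h.
    for _ in range(n):
        k1 = f(x); k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2); k4 = f(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# Asymptotically stable: a solution starting near the origin converges to it.
decay = rk4(lambda x: -x, np.array([0.5]), 0.01, 2000)      # t = 20

# Lyapunov-stable center: the nearby orbit keeps a constant radius,
# so it stays close to the origin but never approaches it.
center = rk4(lambda x: np.array([x[1], -x[0]]),
             np.array([0.5, 0.0]), 0.01, 2000)               # t = 20
radius = np.linalg.norm(center)
```

After t = 20 the first solution is essentially zero, while the second still circles at radius 0.5: both satisfy Definition 1.1, but only the first satisfies Definition 1.2.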
Figure 1.3: (a) Lyapunov stability and (b) asymptotic stability.
1.6 Examples
Example 1.1 Consider the linear dynamical system,
ẍ1 + kx1 = 0    (1.5)

with x1(t = 0) = 1 and ẋ1(t = 0) = 0. Let k = 1.
1. Write Eq.(1.5) as a system of ODEs.
2. What is the dimension of the system?
3. Write Eqs. (1.6)-(1.7) in vector-matrix form.

4. Draw the orbit of the system (Hint: divide Eq. (1.6) by Eq. (1.7) and integrate).
5. Draw a trajectory of the system.
6. Draw the integral curve of the system (Hint: use complex coordinates).
(Note: Equation (1.5) describes small oscillations of a plane pendulum.)
Figure 1.4: (a) An orbit, (b) a trajectory through (1,0) and (c) an integral curve of Eqs. (1.6)-(1.7).
Solution:
1. Equation (1.5) can be written as a system of ODEs by letting,

ẋ1 = x2.

Under this change of variables, Eq. (1.5) is written,

ẋ1 = x2,    (1.6)
ẋ2 = −x1,    (1.7)

with x1(t = 0) = 1 and x2(t = 0) = 0.
2. Since the system is described by two dynamical variables, the dimension of the system is n = 2.
3. The system can be written in vector-matrix form by letting,

x = (x1, x2)ᵀ,    (1.8)

A = (  0  1
      −1  0 ),    (1.9)

that is,

ẋ = Ax.    (1.10)
4. The orbit of the system can be drawn by dividing Eq. (1.6) by Eq. (1.7) and integrating,

dx1/dx2 = −x2/x1.    (1.11)

This equation can be readily solved by separation of variables,

x1²(t) = −x2²(t) + c,    (1.12)

where c is the constant of integration. But, due to the initial condition, c = 1, thus,

x1²(t) + x2²(t) = 1.    (1.13)

We observe from Eq. (1.13) that the orbit is a circle of unit radius with its center at the origin (Fig. 1.4a). The direction of rotation can be found by noticing that the tangent on the orbit is determined by the fraction −x2/x1; thus the rotation is clockwise.
5. A trajectory is plotted in Fig.1.4b.
6. In order to plot the integral curve of the system, we must find the solution. Even though Eqs. (1.6)-(1.7) can be solved easily, here we will introduce a way which will prove useful in future examples.
Let us follow the hint and introduce complex variables, i.e.,

z = x1 + ix2,
z̄ = x1 − ix2,    (1.14)

where the over-bar represents the complex conjugate. Equation (1.14) is written in vector-matrix form,

z = Sx,    (1.15)

where S = ( 1  i
            1 −i ). The inverse relation can also be defined,

x = S⁻¹z,    (1.16)

where S⁻¹ = (1/2)(  1  1
                   −i  i ) and S⁻¹S = I. (In general, the inverse of an n × n matrix S is S⁻¹ = (1/det S) adj S, where adj S is the adjoint, provided that the determinant det S ≠ 0.) Turning now to Eq. (1.10), we can write,

ẋ = AS⁻¹Sx.    (1.17)

Left multiplication by S gives,

Sẋ = SAS⁻¹Sx.    (1.18)

Or, by using Eqs. (1.15) and (1.16),

ż = Jz,    (1.19)

where J = ( −i  0
             0  i ). Since in Eq. (1.19) the first line is the complex conjugate of the second, we can study only the first (or only the second) equation of this system, i.e.,

ż = −iz,    (1.20)

with solution z(t) = e^{−it}. Since the second equation is just the complex conjugate, its solution is z̄(t) = e^{it}. Using De Moivre's formula, e^{iα} = cos α + i sin α, and Eq. (1.16) we have,

x1(t) = cos t,    (1.21)
x2(t) = − sin t.    (1.22)

As expected, the solution is a periodic function with amplitude unity and period 2π. An integral curve is shown in Fig. 1.4c.
Example 1.2 Consider the system [Nemytskii and Stepanov, 1989],

ẋ = −y + (x/√(x² + y²))(1 − (x² + y²)),    (1.23)
ẏ = x + (y/√(x² + y²))(1 − (x² + y²)).    (1.24)
1. Find the solution of the system (Hint: use polar coordinates).
2. Draw the solution in the phase space and the trajectories starting from within and outside the periodic solution.
Figure 1.5: A limit cycle and trajectories approaching it.
Solution:
1. We start by using polar coordinates,

x = r cos θ,
y = r sin θ.    (1.25)

Under this transformation, Eqs. (1.23) and (1.24) are written,

ẋ = −y + (x/r)(1 − r²),    (1.26)
ẏ = x + (y/r)(1 − r²).    (1.27)

Multiplying Eq. (1.26) by x and Eq. (1.27) by y and adding, and using the identity xẋ + yẏ = rṙ, we obtain,

ṙ = 1 − r²,    (1.28)

where r ≥ 0. Similarly, multiplying Eq. (1.26) by y and Eq. (1.27) by x and subtracting, and using the identity xẏ − yẋ = r²θ̇, we obtain,

θ̇ = 1.    (1.29)
At this point we must observe that Eq. (1.28) has a fixed point,

1 − r² = 0 ⇒ r = ±1.    (1.30)

Since r represents the radius, the negative sign does not have any meaning; thus the only fixed point is r = 1. We must also note that fixed points of Eq. (1.28) represent periodic orbits of the full system, Eqs. (1.28) and (1.29). Thus, the system has a periodic orbit with radius unity.

Solving Eq. (1.28) we get,

r(t) = (Ae^{2t} − 1)/(Ae^{2t} + 1)   for 0 < r < 1,
r(t) = (Ae^{2t} + 1)/(Ae^{2t} − 1)   for r > 1,    (1.31)

where A = |(1 + r0)/(1 − r0)|. Obviously, in both cases r → 1 as t → +∞.

2. Since r → 1 as t → +∞ whether the trajectory starts within or outside the periodic orbit, the limit cycle is stable. A graphic representation of the solution as well as of the trajectories towards the limit cycle is shown in Fig. 1.5.
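The approach to the limit cycle can also be checked directly on the Cartesian system (1.23)-(1.24). The sketch below is an illustrative numerical experiment, not part of the original solution; it integrates one trajectory starting inside the cycle and one outside:

```python
import numpy as np

def f(v):
    """Vector field of Eqs. (1.23)-(1.24)."""
    x, y = v
    r = np.sqrt(x * x + y * y)
    return np.array([-y + (x / r) * (1 - r * r),
                      x + (y / r) * (1 - r * r)])

def rk4(v, h, n):
    # Fourth-order Runge-Kutta integration of v' = f(v).
    for _ in range(n):
        k1 = f(v); k2 = f(v + 0.5 * h * k1)
        k3 = f(v + 0.5 * h * k2); k4 = f(v + h * k3)
        v = v + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return v

inner = rk4(np.array([0.1, 0.0]), 0.01, 2000)   # start inside,  t = 20
outer = rk4(np.array([2.0, 0.0]), 0.01, 2000)   # start outside, t = 20

r_inner = np.linalg.norm(inner)   # both radii approach 1
r_outer = np.linalg.norm(outer)
```

Both radii converge to 1, reproducing the inner and outer trajectories of Fig. 1.5.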
Chapter 2
Linear Systems
2.1 Concepts from linear algebra
In the previous chapter we gave some general information about dynamical systems which can be written in the form of Eq. (1.2). In this chapter we will deal with a specific category of dynamical systems which can be written as,
ẋ = Ax.    (2.1)
In Eq. (2.1) the n × n matrix A is called the matrix of coefficients or parameters. The vector x ∈ ℝⁿ still represents the dynamical variables of the system. The dynamical system defined by Eq. (2.1) is called a linear dynamical system.
It is rather obvious that the behavior of this system depends on the properties of the matrix of coefficients, A. In the rest of the chapter we will assume that A is a time-independent matrix, hence the linear dynamical system is autonomous. Since the properties of A are important and A is just an n × n matrix, we must introduce some concepts from linear algebra which will be useful for studying the dynamical system later.
2.1.1 Generalities
We start by giving some preliminary definitions [Anton, 1987]. I will assume that the rules of matrix arithmetic are known. The elements of a matrix A = (ai,j), or simply A, will be designated as ai,j, where i indicates the row and j the column of the matrix, i.e.,

A = ( a1,1 a1,2 ... a1,m
      a2,1 a2,2 ... a2,m
      ...
      an,1 an,2 ... an,m ).
Here we give some definitions:
• A matrix of dimension n × n is called a square matrix.

• A square matrix where all elements are zero except the diagonal elements ai,i, i = 1, 2, . . . , n, is called a diagonal matrix.
• A diagonal matrix with ai,i = 1 is called the identity matrix, I, and has the property,
AI = IA = A.
• The complex conjugate of a matrix A = (ai,j), denoted by Ā, is defined by Ā = (āi,j), where āi,j is the complex conjugate of ai,j.
• The transposed matrix of A, denoted by AT, is defined by AT =(aj,i).
• The determinant of A is denoted by det A. Here I will assume that the concept of the determinant is known. We will recall only the following definition:
Definition 2.1 If A is a square matrix, then the minor of entry ai,j is denoted by Mi,j and is defined to be the determinant of the submatrix that remains after the ith row and jth column are deleted from A. The number (−1)^{i+j} Mi,j is denoted by Ci,j and is called the cofactor of entry ai,j.
• The adjoint of A, adj(A), is the transpose of the matrix of cofactors.
• If detA = 0 then A is said to be singular.
• A nonsingular matrix A possesses an inverse, A⁻¹, which satisfies,

AA⁻¹ = A⁻¹A = I.

If A⁻¹ exists then we say that A is invertible. The inverse of a matrix is given by,

A⁻¹ = (1/det A) adj(A).    (2.2)
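Equation (2.2) can be turned directly into code. The sketch below (an illustration under the definitions above; the function names are invented) builds the matrix of cofactors of a 3 × 3 example, transposes it to obtain the adjoint, and checks that AA⁻¹ = I:

```python
import numpy as np

def adjugate(A):
    """Transpose of the matrix of cofactors C[i,j] = (-1)**(i+j) * M[i,j]."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            # Minor M[i,j]: determinant after deleting row i and column j.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
detA = np.linalg.det(A)                  # det A = 3, so A is nonsingular
A_inv = adjugate(A) / detA               # Eq. (2.2)
identity_check = A @ A_inv               # should be the 3x3 identity
```

For large matrices this cofactor route is far too expensive; it is shown only because it mirrors Eq. (2.2) literally. Chapter 5 discusses practical alternatives (Gauss elimination, LU decomposition).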
2.1.2 Eigenvalues and eigenvectors
We will see soon that eigenvalues and eigenvectors play an important role in the solution of ODEs as well as in their stability analysis. We start by recalling the definition,
Definition 2.2 If A is an n × n matrix, then a nonzero vector u is called an eigenvector of A if,
Au = λu, (2.3)
for some scalar λ. The scalar λ is called an eigenvalue of A, and u is called an eigenvector corresponding to λ.
Eigenvalues and eigenvectors have a useful geometric interpretation, as can be seen in Fig. 2.1.
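Definition 2.2 is easy to verify numerically (a sketch, not from the text): numpy returns the eigenvalues together with unit eigenvectors as the columns of a matrix, and each pair satisfies Eq. (2.3):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])        # the matrix of Example 1.1; eigenvalues are +/- i

lam, U = np.linalg.eig(A)          # eigenvalues and eigenvectors (as columns of U)

# Check A u = lambda u, Eq. (2.3), for each eigenpair.
residuals = [np.linalg.norm(A @ U[:, k] - lam[k] * U[:, k]) for k in range(2)]
```

Note that a real matrix can have complex eigenvalues, as this example shows; the residuals are zero to machine precision either way.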
Figure 2.1: (a) Geometrical interpretation of an eigenvalue and eigenvector and (b) action of A on an eigenvector u depending on the eigenvalue, λ.
Note that the eigenvalues of a real matrix can be real or complex. The eigenvalues of a matrix A can be calculated by solving the characteristic equation,

det(A − λI) = 0.    (2.4)
Here we can observe that, for a 2 × 2 matrix, if we expand Eq. (2.4) it can be written as,
λ2 − λtrA + detA = 0, (2.5)
where the trace of A, trA, is the sum of the diagonal elements, trA = Σᵢ₌₁ⁿ ai,i.

Two n × n matrices A and B are said to be similar if there is a nonsingular matrix S such that,

B = SAS⁻¹.    (2.6)
Note that similar matrices have the same characteristic polynomial. Now we give a fundamental result concerning the canonical form of a matrix [Coddington and Levinson, 1955; Hirsch and Smale, 1974]:
Theorem 2.1 Every n × n matrix A is similar to a matrix J of the form (Jordan form),

J = ( J0 0  0  ... 0
      0  J1 0  ... 0
      ...
      0  0  0  ... Js ),    (2.7)

where J0 is a diagonal matrix with diagonal elements λ1, λ2, ..., λq, and each Ji is a Jordan block,

Ji = ( λq+i 1    0    ... 0    0
       0    λq+i 1    ... 0    0
       ...
       0    0    0    ... λq+i 1
       0    0    0    ... 0    λq+i ),    (2.8)

for i = 1, 2, ..., s. The scalars λj, j = 1, 2, ..., q + s, are the eigenvalues of A, which need not all be distinct.
In the special case where all eigenvalues λi are distinct, A is similar to the diagonal matrix,

J = ( λ1 0  ... 0
      0  λ2 ... 0
      ...
      0  0  ... λn ).
For example, the canonical form of a matrix having a pair of complex eigenvalues a ± ib with multiplicity 1 and a real eigenvalue μ with multiplicity two is,

J = (  a b 0 0
      −b a 0 0
       0 0 μ 1
       0 0 0 μ ).
We will see in the next section that in order to solve a linear system of ODEs it is necessary to make A diagonal. The diagonalization problem can be stated as follows:
Definition 2.3 (Diagonalization Problem) Given an n× n matrix A, doesthere exist an invertible matrix S such that S−1AS is diagonal?
The procedure for diagonalizing a matrix A with real eigenvalues is summarized as follows,
• Step 1: Find n linearly independent eigenvectors of A, say, u1, u2,. . ., un.
• Step 2: Form the matrix S with u1, u2, . . ., un as column vectors.
• Step 3: The matrix S−1AS is diagonal with λ1, λ2, . . ., λn as itsdiagonal elements.
Note that if the eigenvalues are complex, some modifications of the procedure are necessary. Hence, for a matrix A with complex eigenvalues,
• Step 1: Find the complex eigenvectors ui = vi + iwi.
• Step 2: Form the matrix S with v1, w1, . . ., as column vectors.
• Step 3: The matrix S⁻¹AS is block diagonal with blocks,

(  a b
  −b a )

along its diagonal.
2.2 Solution of linear ODEs
There are many different procedures for solving Eq. (2.1). Here we will present just two alternatives.
Consider Eq. (2.1) and an invertible matrix S which transforms A into a canonical form through a similarity transformation,

J = S⁻¹AS.    (2.9)
Equation (2.1) can be written,

S⁻¹ẋ = S⁻¹ASS⁻¹x.    (2.10)

Letting,

x = Sy,    (2.11)

the original equation is written in canonical form,

ẏ = Jy,    (2.12)

where J is given by Eq. (2.7). If the eigenvalues λi of A are real and distinct then J is just a diagonal matrix with the λi on its diagonal. Thus, Eq. (2.12) is uncoupled and can be solved directly. If the eigenvalues are complex and distinct then J has the blocks ( a b; −b a ) along its diagonal and Eq. (2.12) can be solved directly (possibly after introducing polar or complex coordinates, as shown in the examples). The procedure is more tedious in the case of eigenvalues with multiplicity, but direct inspection shows that the solution can still be accomplished, as shown in the examples.
A different procedure for solving Eq. (2.1) can be followed if we consider for a moment the one-dimensional linear ODE,

ẋ = ax.    (2.13)

We know that the solution of Eq. (2.13) is,

x = ke^{at},    (2.14)

where k is a constant scalar. Hence, in the case of the higher-dimensional system, Eq. (2.1), we can write,

x = e^{tA}k,    (2.15)
where k is now a constant vector and the exponential of the matrix is defined by the series,

e^{tA} = I + tA + t²A²/2! + ... = I + Σ_{m=1}^{∞} tᵐAᵐ/m!.    (2.16)
Even though Eq. (2.15) appears appealingly easy, in practice it is not useful to evaluate Eq. (2.15) through Eq. (2.16). Instead we follow another procedure; we know that if the matrix A is diagonalizable we can write,

A = SΛS⁻¹,    (2.17)

where Λ is diagonal with the eigenvalues on the main diagonal. Using Eq. (2.16), Eq. (2.17) gives,

e^{tA} = Se^{tΛ}S⁻¹.    (2.18)
Note that if the eigenvalues are complex, some modifications of the solution procedure might be necessary, as shown in the examples.
2.3 Plane autonomous systems
Let us consider the case of Eq. (2.1) for n = 2. In this case, the linear dynamical system can be written explicitly as,

ẋ1 = ax1 + bx2,
ẋ2 = cx1 + dx2,    (2.19)

where the coefficients are real scalars (some can also be zero). Obviously, in this system the origin is always a fixed point. As mentioned in the previous section, there is a similarity transformation, Eq. (2.6), which can turn A into a canonical form. In the case of a planar (2-dimensional) system, the possible cases are quite limited [Hirsch and Smale, 1974; Grimshaw, 1990].
Figure 2.2: (a) Stable node for λ1 = −1 and λ2 = −2 and (b) unstable node for λ1 = 1 and λ2 = 2.
1. Node - Two real distinct eigenvalues of the same sign: In this case λ1λ2 > 0 and, according to Theorem 2.1, the canonical form of A is,

( λ1 0
  0  λ2 ).    (2.20)

Hence, the solution of this system in the new coordinate system is,

y1(t) = y1(0)e^{λ1t},
y2(t) = y2(0)e^{λ2t}.    (2.21)

In this case the origin is called a node. We observe from Eq. (2.21) that if λ1 and λ2 are both negative then the trajectories approach the origin exponentially, i.e. we have a stable node. If λ1 and λ2 are both positive then the trajectories depart exponentially from the origin, i.e. we have an unstable node. An example of a stable and an unstable node is shown in Fig. 2.2.
2. Improper node - A real eigenvalue of multiplicity 2: In this case, according to Theorem 2.1, the canonical form of A is,

( λ 1
  0 λ ).    (2.22)

Hence, the solution of this system in the new coordinate system is,

y1(t) = (y1(0) + y2(0)t)e^{λt},
y2(t) = y2(0)e^{λt}.    (2.23)

In this case the origin is called an improper node. We observe from Eq. (2.23) that if λ is negative then the trajectories approach the origin, i.e. we have a stable improper node. If λ is positive then the trajectories depart from the origin, i.e. we have an unstable improper node. An example of a stable and an unstable improper node is shown in Fig. 2.3.
3. Saddle - Two real distinct eigenvalues with opposite sign: In this case the canonical form will again be given by Eq. (2.20) and the solution by Eq. (2.21). Nevertheless, due to the different signs of the eigenvalues, the fixed point is characterized by two different kinds of trajectories, one approaching and one departing from the origin. Note that the saddle is an unstable fixed point. An example of a saddle point is shown in Fig. 2.4.
4. Focus - A pair of complex eigenvalues: In this case we have λ1,2 = a ± ib and, according to Theorem 2.1, the canonical form of A is,

(  a b
  −b a ).    (2.24)

The system can be easily solved by introducing polar or complex coordinates, and the solution is,

r(t) = r(0)e^{at},
ϑ(t) = ϑ(0) − bt.    (2.25)
Figure 2.3: (a) Stable improper node for λ = −1 and (b) unstable improper node for λ = 1.
In this case the origin is called a focus. We observe from Eq. (2.25) that the stability of the origin is determined only by a, whereas b determines only the direction of rotation. If a is negative then trajectories spiral towards the origin, i.e. we have a stable focus. If a is positive then the trajectories spiral away from the origin, i.e. we have an unstable focus. An example of a stable and an unstable focus is shown in Fig. 2.5.
5. Center - A pair of imaginary eigenvalues: In this case we have λ1,2 = ±ib and, according to Theorem 2.1, the canonical form of A is given again by Eq. (2.24) but with a = 0. The solution of the system is,

r(t) = r(0),
ϑ(t) = ϑ(0) − bt.    (2.26)

In this case the origin is called a center. We observe from Eq. (2.26)
Figure 2.4: Saddle point for λ1 = 0.1, λ2 = −0.2.
that the center is not asymptotically stable. Trajectories starting from r0 form closed orbits around the fixed point, as shown in Fig. 1.4.
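The five cases above can be collected into a small classifier (a sketch; it decides from the numerically computed eigenvalues of A, exactly as the list does, and the label strings are the ones used in this section):

```python
import numpy as np

def classify(A, tol=1e-9):
    """Classify the origin of the planar linear system x' = A x."""
    l1, l2 = np.linalg.eigvals(A)
    if abs(l1.imag) > tol:                       # complex pair a +/- ib
        return "center" if abs(l1.real) < tol else "focus"
    l1, l2 = l1.real, l2.real
    if l1 * l2 < 0:
        return "saddle"                          # real, opposite signs
    if abs(l1 - l2) < tol:
        return "improper node"                   # repeated real eigenvalue
    return "node"                                # real, distinct, same sign

examples = {
    "node":   np.array([[-1.0, 0.0], [0.0, -2.0]]),
    "saddle": np.array([[0.1, 0.0], [0.0, -0.2]]),
    "focus":  np.array([[-0.15, 1.0], [-1.0, -0.15]]),
    "center": np.array([[0.0, 1.0], [-1.0, 0.0]]),
}
labels = {name: classify(A) for name, A in examples.items()}
```

One caveat: a repeated eigenvalue with a diagonalizable A (a star node) is lumped here with the improper node, since the eigenvalues alone cannot distinguish the two Jordan structures.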
2.4 Examples
Example 2.1 Prove that Eq. (2.15) is a solution of Eq. (2.1).
Solution:
If Eq. (2.15) is a solution of Eq. (2.1) then we must have,

d/dt(e^{tA}k) = Ae^{tA}k.    (2.27)

Let us consider the left-hand side of this equation. Since k is a constant vector we can write,

d/dt(e^{tA}k) = d/dt(e^{tA})k,    (2.28)
Figure 2.5: (a) Stable focus for a = −0.15 and b = 1 and (b) unstable focus for a = 0.15 and b = 1.
or, by using Eq. (2.16),

d/dt(e^{tA})k = ( lim_{h→0} (e^{(t+h)A} − e^{tA})/h ) k
             = ( lim_{h→0} (e^{tA}e^{hA} − e^{tA})/h ) k
             = e^{tA} ( lim_{h→0} (e^{hA} − I)/h ) k
             = e^{tA} ( lim_{h→0} (I + hA + h.o.t. − I)/h ) k
             = e^{tA}Ak.    (2.29)

Since A commutes with each term of the series for e^{tA}, we have e^{tA}A = Ae^{tA}, and the proof is complete.
Example 2.2 Find the solution of the canonical form of an n = 2 linear dynamical system with an eigenvalue of multiplicity 2.
Solution:
The canonical form of a two-dimensional system with an eigenvalue of multiplicity 2 is,

( ẏ1 )   ( λ 1 ) ( y1 )
( ẏ2 ) = ( 0 λ ) ( y2 )    (2.30)

The second equation in Eq. (2.30) can be solved directly. The solution is,

y2(t) = c2e^{λt}.
Putting y2(t) into the first equation of Eq. (2.30) we get,

ẏ1 = λy1 + c2e^{λt}.    (2.31)

Equation (2.31) can be solved with the method of variation of a constant. Hence, the solution of the corresponding homogeneous equation is,

y1(t) = ce^{λt}.

Now, we seek a general solution of Eq. (2.31) of the form,

y1(t) = c(t)e^{λt}.

Putting this expression in Eq. (2.31) and solving for c we get c(t) = c1 + c2t. Thus, the solution is,

y1(t) = (c1 + c2t)e^{λt},

which agrees with Eq. (2.23).
Example 2.3 Find the solution of the canonical form of an n = 2 linear dynamical system with a pair of complex eigenvalues.
Solution:
The canonical form of a two-dimensional system with a pair of complex eigenvalues, λ = a + ib and λ̄ = a − ib, is,

( ẏ1 )   (  a b ) ( y1 )
( ẏ2 ) = ( −b a ) ( y2 )    (2.32)
This system can be solved by either introducing polar coordinates or complex variables. Let's deal with both cases.

Case 1: Using polar coordinates
By introducing polar coordinates y1(t) = r(t) cos ϑ(t) and y2(t) = r(t) sin ϑ(t) we immediately get,

ṙ = ar,
ϑ̇ = −b,    (2.33)

which has as a solution,

r(t) = c1e^{at},
ϑ(t) = c2 − bt.    (2.34)

In terms of the original variables,

y1(t) = c1e^{at} cos(c2 − bt),
y2(t) = c1e^{at} sin(c2 − bt),

or, by using the identities cos(A − B) = cos A cos B + sin A sin B and sin(A − B) = sin A cos B − cos A sin B (and absorbing the constants),

y1(t) = e^{at}(c1 cos bt + c2 sin bt),
y2(t) = e^{at}(c2 cos bt − c1 sin bt).    (2.35)
Case 2: Using complex variablesIntroducing complex variables, z = y1 + iy2 and z = y1 − iy2, we get,
z = (a + ib)z
˙z = (a − ib)z
Since the second equation is just the complex conjugate of the first we canattack the problem by dealing only with one of them.
¹Here we use the identities cos(A − B) = cos A cos B + sin A sin B and sin(A − B) = sin A cos B − cos A sin B.
The solution of the first equation is,

    z(t) = c e^{(a−ib)t}

or, by using De Moivre's formula,

    z(t) = c e^{at}(cos bt − i sin bt).                                   (2.36)

Note that in this case the constant c is, in general, a complex number, i.e., c = c1 + ic2. Obviously, the solution of the second equation is just the complex conjugate of Eq. (2.36), that is,

    z̄(t) = c̄ e^{at}(cos bt + i sin bt).                                   (2.37)

Turning to the original variables, y1 = (1/2)(z + z̄) and y2 = (1/2)(−iz + iz̄), we immediately get Eq. (2.35).
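A quick numerical check of Eq. (2.35) can be done with finite differences. In the sketch below the constants a, b, c1, c2 are arbitrary sample values, not from the text.

```python
import math

# Verify that y1 = e^{at}(c1 cos bt + c2 sin bt),
#             y2 = e^{at}(c2 cos bt - c1 sin bt)
# satisfies the canonical system (2.32): y1' = a*y1 + b*y2, y2' = -b*y1 + a*y2.

a, b, c1, c2 = 0.3, 2.0, 1.0, -0.5   # arbitrary sample values

def y1(t): return math.exp(a*t)*(c1*math.cos(b*t) + c2*math.sin(b*t))
def y2(t): return math.exp(a*t)*(c2*math.cos(b*t) - c1*math.sin(b*t))

def deriv(g, t, h=1e-6):
    """Central finite-difference derivative of g at t."""
    return (g(t + h) - g(t - h)) / (2*h)

residual = max(
    max(abs(deriv(y1, t) - (a*y1(t) + b*y2(t))),
        abs(deriv(y2, t) - (-b*y1(t) + a*y2(t))))
    for t in (0.0, 0.7, 1.4))
print(residual)                      # ≈ 0, so (2.35) solves (2.32)
```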
Example 2.4 Solve the following system of ODEs,

    ẋ1 = −3x1 + 2x2
    ẋ2 = x1 − 2x2                                                         (2.38)
Solution:
Equation (2.38) is written in matrix-vector form as,

    ẋ = ( −3   2 ) x = Ax.                                               (2.39)
        (  1  −2 )
We start by finding the eigenvalues of A. According to Eq. (2.5) we have,

    λ² + 5λ + 4 = 0,

that is, λ1 = −1 and λ2 = −4. We observe that the system has two real, distinct eigenvalues; thus the origin is a stable node, since both eigenvalues are negative. Now we search for two linearly independent eigenvectors. According to Eq. (2.3) we have, for λi,

    ( −3   2 ) ( u1 )      ( u1 )
    (  1  −2 ) ( u2 ) = λi ( u2 )

or,

    −3u1 + 2u2 = λi u1
      u1 − 2u2 = λi u2
Solving this system we get the following set of eigenvectors,

    u1 = ( 1 ),   u2 = ( −2 ).
         ( 1 )         (  1 )

So, the matrix S will be,

    S = ( 1  −2 ).
        ( 1   1 )
Since the eigenvalues are distinct and real, the canonical form is given by Eq. (2.20) and the solution by Eq. (2.21) for λ1 = −1 and λ2 = −4, i.e.,

    y1(t) = c1 e^{−t}
    y2(t) = c2 e^{−4t}                                                    (2.40)
In terms of the original variables x, the solution is found by using Eq. (2.11), i.e.,

    ( x1 )   ( 1  −2 ) ( y1 )
    ( x2 ) = ( 1   1 ) ( y2 )

that is,

    x1 = c1 e^{−t} − 2c2 e^{−4t},
    x2 = c1 e^{−t} + c2 e^{−4t}.                                          (2.41)
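The solution (2.41) can again be checked numerically; in the sketch below the constants c1, c2 and the sample times are our arbitrary choices.

```python
import math

# Verify that x1 = c1*e^{-t} - 2*c2*e^{-4t}, x2 = c1*e^{-t} + c2*e^{-4t}
# satisfies Eq. (2.38): x1' = -3*x1 + 2*x2 and x2' = x1 - 2*x2.

c1, c2 = 1.3, -0.7                   # arbitrary sample values

def x1(t): return c1*math.exp(-t) - 2*c2*math.exp(-4*t)
def x2(t): return c1*math.exp(-t) + c2*math.exp(-4*t)

def deriv(g, t, h=1e-6):
    """Central finite-difference derivative of g at t."""
    return (g(t + h) - g(t - h)) / (2*h)

residual = max(
    max(abs(deriv(x1, t) - (-3*x1(t) + 2*x2(t))),
        abs(deriv(x2, t) - (x1(t) - 2*x2(t))))
    for t in (0.0, 0.5, 1.0))
print(residual)                      # ≈ 0, confirming Eq. (2.41)
```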
Example 2.5 Solve the following system of ODEs,

    ẋ1 = 5x1 + 10x2
    ẋ2 = −x1 − x2                                                         (2.42)

Solution:
Equation (2.42) is written in matrix-vector form as,

    ẋ = (  5  10 ) x = Ax.
        ( −1  −1 )
The matrix A has a pair of complex eigenvalues, λ = 2 + i and λ̄ = 2 − i; that is, the origin is a focus. The eigenvector corresponding to λ is,

    u = ( 3 + i ) = (  3 ) + i ( 1 ) = v + iw.
        (  −1  )   ( −1 )     ( 0 )

In this case of complex eigenvalues, the matrix S has as columns the vectors v and w, i.e.,

    S = (  3  1 ).
        ( −1  0 )
Since we have a pair of complex eigenvalues, the canonical form is given by Eq. (2.24) and the solution by Eq. (2.35). Using the transformation x = Sy we return to the original variables and obtain the solution,

    x1 = e^{2t}{c1(3 cos t − sin t) + c2(3 sin t + cos t)}
    x2 = e^{2t}(−c1 cos t − c2 sin t)                                     (2.43)
Chapter 3
Linear Stability Analysis
3.1 Linearization
In the previous chapter we showed that the fixed points of a linear dynamical system can be stable or unstable depending on the signs of its eigenvalues. Also, in Chapter 1 we gave some definitions concerning the stability of fixed points of nonlinear dynamical systems, but we did not provide any method for determining whether or not a given solution is stable.
Before we start, let us recall Taylor's theorem, which states that a scalar function f(x) can be written as a series of the form,

    f(x) = f(a) + df/dx|_{x=a} (x − a) + d²f/dx²|_{x=a} (x − a)²/2! + ...                 (3.1)

In order to write the function f(x) as the series in Eq. (3.1), the function must have finite derivatives of all orders at x = a. Taylor's theorem for a function of two variables is written,

    f(x, y) = f(a, b) + ∂f/∂x|_{a,b} (x − a) + ∂f/∂y|_{a,b} (y − b)
            + ∂²f/∂x²|_{a,b} (x − a)²/2! + ∂²f/∂y²|_{a,b} (y − b)²/2!
            + ∂²f/∂x∂y|_{a,b} 2(x − a)(y − b)/2! + ...                                    (3.2)
Now let us turn back to our main question. That is, we deal with an n-dimensional autonomous nonlinear dynamical system of the form,

    ẋ = f(x),                                                             (3.3)

which has a fixed point, x̄(t), i.e.,

    f(x̄) = 0.                                                             (3.4)

Keep in mind that f(x) is an n-dimensional vector whose components are the nonlinear functions f1(x1, x2, ..., xn), f2(x1, x2, ..., xn), etc. We want to determine the stability of the fixed point x̄(t). In order to do this we must investigate the nature of the solutions near x̄(t). Let us consider a solution near the fixed point,

    x = x̄ + δx,                                                           (3.5)

and let us expand the vector function f(x) around the fixed point,

    f(x) = f(x̄) + Dxf(x̄)(x − x̄) + ...                                     (3.6)
We can easily recognize that in Eq. (3.6),

    Dxf(x̄) = ( ∂f1/∂x1|_{x=x̄}  ∂f1/∂x2|_{x=x̄}  ...  ∂f1/∂xn|_{x=x̄} )
              ( ∂f2/∂x1|_{x=x̄}  ∂f2/∂x2|_{x=x̄}  ...  ∂f2/∂xn|_{x=x̄} )
              (       ...             ...        ...        ...      )
              ( ∂fn/∂x1|_{x=x̄}  ∂fn/∂x2|_{x=x̄}  ...  ∂fn/∂xn|_{x=x̄} )    (3.7)
Let us substitute Eq. (3.5) and Eq. (3.6) into Eq. (3.3). We get,

    dx̄/dt + d(δx)/dt = f(x̄) + Dxf(x̄)(x − x̄) + ...                         (3.8)

But dx̄/dt = f(x̄), so we have,

    d(δx)/dt = Dxf(x̄)δx + ...                                             (3.9)

Equation (3.9) describes the evolution of the orbits near the fixed point x̄(t). Since, for the stability question, we are concerned with the behavior of solutions very near the fixed point, we drop the higher order terms (and also the symbol "δ") to obtain,

    ẋ = Dxf(x̄)x.                                                          (3.10)

But Eq. (3.10) is a linear dynamical system. Thus, the stability of the fixed point is determined by the eigenvalues of the matrix Dxf(x̄) given by Eq. (3.7). We call this matrix the Jacobian of the system. Sometimes we will write Eq. (3.10) using a different symbol for the Jacobian, i.e.,

    ẋ = J(x̄)x,                                                            (3.11)

but don't confuse the Jacobian with the Jordan matrix.
3.2 Linearized stability
An important question is: what can we say about the solutions of Eq. (3.3) based on our knowledge of Eq. (3.10)? In other words, is the stability of the nonlinear system always determined by the stability of the linearized system? We will answer this question after giving the following definition:
Definition 3.1 (Hyperbolic fixed points) A fixed point x is called hyper-bolic if none of the eigenvalues of the Jacobian Dxf(x) is zero or purelyimaginary.
Now we can state a theorem specifying under what conditions the linearized system determines the stability of the nonlinear system.
Theorem 3.1 If x is a hyperbolic fixed point then the asymptotic behaviornear this point (and hence its stability type) is determined by the lineariza-tion.
Thus, if the Jacobian has no zero or purely imaginary eigenvalues, the stability is determined by the linear system, Eq. (3.10). If even one eigenvalue is zero or purely imaginary, we cannot draw any conclusions by studying only the linearized system.
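For a planar system this test can be sketched in a few lines. The helper name `classify` below is ours, not from the text; it uses the fact that for a 2×2 Jacobian the eigenvalues are λ = (tr ± √(tr² − 4·det))/2.

```python
import math

# Minimal sketch of linearized stability for a 2x2 Jacobian. Theorem 3.1
# applies only when no eigenvalue has zero real part.

def classify(j11, j12, j21, j22, tol=1e-12):
    tr = j11 + j22
    det = j11*j22 - j12*j21
    disc = tr*tr - 4*det
    if disc >= 0:                    # real eigenvalues
        re1 = (tr + math.sqrt(disc)) / 2
        re2 = (tr - math.sqrt(disc)) / 2
    else:                            # complex pair: Re(lambda) = tr/2
        re1 = re2 = tr / 2
    if abs(re1) < tol or abs(re2) < tol:
        return "nonhyperbolic: linearization is inconclusive"
    if re1 < 0 and re2 < 0:
        return "stable (sink)"
    if re1 > 0 and re2 > 0:
        return "unstable (source)"
    return "unstable (saddle)"

# Jacobian of the Duffing system (3.12) at the origin, with a = 0.5:
print(classify(0.0, 1.0, 1.0, -0.5))   # saddle, as found in Example 3.1
```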
3.3 Examples
Example 3.1 Consider the system shown in Fig. (3.1). Assume that thebody has mass m and the spring exerts a force F1 = k(x3 − x) where x isthe displacement. Additionally, assume that the mass moves in a mediumwhich exerts a resistance F2 proportional to the velocity, v [Stoker, 1950].
1. Show that the system can be described by the unforced Duffing equations,

    ẋ = y
    ẏ = x − x³ − ay                                                       (3.12)
2. Find the fixed points of the system for a ≥ 0
3. Study the stability of the fixed points by linearization
4. Plot the orbits in the phase space, near the fixed points
Solution:
Figure 3.1: A mass attached to a spring.
The total force acting on the mass is,

    Ftot = F1 + F2 = mγ,

where γ is the acceleration. Since F1 is the spring force and F2 is proportional to the velocity, we have,

    mγ + cv + k(x³ − x) = 0,

or, by letting γ = ẍ, v = ẋ, a = c/m and κ = k/m,

    ẍ + aẋ + κ(x³ − x) = 0.

Let us assume κ = 1 and let y = ẋ. Under this transformation we get Eq. (3.12).
The fixed points of Eq. (3.12) are the solutions of the algebraic system,

    y = 0
    x − x³ − ay = 0                                                       (3.13)

So, the system has three steady states, that is,

    (x1, y1) = (0, 0)
    (x2, y2) = (1, 0)
    (x3, y3) = (−1, 0)                                                    (3.14)
Figure 3.2: Trajectories near the fixed points for the Duffing oscillator for a = 0.5.
The Jacobian of the system is,

    Dxf(x̄) = ( ∂f1/∂x|_x̄   ∂f1/∂y|_x̄ ) = (     0       1 )
              ( ∂f2/∂x|_x̄   ∂f2/∂y|_x̄ )   ( 1 − 3x²    −a )              (3.15)
Thus, the eigenvalues of the linear system are,

    λ1,2 = −a/2 ± (1/2)√(a² + 4(1 − 3x²)).                                (3.16)
For the first fixed point, (x1, y1) = (0, 0), Eq. (3.16) reads,

    λ1,2 = −a/2 ± (1/2)√(a² + 4).

Obviously, for a > 0, λ1 and λ2 are real numbers. Also, direct inspection shows that,

    λ1 = −a/2 + (1/2)√(a² + 4) > 0
    λ2 = −a/2 − (1/2)√(a² + 4) < 0
thus the origin is a saddle point (unstable).

For the second fixed point, (x2, y2) = (1, 0), Eq. (3.16) reads,

    λ1,2 = −a/2 ± (1/2)√(a² − 8).

For 0 < a < √8, λ1 and λ2 are complex numbers with negative real part (since a > 0). Thus, this fixed point is a stable focus. For a > √8, λ1 and λ2 are real numbers and direct inspection shows that,

    λ1 = −a/2 + (1/2)√(a² − 8) < 0
    λ2 = −a/2 − (1/2)√(a² − 8) < 0

thus this fixed point is a stable node. Exactly the same holds for (x3, y3) = (−1, 0).
Let us see what happens when a = 0. In this case, the eigenvalues corresponding to (x1, y1) = (0, 0) are λ1,2 = ±1, so the origin remains a saddle. The eigenvalues corresponding to the remaining fixed points are purely imaginary, λ1,2 = ±i√2. Thus, in the linear sense, these fixed points are centers, but we can no longer draw any conclusion about their stability.
The trajectories for the case 0 < a < √8 are shown in Fig. 3.2. We can observe that the origin (x1, y1) = (0, 0) is a saddle point, whereas the orbits spiral towards the stable foci at (x2, y2) = (1, 0) and (x3, y3) = (−1, 0).
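The convergence to a stable focus can be illustrated numerically. In the sketch below the initial point and step size are our choices; starting inside the right-hand well (with energy below the saddle's), the damped orbit must settle onto (1, 0).

```python
import math

# Integrate the unforced Duffing system (3.12), x' = y, y' = x - x^3 - a*y,
# with a = 0.5, using classical Runge-Kutta steps.

def rk4(f, s, h):
    """One 4th-order Runge-Kutta step for the autonomous system s' = f(s)."""
    k1 = f(s)
    k2 = f([si + h/2*ki for si, ki in zip(s, k1)])
    k3 = f([si + h/2*ki for si, ki in zip(s, k2)])
    k4 = f([si + h*ki for si, ki in zip(s, k3)])
    return [si + h/6*(p + 2*q + 2*r + w)
            for si, p, q, r, w in zip(s, k1, k2, k3, k4)]

a = 0.5
f = lambda s: [s[1], s[0] - s[0]**3 - a*s[1]]

s = [0.5, 0.0]                 # inside the right well, below the saddle energy
for _ in range(10000):         # t from 0 to 100 with h = 0.01
    s = rk4(f, s, 0.01)

print(s)                       # ≈ [1, 0]: the stable focus of Fig. 3.2
```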
Example 3.2 Consider two variables x and y which measure, respectively, the number of individuals in two populations that are assumed to work in opposition to one another. Assume also that [Davis, 1962]:

• The variable x measures the population of the prey and the variable y the population of the predators.

• The predators depend solely upon the prey as their source of food.

• There is an infinite source of food for the prey.
Under these assumptions,
1. Show that this system can be described by the Volterra-Lotka (predator-prey) equations,

    ẋ = Ax − Bxy
    ẏ = −Dy + Cxy,    A, B, C, D > 0                                     (3.17)
where x is the population of the prey and y the population of thepredator.
2. Find the fixed points of the system
3. Study the stability of the fixed points by linearization
4. Plot the orbits in the phase space, near the fixed points
Solution:
The fixed points of the system are found by solving the algebraic system,

    Ax − Bxy = 0
    −Dy + Cxy = 0

So, the system has two steady states, that is,

    (x1, y1) = (0, 0)
    (x2, y2) = (D/C, A/B)                                                 (3.18)
Figure 3.3: Trajectories of the Volterra-Lotka equations for A = B = C = D = 1.
The Jacobian of the system is,

    Dxf(x̄) = ( ∂f1/∂x|_x̄   ∂f1/∂y|_x̄ ) = ( A − By      −Bx    )
              ( ∂f2/∂x|_x̄   ∂f2/∂y|_x̄ )   (   Cy      −D + Cx )          (3.19)
For the first fixed point, (x1, y1) = (0, 0), the eigenvalues are,

    λ1 = A > 0
    λ2 = −D < 0

Since A, D > 0, the origin is a saddle point (unstable).

For the second fixed point, (x2, y2) = (D/C, A/B), the eigenvalues are,

    λ1 = +i√(AD)
    λ2 = −i√(AD)

Thus, the fixed point (x2, y2) = (D/C, A/B) is a center, since the eigenvalues are purely imaginary, and linearization gives no information about its stability.
Fortunately, we can do something more with Eq. (3.17). It can be written in the form,

    dy/dx = y(Cx − D) / (x(A − By)).                                      (3.20)

This equation can be solved exactly by separating the variables. The solution is,

    A ln|y| − By = Cx − D ln|x| + k,

where k is an arbitrary constant. By plotting the solution in the phase space we observe that there is indeed a family of periodic orbits around the fixed point (x2, y2) = (D/C, A/B) (Fig. 3.3).
It is very interesting to note that no matter what the initial numbers of prey and predators are, neither species will die out, nor will either grow indefinitely. On the other hand, except for the state (x2, y2) = (D/C, A/B) (which is improbable), the populations will not remain constant.
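The closed orbits of Fig. 3.3 can be confirmed by checking that the first integral above stays constant along a numerical trajectory. The initial point and step size below are our choices.

```python
import math

# Verify the first integral of the Volterra-Lotka system (3.17) for
# A = B = C = D = 1: H(x, y) = A*ln y - B*y + D*ln x - C*x is conserved.

A = B = C = D = 1.0

def f(s):
    x, y = s
    return [A*x - B*x*y, -D*y + C*x*y]

def rk4(f, s, h):
    """One 4th-order Runge-Kutta step for the autonomous system s' = f(s)."""
    k1 = f(s)
    k2 = f([si + h/2*ki for si, ki in zip(s, k1)])
    k3 = f([si + h/2*ki for si, ki in zip(s, k2)])
    k4 = f([si + h*ki for si, ki in zip(s, k3)])
    return [si + h/6*(p + 2*q + 2*r + w)
            for si, p, q, r, w in zip(s, k1, k2, k3, k4)]

def H(s):
    x, y = s
    return A*math.log(y) - B*y + D*math.log(x) - C*x

s = [2.0, 1.0]
h0, drift = H(s), 0.0
for _ in range(10000):              # several cycles: t from 0 to 50, h = 0.005
    s = rk4(f, s, 0.005)
    drift = max(drift, abs(H(s) - h0))

print(drift)                        # ≈ 0: H is conserved, so orbits close
```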
Example 3.3 Consider the system [Guckenheimer and Holmes, 1983],

    ẍ + εx²ẋ + x = 0                                                      (3.21)
1. Write Eq. (3.21) as a two dimensional system of ODEs
2. Find the steady states
3. Study the stability of the origin and show that it is not determined bylinearization
Solution:
Equation (3.21) can be written as a 2D system by letting x1 = x and x2 = ẋ. Thus, we have,

    ( ẋ1 )   (  0  1 ) ( x1 )     (   0    )
    ( ẋ2 ) = ( −1  0 ) ( x2 ) − ε ( x1² x2 )                              (3.22)
Figure 3.4: Trajectories of Eq. (3.21) for ε = 20.
Obviously, (x1, x2) = (0, 0) is a fixed point of the system, with eigenvalues λ1 = +i and λ2 = −i. Thus, in the linear sense, the origin is a center. Since, though, the fixed point is nonhyperbolic, we cannot say anything about its stability. Actually, numerical integration shows that the origin is a nonhyperbolic (weak) attracting focus for ε > 0 (repelling for ε < 0). Of course, for ε = 0 the origin is indeed a center. See Fig. 3.4.
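The weak attraction can be seen numerically; the initial point, step size and integration time below are our choices. The orbit creeps towards the origin even though the linearization predicts a center.

```python
import math

# Integrate Eq. (3.21) written as x1' = x2, x2' = -x1 - eps*x1^2*x2,
# for eps = 20, and watch the radius shrink slowly.

eps = 20.0
f = lambda s: [s[1], -s[0] - eps*s[0]**2*s[1]]

def rk4(f, s, h):
    """One 4th-order Runge-Kutta step for the autonomous system s' = f(s)."""
    k1 = f(s)
    k2 = f([si + h/2*ki for si, ki in zip(s, k1)])
    k3 = f([si + h/2*ki for si, ki in zip(s, k2)])
    k4 = f([si + h*ki for si, ki in zip(s, k3)])
    return [si + h/6*(p + 2*q + 2*r + w)
            for si, p, q, r, w in zip(s, k1, k2, k3, k4)]

s = [0.3, 0.0]
r0 = math.hypot(*s)
for _ in range(20000):             # t from 0 to 200, h = 0.01
    s = rk4(f, s, 0.01)

print(r0, "->", math.hypot(*s))    # the radius shrinks, but only slowly
```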
Chapter 4
Elementary Bifurcations
4.1 Definitions
Up to this point we have studied dynamical systems of the form,

    ẋ = f(x, µ),    x ∈ ℝⁿ, µ ∈ ℝᵏ,                                       (4.1)

where we assumed that the parameters µ have some constant value. If one or more of these parameters are varied, changes may occur in the qualitative structure of the solutions of Eq. (4.1) for certain values of µ. These changes are called bifurcations and the corresponding parameter values are called bifurcation values. Let us consider as an example the case µ ∈ ℝ. The fixed points of Eq. (4.1) will be the solutions of the following equation,

    f(x, µ) = 0,                                                          (4.2)

and may depend on µ. The graph of x̄ versus µ will be called the bifurcation diagram (where x̄ can represent not only fixed points but also periodic orbits, etc.). If there is more than one bifurcation parameter, e.g., µ1 and µ2, then
a plot in the (µ1, µ2)-plane, in which curves mark the parameter values where a bifurcation is observed, is called the bifurcation set.
From the previous chapters we know that when a fixed point is hyperbolic, its stability is determined by linearization. Hence, it is easy to understand that a bifurcation is expected to occur when at least one of the eigenvalues of the dynamical system becomes zero or purely imaginary as we vary the parameter µ, i.e., when the fixed point under study becomes nonhyperbolic. Thus, we arrive naturally at the study of nonhyperbolic fixed points. Here we must note that the existence of a nonhyperbolic fixed point is a necessary but not sufficient condition for a bifurcation. This means that when one or more eigenvalues of the system are zero or purely imaginary, we might or might not observe a bifurcation. On the other hand, in order to have a bifurcation, one or more eigenvalues must be zero or purely imaginary.
4.2 Center manifold theory
Let us consider a dynamical system of dimension n = c + s, where c eigenvalues are zero or purely imaginary and s eigenvalues are real negative or complex with negative real part (here we will assume that all eigenvalues are simple). It is rather easy to imagine that, due to the presence of the s eigenvalues with negative real part, any orbit will rapidly decay to a subspace (let us call this subspace a manifold) where the stability of the system is determined by the c eigenvalues with zero real part. This subspace, where the stability is determined by the eigenvalues with zero real part, will be called the center manifold. The question now is how to calculate this center manifold.
Let us formulate the problem. Consider a dynamical system of the form of Eq. (4.1), where c eigenvalues are zero or purely imaginary and s eigenvalues are negative or complex with negative real part. Using the tools of linear algebra, this system can be written (forgetting the existence of the parameters for a moment),

    ẋ = Ax + f(x, y),
    ẏ = By + g(x, y),    x ∈ ℝᶜ, y ∈ ℝˢ.                                  (4.3)

In Eq. (4.3) the matrices A and B are in Jordan canonical form. Let us assume that the origin is a fixed point. Note that in Eq. (4.3) the matrix A has c eigenvalues with zero real part and B has s eigenvalues with negative real part. It is time now to state two theorems [Carr, 1981]:
Theorem 4.1 (Existence of the center manifold) There exists a center manifold for Eq. (4.3), represented by y = h(x). The dynamics of Eq. (4.3) restricted to the center manifold is governed by the following system,

    u̇ = Au + f(u, h(u)),    u ∈ ℝᶜ,                                       (4.4)

for small u.
From Theorem 4.1 we note that the study of the original system of dimension n is now restricted to the center manifold of dimension c. Thus, the problem is simplified, since its dimension is reduced. The next theorem states that the stability of the fixed point of the original dynamical system is determined by the stability of the fixed point of Eq. (4.4).
Theorem 4.2 Suppose the zero solution of Eq. (4.4) is stable (respectively, asymptotically stable, unstable). Then the zero solution of Eq. (4.3) is also stable (respectively, asymptotically stable, unstable).
Now that we know that a center manifold exists and that the stability of the original system is governed by the stability of the reduced system on the center manifold, let us derive an algorithm to calculate the analytic form of Eq. (4.4). The center manifold is given by,

    y = h(x).                                                             (4.5)

Differentiating with respect to time we have,

    ẏ = Dxh(x)ẋ.                                                          (4.6)

Substituting Eq. (4.5) into Eq. (4.3) we get,

    ẋ = Ax + f(x, h(x)),
    ẏ = Bh(x) + g(x, h(x)).                                               (4.7)

Finally, substituting Eq. (4.7) into Eq. (4.6) we obtain,

    Dxh(x)[Ax + f(x, h(x))] − Bh(x) − g(x, h(x)) = 0,                     (4.8)

where Dxh(x) is a matrix with elements ∂hi/∂xj. If we solve Eq. (4.8) we get the analytic form of h(x); that is, we can write down Eq. (4.4) explicitly. Unfortunately, though, Eq. (4.8) is a quasi-linear partial differential equation which is very difficult to solve for the unknown h(x). Nevertheless, there is a theorem stating that we can calculate h(x) to any desired degree of accuracy by solving Eq. (4.8) to the same degree of accuracy. Thus, we assume a power series expansion of h(x), substitute it into Eq. (4.8), and find the unknown coefficients of the expansion. How this is done is shown in the examples.
All of the above can be generalized to the case where the system depends on a parameter µ. In this case, µ is considered as an additional dynamical variable of the system and the same algorithm is applied. Now the fixed point under study is (x, y, µ) = (0, 0, 0). Thus, in this case, we write Eq. (4.3) as,

    ẋ = Ax + f(x, y, µ),
    µ̇ = 0,
    ẏ = By + g(x, y, µ),    x ∈ ℝᶜ, y ∈ ℝˢ, µ ∈ ℝᵖ.                       (4.9)
Now the center manifold will be written as,

    y = h(x, µ),

and Eq. (4.8) as,

    Dxh(x, µ)[Ax + f(x, h(x, µ), µ)] − Bh(x, µ) − g(x, h(x, µ), µ) = 0,   (4.10)

where Dxh(x, µ) is a matrix with elements ∂hi(x, µ)/∂xj. The dynamics on the center manifold will be governed by,

    u̇ = Au + f(u, h(u, µ), µ),
    µ̇ = 0.                                                                (4.11)

The exact procedure for finding the stability of a nonhyperbolic fixed point in a parameterized system will be outlined in the examples.
Note that, the center manifold theory is not necessarily applied only inbifurcation problems. We can say that this theory can be used for at leastthree purposes:
• Study the stability of nonhyperbolic fixed points (Example 4.1).
• Reduce the dimension of the system (Example 4.2).
• Study the bifurcations near a nonhyperbolic fixed point (Example 4.3).
4.3 Static bifurcations
In this section we will deal with the simplest bifurcations. In order to avoid introducing additional theorems and mathematical machinery, we will follow a rather unorthodox procedure in presenting these bifurcations. Unfortunately, we cannot skip some mathematics when we deal with bifurcations leading to oscillations.
4.3.1 The saddle-node bifurcation
Let us consider a dynamical system of the form of Eq. (4.1). Let us also assume that the origin, (x, µ) = (0, 0), is a nonhyperbolic fixed point. If the system has one simple zero eigenvalue and the rest are either complex or real (but not zero), then we know from center manifold theory that we can reduce the dimension of the system. In this case, the dimension of the reduced system which governs the dynamics on the center manifold is unity (c = 1), i.e.,

    ẋ = f(x, µ).                                                          (4.12)

The eigenvalue of this scalar system is λ = ∂f/∂x|_{0,0} = 0. Let us expand f(x, µ) in a Taylor series,
    f(x, µ) = f(0, 0) + ∂f/∂x|_{0,0} x + ∂f/∂µ|_{0,0} µ + ∂²f/∂x²|_{0,0} x²/2
            + ∂²f/∂µ²|_{0,0} µ²/2 + ∂²f/∂x∂µ|_{0,0} xµ + ...              (4.13)
Since the origin is a fixed point, f(0, 0) = 0 and, additionally, since it is nonhyperbolic, ∂f/∂x|_{0,0} = 0. Thus, Eq. (4.13) is rewritten,

    ẋ = a1µ + a2x² + a3µ² + a4µx + ...,                                   (4.14)
where the constants have an obvious meaning.

As a first step, let us assume that,

    a1 = ∂f/∂µ|_{0,0} ≠ 0                                                 (4.15)
    a2 = ∂²f/∂x²|_{0,0} ≠ 0                                               (4.16)
    a4 = ∂²f/∂x∂µ|_{0,0} = 0                                              (4.17)
Under these conditions, Eq. (4.14) is rewritten,

    ẋ = a1µ + a2x² + ...                                                  (4.18)

Letting x → √|a1/a2| x and t → a1 √|a2/a1| t, and ignoring higher order terms, we get,

    ẋ = µ + βx²,                                                          (4.19)
where β = ±1. Let us study Eq. (4.19) as we vary µ. First note that for µ = 0 the origin is nonhyperbolic. The fixed points of Eq. (4.19) are,

    x̄ = ±√(−µ)  if β = 1
    x̄ = ±√µ     if β = −1                                                 (4.20)

Since we want the fixed points to be real, µ ≥ 0 when β = −1 and µ ≤ 0 when β = 1. So we note here that the fixed points exist only in one half-plane of the parameter space, depending on the value of β.
The stability of these fixed points is determined by the eigenvalue, λ = 2βx̄. Let us explore each case:

• For β = 1, the eigenvalue corresponding to +√(−µ) is positive, so this branch of fixed points is unstable. The eigenvalue corresponding to −√(−µ) is negative, so this branch is stable.

• For β = −1, the eigenvalue corresponding to +√µ is negative, so this branch of fixed points is stable. The eigenvalue corresponding to −√µ is positive, so this branch is unstable.
We will call this bifurcation scenario a saddle-node bifurcation. The casesβ = −1 and β = 1 are shown graphically in Fig. 4.1.
What happens if a4 ≠ 0? In this case it can be shown that locally there is still a saddle-node bifurcation at the origin. Thus, terms of the form µx do not affect the qualitative behavior in the neighborhood of this nonhyperbolic fixed point. This situation is treated in Example 4.4.
Figure 4.1: Saddle-node bifurcation for (a) β = −1 and (b) β = 1.
How about higher order terms? Do they affect the saddle-node equation, Eq. (4.19)? In this case it can be shown that, once again, locally there is a saddle-node bifurcation at the origin. Thus, terms of the form x³ can be ignored. This situation is explored in Example 4.5.
As a conclusion, we can say that a saddle-node bifurcation takes place at the origin if:

• The origin is a fixed point,

    f(0, 0) = 0.                                                          (4.21)

• The origin is a nonhyperbolic fixed point,

    ∂f/∂x|_{0,0} = 0.                                                     (4.22)

• Additionally, the following conditions hold,

    ∂f/∂µ|_{0,0} ≠ 0,    ∂²f/∂x²|_{0,0} ≠ 0.                              (4.23)

• The normal form of the saddle-node bifurcation is,

    ẋ = µ ± x²                                                            (4.24)
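The branch structure above can be sketched in a few lines. The helper name `saddle_node_fixed_points` below is ours; it returns the fixed points of the normal form (4.24) together with their eigenvalues λ = 2βx̄.

```python
import math

# Saddle-node normal form x' = mu + beta*x^2: for beta = -1, two fixed
# points x = ±sqrt(mu) exist for mu > 0 (upper stable, lower unstable),
# and none for mu < 0, matching Fig. 4.1(a).

def saddle_node_fixed_points(mu, beta=-1.0):
    """Return (fixed point, eigenvalue) pairs of x' = mu + beta*x**2."""
    if mu / (-beta) < 0:
        return []                    # no real fixed points on this side of mu = 0
    r = math.sqrt(mu / (-beta))
    return [(x, 2*beta*x) for x in (r, -r)]

print(saddle_node_fixed_points(0.25))    # [(0.5, -1.0), (-0.5, 1.0)]
print(saddle_node_fixed_points(-0.25))   # []
```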
4.3.2 The transcritical bifurcation
In this section we will explore what happens if one of the conditions for a saddle-node bifurcation is broken. More specifically, we will study the case,

    ∂f/∂µ|_{0,0} = 0.                                                     (4.25)
In this case we must also introduce the additional term µx by letting,

    ∂²f/∂x∂µ|_{0,0} ≠ 0.                                                  (4.26)

Thus, we arrive at the equation,

    ẋ = µx + βx²,                                                         (4.27)

where β = ±1. The steady states of Eq. (4.27) are x̄ = 0 and x̄ = −µ/β. The eigenvalue of the system is λ = µ + 2βx̄. So, we observe that the steady state x̄ = 0 is stable for µ < 0 and unstable for µ > 0. On the other hand, the steady state x̄ = −µ/β is unstable for µ < 0 and stable for µ > 0. This bifurcation is called transcritical. The cases β = 1 and β = −1 are shown in Fig. 4.2.
What happens if we "perturb" the transcritical bifurcation by adding a lower order term to Eq. (4.27)? (That is, if we keep the lower order terms in the Taylor expansion.) Obviously, the addition of a term aµ turns out to be the same as the case of Eq. (4.83). Thus, in this case, the transcritical bifurcation becomes a saddle-node bifurcation at the origin. The general case, when the lower order perturbation is a constant, is treated in Example 4.6.
As a conclusion we can say that a transcritical bifurcation takes placeat the origin if,
Figure 4.2: Transcritical bifurcation for (a) β = 1 and (b) β = −1.
• The origin is a fixed point,

    f(0, 0) = 0.                                                          (4.28)

• The origin is a nonhyperbolic fixed point,

    ∂f/∂x|_{0,0} = 0.                                                     (4.29)

• Additionally, the following conditions hold,

    ∂f/∂µ|_{0,0} = 0,    ∂²f/∂x²|_{0,0} ≠ 0,    ∂²f/∂x∂µ|_{0,0} ≠ 0.      (4.30)

• The normal form of the transcritical bifurcation is,

    ẋ = µx ± x²                                                           (4.31)
Figure 4.3: Pitchfork bifurcation for (a) β = −1 and (b) β = 1.
4.3.3 The pitchfork bifurcation
The remaining case to study is when the following condition holds,

    ∂²f/∂x²|_{0,0} = 0.                                                   (4.32)

In this case we must introduce x³ terms, since by the above condition second order nonlinearities are removed. Hence, we must study the following equation,

    ẋ = µx + βx³,                                                         (4.33)

where β = ±1. The steady states of Eq. (4.33) are x̄ = 0 and x̄ = ±√(−µ/β). The first steady state exists for any value of µ. Since we want the steady states to be real, the following holds:

• For β = 1, the steady states x̄ = ±√(−µ) exist only for µ ≤ 0.

• For β = −1, the steady states x̄ = ±√µ exist only for µ ≥ 0.
The eigenvalue of the system is λ = µ + 3βx̄². Thus, the steady state x̄ = 0 is stable for µ < 0 and unstable for µ > 0. For the other steady states we have:

• For β = 1, the eigenvalue is λ = −2µ and, since µ < 0, the steady states x̄ = ±√(−µ) are unstable.

• For β = −1, the eigenvalue is λ = −2µ and, since µ > 0, the steady states x̄ = ±√µ are stable.
We call this bifurcation the pitchfork bifurcation. The pitchfork bifurcation is shown schematically in Fig. 4.3.

Let us explore what happens if the pitchfork bifurcation is "perturbed" by adding a constant term to Eq. (4.33). In this case we write,

    ẋ = ε + µx + βx³.                                                     (4.34)

Once again, depending on the sign of ε, bifurcations are either suppressed or become saddle-nodes. This case is investigated in Example 4.7.
As a conclusion, we can say that a pitchfork bifurcation takes place at the origin if:

• The origin is a fixed point,

    f(0, 0) = 0.                                                          (4.35)

• The origin is a nonhyperbolic fixed point,

    ∂f/∂x|_{0,0} = 0.                                                     (4.36)

• Additionally, the following conditions hold,

    ∂f/∂µ|_{0,0} = 0,    ∂²f/∂x²|_{0,0} = 0,
    ∂²f/∂x∂µ|_{0,0} ≠ 0,    ∂³f/∂x³|_{0,0} ≠ 0.                           (4.37)

• The normal form of the pitchfork bifurcation is,

    ẋ = µx ± x³                                                           (4.38)
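The effect of the perturbation ε in Eq. (4.34) can be sketched by simply counting real fixed points; the helper name `count_fixed_points` and the sampling parameters below are ours.

```python
# Perturbed pitchfork x' = eps + mu*x - x^3 (beta = -1): a nonzero eps
# destroys the symmetric three-branch picture until mu grows large enough,
# at which point two branches reappear through a saddle-node.

def count_fixed_points(eps, mu, lo=-10.0, hi=10.0, n=100000):
    """Count real roots of eps + mu*x - x**3 on [lo, hi] via sign changes."""
    f = lambda x: eps + mu*x - x**3
    count, prev = 0, f(lo)
    for i in range(1, n + 1):
        cur = f(lo + (hi - lo)*i/n)
        if prev == 0 or prev*cur < 0:   # a root at a grid point or in between
            count += 1
        prev = cur
    return count

print(count_fixed_points(0.0, 1.0))     # 3: the unperturbed pitchfork, mu > 0
print(count_fixed_points(0.5, 1.0))     # 1: eps wipes out two branches
print(count_fixed_points(0.5, 3.0))     # 3: two reappear in a saddle-node
```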
4.4 Normal forms
In the previous section it was shown that, in the case of a simple zero eigenvalue, the local behavior of the system is determined by limiting cases of simple equations, or normal forms. Additionally, we observed that some nonlinearities are essential for the qualitative behavior of the system, whereas others do not induce any additional local phenomena near the bifurcation point and can be disregarded. We also observed that some types of bifurcations turn into others (or are completely suppressed) if we "perturb" the system.
Here, we will present the theory of normal forms, which allows a sys-tematic elimination of nonlinearities near a bifurcation point. In the nextsection we will apply the method of normal forms in the case of a complexeigenvalue.
Let us consider an n-dimensional dynamical system with some zero or purely imaginary eigenvalues at the origin. By center manifold theory, the dimension of the system can be reduced and the system represented as,

    ẏ = Ay + f(y),                                                        (4.39)

where A is in Jordan form and f(y) contains the nonlinearities (we already know how to put any dynamical system into such a form).
Now we would like to introduce a coordinate transformation which turns Eq. (4.39) into the simplest possible form. Let us assume that the following is such a transformation,

    y = x + Φ(x).                                                         (4.40)

Let us assume that, under this transformation, Eq. (4.39) becomes,

    ẋ = Ax + P(x),                                                        (4.41)
where P(x) is a simpler polynomial vector function originating from the Taylor series expansion of f(y). The problem now is to calculate P(x) from our original equation. Let us see how we can do this.
Let us expand the function Φ(x) in a Taylor series,

    Φ(x) = Σ_{q=2}^{∞} Φq(x).                                             (4.42)

Similarly, let us expand the simplified nonlinear part of Eq. (4.41) in a Taylor series,

    P(x) = Σ_{q=2}^{∞} Pq(x).                                             (4.43)

Finally, let us also Taylor expand f(y),

    f(y) = Σ_{q=2}^{∞} fq(y).                                             (4.44)
In all the above expansions, the subscript q denotes the order of the monomial terms of the Taylor polynomial.

We differentiate Eq. (4.40) with respect to time to obtain,

    ẏ = (I + DxΦ(x)) ẋ.                                                   (4.45)

In Eq. (4.45) the matrix DxΦ(x) has elements of the form ∂Φi/∂xj. Now substitute Eqs. (4.44) and (4.45) into Eq. (4.39) to get,

    (I + DxΦ) ẋ = A(x + Φ) + Σ_{q=2}^{∞} fq(x + Φ(x)),

and in this equation substitute Eq. (4.41), with P(x) expanded as in Eq. (4.43). The result is,

    AΦ(x) − DxΦ(x)Ax = Σ_{q=2}^{∞} Pq(x) − Σ_{q=2}^{∞} fq(x + Φ(x))
                       + DxΦ(x)AΦ(x) + DxΦ(x) Σ_{q=2}^{∞} fq(x + Φ(x)).   (4.46)
Now, if we put Eq. (4.42) into Eq. (4.46) and match terms of the same order q (observe that the last two right-hand terms are always of order q + 1), we arrive at the following partial differential equation,

    AΦq − DxΦqAx = Pq − Rq,                                               (4.47)

where Rq denotes the q-th order terms of the expansion of the nonlinearity of the dynamical system, and the matrix DxΦq has elements of the form ∂Φi,q/∂xj. It is obvious that Pq will be zero (i.e., all q-th order nonlinearities can be removed by the coordinate transformation) if,

    AΦq − DxΦqAx = −Rq.                                                   (4.48)
Let us explore this equation for a moment. Assume that A has distinct eigenvalues, so that its Jordan form is diagonal. Let us write down one component of Eq. (4.48),

    λiΦi,q − Σ_j (∂Φi,q/∂xj) λj xj = −Ri,q,                               (4.49)

where λi is the i-th eigenvalue of A. Recall now that the vector function Φq has as components monomials of the form Φi,q = x1^{m1} x2^{m2} ... xn^{mn}, where Σ_{j=1}^n mj = q. If we put this expression into the left-hand side of Eq. (4.49), we easily find that,

    λiΦi,q − Σ_j (∂Φi,q/∂xj) λj xj = (λi − Σ_{j=1}^n mjλj) Φi,q.
The meaning of this equation is that the numbers λi − Σ_{j=1}^n mjλj are the eigenvalues of the linear operator L = A(·) − Dx(·)Ax, which is written explicitly as,

    L = ( λ1  0  ...  0 )       ( ∂(·)/∂x1  ...  ∂(·)/∂xn ) ( λ1  0  ...  0 ) ( x1 )
        ( 0  λ2  ...  0 ) (·) − ( ∂(·)/∂x1  ...  ∂(·)/∂xn ) ( 0  λ2  ...  0 ) ( x2 )
        (      ...      )       (         ...             ) (      ...      ) ( ...)
        ( 0   0  ... λn )       ( ∂(·)/∂x1  ...  ∂(·)/∂xn ) ( 0   0  ... λn ) ( xn )
Additionally, the eigenvectors of L are the vectors with monomial components x1^{m1} x2^{m2} ... xn^{mn}. Thus, Eq. (4.48) has a solution if the linear operator L is invertible, that is, if its eigenvalues are not zero. The operator is not invertible, i.e., Eq. (4.48) cannot be solved, if it has a zero eigenvalue, that is,

    λi − Σ_{j=1}^n mjλj = 0.                                              (4.50)

We call the condition imposed by Eq. (4.50) the resonance condition. If the condition imposed by Eq. (4.50) holds, then the q-th order monomials satisfying it cannot be removed.
Up to this point we know when nonlinear terms can be removed. But which terms, Pq, remain? Either we compute the resonant terms using Eq. (4.50), or we use the following result: it can be proved that the remaining nonlinearities can be found by solving the following partial differential equation,

    DxPq(x)A*x = A*Pq(x),                                                 (4.51)

where the star denotes the adjoint and, as usual, the matrix DxPq has elements of the form ∂Pi,q/∂xj. This equation can be solved by the method of characteristics. Note here that whether terms remain or are eliminated, as well as which terms remain, is determined by the linear part of the nonlinear dynamical system.
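The resonance condition (4.50) is easy to evaluate mechanically. The following sketch (the helper name `is_resonant` is ours) checks it for the pair of pure imaginary eigenvalues studied in the next section, reproducing the result that all quadratic terms are removable while the cubic monomial z²z̄ is not.

```python
# Resonance condition (4.50): for eigenvalues lams of the (diagonal) linear
# part, a degree-q monomial x1^m1 ... xn^mn in the i-th component is
# resonant -- and hence cannot be removed -- exactly when
# lambda_i - sum_j m_j*lambda_j = 0.

def is_resonant(lam_i, lams, exponents, tol=1e-12):
    """Check whether lambda_i - sum_j m_j*lambda_j vanishes for a monomial."""
    return abs(lam_i - sum(m*l for m, l in zip(exponents, lams))) < tol

# Pure imaginary pair lambda = ±i*omega (omega = 1), in coordinates (z, zbar):
lams = [1j, -1j]

# Quadratic monomials z^2, z*zbar, zbar^2 in the z-equation (lambda_1 = i):
quadratic = {m: is_resonant(lams[0], lams, m) for m in [(2, 0), (1, 1), (0, 2)]}
print(quadratic)                  # all False: every quadratic term is removable

# Cubic monomials: only z^2*zbar, i.e. exponents (2, 1), resonates.
cubic = {m: is_resonant(lams[0], lams, m) for m in [(3, 0), (2, 1), (1, 2), (0, 3)]}
print(cubic)
```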
4.5 The Hopf bifurcation
In this section we will study the case of a nonlinear dynamical system with one pair of pure imaginary eigenvalues, \lambda = \pm i\omega. Under this condition, and by using concepts of linear algebra and the center manifold theory, the system is reduced to a two-dimensional system,
\[
\begin{pmatrix}\dot{x}\\ \dot{y}\end{pmatrix}
=
\begin{pmatrix}0 & -\omega\\ \omega & 0\end{pmatrix}
\begin{pmatrix}x\\ y\end{pmatrix}
+
\begin{pmatrix}f_1(x,y)\\ f_2(x,y)\end{pmatrix}. \tag{4.52}
\]
Let us transform to complex coordinates by letting z = x + iy and \bar{z} = x - iy (note that now the over-bar means complex conjugate). Under this transformation Eq. (4.52) becomes,
\[
\begin{pmatrix}\dot{z}\\ \dot{\bar{z}}\end{pmatrix}
=
\begin{pmatrix}i\omega & 0\\ 0 & -i\omega\end{pmatrix}
\begin{pmatrix}z\\ \bar{z}\end{pmatrix}
+
\begin{pmatrix}F_1(z,\bar{z})\\ F_2(z,\bar{z})\end{pmatrix}. \tag{4.53}
\]
4.5.1 The normal form for a pair of pure imaginary eigenvalues
Let us try to apply the method of normal forms to Eq. (4.53).
• Case 1: Using the resonance condition

We will start by dealing with second order terms. In this case, the components of \Phi_2 will be z^2, \bar{z}^2 and z\bar{z}. The monomial z^2 can be written as z^{m_1}\bar{z}^{m_2} where m_1 = 2 and m_2 = 0. Thus, Eq. (4.50) is written, for \lambda_i = i\omega,
\[
\lambda_i - \sum_{j=1}^n m_j\lambda_j = i\omega - 2i\omega = -i\omega,
\]
which can never be zero. Thus, terms of the form z^2 can be removed.

The monomial \bar{z}^2 is written as z^{m_1}\bar{z}^{m_2} where m_1 = 0 and m_2 = 2. Thus, Eq. (4.50) is written,
\[
\lambda_i - \sum_{j=1}^n m_j\lambda_j = i\omega + 2i\omega = 3i\omega,
\]
which can never be zero. Thus, terms of the form \bar{z}^2 can be removed.

The monomial z\bar{z} is written as z^{m_1}\bar{z}^{m_2} where m_1 = 1 and m_2 = 1. Thus, Eq. (4.50) is written,
\[
\lambda_i - \sum_{j=1}^n m_j\lambda_j = i\omega - (i\omega - i\omega) = i\omega,
\]
which can never be zero. Thus, terms of the form z\bar{z} can be removed. Exactly the same procedure can be applied for the other eigenvalue \lambda_i = -i\omega, which gives the identical result. As a conclusion, all second order terms can be removed by a coordinate transformation.

Let us go now to third order terms. In this case, the components of \Phi_3 can be z^3, \bar{z}^3, z^2\bar{z} and z\bar{z}^2. Using Eq. (4.50) as before we get, for z^3 terms,
\[
\lambda_i - \sum_{j=1}^n m_j\lambda_j = i\omega - 3i\omega = -2i\omega.
\]
For terms of the form \bar{z}^3 we get,
\[
\lambda_i - \sum_{j=1}^n m_j\lambda_j = i\omega + 3i\omega = 4i\omega.
\]
Now, for z^2\bar{z} terms,
\[
\lambda_i - \sum_{j=1}^n m_j\lambda_j = i\omega - 2i\omega + i\omega = 0.
\]
Finally, for terms of the form z\bar{z}^2,
\[
\lambda_i - \sum_{j=1}^n m_j\lambda_j = i\omega - i\omega + 2i\omega = 2i\omega.
\]
Thus all third order terms can be removed except the resonant term z^2\bar{z}.
In general, notice that the resonance condition can be written,
\[
\lambda - (m_1\lambda + m_2\bar{\lambda}) = 0.
\]
But since \bar{\lambda} = -\lambda, the resonance condition is fulfilled only when,
\[
1 - m_1 + m_2 = 0. \tag{4.54}
\]
Obviously this condition cannot be true if both m_1 and m_2 are even numbers. Thus all even order terms can be removed. Therefore, up to the fifth order, the normal form of Eq. (4.53) is,
\[
\begin{pmatrix}\dot{z}\\ \dot{\bar{z}}\end{pmatrix}
=
\begin{pmatrix}i\omega & 0\\ 0 & -i\omega\end{pmatrix}
\begin{pmatrix}z\\ \bar{z}\end{pmatrix}
+
\begin{pmatrix}c z^2\bar{z}\\ \bar{c}\,\bar{z}^2 z\end{pmatrix}, \tag{4.55}
\]
where c = a + ib is a complex constant.
• Case 2: Using Eq. (4.51)

Alternatively, the resonant terms can be computed by solving Eq. (4.51). Explicitly, this equation is written,
\[
\begin{pmatrix}
\frac{\partial P}{\partial z} & \frac{\partial P}{\partial \bar{z}}\\
\frac{\partial \bar{P}}{\partial z} & \frac{\partial \bar{P}}{\partial \bar{z}}
\end{pmatrix}
\begin{pmatrix}-i\omega & 0\\ 0 & i\omega\end{pmatrix}
\begin{pmatrix}z\\ \bar{z}\end{pmatrix}
=
\begin{pmatrix}-i\omega & 0\\ 0 & i\omega\end{pmatrix}
\begin{pmatrix}P\\ \bar{P}\end{pmatrix}. \tag{4.56}
\]
After performing some matrix arithmetic we get the following system of first order partial differential equations,
\[
-\frac{\partial P}{\partial z}z + \frac{\partial P}{\partial \bar{z}}\bar{z} = -P,\qquad
-\frac{\partial \bar{P}}{\partial z}z + \frac{\partial \bar{P}}{\partial \bar{z}}\bar{z} = \bar{P}. \tag{4.57}
\]
This system can be solved with the method of characteristics. Applying this method we get,
\[
\frac{dP}{-P} = \frac{dz}{-z} = \frac{d\bar{z}}{\bar{z}},\qquad
\frac{d\bar{P}}{\bar{P}} = \frac{dz}{-z} = \frac{d\bar{z}}{\bar{z}}. \tag{4.58}
\]
Integrating the second equality we get the first integral,
\[
z\bar{z} = c,
\]
where c is a real constant. Integrating the first equality we get,
\[
P = \phi(z\bar{z})\,z,
\]
where the constant of integration, \phi, is a function of the first integral, z\bar{z} = c. Here we observe that since the first integral is a second order function, P can never be a second order function (in fact, it can never be an even function). Thus second order terms can be removed. As far as third order terms are concerned, we set \phi(z\bar{z}) = z\bar{z}, and thus the remaining third order terms are of the form P = z^2\bar{z}. Hence, the normal form is given by Eq. (4.55).
4.5.2 The normal form in various coordinate systems
The effect of a bifurcation parameter \mu on the normal form can be introduced in Eq. (4.55) by letting,
\[
\begin{pmatrix}\dot{z}\\ \dot{\bar{z}}\end{pmatrix}
=
\left[
\begin{pmatrix}i\omega & 0\\ 0 & -i\omega\end{pmatrix}
+
\begin{pmatrix}\mu & 0\\ 0 & \mu\end{pmatrix}
\right]
\begin{pmatrix}z\\ \bar{z}\end{pmatrix}
+
\begin{pmatrix}c z^2\bar{z}\\ \bar{c}\,\bar{z}^2 z\end{pmatrix}.
\]
Thus, the parameterized normal form in complex coordinates is written,
\[
\begin{pmatrix}\dot{z}\\ \dot{\bar{z}}\end{pmatrix}
=
\begin{pmatrix}\mu + i\omega & 0\\ 0 & \mu - i\omega\end{pmatrix}
\begin{pmatrix}z\\ \bar{z}\end{pmatrix}
+
\begin{pmatrix}c z^2\bar{z}\\ \bar{c}\,\bar{z}^2 z\end{pmatrix}. \tag{4.59}
\]
Turning to Cartesian coordinates we get,
\[
\begin{pmatrix}\dot{x}\\ \dot{y}\end{pmatrix}
=
\begin{pmatrix}\mu & -\omega\\ \omega & \mu\end{pmatrix}
\begin{pmatrix}x\\ y\end{pmatrix}
+
\begin{pmatrix}(ax - by)(x^2 + y^2)\\ (bx + ay)(x^2 + y^2)\end{pmatrix}. \tag{4.60}
\]
Finally, in polar coordinates,
\[
\dot{r} = \mu r + a r^3,\qquad
\dot{\theta} = \omega + b r^2. \tag{4.61}
\]
4.5.3 Simplified analysis of the Hopf bifurcation
For a simplified analysis of the Hopf bifurcation let us consider the polar coordinate form, Eq. (4.61). Let us ignore the term br^2 in the second equation and study only the first one. Note, though, that for this equation steady states with r > 0 correspond to periodic orbits (with radius r), whereas steady states with r = 0 correspond to points.

If we observe the first equation in Eq. (4.61) we immediately understand that it corresponds to a pitchfork bifurcation of a one-dimensional system. The steady states of Eq. (4.61) are the solutions of the equation,
\[
r(\mu + a r^2) = 0.
\]
Figure 4.4: Hopf bifurcation for (a) a < 0, (b) a > 0. Branches with r > 0 correspond to periodic orbits.
Thus r = 0 is a steady state (a point) for any \mu. The other steady states are r = \pm\sqrt{-\mu/a}. Since r represents the radius, the negative solution can be ignored; thus we have one branch of steady states, namely r = \sqrt{-\mu/a}. This branch exists in the region -\mu/a > 0.

The eigenvalue of the system corresponding to the steady state r = \sqrt{-\mu/a} is \lambda = -2\mu. Thus, if \mu > 0 (which means a < 0) this steady state is stable; moreover, the periodic orbit is stable. If \mu < 0 (which means a > 0) the steady state is unstable, and the periodic orbit is unstable as well.

As far as the steady state r = 0 is concerned, its eigenvalue is \lambda = \mu. Thus it is stable for \mu < 0 and unstable for \mu > 0.
We will call this bifurcation scenario the Hopf bifurcation. During a Hopf bifurcation a fixed point reverses its stability and a periodic orbit is born. This periodic orbit is a limit cycle (see Example 4.8). The two cases presented above are shown in Fig. 4.4.
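The birth of the limit cycle can also be observed numerically (an illustrative Python sketch added to the notes, with arbitrary parameter values): integrating \dot{r} = \mu r + a r^3 for \mu > 0 and a < 0, the radius settles to \sqrt{-\mu/a} whether we start inside or outside the cycle.

```python
def integrate_r(mu, a, r0, h=1e-3, steps=20000):
    """Integrate dr/dt = mu*r + a*r**3 with the classical RK4 scheme."""
    f = lambda r: mu * r + a * r**3
    r = r0
    for _ in range(steps):
        k1 = h * f(r)
        k2 = h * f(r + 0.5 * k1)
        k3 = h * f(r + 0.5 * k2)
        k4 = h * f(r + k3)
        r += (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return r

mu, a = 1.0, -1.0
for r0 in (0.05, 2.0):                       # start inside and outside the cycle
    print(r0, "->", integrate_r(mu, a, r0))  # both tend to sqrt(-mu/a) = 1
```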
4.6 Examples
Example 4.1 Consider the following nonlinear dynamical system,
\[
\dot{x} = x^2 y - x^5,\qquad
\dot{y} = -y + x^2. \tag{4.62}
\]
1. Write Eq. (4.62) in the form of Eq. (4.3).

2. Study the stability of the origin.
Solution: The origin is a fixed point of the system, i.e. (x, y) = (0, 0). The Jacobian of the system at this point is,
\[
D_x f(0) = \begin{pmatrix}0 & 0\\ 0 & -1\end{pmatrix},
\]
and the eigenvalues are \lambda_1 = 0 and \lambda_2 = -1. Hence, the origin is a non-hyperbolic fixed point and its stability cannot be studied by linearization.

Equation (4.62) is already in a canonical form (Eq. (4.3)), for,
\[
A = 0,\qquad B = -1,\qquad
f(x,y) = x^2 y - x^5,\qquad
g(x,y) = x^2. \tag{4.63}
\]
Now, let us assume that the center manifold is written,
\[
h(x) = a x^2 + b x^3 + \dots \tag{4.64}
\]
Putting Eq. (4.64) and Eq. (4.63) into Eq. (4.8) we have,
\[
(2ax + 3bx^2 + \dots)(a x^4 + b x^5 - x^5 + \dots) + a x^2 + b x^3 - x^2 + \dots = 0.
\]
Figure 4.5: Trajectories to the center manifold, h(x) = x^2, for Eq. (4.62).
By equating coefficients of equal powers of x we get a = 1 and b = 0. Thus, the center manifold is written,
\[
h(x) = x^2 + O(4), \tag{4.65}
\]
where the notation O(n) means terms of n-th order. Putting Eq. (4.65) into Eq. (4.4) we get the equation which determines the dynamics on the center manifold,
\[
\dot{u} = u^4 + O(5). \tag{4.66}
\]
The fixed point u = 0 is unstable; thus (x, y) = (0, 0) is also unstable, according to Theorem (4.2). The trajectories to the center manifold for Eq. (4.62) are shown in Fig. 4.5.
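The attraction of trajectories to the center manifold can be checked numerically (an illustrative Python sketch added to the notes; the initial condition and the plain Euler stepping are arbitrary choices): starting well off the manifold, y is quickly slaved to h(x) = x^2, after which x drifts slowly according to \dot{u} = u^4 + \dots

```python
def step(x, y, h):
    """One Euler step of Eq. (4.62): dx/dt = x^2*y - x^5, dy/dt = -y + x^2."""
    return x + h * (x**2 * y - x**5), y + h * (-y + x**2)

x, y = 0.3, 0.4           # start well off the manifold y = x^2
h = 1e-3
for _ in range(10000):    # integrate up to t = 10
    x, y = step(x, y, h)
print(x, y, abs(y - x**2))   # y is now close to h(x) = x^2, x has drifted up
```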
Example 4.2 Consider an enzymatic reaction of the Michaelis-Menten type,
\[
S + E \rightleftharpoons ES,\qquad
ES \to P + E,\qquad
[E] + [ES] = [E]_0,
\]
where [S], [E], [ES] and [P] are the concentrations of the substrate, enzyme, enzyme-substrate complex and product, respectively. Also, [E]_0 is the total concentration of the enzyme, which is conserved. Assume k_1, k_2 and k_3 to be the kinetic constants of these reactions.
1. Write the kinetic equations of this system.
2. Transform to dimensionless form.
3. Use the center manifold theory to reduce the dimension of the system.
Solution: For convenience, let us set,
\[
[S] = y,\qquad [ES] = z,\qquad [E]_0 = c,\qquad [E] = c - z.
\]
Using these symbols, the kinetic equations are written,
\[
\frac{dy}{dt} = -k_1 y(c - z) + k_2 z,\qquad
\frac{dz}{dt} = k_1 y(c - z) - (k_2 + k_3)z. \tag{4.67}
\]
The units of k_1 are, let's say, mol^{-1}\,\ell\,\mathrm{s}^{-1}, and those of k_2 and k_3 are s^{-1}. In order to transform Eq. (4.67) into dimensionless form let us set,
\[
z = c z,\qquad
y = \frac{k_2 + k_3}{k_1}\,y = K_m y,\qquad
t = \frac{\tau}{k_1 c}.
\]
Thus, Eq. (4.67) becomes,
\[
\dot{y} = -y + (y + c)z,\qquad
\varepsilon\dot{z} = y - (y + 1)z, \tag{4.68}
\]
where now c = \frac{k_2}{k_2 + k_3} (the symbol c is re-used for this dimensionless ratio) and \varepsilon = [E]_0/K_m.

Let us study now Eq. (4.68). By setting \tau = \varepsilon t we get,
\[
\dot{y} = \varepsilon\bigl(-y + (y + c)z\bigr) = \varepsilon f(y,z),\qquad
\dot{z} = y - (y + 1)z. \tag{4.69}
\]
Obviously, the origin is a fixed point of Eq. (4.69). The Jacobian at this point is written,
\[
D_x f(0) = \begin{pmatrix}-\varepsilon & c\varepsilon\\ 1 & -1\end{pmatrix},
\]
and the eigenvalues are,
\[
\lambda_{1,2} = -\frac{\varepsilon + 1}{2} \pm \frac{1}{2}\sqrt{(\varepsilon + 1)^2 - 4\varepsilon(1 - c)}.
\]
For \varepsilon = 0, the eigenvalues are \lambda_1 = 0 and \lambda_2 = -1. Now, if we consider \varepsilon as an additional variable of the system, Eq. (4.69) is written,
\[
\begin{pmatrix}\dot{y}\\ \dot{z}\end{pmatrix}
=
\begin{pmatrix}0 & 0\\ 1 & -1\end{pmatrix}
\begin{pmatrix}y\\ z\end{pmatrix}
+
\begin{pmatrix}\varepsilon f(y,z)\\ -yz\end{pmatrix},\qquad
\dot{\varepsilon} = 0. \tag{4.70}
\]
In order to apply the center manifold theory to Eq. (4.70) we must put the linear part in a canonical (Jordan) form. We know how to do this from linear algebra. Hence, the matrix S will be,
\[
S = \begin{pmatrix}1 & 0\\ 1 & -1\end{pmatrix},
\]
with inverse,
\[
S^{-1} = \begin{pmatrix}1 & 0\\ 1 & -1\end{pmatrix}.
\]
Now, letting,
\[
\begin{pmatrix}u\\ v\end{pmatrix} = S^{-1}\begin{pmatrix}y\\ z\end{pmatrix},
\]
we obtain,
\[
\begin{pmatrix}\dot{u}\\ \dot{v}\end{pmatrix}
=
\begin{pmatrix}0 & 0\\ 0 & -1\end{pmatrix}
\begin{pmatrix}u\\ v\end{pmatrix}
+
\begin{pmatrix}\varepsilon f(u,v)\\ u^2 - uv + \varepsilon f(u,v)\end{pmatrix},\qquad
\dot{\varepsilon} = 0. \tag{4.71}
\]
Equation (4.71) is now in a form where the center manifold theory can be applied. In this case,
\[
A = 0,\qquad B = -1,\qquad
f(u,v,\varepsilon) = \varepsilon f(u,v),\qquad
g(u,v,\varepsilon) = u^2 - uv + \varepsilon f(u,v).
\]
Thus, let us assume that the center manifold is given by,
\[
v = h(u,\varepsilon) = \alpha u^2 + \beta u\varepsilon + \gamma\varepsilon^2 + O(3).
\]
According to Eq. (4.10) we have,
\[
\frac{\partial h}{\partial u}\bigl[0\cdot u + \varepsilon f(u,h)\bigr] - (-1)h - u^2 + uh - \varepsilon f(u,h) = 0, \tag{4.72}
\]
thus, up to the third order, Eq. (4.72) is written,
\[
(\alpha - 1)u^2 + \varepsilon u(\beta - c + 1) + \gamma\varepsilon^2 + O(3) = 0. \tag{4.73}
\]
Equating coefficients of equal powers in Eq. (4.73) we have \alpha = 1, \beta = c - 1 = -\lambda and \gamma = 0, where we set \lambda = 1 - c. Note that \lambda > 0, since 0 < c < 1. Now that we know the values of the coefficients we can write explicitly the center manifold up to the 3rd order, i.e.,
\[
v = h(u,\varepsilon) = u^2 - \lambda\varepsilon u + O(3).
\]
The equation governing the dynamics, Eq. (4.11), will be,
\[
\dot{u} = \varepsilon\bigl(-u + (u + c)(u - u^2 + \lambda\varepsilon u)\bigr), \tag{4.74}
\]
or, in the original time scale,
\[
\dot{u} = -\lambda(u - u^2) + O(3). \tag{4.75}
\]
Let us see what the meaning of Eq. (4.75) is. If we return for a moment to Eq. (4.68) and set \varepsilon = 0 in the second equation we get,
\[
z = \frac{y}{y + 1}.
\]
Substituting into the first equation we get,
\[
\dot{y} = -(1 - c)\frac{y}{y + 1} = -\lambda y(y + 1)^{-1}. \tag{4.76}
\]
Expanding around zero we get,^1
\[
\dot{y} \approx -\lambda(y - y^2). \tag{4.77}
\]
By comparing Eq. (4.75) with Eq. (4.77) we observe that we have actually proved the quasi-steady-state approximation (QSSA).

^1 Here we use the Maclaurin series: (1 + y)^{-1} = 1 - y + y^2 - \dots
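The QSSA just proved can also be verified numerically (an illustrative Python sketch added to the notes; the parameter values are arbitrary): for small \varepsilon the full dimensionless system, Eq. (4.68), keeps z glued to the quasi-steady value y/(y + 1).

```python
def qssa_error(c=0.5, eps=0.01, y0=1.0, z0=0.0, h=1e-4, t_end=2.0):
    """Integrate Eq. (4.68) with Euler; return |z - y/(y+1)| at t_end."""
    y, z = y0, z0
    for _ in range(int(t_end / h)):
        dy = -y + (y + c) * z
        dz = (y - (y + 1.0) * z) / eps   # the fast (stiff) equation
        y, z = y + h * dy, z + h * dz
    return abs(z - y / (y + 1.0))

print(qssa_error())   # of order eps: z tracks the slow manifold z = y/(y+1)
```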
Example 4.3 Consider the Lorenz system,
\[
\dot{x} = \sigma(y - x),\qquad
\dot{y} = \rho x + x - y - xz,\qquad
\dot{z} = -\beta z + xy. \tag{4.78}
\]
Let \sigma, \beta > 0. Consider \rho as a bifurcation parameter and study the bifurcations near the origin.
Solution: Obviously, the origin is a fixed point of the system. The Jacobian is,
\[
D_x f(0) = \begin{pmatrix}-\sigma & \sigma & 0\\ \rho + 1 & -1 & 0\\ 0 & 0 & -\beta\end{pmatrix}.
\]
The eigenvalues of the Jacobian are,
\[
\lambda_{1,2} = -\frac{\sigma + 1}{2} \pm \frac{1}{2}\sqrt{(\sigma + 1)^2 + 4\rho\sigma},\qquad
\lambda_3 = -\beta.
\]
For \rho = 0, the eigenvalues are,
\[
\lambda_1 = 0,\qquad \lambda_2 = -\sigma - 1,\qquad \lambda_3 = -\beta.
\]
Since \lambda_1 = 0, the fixed point is nonhyperbolic and we cannot draw any conclusion about the stability.

Let us now consider \rho as an additional dynamical variable of the system. The Lorenz equations are written,
\[
\begin{pmatrix}\dot{x}\\ \dot{y}\\ \dot{z}\end{pmatrix}
=
\begin{pmatrix}-\sigma & \sigma & 0\\ 1 & -1 & 0\\ 0 & 0 & -\beta\end{pmatrix}
\begin{pmatrix}x\\ y\\ z\end{pmatrix}
+
\begin{pmatrix}0\\ \rho x - xz\\ xy\end{pmatrix},\qquad
\dot{\rho} = 0. \tag{4.79}
\]
The next step is to put Eq. (4.79) in the standard form, Eq. (4.9). The eigenvectors corresponding to the eigenvalues \lambda_i are,
\[
u_1 = \begin{pmatrix}1\\ 1\\ 0\end{pmatrix},\qquad
u_2 = \begin{pmatrix}\sigma\\ -1\\ 0\end{pmatrix},\qquad
u_3 = \begin{pmatrix}0\\ 0\\ 1\end{pmatrix}.
\]
Thus, the matrix which transforms the Jacobian to a Jordan form is,
\[
S = \begin{pmatrix}1 & \sigma & 0\\ 1 & -1 & 0\\ 0 & 0 & 1\end{pmatrix},
\]
with inverse,
\[
S^{-1} = \frac{1}{1 + \sigma}\begin{pmatrix}1 & \sigma & 0\\ 1 & -1 & 0\\ 0 & 0 & 1 + \sigma\end{pmatrix}.
\]
Using the transformation,
\[
\begin{pmatrix}u\\ v\\ w\end{pmatrix} = S^{-1}\begin{pmatrix}x\\ y\\ z\end{pmatrix},
\]
equation (4.79) turns into the standard form,
\[
\begin{pmatrix}\dot{u}\\ \dot{v}\\ \dot{w}\end{pmatrix}
=
\begin{pmatrix}0 & 0 & 0\\ 0 & -\sigma - 1 & 0\\ 0 & 0 & -\beta\end{pmatrix}
\begin{pmatrix}u\\ v\\ w\end{pmatrix}
+
\frac{1}{1 + \sigma}
\begin{pmatrix}
\sigma\rho(u + \sigma v) - \sigma w(u + \sigma v)\\
-\rho(u + \sigma v) + w(u + \sigma v)\\
(1 + \sigma)(u + \sigma v)(u - v)
\end{pmatrix},\qquad
\dot{\rho} = 0. \tag{4.80}
\]
Equation (4.80) is now in the form of Eq. (4.9). Since we have two non-zero eigenvalues and \rho is a bifurcation parameter, the center manifold will be written,
\[
h(u,\rho) = \begin{pmatrix}h_1(u,\rho)\\ h_2(u,\rho)\end{pmatrix}
= \begin{pmatrix}a_1 u^2 + a_2 u\rho + a_3\rho^2 + \dots\\ b_1 u^2 + b_2 u\rho + b_3\rho^2 + \dots\end{pmatrix}. \tag{4.81}
\]
Substituting Eq. (4.81) and,
\[
A = 0,\qquad
B = \begin{pmatrix}-\sigma - 1 & 0\\ 0 & -\beta\end{pmatrix},
\]
\[
f(u,v,w,\rho) = \frac{1}{1 + \sigma}\bigl[\sigma\rho(u + \sigma v) - \sigma w(u + \sigma v)\bigr],
\]
\[
g(u,v,w,\rho) = \frac{1}{1 + \sigma}\begin{pmatrix}-\rho(u + \sigma v) + w(u + \sigma v)\\ (1 + \sigma)(u + \sigma v)(u - v)\end{pmatrix},
\]
into Eq. (4.10) and equating coefficients, we obtain,
\[
h(u,\rho) = \begin{pmatrix}h_1(u,\rho)\\ h_2(u,\rho)\end{pmatrix}
= \begin{pmatrix}-\frac{1}{(1 + \sigma)^2}\,u\rho + \dots\\ \frac{1}{\beta}\,u^2 + \dots\end{pmatrix}.
\]
Thus, substituting v = h_1 and w = h_2, the equation governing the dynamics on the center manifold is,
\[
\dot{u} = u\,\frac{\sigma}{1 + \sigma}\left(\rho - \frac{u^2}{\beta} + \dots\right),\qquad
\dot{\rho} = 0. \tag{4.82}
\]
By observing Eq. (4.82) we see that in the neighborhood of the origin the following picture holds:

• For \rho < 0 there is one fixed point, u = 0.

• For \rho > 0 there are three fixed points, i.e., u = 0 and u = \pm\sqrt{\beta\rho}.
Figure 4.6: Bifurcation diagram near the origin for Eq. (4.82).
The eigenvalue of the system Eq. (4.82) is \lambda = \frac{\sigma}{1 + \sigma}\left(\rho - \frac{3u^2}{\beta}\right). Thus, the fixed point u = 0 is stable for \rho < 0 and unstable for \rho > 0. The fixed points u = \pm\sqrt{\beta\rho} are always stable. The bifurcation diagram near the origin for Eq. (4.82) is shown in Fig. 4.6.
Example 4.4 Study the effect of terms of the form a\mu x on the saddle-node bifurcation.
Solution: We must study Eq. (4.14) for a_4 \neq 0. In this case we can write,
\[
\dot{x} = \mu + \beta x^2 + a\mu x. \tag{4.83}
\]
The fixed points of Eq. (4.83) are,
\[
x =
\begin{cases}
-\dfrac{a\mu}{2} \pm \dfrac{1}{2}\sqrt{a^2\mu^2 - 4\mu}, & \text{if } \beta = 1,\\[6pt]
\dfrac{a\mu}{2} \pm \dfrac{1}{2}\sqrt{a^2\mu^2 + 4\mu}, & \text{if } \beta = -1.
\end{cases} \tag{4.84}
\]
We observe that for \beta = 1 real fixed points exist only in the regions \mu \leq 0 and \mu \geq 4/a^2 (and, for \beta = -1, in \mu \geq 0 and \mu \leq -4/a^2). The eigenvalue of these fixed points is \lambda = 2\beta x + a\mu = \pm\sqrt{a^2\mu^2 - 4\beta\mu}.
Figure 4.7: Saddle-node bifurcation for Eq. (4.83), a = 1 and (a) β = −1, (b) β = 1.
Thus, under the effect of the term a\mu x we expect a saddle-node bifurcation at the origin and an additional saddle-node bifurcation at \mu = 4/a^2. Thus, locally, the saddle-node bifurcation at the origin remains the same under the influence of this term. Hence, terms of the form a\mu x can be ignored. This means we can set a_4 = 0 in Eq. (4.14) (see Fig. 4.7).
Example 4.5 Study the effect of third-order terms on the saddle-node bifurcation.
Solution: Let us explore this situation, i.e.,
\[
\dot{x} = \mu + x^2 + a x^3. \tag{4.85}
\]
In this case, the steady states of the system are given by,
\[
\mu = -x^2 - a x^3. \tag{4.86}
\]
We can see that the steady states lie on an S-type curve, Fig. 4.8. The eigenvalue of Eq. (4.85) is \lambda = x(3ax + 2); thus we expect a saddle-node bifurcation at the origin and an additional saddle-node bifurcation at (x, \mu) = \left(-\frac{2}{3a}, -\frac{4}{27a^2}\right). Thus, again, there is a saddle-node bifurcation
Figure 4.8: Saddle-node bifurcation for Eq. (4.85) for a = −0.05.
at the origin, which remains unaffected by higher order terms. Note that, globally, when a higher order term is added we observe hysteresis in the bifurcation diagram.
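The location of the second fold can be confirmed with a few lines of arithmetic (an illustrative Python check added to the notes; the value of a is arbitrary): at x = -2/(3a), the steady-state curve (4.86) gives \mu = -4/(27a^2) and the eigenvalue of Eq. (4.85) vanishes.

```python
a = -0.05
x_sn = -2.0 / (3.0 * a)               # x at the second saddle-node
mu_sn = -x_sn**2 - a * x_sn**3        # mu from the steady-state curve (4.86)

residual = mu_sn + x_sn**2 + a * x_sn**3     # right-hand side of (4.85)
eigenvalue = x_sn * (3.0 * a * x_sn + 2.0)   # lambda = x(3ax + 2)
print(mu_sn, residual, eigenvalue)    # mu_sn = -4/(27 a^2); the others ~ 0
```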
Example 4.6 Study the effect of a constant perturbation on the transcritical bifurcation.
Solution: In the general case, if we add a constant term to the transcritical bifurcation we have,
\[
\dot{x} = \epsilon + \mu x + \beta x^2. \tag{4.87}
\]
Let us consider the case \beta = -1. The steady states are given by,
\[
x = \frac{\mu}{2} \pm \frac{1}{2}\sqrt{\mu^2 + 4\epsilon},
\]
and the eigenvalues by,
\[
\lambda = -2x + \mu = \mp\sqrt{\mu^2 + 4\epsilon}.
\]
Figure 4.9: Perturbed transcritical bifurcation for β = −1 and (a) ε > 0, (b) ε < 0.
Obviously, when \epsilon > 0, the eigenvalues can never be zero, so bifurcations are completely suppressed. One branch of steady states is stable and the other is unstable for any value of \mu.

When \epsilon < 0, real steady states exist only in the regions \mu > 2\sqrt{-\epsilon} and \mu < -2\sqrt{-\epsilon}. At \mu = \pm 2\sqrt{-\epsilon} the eigenvalue becomes zero, and two saddle-node bifurcations occur. Thus, the transcritical bifurcation is completely suppressed no matter how small \epsilon is. These cases are shown in Fig. 4.9. Due to the sensitivity of the transcritical bifurcation to perturbations, it is unlikely to observe such a transition in real (physical) experiments.
Example 4.7 Study the effect of lower order perturbations on a pitchfork bifurcation.
Solution: In order to find the effect of lower order perturbations on a pitchfork bifurcation we must investigate Eq. (4.34). This is a third order equation, so here we will give a general formula for solving third order equations.

Let us start with the general third order equation,
\[
y^3 + p y^2 + q y + r = 0.
\]
Now, let,
\[
y = x - \frac{p}{3},\qquad
a = \frac{1}{3}(3q - p^2),\qquad
b = \frac{1}{27}(2p^3 - 9pq + 27r).
\]
Under this transformation we obtain,
\[
x^3 + a x + b = 0. \tag{4.88}
\]
By letting,
\[
A = \sqrt[3]{-\frac{b}{2} + \sqrt{\frac{b^2}{4} + \frac{a^3}{27}}},\qquad
B = \sqrt[3]{-\frac{b}{2} - \sqrt{\frac{b^2}{4} + \frac{a^3}{27}}},
\]
the solutions are written,
\[
x_1 = A + B,\qquad
x_2 = -\frac{A + B}{2} + \frac{A - B}{2}\sqrt{-3},\qquad
x_3 = -\frac{A + B}{2} - \frac{A - B}{2}\sqrt{-3}. \tag{4.89}
\]
Equation (4.34) is already in the form of Eq. (4.88), where the fixed points are the solutions of the equation,
\[
\beta x^3 + \mu x + \epsilon = 0.
\]
Let us consider the case \beta = 1. Thus, a = \mu and b = \epsilon in Eq. (4.88). If we plot the real solutions of Eq. (4.34) we get the bifurcation diagram of Fig. 4.10. It can be seen that the pitchfork bifurcation disappears; a saddle-node is created, as well as an additional branch of steady states.
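The solution formulas above translate directly into a small root-finder (an illustrative Python sketch added to the notes; complex cube roots are used so that all three roots of Eq. (4.88) are produced at once, and B is chosen so that AB = -a/3, as Cardano's derivation requires):

```python
import cmath
import math

def depressed_cubic_roots(a, b):
    """The three roots of x^3 + a*x + b = 0 via Eq. (4.89)."""
    d = cmath.sqrt(b * b / 4.0 + a**3 / 27.0)
    A = (-b / 2.0 + d) ** (1.0 / 3.0)
    B = -a / (3.0 * A) if A != 0 else (-b / 2.0 - d) ** (1.0 / 3.0)
    j = complex(-0.5, math.sqrt(3.0) / 2.0)    # primitive cube root of unity
    return [A + B,
            j * A + j.conjugate() * B,
            j.conjugate() * A + j * B]

# perturbed pitchfork with beta = 1: x^3 + mu*x + eps = 0, so a = mu, b = eps
for x in depressed_cubic_roots(a=-2.0, b=1.0):
    if abs(x.imag) < 1e-9:
        print(x.real)          # the real steady states for mu = -2, eps = 1
```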
Figure 4.10: Perturbed pitchfork bifurcation for Eq. (4.34) for ε = 1 and β = 1.
Example 4.8 Show that the periodic orbit generated through a Hopf bifurcation is a limit cycle (Hint: use polar coordinates).
Solution: Let us follow the hint and use the polar coordinate form of the Hopf bifurcation normal form, Eq. (4.61),
\[
\frac{dr}{dt} = \mu r + a r^3,\qquad
\frac{d\theta}{dt} = \omega + b r^2. \tag{4.90}
\]
Since the first equation in Eq. (4.90) is uncoupled from the second, let us start by solving it. Separating variables, this equation is written,
\[
\int \frac{dr}{(\mu + a r^2)r} = t + c,
\]
where c is a constant of integration. The integral can be calculated with the method of partial fractions. Thus, we write,
\[
\frac{1}{(\mu + a r^2)r} = \frac{A}{r} + \frac{Br + C}{\mu + a r^2}.
\]
Equating coefficients of equal powers of r we get A = 1/\mu, B = -a/\mu and C = 0. Hence, we arrive at the following equation,
\[
\frac{1}{\mu}\int \frac{dr}{r} - \frac{a}{\mu}\int \frac{r}{\mu + a r^2}\,dr = t + c.
\]
Now the integrals are easy to calculate. We obtain,
\[
\frac{1}{\mu}\ln r - \frac{1}{2\mu}\ln(\mu + a r^2) = t + c.
\]
In order to determine the constant of integration c, let us set r(t = 0) = r_0. Thus, we obtain,
\[
c = \frac{1}{2\mu}\ln\left(\frac{r_0^2}{\mu + a r_0^2}\right).
\]
After some arithmetic we obtain,
\[
r(t) = \left[-\frac{a}{\mu} + \left(\frac{1}{r_0^2} + \frac{a}{\mu}\right)\exp(-2\mu t)\right]^{-1/2}. \tag{4.91}
\]
If \mu > 0, then from Eq. (4.91) we get,
\[
\lim_{t\to\infty} r(t) = \sqrt{-\frac{\mu}{a}},
\]
thus a must be negative and the periodic orbit is a stable limit cycle.

If \mu < 0, then,
\[
\lim_{t\to-\infty} r(t) = \sqrt{-\frac{\mu}{a}},
\]
Figure 4.11: The stable limit cycle for the Hopf bifurcation, for a < 0 and µ > 0.
thus a must be positive and the periodic orbit is an unstable limit cycle.

For the second equation in Eq. (4.90), on the limit cycle r^2 = -\mu/a we have,
\[
\frac{d\theta}{dt} = \omega - b\frac{\mu}{a}.
\]
Thus, we obtain,
\[
\theta(t) = \left(\omega - \frac{b\mu}{a}\right)t + \theta_0. \tag{4.92}
\]
Here we must observe that d\theta/dt is never zero, since \omega is a constant independent of \mu. The stable limit cycle for the Hopf bifurcation for a < 0 and \mu > 0 is shown in Fig. 4.11.
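The closed-form solution Eq. (4.91) can be checked against a direct numerical integration (an illustrative Python sketch added to the notes; \mu, a and r_0 are arbitrary values):

```python
import math

def r_exact(t, mu, a, r0):
    """Closed-form solution Eq. (4.91) of dr/dt = mu*r + a*r^3."""
    return (-a / mu + (1.0 / r0**2 + a / mu) * math.exp(-2.0 * mu * t)) ** -0.5

mu, a, r0, h = 1.0, -1.0, 0.1, 1e-3
f = lambda r: mu * r + a * r**3
r = r0
for _ in range(3000):                       # RK4 up to t = 3
    k1 = h * f(r)
    k2 = h * f(r + 0.5 * k1)
    k3 = h * f(r + 0.5 * k2)
    k4 = h * f(r + k3)
    r += (k1 + 2 * k2 + 2 * k3 + k4) / 6
print(r, r_exact(3.0, mu, a, r0))           # the two values agree
```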
Chapter 5
Numerical Methods and Tools
5.1 Aim of this chapter
While studying nonlinear dynamical systems of the form,
\[
\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \mu),\qquad
\mathbf{x} \in \mathbb{R}^n,\ \mu \in \mathbb{R}^p, \tag{5.1}
\]
we face at least five different computational problems:
1. Calculation of steady states
2. Stability of steady states and/or limit cycles
3. Bifurcation of steady states and/or limit cycles
4. Solution of nonlinear ODEs
5. Characterization of the dynamical response
Up to this point, we considered examples where most of these computational problems could be solved analytically. Nevertheless, in most of the cases we face in practice, some or all of these problems cannot be attacked by simple analytic mathematical techniques. Hence, we have to implement numerical methods and use computer algorithms in order to study most nonlinear dynamical systems.

In the present chapter we will deal mainly with the first four computational problems, that is, the calculation, stability and bifurcation of steady states, as well as the solution of nonlinear ODEs. We will discuss only the basic numerical tools without moving to more advanced optimized algorithms. We will assume that the basics of programming are known, since we will present algorithms in Fortran 77. Hopefully, readers who are familiar with other computer languages will be able to adapt the routines for their own needs. In the final section of the chapter we will briefly present the package AUTO 97, which can be used for solving a large variety of stability and bifurcation problems.
5.2 Steady states: The Newton-Raphson method
The first problem we discuss is the calculation of steady states. As we already know, a steady state \bar{\mathbf{x}} of the system Eq. (5.1) must satisfy the equation,
\[
\mathbf{f}(\bar{\mathbf{x}}, \mu) = \mathbf{0}, \tag{5.2}
\]
for a specific value of \mu. Equation (5.2) is actually an n-dimensional system of nonlinear algebraic equations. The unknowns of this system are the n elements of the vector \bar{\mathbf{x}} = (\bar{x}_1, \bar{x}_2, \dots, \bar{x}_n)^{\mathrm{T}}. This problem can be solved with the Newton-Raphson method.

Let us assume that \mathbf{x} is a guess of the steady state (i.e., a guessed solution of Eq. (5.2)). Let us also assume that \bar{\mathbf{x}} = \mathbf{x} + \delta\mathbf{x} is the true (unknown) steady state. Let us expand Eq. (5.2) in a Taylor series around the guess (ignoring \mu for the moment),
\[
\mathbf{f}(\mathbf{x} + \delta\mathbf{x}) = \mathbf{f}(\mathbf{x}) + D_x\mathbf{f}(\mathbf{x})\,\delta\mathbf{x} + \text{h.o.t.}, \tag{5.3}
\]
where, as usual, D_x\mathbf{f}(\mathbf{x}) is the n \times n Jacobian,
\[
D_x\mathbf{f}(\mathbf{x}) \equiv J(\mathbf{x}) =
\begin{pmatrix}
\left.\frac{\partial f_1}{\partial x_1}\right|_{\mathbf{x}} & \left.\frac{\partial f_1}{\partial x_2}\right|_{\mathbf{x}} & \dots & \left.\frac{\partial f_1}{\partial x_n}\right|_{\mathbf{x}}\\
\left.\frac{\partial f_2}{\partial x_1}\right|_{\mathbf{x}} & \left.\frac{\partial f_2}{\partial x_2}\right|_{\mathbf{x}} & \dots & \left.\frac{\partial f_2}{\partial x_n}\right|_{\mathbf{x}}\\
\vdots & & & \vdots\\
\left.\frac{\partial f_n}{\partial x_1}\right|_{\mathbf{x}} & \left.\frac{\partial f_n}{\partial x_2}\right|_{\mathbf{x}} & \dots & \left.\frac{\partial f_n}{\partial x_n}\right|_{\mathbf{x}}
\end{pmatrix}, \tag{5.4}
\]
and all the derivatives are calculated at the guessed steady state.

Since \mathbf{x} + \delta\mathbf{x} is the true steady state of the system, the left-hand side of Eq. (5.3) is equal to zero. Hence, ignoring higher order terms, this equation is written,
\[
J(\mathbf{x})\,\delta\mathbf{x} = -\mathbf{f}(\mathbf{x}). \tag{5.5}
\]
Let us observe Eq. (5.5). The Jacobian is a constant matrix, since all the derivatives are calculated at the guessed steady state. The vector \mathbf{f}(\mathbf{x}) is also an n-dimensional constant vector, since its elements are calculated at \mathbf{x}. The n-dimensional vector \delta\mathbf{x} is the vector of the unknowns. Thus, Eq. (5.5) is a system of n linear algebraic equations with n unknowns. Later we will see how we can solve this system. Now, once we have solved Eq. (5.5), we form the corrected solution,
\[
\mathbf{x}_{\text{new}} = \mathbf{x} + \delta\mathbf{x}.
\]
If \mathbf{x}_{\text{new}} falls within the desired accuracy, we are finished. If not, we set \mathbf{x} = \mathbf{x}_{\text{new}} and repeat the procedure until the desired accuracy is accomplished.
In practice, it is often useful to introduce an empirical scalar factor s, i.e.,
\[
\mathbf{x}_{\text{new}} = \mathbf{x} + s\,\delta\mathbf{x}.
\]
Thus, we start with s = 1. If convergence is not obtained, we divide s by a factor of 2 and repeat until the solution has converged to the desired accuracy.
We see that by applying the Newton-Raphson method, our problemis reduced to the solution of an n-dimensional linear algebraic problemwith n unknowns, Eq. (5.5). Let us discuss several methods to solve thisproblem.
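The iteration just described is easy to sketch in a modern language (an illustrative Python version added to the notes, not the book's Fortran; for brevity the 2×2 linear system (5.5) is solved by Cramer's rule, and the test functions are arbitrary examples):

```python
def newton2(f, jac, x, tol=1e-12, max_iter=50):
    """Damped Newton-Raphson for a system of two equations."""
    for _ in range(max_iter):
        f1, f2 = f(x)
        if abs(f1) < tol and abs(f2) < tol:
            return x
        (a, b), (c, d) = jac(x)
        det = a * d - b * c
        dx1 = (-f1 * d + f2 * b) / det     # solve J*dx = -f (Cramer's rule)
        dx2 = (-f2 * a + f1 * c) / det
        s = 1.0
        while True:                        # halve s until the residual shrinks
            xn = (x[0] + s * dx1, x[1] + s * dx2)
            g1, g2 = f(xn)
            if g1 * g1 + g2 * g2 <= f1 * f1 + f2 * f2 or s < 1e-8:
                break
            s *= 0.5
        x = xn
    return x

# example: intersect the circle x^2 + y^2 = 4 with the hyperbola x*y = 1
f = lambda p: (p[0]**2 + p[1]**2 - 4.0, p[0] * p[1] - 1.0)
jac = lambda p: ((2 * p[0], 2 * p[1]), (p[1], p[0]))
print(newton2(f, jac, (2.0, 0.5)))
```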
5.2.1 Gauss elimination
The main idea behind Gauss elimination is to reduce J to an upper triangular form (all elements below the main diagonal being zero). Once this is accomplished, the solution can be found by back substitution. Let us see how this is accomplished.
As an illustrative example, let us consider a 4 \times 4 problem. Let us also denote the elements of J by a_{i,j}, the elements of \delta\mathbf{x} by x_i, and the elements of -\mathbf{f} by b_i. Thus, Eq. (5.5) is rewritten,
\[
\begin{aligned}
a_{1,1}x_1 + a_{1,2}x_2 + a_{1,3}x_3 + a_{1,4}x_4 &= b_1, &(5.6)\\
a_{2,1}x_1 + a_{2,2}x_2 + a_{2,3}x_3 + a_{2,4}x_4 &= b_2, &(5.7)\\
a_{3,1}x_1 + a_{3,2}x_2 + a_{3,3}x_3 + a_{3,4}x_4 &= b_3, &(5.8)\\
a_{4,1}x_1 + a_{4,2}x_2 + a_{4,3}x_3 + a_{4,4}x_4 &= b_4. &(5.9)
\end{aligned}
\]
Here we must observe that,
• Interchanging any two equations of this system (i.e. interchangingany two rows of J and the corresponding rows of f in Eq. (5.5)) doesnot change or scramble the solution.
• Replacing any equation by a linear combination of itself and another equation (i.e. replacing any row of J and f in Eq. (5.5) by a linear combination of itself and another row) does not change or scramble the solution.
We want to turn J into an upper triangular form. So let us start by eliminating the term a_{2,1}x_1. To do this, multiply Eq. (5.6) by a_{2,1}/a_{1,1} and subtract the resulting equation from Eq. (5.7). The result is,
\[
\begin{aligned}
a_{1,1}x_1 + a_{1,2}x_2 + a_{1,3}x_3 + a_{1,4}x_4 &= b_1,\\
a^{(2)}_{2,2}x_2 + a^{(2)}_{2,3}x_3 + a^{(2)}_{2,4}x_4 &= b^{(2)}_2,\\
a_{3,1}x_1 + a_{3,2}x_2 + a_{3,3}x_3 + a_{3,4}x_4 &= b_3,\\
a_{4,1}x_1 + a_{4,2}x_2 + a_{4,3}x_3 + a_{4,4}x_4 &= b_4,
\end{aligned}
\]
where a^{(2)}_{2,j} = a_{2,j} - \frac{a_{2,1}}{a_{1,1}}a_{1,j} and b^{(2)}_2 = b_2 - \frac{a_{2,1}}{a_{1,1}}b_1.
In order to eliminate the term a_{3,1}x_1, multiply Eq. (5.6) by a_{3,1}/a_{1,1} and subtract the resulting equation from Eq. (5.8). In the same manner, multiply Eq. (5.6) by a_{4,1}/a_{1,1} and subtract the resulting equation from Eq. (5.9). The result is,
\[
\begin{aligned}
a_{1,1}x_1 + a_{1,2}x_2 + a_{1,3}x_3 + a_{1,4}x_4 &= b_1, &(5.10)\\
a^{(2)}_{2,2}x_2 + a^{(2)}_{2,3}x_3 + a^{(2)}_{2,4}x_4 &= b^{(2)}_2, &(5.11)\\
a^{(2)}_{3,2}x_2 + a^{(2)}_{3,3}x_3 + a^{(2)}_{3,4}x_4 &= b^{(2)}_3, &(5.12)\\
a^{(2)}_{4,2}x_2 + a^{(2)}_{4,3}x_3 + a^{(2)}_{4,4}x_4 &= b^{(2)}_4, &(5.13)
\end{aligned}
\]
where a^{(2)}_{i,j} = a_{i,j} - \frac{a_{i,1}}{a_{1,1}}a_{1,j} and b^{(2)}_i = b_i - \frac{a_{i,1}}{a_{1,1}}b_1. In the above procedure we call a_{1,1} the pivot element.

At the next stage, we want to eliminate the term a^{(2)}_{3,2}x_2 in Eq. (5.12). In order to do this we use as pivot element the coefficient a^{(2)}_{2,2} and then follow the same procedure. Finally we will get,
\[
\begin{aligned}
a_{1,1}x_1 + a_{1,2}x_2 + a_{1,3}x_3 + a_{1,4}x_4 &= b_1, &(5.14)\\
a^{(2)}_{2,2}x_2 + a^{(2)}_{2,3}x_3 + a^{(2)}_{2,4}x_4 &= b^{(2)}_2, &(5.15)\\
a^{(3)}_{3,3}x_3 + a^{(3)}_{3,4}x_4 &= b^{(3)}_3, &(5.16)\\
a^{(4)}_{4,4}x_4 &= b^{(4)}_4. &(5.17)
\end{aligned}
\]
Obviously, Eqs. (5.14)-(5.17) are in upper triangular form and the system can be solved by back substitution. Thus, the solution will be,
\[
x_4 = \frac{b^{(4)}_4}{a^{(4)}_{4,4}}, \tag{5.18}
\]
\[
x_i = \frac{b^{(i)}_i - \sum_{j=i+1}^n a^{(i)}_{i,j}x_j}{a^{(i)}_{i,i}}, \tag{5.19}
\]
where i = n - 1, n - 2, \dots, 1.

At this point we can observe a potential problem. By applying the above procedure we always divide by the pivot element a_{i,i}. What will happen if a_{i,i} is zero or very close to zero? In the first case we will have a "division by zero" error and a computer overflow. In the second case the round-off errors increase dramatically. In order to avoid zero or close-to-zero pivot elements we apply a procedure called partial pivoting. Thus, before we normalize by the pivot element we determine the largest available coefficient in the same column below the pivot element. The rows can then be exchanged so that the largest element becomes the pivot element. A simple routine which does Gauss elimination with partial pivoting is given in subroutine GAUSS:
      SUBROUTINE GAUSS(Y,N,M,PAR,X)
C Solves the linear system J*X = -F at the guess Y by Gauss
C elimination with scaled partial pivoting. Uses the external
C functions DERIV (Jacobian elements) and F (right-hand sides).
      INTEGER N,M
      INTEGER O(N)
      REAL Y(N),PAR(M)
      REAL DERIV,F
      REAL Z(N,N),C(N),X(N),S(N)
C NUMBER OF EQUATIONS IS N
C Z(I,J) ARE THE COEFFICIENTS OF THE UNKNOWNS
C C(I) IS THE NON-HOMOGENEOUS TERM
      DO I = 1, N
        DO J = 1, N
          Z(I,J) = DERIV(Y,I,J,N,M,PAR)
        END DO
        C(I) = -F(Y,I,N,M,PAR)
        X(I) = 0.
      END DO
C O(I) holds the row order, S(I) the row scale factors
      DO I = 1, N
        O(I) = I
        S(I) = ABS(Z(I,1))
        DO J = 2, N
          IF (ABS(Z(I,J)).GT.S(I)) S(I) = ABS(Z(I,J))
        END DO
      END DO
C Forward elimination
      DO K = 1, N - 1
        CALL PIVOT(N,K,Z,O,S)
        DO I = K + 1, N
          FACTOR = Z(O(I),K)/Z(O(K),K)
          DO J = K + 1, N
            Z(O(I),J) = Z(O(I),J) - FACTOR*Z(O(K),J)
          END DO
          C(O(I)) = C(O(I)) - FACTOR*C(O(K))
        END DO
      END DO
C Back substitution
      X(N) = C(O(N))/Z(O(N),N)
      DO I = N - 1, 1, -1
        SUM = 0.0
        DO J = I + 1, N
          SUM = SUM + Z(O(I),J)*X(J)
        END DO
        X(I) = (C(O(I))-SUM)/Z(O(I),I)
      END DO
      RETURN
      END

      SUBROUTINE PIVOT(N,K,Z,O,S)
C Selects the scaled pivot row for column K and records the
C row interchange in the order vector O
      INTEGER N,K,P,ITEMP
      INTEGER O(N)
      REAL Z(N,N),S(N)
      P = K
      BIG = ABS(Z(O(K),K)/S(O(K)))
      DO II = K + 1, N
        DUMMY = ABS(Z(O(II),K)/S(O(II)))
        IF (DUMMY.GT.BIG) THEN
          BIG = DUMMY
          P = II
        END IF
      END DO
      ITEMP = O(P)
      O(P) = O(K)
      O(K) = ITEMP
      RETURN
      END
An alternative to Gauss elimination is the Gauss-Jordan method where, instead of transforming J to an upper triangular form, we transform it to the identity matrix; thus the back substitution step, Eqs. (5.18) and (5.19), is no longer required. In Example 5.1 we present a simple application of the Gauss elimination method.
5.2.2 LU decomposition
A more advantageous method for solving Eq. (5.5) is LU decomposition. Suppose we are able to write the Jacobian as a product of two matrices,
\[
LU = J, \tag{5.20}
\]
where L is lower triangular and U is upper triangular. If we denote by a_{i,j} the elements of J, Eq. (5.20) is written, for the 4 \times 4 case,
\[
\begin{pmatrix}
l_{1,1} & 0 & 0 & 0\\
l_{2,1} & l_{2,2} & 0 & 0\\
l_{3,1} & l_{3,2} & l_{3,3} & 0\\
l_{4,1} & l_{4,2} & l_{4,3} & l_{4,4}
\end{pmatrix}
\begin{pmatrix}
u_{1,1} & u_{1,2} & u_{1,3} & u_{1,4}\\
0 & u_{2,2} & u_{2,3} & u_{2,4}\\
0 & 0 & u_{3,3} & u_{3,4}\\
0 & 0 & 0 & u_{4,4}
\end{pmatrix}
=
\begin{pmatrix}
a_{1,1} & a_{1,2} & a_{1,3} & a_{1,4}\\
a_{2,1} & a_{2,2} & a_{2,3} & a_{2,4}\\
a_{3,1} & a_{3,2} & a_{3,3} & a_{3,4}\\
a_{4,1} & a_{4,2} & a_{4,3} & a_{4,4}
\end{pmatrix}. \tag{5.21}
\]
Combining Eq. (5.5) and Eq. (5.20) we can write,
\[
J\,\delta\mathbf{x} = (LU)\,\delta\mathbf{x} = L(U\,\delta\mathbf{x}) = -\mathbf{f}.
\]
Thus, the problem can be solved in two steps: first solving L\mathbf{y} = -\mathbf{f} to determine \mathbf{y}, and then solving U\,\delta\mathbf{x} = \mathbf{y} to determine our original unknown \delta\mathbf{x}. The advantage of this procedure is that the solution of (upper or lower) triangular systems is trivial and can be obtained by forward substitution,
\[
y_1 = \frac{-f_1}{l_{1,1}},\qquad
y_i = \frac{-f_i - \sum_{j=1}^{i-1} l_{i,j}y_j}{l_{i,i}}, \tag{5.22}
\]
for i = 2, 3, \dots, n, and, finally, by back substitution,
\[
x_n = \frac{y_n}{u_{n,n}},\qquad
x_i = \frac{y_i - \sum_{j=i+1}^{n} u_{i,j}x_j}{u_{i,i}}, \tag{5.23}
\]
for i = n - 1, n - 2, \dots, 1.

Now that we have seen the advantage of LU decomposition, let us see how this decomposition can be accomplished. Let us work on the specific example, Eq. (5.21), and generalize as we go on. Equation (5.21) is a 16-dimensional system of linear algebraic equations with 20 unknowns (the elements l_{i,j} and u_{i,j}). In general, if n is the original dimension of the system Eq. (5.5), then the dimension of Eq. (5.21) will be n^2, whereas the number of unknowns is n^2 + n. Thus, we have to solve a system where the number of unknowns is greater than the number of equations. In order to do this we implement Crout's algorithm:

• Set l_{i,i} = 1 for i = 1, 2, \dots, n.

• For each j = 1, 2, \dots, n do the following:

1. For i = 1, 2, \dots, j, solve for u_{i,j},
\[
u_{i,j} = a_{i,j} - \sum_{k=1}^{i-1} l_{i,k}u_{k,j}, \tag{5.24}
\]
where the summation is taken to be zero for i = 1.

2. For i = j + 1, j + 2, \dots, n, solve for l_{i,j},
\[
l_{i,j} = \frac{a_{i,j} - \sum_{k=1}^{j-1} l_{i,k}u_{k,j}}{u_{j,j}}. \tag{5.25}
\]

Then go to the next j. (Recall, a_{i,j} are the elements of the Jacobian.)
Once again, as we can see in Eq. (5.25), partial pivoting is necessary. We will see how it is implemented in the examples. Now, once the elements l_{i,j} and u_{i,j} are determined, the solution \delta\mathbf{x} can be found by substitution, Eqs. (5.22) and (5.23).
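Crout's algorithm is compact enough to sketch directly (an illustrative Python version added to the notes; no pivoting is included, so it assumes the pivots u_{j,j} never vanish — a production version would add the pivoting discussed above):

```python
def crout_lu(a):
    """LU factors of a square matrix with l[i][i] = 1, Eqs. (5.24)-(5.25)."""
    n = len(a)
    l = [[1.0 if i == k else 0.0 for k in range(n)] for i in range(n)]
    u = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j + 1):           # Eq. (5.24)
            u[i][j] = a[i][j] - sum(l[i][k] * u[k][j] for k in range(i))
        for i in range(j + 1, n):        # Eq. (5.25)
            l[i][j] = (a[i][j]
                       - sum(l[i][k] * u[k][j] for k in range(j))) / u[j][j]
    return l, u

def lu_solve(l, u, b):
    """Forward substitution, Eq. (5.22), then back substitution, Eq. (5.23)."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(l[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(u[i][j] * x[j] for j in range(i + 1, n))) / u[i][i]
    return x

l, u = crout_lu([[4.0, 3.0], [6.0, 3.0]])
print(lu_solve(l, u, [10.0, 12.0]))      # the solution of a*x = b
```

Note that since l_{i,i} = 1 here, the division in Eq. (5.22) is omitted.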
5.3 Use of libraries
Most of the algorithms dedicated to the solution of a variety of linear algebraic problems have been widely tested and can be found in libraries containing the appropriate subroutines.
One of the most widely used libraries for the solution of linear algebraic problems is the LAPACK library (Linear Algebra Package), which actually incorporates the functionality of other famous packages like LINPACK and EISPACK. In many implementations LAPACK is included as part of even bigger packages, for example the CXML package of Digital.
When someone uses libraries like LAPACK, the solver of the linear algebraic system is called like a simple subroutine. Thus, in our case, we need only write a simple "driver" module which performs the Newton-Raphson iterations and supplies the Jacobian and the right-hand side of the linear system. An application of this procedure for LU decomposition is given in Example 5.2.
5.4 Integration of ODEs: Runge-Kutta method
The most simplified scenario for the integration of ODEs is the Eulermethod. According to this method, the original dynamical system, Eq. (5.1)is written,
(x_{i+1} - x_i)/δt ≈ f(x_i),

that is,

x_{i+1} = x_i + δt f(x_i). (5.26)
In Eq. (5.26), h = δt is the integration step (i.e. the time increment) and x_{i+1} the advanced solution. The Euler method is not recommended for practical use because it is neither very accurate nor very stable.
The Euler method can be improved if we use an additional (trial) step in the midpoint of the integration interval. Then, we use this trial value to
compute the “real” step across the whole interval. That is,
k_1 = h f(x_i),
k_2 = h f(x_i + 0.5 k_1),
x_{i+1} = x_i + k_2. (5.27)
The scheme represented by Eq. (5.27) is called the second order Runge-Kutta method.
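The gain from the midpoint (trial) step is easy to see on a test problem. The Python sketch below (an illustration alongside the Fortran used in the examples) integrates x' = -x, whose exact solution is e^{-t}, with both Eq. (5.26) and Eq. (5.27); the step size and interval are arbitrary choices.

```python
import math

def euler_step(f, x, h):
    return x + h * f(x)                  # Eq. (5.26)

def rk2_step(f, x, h):
    k1 = h * f(x)
    k2 = h * f(x + 0.5 * k1)             # trial value at the midpoint
    return x + k2                        # Eq. (5.27)

f = lambda x: -x
h, n = 0.1, 10                           # integrate x' = -x up to t = 1
xe = xr = 1.0
for _ in range(n):
    xe = euler_step(f, xe, h)
    xr = rk2_step(f, xr, h)
exact = math.exp(-1.0)
# The midpoint scheme lands much closer to exp(-1) than Euler does.
```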
The second order Runge-Kutta method can be improved even further. Thus, the fourth order Runge-Kutta method is written,
k_1 = h f(x_i),
k_2 = h f(x_i + 0.5 k_1),
k_3 = h f(x_i + 0.5 k_2),
k_4 = h f(x_i + k_3),
x_{i+1} = x_i + (1/6)(k_1 + 2 k_2 + 2 k_3 + k_4). (5.28)
The fourth order Runge-Kutta method is implemented in the subroutine RKK4 given below:
SUBROUTINE RKK4(Y,N,TSTEP,PAR)
C The fourth order Runge-Kutta subroutine
C The updated solution is stored in Y(1:n)
C Uses YPRIM
REAL Y1(1:10),Y2(1:10),Y3(1:10),Y4(1:10)
REAL YY1(1:10),YY2(1:10),YY3(1:10),Y(1:10)
REAL YPRIM, TSTEP,PAR(1:10)
INTEGER K,N
DO K = 1, N
Y1(K) = TSTEP*YPRIM(Y,K,PAR)
END DO
DO K = 1, N
YY1(K) = Y(K)+Y1(K)/2.0
END DO
DO K = 1, N
Y2(K) = TSTEP*YPRIM(YY1,K,PAR)
END DO
DO K = 1, N
YY2(K) = Y(K)+Y2(K)/2.0
END DO
DO K = 1, N
Y3(K) = TSTEP*YPRIM(YY2,K,PAR)
END DO
DO K = 1, N
YY3(K) = Y(K)+Y3(K)
END DO
DO K = 1, N
Y4(K) = TSTEP*YPRIM(YY3,K,PAR)
END DO
DO K = 1, N
Y(K) = Y(K)+(Y1(K)+2.0*Y2(K)+2.0*Y3(K)+Y4(K))/6.0
END DO
RETURN
END
We will see how to implement the fourth order Runge-Kutta method in Example 5.4.
5.5 The AUTO 97 package
The ultimate package for performing bifurcation analysis of nonlinear problems is AUTO 97. The package comes with a detailed manual and examples, is freely distributed from the site http://indy.cs.concordia.ca/auto/, and can be used in a variety of Unix environments. It comes with a graphical user interface (GUI) but it can also be used in command mode. A simple application of the AUTO 97 package is given in Example 5.3.
Another variant of AUTO 97 is the package XPP-Aut, which can be found at http://www.math.pitt.edu/~bard/xpp/xpp.html.
5.6 Examples
Example 5.1 Consider a system where species are produced electrochemically at an anode and move in the electrolytic solution due to diffusion and migration. Assume that all current in the solution is carried by these species and that changes occur only normal to the electrode surface. Finally, assume the Nernst model for the diffusion layer. This system can be described by the following set of ODEs,
ε\dot{u} = σ(v - u) - f(x, u),
\dot{x} = 1 - x - σ(v - u) + g(x, u), (5.29)
where u is the dimensionless surface potential and x the dimensionlesssurface concentration. Assume that the total faradaic current is given by acubic equation,
f(x, u) = x(a_1 u + a_2 u^2 + a_3 u^3),
and that g(x, u) = αf(x, u). Consider v as the bifurcation parameter in the region 25 ≤ v ≤ 30 and find the steady states of the system using Newton-Raphson and Gauss elimination. Use the following parameter values: σ = 0.1, ε = 0.001, α = 0.1, a_1 = 1.125, a_2 = -0.075, a_3 = 0.00125.
Solution: In order to find the steady states of the problem we must first write a simple module with some general definitions and the main loop of the Newton-Raphson method:
PROGRAM MAIN
INTEGER N,M,K,KMAX,Q,I
REAL Y(2), PAR(7),X(2)
REAL V,LMT,VST,VED,SUM,NORM,VSTEP,DUMMY
C Number of equations
N=2
C Number of parameters
M=7
C Parameter values
PAR(1)=1.125
PAR(2)=-0.075
PAR(3)=0.00125
PAR(4)=0.1
PAR(5)=0.1
PAR(7)=0.01
C Initial guess
Y(1)=16.40838838
Y(2)=0.22675495
C Limits and step size
VST=25.
VED=30.
VSTEP=0.005
C Maximum number of N-R iterations
KMAX=100
C Desired accuracy
LMT=1.E-4
SUM=0.
OPEN(1,FILE=’results.dat’,STATUS=’UNKNOWN’)
C Q=1: Forward scan
C Q=2: Backward scan
DO Q=1,2
DO V=VST, VED,VSTEP
PAR(6)=V
C The N-R iteration loop
10 DO K=1,KMAX
CALL GAUSS(Y,N,M,PAR,X)
DO I=1,N
Y(I)=Y(I)+X(I)
SUM=SUM+X(I)**2
END DO
NORM=SQRT(SUM)
SUM=0.
C Check the norm
IF (NORM.LE.LMT) EXIT
END DO
IF (NORM.GT.LMT) THEN
PAUSE ’NO CONVERGENCE OR JUMP POINT. INCREASE KMAX’
GOTO 10
ELSE
WRITE(1,100) V,Y(1),Y(2)
END IF
END DO
DUMMY=VST
VST=VED
VED=DUMMY
VSTEP=-VSTEP
END DO
CLOSE(1)
100 FORMAT (3F15.8)
STOP
END
We must also introduce the right-hand vector function for Eq. (5.5). Here is the function declaration:
REAL FUNCTION F(Y,I,N,M,PAR)
INTEGER I,M,N
REAL Y(N),PAR(M)
IF(I.EQ.1) F=PAR(5)*(PAR(6)-Y(1))-
$ Y(2)*(PAR(1)*Y(1)+PAR(2)*Y(1)**2+PAR(3)*Y(1)**3)
IF(I.EQ.2) F=1.-Y(2)-PAR(5)*(PAR(6)-Y(1))+
$ PAR(4)*Y(2)*(PAR(1)*Y(1)+PAR(2)*Y(1)**2+PAR(3)*Y(1)**3)
RETURN
END
Also, we must introduce the elements of the Jacobian for Eq. (5.5). This is done in the following function declaration:
REAL FUNCTION DERIV(Y,I,J,N,M,PAR)
INTEGER I,J,N,M
REAL Y(N),PAR(M)
IF (I.EQ.1.AND.J.EQ.1) DERIV=-PAR(5)-Y(2)*(PAR(1)
$ +2.*PAR(2)*Y(1)+3.*PAR(3)*Y(1)**2)
IF (I.EQ.1.AND.J.EQ.2) DERIV=-(PAR(1)*Y(1)+PAR(2)*Y(1)**2
$ +PAR(3)*Y(1)**3)
IF (I.EQ.2.AND.J.EQ.1) DERIV=PAR(5)+PAR(4)*(Y(2)*(PAR(1)
$ +2.*PAR(2)*Y(1)+3.*PAR(3)*Y(1)**2))
IF (I.EQ.2.AND.J.EQ.2) DERIV=-1. + PAR(4)*(PAR(1)*Y(1)+
$ PAR(2)*Y(1)**2+PAR(3)*Y(1)**3)
RETURN
END
Compiling the above routines together with GAUSS and executing, we obtain the result shown in Fig. 5.1.
Figure 5.1: The steady states of Eq. (5.29) as a function of v.
We observe from Fig. 5.1 that this simple application of Gauss elimination can determine the steady states but cannot "turn" in order to follow the branch of steady states in the hysteresis region. Modifications of the algorithm would make it possible to locate this branch also. Additionally, at the jump points one might expect the occurrence of saddle-node bifurcations. This can be verified easily for this two-dimensional problem since the Jacobian is calculated at each point of the bifurcation curve and thus the calculation of the eigenvalues is rather simple.
Example 5.2 Solve the problem stated in Example 5.1 with LU decomposition by using the LAPACK library. (Hint: use the routine SGESV).
Solution: The problem of Example 5.1 can be solved easily by using the LAPACK library. The routines F and DERIV remain the same. The main module is modified as follows:
PROGRAM MAINLIB
INTEGER N,M,K,KMAX,LDA,LDB,INFO,NRHS,IPIV(2),Q
REAL DERIV,F
REAL A(100,2),B(100,2)
REAL Y(2), PAR(7),X(2)
REAL V,LMT,VST,VED,SUM,NORM,VSTEP,DUMMY
C Number of equations
N=2
C Number of parameters
M=7
C Parameter values
PAR(1)=1.125
PAR(2)=-0.075
PAR(3)=0.00125
PAR(4)=0.1
PAR(5)=0.1
PAR(7)=0.01
C Initial guess
Y(1)=16.40838838
Y(2)=0.22675495
C Limits and step size
VST=25.
VED=30.
VSTEP=0.005
C Maximum number of N-R iterations
KMAX=100
C Desired accuracy
LMT=1.E-4
SUM=0.
NRHS=1
LDA=100
LDB=100
OPEN(1,FILE=’results-lib.dat’,STATUS=’UNKNOWN’)
C Q=1: Forward scan
C Q=2: Backward scan
DO Q=1,2
DO V=VST, VED,VSTEP
PAR(6)=V
C The N-R iteration loop
10 DO K=1,KMAX
DO I = 1, N
DO J = 1, N
A(I,J) = DERIV(Y,I,J,N,M,PAR)
END DO
B(I,1) = -F(Y,I,N,M,PAR)
END DO
C Call the LU decomposition routine
CALL SGESV(N,NRHS,A,LDA,IPIV,B,LDB,INFO )
DO I=1,N
Y(I)=Y(I)+B(I,1)
SUM=SUM+B(I,1)**2
END DO
NORM=SQRT(SUM)
SUM=0.
C Check the norm
IF (NORM.LE.LMT) EXIT
END DO
IF (NORM.GT.LMT) THEN
PAUSE ’NO CONVERGENCE OR JUMP POINT. INCREASE KMAX’
GOTO 10
ELSE
WRITE(1,100) V,Y(1),Y(2)
END IF
END DO
DUMMY=VST
VST=VED
VED=DUMMY
VSTEP=-VSTEP
END DO
CLOSE(1)
100 FORMAT (3F15.8)
STOP
END
In order to include the library in our executable, the compilation command might be something like the following (this is from a Tru64 Unix system):
f77 main-lib.f jacob.f rhp.f -ldxml
where jacob.f and rhp.f contain the routines DERIV and F, respectively. The result is identical to the one presented in Fig. 5.1.
Example 5.3 Solve the complete bifurcation problem stated in Example 5.1 with the package AUTO 97.
Solution: In order to use AUTO 97 for the equations of Example 5.1 we must first
supply the derivatives. This is done in subroutine FUNC:
C----------------------------------------------------------------------
C----------------------------------------------------------------------
C aut.f : Model AUTO-equations file
C----------------------------------------------------------------------
C----------------------------------------------------------------------
C
SUBROUTINE FUNC(NDIM,U,ICP,PAR,IJAC,F,DFDU,DFDP)
C ---------- ----
C
C Evaluates the algebraic equations or ODE right hand side
C
C Input arguments :
C NDIM : Dimension of the algebraic or ODE system
C U : State variables
C ICP : Array indicating the free parameter(s)
C PAR : Equation parameters
C
C Values to be returned :
C F : Equation or ODE right hand side values
C
C Normally unused Jacobian arguments : IJAC, DFDU, DFDP (see manual)
C
IMPLICIT DOUBLE PRECISION (A-H,O-Z)
DIMENSION U(NDIM), PAR(*), F(NDIM), ICP(*)
P=U(1)
C=U(2)
S=0.1
A=0.1
EPSI=0.001
V=PAR(1)
A1=1.125
A2=-0.075
A3=0.00125
C
F(1)= (S*(V-P)-C*(A1*P+A2*P**2+A3*P**3))/EPSI
F(2)= 1.-C-S*(V-P)+A*C*(A1*P+A2*P**2+A3*P**3)
C
RETURN
END
The definition of the parameters and initial condition is done in subroutine STPNT as follows:
C----------------------------------------------------------------------
C----------------------------------------------------------------------
C
SUBROUTINE STPNT(NDIM,U,PAR)
C ---------- -----
C
C Input arguments :
C NDIM : Dimension of the algebraic or ODE system
C
C Values to be returned :
C U : A starting solution vector
C PAR : The corresponding equation-parameter values
C
C Note : For time- or space-dependent solutions this subroutine has
C arguments (NDIM,U,PAR,T), where the scalar input parameter T
C contains the varying time or space variable value.
IMPLICIT DOUBLE PRECISION (A-H,O-Z)
DIMENSION U(NDIM), PAR(*)
C
C Initialize the equation parameters
PAR(1)= 25.
C PAR(2)= ....
C
C Initialize the solution
U(1)= 16.40838838
U(2)= 0.22675495
C
RETURN
END
Finally, we must supply the computation parameters. This is done in the file r.*,
2 1 0 1 NDIM,IPS,IRS,ILP
2 1 11 NICP,(ICP(I),I=1,NICP)
15 4 3 1 1 1 2 1 NTST,NCOL,IAD,ISP,ISW,IPLT,NBC,NINT
350 25 29.7 -1e+10 1e+10 NMX,RL0,RL1,A0,A1
50 5 3 8 5 3 0 NPR,MXBF,IID,ITMX,ITNW,NWTN,JAC
1e-06 1e-06 0.0001 EPSL,EPSU,EPSS
0.02 0.001 0.05 1 DS,DSMIN,DSMAX,IADS
1 NTHL,((I,THL(I)),I=1,NTHL)
11 0
0 NTHU,((I,THU(I)),I=1,NTHU)
0 NUZR,((I,UZR(I)),I=1,NUZR)
The output of this calculation is written to the files q.*, p.* and d.*. There we can find all information concerning the stability of the steady states. An excerpt is given below:
BR PT TY LAB PAR(1) U(0) U(1) U(2)
1 1 EP 1 2.50000E+01 1.64084E+01 1.64084E+01 2.26755E-01
1 50 2 2.65099E+01 1.83117E+01 1.83117E+01 2.62166E-01
1 62 HB 3 2.68402E+01 1.87596E+01 1.87596E+01 2.72740E-01
1 100 4 2.79031E+01 2.03333E+01 2.03333E+01 3.18720E-01
1 150 5 2.90471E+01 2.25504E+01 2.25504E+01 4.15298E-01
1 200 6 2.95863E+01 2.49728E+01 2.49728E+01 5.84787E-01
1 205 LP 7 2.95901E+01 2.51930E+01 2.51930E+01 6.04266E-01
1 250 8 2.93142E+01 2.74114E+01 2.74114E+01 8.28747E-01
1 267 LP 9 2.92355E+01 2.82330E+01 2.82330E+01 9.09774E-01
1 271 HB 10 2.92414E+01 2.84215E+01 2.84215E+01 9.26213E-01
1 298 EP 11 2.97065E+01 2.96651E+01 2.96651E+01 9.96271E-01
Total Time 0.192E+00
From this listing we observe that we have two Hopf bifurcations (HB) and two saddle-node bifurcations (LP). If we perform a second run using the following parameters,
2 2 3 1 NDIM,IPS,IRS,ILP
2 1 11 NICP,(ICP(I),I=1,NICP)
15 4 3 1 1 1 2 1 NTST,NCOL,IAD,ISP,ISW,IPLT,NBC,NINT
350 25 29.7 -1e+10 1e+10 NMX,RL0,RL1,A0,A1
50 5 3 8 5 3 0 NPR,MXBF,IID,ITMX,ITNW,NWTN,JAC
1e-06 1e-06 0.0001 EPSL,EPSU,EPSS
0.02 0.001 0.05 1 DS,DSMIN,DSMAX,IADS
1 NTHL,((I,THL(I)),I=1,NTHL)
11 0
0 NTHU,((I,THU(I)),I=1,NTHU)
0 NUZR,((I,UZR(I)),I=1,NUZR)
we can get the locus of periodic orbits in our bifurcation diagram. The graphic output of AUTO 97 is shown in Fig. 5.2. Here we can clearly see the occurrence of a Hopf bifurcation at point 3 and the occurrence of the two saddle-node bifurcations at points 7 and 9. Also, the amplitude of the periodic orbits is shown with black circles. By inspecting the information in the file d.* we see that the periodic orbits are stable. Also, at point 7 an unstable node turns into a saddle point and at point 9 the saddle point turns into an unstable node. An additional Hopf bifurcation takes place at point 10 (which we do not explore in this example).
Figure 5.2: The bifurcation diagram of Eq. (5.29) with v as the bifurcation parameter.
Example 5.4 Write a program to solve the Lorenz equations, Eq. (4.78), numerically based on the fourth order Runge-Kutta method. (Note: set ρ = ρ − 1 to get the original Lorenz equations).
Solution: In order to write a program to integrate a system of ODEs (in this specific case the Lorenz equations) we need to write down:
• A module with the main definitions
• A subroutine with the Runge-Kutta method
• A subroutine or function to declare the ODEs
Here is an example of a possible main module:
PROGRAM LORENZ
C Integration of the Lorenz equations with the
C fourth order Runge-Kutta. Y(1:n) are the
C dynamical variables. TSTEP is the integration
C step. TT is the ‘‘real time’’. The results
C are stored in the array YSOL(m,n) and can
C be written in a file.
C Uses RKK4
REAL Y(1:10),PAR(1:10), YSOL(0:20000,1:10)
REAL TSTEP,TT
INTEGER N,TMAX,K,I
C N: NUMBER OF EQUATIONS
N=3
C Initial conditions
YSOL(0,1)=0.1
YSOL(0,2)=0.1
YSOL(0,3)=0.1
Y(1)=YSOL(0,1)
Y(2)=YSOL(0,2)
Y(3)=YSOL(0,3)
C Model parameters
C Sigma
PAR(1)=10.
C Rho
PAR(2)=28.
C Beta
PAR(3)=8./3.
C Computational parameters
TSTEP = 0.01
TMAX = 10000
DO I = 1,TMAX
C Call to the Runge-Kutta subroutine
CALL RKK4(Y,N,TSTEP,PAR)
DO K = 1, N
YSOL(I,K) = Y(K)
END DO
END DO
C Write the results into a file
OPEN(UNIT=1,FILE=’lorenz.dat’,STATUS=’UNKNOWN’)
TT=0.
DO I=0,TMAX
WRITE(1,10) TT,YSOL(I,1),YSOL(I,2),YSOL(I,3)
TT=TT+TSTEP
END DO
CLOSE (1)
10 FORMAT (4F15.8)
STOP
END
Finally, we must define the ODEs. This is done in the function YPRIM:
REAL FUNCTION YPRIM(Y,K,PAR)
C The Lorenz equations. PAR(1) is sigma,
C PAR(2) is rho and PAR(3) is beta.
REAL Y(1:10)
REAL PAR(1:10)
INTEGER K
IF (K.EQ.1) YPRIM=PAR(1)*(Y(2)-Y(1))
IF (K.EQ.2) YPRIM=PAR(2)*Y(1)-Y(2)-Y(1)*Y(3)
IF (K.EQ.3) YPRIM=-PAR(3)*Y(3)+Y(1)*Y(2)
RETURN
END
An x−y projection of the calculation results is shown in Fig. 5.3.
Figure 5.3: Solution of the Lorenz equations with a fourth order Runge-Kutta method for σ = 10, ρ = 28 and β = 8/3.
Chapter 6
Chaos through examples
6.1 Introduction
Up to this point we have investigated the stability and response of rather simple nonlinear dynamical systems. In spite of their simplicity, we observed that a variety of multiple steady states and bifurcations can occur, resulting in different configurations of the trajectories in the phase space.
One of the most exciting aspects of nonlinear response is chaotic response and the existence of the chaotic or strange attractor in the phase space. Due to the mathematical complexity of the analysis of chaotic response, in this section we will deal only with several examples exhibiting chaotic behavior and we will not insist on mathematical details.
Even the definition of the chaotic attractor is rather complicated. Since a characteristic feature of chaotic response is the sensitive dependence on initial conditions, we will first give the following definition:
Definition 6.1 (Sensitive dependence on initial conditions) Consider a dy-namical system,
\dot{x} = f(x),
and denote the flow generated by this system by ϕ(t, x). Assume also that there is a subset Λ of the phase space where ϕ(t, Λ) ⊂ Λ for all t. The flow ϕ(t, x) is said to have sensitive dependence on initial conditions on Λ if there exists ε > 0 such that, for any x ∈ Λ and any neighborhood of x, there exists y in this neighborhood such that |ϕ(t, x) − ϕ(t, y)| > ε for some t > 0.
In other words, for any point x ∈ Λ there is (at least) one point arbitrarily close that diverges from x. Now, if an attractor is chaotic then its flow ϕ(t, x) has sensitive dependence on initial conditions on Λ. Here we must note that not all systems exhibiting sensitive dependence on initial conditions are chaotic.
6.2 The logistic map
Even though up to this moment we have studied dynamical systems described by ODEs, here it might be useful to introduce a different type of dynamical system, described by discrete maps of the form,
xi+1 = f(xi),
or, in general (the map can be non-invertible),
x 7→ f(x).
In the case of maps, the steady state is a point satisfying the equation,
x = f(x),
and the steady state will be stable (unstable) if the eigenvalues of the linearized system have modulus less (greater) than 1.
Let us first see how a discrete map can originate from ordinary differential equations. Let us consider a number of species whose population is governed by the following law,

dy/dt = Ay - By^2, (6.1)

where A and B are positive constants. The meaning of Eq. (6.1) is that the species can increase exponentially as their population grows, whereas the population decreases (due to the factor -By^2) because of, let us say, a limited food supply; no predator exists in this system.
Equation (6.1) can be written as a Riccati equation of the form,
dy/dt = Ay(1 - y/K),
where K = A/B. The solution of this equation is,
y(t) = K/(1 + Ce^{-At}).
Thus, we see that the population finally arrives at a saturation point where y_max = K. Setting x = y/y_max, Eq. (6.1) becomes,
dx/dt = Ax(1 - x), (6.2)
where 0 ≤ x ≤ 1. Let us now write Eq. (6.2) in discrete form,
(x_{n+1} - x_n)/h = A x_n(1 - x_n).
After rearrangement, Eq. (6.2) is approximated by,
x_{n+1} = a x_n(1 - x_n), (6.3)
or, in general,
x 7→ ax(1 - x), x ∈ [0, 1], a > 0. (6.4)
Equation (6.4) is referred to as the logistic map. Let us now investigate the logistic map. The steady states of Eq. (6.4) are,

x = 0, (6.5)
x = 1 - 1/a. (6.6)
For a ≤ 1 the second steady state, Eq. (6.6), does not exist since x ∈ [0, 1]; thus, we have one steady state, Eq. (6.5), which is stable since the modulus of the eigenvalue λ = a - 2ax (evaluated at x = 0) is smaller than 1.
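The stability criterion |λ| < 1 is easy to check numerically. A small Python sketch (names and tolerances are illustrative choices), using a centered finite difference to approximate the eigenvalue f'(x*) of a one-dimensional map:

```python
# A fixed point x* of a one-dimensional map x -> f(x) is stable when
# |f'(x*)| < 1; here f' is approximated by a centered finite difference.
def is_stable_fixed_point(f, xstar, eps=1e-6):
    deriv = (f(xstar + eps) - f(xstar - eps)) / (2.0 * eps)
    return abs(deriv) < 1.0

# For the logistic map with a = 2.5 the fixed point x* = 1 - 1/a = 0.6
# has eigenvalue a - 2*a*x* = -0.5, so it is stable.
f = lambda x: 2.5 * x * (1.0 - x)
stable = is_stable_fixed_point(f, 0.6)   # True
```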
For a > 1 the steady state x = 0 becomes unstable (the corresponding eigenvalue has modulus greater than 1). What kind of bifurcation do we have at a = 1? Setting f(x, a) = ax(1 - x) we get,
∂f/∂a |_{(0,1)} = 0,
∂^2 f/∂x^2 |_{(0,1)} = -2 ≠ 0,
∂^2 f/∂x∂a |_{(0,1)} = 1 ≠ 0.
Thus, this bifurcation is a transcritical bifurcation for the logistic map (seethe criteria for a transcritical bifurcation Eq. (4.30)).
At a = 1 the second steady state, x = 1 - 1/a, emerges. The corresponding eigenvalue is λ = 2 - a, so this steady state is stable for 1 < a < 3. What happens if we further increase a? At the critical value a = 3 the eigenvalue of the steady state Eq. (6.6) is λ = -1. What kind of bifurcation is this? Let us take the second iterate of the logistic map,
x 7→ a[ax(1 − x)][1 − ax(1 − x)]. (6.7)
Obviously the point x = 1 - 1/a is a fixed point for the second iterate of the logistic map, Eq. (6.7), at a = 3. The eigenvalue for Eq. (6.7) is λ = 1. Thus, the second iterate of the logistic equation exhibits a bifurcation at a = 3 for the fixed point x = 1 - 1/a. Now, it is easy to find the type of bifurcation since the eigenvalue is λ = 1. Thus, for Eq. (6.7) we have, by setting g(x, a) = a[ax(1 - x)][1 - ax(1 - x)],
∂g/∂a |_{(x,a)} = 0,
∂^2 g/∂x^2 |_{(x,a)} = 0,
∂^2 g/∂x∂a |_{(x,a)} = 2 ≠ 0,
∂^3 g/∂x^3 |_{(x,a)} = -108 ≠ 0,

where all derivatives are evaluated at the bifurcation point (x, a) = (2/3, 3).
Thus, the second iterate of the logistic map, Eq. (6.7), undergoes a pitchfork bifurcation at a = 3 (see the criteria for a pitchfork bifurcation, Eq. (4.37)). Hence, we expect the birth of two additional steady states at this bifurcation point which are steady states of the second iterate but not steady states of the first iterate (we already know all fixed points of Eq. (6.4)). Since the new steady states are steady states of the second iterate, they must be period-two fixed points of the first iterate. Hence, it is said that the logistic map, Eq. (6.4), undergoes a period-doubling bifurcation at the point (x, a) = (1 - 1/a, 3).

In Fig. 6.1 we present a partial bifurcation diagram for the logistic map.
We can see that for a < 1 only the steady state x = 0 exists and it is stable. At a = 1 we have a transcritical bifurcation and the emergence of the second steady state x = 1 - 1/a. This branch is stable whereas the x = 0 steady state becomes unstable. Note here that due to the assumptions of
Figure 6.1: Partial bifurcation diagram for the logistic map. At a = 1 the system exhibits a transcritical bifurcation (•). At a = 3 the system exhibits a period-doubling bifurcation (◦). An additional period-doubling bifurcation (◦) is observed at a = 3.44949. Solid and dashed lines indicate stable and unstable steady states, respectively.
the problem the branch x = 1 - 1/a for a < 1 does not actually exist. At a = 3 we observe an additional bifurcation for the steady state x = 1 - 1/a. This is a pitchfork bifurcation for the second iterate of the logistic map, i.e. a period-doubling bifurcation of Eq. (6.4). Two new steady states are born which are stable. The steady state x = 1 - 1/a becomes unstable at this point. Increasing a even further, we observe additional period-doublings for the new steady states (e.g. at a = 3.44949). Increasing the value of a in the region 3 < a < a_∞, where a_∞ = 3.569946 . . ., a sequence of period-doubling bifurcations occurs at critical points a_1, a_2, . . .. In the region a_∞ < a < 4 the logistic map is chaotic. Some examples of the period-doubling route to chaos for the logistic map are shown in Fig. 6.2.
An additional interesting fact of the logistic map and the period-doublingroute to chaos is that the critical points where period-doubling bifurcations
Figure 6.2: The logistic map for (a) a = 0.5, a zero steady state, (b) a = 2, period one, (c) a = 3.1, period two, (d) a = 3.46, period four and (e) a = 3.9, chaos.
occur have a simple relation,

a_n = a_∞ - c q^{-n}, (6.8)

where c = 2.6327 . . . and q = 4.669202 . . .. The number q is called the Feigenbaum number and arises in most dynamical systems where period-doubling takes place.
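The cascade can also be observed directly by iterating Eq. (6.4) past a transient and detecting the period of the attractor. A Python sketch (the transient length, period cap, and tolerance are ad hoc illustrative choices):

```python
# Detect the period of the logistic-map attractor at parameter a by
# discarding a transient and then looking for a recurrence of the state.
def attractor_period(a, n_transient=2000, max_period=64, tol=1e-6):
    x = 0.5
    for _ in range(n_transient):
        x = a * x * (1.0 - x)
    x0 = x
    for p in range(1, max_period + 1):
        x = a * x * (1.0 - x)
        if abs(x - x0) < tol:
            return p
    return None     # no short period found (e.g. the chaotic regime)

# Periods 1, 2 and 4 on either side of the bifurcations at a = 3
# and a = 3.44949...
periods = [attractor_period(a) for a in (2.8, 3.2, 3.5)]   # [1, 2, 4]
```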
6.3 More fun with maps
Just for fun, in this section we will see another example of a discrete map which shows a peculiar dynamical response; it is Arnold's cat map.
Arnold's cat map is given by the following equation,

\begin{pmatrix} x \\ y \end{pmatrix} \mapsto \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}, (6.9)
where both x and y are taken mod 1. This map is linear and, on a discrete grid, is actually periodic after a given number of iterations. But since its action is to stretch and fold the variables within the unit square, it is closely related to chaotic systems.
The response of Arnold's cat map can be easily visualized if we consider the action of the matrix of Eq. (6.9) on the coordinates of an image composed of n × n pixels. Within the first few iterations, the original image is completely distorted and nearby points are well separated. After a specific number of iterations, as if by magic, the original figure is restored.
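The number of iterations needed for the image to be restored is just the order of the matrix of Eq. (6.9) modulo n. A brute-force Python sketch (it relies on the known result that this period never exceeds 3n):

```python
# Smallest k with M^k = identity mod n, for the cat-map matrix
# M = [[1, 1], [1, 2]]; this is the recurrence time of an n-by-n image.
def cat_map_period(n):
    a, b, c, d = 1, 0, 0, 1            # running power of M, reduced mod n
    for k in range(1, 3 * n + 1):      # the period is known to be <= 3n
        a, b, c, d = (a + c) % n, (b + d) % n, (a + 2 * c) % n, (b + 2 * d) % n
        if (a, b, c, d) == (1, 0, 0, 1):
            return k
    return None
```

For example, cat_map_period(2) returns 3: every pixel of a 2 × 2 image is back in place after three iterations.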
In Fig. 6.3 we show the effect of this map on a cat image, as it was originally applied to a similar image...
Of course, there is nothing special about the cat image. Arnold's cat map can be applied to any n × n region and the result will be similar. In Fig. 6.4 we show the effect of Eq. (6.9) on an unlucky volunteer.
Figure 6.3: The Arnold’s cat map for a 32 × 32 cat at steps i = 0, 1, 2, 3,14, 21 and 22.
Figure 6.4: The Arnold’s cat map for a 128× 128 volunteer at steps i = 0,1, 3, 48, 95 and 96.
Figure 6.5: Solution of the Rössler equations for (a) c = 2, period one, (b) c = 3.5, period two, (c) c = 4, period four and (d) c = 5.7, chaos.
6.4 The Rössler equations
The Rössler equations represent an unrealistic chemical scenario with the following kinetic equations,
\dot{x} = -(y + z),
\dot{y} = x + ay,
\dot{z} = b + xz - cz. (6.10)
Even though this three-dimensional dynamical system has only a single simple quadratic nonlinearity (the term xz), it exhibits chaotic response. The route to the chaotic attractor is through period doubling. An example of the route to chaos for the Rössler equations is shown in Fig. 6.5 for a = b = 0.2.
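Trajectories like those in Fig. 6.5 can be reproduced with the fourth order Runge-Kutta scheme of Eq. (5.28). A self-contained Python sketch for the chaotic case a = b = 0.2, c = 5.7 (step size, duration and initial condition are illustrative choices):

```python
# The Rossler right-hand side, Eq. (6.10).
def rossler(v, a=0.2, b=0.2, c=5.7):
    x, y, z = v
    return [-(y + z), x + a * y, b + x * z - c * z]

# One fourth order Runge-Kutta step, Eq. (5.28), for a vector field f.
def rk4_step(f, v, h):
    k1 = [h * d for d in f(v)]
    k2 = [h * d for d in f([vi + 0.5 * ki for vi, ki in zip(v, k1)])]
    k3 = [h * d for d in f([vi + 0.5 * ki for vi, ki in zip(v, k2)])]
    k4 = [h * d for d in f([vi + ki for vi, ki in zip(v, k3)])]
    return [vi + (a1 + 2 * a2 + 2 * a3 + a4) / 6.0
            for vi, a1, a2, a3, a4 in zip(v, k1, k2, k3, k4)]

v = [1.0, 1.0, 1.0]
h = 0.01
trajectory = [v]
for _ in range(10000):          # integrate for 100 time units
    v = rk4_step(rossler, v, h)
    trajectory.append(v)
# Plotting x against y reproduces the bands of Fig. 6.5(d).
```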
Figure 6.6: (a) Poincaré map for the Rössler equations for c = 5.7 and (b) the logistic map for a = 3.9.
Here we can observe that even though the Rössler equations and the logistic map are very different dynamical systems, they both follow the same route to chaos, that is, via period-doubling bifurcations. We can find even more similarities if we construct a discrete map from the Rössler equations. Let us do this by "sampling" the solution of Eq. (6.10) each time the x variable reaches a local maximum. This procedure gives us a sequence of points, x_i, for i = 1, 2, . . . , N. Then, similarly to the logistic map, let us plot the graph of x_{i+1} versus x_i. Such a graph is called the Poincaré map of the Rössler equations.
In Fig. 6.6 we present the Poincaré map of the Rössler equations for c = 5.7 and, for comparison, the logistic map for a = 3.9. It can be seen that the similarity is striking. Both systems, following a period-doubling route to chaos, exhibit the same type of discrete map.
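The next-maximum construction described above can be sketched as follows in Python. For brevity the integration here uses a small-step Euler scheme rather than the Runge-Kutta routine of Chapter 5, which is adequate only as an illustration; the step size, duration and initial condition are ad hoc choices.

```python
# The Rossler right-hand side, Eq. (6.10), for a = b = 0.2, c = 5.7.
def rossler(v, a=0.2, b=0.2, c=5.7):
    x, y, z = v
    return [-(y + z), x + a * y, b + x * z - c * z]

def next_maximum_map(n_steps=400000, h=0.001):
    """Integrate and record x at each local maximum; successive maxima
    (x_i, x_{i+1}) form the Poincare map of Fig. 6.6(a)."""
    v = [1.0, 1.0, 1.0]
    xs, maxima = [], []
    for _ in range(n_steps):
        dv = rossler(v)
        v = [vi + h * di for vi, di in zip(v, dv)]   # Euler step
        xs.append(v[0])
        # local maximum of x: the middle of three samples is the largest
        if len(xs) >= 3 and xs[-2] > xs[-3] and xs[-2] > xs[-1]:
            maxima.append(xs[-2])
    return list(zip(maxima[:-1], maxima[1:]))        # pairs (x_i, x_{i+1})
```

Plotting the returned pairs reproduces the single-humped curve of Fig. 6.6(a).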
Bibliography
V.I. Arnold. Ordinary Differential Equations. MIT Press, Cambridge, Massachusetts, 1973.
S. Wiggins. Introduction to Applied Nonlinear Dynamical Systems and Chaos, volume 2 of Texts in Applied Mathematics. Springer-Verlag, New York, 1990.
V.V. Nemytskii and V.V. Stepanov. Qualitative Theory of Differential Equations. Dover Publications, Inc., New York, 1989.
H. Anton. Elementary Linear Algebra. Wiley, New York, 5th edition, 1987.
E.A. Coddington and N. Levinson. Theory of Ordinary Differential Equations. McGraw-Hill, Inc., New York, 1955.
M.W. Hirsch and S. Smale. Differential Equations, Dynamical Systems, and Linear Algebra. Academic Press, New York, 1974.
R. Grimshaw. Nonlinear Ordinary Differential Equations. Blackwell Scientific Publications, Oxford, 1990.
J.J. Stoker. Nonlinear Vibrations. Wiley, New York, 1950.
H.T. Davis. Introduction to Nonlinear Differential and Integral Equations. Dover Publications, Inc., New York, 1962.
J. Guckenheimer and Ph. Holmes. Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, volume 42 of Applied Mathematical Sciences. Springer-Verlag, New York, 1983.
J. Carr. Applications of Centre Manifold Theory, volume 35 of Applied Mathematical Sciences. Springer-Verlag, New York, 1981.