Differential Equations for
Engineers
————
Lecture Notes for Math 3503
B. O. J. Tupper
Department of Mathematics and Statistics, University of New Brunswick
July, 2006
Contents
1 LAPLACE TRANSFORMS
1.1 Introduction and definition
1.2 Examples of Laplace transforms
1.3 The gamma function
1.4 Inverse transforms and partial fractions
1.5 The First Shifting Theorem
1.6 Step functions
1.7 Differentiation and integration of transforms
1.8 Laplace transforms of derivatives and integrals
1.9 Application to ordinary differential equations
1.10 Discontinuous forcing functions
1.11 Periodic functions
1.12 Impulse functions
1.13 The convolution integral
2 SYSTEMS OF FIRST ORDER LINEAR EQUATIONS
2.1 Introduction
2.2 Basic theory of systems of first order linear equations
2.3 Review of eigenvalues and eigenvectors
2.4 Homogeneous linear systems with constant coefficients
2.5 Complex eigenvectors
2.6 Repeated eigenvalues
2.7 Nonhomogeneous systems
2.8 Laplace transform method for systems
3 FOURIER SERIES
3.1 Orthogonal sets of functions
3.2 Expansion of functions in series of orthogonal functions
3.3 Fourier series
3.4 Cosine and sine series
3.5 Half-range expansions
3.6 Complex form of the Fourier series
3.7 Separable partial differential equations
A ANSWERS TO ODD-NUMBERED PROBLEMS AND TABLES
A.1 Answers for Chapter 1
A.2 Answers for Chapter 2
A.3 Answers for Chapter 3
A.4 Table of Laplace transforms
Chapter 1
LAPLACE TRANSFORMS
1.1 Introduction and definition.
Integral transforms are powerful tools for solving linear differential equations; they
change the differential equations into algebraic equations involving the initial values of
the dependent variable and its derivatives. An integral transform changes a function f(t)
of one variable into a function F (s) of another variable by means of a definite integral of
the form
F(s) = ∫_a^b K(s, t) f(t) dt. (1.1)
The function K(s, t) is called the kernel of the transformation and F (s) is the transform of
f(t).
The Laplace transform L{f(t)} or F (s) of a function f(t), defined for t ≥ 0, is defined
by the equation
L{f(t)} = F(s) = ∫_0^∞ e^{−st} f(t) dt = lim_{b→∞} ∫_0^b e^{−st} f(t) dt, (1.2)
provided that the limit exists.
The Laplace transform is a linear transform, i.e., if L{f(t)} = F (s) and L{g(t)} = G(s),
then L{αf(t) + βg(t)} = αF (s) + βG(s), where α and β are constants.
The Laplace transform L{f(t)} will exist if the following conditions are satisfied by the
function f(t):
(i) f(t) is piecewise continuous on the interval [0, ∞), (1.3)
(ii) f(t) is of exponential order for t > T, i.e., there exist numbers c, M > 0, T > 0 such
that |f(t)| ≤ M e^{ct} for t > T. (1.4)
The Laplace transform then exists for s > c.
A function f(t) is piecewise continuous on [0,∞), if, in any interval 0 ≤ a ≤ t ≤ b, there
are at most a finite number of points tk, k = 1, 2, ..., n at which f has finite discontinuities
and is continuous on each open interval tk−1 < t < tk.
[Figure: graph of a piecewise continuous function f(t) with finite jump discontinuities.]
In other words, f(t) is piecewise continuous on [0,∞) if it is continuous on the interval
except for a finite number of finite jump discontinuities. If f(t) is piecewise continuous on
a ≤ t ≤ b, where b is finite, then the integral ∫_a^b f(t) dt exists.
The condition (1.4) is satisfied by many functions, for example, t^n where n > 0, cos at,
sin at and e^{at}, but it is not satisfied by e^{t²}, since this grows faster than e^{ct} for
any constant c.
While the conditions (1.3) and (1.4) are sufficient to ensure the existence of the Laplace
transform, they are not necessary. For example, the function f(t) = t^{−1/2} is infinite at t = 0,
and so is not piecewise continuous there, but, as we shall see, its transform does exist.
Now we give the proof that if the conditions (1.3) and (1.4) hold, then the Laplace
transform exists.
L{f(t)} = ∫_0^∞ e^{−st} f(t) dt = ∫_0^T e^{−st} f(t) dt + ∫_T^∞ e^{−st} f(t) dt.
The first integral exists because f(t) is piecewise continuous. For t ≥ T , we have, by (1.4)
|e^{−st} f(t)| ≤ e^{−st} M e^{ct} = M e^{−(s−c)t},
so that
|∫_T^∞ e^{−st} f(t) dt| ≤ M ∫_T^∞ e^{−(s−c)t} dt = −M [e^{−(s−c)t}/(s − c)]_T^∞,
which converges provided that −(s − c) is negative, i.e., s > c. This proves the result.
1.2 Examples of Laplace transforms.
Example 1. Evaluate L{1}.
L{1} = ∫_0^∞ e^{−st} · 1 dt = [−(1/s)e^{−st}]_0^∞ = 1/s, provided s > 0.
∴ L{1} = 1/s, s > 0,
i.e., if f(t) = 1, then F(s) = 1/s.
Example 2. Evaluate L{t}.
L{t} = ∫_0^∞ e^{−st} t dt = [t(−(1/s)e^{−st})]_0^∞ − ∫_0^∞ 1 · (−(1/s)e^{−st}) dt
= 0 − [(1/s²)e^{−st}]_0^∞ = 1/s², provided s > 0.
∴ L{t} = 1/s², s > 0.
Example 3. Evaluate L{t^n}, where n is a positive integer.
L{t^n} = ∫_0^∞ e^{−st} t^n dt = [t^n(−(1/s)e^{−st})]_0^∞ − ∫_0^∞ n t^{n−1}(−(1/s)e^{−st}) dt
= 0 + (n/s) ∫_0^∞ e^{−st} t^{n−1} dt, provided s > 0.
∴ L{t^n} = (n/s) L{t^{n−1}}.
Put n = 2:  L{t²} = (2/s)L{t} = 2/s³.
Put n = 3:  L{t³} = (3/s)L{t²} = 3!/s⁴.
Continuing this process we see that
L{t^n} = n!/s^{n+1}, s > 0.
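As a quick check, the recursion and its closed form can be verified with a computer algebra system. A minimal SymPy sketch (an aside, not part of the original notes; SymPy is assumed to be available):

```python
# Checking L{t^n} = n!/s^(n+1) for small n with SymPy.
from sympy import symbols, laplace_transform, factorial, simplify

t = symbols('t', positive=True)
s = symbols('s', positive=True)

for n in range(1, 5):
    # noconds=True drops the convergence condition (here s > 0)
    F = laplace_transform(t**n, t, s, noconds=True)
    assert simplify(F - factorial(n) / s**(n + 1)) == 0
```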
Example 4. Evaluate L{e^{at}}.
L{e^{at}} = ∫_0^∞ e^{−st} e^{at} dt = [e^{−(s−a)t}/(−(s−a))]_0^∞ = 0 + 1/(s − a) = 1/(s − a), s > a.
Example 5. Evaluate L{cos at} and L{sin at}.
L{cos at} = ∫_0^∞ e^{−st} cos at dt,  L{sin at} = ∫_0^∞ e^{−st} sin at dt.
Now cos at + i sin at = e^{iat}, so that
L{e^{iat}} = L{cos at} + iL{sin at} = ∫_0^∞ e^{−st} e^{iat} dt = [e^{−(s−ia)t}/(−(s−ia))]_0^∞
= 0 + 1/(s − ia) = (s + ia)/(s² + a²), s > 0.
∴ L{cos at} = s/(s² + a²),  L{sin at} = a/(s² + a²).
Example 6. Evaluate L{cos² t}.
Now cos² t = (1/2)(1 + cos 2t), so
L{cos² t} = (1/2)L{1} + (1/2)L{cos 2t} = 1/(2s) + s/(2(s² + 4)).
Example 7. Evaluate L{cosh at} and L{sinh at}.
Now cosh at = (1/2)(e^{at} + e^{−at}), so
L{cosh at} = (1/2)L{e^{at}} + (1/2)L{e^{−at}} = (1/2) · 1/(s − a) + (1/2) · 1/(s + a), s > a and s > −a
= s/(s² − a²), s > |a|.
Similarly, L{sinh at} = (1/2)L{e^{at}} − (1/2)L{e^{−at}} = a/(s² − a²), s > |a|.
Example 8. Evaluate L{t²e^{at}}.
L{t²e^{at}} = ∫_0^∞ e^{−st} t² e^{at} dt = ∫_0^∞ t² e^{−(s−a)t} dt
= [t² · e^{−(s−a)t}/(−(s−a))]_0^∞ − ∫_0^∞ 2t · e^{−(s−a)t}/(−(s−a)) dt
= 0 + 2/(s − a) ∫_0^∞ t e^{−(s−a)t} dt, s > a
= 2/(s − a) {[t · e^{−(s−a)t}/(−(s−a))]_0^∞ − ∫_0^∞ e^{−(s−a)t}/(−(s−a)) dt}
= 0 + 2/(s − a)² [e^{−(s−a)t}/(−(s−a))]_0^∞, s > a
= 2/(s − a)³ [0 + 1] = 2/(s − a)³, s > a.
Example 9. Evaluate L{f(t)} where f(t) = { 0, 0 ≤ t < 2;  1, t ≥ 2 }.
L{f(t)} = ∫_0^∞ e^{−st} f(t) dt = ∫_0^2 e^{−st} · 0 dt + ∫_2^∞ e^{−st} · 1 dt
= 0 + [e^{−st}/(−s)]_2^∞ = e^{−2s}/s, s > 0.
Example 10. Evaluate L{f(t)} where f(t) is as shown in the diagram.
f(t) = { (k/c)t, 0 ≤ t < c;  0, t ≥ c }
[Figure: a ramp rising linearly from 0 to k over 0 ≤ t < c, then dropping to 0.]
L{f(t)} = ∫_0^c e^{−st} (k/c)t dt = (k/c){[t(e^{−st}/(−s))]_0^c − ∫_0^c 1 · e^{−st}/(−s) dt}
= (k/c){−c e^{−sc}/s − 0 + [e^{−st}/(−s²)]_0^c}
= (k/c){−c e^{−sc}/s − e^{−sc}/s² + 1/s²}.
1.3 The gamma function.
The gamma function Γ(x) is defined by
Γ(x) = ∫_0^∞ e^{−t} t^{x−1} dt, x > 0. (1.5)
This definite integral converges when x is positive and so defines a function of x for positive
values of x.
Now
Γ(1) = ∫_0^∞ e^{−t} dt = 1 (1.6)
and integrating by parts we see that
Γ(x + 1) = ∫_0^∞ t^x e^{−t} dt = [−t^x e^{−t}]_0^∞ + x ∫_0^∞ t^{x−1} e^{−t} dt = x ∫_0^∞ t^{x−1} e^{−t} dt,
and hence
Γ(x + 1) = xΓ(x), (1.7)
so that when x is a positive integer
Γ(2) = 1 × Γ(1) = 1,  Γ(3) = 2 × Γ(2) = 2!,  . . . ,  Γ(n + 1) = n!.
In particular 0! = Γ(1) = 1. For this reason the gamma function is often referred to as
the generalized factorial function.
Since Γ(x) is defined for x > 0, Eq. (1.7) shows that it is defined for −1 < x < 0,
and repeated use of Eq. (1.7) in the form Γ(x) = (1/x)Γ(x + 1) shows that Γ(x) is defined in
the intervals −2 < x < −1, −3 < x < −2, etc. However, Eqs. (1.6) and (1.7) formally give
Γ(0) = Γ(1)/0, so the gamma function becomes infinite when x is zero or a negative integer.
In the definition (1.5) make the substitution t = u²; then
Γ(x) = 2 ∫_0^∞ u^{2x−1} e^{−u²} du,
and putting x = 1/2 we obtain
Γ(1/2) = 2 ∫_0^∞ e^{−u²} du. (1.8)
But, for a definite integral, ∫_0^∞ e^{−u²} du = ∫_0^∞ e^{−v²} dv, so that
[Γ(1/2)]² = 4 ∫_0^∞ e^{−u²} du ∫_0^∞ e^{−v²} dv = 4 ∫_0^∞ ∫_0^∞ e^{−(u²+v²)} du dv. (1.9)
To evaluate this integral we transform to polar co-ordinates u = r cos θ, v = r sin θ and
the double integral (1.9) becomes
[Γ(1/2)]² = 4 ∫_0^{π/2} ∫_0^∞ e^{−r²} r dr dθ = π.
∴ Γ(1/2) = √π. (1.10)
Using Eq. (1.7) then shows that
Γ(3/2) = (1/2)Γ(1/2) = (1/2)√π, (1.11)
−(1/2)Γ(−1/2) = Γ(1/2) ⇒ Γ(−1/2) = −2√π. (1.12)
Example 1. Evaluate L{t^a}, where a is any number satisfying a > −1.
L{t^a} = ∫_0^∞ e^{−st} t^a dt.
Put st = u, i.e., t = u/s, dt = du/s. Then
L{t^a} = ∫_0^∞ e^{−u} (u^a/s^a)(du/s) = (1/s^{a+1}) ∫_0^∞ e^{−u} u^a du = Γ(a + 1)/s^{a+1},
from Eq. (1.5). Note that we must have a > −1 for the integral to converge at u = 0. Thus
we have the result
L{t^a} = Γ(a + 1)/s^{a+1} for all a > −1. (1.13)
When a is a positive integer this corresponds to the result of Example 3 in Section 1.2.
Example 2. Evaluate L{t^{−1/2}} and L{t^{1/2}}.
From Eqs. (1.10), (1.12) and (1.13) we find that
L{t^{−1/2}} = Γ(1/2)/s^{1/2} = √(π/s)
and
L{t^{1/2}} = Γ(3/2)/s^{3/2} = (1/2)Γ(1/2)/s^{3/2} = (1/2)√(π/s³).
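The fractional-power transforms above can be checked directly. A minimal SymPy sketch (an aside; SymPy assumed available, and it evaluates the integral through the gamma function exactly as in Eq. (1.13)):

```python
# Checking L{t^(-1/2)} = sqrt(pi/s) and L{t^(1/2)} = (1/2) sqrt(pi/s^3).
from sympy import symbols, laplace_transform, sqrt, pi, simplify, Rational

t = symbols('t', positive=True)
s = symbols('s', positive=True)

F1 = laplace_transform(t**Rational(-1, 2), t, s, noconds=True)
F2 = laplace_transform(t**Rational(1, 2), t, s, noconds=True)
assert simplify(F1 - sqrt(pi/s)) == 0
assert simplify(F2 - sqrt(pi/s**3)/2) == 0
```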
Problem Set 1.3
In problems 1-14 use the definition (1.2) to find L{f(t)}.
1. f(t) = { −1, 0 ≤ t < 1;  1, t ≥ 1 }   2. f(t) = { 4, 0 ≤ t < 2;  0, t ≥ 2 }
3. f(t) = { t, 0 ≤ t < 1;  1, t ≥ 1 }   4. f(t) = { 2t + 1, 0 ≤ t < 1;  0, t ≥ 1 }
5. f(t) = { sin t, 0 ≤ t < π;  0, t ≥ π }   6. f(t) = { 0, 0 ≤ t < π/2;  cos t, t ≥ π/2 }
7. f(t) = e^{t+7}   8. f(t) = e^{−2t−5}
9. f(t) = te^{4t}   10. f(t) = t²e^{3t}
11. f(t) = e^{−t} sin t   12. f(t) = e^t cos t
13. f(t) = t cos t   14. f(t) = t sin t
In problems 15 - 40 use the results of the examples in Sections 1.2 and 1.3 to evaluate
L{f(t)}.
15. f(t) = 2t⁴   16. f(t) = t⁵   17. f(t) = 4t − 10
18. f(t) = 7t + 3   19. f(t) = t² + 6t − 3   20. f(t) = −4t² + 16t + 9
21. f(t) = (t + 1)³   22. f(t) = (2t − 1)³   23. f(t) = 1 + e^{4t}
24. f(t) = t² − e^{−9t} + 5   25. f(t) = (1 + e^{2t})²   26. f(t) = (e^t − e^{−t})²
27. f(t) = 4t² − 5 sin 3t   28. f(t) = cos 5t + sin 2t   29. f(t) = t sinh t
30. f(t) = e^t sinh t   31. f(t) = e^{−t} cosh t   32. f(t) = sin 2t cos 2t
33. f(t) = sin² t   34. f(t) = cos t cos 2t   35. f(t) = sin t sin 2t
36. f(t) = sin t cos 2t   37. f(t) = sin³ t   38. f(t) = t^{3/2}
39. f(t) = t^{1/4}   40. f(t) = (t^{1/2} + 1)²
1.4 Inverse transforms and partial fractions.
The Laplace transform of a function f(t) is denoted by L{f(t)} = F (s). If we are
given F (s) and are required to find the corresponding function f(t), then f(t) is the
inverse Laplace transform of F (s) and we write
f(t) = L−1{F (s)}.
From the examples of Section 1.2 we can find the following inverse Laplace transforms:
L⁻¹{1/s} = 1,  L⁻¹{n!/s^{n+1}} = t^n,
L⁻¹{1/(s − a)} = e^{at},  L⁻¹{a/(s² + a²)} = sin at,
L⁻¹{s/(s² + a²)} = cos at,  L⁻¹{a/(s² − a²)} = sinh at,
L⁻¹{s/(s² − a²)} = cosh at.
Example 1. Evaluate L⁻¹{5/(s + 3)}.
L⁻¹{5/(s + 3)} = 5L⁻¹{1/(s + 3)} = 5e^{−3t}.
Example 2. Evaluate L⁻¹{2/(s² + 16)}.
L⁻¹{2/(s² + 16)} = (1/2)L⁻¹{4/(s² + 16)} = (1/2) sin 4t.
Example 3. Evaluate L⁻¹{(s + 1)/(s² + 1)}.
L⁻¹{(s + 1)/(s² + 1)} = L⁻¹{s/(s² + 1)} + L⁻¹{1/(s² + 1)} = cos t + sin t.
Example 4. Evaluate L⁻¹{1/s⁴}.
L⁻¹{1/s⁴} = (1/3!)L⁻¹{3!/s⁴} = (1/6)t³.
In order to find inverse transforms we very often have to perform a partial fraction
decomposition. Here are some typical examples:
Example 5. Evaluate L⁻¹{(s² − 10s − 25)/(s³ − 25s)}.
(s² − 10s − 25)/(s³ − 25s) = (s² − 10s − 25)/(s(s² − 25)) = (s² − 10s − 25)/(s(s − 5)(s + 5)).
Write the last fraction as
(s² − 10s − 25)/(s(s − 5)(s + 5)) = A/s + B/(s − 5) + C/(s + 5).
We need to find A, B and C. Multiply both sides of the equation by s(s − 5)(s + 5) and we obtain
s² − 10s − 25 = A(s − 5)(s + 5) + Bs(s + 5) + Cs(s − 5).
The denominator is zero when s = 0, +5, −5, so put these values into the above equation.
s = 0:  −25 = A(−5)(5)  ∴ A = 1.
s = 5:  25 − 50 − 25 = B(5)(10)  ∴ B = −1.
s = −5:  25 + 50 − 25 = C(−5)(−10)  ∴ C = 1.
Hence
(s² − 10s − 25)/(s(s − 5)(s + 5)) = 1/s − 1/(s − 5) + 1/(s + 5).
∴ L⁻¹{(s² − 10s − 25)/(s(s − 5)(s + 5))} = L⁻¹{1/s} − L⁻¹{1/(s − 5)} + L⁻¹{1/(s + 5)}
= 1 − e^{5t} + e^{−5t}.
Example 6. Evaluate L⁻¹{(2s − 1)/(s³(s + 1))}.
Write (2s − 1)/(s³(s + 1)) = A/s + B/s² + C/s³ + D/(s + 1).
Multiply both sides of the equation by s³(s + 1) to get
2s − 1 = As²(s + 1) + Bs(s + 1) + C(s + 1) + Ds³.
Put s = 0:  −1 = C  ∴ C = −1.
Put s = −1:  −3 = −D  ∴ D = 3,
i.e., 2s − 1 = A(s³ + s²) + B(s² + s) − s − 1 + 3s³.
s³ terms:  A + 3 = 0  ∴ A = −3.
s² terms:  A + B = 0  ∴ B = 3.
s terms:  B − 1 = 2  ∴ B = 3.
∴ (2s − 1)/(s³(s + 1)) = −3/s + 3/s² − 1/s³ + 3/(s + 1).
∴ L⁻¹{(2s − 1)/(s³(s + 1))} = −3L⁻¹{1/s} + 3L⁻¹{1/s²} − L⁻¹{1/s³} + 3L⁻¹{1/(s + 1)}
= −3 + 3t − (1/2)t² + 3e^{−t}.
Example 7. Evaluate L⁻¹{(s + 2)/((s + 1)(s² + 4))}.
Write (s + 2)/((s + 1)(s² + 4)) = A/(s + 1) + (Bs + C)/(s² + 4).
Multiply both sides by (s + 1)(s² + 4) to get
s + 2 = A(s² + 4) + (Bs + C)(s + 1).
Put s = −1:  1 = 5A  ∴ A = 1/5.
s² terms:  0 = A + B  ∴ B = −1/5.
s terms:  1 = B + C  ∴ C = 6/5.
∴ (s + 2)/((s + 1)(s² + 4)) = (1/5)/(s + 1) + (−(1/5)s + 6/5)/(s² + 4).
∴ L⁻¹{(s + 2)/((s + 1)(s² + 4))} = (1/5)L⁻¹{1/(s + 1)} − (1/5)L⁻¹{s/(s² + 4)} + (3/5)L⁻¹{2/(s² + 4)}
= (1/5)e^{−t} − (1/5) cos 2t + (3/5) sin 2t.
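The partial-fraction step in these examples can be reproduced mechanically. A minimal SymPy sketch using `apart()` (an aside; SymPy assumed available):

```python
# Reproducing the partial-fraction decomposition of Example 7 with SymPy.
from sympy import symbols, apart, simplify, Rational

s = symbols('s')
expr = (s + 2) / ((s + 1)*(s**2 + 4))
decomp = apart(expr, s)

# decomp is equivalent to the decomposition found by hand:
# (1/5)/(s + 1) + (6/5 - s/5)/(s^2 + 4)
hand = Rational(1, 5)/(s + 1) + (6 - s)/(5*(s**2 + 4))
assert simplify(decomp - hand) == 0
```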
Problem Set 1.4
Find f(t) if L{f(t)} is given by
1. (s + 12)/(s² + 4s)   2. (s − 3)/(s² − 1)   3. 3s/(s² + 2s − 8)
4. (2s² + 5s − 1)/(s³ − s)   5. (s + 1)/(s³(s − 1)(s + 2))   6. (3s² − 2s − 1)/((s − 3)(s² + 1))
1.5 The First Shifting Theorem.
This theorem expands our ability to find Laplace transforms and their inverses.
The First Shifting Theorem: If L{f(t)} = F(s) when s > c, then
L{e^{at}f(t)} = F(s − a), s > c + a. (1.14)
Proof:
L{e^{at}f(t)} = ∫_0^∞ e^{−st} e^{at} f(t) dt = ∫_0^∞ e^{−(s−a)t} f(t) dt = F(s − a).
Example 1. Evaluate L{e^{at}t^n}.
L{t^n} = n!/s^{n+1}, so L{e^{at}t^n} = n!/(s − a)^{n+1}.
Example 2. Evaluate L{e^{at} sin bt}.
L{sin bt} = b/(s² + b²), so L{e^{at} sin bt} = b/((s − a)² + b²).
Example 3. Evaluate L{e^{at} cos bt}.
L{cos bt} = s/(s² + b²), so L{e^{at} cos bt} = (s − a)/((s − a)² + b²).
Example 4. Evaluate L⁻¹{2/(s² + 2s + 2)}.
Now s² + 2s + 2 = (s + 1)² + 1, so we require
L⁻¹{2/((s + 1)² + 1)} = 2L⁻¹{1/((s + 1)² + 1)}.
The quantity inside { } is identical with the answer to Example 2 above with a = −1, b = 1.
∴ L⁻¹{2/(s² + 2s + 2)} = 2e^{−t} sin t.
Example 5. Evaluate L⁻¹{(3s + 9)/(s² + 2s + 10)}.
Now
(3s + 9)/(s² + 2s + 10) = (3(s + 1) + 6)/((s + 1)² + 9) = 3(s + 1)/((s + 1)² + 9) + 6/((s + 1)² + 9).
As in Examples 2 and 3 with a = −1, b = 3, hence
L⁻¹{(3s + 9)/(s² + 2s + 10)} = 3e^{−t} cos 3t + 2e^{−t} sin 3t.
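The answer to Example 5 can be checked by transforming it forward again. A minimal SymPy sketch (an aside; SymPy assumed available):

```python
# Checking L{3 e^{-t} cos 3t + 2 e^{-t} sin 3t} = (3s + 9)/(s^2 + 2s + 10).
from sympy import symbols, laplace_transform, exp, sin, cos, simplify

t = symbols('t', positive=True)
s = symbols('s', positive=True)

f = 3*exp(-t)*cos(3*t) + 2*exp(-t)*sin(3*t)
F = laplace_transform(f, t, s, noconds=True)
assert simplify(F - (3*s + 9)/(s**2 + 2*s + 10)) == 0
```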
Problem Set 1.5
Use partial fractions (if necessary) and the First Shifting Theorem to find the inverse
Laplace transforms of the following functions:
1. (10 − 4s)/(s − 2)²   2. (s² + s − 2)/(s + 1)³   3. (s³ − 7s² + 14s − 9)/((s − 1)²(s − 2)³)
4. (s² − 6s + 7)/((s² − 4s + 5)s)   5. (2s − 1)/(s²(s + 1)³)   6. 3!/(s − 2)⁴
Find the Laplace transforms of the following functions:
7. L{te^{8t}}   8. L{t⁷e^{−5t}}   9. L{e^{−2t} cos 4t}
10. L{e^{3t} sinh t}   11. L{(sin 2t)/e^t}   12. L{e^{2t} cos² 2t}
13. L{e^{3t}(t + 2)²}   14. L{t^{1/2}(e^t + e^{−2t})}
1.6 Step functions.
The unit step function u_a(t) is defined as follows:
u_a(t) = { 0 when t < a;  1 when t ≥ a }  (a ≥ 0). (1.15)
[Figure: graph of u_a(t), jumping from 0 to 1 at t = a.]
When a = 0 we have
u_0(t) = { 0, t < 0;  1, t ≥ 0 }.
Note that the function f(t) = 1 − u_c(t) is given by
f(t) = { 1, 0 ≤ t < c;  0, t ≥ c }.
The difference between two step functions, i.e.,
f(t) = u_a(t) − u_b(t)  (b > a)
has a graph of the form:
[Figure: a rectangular pulse of height 1 on a ≤ t < b.]
Step functions can be used to turn on or turn off portions of the graph of a function. For
example, the function f(t) = t² when multiplied by u_1(t) becomes
f(t) = t²u_1(t) = { 0, 0 ≤ t < 1;  t², t ≥ 1 },
so that its graph is
[Figure: the parabola t², zero for t < 1 and switched on at t = 1.]
Given the function f(t) = t² note the graphs of the following functions:
[Figures: (i) f(t) = t² for all t; (ii) f(t) = t², t ≥ 0; (iii) f(t − 1), t ≥ 0, the parabola
shifted one unit to the right; (iv) f(t − 1)u_1(t), t ≥ 0, the shifted parabola with the
portion 0 ≤ t < 1 set to zero.]
Hence, given a function f(t), defined for t ≥ 0, the graph of the function f(t − a)ua(t)
consists of the graph of f(t) translated through a distance a to the right with the portion
from 0 to a ‘turned off’, i.e., put equal to zero.
The Laplace transform of u_a(t) is
L{u_a(t)} = ∫_0^∞ e^{−st} u_a(t) dt = ∫_a^∞ e^{−st} dt = [e^{−st}/(−s)]_a^∞ = e^{−as}/s,
i.e., L{u_a(t)} = e^{−as}/s, (s > 0). (1.16)
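Result (1.16) can be checked symbolically; SymPy writes the unit step as `Heaviside`. A minimal sketch for a = 2 (an aside; SymPy assumed available):

```python
# Checking L{u_a(t)} = e^{-as}/s for a = 2, with Heaviside as the unit step.
from sympy import symbols, laplace_transform, Heaviside, exp, simplify

t = symbols('t', positive=True)
s = symbols('s', positive=True)

F = laplace_transform(Heaviside(t - 2), t, s, noconds=True)
assert simplify(F - exp(-2*s)/s) == 0
```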
Example 1. Write f(t) in terms of unit step functions and find its Laplace transform
where
f(t) = { 1, 0 ≤ t < 1;  0, 1 ≤ t < 2;  1, 2 ≤ t < 3;  0, t ≥ 3 }.
Since f(t) can be expressed as
f(t) = u_0(t) − u_1(t) + u_2(t) − u_3(t),
we then have
L{f(t)} = 1/s − e^{−s}/s + e^{−2s}/s − e^{−3s}/s = (1/s)(1 − e^{−s})(1 + e^{−2s}).
Example 2. Represent the function shown in the diagram below in terms of
unit step functions and find its Laplace transform.
[Figure: a step graph taking the value −1 on [0, 2), −3 on [2, 4), 1 on [4, 6) and 0 for t ≥ 6.]
f(t) = −u_0(t) − 2u_2(t) + 4u_4(t) − u_6(t).
∴ L{f(t)} = −1/s − (2/s)e^{−2s} + (4/s)e^{−4s} − (1/s)e^{−6s} = −(1/s)(1 − e^{−2s})(1 + 3e^{−2s} − e^{−4s}).
Example 3. The function shown in the diagram below is periodic with period 3. Write
the function in terms of unit step functions and find its Laplace transform.
[Figure: a periodic graph of period 3 taking the value 1 on [0, 1), 1/2 on [1, 2), 0 on [2, 3), and repeating.]
f(t) = u_0(t) − (1/2)u_1(t) − (1/2)u_2(t) + u_3(t) − (1/2)u_4(t) − (1/2)u_5(t) + . . .
= (u_0 + u_3 + u_6 + . . . ) − (1/2)(u_1 + u_4 + u_7 + . . . ) − (1/2)(u_2 + u_5 + u_8 + . . . ).
L{f(t)} = (1/s)(1 + e^{−3s} + e^{−6s} + . . . ) − (1/2s)(e^{−s} + e^{−4s} + e^{−7s} + . . . )
− (1/2s)(e^{−2s} + e^{−5s} + e^{−8s} + . . . )
= (1/s) · 1/(1 − e^{−3s}) − (1/2s) · e^{−s}/(1 − e^{−3s}) − (1/2s) · e^{−2s}/(1 − e^{−3s})  (geometric series)
= (2 − e^{−s} − e^{−2s})/(2s(1 − e^{−3s})).
The Second Shifting Theorem: If L{f(t)} = F(s), then, for a > 0,
L{f(t − a)u_a(t)} = e^{−as}F(s), (1.17)
and, conversely,
L⁻¹{e^{−as}F(s)} = f(t − a)u_a(t). (1.18)
Proof:
L{f(t − a)u_a(t)} = ∫_0^∞ e^{−st} f(t − a) u_a(t) dt = ∫_a^∞ e^{−st} f(t − a) dt.
Put v = t − a; then v = 0 when t = a, dv = dt, and the integral becomes
L{f(t − a)u_a(t)} = ∫_0^∞ e^{−s(v+a)} f(v) dv = e^{−as} ∫_0^∞ e^{−sv} f(v) dv = e^{−as}F(s).
Example 4. Evaluate L{(t − π)u_π(t)}.
f(t) = t, so F(s) = 1/s² and L{(t − π)u_π(t)} = e^{−πs}/s².
Example 5. Evaluate L{tu_2(t)}.
Because the step function is u_2(t), i.e., has suffix 2, the function t must be written as a
function of (t − 2), i.e., t = (t − 2) + 2.
∴ L{tu_2(t)} = L{(t − 2)u_2(t) + 2u_2(t)} = e^{−2s}/s² + 2e^{−2s}/s = ((1 + 2s)/s²)e^{−2s},
because f(t) = t in the first term.
Example 6. Evaluate L{sin(t − 3)u_3(t)}.
f(t) = sin t, so L{sin(t − 3)u_3(t)} = e^{−3s}/(s² + 1).
Example 7. Find the Laplace transform of the function
g(t) = { 0, t < 1;  t² − 2t + 2, t ≥ 1 }.
Now t² − 2t + 2 = (t − 1)² + 1,
i.e., f(t − 1) = (t − 1)² + 1, so f(t) = t² + 1.
∴ L{g(t)} = L{f(t − 1)u_1(t)} = e^{−s}L{f(t)} = e^{−s}(2/s³ + 1/s).
Example 8. Evaluate L⁻¹{se^{−πs}/(s² + 4)}.
Now L⁻¹{s/(s² + 4)} = cos 2t = f(t).
∴ L⁻¹{se^{−πs}/(s² + 4)} = f(t − π)u_π(t) = cos 2(t − π) u_π(t) = cos 2t · u_π(t).
Example 9. Evaluate L⁻¹{e^{−s}/s⁴}.
The e^{−s} indicates that u_1(t) appears in the function and so does f(t − 1).
L⁻¹{1/s⁴} = (1/6)t³ = f(t), so f(t − 1) = (1/6)(t − 1)³ and
L⁻¹{e^{−s}/s⁴} = (1/6)(t − 1)³ u_1(t).
Example 10. Evaluate L⁻¹{(1 + e^{−πs/2})/(s² + 1)}.
This is the sum of the inverse transforms
L⁻¹{1/(s² + 1)} and L⁻¹{e^{−πs/2}/(s² + 1)}.
The first equals sin t; the second changes sin t to sin(t − π/2) = −cos t and multiplies by
u_{π/2}(t).
∴ L⁻¹{(1 + e^{−πs/2})/(s² + 1)} = sin t − u_{π/2}(t) cos t.
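Inverse transforms involving e^{−as}, as in Example 9, can also be checked symbolically. A minimal SymPy sketch (an aside; SymPy assumed available, with `Heaviside` standing for u_1(t)):

```python
# Checking Example 9: L^{-1}{e^{-s}/s^4} = (1/6)(t - 1)^3 u_1(t).
from sympy import symbols, inverse_laplace_transform, Heaviside, exp, simplify

t = symbols('t', positive=True)
s = symbols('s')

f = inverse_laplace_transform(exp(-s)/s**4, s, t)
assert simplify(f - (t - 1)**3 * Heaviside(t - 1) / 6) == 0
```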
Problem Set 1.6
Evaluate the following
1. L{(t − 1)u_1(t)}   2. L{e^{2−t}u_2(t)}   3. L{(3t + 1)u_3(t)}
4. L{(t − 1)³e^{t−1}u_1(t)}   5. L{te^{t−5}u_5(t)}   6. L{cos t · u_{2π}(t)}
7. L⁻¹{e^{−2s}/s³}   8. L⁻¹{(1 + e^{−2s})²/(s + 2)}   9. L⁻¹{e^{−πs}/(s² + 1)}
10. L⁻¹{se^{−πs/2}/(s² + 4)}   11. L⁻¹{e^{−s}/(s(s + 1))}   12. L⁻¹{e^{−2s}/(s²(s − 1))}
13. L⁻¹{(1 − e^{−s})/s²}   14. L⁻¹{2/s − 3e^{−s}/s² + 5e^{−2s}/s²}
In Problems 15 - 20 write each function in terms of unit step functions and find the
Laplace transform of the given function.
15. f(t) = { 2, 0 ≤ t < 3;  −2, t ≥ 3 }   16. f(t) = { 1, 0 ≤ t < 4;  0, 4 ≤ t < 5;  1, t ≥ 5 }
17. f(t) = { 0, 0 ≤ t < 1;  t², t ≥ 1 }   18. f(t) = { 0, 0 ≤ t < 3π/2;  sin t, t ≥ 3π/2 }
19. f(t) = { t, 0 ≤ t < 2;  0, t ≥ 2 }   20. f(t) is the staircase function shown below.
[Figure: staircase graph rising by one unit at t = 1, 2, 3, . . .]
1.7 Differentiation and integration of transforms.
Theorem: If F(s) = L{f(t)}, then
F′(s) = L{−tf(t)}, (1.19)
and F⁽ⁿ⁾(s) = L{(−1)ⁿtⁿf(t)}. (1.20)
Proof:
F(s) = L{f(t)} = ∫_0^∞ e^{−st} f(t) dt.
∴ dF/ds ≡ F′(s) = ∫_0^∞ −t e^{−st} f(t) dt = ∫_0^∞ e^{−st}[−tf(t)] dt.
∴ F′(s) = L{−tf(t)}.
By continuing to differentiate with respect to s we see that each differentiation produces
another factor −t, so the result (1.20) follows easily.
Example 1. Evaluate L{t cos 2t}.
This is equal to (from (1.19)) −F′(s) where F(s) = L{cos 2t} = s/(s² + 4). Hence
L{t cos 2t} = −(s/(s² + 4))′ = −[(1 · (s² + 4) − s · 2s)/(s² + 4)²] = (s² − 4)/(s² + 4)².
Example 2. Evaluate L{te^{2t}}.
f(t) = e^{2t}, ∴ F(s) = 1/(s − 2).
Hence L{te^{2t}} = −F′(s) = 1/(s − 2)².
Example 3. Evaluate L{t²e^t}.
From (1.20), with n = 2, L{t²f(t)} = F′′(s).
f(t) = e^t, ∴ F(s) = 1/(s − 1), ∴ F′(s) = −1/(s − 1)², F′′(s) = 2/(s − 1)³.
∴ L{t²e^t} = 2/(s − 1)³.
Example 4. Evaluate L{te^{−2t} sin wt}.
From the First Shifting Theorem L{e^{−2t} sin wt} = w/((s + 2)² + w²),
i.e., if f(t) = e^{−2t} sin wt then L{f(t)} = w/((s + 2)² + w²) = F(s).
∴ L{tf(t)} = −F′(s) = 2w(s + 2)/[(s + 2)² + w²]².
Example 5. Evaluate L⁻¹{1/(s + 1)²}.
1/(s + 1)² = [−1/(s + 1)]′, so if F(s) = 1/(s + 1) then
1/(s + 1)² = −F′(s) and f(t) = L⁻¹{F(s)} = e^{−t}.
∴ 1/(s + 1)² = −F′(s) = L{te^{−t}}.
Hence L⁻¹{1/(s + 1)²} = te^{−t}.
Example 6. Evaluate L⁻¹{2s/(s² − 4)²}.
Now 2s/(s² − 4)² = −(1/(s² − 4))′ = −F′(s), where
F(s) = 1/(s² − 4) = L{(1/2) sinh 2t}, i.e., f(t) = (1/2) sinh 2t.
∴ −F′(s) = L{t · (1/2) sinh 2t},
i.e., L⁻¹{2s/(s² − 4)²} = (1/2)t sinh 2t.
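The relation F′(s) = L{−tf(t)} used throughout these examples can be checked directly by differentiating a known transform. A minimal SymPy sketch for Example 1 (an aside; SymPy assumed available):

```python
# Checking -F'(s) = L{t cos 2t} for F(s) = L{cos 2t} = s/(s^2 + 4).
from sympy import symbols, laplace_transform, cos, diff, simplify

t = symbols('t', positive=True)
s = symbols('s', positive=True)

F = laplace_transform(cos(2*t), t, s, noconds=True)    # s/(s^2 + 4)
G = laplace_transform(t*cos(2*t), t, s, noconds=True)  # should equal -F'(s)
assert simplify(G + diff(F, s)) == 0
assert simplify(G - (s**2 - 4)/(s**2 + 4)**2) == 0
```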
Theorem: If f(t) satisfies the condition for the existence of the Laplace transform
and if lim_{t→0⁺} f(t)/t exists, then
L{f(t)/t} = ∫_s^∞ F(s) ds, (s > c). (1.21)
Proof:
∫_s^∞ F(s) ds = ∫_s^∞ [∫_0^∞ e^{−st} f(t) dt] ds.
Reversing the order of integration, we obtain
∫_s^∞ F(s) ds = ∫_0^∞ [∫_s^∞ e^{−st} f(t) ds] dt = ∫_0^∞ f(t)[∫_s^∞ e^{−st} ds] dt
= ∫_0^∞ f(t)[−(1/t)e^{−st}]_s^∞ dt = ∫_0^∞ e^{−st} (f(t)/t) dt
= L{f(t)/t}, (s > c).
Example 7. Evaluate L⁻¹{ln((s + a)/(s − a))}.
Now ln((s + a)/(s − a)) = ln(s + a) − ln(s − a),
−(d/ds)[ln(s + a) − ln(s − a)] = −1/(s + a) + 1/(s − a) ≡ F(s).
∴ f(t) = L⁻¹{F(s)} = −e^{−at} + e^{at} = 2 sinh at.
Hence L⁻¹{ln((s + a)/(s − a))} = L⁻¹{∫_s^∞ F(s) ds} = f(t)/t = 2t⁻¹ sinh at.
Example 8. Evaluate L⁻¹{arc cot(s + 1)}. [Note: If y = arc cot x, then y′ = −1/(1 + x²).]
−(d/ds)[arc cot(s + 1)] = 1/(1 + (s + 1)²) ≡ F(s),
f(t) = L⁻¹{F(s)} = e^{−t} sin t.
Hence L⁻¹{arc cot(s + 1)} = f(t)/t = t⁻¹e^{−t} sin t.
Problem Set 1.7
Evaluate the following.
1. L{t cos 2t}   2. L{t sinh 3t}   3. L{t² sinh t}
4. L{t² cos t}   5. L{te^{2t} sin 6t}   6. L{te^{−3t} cos 3t}
7. L⁻¹{s/(s² + 1)²}   8. L⁻¹{(s + 1)/(s² + 2s + 2)²}   9. L⁻¹{ln((s − 3)/(s + 1))}
10. L⁻¹{ln((s² + 1)/(s² + 4))}   11. L⁻¹{1/s − arc cot(4/s)}   12. L⁻¹{arc tan(1/s)}
1.8 Laplace transforms of derivatives and integrals.
Theorem: Suppose that f(t) is continuous for all t ≥ 0 and is of exponential order.
Suppose also that the derivative f ′(t) is piecewise continuous on every finite interval in the
range t ≥ 0. Then the Laplace transform of the derivative f ′(t) exists when s > c and
L{f ′(t)} = sL{f(t)} − f(0). (1.22)
Proof:
L{f′(t)} = ∫_0^∞ e^{−st} f′(t) dt = [e^{−st}f(t)]_0^∞ + s ∫_0^∞ e^{−st} f(t) dt
= −f(0) + sL{f(t)}.
Note that in the above proof we have assumed that f ′(t) is continuous. If f ′(t) is piecewise
continuous the proof is similar but the range of integration must be broken into parts for
which f ′(t) is continuous.
We may extend the formula (1.22) to find the Laplace transforms of higher derivatives.
For example, replacing f(t) by f ′(t) in Eq. (1.22) we obtain
L{f ′′(t)} = sL{f ′(t)} − f ′(0),
and, using Eq. (1.22) again, we obtain
L{f ′′(t)} = s2L{f(t)} − sf(0) − f ′(0). (1.23)
Similarly, we can extend this to higher derivatives to obtain
L{f (n)(t)} = snL{f} − sn−1f(0) − sn−2f ′(0) − . . . − f (n−1)(0). (1.24)
We shall use the results (1.22) and (1.23) to solve differential equations with given initial
conditions. However, we first note that (1.22) can be used to find unknown transforms.
Example 1. Evaluate L{sin² t} using Eq. (1.22).
f(t) = sin² t, so f(0) = 0.
f′(t) = 2 sin t cos t = sin 2t.
∴ L{f′(t)} = L{sin 2t} = 2/(s² + 4).
But L{f′(t)} = sL{f(t)} − f(0) = sL{sin² t} − 0.
∴ sL{sin² t} = 2/(s² + 4), ∴ L{sin² t} = 2/(s(s² + 4)).
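The transform just found via the derivative rule agrees with a direct evaluation. A minimal SymPy sketch (an aside; SymPy assumed available):

```python
# Checking L{sin^2 t} = 2/(s(s^2 + 4)) by direct transform.
from sympy import symbols, laplace_transform, sin, simplify

t = symbols('t', positive=True)
s = symbols('s', positive=True)

F = laplace_transform(sin(t)**2, t, s, noconds=True)
assert simplify(F - 2/(s*(s**2 + 4))) == 0
```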
The following theorem is based on the result (1.22).
Theorem: If f(t) is piecewise continuous and of exponential order, then
L{∫_0^t f(τ) dτ} = (1/s)L{f(t)}. (1.25)
Proof: Put ∫_0^t f(τ) dτ = g(t). Then f(t) = g′(t) except for points where f(t) is
discontinuous. Hence g′(t) is piecewise continuous on each finite interval and so, from Eq. (1.22),
L{f(t)} = L{g′(t)} = sL{g(t)} − g(0), s > c.
But g(0) = 0 and so
L{g(t)} = L{∫_0^t f(τ) dτ} = (1/s)L{f(t)}.
Example 2. Evaluate L⁻¹{1/(s²(s² + a²))}.
Now L⁻¹{1/(s² + a²)} = (1/a) sin at.
From Eq. (1.25), L⁻¹{1/(s(s² + a²))} = (1/a) ∫_0^t sin aτ dτ = (1/a²)(1 − cos at).
Applying Eq. (1.25) once more we obtain
L⁻¹{1/(s²(s² + a²))} = (1/a²) ∫_0^t (1 − cos aτ) dτ = (1/a²)(t − (sin at)/a).
Problem Set 1.8
1. Use Eq. (1.22) to find L{cos² at}.
2. Use Eq. (1.25) to find
(a) L⁻¹{1/(s² + s)}   (b) L⁻¹{(s − 2)/(s(s² + 4))}   (c) L⁻¹{1/(s⁴(s² + π²))}
1.9 Application to ordinary differential equations.
We now use Laplace Transform techniques to solve ordinary linear differential equations
with constant coefficients and with given initial conditions, i.e.,
y′′ + ay′ + by = f(t), y(0) = p, y′(0) = q.
We illustrate the procedure with several examples.
Example 1. Find the solution to the equation
y′′ + a²y = 0, y(0) = 0, y′(0) = 2.
Take the Laplace transform of the equation:
L{y′′} + a²L{y} = 0.
From Eq. (1.23), putting L{y} = Y(s), we obtain
s²Y(s) − sy(0) − y′(0) + a²Y(s) = 0,
i.e., (s² + a²)Y(s) = 2 (since y(0) = 0, y′(0) = 2).
∴ Y(s) = 2/(s² + a²).
Hence, y(t) = (2/a) sin at.
Example 2. Solve 16y′′ − 9y = 0, y(0) = 3, y′(0) = 3.75.
16L{y′′} − 9L{y} = 0,
i.e., 16[s²Y(s) − sy(0) − y′(0)] − 9Y(s) = 0,
i.e., (s² − 9/16)Y(s) = 3s + 3.75.
∴ Y(s) = 3s/(s² − 9/16) + 3.75/(s² − 9/16).
∴ y(t) = 3 cosh (3/4)t + 5 sinh (3/4)t.
Example 3. Solve y′′ + 2y′ + 17y = 0, y(0) = 0, y′(0) = 12.
L{y′′} + 2L{y′} + 17L{y} = 0,
s²Y(s) − sy(0) − y′(0) + 2[sY(s) − y(0)] + 17Y(s) = 0,
(s² + 2s + 17)Y(s) = 12.
∴ Y(s) = 12/((s + 1)² + 16).
∴ y(t) = 3e^{−t} sin 4t.
Example 4. Solve y′′ + 4y′ + 4y = 0, y(0) = 2, y′(0) = −3.
L{y′′} + 4L{y′} + 4L{y} = 0,
i.e., s²Y(s) − sy(0) − y′(0) + 4[sY(s) − y(0)] + 4Y(s) = 0,
(s² + 4s + 4)Y(s) = 2s − 3 + 8,
i.e., Y(s) = (2s + 5)/(s + 2)² = (2(s + 2) + 1)/(s + 2)² = 2/(s + 2) + 1/(s + 2)².
∴ y(t) = 2e^{−2t} + te^{−2t} = (t + 2)e^{−2t}.
Example 5. Solve y′′ − 2y′ + 10y = 0, y(0) = 3, y′(0) = 3.
L{y′′} − 2L{y′} + 10L{y} = 0,
i.e., s²Y(s) − sy(0) − y′(0) − 2[sY(s) − y(0)] + 10Y(s) = 0,
i.e., (s² − 2s + 10)Y(s) = 3s + 3 − 6 = 3(s − 1).
∴ Y(s) = 3(s − 1)/((s − 1)² + 9).
∴ y(t) = 3e^t cos 3t.
Example 6. Solve y′′ + y = 2, y(0) = 0, y′(0) = 3.
L{y′′} + L{y} = L{2} ⇒ s²Y(s) − sy(0) − y′(0) + Y(s) = 2/s.
∴ (s² + 1)Y(s) = 2/s + 3.
∴ Y(s) = 2/(s(s² + 1)) + 3/(s² + 1) = 2/s − 2s/(s² + 1) + 3/(s² + 1).
∴ y(t) = 2 − 2 cos t + 3 sin t.
Example 7. y′′ + 4y = 3 cos t, y(0) = 0, y′(0) = 0.
L{y′′} + 4L{y} = 3L{cos t} ⇒ s²Y(s) − sy(0) − y′(0) + 4Y(s) = 3s/(s² + 1),
i.e., (s² + 4)Y(s) = 3s/(s² + 1) ⇒ Y(s) = 3s/((s² + 1)(s² + 4)).
Partial fractions: Y(s) = s/(s² + 1) − s/(s² + 4).
∴ y(t) = cos t − cos 2t.
Example 8. y′′ − 4y = 8t² − 4, y(0) = 5, y′(0) = 10.
L{y′′} − 4L{y} = L{8t² − 4} ⇒ s²Y(s) − sy(0) − y′(0) − 4Y(s) = 16/s³ − 4/s.
(s² − 4)Y(s) = 16/s³ − 4/s + 5s + 10.
Y(s) = 16/(s³(s − 2)(s + 2)) − 4/(s(s − 2)(s + 2)) + 5(s + 2)/((s − 2)(s + 2))
= −1/s − 4/s³ + (1/2)/(s − 2) + (1/2)/(s + 2) + 1/s − (1/2)/(s − 2) − (1/2)/(s + 2) + 5/(s − 2)
= −4/s³ + 5/(s − 2).
∴ y(t) = −2t² + 5e^{2t}.
Example 9. y′′ + 2y′ + y = e^{−2t}, y(0) = 0, y′(0) = 0.
L{y′′} + 2L{y′} + L{y} = L{e^{−2t}},
i.e., s²Y(s) + 2sY(s) + Y(s) = 1/(s + 2) (because y(0) = y′(0) = 0).
∴ Y(s) = 1/((s + 2)(s + 1)²) = 1/(s + 2) − 1/(s + 1) + 1/(s + 1)².
∴ y(t) = e^{−2t} − e^{−t} + te^{−t}.
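The solution of Example 9 can be checked by transforming it back to Y(s). A minimal SymPy sketch (an aside; SymPy assumed available):

```python
# Checking Example 9: L{e^{-2t} - e^{-t} + t e^{-t}} = 1/((s + 2)(s + 1)^2).
from sympy import symbols, laplace_transform, exp, simplify

t = symbols('t', positive=True)
s = symbols('s', positive=True)

y = exp(-2*t) - exp(-t) + t*exp(-t)
Y = laplace_transform(y, t, s, noconds=True)
assert simplify(Y - 1/((s + 2)*(s + 1)**2)) == 0
```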
Example 10. Solve y′′ + 4y = 4(cos 2t − sin 2t), y(0) = 1, y′(0) = 3.
s²Y(s) − sy(0) − y′(0) + 4Y(s) = 4[s/(s² + 4) − 2/(s² + 4)],
i.e., (s² + 4)Y(s) = 4s/(s² + 4) − 8/(s² + 4) + s + 3.
∴ Y(s) = 4s/(s² + 4)² − 8/(s² + 4)² + s/(s² + 4) + 3/(s² + 4).
Now 4s/(s² + 4)² = −(2/(s² + 4))′ and 2/(s² + 4) = L{sin 2t}.
∴ L⁻¹{4s/(s² + 4)²} = t sin 2t.
Also (s/(s² + 4))′ = (4 − s²)/(s² + 4)² = −1/(s² + 4) + 8/(s² + 4)²,
i.e., L⁻¹{8/(s² + 4)²} = −t cos 2t + (1/2) sin 2t.
Hence y(t) = t sin 2t + t cos 2t − (1/2) sin 2t + cos 2t + (3/2) sin 2t
= t(sin 2t + cos 2t) + sin 2t + cos 2t
= (t + 1)(sin 2t + cos 2t).
Problem Set 1.9
Use Laplace transforms to solve the following initial value problems.
1. y′ − y = 1, y(0) = 0.
2. y′ + 4y = e−4t, y(0) = 2.
3. y′′ + 5y′ + 4y = 0, y(0) = 1, y′(0) = 0.
4. y′′ − 6y′ + 13y = 0, y(0) = 0, y′(0) = −3.
5. y′′ − 6y′ + 9y = t, y(0) = 0, y′(0) = 1.
6. y′′ − 4y′ + 4y = t3, y(0) = 1, y′(0) = 0.
7. y′′ − 4y′ + 4y = t3e2t, y(0) = 0, y′(0) = 0.
8. y′′ − 2y′ + 5y = 1 + t, y(0) = 0, y′(0) = 4.
9. y′′ + y = sin t, y(0) = 1, y′(0) = −1.
10. y′′ + 16y = 1, y(0) = 1, y′(0) = 2.
11. y′′ − y′ = et cos t, y(0) = 0, y′(0) = 0.
12. y′′ − 2y′ = et sinh t, y(0) = 0, y′(0) = 0.
1.10 Discontinuous forcing functions.
In some engineering problems we have to deal with differential equations in which
the forcing function, i.e., the term on the right-hand side of the differential equation, is
discontinuous. We illustrate this by a number of examples.
Example 1. Solve y′′ + 4y = g(t), y(0) = 0, y′(0) = 0,
where g(t) = { t, 0 ≤ t < π/2;  π/2, t ≥ π/2 },
i.e., g(t) = t − tu_{π/2}(t) + (π/2)u_{π/2}(t) = t − (t − π/2)u_{π/2}(t).
∴ L{g(t)} = L{t} − L{(t − π/2)u_{π/2}(t)} = 1/s² − (1/s²)e^{−πs/2}.
Hence, taking the Laplace transform of the differential equation we obtain
s²Y(s) − sy(0) − y′(0) + 4Y(s) = (1/s²)(1 − e^{−πs/2}),
i.e., (s² + 4)Y(s) = (1/s²)(1 − e^{−πs/2}),
i.e., Y(s) = (1/(s²(s² + 4)))(1 − e^{−πs/2}) = (1/4)(1/s² − 1/(s² + 4))(1 − e^{−πs/2}).
Now L⁻¹{1/s² − 1/(s² + 4)} = t − (1/2) sin 2t,
so L⁻¹{(1/s² − 1/(s² + 4))e^{−πs/2}} = [(t − π/2) − (1/2) sin 2(t − π/2)]u_{π/2}(t)
= (t − π/2 + (1/2) sin 2t)u_{π/2}(t).
∴ y = (1/4)t − (1/8) sin 2t − (1/4)(t − π/2 + (1/2) sin 2t)u_{π/2}(t).
Example 2. Solve y″ + y = f(t), y(0) = 0, y′(0) = 1, where

f(t) = { 1,   0 ≤ t < π/2,
       { 0,   t ≥ π/2,

i.e., f(t) = u₀(t) − u_{π/2}(t).

∴ L{f} = 1/s − (1/s)e^{−πs/2}.

∴ s²Y(s) − sy(0) − y′(0) + Y(s) = (1/s)(1 − e^{−πs/2}),

(s² + 1)Y(s) = 1 + (1/s)(1 − e^{−πs/2}),

Y(s) = 1/(s² + 1) + [1/(s(s² + 1))](1 − e^{−πs/2})

= 1/(s² + 1) + [1/s − s/(s² + 1)](1 − e^{−πs/2}).

Now L⁻¹{1/s − s/(s² + 1)} = 1 − cos t.

∴ L⁻¹{[1/s − s/(s² + 1)]e^{−πs/2}} = [1 − cos(t − π/2)]u_{π/2}(t) = (1 − sin t)u_{π/2}(t).

∴ y(t) = sin t + 1 − cos t + (1 − sin t)u_{π/2}(t).
Example 3. Solve y″ + 4y = sin t − u_{2π}(t) sin(t − 2π), y(0) = 0, y′(0) = 0.

∴ s²Y(s) − sy(0) − y′(0) + 4Y(s) = 1/(s² + 1) − [1/(s² + 1)]e^{−2πs},

∴ (s² + 4)Y(s) = [1/(s² + 1)](1 − e^{−2πs}),

Y(s) = [1/((s² + 1)(s² + 4))](1 − e^{−2πs}) = (1/3)[1/(s² + 1) − 1/(s² + 4)](1 − e^{−2πs}).

∴ y(t) = (1/3)[sin t − (1/2) sin 2t] − (1/3)[sin(t − 2π) − (1/2) sin 2(t − 2π)]u_{2π}(t)

= (1/3)[sin t − (1/2) sin 2t](1 − u_{2π}(t)).
Example 4. Solve y″ + y′ + (5/4)y = g(t), y(0) = 0, y′(0) = 0, where

g(t) = { sin t,   0 ≤ t < π,
       { 0,       t ≥ π.

g(t) = [u₀(t) − u_π(t)] sin t = sin t + u_π(t) sin(t − π)   (using sin(t − π) = −sin t).

∴ L{g(t)} = 1/(s² + 1) + [1/(s² + 1)]e^{−πs}.

∴ s²Y(s) − sy(0) − y′(0) + sY(s) − y(0) + (5/4)Y(s) = [1/(s² + 1)](1 + e^{−πs}),

i.e., (s² + s + 5/4)Y(s) = [1/(s² + 1)](1 + e^{−πs}).

∴ Y(s) = [1/((s² + 1)(s² + s + 5/4))](1 + e^{−πs})

= (1/17)[(−16s + 4)/(s² + 1) + (16s + 12)/((s + 1/2)² + 1)](1 + e^{−πs})

= [−(16/17)·s/(s² + 1) + (4/17)·1/(s² + 1) + (16/17)·(s + 1/2)/((s + 1/2)² + 1)
+ (4/17)·1/((s + 1/2)² + 1)](1 + e^{−πs}).

∴ y(t) = −(16/17) cos t + (4/17) sin t + (16/17)e^{−t/2} cos t + (4/17)e^{−t/2} sin t
+ [−(16/17) cos(t − π) + (4/17) sin(t − π) + (16/17)e^{−(t−π)/2} cos(t − π)
+ (4/17)e^{−(t−π)/2} sin(t − π)]u_π(t),

i.e., y(t) = (4/17)(−4 cos t + sin t + 4e^{−t/2} cos t + e^{−t/2} sin t)
+ (4/17)(4 cos t − sin t − 4e^{−(t−π)/2} cos t − e^{−(t−π)/2} sin t)u_π(t).
Example 5. Solve y″ + 3y′ + 2y = u₂(t), y(0) = 0, y′(0) = 1.

s²Y(s) − sy(0) − y′(0) + 3sY(s) − 3y(0) + 2Y(s) = (1/s)e^{−2s}.

∴ (s² + 3s + 2)Y(s) = 1 + (1/s)e^{−2s}.

∴ Y(s) = 1/[(s + 1)(s + 2)] + {1/[s(s + 1)(s + 2)]}e^{−2s}

= 1/(s + 1) − 1/(s + 2) + [(1/2)/s − 1/(s + 1) + (1/2)/(s + 2)]e^{−2s}.

∴ y(t) = e^{−t} − e^{−2t} + [1/2 − e^{−(t−2)} + (1/2)e^{−2(t−2)}]u₂(t).
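Problems with discontinuous forcing can also be cross-checked numerically. A sketch for Example 5, assuming NumPy and SciPy: the IVP is integrated directly and compared with the closed-form answer obtained above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Example 5: y'' + 3y' + 2y = u_2(t),  y(0) = 0,  y'(0) = 1
def rhs(t, z):
    y, yp = z
    return [yp, -3*yp - 2*y + (1.0 if t >= 2 else 0.0)]

sol = solve_ivp(rhs, (0, 6), [0.0, 1.0], dense_output=True, max_step=0.01)

def y_exact(t):
    t = np.asarray(t, dtype=float)
    u2 = (t >= 2).astype(float)
    return (np.exp(-t) - np.exp(-2*t)
            + (0.5 - np.exp(-(t - 2)) + 0.5*np.exp(-2*(t - 2))) * u2)

ts = np.linspace(0, 6, 301)
err = np.max(np.abs(sol.sol(ts)[0] - y_exact(ts)))
print(err)
```

The small `max_step` keeps the integrator accurate across the jump in the forcing at t = 2.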
Problem Set 1.10
Use Laplace transforms to solve the following initial value problems
1. y″ + 4y = f(t), where f(t) = { 1, 0 ≤ t < 1; 0, t ≥ 1 },   y(0) = 0, y′(0) = −1.
2. y′′ + 4y = u2π(t) sin t, y(0) = 1, y′(0) = 0.
3. y′′ − 5y′ + 6y = u1(t), y(0) = 0, y′(0) = 1.
4. y′′ + 4y′ + 3y = 1 − u2(t) − u4(t) + u6(t), y(0) = 0, y′(0) = 0.
5. y′′ + y = uπ(t), y(0) = 1, y′(0) = 0.
6. y(4) + 5y′′ + 4y = 1 − uπ(t), y(0) = y′(0) = y′′(0) = y′′′(0) = 0.
1.11 Periodic functions.
Let f(t) be a function which is defined for all positive t and has period T (> 0), i.e.,

f(t + T) = f(t) for all t > 0.   (1.26)

Examples of periodic functions are sin t and cos t, together with functions such as those shown in the two figures: a periodic wave of height k and period a, and a square wave alternating between 1 and −1 with period 3.
Theorem: Let f(t) be piecewise continuous on [0,∞) and of exponential order. If f(t) is
periodic with period T , then
L{f(t)} = [1/(1 − e^{−sT})] ∫₀ᵀ e^{−st} f(t) dt.
Proof:
L{f(t)} = ∫₀^∞ e^{−st} f(t) dt = ∫₀ᵀ e^{−st} f(t) dt + ∫_T^∞ e^{−st} f(t) dt.   (∗)

Now put t = u + T in the last integral, which then becomes

∫₀^∞ e^{−s(u+T)} f(u + T) du.

But f(u + T) = f(u) because f is periodic, so this integral is

e^{−sT} ∫₀^∞ e^{−su} f(u) du = e^{−sT} L{f(t)}.

Hence, (∗) becomes

L{f(t)} = ∫₀ᵀ e^{−st} f(t) dt + e^{−sT} L{f(t)}.

∴ L{f(t)} = [1/(1 − e^{−sT})] ∫₀ᵀ e^{−st} f(t) dt.
Note that this can be written as

L{f(t)} = L{f_T(t)}/(1 − e^{−sT}),

where

f_T(t) = { f(t),  0 ≤ t < T,
         { 0,     t ≥ T,

is called the window of length T for the function f(t).
Example 1. Evaluate L{f(t)} where

f(t) = π − t (0 ≤ t < 2π),   f(t + 2π) = f(t),

as in the diagram: a sawtooth wave that falls linearly from π to −π over each period of length 2π.

L{f(t)} = [1/(1 − e^{−2πs})] ∫₀^{2π} e^{−st}(π − t) dt

= (1 − e^{−2πs})⁻¹ { [−(π − t)e^{−st}/s]₀^{2π} − ∫₀^{2π} (1/s)e^{−st} dt }

= (1 − e^{−2πs})⁻¹ { (π/s)e^{−2πs} + π/s + (1/s²)[e^{−st}]₀^{2π} }

= (1 − e^{−2πs})⁻¹ { π/s + (π/s)e^{−2πs} + (1/s²)e^{−2πs} − 1/s² }.
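The periodic-function formula can be checked numerically against the defining integral ∫₀^∞ e^{−st}f(t) dt truncated at a large upper limit. A sketch for the sawtooth of Example 1, assuming SciPy:

```python
import numpy as np
from scipy.integrate import quad

# Sawtooth of Example 1: f(t) = pi - (t mod 2*pi), period T = 2*pi
T = 2*np.pi
f = lambda u: np.pi - (u % T)
s = 1.5   # any fixed s > 0

# Periodic-function formula: window integral divided by (1 - e^{-sT})
window, _ = quad(lambda u: np.exp(-s*u)*f(u), 0, T)
F_formula = window / (1 - np.exp(-s*T))

# Defining integral, truncated where e^{-st} is negligible
F_direct, _ = quad(lambda u: np.exp(-s*u)*f(u), 0, 40*T, limit=500)

print(F_formula, F_direct)
```

Both values also agree with the closed-form transform just derived, evaluated at s = 1.5.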
Example 2. Evaluate L{f(t)} where

f(t) = |sin at|,  0 ≤ t < π/a,   f(t + π/a) = f(t).

This is known as the rectified sine wave or the full-wave rectification of sin at. The period is π/a, with window g(t) = sin at, 0 ≤ t < π/a, as illustrated in the figure.

L{f(t)} = [1/(1 − e^{−πs/a})] ∫₀^{π/a} e^{−st} sin at dt

= (1 − e^{−πs/a})⁻¹ [ e^{−st}(−s sin at − a cos at)/(s² + a²) ]₀^{π/a}

= a(e^{−πs/a} + 1)/[(1 − e^{−πs/a})(s² + a²)] = a coth(πs/2a)/(s² + a²).
Example 3. Evaluate L{f(t)} where

f(t) = t² (0 ≤ t < 2π),   f(t + 2π) = f(t).

L{f(t)} = (1 − e^{−2πs})⁻¹ ∫₀^{2π} e^{−st} t² dt.

Now

∫₀^{2π} e^{−st} t² dt = L{[1 − u_{2π}(t)]t²}

= L{t² − u_{2π}(t)[(t − 2π)² + 4π(t − 2π) + 4π²]}

= 2/s³ − e^{−2πs}(2/s³ + 4π/s² + 4π²/s).

∴ L{f(t)} = (1 − e^{−2πs})⁻¹ [2/s³ − (2/s³)e^{−2πs} − (4π/s²)e^{−2πs} − (4π²/s)e^{−2πs}].
Example 4. An L-R series circuit has L = 1 henry, R = 1 ohm, E(t) given by

E(t) = t (0 ≤ t < 1),   E(t + 1) = E(t),

and i(0) = 0.

The differential equation is L di/dt + Ri = E(t), i.e., di/dt + i = E(t).

Taking the Laplace transform of E(t) yields

L{E(t)} = [1/(1 − e^{−s})] ∫₀¹ t e^{−st} dt = [1/(1 − e^{−s})] [ −(1/s)e^{−s} − (1/s²)(e^{−s} − 1) ]

= −(1/s)·e^{−s}/(1 − e^{−s}) + 1/s².

The Laplace transform of the differential equation is

sI(s) − i(0) + I(s) = L{E(t)},

i.e., I(s) = 1/[s²(s + 1)] − {1/[s(s + 1)]}·e^{−s}/(1 − e^{−s}).

Now

1/[s²(s + 1)] = −1/s + 1/s² + 1/(s + 1) = L{−1 + t + e^{−t}},

1/[s(s + 1)] = 1/s − 1/(s + 1) = L{1 − e^{−t}},

and

e^{−s}/(1 − e^{−s}) = e^{−s}(1 + e^{−s} + e^{−2s} + e^{−3s} + ⋯).   (Geometric series)

∴ {1/[s(s + 1)]}·e^{−s}/(1 − e^{−s}) = [1/s − 1/(s + 1)](e^{−s} + e^{−2s} + e^{−3s} + e^{−4s} + ⋯)

= L{[u₁(t) + u₂(t) + u₃(t) + u₄(t) + ⋯] − [e^{−(t−1)}u₁(t) + e^{−(t−2)}u₂(t) + e^{−(t−3)}u₃(t) + ⋯]}.

Hence, i(t) = −1 + t + e^{−t} − [u₁(t) + u₂(t) + u₃(t) + ⋯]
+ [e^{−(t−1)}u₁(t) + e^{−(t−2)}u₂(t) + e^{−(t−3)}u₃(t) + ⋯].
Problem Set 1.11
Find the Laplace transforms of the given periodic functions.
1. f(t) = { 1, 0 ≤ t < a; −1, a ≤ t < 2a },   f(t + 2a) = f(t).

2. f(t) = { t, 0 ≤ t < 1; 2 − t, 1 ≤ t < 2 },   f(t + 2) = f(t).

3. f(t) = { sin t, 0 ≤ t < π; 0, π ≤ t < 2π },   f(t + 2π) = f(t).

4. f(t) = eᵗ, 0 ≤ t < 2π,   f(t + 2π) = f(t).

5. Find the steady-state current in the L-R circuit shown, driven by the periodic voltage E(t) of amplitude E₀ and period 2a sketched in the figure.
1.12 Impulse functions.
We often have to look at systems (mechanical or electrical) in which the external force or voltage is of large magnitude but acts only for a very short time; for example, a very high voltage may be applied to a circuit and then switched off almost immediately.
Consider the function defined by

δ_a(t − t₀) = { 1/(2a),  t₀ − a < t < t₀ + a,
             { 0,        t ≤ t₀ − a or t ≥ t₀ + a,

as shown in the first diagram: a rectangular pulse of height 1/(2a) centred at t₀. The second diagram shows how the function changes as a gets smaller and smaller: the pulse becomes taller and narrower.

If a is small, then δ_a(t − t₀) has a large constant magnitude for a short period of time around t₀. Note that the integral

I(a) = ∫_{t₀−a}^{t₀+a} δ_a(t − t₀) dt = ∫_{t₀−a}^{t₀+a} [1/(2a)] dt = 1,

and I(a) can be written as

I(a) = ∫_{−∞}^{∞} δ_a(t − t₀) dt = 1,

since the function is zero outside (t₀ − a, t₀ + a). The function δ_a(t − t₀) is the unit impulse.

We define the "function" δ(t − t₀) by

δ(t − t₀) = lim_{a→0} δ_a(t − t₀).

The quantity δ(t − t₀) is called the Dirac delta function and is an example of a generalized function. It has the following properties:

δ(t − t₀) = { ∞, t = t₀; 0, t ≠ t₀ },   ∫_{−∞}^{∞} δ(t − t₀) dt = 1.   (1.27)

In particular, when t₀ = 0, we have

δ(t) = { ∞, t = 0; 0, t ≠ 0 },   ∫_{−∞}^{∞} δ(t) dt = 1.   (1.28)
To find the Laplace transform of δ(t − t₀) we first find the Laplace transform of δ_a(t − t₀):

L{δ_a(t − t₀)} = ∫₀^∞ e^{−st} δ_a(t − t₀) dt = ∫_{t₀−a}^{t₀+a} e^{−st}·[1/(2a)] dt

= [ −e^{−st}/(2as) ]_{t₀−a}^{t₀+a} = [1/(2as)]e^{−st₀}(e^{as} − e^{−as}) = (sinh as/as) e^{−st₀}.

Now let a → 0; then lim_{a→0} (sinh as)/(as) = 1 (l'Hôpital's rule), so

L{δ(t − t₀)} = lim_{a→0} L{δ_a(t − t₀)}

becomes

L{δ(t − t₀)} = e^{−st₀}.   (1.29)

When t₀ = 0, we have

L{δ(t)} = 1.   (1.30)

Note the following result:

∫_{−∞}^{∞} f(t)δ(t − t₀) dt = f(t₀),   (1.31)

from which it follows that

L{f(t)δ(t − t₀)} = f(t₀)e^{−st₀}.   (1.32)
Example 1. Solve y″ + 2y′ + 2y = δ(t − π), y(0) = 1, y′(0) = 0.

Taking Laplace transforms,

s²Y(s) − s + 2[sY(s) − 1] + 2Y(s) = e^{−πs},

i.e., (s² + 2s + 2)Y(s) = s + 2 + e^{−πs},

Y(s) = (s + 1)/[(s + 1)² + 1] + 1/[(s + 1)² + 1] + {1/[(s + 1)² + 1]}e^{−πs}.

∴ y(t) = e^{−t} cos t + e^{−t} sin t + e^{−(t−π)} sin(t − π)u_π(t),

i.e., y(t) = e^{−t} cos t + e^{−t} sin t [1 − e^π u_π(t)].

Example 2. Solve y″ + 4y = δ(t − π) − δ(t − 2π), y(0) = y′(0) = 0.

s²Y(s) + 4Y(s) = e^{−πs} − e^{−2πs},

Y(s) = [1/(s² + 4)](e^{−πs} − e^{−2πs}),

y(t) = (1/2) sin 2(t − π)u_π(t) − (1/2) sin 2(t − 2π)u_{2π}(t)

= (1/2) sin 2t [u_π(t) − u_{2π}(t)].

Example 3. Solve y″ + y = δ(t − π) cos t, y(0) = 0, y′(0) = 1.

s²Y − 1 + Y = e^{−πs} cos π = −e^{−πs}.

∴ Y(s) = [1/(s² + 1)](1 − e^{−πs}).

∴ y(t) = sin t − sin(t − π)u_π(t) = sin t [1 + u_π(t)].
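Transforms involving impulses can be inverted mechanically with a computer algebra system. A sketch of Example 2 using SymPy (assumed available):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Example 2: (s**2 + 4) Y(s) = e^{-pi s} - e^{-2 pi s}
Y = (sp.exp(-sp.pi*s) - sp.exp(-2*sp.pi*s))/(s**2 + 4)
y = sp.inverse_laplace_transform(Y, s, t)
print(y)   # (1/2) sin 2t switched on between pi and 2*pi, via Heavisides
```

The Heaviside factors in the result reproduce u_π(t) − u_{2π}(t) from the hand computation.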
Problem Set 1.12
Solve the following initial value problems
1. y′′ + y = δ(t − 2π) , y(0) = 0 , y′(0) = 1.
2. y′′ + 2y′ + 3y = sin t + δ(t − π) , y(0) = 0 , y′(0) = 1.
3. y″ + y = u_{π/2}(t) + δ(t − π) − u_{3π/2}(t),   y(0) = 0, y′(0) = 0.

4. y″ + 4y = 4δ(t − π/6) sin t,   y(0) = 0, y′(0) = 0.
5. y′′ − 2y = 1 + δ(t − 2) , y(0) = 0 , y′(0) = 1.
6. y(4) − y = δ(t − 1) , y(0) = y′(0) = y′′(0) = y′′′(0) = 0.
1.13 The convolution integral.
The inverse of the product of two Laplace transforms is not the product of the separate
inverses, i.e.,
L⁻¹{F(s)G(s)} ≠ L⁻¹{F(s)}·L⁻¹{G(s)}.
The inverse of the product is given by the following theorem:
Theorem: If F (s) = L{f(t)} and G(s) = L{g(t)} both exist for s > a ≥ 0, then
H(s) = F(s)G(s) = L{h(t)},   s > a,

where

h(t) = ∫₀ᵗ f(t − τ)g(τ) dτ = ∫₀ᵗ f(τ)g(t − τ) dτ.   (1.33)
The function h(t) is the convolution of f(t) and g(t) and the integrals defining h(t) are
the convolution integrals.
We write
h(t) = (f ∗ g)(t)
so that h(t) is a “generalized product”.
Proof:
Let

F(s) = ∫₀^∞ e^{−sξ} f(ξ) dξ,   G(s) = ∫₀^∞ e^{−sη} g(η) dη;

then

F(s)G(s) = ∫₀^∞ e^{−sξ} f(ξ) dξ · ∫₀^∞ e^{−sη} g(η) dη = ∫₀^∞ g(η) dη ∫₀^∞ e^{−s(ξ+η)} f(ξ) dξ.

Put ξ = t − η (η fixed) in the second integral and let η = τ; then

F(s)G(s) = ∫₀^∞ g(τ) dτ ∫_τ^∞ e^{−st} f(t − τ) dt.

Reversing the order of integration, this becomes

F(s)G(s) = ∫₀^∞ e^{−st} dt ∫₀ᵗ f(t − τ)g(τ) dτ = ∫₀^∞ e^{−st} h(t) dt = L{h(t)}.
Example 1. Evaluate L⁻¹{1/[s²(s² + 1)]}.

Choose F(s) = 1/s², G(s) = 1/(s² + 1), so that f(t) = t, g(t) = sin t.

Then

h(t) = ∫₀ᵗ (t − τ) sin τ dτ = [−(t − τ) cos τ]₀ᵗ − ∫₀ᵗ cos τ dτ = t − sin t.

Alternatively, we could have written

h(t) = ∫₀ᵗ τ sin(t − τ) dτ,

which leads to the same result.
Example 2. Evaluate L⁻¹{1/(s² + a²)²}.

Choose F(s) = G(s) = 1/(s² + a²). Then f(t) = g(t) = (1/a) sin at, and

h(t) = (1/a²) ∫₀ᵗ sin a(t − τ) sin aτ dτ.

Using the identity sin A sin B = (1/2)[cos(A − B) − cos(A + B)] with A = aτ, B = a(t − τ),

h(t) = [1/(2a²)] ∫₀ᵗ [cos a(2τ − t) − cos at] dτ

= [1/(2a²)] [ (1/2a) sin a(2τ − t) − τ cos at ]₀ᵗ

= [1/(2a²)] [ (1/2a) sin at − t cos at − (1/2a) sin(−at) ]

= [1/(2a³)](sin at − at cos at).
Example 3. Find the Laplace transform of
f(t) = ∫₀ᵗ (t − τ)² cos 2τ dτ.
L{f(t)} = L{t²}L{cos 2t} = (2/s³)·[s/(s² + 4)] = 2/[s²(s² + 4)].
Example 4. Find L⁻¹{[1/(s² + 1)]F(s)}.

Let 1/(s² + 1) = G(s); then g(t) = sin t, and

L⁻¹{[1/(s² + 1)]F(s)} = h(t) = ∫₀ᵗ sin(t − τ)f(τ) dτ,

where f(t) = L⁻¹{F(s)}.
Example 5. Express in terms of a convolution integral the solution of the initial value problem

y″ + 4y′ + 4y = g(t),   y(0) = 2,   y′(0) = −3.

s²Y(s) − 2s + 3 + 4sY(s) − 8 + 4Y(s) = G(s).

Y(s) = (2s + 5)/(s + 2)² + G(s)/(s + 2)² = 2/(s + 2) + 1/(s + 2)² + G(s)/(s + 2)².

∴ y(t) = 2e^{−2t} + te^{−2t} + ∫₀ᵗ (t − τ)e^{−2(t−τ)} g(τ) dτ.
Example 6. Find L⁻¹{1/[s⁴(s² + 1)]}.

Let G(s) = 1/(s² + 1) and F(s) = 1/s⁴, with H(s) = F(s)G(s). Then g(t) = sin t and f(t) = (1/6)t³, so

h(t) = ∫₀ᵗ (1/6)τ³ sin(t − τ) dτ.

Integrating by parts we obtain

h(t) = (1/6)t³ − t + sin t.
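Both the convolution integral of Example 6 and the convolution theorem itself can be checked in SymPy (a sketch, assuming SymPy is available):

```python
import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)

# Example 6: convolve f(t) = t**3/6 with g(t) = sin t directly
h = sp.integrate(tau**3/6 * sp.sin(t - tau), (tau, 0, t))
print(sp.simplify(h))   # should reduce to t**3/6 - t + sin t

# Cross-check: L{h} should equal 1/(s**4*(s**2 + 1))
H = sp.laplace_transform(sp.simplify(h), t, s, noconds=True)
print(sp.simplify(H - 1/(s**4*(s**2 + 1))))   # should be 0
```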
Problem Set 1.13
Evaluate the following using the convolution theorem.
1. L{∫₀ᵗ (t − τ)e^τ dτ}

2. L{∫₀ᵗ sin(t − τ) cos τ dτ}

3. L{∫₀ᵗ (t − τ) cos τ dτ}

4. L{∫₀ᵗ τ sin τ dτ}

5. L{t² ∗ t³}

6. L⁻¹{[1/(s + 5)]F(s)}

7. L⁻¹{[s/(s² + 4)]F(s)}

8. L⁻¹{1/[s(s + 1)]}

9. L⁻¹{s/(s² + 4)²}

10. L⁻¹{1/(s + 1)²}

11. L⁻¹{1/(s² + 4s + 5)²}

12. L⁻¹{1/[(s − 3)(s² + 4)]}

13. Compute cos t ∗ cos t and thus show that f ∗ f is not necessarily non-negative.
14. Solve y′′ + 3y′ + 2y = cos t , y(0) = 1 , y′(0) = 0. Leave the answer in terms of a
convolution integral.
Chapter 2
SYSTEMS OF FIRST ORDER
LINEAR EQUATIONS
2.1 Introduction.
Systems of ordinary differential equations play a very important role in applied mathematics. We shall illustrate the application of systems with two examples. In these examples, and throughout this chapter, x₁, x₂, x₃, … represent dependent variables which are functions of the independent variable t, and a prime denotes differentiation with respect to t.
Example 1. Two masses m1 and m2 are connected to two springs A and B of
negligible mass with spring constants k1 and k2, respectively. Let
x1(t) and x2(t) denote the vertical displacements of the masses
from their equilibrium positions. When the system is in motion
the spring B is subject to both an elongation and a compression;
the net elongation is x2−x1. Hence, from Hooke’s law, the springs
exert forces −k1x1 and k2(x2 − x1) on m1 and −k2(x2 − x1) on
m2. Hence, the differential equations of the system are
(Diagram: spring A, with constant k₁, hangs from a fixed support and carries mass m₁; spring B, with constant k₂, hangs from m₁ and carries mass m₂ below it.)
m₁ d²x₁/dt² = −k₁x₁ + k₂(x₂ − x₁)
m₂ d²x₂/dt² = −k₂(x₂ − x₁),

i.e., m₁x₁″ = −(k₁ + k₂)x₁ + k₂x₂
      m₂x₂″ = k₂x₁ − k₂x₂.   (2.1)
Such a system may be supplemented by initial conditions, e.g., information that the
masses start from their equilibrium positions with certain velocities.
The system (2.1) is a linear system of the second order, since second derivatives of the
variables appear.
Example 2. Consider an electrical network with more than one loop, as shown in the diagram: a source E(t) drives a current i₁(t) through a resistor R₁, and at the branch point B₁ the current splits into i₂(t) and i₃(t). From Kirchhoff's first law,

i₁(t) = i₂(t) + i₃(t).

Applying Kirchhoff's second law to each loop, the voltage drops across each part of the loop give the following system of differential equations:

E(t) = i₁R₁ + L₁i₂′ + i₂R₂
E(t) = i₁R₁ + L₂i₃′.
Eliminating i₁ from these equations we obtain

L₁i₂′ = −(R₁ + R₂)i₂ − R₁i₃ + E(t)
L₂i₃′ = −R₁i₂ − R₁i₃ + E(t).
This system can be supplemented by initial conditions such as i2(0) = 0 , i3(0) = 0. This
is a first order linear system.
2.2 Basic theory of systems of first order linear equations.
We first note that any nth order differential equation of the form
y(n) = F (t, y, y′, . . . , y(n−1)) (2.2)
can be reduced to a system of n first-order equations of a special form. Introduce the
variables x1, x2, . . . , xn defined by
x1 = y, x2 = y′, x3 = y′′, . . . , xn = y(n−1). (2.3)
Then Eqs. (2.2) and (2.3) can be written in the form of a system:
x₁′ = x₂
x₂′ = x₃
⋮            (2.4)
x′_{n−1} = xₙ
xₙ′ = F(t, x₁, x₂, …, xₙ).
Example 1. Consider the differential equation
y′′ + 3y′ + 2y = 0.
Put x₁ = y, x₂ = y′, and the equation becomes

x₂′ = −3x₂ − 2x₁,  with x₂ = x₁′,

i.e., we have the system

x₁′ = x₂
x₂′ = −2x₁ − 3x₂,

which can be written in vector-matrix form as x′ = Ax, where

x = [x₁; x₂]  and  A = [0 1; −2 −3]

(columns and matrices written with semicolons separating rows).
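The reduction can be verified numerically: integrating the first-order system reproduces the solution of the scalar equation, which for y(0) = 1, y′(0) = 0 is y = 2e^{−t} − e^{−2t}. A sketch assuming SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp

# x1 = y, x2 = y', so y'' + 3y' + 2y = 0 becomes x' = Ax
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
sol = solve_ivp(lambda t, x: A @ x, (0, 5), [1.0, 0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)

# Scalar solution with y(0) = 1, y'(0) = 0 is y = 2e^{-t} - e^{-2t}
ts = np.linspace(0, 5, 50)
err = np.max(np.abs(sol.sol(ts)[0] - (2*np.exp(-ts) - np.exp(-2*ts))))
print(err)
```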
The system (2.4) is a special case of the more general system
x₁′ = F₁(t, x₁, …, xₙ)
⋮            (2.5)
xₙ′ = Fₙ(t, x₁, …, xₙ).
However, we are interested in systems in which each of the functions F1, . . . , Fn is linear in
the variables x1, . . . , xn; such a system is said to be linear. The most general system of n
linear first order equations has the canonical form
x₁′ = a₁₁(t)x₁ + ⋯ + a₁ₙ(t)xₙ + f₁(t)
⋮            (2.6)
xₙ′ = aₙ₁(t)x₁ + ⋯ + aₙₙ(t)xₙ + fₙ(t).
If the functions f1(t), . . . , fn(t) are all zero the system is said to be homogeneous; otherwise
the system is nonhomogeneous. In addition to the system of equations, there may also be
given initial conditions of the form

x₁(t₀) = x₁⁰,  x₂(t₀) = x₂⁰,  …,  xₙ(t₀) = xₙ⁰.   (2.7)
Theorem: If the functions aij(t) , fi(t) (i, j = 1, . . . , n) are continuous on an open interval
α < t < β, containing the point t = t0, then there exists a unique solution x1, . . . , xn of the
system of differential equations (2.6) which also satisfies the initial conditions (2.7). This
solution is valid throughout the interval α < t < β.
We write the system (2.6) in vector-matrix form, i.e.,
x′ = Ax + f(t), (2.8)
where

x = [x₁; …; xₙ],   A = [a₁₁ … a₁ₙ; … ; aₙ₁ … aₙₙ],   f = [f₁(t); …; fₙ(t)].
A vector x is said to be a solution of the system (2.8) if its components satisfy the system
(2.6). We assume that A and f(t) are continuous, i.e., all of their components are continuous,
on some interval α < t < β. From the last theorem this guarantees the existence of a solution
on the interval α < t < β.
We commence by considering the homogeneous system
x′ = Ax, (2.9)
i.e., system (2.8) with f(t) = 0. Specific solutions of this system will be denoted by
x(1)(t),x(2)(t), . . . ,x(k)(t).
Theorem (Principle of Superposition): If the vector functions x(1),x(2), . . . ,x(k) are
solutions of the system (2.9), then the linear combination
x = c₁x⁽¹⁾(t) + c₂x⁽²⁾(t) + ⋯ + cₖx⁽ᵏ⁾(t)

is also a solution for any constants c₁, …, cₖ.
Example 2. Consider the system
x′ = [3 2; 1 2]x.

It can be shown that

x⁽¹⁾(t) = [1; −1]eᵗ  and  x⁽²⁾(t) = [2; 1]e⁴ᵗ

are solutions of this system. From the above theorem, the vector

x = c₁[1; −1]eᵗ + c₂[2; 1]e⁴ᵗ = c₁x⁽¹⁾(t) + c₂x⁽²⁾(t)

also satisfies the system. Check this!
Definition. The set of solutions x⁽¹⁾, …, x⁽ᵏ⁾ is said to be linearly dependent on some interval α < t < β if there exist constants c₁, …, cₖ, not all zero, such that

c₁x⁽¹⁾ + ⋯ + cₖx⁽ᵏ⁾ = 0

for every t in the interval. Otherwise the vectors are said to be linearly independent.
Suppose that x⁽¹⁾, …, x⁽ⁿ⁾ are n solutions of the nth order system (2.9). We write

x⁽ⁱ⁾ = [x₁ᵢ; x₂ᵢ; …; xₙᵢ],

i.e., xₘᵢ is the mth component of the ith solution. Form a matrix by writing each solution vector as a column of the matrix, i.e.,

Φ(t) = [x₁₁(t) x₁₂(t) … x₁ₙ(t); … ; xₙ₁(t) xₙ₂(t) … xₙₙ(t)].   (2.10)

Now the columns of Φ(t) are linearly independent for a given value of t if and only if det Φ(t) ≠ 0 for that value of t. This determinant is denoted by W(x⁽¹⁾, …, x⁽ⁿ⁾) and is called the Wronskian of the n solutions x⁽¹⁾, …, x⁽ⁿ⁾, i.e.,

W = det Φ(t).

Hence, the solutions x⁽¹⁾, …, x⁽ⁿ⁾ are linearly independent at a point if and only if W ≠ 0 at that point.

In fact, it can be shown that, if x⁽¹⁾, …, x⁽ⁿ⁾ are solution vectors of the system, then either

W(x⁽¹⁾, …, x⁽ⁿ⁾) ≠ 0 for every t in α < t < β,

or

W(x⁽¹⁾, …, x⁽ⁿ⁾) = 0 for every t in the interval.

Hence, if we can show that W ≠ 0 for some t₀ in α < t < β, then W ≠ 0 for every t and so the solutions are linearly independent on the interval.
Example 3. Consider the system of Example 2. The solutions

x⁽¹⁾ = [1; −1]eᵗ,   x⁽²⁾ = [2; 1]e⁴ᵗ

are clearly linearly independent on (−∞, ∞) since neither vector is a constant multiple of the other. The Wronskian is given by

W(x⁽¹⁾, x⁽²⁾) = det [eᵗ 2e⁴ᵗ; −eᵗ e⁴ᵗ] = 3e⁵ᵗ ≠ 0

for all real values of t.
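Both the solution property and the Wronskian are easy to verify symbolically; a SymPy sketch for Examples 2 and 3 (assuming SymPy is available):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[3, 2], [1, 2]])

# Columns are the solutions x1 = (1, -1)e^t and x2 = (2, 1)e^{4t}
Phi = sp.Matrix([[sp.exp(t), 2*sp.exp(4*t)],
                 [-sp.exp(t), sp.exp(4*t)]])

# Each column solves x' = Ax
for j in range(2):
    col = Phi[:, j]
    assert sp.simplify(col.diff(t) - A*col) == sp.zeros(2, 1)

W = sp.simplify(Phi.det())
print(W)   # the Wronskian, 3e^{5t}
```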
Definition. Any set x(1), . . . ,x(n) of n linearly independent solution vectors of the ho-
mogeneous system (2.9) is said to be a fundamental set of solutions.
Definition. If x(1), . . . ,x(n) is a fundamental set of solutions of the homogeneous system
(2.9) then the general solution of the system is
x = c₁x⁽¹⁾ + c₂x⁽²⁾ + ⋯ + cₙx⁽ⁿ⁾,   (2.11)

where c₁, …, cₙ are arbitrary constants.
Note that the general solution (2.11) can be written in the form

x = c₁[x₁₁; …; xₙ₁] + c₂[x₁₂; …; xₙ₂] + ⋯ + cₙ[x₁ₙ; …; xₙₙ]

= [c₁x₁₁ + c₂x₁₂ + ⋯ + cₙx₁ₙ; …; c₁xₙ₁ + c₂xₙ₂ + ⋯ + cₙxₙₙ]

= [x₁₁ x₁₂ … x₁ₙ; … ; xₙ₁ xₙ₂ … xₙₙ][c₁; …; cₙ],

i.e., x = Φ(t)c,   (2.12)

where c is the column vector c = [c₁; …; cₙ]
and Φ(t) is as defined in Eq. (2.10) with the solutions x(1)(t), . . . ,x(n)(t) linearly indepen-
dent. The matrix Φ(t) is said to be a fundamental matrix of the system on the interval
α < t < β. Note that, since det Φ(t) = W ≠ 0, the inverse Φ⁻¹(t) exists for every value of
t in the interval.
Example 4. For the problem of Examples 2 and 3, a fundamental matrix is

Φ(t) = [eᵗ 2e⁴ᵗ; −eᵗ e⁴ᵗ]

and it follows that

Φ⁻¹(t) = [(1/3)e⁻ᵗ  −(2/3)e⁻ᵗ; (1/3)e⁻⁴ᵗ  (1/3)e⁻⁴ᵗ].
Suppose that x⁽¹⁾, …, x⁽ⁿ⁾ are solutions of the system (2.9) which satisfy the initial conditions

x⁽¹⁾(t₀) = e⁽¹⁾ ≡ [1; 0; …; 0],   x⁽²⁾(t₀) = e⁽²⁾ ≡ [0; 1; …; 0],   …,   x⁽ⁿ⁾(t₀) = e⁽ⁿ⁾ ≡ [0; 0; …; 1],

where t₀ is some point in the interval α < t < β. The fundamental matrix of this system is a special case of Φ(t) and is usually denoted by Ψ(t). It has the property that

Ψ(t₀) = I = [1 0 … 0; 0 1 … 0; … ; 0 0 … 1].   (2.13)
Example 5. Consider the system of Example 3. The general solution is

x = c₁[1; −1]eᵗ + c₂[2; 1]e⁴ᵗ.

Choose t₀ = 0 and let the initial conditions be

x⁽¹⁾(0) = [1; 0],   x⁽²⁾(0) = [0; 1].

The first of these conditions leads to

[1; 0] = c₁[1; −1] + c₂[2; 1],

i.e., c₁ = c₂ = 1/3, so that

x⁽¹⁾(t) = (1/3)[1; −1]eᵗ + (1/3)[2; 1]e⁴ᵗ = [(1/3)eᵗ + (2/3)e⁴ᵗ; −(1/3)eᵗ + (1/3)e⁴ᵗ].

The second initial condition leads to

[0; 1] = c₁[1; −1] + c₂[2; 1],

i.e., c₁ = −2/3, c₂ = 1/3, so that

x⁽²⁾(t) = [−(2/3)eᵗ + (2/3)e⁴ᵗ; (2/3)eᵗ + (1/3)e⁴ᵗ].

Hence

Ψ(t) = [(1/3)eᵗ + (2/3)e⁴ᵗ   −(2/3)eᵗ + (2/3)e⁴ᵗ;  −(1/3)eᵗ + (1/3)e⁴ᵗ   (2/3)eᵗ + (1/3)e⁴ᵗ]

and it is easily seen that Ψ(0) = I.
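For a constant coefficient matrix A, this special fundamental matrix is exactly the matrix exponential, Ψ(t) = e^{At}, so the closed form above can be checked against `scipy.linalg.expm` (a sketch, assuming SciPy):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, 2.0], [1.0, 2.0]])

def Psi(t):
    # Closed form obtained in Example 5
    e1, e4 = np.exp(t), np.exp(4*t)
    return np.array([[e1/3 + 2*e4/3, -2*e1/3 + 2*e4/3],
                     [-e1/3 + e4/3,   2*e1/3 + e4/3]])

# Psi(t) should coincide with the matrix exponential e^{At}
for tt in (0.0, 0.5, 1.0):
    print(np.max(np.abs(Psi(tt) - expm(A*tt))))
```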
Problem Set 2.2
In problems 1-6 verify that the vector x is a solution of the given system.

1. dx/dt = 3x − 4y,  dy/dt = 4x − 7y;   x = [1; 2]e⁻⁵ᵗ

2. dx/dt = −2x + 5y,  dy/dt = −2x + 4y;   x = [5 cos t; 3 cos t − sin t]eᵗ

3. x′ = [−1 1/4; 1 −1]x;   x = [−1; 2]e^{−3t/2}

4. x′ = [2 1; −1 0]x;   x = [1; 3]eᵗ + [4; −4]teᵗ

5. x′ = [1 2 1; 6 −1 0; −1 −2 −1]x;   x = [1; 6; −13]

6. x′ = [1 0 1; 1 1 0; −2 0 −1]x;   x = [sin t; −(1/2) sin t − (1/2) cos t; −sin t + cos t]

In problems 7 and 8 the given vectors are solutions of a system x′ = Ax. Determine whether the vectors form a fundamental set on (−∞, ∞).

7. x⁽¹⁾ = [1; −1]eᵗ,   x⁽²⁾ = [2; 6]eᵗ + [8; −8]teᵗ

8. x⁽¹⁾ = [1; −2; 4] + t[1; 2; 2],   x⁽²⁾ = [1; −2; 4],   x⁽³⁾ = [3; −6; 12] + t[2; 4; 4]

9. Prove that the general solution of x′ = [0 6 0; 1 0 1; 1 1 0]x on the interval (−∞, ∞) is

x = c₁[6; −1; −5]e⁻ᵗ + c₂[−3; 1; 1]e⁻²ᵗ + c₃[2; 1; 1]e³ᵗ.
In problems 10-12 the indicated column vectors form a fundamental set of solutions for the given system on (−∞, ∞). Form a fundamental matrix Φ(t) and compute Φ⁻¹(t).

10. x′ = [2 3; 3 2]x;   x⁽¹⁾ = [−1; 1]e⁻ᵗ,   x⁽²⁾ = [1; 1]e⁵ᵗ

11. x′ = [4 1; −9 −2]x;   x⁽¹⁾ = [−1; 3]eᵗ,   x⁽²⁾ = [−1; 3]teᵗ + [0; 1]eᵗ

12. x′ = [3 −2; 5 −3]x;   x⁽¹⁾ = [2 cos t; 3 cos t + sin t],   x⁽²⁾ = [−2 sin t; cos t − 3 sin t]

13. Find the fundamental matrix Ψ(t) satisfying Ψ(0) = I for the system in problem 10.

14. Find the fundamental matrix Ψ(t) satisfying Ψ(0) = I for the system in problem 11.

15. Find the fundamental matrix Ψ(t) satisfying Ψ(π/2) = I for the system in problem 12.
16. Show that the fundamental matrices Φ(t) and Ψ(t) satisfy Ψ(t) = Φ(t)Φ⁻¹(t₀).
2.3 Review of eigenvalues and eigenvectors.
Given the n×n matrix A, the equation Ax = y can be regarded as a linear transforma-
tion that maps (or transforms) a given vector x into a new vector y. In many applications
it is useful to find a vector x which is transformed into a multiple of itself by the action of
A, i.e., we need to find the solution vectors, x, of the linear system
Ax = λx, (2.14)
where λ is the proportionality factor. A vector x satisfying Eq. (2.14) is called an eigenvector
of A corresponding to the eigenvalue λ.
Eq. (2.14) can be rewritten in the form
(A − λI)x = 0, (2.15)
where I is the n × n identity matrix. This is a homogeneous system of linear equations
which will have non-trivial solutions if and only if
det (A − λI) = 0. (2.16)
Eq. (2.16) is a polynomial equation of degree n known as the characteristic equation.
Thus, the n × n matrix A has exactly n eigenvalues, some of which may be repeated. If a
given eigenvalue appears m times as a root of Eq. (2.16) then that eigenvalue is said to be
of multiplicity m. Each eigenvalue has at least one associated eigenvector; an eigenvalue of
multiplicity m may have q linearly independent eigenvectors where 1 ≤ q ≤ m. If all of the
eigenvalues of a matrix A are simple (i.e., of multiplicity one), then the n eigenvectors of
A are linearly independent.
Given the eigenvalues, we use Gauss-Jordan elimination to solve the system of
equations (2.15) to find the eigenvectors. These eigenvectors are determined only up to
a multiplicative constant. We can normalize the eigenvector by choosing the constant
appropriately.
We illustrate the above theory by presenting a number of examples. These examples
will cover the following possibilities:
(a) All eigenvalues real and simple.
(b) Some eigenvalues complex. (Note that for a real matrix A, complex eigenvalues must
occur in conjugate pairs).
(c) Eigenvalues of multiplicity m with m linearly independent associated eigenvectors.
(d) Eigenvalues of multiplicity m with fewer than m linearly independent associated eigenvectors.
Example 1. Find the eigenvalues and eigenvectors of the matrix A = [1 −1 4; 3 2 −1; 2 1 −1].

The characteristic equation is

det(A − λI) = det [1−λ −1 4; 3 2−λ −1; 2 1 −1−λ] = 0,

i.e., −(λ³ − 2λ² − 5λ + 6) = 0,

i.e., (λ − 1)(λ + 2)(λ − 3) = 0.

Thus A has the three simple eigenvalues λ₁ = 1, λ₂ = −2 and λ₃ = 3. For λ = 1, Eq. (2.15) reduces as

[0 −1 4 | 0; 3 1 −1 | 0; 2 1 −2 | 0] ⇒ [0 −1 4 | 0; 1 0 1 | 0; 2 0 2 | 0] ⇒ [0 −1 4 | 0; 1 0 1 | 0; 0 0 0 | 0].

Hence x₁ = −x₃, x₂ = 4x₃ and, putting x₃ = 1, the associated eigenvector is x⁽¹⁾ = [−1; 4; 1].

For λ = −2, Eq. (2.15) is

[3 −1 4 | 0; 3 4 −1 | 0; 2 1 1 | 0] ⇒ [3 −1 4 | 0; 1 0 1 | 0; 5 0 5 | 0] ⇒ [−1 −1 0 | 0; 1 0 1 | 0; 0 0 0 | 0].

Hence x₂ = −x₁, x₃ = −x₁ and, putting x₁ = 1, the associated eigenvector is x⁽²⁾ = [1; −1; −1].

For λ = 3, Eq. (2.15) is

[−2 −1 4 | 0; 3 −1 −1 | 0; 2 1 −4 | 0] ⇒ [−2 −1 4 | 0; 1 0 −1 | 0; 0 0 0 | 0] ⇒ [2 −1 0 | 0; 1 0 −1 | 0; 0 0 0 | 0].

Hence x₃ = x₁, x₂ = 2x₁ and, putting x₁ = 1, the associated eigenvector is x⁽³⁾ = [1; 2; 1].
Note that any multiple of x(1), x(2) or x(3) is also an eigenvector.
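Hand computations like this are conveniently checked with `numpy.linalg.eig`; a sketch for the matrix of Example 1, assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, -1.0, 4.0],
              [3.0,  2.0, -1.0],
              [2.0,  1.0, -1.0]])

vals, vecs = np.linalg.eig(A)
print(np.sort(vals.real))   # eigenvalues: -2, 1, 3

# Each column of vecs satisfies A v = lambda v (up to scale)
for lam, v in zip(vals, vecs.T):
    print(np.max(np.abs(A @ v - lam*v)))
```

The returned eigenvectors are normalized to unit length, so they are scalar multiples of the hand-computed ones.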
Example 2. Find the eigenvalues and eigenvectors of the matrix A = [3 −2; 4 −1].

The characteristic equation is

det(A − λI) = det [3−λ −2; 4 −1−λ] = 0,

i.e., λ² − 2λ + 5 = 0,

i.e., λ = 1 + 2i, 1 − 2i.

Thus A has the conjugate complex pair of eigenvalues λ₁ = 1 + 2i, λ₂ = 1 − 2i. For λ = 1 + 2i, Eq. (2.15) is

[2−2i −2 | 0; 4 −2−2i | 0] ⇒ [1−i −1 | 0; 0 0 | 0].

Hence x₂ = (1 − i)x₁ and, putting x₁ = 1, the associated eigenvector is x⁽¹⁾ = [1; 1 − i].

For λ = 1 − 2i it follows that the associated eigenvector is the complex conjugate of x⁽¹⁾, i.e., x⁽²⁾ = [1; 1 + i].
Example 3. Find the eigenvalues and eigenvectors of the matrix A = [0 1 1; 1 0 1; 1 1 0].

The characteristic equation is

det(A − λI) = det [−λ 1 1; 1 −λ 1; 1 1 −λ] = 0,

i.e., (λ + 1)²(λ − 2) = 0.

Thus A has the simple eigenvalue λ₁ = 2 and the eigenvalue of multiplicity two, λ₂ = −1. For λ = 2, Eq. (2.15) is

[−2 1 1 | 0; 1 −2 1 | 0; 1 1 −2 | 0] ⇒ [−2 1 1 | 0; 0 1 −1 | 0; 0 0 0 | 0] ⇒ [−2 0 2 | 0; 0 1 −1 | 0; 0 0 0 | 0].

Hence x₃ = x₂ = x₁ and the eigenvector is x⁽¹⁾ = [1; 1; 1]. For λ = −1, Eq. (2.15) is

[1 1 1 | 0; 1 1 1 | 0; 1 1 1 | 0] ⇒ [1 1 1 | 0; 0 0 0 | 0; 0 0 0 | 0].

Hence, we have the single equation

x₁ + x₂ + x₃ = 0,

and thus two parameters that can be assigned arbitrary values. If we put x₂ = a, x₃ = b, then x₁ = −a − b and the eigenvector is of the form

[−a − b; a; b] = a[−1; 1; 0] + b[−1; 0; 1].

Hence, a pair of linearly independent eigenvectors associated with the repeated eigenvalue λ₂ = −1 is

x⁽²⁾ = [−1; 1; 0],   x⁽³⁾ = [−1; 0; 1].

Thus, in this case, the number of linearly independent eigenvectors equals the multiplicity of the repeated eigenvalue. Note that any linear combination of x⁽²⁾ and x⁽³⁾ is also an eigenvector associated with the eigenvalue λ₂ = −1.
Example 4. Find the eigenvalues and eigenvectors of the matrix A = [−5 −5 −9; 8 9 18; −2 −3 −7].

The characteristic equation is

det(A − λI) = det [−5−λ −5 −9; 8 9−λ 18; −2 −3 −7−λ] = 0,

i.e., −(λ + 1)³ = 0.

Thus A has an eigenvalue, λ = −1, of multiplicity 3. For λ = −1, Eq. (2.15) is

[−4 −5 −9 | 0; 8 10 18 | 0; −2 −3 −6 | 0] ⇒ [0 1 3 | 0; 0 0 0 | 0; −2 0 3 | 0].

Hence x₂ = −3x₃, 2x₁ = 3x₃ and, putting x₃ = 2, we obtain only a single linearly independent eigenvector

x⁽¹⁾ = [3; −6; 2].
Example 5. Find the eigenvalues and eigenvectors of the matrix A = [−1 −3 −9; 0 5 18; 0 −2 −7].

The characteristic equation is

det(A − λI) = det [−1−λ −3 −9; 0 5−λ 18; 0 −2 −7−λ] = 0,

i.e., −(1 + λ)(λ² + 2λ + 1) = 0,

i.e., (λ + 1)³ = 0.

Thus A has an eigenvalue, λ = −1, of multiplicity 3. For λ = −1, Eq. (2.15) is

[0 −3 −9 | 0; 0 6 18 | 0; 0 −2 −6 | 0] ⇒ [0 1 3 | 0; 0 0 0 | 0; 0 0 0 | 0].

Hence x₁ is arbitrary and x₂ = −3x₃; putting x₁ = a, x₃ = b, the eigenvector is

[a; −3b; b] = a[1; 0; 0] + b[0; −3; 1].

Thus the repeated eigenvalue λ = −1 has two linearly independent associated eigenvectors

x⁽¹⁾ = [1; 0; 0],   x⁽²⁾ = [0; −3; 1].
Example 6. Find the eigenvalues and eigenvectors of the matrix A = [2 0 0; 0 2 0; 0 0 2].

The characteristic equation is

det(A − λI) = det [2−λ 0 0; 0 2−λ 0; 0 0 2−λ] = 0,

i.e., (2 − λ)³ = 0.

Thus A has an eigenvalue, λ = 2, of multiplicity 3. For λ = 2, Eq. (2.15) is

[0 0 0 | 0; 0 0 0 | 0; 0 0 0 | 0],

i.e., x₁, x₂, x₃ may have any values. Put x₁ = a, x₂ = b, x₃ = c and the eigenvector is

[a; b; c] = a[1; 0; 0] + b[0; 1; 0] + c[0; 0; 1].

Hence the repeated eigenvalue λ = 2 has three linearly independent associated eigenvectors

x⁽¹⁾ = [1; 0; 0],   x⁽²⁾ = [0; 1; 0],   x⁽³⁾ = [0; 0; 1].
Problem Set 2.3
In each of the problems find the eigenvalues and eigenvectors of the given matrix.

1. [−1 2; −7 8]   2. [2 1; 2 1]   3. [−8 −1; 16 0]

4. [1 1; 1/4 1]   5. [5 −1 0; 0 −5 9; 5 −1 0]   6. [3 0 0; 0 2 0; 4 0 1]

7. [0 4 0; −1 −4 0; 0 0 −2]   8. [1 6 0; 0 2 1; 0 1 2]   9. [−1 2; −5 1]

10. [2 −1 0; 5 2 4; 0 1 2]
2.4 Homogeneous linear systems with constant coefficients.
We now show how to construct the general solution of a system of homogeneous linear
equations with constant coefficients, i.e., a system of the form (2.9), x′ = Ax, where A is
a constant n× n matrix. We have seen in Section 2.2 that the systems considered there all
had solutions of the form
x = ke^{λt},   (2.17)
where k is a constant vector, so we look for solutions of this form. Since Eq. (2.17) implies
that x′ = λke^{λt}, substitution into Eq. (2.9) gives

λke^{λt} = Ake^{λt},

i.e., Ak = λk,
and non-trivial solutions of this equation exist if and only if
det (A − λI) = 0.
This is the characteristic equation of the matrix A. Hence it follows that x = keλt is a
solution of the system (2.9) if and only if λ is an eigenvalue of A and k is the associated
eigenvector.
If the n×n matrix A possesses n distinct real eigenvalues λ1, . . . , λn, then the associated
eigenvectors k(1), . . . ,k(n) are linearly independent and
x⁽¹⁾ = k⁽¹⁾e^{λ₁t},  x⁽²⁾ = k⁽²⁾e^{λ₂t},  …,  x⁽ⁿ⁾ = k⁽ⁿ⁾e^{λₙt}   (2.18)
form a fundamental set of solutions for the system and the general solution on the interval
(−∞,∞) is
x = c₁k⁽¹⁾e^{λ₁t} + c₂k⁽²⁾e^{λ₂t} + ⋯ + cₙk⁽ⁿ⁾e^{λₙt}.   (2.19)
Example 1. Consider the system x′ = Ax where

A = [ 1 −1 4 ; 3 2 −1 ; 2 1 −1 ].

This is the matrix of Ex. 1 of Section 2.3. The eigenvalues are 1, −2, and 3 and the associated eigenvectors are

k(1) = [−1; 4; 1], k(2) = [1; −1; −1], k(3) = [1; 2; 1].

Thus the solutions are

x(1) = [−1; 4; 1]e^t, x(2) = [1; −1; −1]e^{−2t}, x(3) = [1; 2; 1]e^{3t},

and so a fundamental matrix is

Φ(t) = [ −e^t e^{−2t} e^{3t} ; 4e^t −e^{−2t} 2e^{3t} ; e^t −e^{−2t} e^{3t} ].

The general solution is

x = c1[−1; 4; 1]e^t + c2[1; −1; −1]e^{−2t} + c3[1; 2; 1]e^{3t}

or, equivalently, x = Φ(t)c with c = [c1; c2; c3].
Taking t0 = 0, the special fundamental matrix Ψ(t) is given by Ψ(t) = Φ(t)Φ^{−1}(0). Here

Φ(0) = [ −1 1 1 ; 4 −1 2 ; 1 −1 1 ] , Φ^{−1}(0) = [ −1/6 1/3 −1/2 ; 1/3 1/3 −1 ; 1/2 0 1/2 ],

and multiplying out gives

Ψ(t) = [ (1/6)e^t + (1/3)e^{−2t} + (1/2)e^{3t} , −(1/3)e^t + (1/3)e^{−2t} , (1/2)e^t − e^{−2t} + (1/2)e^{3t} ;
−(2/3)e^t − (1/3)e^{−2t} + e^{3t} , (4/3)e^t − (1/3)e^{−2t} , −2e^t + e^{−2t} + e^{3t} ;
−(1/6)e^t − (1/3)e^{−2t} + (1/2)e^{3t} , (1/3)e^t − (1/3)e^{−2t} , −(1/2)e^t + e^{−2t} + (1/2)e^{3t} ].
It is easily seen that Ψ(0) = I.
Note that if a matrix A possesses a repeated eigenvalue of multiplicity m to which
correspond m linearly independent eigenvectors then the general solution of the system
(2.9) is of the form (2.19). In particular, if the matrix A is real and symmetric, i.e.,
A = AT , then there is always a full set of n linearly independent eigenvectors even if some
of the eigenvalues are repeated.
Example 2. Consider the system x′ = Ax where

A = [ 0 1 1 ; 1 0 1 ; 1 1 0 ].

This matrix is real and symmetric and so has real eigenvalues and three linearly independent eigenvectors. The eigenvalues are

λ1 = 2 , λ2 = −1 , λ3 = −1,

i.e., there is an eigenvalue of multiplicity two. The associated eigenvectors are

k(1) = [1; 1; 1], k(2) = [1; 0; −1], k(3) = [0; 1; −1].

(Check this!) Thus the eigenvalue of multiplicity two has two linearly independent associated eigenvectors and the general solution is

x = c1[1; 1; 1]e^{2t} + c2[1; 0; −1]e^{−t} + c3[0; 1; −1]e^{−t}

or, in terms of a fundamental matrix,

x = Φ(t)c = [ e^{2t} e^{−t} 0 ; e^{2t} 0 e^{−t} ; e^{2t} −e^{−t} −e^{−t} ][c1; c2; c3].
2.5 Complex eigenvectors.

We assume that the coefficient matrix A is real but possesses a pair of complex conjugate eigenvalues

λ1 = α + iβ , λ2 = λ̄1 = α − iβ, (2.20)

where α, β are real constants. The associated eigenvectors are also complex conjugates, i.e.,

k(1) = a + ib , k(2) = k̄(1) = a − ib, (2.21)

where a and b are real vectors. Then the two linearly independent solutions of the system which correspond to these eigenvalues are k(1)e^{λ1 t} and k(2)e^{λ2 t} = k̄(1)e^{λ̄1 t}. However, these solutions are complex; we need to find real solutions of the system. Since the system x′ = Ax has real coefficients, the real and imaginary parts of the single complex solution xc(t) = k(1)e^{λ1 t} can be shown to be two linearly independent real solutions. Thus the general solution of the system can be written as x(t) = c1x(1)(t) + c2x(2)(t), where

x(1)(t) = Re(xc(t)) , x(2)(t) = Im(xc(t)). (2.22)
The real and imaginary parts of k(2)e^{λ2 t} will not give rise to any additional linearly independent real solutions.
Example 1. Consider the system x′ = Ax where

A = [ 3 −2 ; 4 −1 ].

This is the matrix of Ex. 2 of Section 2.3. The eigenvalues are λ1 = 1 + 2i, λ2 = 1 − 2i and the associated eigenvectors are k(1) = [1; 1 − i], k(2) = [1; 1 + i].

We need use only λ1 and k(1). Thus

xc(t) = [1; 1 − i]e^t(cos 2t + i sin 2t),

which expands to yield

xc(t) = [ e^t cos 2t + ie^t sin 2t ; e^t cos 2t + e^t sin 2t + i(e^t sin 2t − e^t cos 2t) ].

Hence, from Eq. (2.22), the solutions are

x(1)(t) = e^t[ cos 2t ; cos 2t + sin 2t ] , x(2)(t) = e^t[ sin 2t ; sin 2t − cos 2t ],

and a fundamental matrix is

Φ(t) = [ e^t cos 2t , e^t sin 2t ; e^t cos 2t + e^t sin 2t , e^t sin 2t − e^t cos 2t ].
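A numerical illustration (an addition to these notes, assuming numpy is available): the real and imaginary parts of the complex solution xc(t) = k(1)e^{λ1 t} each satisfy x′ = Ax, as Eq. (2.22) asserts.

```python
import numpy as np

A = np.array([[3.0, -2.0], [4.0, -1.0]])   # matrix of Example 1
lam, K = np.linalg.eig(A)                   # eigenvalues 1 +/- 2i
k, l = K[:, 0], lam[0]                      # one complex eigenpair

def xc(t):
    """Complex solution k e^{l t}."""
    return k * np.exp(l * t)

# Re(xc) and Im(xc) are each real solutions: check x' = A x numerically.
t, h = 0.7, 1e-6
for part in (np.real, np.imag):
    deriv = (part(xc(t + h)) - part(xc(t - h))) / (2 * h)
    assert np.allclose(deriv, A @ part(xc(t)), atol=1e-4)
```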
Problem Set 2.5

In each of the following problems the given matrix A is the coefficient matrix of a linear homogeneous system of equations x′ = Ax. Find the general solution of each system in the form of a fundamental matrix.

1. A = [ 1 2 ; 4 3 ]    2. A = [ 0 2 ; 8 0 ]    3. A = [ −4 2 ; −5/2 2 ]

4. A = [ 1/2 9 ; 1/2 2 ]    5. A = [ 10 −5 ; 8 −12 ]    6. A = [ −6 2 ; −3 1 ]

7. A = [ 1 1 −1 ; 0 2 0 ; 0 1 −1 ]    8. A = [ −1 1 0 ; 1 2 1 ; 0 3 −1 ]

9. A = [ 3 −1 −1 ; 1 1 −1 ; 1 −1 1 ]    10. A = [ 1 0 1 ; 0 1 0 ; 1 0 1 ]

11. A = [ 6 −1 ; 5 2 ]    12. A = [ 1 1 ; −2 −1 ]    13. A = [ 5 1 ; −2 3 ]

14. A = [ 4 5 ; −2 6 ]    15. A = [ 4 −5 ; 5 −4 ]    16. A = [ 1 −8 ; 1 −3 ]

17. A = [ 0 0 1 ; 0 0 −1 ; 0 1 0 ]    18. A = [ 1 −1 2 ; −1 1 0 ; −1 0 1 ]

19. A = [ 4 0 1 ; 0 6 0 ; −4 0 4 ]    20. A = [ 2 5 1 ; −5 −6 4 ; 0 0 2 ]

In the following problems solve the given system subject to the indicated initial conditions.

21. x′ = [ 1/2 0 ; 1 −1/2 ]x , x(0) = [3; 5]    22. x′ = [ 6 −1 ; 5 4 ]x , x(0) = [−2; 8]
2.6 Repeated eigenvalues.
We have seen that if the coefficient matrix A of the system x′ = Ax has a repeated
eigenvalue λ of multiplicity m and there are m linearly independent eigenvectors associated
with λ, then there exist m linearly independent solutions of the system corresponding to
λ. This case was covered in Section 2.4 and is essentially the same as the case of distinct
eigenvalues. In this section we consider the case of a repeated eigenvalue of multiplicity m with fewer than m associated eigenvectors.
Suppose that the coefficient matrix A has a repeated eigenvalue λ1 = λ2 of multiplicity
two and that there is only one associated eigenvector k. Then
x(1) = keλ1t (2.23)
is a solution and we need to find a second linearly independent solution. This can be
achieved by assuming a solution of the form
x(2) = mteλ1t + neλ1t, (2.24)
where m and n are constant vectors. Substituting the expression (2.24) into the equation
of the system we find that
λ1mteλ1t + meλ1t + λ1neλ1t = A(mteλ1t + neλ1t).
Equating the coefficients of te^{λ1 t} and e^{λ1 t} gives, respectively,

λ1m = Am and m + λ1n = An,

i.e., (A − λ1I)m = 0, (2.25)
(A − λ1I)n = m. (2.26)
Eq. (2.25) shows that m is the eigenvector associated with the eigenvalue λ1, i.e., m = k,
so that Eq. (2.26) becomes
(A − λ1I)n = k. (2.27)
By solving this equation for n we are able to complete the second solution given by
x(2) = kteλ1t + neλ1t. (2.28)
Example 1. Find the general solution of the system

x′ = [ 3 −4 ; 1 −1 ]x = Ax.

The eigenvalues of the coefficient matrix A are given by

| 3 − λ , −4 ; 1 , −1 − λ | = 0 ⇒ λ² − 2λ + 1 = 0,

i.e., λ1 = λ2 = 1.

For λ = 1, the augmented system is

[ 2 −4 | 0 ; 1 −2 | 0 ],

i.e., x1 − 2x2 = 0.

Hence, there is a single eigenvector k = [2; 1] and the corresponding solution of the system is

x(1) = [2; 1]e^t.

The second linearly independent solution is of the form

x(2) = [2; 1]te^t + ne^t,

where n is a solution of the equation (A − λ1I)n = k, i.e., of the augmented system

[ 2 −4 | 2 ; 1 −2 | 1 ],

i.e., n1 − 2n2 = 1. (2.29)

If we put n2 = a, where a is arbitrary, then

n = [2a + 1; a] = a[2; 1] + [1; 0].
Hence, x(2) = [2; 1]te^t + [1; 0]e^t + a[2; 1]e^t. Note that the last term is simply a multiple of the first solution x(1), so the second linearly independent solution is

x(2) = [2; 1]te^t + [1; 0]e^t = [2t + 1; t]e^t.

Note that the vector n could have been found by putting one of the components, n1 or n2, of n equal to zero in Eq. (2.29).

A fundamental matrix for the system is

Φ(t) = [ 2e^t , (2t + 1)e^t ; e^t , te^t ]

and the general solution is

x = c1[2; 1]e^t + c2[2t + 1; t]e^t.
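The generalized-eigenvector step of this example can be reproduced symbolically; the following sketch (added for illustration, assuming sympy is available) solves (A − λ1I)n = k and recovers the relation n1 − 2n2 = 1.

```python
import sympy as sp

A = sp.Matrix([[3, -4], [1, -1]])
lam1 = 1
k = sp.Matrix([2, 1])                        # the single eigenvector

# k really is an eigenvector for lambda = 1.
assert (A - lam1 * sp.eye(2)) * k == sp.zeros(2, 1)

# Solve (A - lambda1 I) n = k, Eq. (2.27); one free parameter remains.
n1, n2 = sp.symbols('n1 n2')
eqs = (A - lam1 * sp.eye(2)) * sp.Matrix([n1, n2]) - k
sol = sp.solve([eqs[0], eqs[1]], [n1, n2], dict=True)[0]
assert sol[n1] == 2 * n2 + 1                 # i.e. n1 - 2 n2 = 1
```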
Now suppose the coefficient matrix A has an eigenvalue of multiplicity three, i.e., λ1 =
λ2 = λ3 and there is only one associated eigenvector k. In this case it can be shown that
the first solution is given by Eq. (2.23), a second solution by Eq. (2.28) and a third solution
is of the form

x(3) = (1/2)k t²e^{λ1 t} + nte^{λ1 t} + pe^{λ1 t}, (2.30)

where k is the eigenvector, n satisfies Eq. (2.27) and p satisfies

(A − λ1I)p = n. (2.31)
Example 2. Consider the system x′ = Ax where A is the matrix of Example 4 of Section 2.3, i.e.,

x′ = [ −5 −5 −9 ; 8 9 18 ; −2 −3 −7 ]x.

There is a single repeated eigenvalue λ1 = λ2 = λ3 = −1 and a single eigenvector k = [3; −6; 2]. Thus the first solution is

x(1) = [3; −6; 2]e^{−t}.

The second solution is given by Eq. (2.28), i.e.,

x(2) = [3; −6; 2]te^{−t} + ne^{−t},

where n is given by (A − λ1I)n = k, i.e., by the augmented system

[ −4 −5 −9 | 3 ; 8 10 18 | −6 ; −2 −3 −6 | 2 ],

and a solution of this is n = [0; 0; −1/3], so that the second solution is

x(2) = [3; −6; 2]te^{−t} + [0; 0; −1/3]e^{−t}.

The third solution is given by Eq. (2.30), i.e.,

x(3) = (1/2)[3; −6; 2]t²e^{−t} + [0; 0; −1/3]te^{−t} + pe^{−t},

where p is given by (A − λ1I)p = n, i.e., by

[ −4 −5 −9 | 0 ; 8 10 18 | 0 ; −2 −3 −6 | −1/3 ].

Row operations on this augmented matrix lead to

[ −2 0 3 | 5/3 ; 0 1 3 | 2/3 ; 0 0 0 | 0 ]

and a solution of this is p = [−5/6; 2/3; 0], so that the third solution is

x(3) = (1/2)[3; −6; 2]t²e^{−t} + [0; 0; −1/3]te^{−t} + [−5/6; 2/3; 0]e^{−t}.

Hence a fundamental matrix is

Φ(t) = [ 3e^{−t} , 3te^{−t} , ((3/2)t² − 5/6)e^{−t} ; −6e^{−t} , −6te^{−t} , (−3t² + 2/3)e^{−t} ; 2e^{−t} , (2t − 1/3)e^{−t} , (t² − (1/3)t)e^{−t} ].
Another possibility is that the coefficient matrix A has an eigenvalue of multiplicity three, i.e., λ1 = λ2 = λ3, with two linearly independent associated eigenvectors, k(1) and k(2). In this case finding the solution is a little more complicated. Two linearly independent solutions are

x(1) = k(1)e^{λ1 t} , x(2) = k(2)e^{λ1 t}

and a third solution is of the form

x(3) = kte^{λ1 t} + ne^{λ1 t}, (2.32)

where n is a solution of Eq. (2.27) and k is a linear combination of k(1) and k(2) chosen in such a way that Eq. (2.27) has a solution. We demonstrate this with the following example.
Example 3. Consider the system x′ = Ax where A is the matrix of Ex. 5 of Section 2.3, i.e.,

x′ = [ −1 −3 −9 ; 0 5 18 ; 0 −2 −7 ]x.

There is a single repeated eigenvalue λ1 = λ2 = λ3 = −1 and two linearly independent associated eigenvectors

k(1) = [1; 0; 0], k(2) = [0; −3; 1].

Hence two linearly independent solutions are

x(1) = [1; 0; 0]e^{−t}, x(2) = [0; −3; 1]e^{−t}.

Eq. (2.27) is of the form

[ 0 −3 −9 | k1 ; 0 6 18 | k2 ; 0 −2 −6 | k3 ],

where k = [k1; k2; k3] = ak(1) + bk(2) = [a; −3b; b] for some constants a and b. Thus we have

[ 0 −3 −9 | a ; 0 6 18 | −3b ; 0 −2 −6 | b ]

and the row operations R2 + 2R1 and R3 + (1/3)R2 give

[ 0 −3 −9 | a ; 0 0 0 | 2a − 3b ; 0 0 0 | 0 ].

For consistency we must have 2a − 3b = 0, so we take a = 3, b = 2 and so

k = 3k(1) + 2k(2) = [3; −6; 2].

Thus we have the equation −3n2 − 9n3 = 3 and a solution of this is n1 = 0, n2 = −1, n3 = 0, i.e., n = [0; −1; 0].

Hence the third solution is

x(3) = [3; −6; 2]te^{−t} + [0; −1; 0]e^{−t}

and a fundamental matrix is

Φ(t) = [ e^{−t} , 0 , 3te^{−t} ; 0 , −3e^{−t} , −(6t + 1)e^{−t} ; 0 , e^{−t} , 2te^{−t} ].
Problem Set 2.6

In the following problems find the general solution of the given system.

1. dx/dt = 3x − y , dy/dt = 9x − 3y    2. dx/dt = −6x + 5y , dy/dt = −5x + 4y

3. dx/dt = −x + 3y , dy/dt = −3x + 5y    4. x′ = [ 5 −4 0 ; 1 0 2 ; 0 2 5 ]x

5. x′ = [ 1 0 0 ; 0 3 1 ; 0 −1 1 ]x    6. x′ = [ 1 0 0 ; 2 2 −1 ; 0 1 0 ]x

7. x′ = [ 4 1 0 ; 0 4 1 ; 0 0 4 ]x    8. x′ = [ 0 4 0 ; −1 −4 0 ; 0 0 −2 ]x
2.7 Nonhomogeneous systems.
Suppose that we have a nonhomogeneous system of equations of the form
x′ = Ax + f(t). (2.33)
In solving this system, the first step is to find the solution of the homogeneous system
x′ = Ax. We know that this solution can be written in the form
x = Φ(t)c, (2.34)
where Φ(t) is a fundamental matrix of the homogeneous system and c is a column vector of
constants. In seeking a solution of the system (2.33) we replace c in Eq. (2.34) by a column
vector of functions u(t) = [u1(t); . . . ; un(t)], so that

x = Φ(t)u(t) (2.35)

is a particular solution of the system (2.33).
Differentiating Eq. (2.35) gives
x′ = Φ′(t)u(t) + Φ(t)u′(t)
and, substituting into Eq. (2.33), we obtain
Φ′(t)u(t) + Φ(t)u′(t) = AΦ(t)u(t) + f(t). (2.36)
Now, since Φ(t) is a fundamental matrix of the homogeneous system it satisfies the equation
Φ′ = AΦ, so Eq. (2.36) becomes
Φ(t)u′(t) = f(t),

i.e., u(t) = ∫ Φ^{−1}(t)f(t) dt.

Hence, a particular solution of the system (2.33) is

xp = Φ(t) ∫ Φ^{−1}(t)f(t) dt, (2.37)

and the general solution of the system is

x = Φ(t)c + Φ(t) ∫ Φ^{−1}(t)f(t) dt. (2.38)
If there is an initial condition of the form
x(t0) = x0, (2.39)
it is useful to rewrite the solution (2.38) in the form

x = Φ(t)c + Φ(t) ∫_{t0}^{t} Φ^{−1}(s)f(s) ds, (2.40)

so that the particular solution chosen is the specific one that is zero at t = t0. In this case the initial condition (2.39) becomes x0 = Φ(t0)c, i.e.,

c = Φ^{−1}(t0)x0, (2.41)

so that the solution of the initial value problem is

x = Φ(t)Φ^{−1}(t0)x0 + Φ(t) ∫_{t0}^{t} Φ^{−1}(s)f(s) ds. (2.42)

If we use as fundamental matrix the matrix Ψ(t) which satisfies Ψ(t0) = I (see Section 2.2), then this solution can be written as

x = Ψ(t)x0 + Ψ(t) ∫_{t0}^{t} Ψ^{−1}(s)f(s) ds. (2.43)
Example 1. Find the general solution of the system

x′ = [ 2 −1 ; 3 −2 ]x + [ e^t ; t ].

The matrix A = [ 2 −1 ; 3 −2 ] has two distinct eigenvalues λ1 = 1, λ2 = −1 with associated eigenvectors k(1) = [1; 1], k(2) = [1; 3]. Hence a fundamental matrix is

Φ(t) = [ e^t , e^{−t} ; e^t , 3e^{−t} ],

so that

Φ^{−1}(t) = (1/2)[ 3e^{−t} , −e^{−t} ; −e^t , e^t ].

Now f(t) = [e^t; t] and so

Φ^{−1}(t)f(t) = (1/2)[ 3 − te^{−t} ; −e^{2t} + te^t ].

Hence ∫ Φ^{−1}(t)f(t) dt = (1/2)[ 3t + (t + 1)e^{−t} ; −(1/2)e^{2t} + (t − 1)e^t ], and

xp = Φ(t) ∫ Φ^{−1}(t)f(t) dt = [ (3/2)te^t − (1/4)e^t + t ; (3/2)te^t − (3/4)e^t + 2t − 1 ].

The general solution is

x = c1[1; 1]e^t + c2[1; 3]e^{−t} + [ (3/2)te^t − (1/4)e^t + t ; (3/2)te^t − (3/4)e^t + 2t − 1 ].
Example 2. Find the general solution of the system

x′ = [ 2 −5 ; 1 −2 ]x + [ −cos t ; sin t ].

The matrix A = [ 2 −5 ; 1 −2 ] has complex eigenvalues λ1 = i, λ2 = −i with associated eigenvectors k(1) = [2 + i; 1], k(2) = [2 − i; 1]. Hence, a complex solution of the homogeneous system is given by

[2 + i; 1]e^{it} = [2 + i; 1](cos t + i sin t) = [ 2 cos t − sin t + i(cos t + 2 sin t) ; cos t + i sin t ],

i.e., two linearly independent real solutions are

x(1) = [ 2 cos t − sin t ; cos t ] , x(2) = [ cos t + 2 sin t ; sin t ]

and a fundamental matrix is

Φ(t) = [ 2 cos t − sin t , cos t + 2 sin t ; cos t , sin t ].

Then Φ^{−1}(t) is given by

Φ^{−1}(t) = [ −sin t , cos t + 2 sin t ; cos t , −2 cos t + sin t ].

Now f(t) = [−cos t; sin t] and so Φ^{−1}(t)f(t) = [ 1 − cos 2t + sin 2t ; −cos 2t − sin 2t ].

Hence ∫ Φ^{−1}(t)f(t) dt = [ t − (1/2) sin 2t − (1/2) cos 2t ; −(1/2) sin 2t + (1/2) cos 2t ], and

xp = Φ(t) ∫ Φ^{−1}(t)f(t) dt = [ 2t cos t − t sin t − (3/2) sin t − (1/2) cos t ; t cos t − (1/2) sin t − (1/2) cos t ].

The general solution is

x = c1[ 2 cos t − sin t ; cos t ] + c2[ cos t + 2 sin t ; sin t ] + [ 2t cos t − t sin t − (3/2) sin t − (1/2) cos t ; t cos t − (1/2) sin t − (1/2) cos t ].
Problem Set 2.7

Find the general solution of the given system.

1. x′ = [ 3 −3 ; 2 −2 ]x + [ 4 ; 1 ]    2. x′ = [ 1 √3 ; √3 −1 ]x + [ e^t ; √3 e^{−t} ]

3. x′ = [ 1 −1 ; 1 1 ]x + [ cos t ; sin t ]e^t    4. x′ = [ 1 1 ; 4 −2 ]x + [ e^{−2t} ; −2e^t ]

5. x′ = [ 4 −2 ; 8 −4 ]x + [ t^{−3} ; −t^{−2} ]    6. x′ = [ −4 2 ; 2 −1 ]x + [ t^{−1} ; 2t^{−1} + 4 ]

7. x′ = [ 1 1 ; 4 1 ]x + [ 2 ; −1 ]e^t    8. x′ = [ 2 −1 ; 3 −2 ]x + [ 1 ; −1 ]e^t

9. x′ = [ −5/4 3/4 ; 3/4 −5/4 ]x + [ 2t ; e^t ]    10. x′ = [ 2 −5 ; 1 −2 ]x + [ 0 ; cos t ]
2.8 Laplace transform method for systems.
If initial conditions are specified, Laplace transform methods can be used to reduce a
system of differential equations to a set of simultaneous algebraic equations and thus to find
the solution. The system need not be of the first order. We demonstrate the method with
two examples.
Example 1. Use the Laplace transform to solve the system of differential equations

dx/dt = −x + y , dy/dt = 2x , x(0) = 0, y(0) = 1.

Let X(s) = L{x(t)} and Y(s) = L{y(t)}. Transforming each equation of the system, we obtain

sX(s) − x(0) = −X(s) + Y(s),
sY(s) − y(0) = 2X(s),

i.e., (s + 1)X(s) − Y(s) = 0,
−2X(s) + sY(s) = 1.

Multiplying the first equation by s and adding the second equation we obtain

(s² + s − 2)X(s) = 1,

i.e., X(s) = 1/((s + 2)(s − 1)) = (1/3)/(s − 1) − (1/3)/(s + 2).

∴ x(t) = (1/3)e^t − (1/3)e^{−2t}.
Also

Y(s) = (s + 1)X(s) = (s + 1)/((s + 2)(s − 1)) = (2/3)/(s − 1) + (1/3)/(s + 2).

∴ y(t) = (2/3)e^t + (1/3)e^{−2t}.

Hence, the solution of the given system is

x(t) = (1/3)(e^t − e^{−2t}) , y(t) = (1/3)(2e^t + e^{−2t}).
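This inversion can be checked symbolically; the sketch below (an illustrative addition, assuming sympy is available) inverts the transforms X(s), Y(s) found above and confirms the pair solves the original system.

```python
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)

X = 1 / ((s + 2) * (s - 1))
Y = (s + 1) / ((s + 2) * (s - 1))

x = sp.inverse_laplace_transform(X, s, t)
y = sp.inverse_laplace_transform(Y, s, t)

# x(t) = (1/3)(e^t - e^{-2t}), y(t) = (1/3)(2 e^t + e^{-2t}).
assert sp.simplify(x - (sp.exp(t) - sp.exp(-2 * t)) / 3) == 0
assert sp.simplify(y - (2 * sp.exp(t) + sp.exp(-2 * t)) / 3) == 0

# The pair also satisfies the system x' = -x + y, y' = 2x.
assert sp.simplify(x.diff(t) + x - y) == 0
assert sp.simplify(y.diff(t) - 2 * x) == 0
```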
Example 2. Use the Laplace transform to solve the system of differential equations

d²x/dt² + d²y/dt² = t² , d²x/dt² − d²y/dt² = 4t,

x(0) = 8 , x′(0) = 0 , y(0) = 0 , y′(0) = 0.

Taking the Laplace transform of each equation:

s²X(s) − sx(0) − x′(0) + s²Y(s) − sy(0) − y′(0) = 2/s³,
s²X(s) − sx(0) − x′(0) − s²Y(s) + sy(0) + y′(0) = 4/s²,

i.e., X(s) + Y(s) = 8/s + 2/s⁵,
X(s) − Y(s) = 8/s + 4/s⁴,

∴ X(s) = 8/s + 2/s⁴ + 1/s⁵ , Y(s) = 1/s⁵ − 2/s⁴.

Hence

x(t) = 8 + (1/3)t³ + (1/24)t⁴ , y(t) = (1/24)t⁴ − (1/3)t³.
Problem Set 2.8

Use the Laplace transform to solve the given systems of differential equations.

1. dx/dt = x − 2y , dy/dt = 5x − y , x(0) = 0 , y(0) = 1.

2. 2 dx/dt + dy/dt − 2x = 1 , dx/dt + dy/dt − 3x − 3y = 2 , x(0) = 0 , y(0) = 0.

3. d²x/dt² + x − y = 0 , d²y/dt² + y − x = 0 , x(0) = 0 , x′(0) = −2 , y(0) = 0 , y′(0) = 1.

4. d²x/dt² + 3 dy/dt + 3y = 0 , d²x/dt² + 3y = te^{−t} , x(0) = 0 , x′(0) = 2 , y(0) = 0.
Chapter 3
FOURIER SERIES
3.1 Orthogonal sets of functions.
Suppose we have two vectors u, v in an n-dimensional vector space V (which can be
the usual 3-space). The inner product (scalar product) is written as (u,v) or u · v and has
the following properties:
(i) (u,v) = (v,u)
(ii) (ku,v) = k(u,v) for any scalar k
(iii) (u,u) = 0 if u = 0 and (u,u) > 0 if u ≠ 0
(iv) (u + v,w) = (u,w) + (v,w)
The inner product of a vector with itself is

(u,u) = u1² + u2² + . . . + un²

and the non-negative square root of (u,u) is called the norm of u and is denoted by ‖u‖, i.e.,

‖u‖ = √(u,u), (3.1)

so that (u,u) = ‖u‖² is the squared norm of u.
Two vectors are said to be orthogonal if their inner product is zero, i.e., if (u,v) = 0. In
n-dimensional space we can find n mutually orthogonal vectors ui (i = 1, . . . , n) and these
are said to form an orthogonal set. If each vector is divided by its norm, i.e., if we form the
vectors
ei = ui/‖ui‖, (3.2)

then the unit vectors ei satisfy

(ei, ej) = δij (i, j = 1, . . . , n), (3.3)

where δij is the Kronecker delta defined by

δij = 0 if i ≠ j , δij = 1 if i = j. (3.4)
The set of vectors ei form an orthonormal set which is denoted by {ei}. For example, the
basis vectors i, j,k of 3-space form an orthonormal set.
Every vector v in the n-dimensional space can be expressed as a linear combination of
the orthonormal vectors ei , i.e.,
v = c1e1 + c2e2 + . . . + cnen, (3.5)
where the coefficients ci are given by
(v, ei) = ci(ei, ei) = ci (i = 1, . . . , n), (3.6)
so that ci is the projection of v on ei.
Now we extend these ideas of inner product and orthogonality to functions. Suppose
that fm(x) and fn(x) are two real-valued functions defined on an interval [a, b] and which
are such that the integral

(fm, fn) = ∫_a^b fm(x)fn(x) dx (3.7)
exists. We make the following definitions in analogy with the concepts of vector theory.
Definition 1. The inner product of two functions fm(x) and fn(x) is the number (fm, fn)
defined by Eq. (3.7).
Definition 2. Two functions fm(x) and fn(x) are said to be orthogonal on an interval
[a, b] if (fm, fn) = 0.
Definition 3. The norm of a function fm(x) is

‖fm(x)‖ = √(fm, fm) = √( ∫_a^b [fm(x)]² dx ). (3.8)
Note that ‖fm(x)‖ ≥ 0.
Our primary interest is in infinite sets of orthogonal functions. A set of real-valued
functions f1(x), f2(x), f3(x), . . . is called an orthogonal set of functions on an interval [a, b]
if the functions are defined on [a, b] and if the condition
(fm, fn) = ∫_a^b fm(x)fn(x) dx = 0 (3.9)
holds for all pairs of distinct functions in the set.
Assuming that none of the functions fm(x) has zero norm, we can form a new set of
functions {φm(x)} defined by

φm(x) = fm(x)/‖fm(x)‖. (3.10)

The set {φm(x)} is called an orthonormal set and satisfies

∫_a^b φm(x)φn(x) dx = δmn. (3.11)
Example 1. Consider the functions fm(x) = sin(mπx/ℓ), i.e.,

f1(x), f2(x), f3(x), . . . = sin(πx/ℓ), sin(2πx/ℓ), sin(3πx/ℓ), . . .

These functions form an orthogonal set on [−ℓ, ℓ] since, for all m and n,

(fm, fn) = ∫_{−ℓ}^{ℓ} sin(mπx/ℓ) sin(nπx/ℓ) dx
= (1/2) ∫_{−ℓ}^{ℓ} [ cos((m − n)πx/ℓ) − cos((m + n)πx/ℓ) ] dx
= 0 if m ≠ n.

The norm of fm(x) is given by

‖fm‖ = √( (1/2) ∫_{−ℓ}^{ℓ} (1 − cos(2mπx/ℓ)) dx ) = √ℓ,

so that the corresponding orthonormal set is { (1/√ℓ) sin(mπx/ℓ) }.
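A numerical sanity check of this orthonormality (added for illustration; assumes numpy is available) approximates the inner products by the composite trapezoid rule:

```python
import numpy as np

l = 2.0                                   # any l > 0 works
x = np.linspace(-l, l, 20001)
dx = x[1] - x[0]

def phi(m):
    """Orthonormal sine functions sin(m*pi*x/l)/sqrt(l)."""
    return np.sin(m * np.pi * x / l) / np.sqrt(l)

def inner(f, g):
    """Trapezoid-rule approximation to the inner product on [-l, l]."""
    y = f * g
    return (0.5 * (y[0] + y[-1]) + y[1:-1].sum()) * dx

# (phi_m, phi_n) should be the Kronecker delta, Eq. (3.11).
for m in range(1, 4):
    for n in range(1, 4):
        expected = 1.0 if m == n else 0.0
        assert abs(inner(phi(m), phi(n)) - expected) < 1e-6
```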
This concept of orthogonal functions may be generalized in two ways. First, we say that
a set of functions {fm(x)} is orthogonal on [a, b] with respect to a weight function w(x)
where w(x) ≥ 0, if

∫_a^b w(x)fm(x)fn(x) dx = 0 (m ≠ n). (3.12)

The norm ‖fm(x)‖ is given by

‖fm(x)‖² = ∫_a^b w(x)[fm(x)]² dx (3.13)

and the set {φm(x)}, where

φm(x) = fm(x)/‖fm(x)‖, (3.14)
forms an orthonormal set.
Note that this type of orthogonality reduces to the ordinary type by using the product functions √w(x) fm(x).
Another type of orthogonality concerns complex-valued functions. A set {tm} of complex-valued functions of a real variable x is orthogonal in the hermitian sense on an interval [a, b] if

∫_a^b tm(x) t̄n(x) dx = 0, m ≠ n. (3.15)

The integral is the hermitian inner product (tm, tn) corresponding to the definition (3.7). The norm of tm is real and non-negative and is given by

‖tm‖² = ∫_a^b tm t̄m dx = ∫_a^b ([um]² + [vm]²) dx,

where tm(x) = um(x) + ivm(x), um(x) and vm(x) being real-valued functions of x.
Example 2. Consider the functions

tn ≡ e^{inx} = cos nx + i sin nx (n = 0, ±1, ±2, . . . ).

These functions form a set with hermitian orthogonality on the interval [−π, π] since

(tm, tn) = ∫_{−π}^{π} (cos mx + i sin mx)(cos nx − i sin nx) dx
= ∫_{−π}^{π} [cos(m − n)x + i sin(m − n)x] dx
= 0 (m ≠ n),

(tm, tm) = ‖tm‖² = ∫_{−π}^{π} e^{imx}e^{−imx} dx = ∫_{−π}^{π} 1 dx = 2π,

i.e., ‖tm‖ = √(2π).
3.2 Expansion of functions in series of orthogonal functions.
Given a set {φm(x)} of functions which are orthogonal on an interval [a, b], and which
may be orthonormal but not necessarily so, we can, under certain conditions, expand a
given function f(x) in terms of a series of the functions φm(x), i.e.,

f(x) = c0φ0(x) + c1φ1(x) + c2φ2(x) + . . . ,

i.e., f(x) = Σ_{n=0}^{∞} cnφn(x). (3.16)
Assuming that this expansion exists we need to determine the coefficients cn. To do this we
multiply each side of Eq. (3.16) by φm(x), where φm(x) is the mth element of the orthogonal
set to obtain
f(x)φm(x) = Σ_{n=0}^{∞} cnφm(x)φn(x).
Integrating both sides of this equation over [a, b] and assuming that the integral of the
infinite sum is equivalent to the sum of the integrals, we obtain
∫_a^b f(x)φm(x) dx = Σ_{n=0}^{∞} cn ∫_a^b φm(x)φn(x) dx. (3.17)

Since {φm(x)} is an orthogonal set, all the integrals on the right side of (3.17) are zero except the one for which m = n. Hence, Eq. (3.17) reduces to

cn ∫_a^b [φn(x)]² dx = ∫_a^b f(x)φn(x) dx,

i.e., cn = ( ∫_a^b f(x)φn(x) dx ) / ‖φn(x)‖², (3.18)
which determines each constant cn.
The above analysis can be extended to the case when the set {φm(x)} is orthogonal with
respect to the weight function w(x). In this case we multiply Eq. (3.16) by w(x)φm(x) and
eventually obtain

cn = ( ∫_a^b w(x)f(x)φn(x) dx ) / ‖φn(x)‖², (3.19)

where now ‖φn(x)‖² = ∫_a^b w(x)[φn(x)]² dx.
The series (3.16), with coefficients given by Eqs. (3.18) or (3.19), is called a generalized
Fourier series.
Note that, although we have found a formal series of the form (3.16) we have not shown
that this series actually does represent the function f(x) in (a, b) or even that it converges
in (a, b). One necessary condition for the expansion to converge to f(x) is that the set
{φm(x)} must be complete, i.e., there must be no function with positive norm which is
orthogonal to each of the functions φm(x).
In general, the functions with which we shall be concerned are sectionally continuous or
piecewise continuous. A function is said to be piecewise continuous on [a, b] if it is defined
on [a, b] and if it has only a finite number of finite discontinuities in that interval.
Problem Set 3.2

1. Define (f, g) by (f, g) = ∫_0^1 f(x)g(x) dx.

(i) Are f(x) = x and g(x) = x² orthogonal?

(ii) Find α, β, γ such that f(x) = 1, g(x) = x + α, h(x) = x² + βx + γ are orthogonal.

(iii) Form the orthonormal set corresponding to the above set of three vectors.

(iv) If the vectors of part (iii) are labelled φ1, φ2, φ3, express F(x) = x² − x + 1 as a combination F(x) = c1φ1(x) + c2φ2(x) + c3φ3(x).

2. Show that f1(x) = e^x and f2(x) = (x − 1)e^{−x} are orthogonal on the interval [0, 2].

3. Show that {fm(x)} = {sin x, sin 3x, sin 5x, . . . } is orthogonal on the interval [0, π/2] and find the norm of each function fm(x).

4. Show that {fm(x)} = {1, cos(nπx/ℓ)}, n = 1, 2, 3, . . . , is orthogonal on the interval [0, ℓ] and find the norm of each function fm(x).

5. Given the three functions L0(x) = 1, L1(x) = −x + 1, L2(x) = (1/2)x² − 2x + 1, verify by direct integration that the functions are orthogonal with respect to the weight function w(x) = e^{−x} on the interval [0, ∞).
3.3 Fourier series.
We first note that the set of functions

{ 1, cos(nπx/ℓ), sin(nπx/ℓ) } , n = 1, 2, 3, . . . (3.20)

is a complete orthogonal set on the interval [−ℓ, ℓ]. This follows from the integrals

∫_{−ℓ}^{ℓ} sin(nπx/ℓ) dx = 0 , ∫_{−ℓ}^{ℓ} cos(nπx/ℓ) dx = 0 (all n ≥ 1), (3.21)

∫_{−ℓ}^{ℓ} sin(mπx/ℓ) sin(nπx/ℓ) dx = 0 (m ≠ n), (3.22)

∫_{−ℓ}^{ℓ} cos(mπx/ℓ) cos(nπx/ℓ) dx = 0 (m ≠ n), (3.23)

∫_{−ℓ}^{ℓ} sin(mπx/ℓ) cos(nπx/ℓ) dx = 0 (all m, n ≥ 1). (3.24)

Note also that

∫_{−ℓ}^{ℓ} cos²(mπx/ℓ) dx = ∫_{−ℓ}^{ℓ} sin²(mπx/ℓ) dx = ℓ. (3.25)
Hence, from Section 3.2 we can, under certain circumstances, expand a given function f(x)
in terms of the functions of the set (3.20), i.e.,

f(x) = a0/2 + Σ_{n=1}^{∞} ( an cos(nπx/ℓ) + bn sin(nπx/ℓ) ), (3.26)
where the coefficients a0, an, bn are real and independent of x. Such a series may, or may
not, be convergent. If it does converge to the sum f(x), then for every integer k
f(x + 2kℓ) = f(x) (3.27)
and f(x) is a periodic function of period 2ℓ so that we need only study the series in the
interval (−ℓ, ℓ), or some other interval of length 2ℓ, such as (0, 2ℓ). A series of the form
(3.26) is called a trigonometric series.
Suppose that f(x) is a periodic function of period 2ℓ which can be represented by the
trigonometric series (3.26). We need to find the coefficients a0, an, bn; this can be done by
using the results (3.21)–(3.25). First integrate Eq. (3.26) from −ℓ to ℓ:

∫_{−ℓ}^{ℓ} f(x) dx = ∫_{−ℓ}^{ℓ} (a0/2) dx + Σ_{n=1}^{∞} ∫_{−ℓ}^{ℓ} ( an cos(nπx/ℓ) + bn sin(nπx/ℓ) ) dx
= a0ℓ + 0 + 0,

using Eq. (3.21). Hence

a0 = (1/ℓ) ∫_{−ℓ}^{ℓ} f(x) dx. (3.28)
Now multiply Eq. (3.26) by cos(mπx/ℓ) and integrate from −ℓ to ℓ:

∫_{−ℓ}^{ℓ} f(x) cos(mπx/ℓ) dx = ∫_{−ℓ}^{ℓ} (a0/2) cos(mπx/ℓ) dx + Σ_{n=1}^{∞} { ∫_{−ℓ}^{ℓ} an cos(nπx/ℓ) cos(mπx/ℓ) dx + ∫_{−ℓ}^{ℓ} bn sin(nπx/ℓ) cos(mπx/ℓ) dx }
= 0 + amℓ + 0,

i.e., am = (1/ℓ) ∫_{−ℓ}^{ℓ} f(x) cos(mπx/ℓ) dx. (3.29)
Similarly, multiplying by sin(mπx/ℓ) and integrating we find

bm = (1/ℓ) ∫_{−ℓ}^{ℓ} f(x) sin(mπx/ℓ) dx. (3.30)
The expressions (3.28), (3.29), (3.30) are called the Euler formulae and the coeffi-
cients a0, am, bm are called the Fourier coefficients of f(x). The series (3.26) is called the
Fourier series corresponding to f(x).
Even if the right-hand side of Eq. (3.26) does not converge to f(x) for all x in (−ℓ, ℓ) we
can still calculate the Fourier coefficients of f(x) from Eqs. (3.28) - (3.30) and then write
f(x) ∼ a0/2 + Σ_{n=1}^{∞} ( an cos(nπx/ℓ) + bn sin(nπx/ℓ) ),
where the right-hand side is the Fourier series of f(x). The symbol ∼ means that f(x) is
not necessarily equal to the right-hand side which may be divergent or converge to some
function other than f(x).
In order to know whether the Fourier series does, in fact, represent the function we need
the following theorem:
Fourier’s Theorem: If a periodic function f(x), with period 2ℓ, is piecewise continuous
on −ℓ < x < ℓ and has left-hand and right-hand derivatives at each point of (−ℓ, ℓ), then
the corresponding Fourier series (3.26), with coefficients (3.28) - (3.30), is convergent to
f(x) at a point of continuity. At a point of discontinuity, the Fourier series converges to
the sum
(1/2)[f(x+) + f(x−)],
where f(x+) is the value of f(x) when x is approached from the right, and f(x−) is the
value of f(x) when x is approached from the left.
Summary. The Fourier series of a function f(x) defined on the interval (−ℓ, ℓ) is given by
f(x) = a0/2 + Σ_{n=1}^{∞} ( an cos(nπx/ℓ) + bn sin(nπx/ℓ) ), (3.26)

where a0 = (1/ℓ) ∫_{−ℓ}^{ℓ} f(x) dx, (3.28)

an = (1/ℓ) ∫_{−ℓ}^{ℓ} f(x) cos(nπx/ℓ) dx, (3.29)

bn = (1/ℓ) ∫_{−ℓ}^{ℓ} f(x) sin(nπx/ℓ) dx. (3.30)

[Figure: at a jump of f(x) the series converges to the average (1/2)[f(x+) + f(x−)].]
Note that when ℓ = π, i.e., when the interval is of length 2π, the expressions (3.26),
(3.28)–(3.30) take the slightly simpler form

f(x) = a0/2 + Σ_{n=1}^{∞} (an cos nx + bn sin nx), (3.31)

a0 = (1/π) ∫_{−π}^{π} f(x) dx , an = (1/π) ∫_{−π}^{π} f(x) cos nx dx , bn = (1/π) ∫_{−π}^{π} f(x) sin nx dx. (3.32)
Example 1. Find the Fourier series corresponding to the function f(x) = x in the interval
(−ℓ, ℓ).
The function obviously satisfies the conditions of the theorem. Using Eqs. (3.28)–(3.30), we obtain

a0 = (1/ℓ) ∫_{−ℓ}^{ℓ} x dx = (1/ℓ)[ (1/2)x² ]_{−ℓ}^{ℓ} = 0,

an = (1/ℓ) ∫_{−ℓ}^{ℓ} x cos(nπx/ℓ) dx
= (1/ℓ){ [ x(ℓ/nπ) sin(nπx/ℓ) ]_{−ℓ}^{ℓ} − (ℓ/nπ) ∫_{−ℓ}^{ℓ} sin(nπx/ℓ) dx }
= 0 + (ℓ/n²π²)[ cos(nπx/ℓ) ]_{−ℓ}^{ℓ} = 0,

bn = (1/ℓ) ∫_{−ℓ}^{ℓ} x sin(nπx/ℓ) dx
= (1/ℓ){ [ −x(ℓ/nπ) cos(nπx/ℓ) ]_{−ℓ}^{ℓ} + (ℓ/nπ) ∫_{−ℓ}^{ℓ} cos(nπx/ℓ) dx }
= (1/ℓ)[ −(ℓ²/nπ) cos nπ − (ℓ²/nπ) cos(−nπ) + (ℓ²/n²π²)[ sin(nπx/ℓ) ]_{−ℓ}^{ℓ} ]
= −(2ℓ/nπ)(−1)^n.

Hence the required series is

f(x) = Σ_{n=1}^{∞} (−1)^{n+1} (2ℓ/nπ) sin(nπx/ℓ) = (2ℓ/π)[ sin(πx/ℓ) − (1/2) sin(2πx/ℓ) + (1/3) sin(3πx/ℓ) − . . . ]

and this does converge to the function in (−ℓ, ℓ).
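The Euler formulae can be applied symbolically to this example. The sketch below (an illustrative addition; assumes sympy is available) reproduces a0 = an = 0 and bn = (2ℓ/nπ)(−1)^{n+1} for the first few n.

```python
import sympy as sp

x, l = sp.symbols('x l', positive=True)
f = x  # the function of Example 1

# Euler formulae (3.28)-(3.30) on (-l, l), checked for n = 1, 2, 3.
a0 = sp.integrate(f, (x, -l, l)) / l
assert a0 == 0

for k in (1, 2, 3):
    an = sp.integrate(f * sp.cos(k * sp.pi * x / l), (x, -l, l)) / l
    bn = sp.integrate(f * sp.sin(k * sp.pi * x / l), (x, -l, l)) / l
    assert sp.simplify(an) == 0
    assert sp.simplify(bn - 2 * l * (-1) ** (k + 1) / (k * sp.pi)) == 0
```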
Example 2. Consider the step function given by
f(x) = 0 for −π < x < −π/2 , f(x) = 1 for −π/2 < x < π/2 , f(x) = 0 for π/2 < x < π.

This function has only a finite number of finite discontinuities and satisfies the conditions of the theorem. The period is 2π, i.e., ℓ = π, so we use Eqs. (3.31) and (3.32):
a0 = (1/π) ∫_{−π}^{π} f(x) dx = (1/π) ∫_{−π/2}^{π/2} 1 dx = 1,

an = (1/π) ∫_{−π}^{π} f(x) cos nx dx = (1/π) ∫_{−π/2}^{π/2} cos nx dx = (1/π)[ (1/n) sin nx ]_{−π/2}^{π/2} = (2/nπ) sin(nπ/2).

Now sin(nπ/2) = 0 if n is even. If n is odd, i.e., n = 2m + 1, then

sin(nπ/2) = sin(mπ + π/2) = cos mπ = (−1)^m.

∴ an = 0 (n even) , a_{2m+1} = (−1)^m · 2/((2m + 1)π) (m = 0, 1, 2, . . . ),

bn = (1/π) ∫_{−π}^{π} f(x) sin nx dx = (1/π) ∫_{−π/2}^{π/2} sin nx dx = (1/π)[ −(1/n) cos nx ]_{−π/2}^{π/2} = 0.

Hence the series is

f(x) = 1/2 + (2/π) cos x − (2/3π) cos 3x + (2/5π) cos 5x − (2/7π) cos 7x + . . . ,

i.e., f(x) = 1/2 + (2/π) Σ_{m=0}^{∞} ( (−1)^m/(2m + 1) ) cos(2m + 1)x.
Note that when x = −π/2 and when x = π/2, cos(2m + 1)x = 0, so at these points the series gives f(x) = 1/2, which is the average of the two values on either side of those points.
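This behaviour is easy to observe numerically. The sketch below (an addition for illustration; assumes numpy) evaluates partial sums of the series of Example 2: inside (−π/2, π/2) they approach 1, outside they approach 0, and at the jump every partial sum equals 1/2.

```python
import numpy as np

def partial_sum(x, M):
    """First M terms of 1/2 + (2/pi) * sum_m (-1)^m cos((2m+1)x)/(2m+1)."""
    m = np.arange(M)
    return 0.5 + (2 / np.pi) * np.sum(
        (-1.0) ** m * np.cos((2 * m + 1) * x) / (2 * m + 1))

assert abs(partial_sum(0.0, 20000) - 1.0) < 1e-3   # inside the pulse
assert abs(partial_sum(2.0, 20000) - 0.0) < 1e-3   # outside the pulse
# At the jump x = pi/2 every partial sum is (numerically) 1/2.
assert abs(partial_sum(np.pi / 2, 5) - 0.5) < 1e-12
```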
The following diagrams illustrate how the Fourier series converges to the given function.
Diagrams 1, 3, 5 and 7 show each of the individual terms of the partial sums on the same
graph. Diagrams 2, 4, 6 and 8 show the terms summed together.
DIAGRAM 1 and DIAGRAM 2: 1/2 + (2/π) cos x

DIAGRAM 3 and DIAGRAM 4: 1/2 + (2/π) cos x − (2/3π) cos 3x

DIAGRAM 5 and DIAGRAM 6: 1/2 + (2/π) cos x − (2/3π) cos 3x + (2/5π) cos 5x

DIAGRAM 7 and DIAGRAM 8: 1/2 + (2/π) cos x − (2/3π) cos 3x + (2/5π) cos 5x − (2/7π) cos 7x

[Plots over −3 ≤ x ≤ 3 not reproduced.]
Example 3. Find the Fourier series of the function f(x) of period 2 defined by
1
1
2
-1
-1
O
f(x) =
{
−1 −1 < x < 02x 0 ≤ x < 1
.
The period is 2 so that ℓ = 1 and hence
a0 = 1 ·∫ 1
−1f(x) dx =
∫ 0
−1(−1)dx +
∫ 1
02x dx
= [−x]0−1 +
[
x2]1
0
= −1 + 1 = 0,
106 CHAPTER 3. FOURIER SERIES
an =
∫ 1
−1f(x) cos nπx dx
=
∫ 0
−1(−1) cos nπx dx +
∫ 1
02x cos nπx dx
= − 1
nπ[sinnπx]0
−1 +
[
2x1
nπsinnπx
]1
0
− 2
nπ
∫ 1
01 · sinnπx dx
= 0 + 0 +2
n2π2(cos nπ − 1)
=2
n2π2[(−1)n − 1]
=
{
0 n even
− 4n2π2 n odd, i.e., − 4
(2m + 1)2π2 (m = 0, 1, 2 . . . ),
b_n = \int_{-1}^{1} f(x)\sin n\pi x\,dx = \int_{-1}^{0}(-1)\sin n\pi x\,dx + \int_0^1 2x\sin n\pi x\,dx

= \frac{1}{n\pi}[\cos n\pi x]_{-1}^{0} - \left[2x\,\frac{1}{n\pi}\cos n\pi x\right]_0^1 + \frac{2}{n\pi}\int_0^1 \cos n\pi x\,dx

= \frac{1}{n\pi}(1 - \cos n\pi) - \frac{2}{n\pi}\cos n\pi + \frac{2}{n^2\pi^2}[\sin n\pi x]_0^1

= \frac{1}{n\pi}(1 - 3\cos n\pi) + 0

= \begin{cases} -\dfrac{2}{n\pi} & (n \text{ even}) = -\dfrac{1}{m\pi}\ (m = 1, 2, \dots) \\ \dfrac{4}{n\pi} & (n \text{ odd}) = \dfrac{4}{(2m+1)\pi}\ (m = 0, 1, 2, \dots). \end{cases}
Hence f(x) = \left(-\frac{4}{\pi^2}\cos\pi x - \frac{4}{9\pi^2}\cos 3\pi x - \dots\right) + \left(\frac{4}{\pi}\sin\pi x - \frac{1}{\pi}\sin 2\pi x + \frac{4}{3\pi}\sin 3\pi x - \dots\right),

i.e., f(x) = -\frac{4}{\pi^2}\sum_{m=0}^{\infty}\frac{1}{(2m+1)^2}\cos(2m+1)\pi x + \frac{4}{\pi}\sum_{m=0}^{\infty}\frac{1}{2m+1}\sin(2m+1)\pi x - \frac{1}{\pi}\sum_{m=1}^{\infty}\frac{1}{m}\sin 2m\pi x.
Problem Set 3.3
In the following problems find the Fourier series of f(x) on the given interval.
1. f(x) = \begin{cases} 0 & -\pi < x < 0 \\ 1 & 0 \le x < \pi \end{cases} \qquad 2. f(x) = \begin{cases} -1 & -\pi < x < 0 \\ 2 & 0 \le x < \pi \end{cases}

3. f(x) = \begin{cases} 1 & -1 < x < 0 \\ x & 0 \le x < 1 \end{cases} \qquad 4. f(x) = \begin{cases} 0 & -1 < x < 0 \\ x & 0 \le x < 1 \end{cases}

5. f(x) = \begin{cases} 0 & -\pi < x < 0 \\ x^2 & 0 \le x < \pi \end{cases} \qquad 6. f(x) = \begin{cases} \pi^2 & -\pi < x < 0 \\ \pi^2 - x^2 & 0 \le x < \pi \end{cases}

7. f(x) = x + \pi, \; -\pi < x < \pi \qquad 8. f(x) = 3 - 2x, \; -\pi < x < \pi

9. f(x) = \begin{cases} 0 & -\pi < x < 0 \\ \sin x & 0 \le x < \pi \end{cases} \qquad 10. f(x) = \begin{cases} 0 & -\frac{\pi}{2} < x < 0 \\ \cos x & 0 \le x < \frac{\pi}{2} \end{cases}

11. f(x) = \begin{cases} 0 & -2 < x < 0 \\ x & 0 \le x < 1 \\ 1 & 1 \le x < 2 \end{cases} \qquad 12. f(x) = \begin{cases} 2 + x & -2 < x < 0 \\ 2 & 0 \le x < 2 \end{cases}

13. f(x) = e^x, \; -\pi < x < \pi \qquad 14. f(x) = \begin{cases} 0 & -\pi < x < 0 \\ e^x - 1 & 0 \le x < \pi \end{cases}
3.4 Cosine and sine series.
A function f(x) defined in the interval (−ℓ, ℓ) is said to be an even function of x if, for every value of x in the interval,
f(−x) = f(x). (3.33)
On the other hand, f(x) is said to be an odd function of x if, for every value of x in the
interval,
f(−x) = −f(x). (3.34)
For Eq. (3.34) to be consistent we must have f(0) = 0.
For example, f(x) = xn is an even function if n is an even integer, including zero, and
is an odd function if n is odd. Also cos x is an even function while sinx is an odd function.
The graph of an even function is symmetric with respect to the y-axis, and the graph of an
odd function is symmetric with respect to the origin.
Some properties of even and odd functions are:
(i) The product of two even functions is even.
(ii) The product of two odd functions is even.
(iii) The product of an even function and an odd function is odd.
(iv) The sum (difference) of two even functions is even.
(v) The sum (difference) of two odd functions is odd.
(vi) If f is even, then \int_{-a}^{a} f(x)\,dx = 2\int_0^a f(x)\,dx.

(vii) If f is odd, then \int_{-a}^{a} f(x)\,dx = 0.
If f(x) is an even function on (−ℓ, ℓ), then from properties (i), (iii) and (vi), the Fourier
coefficients (3.28) - (3.30) become
a_0 = \frac{1}{\ell}\int_{-\ell}^{\ell} f(x)\,dx = \frac{2}{\ell}\int_0^{\ell} f(x)\,dx,

a_n = \frac{1}{\ell}\int_{-\ell}^{\ell} f(x)\cos\frac{n\pi x}{\ell}\,dx = \frac{2}{\ell}\int_0^{\ell} f(x)\cos\frac{n\pi x}{\ell}\,dx,

b_n = \frac{1}{\ell}\int_{-\ell}^{\ell} f(x)\sin\frac{n\pi x}{\ell}\,dx = 0.
Hence, we have that the Fourier series of an even function f(x) on (−ℓ, ℓ) is the cosine series
f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi x}{\ell}, (3.35)

where a_0 = \frac{2}{\ell}\int_0^{\ell} f(x)\,dx, \quad a_n = \frac{2}{\ell}\int_0^{\ell} f(x)\cos\frac{n\pi x}{\ell}\,dx. (3.36)

The Fourier series of an odd function on (−ℓ, ℓ) is the sine series

f(x) = \sum_{n=1}^{\infty} b_n\sin\frac{n\pi x}{\ell}, (3.37)

where b_n = \frac{2}{\ell}\int_0^{\ell} f(x)\sin\frac{n\pi x}{\ell}\,dx. (3.38)
Example 1. Find the Fourier series of the function f(x) = \frac{1}{4}x^2 on the interval (−π, π).

The function is an even function, so the Fourier series is the cosine series given by Eqs. (3.35), (3.36). In this case \ell = \pi, so

f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos nx,

and a_0 = \frac{2}{\pi}\int_0^{\pi} f(x)\,dx = \frac{1}{2\pi}\int_0^{\pi} x^2\,dx = \frac{1}{2\pi}\left[\frac{1}{3}x^3\right]_0^{\pi} = \frac{1}{6\pi}\pi^3 = \frac{\pi^2}{6},

a_n = \frac{2}{\pi}\int_0^{\pi} f(x)\cos nx\,dx = \frac{1}{2\pi}\int_0^{\pi} x^2\cos nx\,dx

= \frac{1}{2\pi}\left\{\left[x^2\,\frac{1}{n}\sin nx\right]_0^{\pi} - \int_0^{\pi} 2x\cdot\frac{1}{n}\sin nx\,dx\right\}

= \frac{1}{2\pi}\left\{0 - \frac{2}{n}\left(\left[x\left(-\frac{1}{n}\cos nx\right)\right]_0^{\pi} + \int_0^{\pi} 1\cdot\frac{1}{n}\cos nx\,dx\right)\right\}

= -\frac{1}{n\pi}\left[\left[-\frac{x}{n}\cos nx\right]_0^{\pi} + \left[\frac{1}{n^2}\sin nx\right]_0^{\pi}\right]

= -\frac{1}{n\pi}\left[-\frac{\pi}{n}\cos n\pi + 0 + 0\right] = \frac{1}{n^2}\cos n\pi = \frac{(-1)^n}{n^2}.

\therefore a_1 = -1, \; a_2 = \frac{1}{4}, \; a_3 = -\frac{1}{9}, \; a_4 = \frac{1}{16}, etc.

\therefore f(x) = \frac{\pi^2}{12} + \sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}\cos nx = \frac{\pi^2}{12} - \cos x + \frac{1}{4}\cos 2x - \frac{1}{9}\cos 3x + \frac{1}{16}\cos 4x - \dots.
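The coefficient formula a_n = (-1)^n/n^2 can be confirmed by approximating the integral (3.36) numerically. The sketch below is illustrative only (the midpoint rule and step count are our choices, not from the notes):

```python
import math

def a_n(n, N=100_000):
    """a_n = (2/pi) * integral_0^pi (x^2/4) cos(nx) dx, midpoint rule."""
    h = math.pi / N
    total = 0.0
    for i in range(N):
        x = (i + 0.5) * h
        total += (x * x / 4.0) * math.cos(n * x)
    return (2.0 / math.pi) * total * h

for n in (1, 2, 3, 4):
    print(n, a_n(n), (-1) ** n / n ** 2)  # numeric vs. closed form
```

The midpoint rule has O(h^2) error on this smooth integrand, so the two columns agree to many digits.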
Example 2. Find the Fourier series of the function

f(x) = \begin{cases} -x^2 & -\pi < x < 0 \\ x^2 & 0 \le x < \pi \end{cases}

on the interval (−π, π).

This is an odd function, so the Fourier series is the sine series given by Eqs. (3.37), (3.38).

\therefore b_n = \frac{2}{\pi}\int_0^{\pi} x^2\sin nx\,dx

= \frac{2}{\pi}\left\{\left[x^2\left(-\frac{1}{n}\right)\cos nx\right]_0^{\pi} - \int_0^{\pi} 2x\left(-\frac{1}{n}\right)\cos nx\,dx\right\}

= \frac{2}{\pi}\left\{\left[-\frac{1}{n}x^2\cos nx\right]_0^{\pi} + \frac{2}{n}\left(\left[x\,\frac{1}{n}\sin nx\right]_0^{\pi} - \int_0^{\pi} 1\cdot\frac{1}{n}\sin nx\,dx\right)\right\}

= \frac{2}{\pi}\left\{-\frac{1}{n}\pi^2\cos n\pi + 0 + 0 + \frac{2}{n}\left[\frac{1}{n^2}\cos nx\right]_0^{\pi}\right\}

= \frac{2}{\pi}\left\{-\frac{\pi^2}{n}(-1)^n + \frac{2}{n^3}\cos n\pi - \frac{2}{n^3}\right\}

= \frac{2}{\pi}\left\{-\frac{\pi^2}{n}(-1)^n + \frac{2}{n^3}(-1)^n - \frac{2}{n^3}\right\}.

\therefore b_1 = \frac{2}{\pi}\{\pi^2 - 2 - 2\} = \frac{2}{\pi}(\pi^2 - 4); \quad b_2 = \frac{2}{\pi}\left\{-\frac{\pi^2}{2} + \frac{2}{8} - \frac{2}{8}\right\} = -\pi;

b_3 = \frac{2}{\pi}\left\{\frac{\pi^2}{3} - \frac{2}{27} - \frac{2}{27}\right\} = \frac{2}{27\pi}(9\pi^2 - 4); \quad b_4 = \frac{2}{\pi}\left\{-\frac{\pi^2}{4}\right\} = -\frac{\pi}{2}; etc.

\therefore f(x) = \frac{2}{\pi}(\pi^2 - 4)\sin x - \pi\sin 2x + \frac{2}{27\pi}(9\pi^2 - 4)\sin 3x - \frac{\pi}{2}\sin 4x + \dots.
3.5 Half-range expansions.
In many physical and engineering problems we need to find a Fourier series expansion for a function f(x) which is defined on some finite interval such as (0, ℓ), and very often we need to consider only a cosine series or only a sine series. For a series of cosines only we extend f(x) as an even function into the interval −ℓ < x < 0, and elsewhere periodically. For a series of sines only we extend f(x) as an odd function into the interval −ℓ < x < 0, and elsewhere periodically. These extensions of f(x) into the interval −ℓ < x < 0 are called, respectively, the even periodic extension and the odd periodic extension of f(x).
[Figures: the even periodic extension (left) and the odd periodic extension (right) of a function f(x) given on (0, ℓ).]
The even extension is given by

f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi x}{\ell}, (3.39)

a_0 = \frac{2}{\ell}\int_0^{\ell} f(x)\,dx, \quad a_n = \frac{2}{\ell}\int_0^{\ell} f(x)\cos\frac{n\pi x}{\ell}\,dx, (3.40)

and the odd extension is given by

f(x) = \sum_{n=1}^{\infty} b_n\sin\frac{n\pi x}{\ell}, (3.41)

b_n = \frac{2}{\ell}\int_0^{\ell} f(x)\sin\frac{n\pi x}{\ell}\,dx. (3.42)
The series (3.39) and (3.41) are called the half-range expansions of the function f(x).
Example. Find both the even and the odd periodic extensions of the function

f(x) = \begin{cases} \dfrac{\pi}{2} & 0 < x < \dfrac{\pi}{2} \\ \pi - x & \dfrac{\pi}{2} \le x < \pi \end{cases}

[Figures: the graph of f(x) on (0, π), its even extension on (−π, π), and its odd extension on (−π, π).]
The even extension is given by Eq. (3.39) with

a_0 = \frac{2}{\pi}\int_0^{\pi} f(x)\,dx = \frac{2}{\pi}\int_0^{\pi/2}\frac{\pi}{2}\,dx + \frac{2}{\pi}\int_{\pi/2}^{\pi}(\pi - x)\,dx

= \frac{2}{\pi}\left[\frac{\pi}{2}x\right]_0^{\pi/2} + \frac{2}{\pi}\left[\pi x - \frac{1}{2}x^2\right]_{\pi/2}^{\pi}

= \frac{\pi}{2} + \frac{2}{\pi}\left[\pi^2 - \frac{1}{2}\pi^2 - \frac{1}{2}\pi^2 + \frac{1}{8}\pi^2\right] = \frac{3}{4}\pi,
a_n = \frac{2}{\pi}\int_0^{\pi} f(x)\cos nx\,dx = \frac{2}{\pi}\int_0^{\pi/2}\frac{\pi}{2}\cos nx\,dx + \frac{2}{\pi}\int_{\pi/2}^{\pi}(\pi - x)\cos nx\,dx

= \frac{2}{\pi}\left[\frac{\pi}{2n}\sin nx\right]_0^{\pi/2} + \frac{2}{\pi}\left\{\left[(\pi - x)\frac{1}{n}\sin nx\right]_{\pi/2}^{\pi} + \int_{\pi/2}^{\pi} 1\cdot\frac{1}{n}\sin nx\,dx\right\}

= \frac{2}{\pi}\cdot\frac{\pi}{2n}\sin\frac{n\pi}{2} + \frac{2}{\pi}\left(0 - \frac{\pi}{2n}\sin\frac{n\pi}{2}\right) - \frac{2}{\pi n^2}[\cos nx]_{\pi/2}^{\pi}

= -\frac{2}{\pi n^2}\left[\cos n\pi - \cos\frac{n\pi}{2}\right].
Hence

a_1 = -\frac{2}{\pi}\left(\cos\pi - \cos\frac{\pi}{2}\right) = \frac{2}{\pi},

a_2 = -\frac{2}{4\pi}(\cos 2\pi - \cos\pi) = -\frac{1}{\pi},

a_3 = -\frac{2}{9\pi}\left(\cos 3\pi - \cos\frac{3\pi}{2}\right) = \frac{2}{9\pi},

a_4 = -\frac{2}{16\pi}(\cos 4\pi - \cos 2\pi) = 0,

a_5 = -\frac{2}{25\pi}\left(\cos 5\pi - \cos\frac{5\pi}{2}\right) = \frac{2}{25\pi},

a_6 = -\frac{2}{36\pi}(\cos 6\pi - \cos 3\pi) = -\frac{1}{9\pi}.

Hence, the even periodic extension is

f(x) = \frac{3}{8}\pi + \frac{1}{\pi}\left(2\cos x - \cos 2x + \frac{2}{9}\cos 3x + \frac{2}{25}\cos 5x - \frac{1}{9}\cos 6x + \dots\right).
The odd extension is given by Eq. (3.41) with

b_n = \frac{2}{\pi}\int_0^{\pi} f(x)\sin nx\,dx = \frac{2}{\pi}\int_0^{\pi/2}\frac{\pi}{2}\sin nx\,dx + \frac{2}{\pi}\int_{\pi/2}^{\pi}(\pi - x)\sin nx\,dx

= \left[-\frac{1}{n}\cos nx\right]_0^{\pi/2} + \frac{2}{\pi}\left\{\left[(\pi - x)\left(-\frac{1}{n}\cos nx\right)\right]_{\pi/2}^{\pi} - \int_{\pi/2}^{\pi}\frac{1}{n}\cos nx\,dx\right\}

= -\frac{1}{n}\cos\frac{n\pi}{2} + \frac{1}{n} + \frac{2}{\pi}\left[0 + \frac{\pi}{2n}\cos\frac{n\pi}{2}\right] - \frac{2}{\pi n^2}[\sin nx]_{\pi/2}^{\pi}

= \frac{1}{n} + \frac{2}{\pi n^2}\sin\frac{n\pi}{2}.
Hence,

b_1 = 1 + \frac{2}{\pi}\sin\frac{\pi}{2} = 1 + \frac{2}{\pi}, \qquad b_2 = \frac{1}{2} + \frac{1}{2\pi}\sin\pi = \frac{1}{2},

b_3 = \frac{1}{3} + \frac{2}{9\pi}\sin\frac{3\pi}{2} = \frac{1}{3} - \frac{2}{9\pi}, \qquad b_4 = \frac{1}{4} + \frac{1}{8\pi}\sin 2\pi = \frac{1}{4},

b_5 = \frac{1}{5} + \frac{2}{25\pi}\sin\frac{5\pi}{2} = \frac{1}{5} + \frac{2}{25\pi}, \qquad b_6 = \frac{1}{6} + \frac{1}{18\pi}\sin 3\pi = \frac{1}{6}.

Hence, the odd periodic extension is

f(x) = \left(1 + \frac{2}{\pi}\right)\sin x + \frac{1}{2}\sin 2x + \left(\frac{1}{3} - \frac{2}{9\pi}\right)\sin 3x + \frac{1}{4}\sin 4x + \left(\frac{1}{5} + \frac{2}{25\pi}\right)\sin 5x + \dots.
Problem Set 3.5
In problems 1 - 12 determine whether f(x) is odd or even and expand the function in an appropriate sine or cosine series.

1. f(x) = \begin{cases} -1 & -\pi < x < 0 \\ 1 & 0 \le x < \pi \end{cases} \qquad 2. f(x) = \begin{cases} 1 & -2 < x < -1 \\ 0 & -1 < x < 1 \\ 1 & 1 < x < 2 \end{cases}

3. f(x) = |x|, \; -\pi < x < \pi \qquad 4. f(x) = x, \; -\pi < x < \pi

5. f(x) = x^2, \; -1 < x < 1 \qquad 6. f(x) = x|x|, \; -1 < x < 1

7. f(x) = \pi^2 - x^2, \; -\pi < x < \pi \qquad 8. f(x) = x^3, \; -\pi < x < \pi

9. f(x) = \begin{cases} x - 1 & -\pi < x < 0 \\ x + 1 & 0 \le x < \pi \end{cases} \qquad 10. f(x) = \begin{cases} x + 1 & -1 < x < 0 \\ x - 1 & 0 \le x < 1 \end{cases}

11. f(x) = |\sin x|, \; -\pi < x < \pi \qquad 12. f(x) = \cos x, \; -\frac{\pi}{2} < x < \frac{\pi}{2}

In problems 13 - 22 find the half-range cosine and sine expansions of f(x).

13. f(x) = \begin{cases} 1 & 0 < x < \frac{1}{2} \\ 0 & \frac{1}{2} \le x < 1 \end{cases} \qquad 14. f(x) = \begin{cases} 0 & 0 < x < \frac{1}{2} \\ 1 & \frac{1}{2} \le x < 1 \end{cases}

15. f(x) = \cos x, \; 0 < x < \frac{\pi}{2} \qquad 16. f(x) = \sin x, \; 0 < x < \pi

17. f(x) = \begin{cases} x & 0 < x < \frac{\pi}{2} \\ \pi - x & \frac{\pi}{2} \le x < \pi \end{cases} \qquad 18. f(x) = \begin{cases} 0 & 0 < x < \pi \\ x - \pi & \pi \le x < 2\pi \end{cases}

19. f(x) = \begin{cases} x & 0 < x < 1 \\ 1 & 1 \le x < 2 \end{cases} \qquad 20. f(x) = \begin{cases} 1 & 0 < x < 1 \\ 2 - x & 1 \le x < 2 \end{cases}

21. f(x) = x^2 + x, \; 0 < x < 1 \qquad 22. f(x) = x(2 - x), \; 0 < x < 2
3.6 Complex form of the Fourier series.
The complex exponential function is given by

e^{ix} = \cos x + i\sin x, \qquad e^{-ix} = \cos x - i\sin x.

Hence \cos nx = \frac{1}{2}\left(e^{inx} + e^{-inx}\right) and \sin nx = \frac{1}{2i}\left(e^{inx} - e^{-inx}\right).

The Fourier series

f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos\frac{n\pi x}{\ell} + b_n\sin\frac{n\pi x}{\ell}\right)
becomes

f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n\frac{1}{2}\left(e^{in\pi x/\ell} + e^{-in\pi x/\ell}\right) + b_n\cdot\frac{1}{2i}\left(e^{in\pi x/\ell} - e^{-in\pi x/\ell}\right)\right]

= \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[\frac{1}{2}\left(a_n + \frac{1}{i}b_n\right)e^{in\pi x/\ell} + \frac{1}{2}\left(a_n - \frac{1}{i}b_n\right)e^{-in\pi x/\ell}\right]

= h_0 + \sum_{n=1}^{\infty}\left(h_n e^{in\pi x/\ell} + \bar{h}_n e^{-in\pi x/\ell}\right), (3.43)

where h_0 = \frac{a_0}{2}, \quad h_n = \frac{1}{2}(a_n - ib_n), \quad \bar{h}_n = \frac{1}{2}(a_n + ib_n).

By writing \bar{h}_n as h_{-n}, Eq. (3.43) can be written as

f(x) = \sum_{n=-\infty}^{\infty} h_n e^{in\pi x/\ell}. (3.44)
The Fourier coefficients are given by

h_0 = \frac{a_0}{2} = \frac{1}{2\ell}\int_{-\ell}^{\ell} f(x)\,dx, (3.45)

h_n = \frac{1}{2}(a_n - ib_n) = \frac{1}{2\ell}\int_{-\ell}^{\ell} f(x)\left(\cos\frac{n\pi x}{\ell} - i\sin\frac{n\pi x}{\ell}\right)dx,

i.e., h_n = \frac{1}{2\ell}\int_{-\ell}^{\ell} f(x)e^{-in\pi x/\ell}\,dx, (3.46)

\bar{h}_n = \frac{1}{2}(a_n + ib_n) = \frac{1}{2\ell}\int_{-\ell}^{\ell} f(x)e^{in\pi x/\ell}\,dx. (3.47)

Hence, the complex Fourier series is given by Eq. (3.43) with Eqs. (3.45) - (3.47). Alternatively, the Fourier series is given by Eq. (3.44) with h_n given by Eq. (3.46).
If f(x) is an even function, then in its Fourier expansion b_n = 0. Hence h_n = \bar{h}_n = \frac{1}{2}a_n, so that the complex Fourier coefficients are real. Similarly, if f(x) is an odd function, then a_0 = a_n = 0 and h_n = -\frac{1}{2}ib_n, so that the complex Fourier coefficients are pure imaginary.
Example. Find the complex Fourier series of the function

f(x) = e^x, \; -\pi < x < \pi, \quad f(x + 2\pi) = f(x).

We use the form (3.44) so that

h_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^x e^{-inx}\,dx = \frac{1}{2\pi}\left[\frac{1}{1 - in}e^{(1-in)x}\right]_{-\pi}^{\pi}

= \frac{1}{2\pi}\,\frac{1}{1 - in}\left(e^{(1-in)\pi} - e^{-(1-in)\pi}\right) = \frac{1}{2\pi}\,\frac{1 + in}{1 + n^2}\left(e^{\pi}e^{-in\pi} - e^{-\pi}e^{in\pi}\right).

Now e^{in\pi} = \cos n\pi + i\sin n\pi = (-1)^n. Similarly e^{-in\pi} = (-1)^n. Hence

h_n = \frac{1}{2\pi}\,\frac{1 + in}{1 + n^2}(-1)^n\left(e^{\pi} - e^{-\pi}\right) = \frac{1 + in}{\pi(1 + n^2)}(-1)^n\sinh\pi,

\therefore f(x) = \frac{\sinh\pi}{\pi}\sum_{n=-\infty}^{\infty}\frac{1 + in}{1 + n^2}(-1)^n e^{inx}.

Separating this expression for f(x) into real and imaginary parts we obtain

f(x) = \frac{\sinh\pi}{\pi}\sum_{n=-\infty}^{\infty}\frac{(-1)^n}{1 + n^2}\left[(\cos nx - n\sin nx) + i(n\cos nx + \sin nx)\right].
The terms corresponding to n = -1, -2, \dots can be included in a sum from n = 1 to \infty by replacing n by -n, and the n = 0 term can be listed separately to give

f(x) = \frac{\sinh\pi}{\pi}\left\{1 + \sum_{n=1}^{\infty}\frac{(-1)^n}{1 + n^2}\left[(\cos nx - n\sin nx) + i(n\cos nx + \sin nx)\right] + \sum_{n=1}^{\infty}\frac{(-1)^n}{1 + n^2}\left[(\cos nx - n\sin nx) + i(-n\cos nx - \sin nx)\right]\right\}

= \frac{\sinh\pi}{\pi}\left\{1 + 2\sum_{n=1}^{\infty}\frac{(-1)^n}{1 + n^2}\cos nx - 2\sum_{n=1}^{\infty}\frac{(-1)^n n}{1 + n^2}\sin nx\right\},

which is the corresponding real Fourier series.
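The closed form for h_n can be confirmed by computing the integral (3.46) numerically. This Python sketch is illustrative (the midpoint rule and step count are our choices, not from the notes):

```python
import cmath
import math

def h_n_numeric(n, N=200_000):
    """h_n = (1/2pi) * integral_{-pi}^{pi} e^x e^{-inx} dx, midpoint rule."""
    h = 2 * math.pi / N
    total = 0j
    for i in range(N):
        x = -math.pi + (i + 0.5) * h
        total += math.exp(x) * cmath.exp(-1j * n * x)
    return total * h / (2 * math.pi)

def h_n_closed(n):
    """Closed form derived in the example."""
    return (1 + 1j * n) / (math.pi * (1 + n * n)) * (-1) ** n * math.sinh(math.pi)

print(h_n_numeric(3))
print(h_n_closed(3))
```

For n = 0 both expressions reduce to sinh(π)/π, the mean value of e^x over one period.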
Problem Set 3.6
Find the complex form of the Fourier series of the following functions f(x).
1. f(x) = x, \; -\pi < x < \pi.

2. f(x) = \begin{cases} -1 & -\pi < x < 0 \\ 1 & 0 < x < \pi \end{cases}.

3. f(x) = e^{-x}, \; -\frac{\pi}{2} < x < \frac{\pi}{2}.
3.7 Separable partial differential equations.
Many problems in applied mathematics can be reduced to the solution of the partial differential equation

\nabla^2 V = L\frac{\partial^2 V}{\partial t^2} + M\frac{\partial V}{\partial t} + N, (3.48)

where V is a physical quantity depending on the three cartesian space co-ordinates x, y, z and on time t, \nabla^2 is the differential operator

\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}, (3.49)

known as the Laplacian, and L, M, N are functions of x, y, z or constants. The following four special cases of Eq. (3.48) are of particular importance.
(a) Laplace's equation. Here L = M = N = 0 and examples of quantities which can be represented by the function V are:
(i) The gravitational potential in a region devoid of attracting matter.
(ii) The electrostatic potential in a uniform dielectric.
(iii) The magnetic potential.
(iv) The velocity potential in the irrotational motion of a homogeneous fluid.
(v) The steady state temperature in a uniform solid.
(b) Poisson’s Equation. In this equation L = M = 0 and N is a given function of x, y, z.
Examples of quantities represented by V are:
(i) The gravitational potential in a region in which N is proportional to the density
of the material at a point (x, y, z) in the region.
(ii) The electrostatic potential in a region in which N is proportional to the charge
distribution.
(iii) In the two-dimensional (z absent) form of the equation in which N is a constant,
V is a measure of the shear stress entailed by twisting a long bar of specified
cross-section.
(c) Heat conduction equation. When L = N = 0 and M = k^{-2}, where k^2 is the diffusivity of a homogeneous isotropic body, Eq. (3.48) gives the temperature V at a point (x, y, z) of the body. In certain circumstances the same equation can be used in diffusion problems, the quantity V then being the concentration of the diffusing substance.
(d) The wave equation. Eq. (3.48) with M = N = 0 and L = c^{-2} arises in investigations
of waves propagated with velocity c independent of wave length. Typical examples of
quantities which can be represented by the function V are:
(i) Components of the displacement in vibrating systems.
(ii) The velocity potential of a gas in the theory of sound.
(iii) Components of the electric or magnetic vector in the electromagnetic theory of
light.
In the two-dimensional case, i.e., when there are two independent variables x_1 and x_2, the general second-order homogeneous linear partial differential equation for a function u(x_1, x_2) can be written in the form

A\frac{\partial^2 u}{\partial x_1^2} + B\frac{\partial^2 u}{\partial x_1\partial x_2} + C\frac{\partial^2 u}{\partial x_2^2} + D\frac{\partial u}{\partial x_1} + E\frac{\partial u}{\partial x_2} + Fu = 0, (3.50)

where A, B, C, D, E, F are real constants. Such an equation can be classified into one of three types; the equation is said to be

(a) hyperbolic if B^2 - 4AC > 0,

(b) parabolic if B^2 - 4AC = 0,

(c) elliptic if B^2 - 4AC < 0.
1. The one-dimensional heat conduction equation

k^2\frac{\partial^2 u}{\partial x^2} = \frac{\partial u}{\partial t}, (3.51)

where k^2 is the thermal diffusivity, and u(x, t) is the temperature. This equation governs the temperature distribution in a straight bar of uniform cross-section and homogeneous material. It is a parabolic equation.

2. The one-dimensional wave equation

c^2\frac{\partial^2 u}{\partial x^2} = \frac{\partial^2 u}{\partial t^2} (3.52)

describes the motion of a vibrating string, where c is the wave velocity for the string. It is a hyperbolic equation.

3. The two-dimensional Laplace equation

\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0. (3.53)

This equation arises in problems involving time-independent potential functions. It is an elliptic equation.
We shall first turn our attention to the one-dimensional heat conduction equation, ap-
plying it to the study of the temperature distribution in a bar of thermal diffusivity k2 and
of length ℓ with the ends of the bar denoted by x = 0 and x = ℓ. In order to solve such a
problem we require an initial condition, such as the temperature distribution in the bar at
time t = 0. Only one such condition is required since the differential equation contains only
the first derivative with respect to t. However, we also require two boundary conditions,
i.e., conditions on x, since the differential equation contains the second derivative with re-
spect to x. The boundary conditions may be that the ends of the bar are held at fixed
temperatures or that the ends are insulated or that one end is at a fixed temperature while
the other end is insulated. The initial and boundary conditions that we will use may be
stated as follows:
Initial condition:
u(x, 0) = f(x). (3.54)
This states that at time t = 0 , the temperature distribution in the bar is given by f(x).
Boundary conditions: Either
u(0, t) = 0 , u(ℓ, t) = 0 , (3.55)
which states that both ends of the bar are held at zero temperature, or
ux(0, t) = 0 , ux(ℓ, t) = 0 , (3.56)
which states that there is no heat flow at the ends of the bar, i.e., the ends are insulated.
In order to solve Eq. (3.51) we assume that u(x, t) is a product function, i.e.,

u(x, t) = X(x)T(t). (3.57)

Substituting this into Eq. (3.51) we obtain

k^2 X''T = X\dot{T}, (3.58)

where the prime denotes differentiation with respect to x and the dot denotes differentiation with respect to t. Dividing through by XT, Eq. (3.58) becomes

\frac{X''}{X} = k^{-2}\frac{\dot{T}}{T}. (3.59)

The left side of this equation is a function of x only while the right side is a function of t only. Hence, each side must be equal to a constant, i.e.,

\frac{X''}{X} = k^{-2}\frac{\dot{T}}{T} = \alpha, (3.60)

so that the single partial differential equation is replaced by the two ordinary differential equations

X'' - \alpha X = 0, (3.61)

\dot{T} - \alpha k^2 T = 0. (3.62)
The product of two solutions of Eq. (3.61), (3.62), respectively, for any value of α is a
solution of Eq. (3.51). However, we require solutions that satisfy the boundary conditions
(3.55) or (3.56) and this severely restricts the possible values of α.
First we consider the case of the boundary conditions (3.55). The first of these gives
u(0, t) = X(0)T (t) = 0 and, since T (t) = 0 would imply that u(x, t) = 0 for all x, the only
possibility is X(0) = 0. Similarly, the second condition of (3.55) gives u(ℓ, t) = X(ℓ)T (t) = 0
which implies that X(ℓ) = 0. Hence, the boundary conditions (3.55) imply that
X(0) = X(ℓ) = 0. (3.63)
Now, in solving Eq. (3.61) there are three cases to consider, namely \alpha = 0, \alpha > 0 (i.e., \alpha = \lambda^2), and \alpha < 0 (i.e., \alpha = -\lambda^2).
Case 1. \alpha = 0. Eq. (3.61) becomes X'' = 0, i.e.,

X = ax + b,

where a, b are constants. Substituting x = 0 and x = \ell, the boundary conditions (3.63) lead to a = b = 0, i.e., X = 0, so that u(x, t) = 0. This is not an acceptable solution so we discard it.

Case 2. \alpha = \lambda^2. Eq. (3.61) becomes X'' - \lambda^2 X = 0, i.e.,

X = K_1 e^{\lambda x} + K_2 e^{-\lambda x}.

The boundary conditions (3.63) give

K_1 + K_2 = 0, \quad K_1 e^{\lambda\ell} + K_2 e^{-\lambda\ell} = 0,

and the solution of this system is K_1 = K_2 = 0, i.e., X = 0, so that u(x, t) = 0. Again, we discard this solution.
Case 3. \alpha = -\lambda^2. Eq. (3.61) becomes X'' + \lambda^2 X = 0, i.e.,

X = K_1\cos\lambda x + K_2\sin\lambda x.

The boundary conditions (3.63) give

K_1 = 0, \quad K_2\sin\lambda\ell = 0.

We discard the possibility K_2 = 0 since this implies that X = 0, so the solution is given by \sin\lambda\ell = 0, i.e.,

\lambda\ell = n\pi, (3.64)
where n is a non-zero integer. Thus there are an infinite number of solutions. The constant \alpha in Eq. (3.61) is given by

\alpha = -\lambda^2 = -\frac{n^2\pi^2}{\ell^2} (3.65)

and X is proportional to \sin\frac{n\pi x}{\ell}.
From Eq. (3.65), the differential equation (3.62) for T becomes

\dot{T} = -\frac{n^2\pi^2}{\ell^2}k^2 T, (3.66)

so that T is proportional to e^{-n^2\pi^2 k^2 t/\ell^2}. Thus, neglecting constant multipliers, the functions

u_n(x, t) = e^{-n^2\pi^2 k^2 t/\ell^2}\sin\frac{n\pi x}{\ell} \quad (n = 1, 2, \dots) (3.67)

are each solutions of Eq. (3.51) and satisfy the boundary conditions (3.55). Note that we consider only positive values of n because negative values of n give the same solutions. Since the differential equation (3.51) and the boundary conditions (3.55) are linear and homogeneous, it follows that any linear combination of the u_n(x, t) also satisfies the differential equation and boundary conditions. Consequently, the solution of the differential equation can be written as

u(x, t) = \sum_{n=1}^{\infty} c_n u_n(x, t) = \sum_{n=1}^{\infty} c_n e^{-n^2\pi^2 k^2 t/\ell^2}\sin\frac{n\pi x}{\ell}. (3.68)

To complete the solution we have to satisfy the initial condition (3.54). Putting t = 0 in Eq. (3.68) we obtain

u(x, 0) = f(x) = \sum_{n=1}^{\infty} c_n\sin\frac{n\pi x}{\ell},

so that the c_n are the Fourier coefficients for the Fourier sine series corresponding to f(x), i.e.,

c_n = \frac{2}{\ell}\int_0^{\ell} f(x)\sin\frac{n\pi x}{\ell}\,dx. (3.69)

This completes the solution for u(x, t), which is Eq. (3.68) with c_n given by Eq. (3.69).
Example 1. Find the solution to the heat conduction problem

k^2 u_{xx} = u_t, \quad 0 < x < \ell, \; t > 0,

u(0, t) = 0, \; u(\ell, t) = 0, \quad t > 0,

u(x, 0) = x(\ell - x).

The solution is of the form (3.68) with c_n given by Eq. (3.69) with f(x) = x(\ell - x), i.e.,

c_n = \frac{2}{\ell}\int_0^{\ell} x(\ell - x)\sin\frac{n\pi x}{\ell}\,dx

= \frac{2}{\ell}\left\{\left[x(\ell - x)\left(-\frac{\ell}{n\pi}\cos\frac{n\pi x}{\ell}\right)\right]_0^{\ell} + \frac{\ell}{n\pi}\int_0^{\ell}(\ell - 2x)\cos\frac{n\pi x}{\ell}\,dx\right\}

= \frac{2}{\ell}[0] + \frac{2}{n\pi}\left\{\left[(\ell - 2x)\frac{\ell}{n\pi}\sin\frac{n\pi x}{\ell}\right]_0^{\ell} + \frac{2\ell}{n\pi}\int_0^{\ell}\sin\frac{n\pi x}{\ell}\,dx\right\}

= \frac{2}{n\pi}[0] + \frac{4\ell}{n^2\pi^2}\left[-\frac{\ell}{n\pi}\cos\frac{n\pi x}{\ell}\right]_0^{\ell}

= \frac{4\ell^2}{n^3\pi^3}(\cos 0 - \cos n\pi) = \frac{4\ell^2}{n^3\pi^3}[1 - (-1)^n]

= \begin{cases} 0 & n \text{ even} \\ \dfrac{8\ell^2}{n^3\pi^3} & n \text{ odd.} \end{cases}

Put n = 2m + 1 (m = 0, 1, 2, \dots), then

c_{2m+1} = \frac{8\ell^2}{(2m+1)^3\pi^3}

and the final solution is

u(x, t) = \sum_{m=0}^{\infty}\frac{8\ell^2}{(2m+1)^3\pi^3}e^{-(2m+1)^2\pi^2 k^2 t/\ell^2}\sin\frac{(2m+1)\pi x}{\ell}

= \frac{8\ell^2}{\pi^3}\left[e^{-\pi^2 k^2 t/\ell^2}\sin\frac{\pi x}{\ell} + \frac{1}{27}e^{-9\pi^2 k^2 t/\ell^2}\sin\frac{3\pi x}{\ell} + \frac{1}{125}e^{-25\pi^2 k^2 t/\ell^2}\sin\frac{5\pi x}{\ell} + \dots\right].
Example 2. Find the solution to the heat conduction problem

100u_{xx} = u_t, \quad 0 < x < 1, \; t > 0,

u(0, t) = 0, \; u(1, t) = 0, \quad t > 0,

u(x, 0) = \sin 2\pi x - 2\sin 5\pi x, \quad 0 \le x \le 1.

In this case k^2 = 100, \; \ell = 1, \; f(x) = \sin 2\pi x - 2\sin 5\pi x.

The solution is of the form

u(x, t) = \sum_{n=1}^{\infty} c_n e^{-100n^2\pi^2 t}\sin n\pi x.

When t = 0, \; u(x, 0) = \sin 2\pi x - 2\sin 5\pi x = \sum_{n=1}^{\infty} c_n\sin n\pi x.

Hence c_2 = 1, \; c_5 = -2, \; c_n = 0 \; (n \ne 2, 5) and the solution is

u(x, t) = e^{-400\pi^2 t}\sin 2\pi x - 2e^{-2500\pi^2 t}\sin 5\pi x.
Now consider the case of the boundary conditions (3.56). The first of these gives
ux(0, t) = X ′(0)T (t) = 0 which, to avoid u(x, t) = 0 for all x, requires that X ′(0) = 0.
Similarly, the second condition implies that X ′(ℓ) = 0. Hence, the boundary conditions
(3.56) imply that
X ′(0) = X ′(ℓ) = 0. (3.70)
As in the previous case we consider the three cases in which α = 0 , α = λ2 ,
α = −λ2 , respectively, in Eq. (3.61).
Case 1. \alpha = 0.

We again obtain X = ax + b and each of the conditions (3.70) implies that a = 0, so that X is a constant, i.e.,

X = \frac{1}{2}c_0. (3.71)

Case 2. \alpha = \lambda^2.

We again obtain X = K_1 e^{\lambda x} + K_2 e^{-\lambda x}, so that X' = K_1\lambda e^{\lambda x} - K_2\lambda e^{-\lambda x} and the boundary conditions (3.70) lead to

K_1\lambda - K_2\lambda = 0, \quad K_1\lambda e^{\lambda\ell} - K_2\lambda e^{-\lambda\ell} = 0

and the solution of this system is K_1 = K_2 = 0, i.e., X = 0, so that u(x, t) = 0 and we discard this solution.

Case 3. \alpha = -\lambda^2.

As before we obtain X = K_1\cos\lambda x + K_2\sin\lambda x. Then X' = -K_1\lambda\sin\lambda x + K_2\lambda\cos\lambda x and the boundary conditions (3.70) give

K_2\lambda = 0, \quad -K_1\lambda\sin\lambda\ell = 0.

Hence K_2 = 0 and, since K_1 cannot also be zero, we must have \sin\lambda\ell = 0, i.e.,

\lambda\ell = n\pi, (3.72)

where n is a non-zero integer. Thus there is an infinite number of solutions of Eq. (3.51) satisfying the boundary conditions (3.56) of the form

u_n(x, t) = e^{-n^2\pi^2 k^2 t/\ell^2}\cos\frac{n\pi x}{\ell} \quad (n = 1, 2, \dots) (3.73)
together with the solution of Case 1. Thus the solution of the differential equation can be written as

u(x, t) = \frac{1}{2}c_0 + \sum_{n=1}^{\infty} c_n e^{-n^2\pi^2 k^2 t/\ell^2}\cos\frac{n\pi x}{\ell}. (3.74)

We now have to satisfy the initial condition (3.54). Putting t = 0 in Eq. (3.74) we obtain

u(x, 0) = f(x) = \frac{1}{2}c_0 + \sum_{n=1}^{\infty} c_n\cos\frac{n\pi x}{\ell}, (3.75)

so that c_0 and c_n are the Fourier coefficients for the Fourier cosine series corresponding to f(x), i.e.,

c_0 = \frac{2}{\ell}\int_0^{\ell} f(x)\,dx, (3.76)

c_n = \frac{2}{\ell}\int_0^{\ell} f(x)\cos\frac{n\pi x}{\ell}\,dx. (3.77)

Thus, in the case of the boundary conditions (3.56), the complete solution is given by Eq. (3.74) with c_0, c_n given by Eqs. (3.76), (3.77).
Example 3. Consider a uniform rod of length \ell with an initial temperature given by \sin\frac{\pi x}{\ell}, \; 0 \le x \le \ell. Assume that both ends of the bar are insulated. Find a formal series expansion for the temperature u(x, t). What is the steady-state temperature as t \to \infty?

The solution is of the form (3.74) with c_0 and c_n given by Eqs. (3.76) and (3.77) with f(x) = \sin\frac{\pi x}{\ell}, i.e.,

c_0 = \frac{2}{\ell}\int_0^{\ell}\sin\frac{\pi x}{\ell}\,dx = \frac{2}{\ell}\left[-\frac{\ell}{\pi}\cos\frac{\pi x}{\ell}\right]_0^{\ell} = -\frac{2}{\pi}(\cos\pi - \cos 0) = \frac{4}{\pi}.
c_n = \frac{2}{\ell}\int_0^{\ell}\sin\frac{\pi x}{\ell}\cos\frac{n\pi x}{\ell}\,dx

= \frac{2}{\ell}\int_0^{\ell}\frac{1}{2}\left[\sin\frac{(n+1)\pi x}{\ell} - \sin\frac{(n-1)\pi x}{\ell}\right]dx

= \frac{1}{\ell}\left[-\frac{\ell}{(n+1)\pi}\cos\frac{(n+1)\pi x}{\ell} + \frac{\ell}{(n-1)\pi}\cos\frac{(n-1)\pi x}{\ell}\right]_0^{\ell} \quad (n \ne 1)

= -\frac{1}{(n+1)\pi}\cos(n+1)\pi + \frac{1}{(n-1)\pi}\cos(n-1)\pi + \frac{1}{(n+1)\pi}\cos 0 - \frac{1}{(n-1)\pi}\cos 0

= \begin{cases} 0 & n \text{ odd}, \; n \ne 1 \\ \dfrac{-4}{(n^2 - 1)\pi} & n \text{ even.} \end{cases} (3.78)

When n = 1,

c_1 = \frac{2}{\ell}\int_0^{\ell}\sin\frac{\pi x}{\ell}\cos\frac{\pi x}{\ell}\,dx = \frac{1}{\ell}\int_0^{\ell}\sin\frac{2\pi x}{\ell}\,dx = \frac{1}{\ell}\left[-\frac{\ell}{2\pi}\cos\frac{2\pi x}{\ell}\right]_0^{\ell} = -\frac{1}{2\pi}(\cos 2\pi - \cos 0) = 0.
Hence the solution is

u(x, t) = \frac{2}{\pi} + \sum_{n=1}^{\infty} c_n e^{-n^2\pi^2 k^2 t/\ell^2}\cos\frac{n\pi x}{\ell},

where the c_n are given by Eq. (3.78).

The steady-state temperature is the limit as t \to \infty in the expression for u(x, t), i.e.,

u_{ss} = \frac{2}{\pi}.
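A numerical evaluation illustrates both the initial condition and the approach to the steady state. The sketch below is ours (truncation level and the choice ℓ = k² = 1 are assumptions, not from the notes):

```python
import math

def u(x, t, ell=1.0, k2=1.0, M=2000):
    """Insulated-ends solution: u = c0/2 + sum of c_n e^{...} cos(n pi x / ell)."""
    s = 2.0 / math.pi                 # c0/2 with c0 = 4/pi
    for n in range(2, M, 2):          # c_n = 0 for odd n (including n = 1)
        c_n = -4.0 / ((n * n - 1) * math.pi)
        s += (c_n * math.exp(-n ** 2 * math.pi ** 2 * k2 * t / ell ** 2)
              * math.cos(n * math.pi * x / ell))
    return s

print(u(0.25, 0.0), math.sin(math.pi * 0.25))  # initial profile sin(pi x)
print(u(0.5, 1.0), 2 / math.pi)                # long times: steady state 2/pi
```

At t = 1 the slowest surviving mode carries a factor e^{-4\pi^2}, so the value is already indistinguishable from 2/π.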
Now consider the one-dimensional wave equation (3.52) as applied to the vibrations of an elastic string tightly stretched between two points at the same horizontal level, distance \ell apart, so that the x-axis lies along the string. These vibrations are described by Eq. (3.52) provided that damping effects, such as air resistance, are neglected and that the amplitude of the motion is not too large. The constant c^2 appearing in Eq. (3.52) is given by

c^2 = T/\rho, (3.79)

where T is the tension in the string and \rho is the mass per unit length of the string. As we remarked earlier, c is called the wave velocity for the string, i.e., the velocity at which waves are propagated along the string.
Since Eq. (3.52) is of second order with respect to x and also with respect to t, it follows
that we require two boundary conditions and two initial conditions. Assuming that the
ends of the string are fixed, the boundary conditions are
u(0, t) = 0 , u(ℓ, t) = 0 , (3.80)
and the initial conditions are
u(x, 0) = f(x) , 0 ≤ x ≤ ℓ, (3.81)
which describes the initial position of the string, and
ut(x, 0) = g(x) , 0 ≤ x ≤ ℓ, (3.82)
which describes the initial velocity, where f(x) and g(x) are given functions which, for the
consistency of Eqs. (3.80) to (3.82) must satisfy
f(0) = f(ℓ) = 0 , g(0) = g(ℓ) = 0. (3.83)
The string may be set in motion by plucking, i.e., by pulling the string aside and letting it go from rest. In this case f(x) \ne 0, but g(x) = 0. Alternatively, the string may be struck while in an initial horizontal position, in which case f(x) = 0 but g(x) \ne 0. Of course, the actual initial conditions may be some combination of these two possibilities.
To solve Eq. (3.52) we assume a separable solution of the form

u(x, t) = X(x)T(t) (3.84)

and substitution of this into Eq. (3.52) yields

\frac{X''}{X} = \frac{1}{c^2}\frac{\ddot{T}}{T} = \alpha, (3.85)

where \alpha is a constant. As in the case of the heat-conduction equation with boundary conditions (3.55), a non-trivial solution is obtained only if \alpha < 0, i.e., \alpha = -\lambda^2, in which case Eq. (3.85) leads to the two ordinary differential equations

X'' + \lambda^2 X = 0, (3.86)

\ddot{T} + c^2\lambda^2 T = 0, (3.87)

for which the solutions are

X = K_1\cos\lambda x + K_2\sin\lambda x, (3.88)

T = K_3\cos c\lambda t + K_4\sin c\lambda t, (3.89)

where K_1, \dots, K_4 are arbitrary constants.

The boundary conditions (3.80) become

X(0) = 0, \quad X(\ell) = 0, (3.90)

which lead to K_1 = 0, \; K_2\sin\lambda\ell = 0. In order to avoid the trivial solution u(x, t) = 0 everywhere we require \sin\lambda\ell = 0, i.e.,

\lambda = \frac{n\pi}{\ell}, (3.91)
so that the solution for X(x) is

X = K_2\sin\frac{n\pi x}{\ell} \quad (n = 1, 2, 3, \dots). (3.92)

From Eqs. (3.89), (3.91) and (3.92) we see that the solutions for u satisfying the boundary conditions are

u_n = \left(A_n\cos\frac{n\pi ct}{\ell} + B_n\sin\frac{n\pi ct}{\ell}\right)\sin\frac{n\pi x}{\ell} (3.93)

and the general solution is the sum of all such solutions, i.e.,

u(x, t) = \sum_{n=1}^{\infty}\left(A_n\cos\frac{n\pi ct}{\ell} + B_n\sin\frac{n\pi ct}{\ell}\right)\sin\frac{n\pi x}{\ell}. (3.94)

Putting t = 0 in Eq. (3.94) we have

u(x, 0) = f(x) = \sum_{n=1}^{\infty} A_n\sin\frac{n\pi x}{\ell}, (3.95)

so that the A_n are the Fourier coefficients for the half-range sine expansion of f(x), i.e.,

A_n = \frac{2}{\ell}\int_0^{\ell} f(x)\sin\frac{n\pi x}{\ell}\,dx. (3.96)

To determine B_n, we differentiate Eq. (3.94) with respect to t to obtain

u_t(x, t) = \sum_{n=1}^{\infty}\left(-A_n\frac{n\pi c}{\ell}\sin\frac{n\pi ct}{\ell} + B_n\frac{n\pi c}{\ell}\cos\frac{n\pi ct}{\ell}\right)\sin\frac{n\pi x}{\ell},

so that

u_t(x, 0) = g(x) = \sum_{n=1}^{\infty} B_n\frac{n\pi c}{\ell}\sin\frac{n\pi x}{\ell}, (3.97)

and hence the coefficients B_n\frac{n\pi c}{\ell} are the Fourier coefficients for the half-range sine expansion of g(x), i.e.,

B_n\frac{n\pi c}{\ell} = \frac{2}{\ell}\int_0^{\ell} g(x)\sin\frac{n\pi x}{\ell}\,dx, \quad \text{i.e.,} \quad B_n = \frac{2}{n\pi c}\int_0^{\ell} g(x)\sin\frac{n\pi x}{\ell}\,dx. (3.98)
Example 4. Find the solution of the wave equation for a vibrating string of length \ell subject to the boundary conditions

u(0, t) = 0, \quad u(\ell, t) = 0,

and the initial conditions

u(x, 0) = 0, \quad u_t(x, 0) = x(\ell - x).

In this problem f(x) = 0, \; g(x) = x(\ell - x), so that A_n = 0 and

B_n = \frac{2}{n\pi c}\int_0^{\ell} x(\ell - x)\sin\frac{n\pi x}{\ell}\,dx

= \frac{2}{n\pi c}\left\{\left[-\frac{\ell}{n\pi}x(\ell - x)\cos\frac{n\pi x}{\ell}\right]_0^{\ell} + \frac{\ell}{n\pi}\int_0^{\ell}(\ell - 2x)\cos\frac{n\pi x}{\ell}\,dx\right\}

= \frac{2\ell}{n^2\pi^2 c}\left\{\left[(\ell - 2x)\frac{\ell}{n\pi}\sin\frac{n\pi x}{\ell}\right]_0^{\ell} + \frac{2\ell}{n\pi}\int_0^{\ell}\sin\frac{n\pi x}{\ell}\,dx\right\}

= \frac{4\ell^2}{n^3\pi^3 c}\left[-\frac{\ell}{n\pi}\cos\frac{n\pi x}{\ell}\right]_0^{\ell} = \frac{4\ell^3}{n^4\pi^4 c}[1 - (-1)^n].

Hence the solution for u(x, t) is

u(x, t) = \frac{4\ell^3}{\pi^4 c}\sum_{n=1}^{\infty}\frac{1}{n^4}[1 - (-1)^n]\sin\frac{n\pi ct}{\ell}\sin\frac{n\pi x}{\ell}.
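Differentiating the series term by term at t = 0 should recover the initial velocity g(x) = x(ℓ − x). The sketch below checks this numerically for ℓ = c = 1 (truncation level and parameter values are our assumptions):

```python
import math

def u_t_at_0(x, ell=1.0, c=1.0, M=4000):
    """u_t(x, 0) = sum of B_n (n pi c / ell) sin(n pi x / ell) from Example 4."""
    s = 0.0
    for n in range(1, M, 2):  # even n drop out since 1 - (-1)^n = 0
        B_n = 4 * ell ** 3 / (n ** 4 * math.pi ** 4 * c) * 2  # [1 - (-1)^n] = 2 for odd n
        s += B_n * (n * math.pi * c / ell) * math.sin(n * math.pi * x / ell)
    return s

print(u_t_at_0(0.3))  # close to g(0.3) = 0.3 * 0.7 = 0.21
```

Note u(x, 0) = 0 holds term by term, since each term carries the factor sin(nπct/ℓ), which vanishes at t = 0.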
When the tension T in the string is large enough, the vibrating string will produce a musical sound. This sound is the result of standing waves. The solution (3.94) is a superposition of product solutions called standing waves or normal modes,

u(x, t) = u_1(x, t) + u_2(x, t) + u_3(x, t) + \dots.

The product solutions (3.93) can be written as

u_n(x, t) = C_n\sin\left(\frac{n\pi ct}{\ell} + \alpha_n\right)\sin\frac{n\pi x}{\ell}, (3.99)

where C_n = \sqrt{A_n^2 + B_n^2} and \sin\alpha_n = \frac{A_n}{C_n}, \; \cos\alpha_n = \frac{B_n}{C_n}. For n = 1, 2, 3, \dots the standing waves are essentially the graphs of \sin\frac{n\pi x}{\ell} with a time-varying amplitude given by C_n\sin\left(\frac{n\pi ct}{\ell} + \alpha_n\right). Alternatively, we see from Eq. (3.99) that, at a fixed value of x, each product function u_n(x, t) represents simple harmonic motion with amplitude C_n\left|\sin\frac{n\pi x}{\ell}\right| and frequency f_n = \frac{nc}{2\ell}, i.e., each point on a standing wave vibrates with a different amplitude but with the same frequency. When n = 1,

u_1(x, t) = C_1\sin\left(\frac{\pi ct}{\ell} + \alpha_1\right)\sin\frac{\pi x}{\ell}

is called the first standing wave, the first normal mode, or the fundamental mode of vibration.

[Figures: the first standing wave on (0, ℓ), and the second standing wave, which has a node at x = ℓ/2.]

The first three standing waves, or normal modes, are shown in the figures. The dashed line graphs represent the standing waves at various values of time. The points in the interval (0, ℓ) for which \sin\frac{n\pi x}{\ell} = 0 correspond to points on a standing wave where there is no motion; these points are called nodes. In general, the nth normal mode of vibration has n − 1 nodes.

[Figure: the third standing wave, with nodes at x = ℓ/3 and x = 2ℓ/3.]

The frequency f_1 = \frac{c}{2\ell} = \frac{1}{2\ell}\sqrt{\frac{T}{\rho}} of the first normal mode is called the fundamental frequency, or first harmonic, and is directly related to the pitch produced by a stringed instrument. The frequencies f_n of the other normal modes, which are integer multiples of the fundamental frequency, are called overtones. The second harmonic is the first overtone, and so on.
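The formula f_1 = \frac{1}{2\ell}\sqrt{T/\rho} is easy to evaluate; the sketch below uses hypothetical string parameters (the length, tension, and density values are illustrative, not from the notes):

```python
import math

def fundamental_frequency(length, tension, density):
    """f1 = (1 / 2l) * sqrt(T / rho) for a stretched string (SI units)."""
    return math.sqrt(tension / density) / (2.0 * length)

# hypothetical guitar-like string: 0.65 m long, 60 N tension, 1 g/m
f1 = fundamental_frequency(0.65, 60.0, 0.001)
print(f1, 2 * f1)  # fundamental and its first overtone (second harmonic)
```

Doubling the tension raises the pitch by a factor of √2, while doubling the length halves it, consistent with the formula.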
Problem Set 3.7
1. Consider the conduction of heat in a copper rod 100 cm in length whose ends are maintained at 0°C for all t > 0. Find an expression for the temperature u(x, t) if the initial temperature distribution in the rod is given by

(a) u(x, 0) = \begin{cases} x & 0 \le x < 50 \\ 100 - x & 50 \le x \le 100 \end{cases}

(b) u(x, 0) = \begin{cases} 0 & 0 \le x < 25 \\ 50 & 25 \le x < 75 \\ 0 & 75 \le x \le 100 \end{cases}

2. Consider a uniform rod of length \ell with an initial temperature given by \sin\frac{\pi x}{\ell}, \; 0 \le x \le \ell. Assume that both ends of the bar are insulated. Find a Fourier series expansion for the temperature u(x, t). What is the steady-state temperature as t \to \infty?

3. Find the displacement u(x, t) in an elastic string that is fixed at its ends, x = 0 and x = \ell, and is set in motion by plucking it at its centre. The initial displacement f(x) is defined by

f(x) = \begin{cases} Ax & 0 \le x \le \frac{1}{2}\ell \\ A(\ell - x) & \frac{1}{2}\ell < x \le \ell \end{cases}

where A is a constant.

4. Find the displacement u(x, t) in an elastic string of length \ell, fixed at both ends, that is set in motion from its straight equilibrium position with the initial velocity g(x) defined by

g(x) = \begin{cases} x & 0 \le x \le \frac{1}{4}\ell \\ \frac{1}{4}\ell & \frac{1}{4}\ell < x < \frac{3}{4}\ell \\ \ell - x & \frac{3}{4}\ell \le x \le \ell \end{cases}
Appendix A
ANSWERS TO
ODD-NUMBERED PROBLEMS
AND TABLES
A.1 Answers for Chapter 1.
Problem Set 1.3 (p. 9)
1. \frac{1}{s}\left(2e^{-s} - 1\right) \qquad 3. \frac{1}{s^2}\left(1 - e^{-s}\right) \qquad 5. \frac{-(e^{-s\pi} + 1)}{s^2 + 1}

7. \frac{e^7}{s - 1} \qquad 9. \frac{1}{(s - 4)^2} \qquad 11. \frac{1}{(s + 1)^2 + 1}

13. \frac{s^2 - 1}{(s^2 + 1)^2} \qquad 15. \frac{48}{s^5} \qquad 17. \frac{4}{s^2} - \frac{10}{s}

19. \frac{2}{s^3} + \frac{6}{s^2} - \frac{3}{s} \qquad 21. \frac{6}{s^4} + \frac{6}{s^3} + \frac{3}{s^2} + \frac{1}{s} \qquad 23. \frac{1}{s} + \frac{1}{s - 4}

25. \frac{1}{s} + \frac{2}{s - 2} + \frac{1}{s - 4} \qquad 27. \frac{8}{s^3} - \frac{15}{s^2 + 9} \qquad 29. \frac{2s}{(s^2 - 1)^2}

31. \frac{s + 1}{s(s + 2)} \qquad 33. \frac{2}{s(s^2 + 4)} \qquad 35. \frac{1}{2}\left[-\frac{s}{s^2 + 9} + \frac{s}{s^2 + 1}\right]

37. \frac{6}{(s^2 + 1)(s^2 + 9)} \qquad 39. \frac{\Gamma(\frac{5}{4})}{s^{5/4}}
Problem Set 1.4. (p. 13)
1. 3 - 2e^{-4t} \qquad 3. 2e^{-4t} + e^{2t} \qquad 5. -\frac{5}{8} - \frac{3}{4}t - \frac{1}{4}t^2 + \frac{2}{3}e^t - \frac{1}{24}e^{-2t}
Problem Set 1.5 (p. 15)
1. 2(t - 2)e^{2t} \qquad 3. te^t - \frac{1}{2}t^2 e^{2t}

5. -t + 5 - \frac{3}{2}e^{-t}t^2 - 4e^{-t}t - 5e^{-t} \qquad 7. \frac{1}{(s - 8)^2}

9. \frac{s + 2}{s^2 + 4s + 20} \qquad 11. \frac{2}{s^2 + 2s + 5}

13. \frac{2}{(s - 3)^3} + \frac{4}{(s - 3)^2} + \frac{4}{s - 3}
Problem Set 1.6 (p. 24)
1. (1/s^2)e^{−s}    3. (3/s^2 + 10/s)e^{−3s}

5. [1/(s − 1)^2 + 5/(s − 1)]e^{−5s}    7. (1/2)(t − 2)^2 u_2(t)

9. −sin t · u_π(t)    11. [1 − e^{−(t−1)}]u_1(t)

13. t − (t − 1)u_1(t)    15. (2/s)(1 − 2e^{−3s}),  f(t) = 2u_0 − 4u_3

17. (2/s^3 + 2/s^2 + 1/s)e^{−s},  f(t) = t^2 u_1    19. 1/s^2 − (1/s^2)e^{−2s} − (2/s)e^{−2s},  f(t) = t(u_0 − u_2)
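Answers of the form g(t − c)u_c(t) come from the Second Shifting Theorem, L^{−1}{e^{−cs}G(s)} = g(t − c)u_c(t). A short sympy check of answer 7, with the step function u_2(t) written as sympy's Heaviside(t − 2):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Answer 7: the inverse transform of e^(-2s)/s^3 is (1/2)(t - 2)^2 u_2(t).
f = sp.inverse_laplace_transform(sp.exp(-2*s)/s**3, s, t)
expected = sp.Rational(1, 2)*(t - 2)**2*sp.Heaviside(t - 2)
assert sp.simplify(f - expected) == 0
```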
Problem Set 1.7 (p. 28)
1. (s^2 − 4)/(s^2 + 4)^2    3. 2(3s^2 + 1)/(s^2 − 1)^3    5. 12(s − 1)/(s^2 − 4s + 40)^2

7. (1/2)t sin t    9. (1/t)(e^{−t} − e^{3t})    11. 1 + (1/t)sin 4t
Problem Set 1.8 (p. 30)
1. (s^2 + 2a^2)/(s(s^2 + 4a^2))
Problem Set 1.9 (p. 35)
1. e^t − 1    3. (1/3)(4e^{−t} − e^{−4t})

5. (1/27)(2 + 3t − 2e^{3t} + 30te^{3t})    7. (1/20)t^5 e^{2t}

9. cos t − (1/2)sin t − (1/2)t cos t    11. (1/2)(1 + e^t sin t − e^t cos t)
Problem Set 1.10 (p. 39)
1. (1/4)[1 + 2 sin 2t − cos 2t − u_1(t) + cos 2(t − 1) · u_1(t)]

3. (1/6)[1 − 3e^{2(t−1)} + 2e^{3(t−1)}]u_1(t) + e^{3t} − e^{2t}

5. cos t + (1 + cos t)u_π(t)
Problem Set 1.11 (p. 44)
1. (1/s)tanh(as/2)    3. [1/(s^2 + 1)] · [1/(1 − e^{−πs})]

5. (E_0/R)[(e^{Ra/L} − 1)u_a(t) − (e^{2Ra/L} − 1)u_{2a}(t) + (e^{3Ra/L} − 1)u_{3a}(t) − …]
Problem Set 1.12 (p. 48)
1. sin t · (1 + u_{2π}(t))

3. u_{π/2}(t) − u_{3π/2}(t) − sin t · [u_{π/2}(t) + u_π(t) + u_{3π/2}(t)]

5. (1/√2)sinh √2 t + (1/2)cosh √2 t − 1/2 + (1/√2)sinh √2(t − 2) · u_2(t)
Problem Set 1.13 (p. 51)
1. 1/(s^2(s − 1))    3. 1/(s(s^2 + 1))    5. 12/s^7

7. ∫_0^t f(t − τ) cos 2τ dτ    9. (1/4)t sin 2t

11. (1/2)e^{−2t}(sin t − t cos t)    13. (1/2)(t cos t + sin t)  [< 0 when t = π]
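Answer 1 illustrates the convolution theorem, L{f ∗ g} = F(s)G(s): 1/(s^2(s − 1)) = (1/s^2)(1/(s − 1)) is the transform of the convolution t ∗ e^t (an inferred pairing; the problem statement is not reproduced here). A sympy sketch under that assumption:

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)

# The convolution (t * e^t)(t) = integral_0^t (t - tau) e^tau d(tau) = e^t - t - 1.
conv = sp.integrate((t - tau)*sp.exp(tau), (tau, 0, t))

# Its transform should factor as L{t} L{e^t} = 1/(s^2 (s - 1)), matching answer 1.
F = sp.laplace_transform(conv, t, s, noconds=True)
assert sp.simplify(F - 1/(s**2*(s - 1))) == 0
```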
A.2 Answers for Chapter 2.
Problem Set 2.2 (p. 63)
7. Fundamental set.

11. Φ(t) = [−e^t, −te^t; 3e^t, (3t + 1)e^t],
    Φ^{−1}(t) = [−(3t + 1)e^{−t}, −te^{−t}; 3e^{−t}, e^{−t}]
    (matrices are written row by row, rows separated by semicolons)

13. Ψ(t) = [(1/2)e^{−t} + (1/2)e^{5t}, −(1/2)e^{−t} + (1/2)e^{5t};
            −(1/2)e^{−t} + (1/2)e^{5t}, (1/2)e^{−t} + (1/2)e^{5t}]
15. Ψ(t) = [sin t − 3 cos t, 2 cos t; −5 cos t, sin t + 3 cos t]
Problem Set 2.3 (p. 70)
1. λ_1 = 6, λ_2 = 1;  x^{(1)} = (2, 7)^T,  x^{(2)} = (1, 1)^T

3. λ_1 = λ_2 = −4;  x^{(1)} = (1, −4)^T
5. λ_1 = 0, λ_2 = 4, λ_3 = −4;  x^{(1)} = (9/4, 5/2, 5)^T,  x^{(2)} = (1, 1, 1)^T,  x^{(3)} = (1, 9, 1)^T
7. λ_1 = λ_2 = λ_3 = −2;  x^{(1)} = (2, −1, 0)^T,  x^{(2)} = (0, 0, 1)^T

9. λ_1 = 3i, λ_2 = −3i;  x^{(1)} = (1 − 3i, 5)^T,  x^{(2)} = (1 + 3i, 5)^T
Problem Set 2.5 (p. 75)
1. x = c_1 (1, 2)^T e^{5t} + c_2 (1, −1)^T e^{−t}

3. x = c_1 (2, 1)^T e^{−3t} + c_2 (2, 5)^T e^t

5. x = c_1 (5, 2)^T e^{8t} + c_2 (1, 4)^T e^{−10t}
7. x = c_1 (1, 0, 0)^T e^t + c_2 (2, 3, 1)^T e^{2t} + c_3 (1, 0, 2)^T e^{−t}

9. x = c_1 (1, 1, 1)^T e^t + c_2 (1, 1, 0)^T e^{2t} + c_3 (1, 0, 1)^T e^{2t}

11. x = c_1 (cos t, 2 cos t + sin t)^T e^{4t} + c_2 (sin t, 2 sin t − cos t)^T e^{4t}

13. x = c_1 (cos t, −cos t − sin t)^T e^{4t} + c_2 (sin t, −sin t + cos t)^T e^{4t}

15. x = c_1 (5 cos 3t, 4 cos 3t + 3 sin 3t)^T + c_2 (5 sin 3t, 4 sin 3t − 3 cos 3t)^T

17. x = c_1 (1, 0, 0)^T + c_2 (−cos t, cos t, sin t)^T + c_3 (sin t, −sin t, cos t)^T

19. x = c_1 (0, 1, 0)^T e^{6t} + c_2 (cos 2t, 0, −2 sin 2t)^T e^{4t} + c_3 (sin 2t, 0, 2 cos 2t)^T e^{4t}
21. x = 3(1, 1)^T e^{t/2} + 2(0, 1)^T e^{−t/2}
Problem Set 2.6 (p. 83)
1. x = c_1 (1, 3)^T + c_2 [(1, 3)^T t + (1/4, −1/4)^T]

3. x = c_1 (1, 1)^T e^{2t} + c_2 [(1, 1)^T t + (−1/3, 0)^T] e^{2t}

5. x = c_1 (1, 0, 0)^T e^t + c_2 (0, 1, −1)^T e^{2t} + c_3 [(0, 1, −1)^T t + (0, 1, 0)^T] e^{2t}

7. x = c_1 (1, 0, 0)^T e^{4t} + c_2 [(1, 0, 0)^T t + (0, 1, 0)^T] e^{4t}
       + c_3 [(1, 0, 0)^T t^2/2 + (0, 1, 0)^T t + (0, 0, 1)^T] e^{4t}
Problem Set 2.7 (p. 87)
1. x = c_1 (1, 1)^T + c_2 (3, 2)^T e^t + (−5t − 3, −5t)^T

3. x = c_1 (cos t, sin t)^T e^t + c_2 (sin t, −cos t)^T e^t + (cos t, sin t)^T t e^t

5. x = c_1 (1, 2)^T + c_2 [(1, 2)^T t − (1/2)(0, 1)^T] − 2(1, 2)^T ln t + (2, 5)^T t^{−1} − (1/2, 0)^T t^{−2}

7. x = c_1 (1, 2)^T e^{3t} + c_2 (1, −2)^T e^{−t} + (1/4)(1, −8)^T e^t

9. x = c_1 (1, 1)^T e^{−t/2} + c_2 (1, −1)^T e^{−2t} + (5/2, 3/2)^T t − (17/4, 15/4)^T + (1/6, 1/2)^T e^t
Problem Set 2.8 (p. 90)
1. x = −(2/3)sin 3t,  y = −(1/3)sin 3t + cos 3t

3. x = −(1/2)t − (3/4)√2 sin √2 t,  y = −(1/2)t + (3/4)√2 sin √2 t
A.3 Answers for Chapter 3.
Problem Set 3.2 (p. 97)
1. (i) Not orthogonal

   (ii) α = −1/2,  β = −1,  γ = 1/6

   (iii) φ_1(x) = 1,  φ_2(x) = 2√3 (x − 1/2),  φ_3(x) = 6√5 (x^2 − x + 1/6)

   (iv) F(x) = (5/6)φ_1(x) + (1/(6√5))φ_3(x)

3. Norm of each function is (1/2)√π.
Problem Set 3.3 (p. 107)
1. f(x) = 1/2 + (1/π) Σ_{n=1}^∞ (1/n)[1 − (−1)^n] sin nx

3. f(x) = 3/4 + Σ_{n=1}^∞ { [((−1)^n − 1)/(n^2π^2)] cos nπx − (1/(nπ)) sin nπx }

5. f(x) = (1/6)π^2 + 2 Σ_{n=1}^∞ [(−1)^n/n^2] cos nx + (2/π) Σ_{n=1}^∞ [((−1)^n − 1)/n^3] sin nx
        + π Σ_{n=1}^∞ [(−1)^{n+1}/n] sin nx

7. f(x) = π + 2 Σ_{n=1}^∞ [(−1)^{n+1}/n] sin nx

9. f(x) = 1/π − (1/π) Σ_{n=1}^∞ { [1 − (−1)^{n+1}]/(n^2 − 1) } cos nx + (1/2) sin x

11. f(x) = 3/8 + (2/π^2) Σ_{n=1}^∞ { −(1/n^2)(1 − cos(nπ/2)) cos(nπx/2)
         + [(1/n^2) sin(nπ/2) − (−1)^n π/(2n)] sin(nπx/2) }

13. f(x) = (2 sinh π/π) [ 1/2 + Σ_{n=1}^∞ [(−1)^n/(n^2 + 1)] (cos nx − n sin nx) ]
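These closed-form coefficients can be checked numerically by summing the series. For answer 1, the coefficients (1/πn)[1 − (−1)^n] are those of a square wave equal to 0 on (−π, 0) and 1 on (0, π) (inferred from the series; the problem statement is not reproduced here), so a partial sum should approach 1 at x = π/2:

```python
import math

def partial_sum(x, N):
    """Partial sum of answer 1: 1/2 + (1/pi) sum (1/n)(1 - (-1)^n) sin(nx)."""
    total = 0.5
    for n in range(1, N + 1):
        total += (1/math.pi)*(1/n)*(1 - (-1)**n)*math.sin(n*x)
    return total

# At x = pi/2 (a point of continuity) the series converges to 1.
assert abs(partial_sum(math.pi/2, 2000) - 1.0) < 1e-3
```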
Problem Set 3.5 (p. 114)
1. ODD  f(x) = (2/π) Σ_{n=1}^∞ { [1 − (−1)^n]/n } sin nx

3. EVEN  f(x) = π/2 + (2/π) Σ_{n=1}^∞ { [(−1)^n − 1]/n^2 } cos nx

5. EVEN  f(x) = 1/3 + (4/π^2) Σ_{n=1}^∞ [(−1)^n/n^2] cos nπx

7. EVEN  f(x) = (2/3)π^2 + 4 Σ_{n=1}^∞ [(−1)^{n+1}/n^2] cos nx

9. ODD  f(x) = (2/π) Σ_{n=1}^∞ { [1 − (−1)^n(π + 1)]/n } sin nx

11. EVEN  f(x) = 2/π + (2/π) Σ_{n=1}^∞ { [1 + (−1)^n]/(1 − n^2) } cos nx

13. EVEN EXTENSION  f(x) = 1/2 + (2/π) Σ_{n=1}^∞ (1/n) sin(nπ/2) cos nπx
    ODD EXTENSION   f(x) = (2/π) Σ_{n=1}^∞ (1/n)(1 − cos(nπ/2)) sin nπx

15. EVEN EXTENSION  f(x) = 2/π + (4/π) Σ_{n=1}^∞ [(−1)^n/(1 − 4n^2)] cos 2nx
    ODD EXTENSION   f(x) = (8/π) Σ_{n=1}^∞ [n/(4n^2 − 1)] sin 2nx

17. EVEN EXTENSION  f(x) = π/4 + (2/π) Σ_{n=1}^∞ { [2 cos(nπ/2) − 1 − (−1)^n]/n^2 } cos nx
    ODD EXTENSION   f(x) = (4/π) Σ_{n=1}^∞ (1/n^2) sin(nπ/2) sin nx

19. EVEN EXTENSION  f(x) = 3/4 + (4/π) Σ_{n=1}^∞ { [cos(nπ/2) − 1]/n^2 } cos(nπx/2)
    ODD EXTENSION   f(x) = Σ_{n=1}^∞ [ (4/(n^2π^2)) sin(nπ/2) − (2/(nπ))(−1)^n ] sin(nπx/2)

21. EVEN EXTENSION  f(x) = 5/6 + (2/π^2) Σ_{n=1}^∞ { [3(−1)^n − 1]/n^2 } cos nπx
    ODD EXTENSION   f(x) = 4 Σ_{n=1}^∞ [ (−1)^{n+1}/(nπ) + ((−1)^n − 1)/(n^3π^2) ] sin nπx
Problem Set 3.6 (p. 117)
1. f(x) = i Σ_{n=−∞}^∞ (1/n)(−1)^{n+1} e^{inx}   (n ≠ 0)

3. f(x) = [2 sinh(π/2)/π] Σ_{n=−∞}^∞ (−1)^n [(1 − 2in)/(1 + 4n^2)] e^{2inx}
Problem Set 3.7 (p. 135)
1. (a) u(x, t) = (400/π^2) Σ_{n=1}^∞ (1/n^2) sin(nπ/2) sin(nπx/100) e^{−n^2π^2c^2t/10^4}

   (b) u(x, t) = (100/π) Σ_{n=1}^∞ (1/n)[cos(nπ/4) − cos(3nπ/4)] sin(nπx/100) e^{−n^2π^2c^2t/10^4}

3. u(x, t) = (4Aℓ/π^2) Σ_{n=1}^∞ (1/n^2) sin(nπ/2) sin(nπx/ℓ) cos(nπct/ℓ)
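Each term of the series in answer 1(a) can be checked against the heat equation u_t = c^2 u_xx and the boundary conditions u(0, t) = u(100, t) = 0; a sympy sketch:

```python
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)
n = sp.symbols('n', positive=True, integer=True)

# General term of answer 1(a), up to the constant factor 400/pi^2.
u_n = (sp.sin(n*sp.pi/2)/n**2) * sp.sin(n*sp.pi*x/100) \
      * sp.exp(-n**2*sp.pi**2*c**2*t/10**4)

# Satisfies the heat equation u_t = c^2 u_xx ...
assert sp.simplify(sp.diff(u_n, t) - c**2*sp.diff(u_n, x, 2)) == 0

# ... and vanishes at both ends of the 100 cm rod.
assert u_n.subs(x, 0) == 0
assert sp.simplify(u_n.subs(x, 100)) == 0
```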
A.4 Table of Laplace transforms.
f(t) = L^{−1}{F(s)}                              F(s) = L{f(t)}

1                                                1/s,  s > 0
e^{at}                                           1/(s − a),  s > a
sin at                                           a/(s^2 + a^2),  s > 0
t^n  (n a positive integer)                      n!/s^{n+1},  s > 0
t^p,  p > −1                                     Γ(p + 1)/s^{p+1},  s > 0
cos at                                           s/(s^2 + a^2),  s > 0
sinh at                                          a/(s^2 − a^2),  s > |a|
cosh at                                          s/(s^2 − a^2),  s > |a|
e^{at} sin bt                                    b/((s − a)^2 + b^2),  s > a
e^{at} cos bt                                    (s − a)/((s − a)^2 + b^2),  s > a
t^n e^{at}  (n a positive integer)               n!(s − a)^{−(n+1)},  s > a
u_c(t)                                           e^{−cs}/s,  s > 0
u_c(t)f(t − c)                                   e^{−cs}F(s)
e^{ct}f(t)                                       F(s − c)
f(ct)                                            (1/c)F(s/c),  c > 0
∫_0^t f(t − τ)g(τ) dτ                            F(s)G(s)
δ(t − c)                                         e^{−cs}
f^{(n)}(t)                                       s^n F(s) − s^{n−1}f(0) − … − f^{(n−1)}(0)
(−t)^n f(t)                                      F^{(n)}(s)
t^{−1}f(t)                                       ∫_s^∞ F(u) du
∫_0^t f(u) du                                    s^{−1}F(s)
(1/(2a^3)) sin at − (t/(2a^2)) cos at            1/(s^2 + a^2)^2
(1/(2a)) t sin at                                s/(s^2 + a^2)^2
f(t + T) = f(t)  (periodic)                      [1/(1 − e^{−sT})] ∫_0^T e^{−st}f(t) dt
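The entries above can be spot-checked with a computer algebra system; for example, the cos at row and the t sin at row with sympy:

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

# Table row: L{cos at} = s/(s^2 + a^2).
F = sp.laplace_transform(sp.cos(a*t), t, s, noconds=True)
assert sp.simplify(F - s/(s**2 + a**2)) == 0

# Table row: L{(1/(2a)) t sin at} = s/(s^2 + a^2)^2.
G = sp.laplace_transform(t*sp.sin(a*t)/(2*a), t, s, noconds=True)
assert sp.simplify(G - s/(s**2 + a**2)**2) == 0
```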