m 651 Course Notes


Notes from Math 651

Richard O. Moore

August 27, 2014

Contents

1 Ordinary differential equations
 1.1 Definitions, existence and uniqueness
 1.2 Linear homogeneous ODEs
  1.2.1 Constant coefficient ODEs
  1.2.2 Equidimensional equations
 1.3 Linear inhomogeneous ODEs
  1.3.1 Variation of parameters
  1.3.2 Undetermined coefficients
 1.4 Singular points and power series solutions
 1.5 Boundary and eigenvalue problems
 1.6 Inhomogeneous BVPs and Green’s functions
 1.7 Phase plane and nonlinear ODEs
  1.7.1 Hamiltonian systems

2 Partial differential equations
 2.1 Characteristics and classification
 2.2 The wave equation and D’Alembert’s solution
 2.3 The heat equation
 2.4 Separation of variables
 2.5 Inhomogeneities and eigenfunction expansions
 2.6 Laplace’s equation
  2.6.1 Laplace’s equation in cylindrical coordinates
 2.7 Vibrating membranes (wave equation in higher dimension)
 2.8 Transform methods
  2.8.1 The Fourier transform
 2.9 The Laplace transform
 2.10 Green’s functions for PDEs
  2.10.1 Free-space Green’s functions and the Method of Images


Introduce syllabus, texts, grade breakdown, office hours. Any non-math majors should see the professor.

The objective of this course is to provide everybody, particularly first-year graduate students, a common background in methods used to solve differential equations. We will also touch upon material related to dynamical systems and vector calculus.

1 Ordinary differential equations

Cf. Bender & Orszag; Boyce & DiPrima.

1.1 Definitions, existence and uniqueness

Consider the general nth-order ODE

G(x, y(x), y′(x), . . . , y(n)(x)) = 0,

where

y(n) := d^n y / dx^n.

Suppose G is differentiable in all its arguments, with ∂G/∂y(n) ≠ 0. Then, at least locally (by the implicit function theorem), we can express the ODE as

y(n) = F(x, y(x), . . . , y(n−1)). (1.1)

Definition: Recall that a function F(x) is said to be linear if F(ax + by) = aF(x) + bF(y) for all constants a and b and all arguments x and y.

Definition: ODE (1.1) is said to be linear if the multivariate function F(x, y, y′, . . . , y(n−1)) is linear in each of the arguments y, y′, . . . , y(n−1). An ODE that is not linear is referred to as nonlinear.

Example: y′′ = (2x + y)y′ is nonlinear. xy′′′ = 2y′′ − (tan x)y′ is linear.

Note: y′′′ = 2y′′ − (tan x)y′ + 1 is not technically linear by the above definition, but we refer to it as linear nonetheless, since F can be expressed as a linear function added to a function of x only.

Under the assumption of linearity, ODE (1.1) can be written in the form

Ly = f(x) with L := p0(x) + p1(x) d/dx + · · · + pn−1(x) d^{n−1}/dx^{n−1} + d^n/dx^n.


Definition: The linear ODE Ly = f(x) is said to be homogeneous if f(x) ≡ 0. A linear ODE that is not homogeneous is said to be inhomogeneous or nonhomogeneous. Note that it doesn’t strictly make sense to refer to a nonlinear ODE as either homogeneous or inhomogeneous. Why?

Linear, homogeneous ODEs enjoy the principle of superposition, which states that any two functions that satisfy the ODE can be added to produce a third function that also satisfies the ODE. This is not generally true for nonlinear ODEs. (It isn’t strictly true for linear, nonhomogeneous ODEs either, although we’ll see that the general method for solving problems involving linear, nonhomogeneous ODEs depends critically on the principle of superposition.)

Example: mẍ + cẋ + kx = f(t)

When solving an ODE, the constants of integration that arise in the general solution are fixed by applying initial conditions or boundary conditions.

Definition: An equation of the form (1.1) is called an initial value problem (IVP) if it is accompanied by initial conditions (ICs), i.e., y, y′, . . . , y(n−1) given at a single point x0:

y(x0) = a0,
y′(x0) = a1,
⋮
y(n−1)(x0) = an−1. (1)

Definition: (1.1) is called a boundary value problem (BVP) if it comes with boundary conditions (BCs), i.e., n quantities given at two or more points x0, x1, . . . , xk.

Example: y(x0) = a0, y(x1) = a1, y′′(x1) = a2, y^2(x0) + 2y′′′(x3) = a3

Just as linear ODEs are “simpler” than nonlinear ODEs in that we can make more general statements concerning existence and uniqueness, IVPs are “simpler” than BVPs for the same reason.

Theorem: If F(x, y, y′, . . . , y(n−1)) from (1.1) is continuous and bounded in all arguments within some neighbourhood of x0, a0, a1, . . . , an−1, then the IVP with ICs given by (1) has a solution (i.e., a solution exists).

Theorem: If, in addition to the above, F has continuous and bounded partial derivatives with respect to y, y′, . . . , y(n−1), then the solution found above is unique.

Proof: By Picard iteration; cf. Courant, Diff. & Int. Calculus, and see Problem Set #1.


Example: Consider y′ = F(x, y) with y(x0) = a0. Its formal solution is

y(x) = a0 + ∫_{x0}^{x} F(t, y(t)) dt.

We can construct a sequence of functions y0(x), y1(x), . . . satisfying

y_{j+1}(x) = a0 + ∫_{x0}^{x} F(t, y_j(t)) dt, (2)

then ask: does this sequence converge? You’ll see this in an exercise.
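The iteration (2) can be run symbolically; here is a minimal sketch using sympy (the choice of ODE, y′ = y with y(0) = 1, is an illustration of mine, not an example from the notes), whose iterates are the Taylor partial sums of e^x:

```python
import sympy as sp

x, t = sp.symbols("x t")

def picard(F, x0, a0, iterations):
    """Iterate y_{j+1}(x) = a0 + int_{x0}^x F(t, y_j(t)) dt, starting from y_0 = a0."""
    y = sp.sympify(a0)
    for _ in range(iterations):
        y = a0 + sp.integrate(F(t, y.subs(x, t)), (t, x0, x))
    return sp.expand(y)

# y' = y, y(0) = 1: the iterates are the Taylor partial sums of e^x
y4 = picard(lambda t, y: y, 0, 1, 4)
assert sp.simplify(y4 - (1 + x + x**2/2 + x**3/6 + x**4/24)) == 0
```

Each pass integrates the previous iterate, so convergence here is just convergence of the exponential series.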

Example: y′ = y^{1/3} with y(0) = 0. Here, F(x, y) is continuous and bounded on any interval, so we expect existence. However, ∂F/∂y is neither continuous nor bounded on any interval containing the origin, so we don’t expect uniqueness. Solving gives us three possible solutions: y = ±(2x/3)^{3/2} or y ≡ 0.

Example: y′ = x^{−1/2} with y(0) = 1. Here, F is neither continuous nor bounded on intervals containing x = 0, so we don’t expect either existence or uniqueness. Nevertheless, there is a unique solution y = 2x^{1/2} + 1. Moral: the above theorems give sufficient conditions for existence/uniqueness, not necessary ones!

For the linear ODE (1.1), our criteria for existence and uniqueness become conditions on the coefficients pj(x), namely that they be continuous and bounded on a given interval. Now let’s turn to the harder case of BVPs. Rather than focusing on a small interval around x0 as for the IVPs, we now have to worry about an entire interval between boundary points x1 and x2, for example.

Example: y′′ + y = 0. General solution y = A cos x + B sin x.
Case I: y(0) = 0, y′(π/2) = 1. Then A = 0, but B cos(π/2) = 0 ≠ 1. No solution!
Case II: y(0) = 0, y(π/2) = 3. Then A = 0, B = 3. Unique solution!
Case III: y(0) = 0, y(π) = 0. Then A = 0, B arbitrary. Infinitely many solutions!

Thus, even with a “well-behaved” ODE, BVPs can yield zero, one, or multiple solutions, and we can make no general statements. How does this relate to what you know about solutions to AX = B in linear algebra?

1.2 Linear homogeneous ODEs

Consider

Ly = 0 with L := p0(x) + p1(x) d/dx + · · · + pn−1(x) d^{n−1}/dx^{n−1} + d^n/dx^n. (3)


Clearly, any two solutions of (3) can be added in linear combination and the result will be another solution. Moreover, the general solution to this equation takes the form

y = ∑_{j=1}^{J} cj yj(x),

where {yj(x)} is a linearly independent set of solutions to (3). Any solution to (3) can then be expressed in this form through a unique choice of constants cj, j = 1, . . . , J.

Definition: A set of functions {yj(x)} is said to be linearly dependent if there exists a nontrivial solution (i.e., a solution with at least one nonzero constant) to

∑_{j=1}^{J} cj yj(x) = 0.

A set of functions that is not linearly dependent is said to be linearly independent.

Example: L = 1 + d^2/dx^2, i.e., y′′ + y = 0. Possible general solutions:

y = c1 sin x + c2 cos x (4)
y = c1 e^{ix} + c2 e^{−ix} (5)
y = c1 sin x + c2 sin(x + π/3). (6)

Coefficients c1, c2 are then determined by ICs (or BCs, if possible).

Definition: If n functions y1(x), . . . , yn(x) each have n − 1 derivatives, then their Wronskian is given by

W(x) = W[y1(x), . . . , yn(x)] := det

( y1(x)          . . .  yn(x)          )
( y′1(x)         . . .  y′n(x)         )
( ⋮                     ⋮              )
( y1^{(n−1)}(x)  . . .  yn^{(n−1)}(x)  )

Theorem: If n functions y1(x), . . . , yn(x) have n − 1 derivatives and are linearly dependent on an interval I, then their Wronskian is identically zero on I.

Proof: Suppose y1, . . . , yn are linearly dependent. Then

c1 y1 + c2 y2 + · · · + cn yn = 0

has a nontrivial solution, and so therefore do

c1 y′1 + c2 y′2 + · · · + cn y′n = 0, . . . (7)
c1 y1^{(n−1)} + c2 y2^{(n−1)} + · · · + cn yn^{(n−1)} = 0, (8)

which is only possible if W = 0.


Example: Calculate Wronskian for above solutions to y′′ + y = 0.

Theorem: If n functions y1, . . . , yn all satisfy the same nth-order homogeneous linear ODE on an interval I, then if W ≡ 0 on I, the functions are linearly dependent.

Theorem (Abel’s Theorem): Suppose y1, . . . , yn satisfy (3). Then

dW/dx = −pn−1(x) W.

Proof: Insert yj^{(n)} = −p0 yj − p1 y′j − · · · − pn−1 yj^{(n−1)} into dW/dx. Thus,

W = W0 exp[ −∫_{x0}^{x} pn−1(s) ds ].

This is Abel’s formula, and it demonstrates that if all coefficients pj(x) are continuous, then W is either never zero (LI) or identically zero (LD).

Example: y′′ − [(1 + x)/x] y′ + (1/x) y = 0, with solutions y1 = 1 + x, y2 = e^x. W = x e^x both by direct calculation and by Abel’s formula. We’ll see later that x = 0 is a regular singular point.
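This example is easy to check with a computer algebra system; the following sympy snippet (my own verification, not part of the notes) confirms both the direct Wronskian and Abel’s identity:

```python
import sympy as sp

x = sp.symbols("x")
y1, y2 = 1 + x, sp.exp(x)

# direct calculation: W = y1*y2' - y1'*y2
W = sp.simplify(sp.Matrix([[y1, y2],
                           [sp.diff(y1, x), sp.diff(y2, x)]]).det())
assert sp.simplify(W - x*sp.exp(x)) == 0  # W = x e^x

# Abel: dW/dx = -p_{n-1} W, where p1 = -(1+x)/x is the coefficient of y' here
p1 = -(1 + x)/x
assert sp.simplify(sp.diff(W, x) + p1*W) == 0
```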

Theorem: Equation (3) has exactly n linearly independent solutions y1(x), . . . , yn(x) in any interval where the p0(x), . . . , pn−1(x) are continuous.

Proof: Let yj(x) satisfy (3) together with the ICs yj^{(j−1)}(x0) = 1 and yj^{(i)}(x0) = 0 for i ≠ j − 1. By the existence theorem, each such IVP has a unique solution. But W[y1, . . . , yn](x0) = det I = 1, so their Wronskian is never zero and they are therefore LI. Now suppose (3) has an additional solution y∗. The resulting (n + 1) × (n + 1) Wronskian of y1, . . . , yn, y∗ vanishes at x0, since the ODE makes the row of nth derivatives a linear combination of the preceding rows; hence y∗ is linearly dependent on y1, . . . , yn.

1.2.1 Constant coefficient ODEs

Suppose the coefficients pj(x) in (3) are constant. The ODE is then referred to as constant-coefficient, and solutions are found by substituting y = e^{rx} into the differential operator L[y], yielding a characteristic polynomial P(r). The roots of P(r) correspond to solutions of the ODE. We consider two cases:

Case I: All roots r1, . . . , rn of P(r) are distinct. In this case, we immediately have n linearly independent solutions, so that the general solution is

y = c1 e^{r1 x} + c2 e^{r2 x} + · · · + cn e^{rn x}.


If the ODE has real coefficients pj, then any complex roots of P(r) will occur in complex conjugate pairs r1, r2 = a + ib, a − ib, in which case we can write the corresponding pair of solutions in the form

y = e^{ax}(d1 cos(bx) + d2 sin(bx)).

Case II: P(r) has repeated roots. Suppose r1 is a root of order m. Then P(r) = (r − r1)^m Q(r), where Q(r1) ≠ 0 and

L(e^{rx}) = (r − r1)^m Q(r) e^{rx}.

Clearly, y = e^{r1 x} satisfies Ly = 0, but we need to find another m − 1 solutions to form a complete set of solutions. Note that

L(x e^{rx}) = m(r − r1)^{m−1} Q(r) e^{rx} + (r − r1)^m Q′(r) e^{rx} + (r − r1)^m Q(r) x e^{rx},

so L(x e^{r1 x}) = 0 if m > 1. We can build all m solutions by multiplying by successive degrees of monomial to give e^{r1 x}, x e^{r1 x}, . . . , x^{m−1} e^{r1 x}.

Example: Find the general solution of y^{(iv)} + 2y′′ + y = 0.
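A quick sympy check of this example (my own sketch): the characteristic polynomial is r^4 + 2r^2 + 1 = (r^2 + 1)^2, so r = ±i are roots of order m = 2 and the rule above supplies the factors of x:

```python
import sympy as sp

r, x = sp.symbols("r x")
# characteristic polynomial of y'''' + 2y'' + y = 0
roots = sp.roots(r**4 + 2*r**2 + 1, r)
assert roots == {sp.I: 2, -sp.I: 2}   # r = ±i, each of multiplicity m = 2

# repeated roots give e^{±ix} and x e^{±ix}; in real form,
# y = (c1 + c2 x) cos x + (c3 + c4 x) sin x.  Spot-check one choice of constants:
y = (1 + x)*sp.cos(x) + (2 - x)*sp.sin(x)
assert sp.simplify(sp.diff(y, x, 4) + 2*sp.diff(y, x, 2) + y) == 0
```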

1.2.2 Equidimensional equations

Definition: The simplest non-constant-coefficient ODEs are equidimensional or Euler equations, satisfying

Ly := x^n d^n y/dx^n + αn−1 x^{n−1} d^{n−1}y/dx^{n−1} + · · · + α1 x dy/dx + α0 y = 0.

The name “equidimensional” comes from the fact that the ODE is invariant under the scaling transformation x → ax for any a ≠ 0.

Just as exponential functions reduced constant-coefficient ODEs to algebraic characteristic equations, monomials reduce equidimensional ODEs to algebraic indicial equations L(x^s) = P(s) x^s, where

P(s) = s(s − 1) · · · (s − (n − 1)) + αn−1 s(s − 1) · · · (s − (n − 2)) + · · · + α1 s + α0.

Again, we have the cases of distinct and repeated roots to consider.

Case I: Distinct roots. The solution is simply

y = c1 x^{s1} + c2 x^{s2} + · · · + cn x^{sn}.

Complex roots again appear in conjugate pairs s1,2 = a ± ib, yielding solutions

c1 x^{a+ib} + c2 x^{a−ib} = x^a (d1 cos(b ln x) + d2 sin(b ln x)).


Case II: Repeated roots. If s1 is a root of order m of the indicial equation P(s) = 0, then L[x^s] = (s − s1)^m Q(s) x^s, so

L[(ln x) x^s] = L[(d/ds) x^s] = m(s − s1)^{m−1} Q(s) x^s + (s − s1)^m Q′(s) x^s + (s − s1)^m Q(s)(ln x) x^s,

so again, L[(ln x) x^{s1}] = 0 provided m > 1. We therefore have m solutions x^{s1}, (ln x) x^{s1}, . . . , (ln x)^{m−1} x^{s1}.

Example: 2x^2 y′′ − 4x y′ + 6y = 0; x^2 y′′ − 5x y′ + 9y = 0.
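The second equation above has a repeated indicial root, so the ln x rule applies; a small sympy sketch (my own check):

```python
import sympy as sp

s, x = sp.symbols("s x", positive=True)

# x^2 y'' - 5x y' + 9y = 0: substituting y = x^s gives P(s) = s(s-1) - 5s + 9
P = sp.expand(s*(s - 1) - 5*s + 9)
assert sp.roots(P, s) == {3: 2}       # repeated root s = 3 of order m = 2

# the second solution therefore carries a factor ln x
y = x**3 * sp.log(x)
assert sp.simplify(x**2*sp.diff(y, x, 2) - 5*x*sp.diff(y, x) + 9*y) == 0
```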

Note: If we divide the equidimensional equation by x^n to put it in the form (3), we see that p0 = α0/x^n, p1 = α1/x^{n−1}, etc., so the coefficients are singular on intervals containing x = 0, suggesting that Abel’s formula breaks down.

If we happen to know some but not all of the n linearly independent solutions to an nth-order linear homogeneous ODE, we can derive the others using a technique known as reduction of order. Letting y(x) = u(x) y1(x), where y1(x) is the known solution, results in an (n−1)st-order ODE for the unknown u(x).

Example: Ly := y′′ + p(x)y′ + q(x)y = 0. Find a second solution using reduction of order. Answer (not particularly useful as a formula):

y2(x) = y1(x) ∫^x (1/y1^2(s)) e^{−∫^s p(r) dr} ds
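As a sanity check on this formula (the ODE y′′ − 2y′ + y = 0 with known solution y1 = e^x is an example of my choosing), the integral should reproduce the familiar second solution x e^x:

```python
import sympy as sp

x, s = sp.symbols("x s")
y1 = sp.exp(x)          # known solution of y'' - 2y' + y = 0, where p(x) = -2

# y2 = y1(x) * int^x exp(-int^s p dr) / y1(s)^2 ds, lower limits taken as 0
integrand = sp.exp(-sp.integrate(-2, s)) / y1.subs(x, s)**2
y2 = sp.simplify(y1 * sp.integrate(integrand, (s, 0, x)))
assert sp.simplify(y2 - x*sp.exp(x)) == 0

# confirm y2 satisfies the ODE
assert sp.simplify(sp.diff(y2, x, 2) - 2*sp.diff(y2, x) + y2) == 0
```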

1.3 Linear inhomogeneous ODEs

Consider Ly = f(x), where L is the nth-order linear differential operator given in (3). Clearly, any two solutions to this ODE will have a difference that satisfies Ly = 0, which we already know has a complete set y1(x), . . . , yn(x) of n linearly independent solutions. Thus, all solutions to Ly = f must have the form

y(x) = yp(x) + c1 y1(x) + · · · + cn yn(x),

where yp(x) is a (any) particular solution to the inhomogeneous ODE Lyp = f.

A first-order linear ODE is always solvable by finding a suitable integrating factor, i.e., a multiplicative factor that allows the ODE to be integrated directly. Consider

y′ + p(x)y = f(x).

Multiplying both sides by exp(∫^x p(s) ds) results in

[e^{∫^x p(s) ds} y]′ = e^{∫^x p(s) ds} f(x),

which is integrated to give

y(x) = ∫^x e^{−∫_s^x p(r) dr} f(s) ds.
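A small sympy check of this formula (the ODE y′ + y = x and the lower limit x0 = 0 are my choices for illustration):

```python
import sympy as sp

x, s = sp.symbols("x s")
# y' + y = x: p(x) = 1, f(s) = s; here e^{s-x} = e^{-int_s^x p(r) dr}
f = s
y = sp.simplify(sp.integrate(sp.exp(s - x) * f, (s, 0, x)))
assert sp.simplify(y - (x - 1 + sp.exp(-x))) == 0

# verify the ODE (this particular solution has y(0) = 0)
assert sp.simplify(sp.diff(y, x) + y - x) == 0
```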


Example: y′ = y/(x + y). Not linear in y! But treating x as a function of y gives dx/dy = x/y + 1, which is linear; solve to get x = y ln y + cy. Can we invert? Depends on what c is.

1.3.1 Variation of parameters

More generally, inhomogeneous ODEs are solved using variation of parameters, which relies on the solutions of the homogeneous ODE to produce a first-order system of inhomogeneous ODEs that we can solve by direct integration. Consider

Ly := y′′ + p1(x)y′ + p0(x)y = f(x)

and suppose that Ly1 = Ly2 = 0 (i.e., we already have both linearly independent solutions to the homogeneous problem). To find a particular solution (to the inhomogeneous problem), we let

yp(x) = u1(x)y1(x) + u2(x)y2(x),

where u1 and u2 are unknown functions to be determined. Differentiating, we get

y′p = u′1 y1 + u1 y′1 + u′2 y2 + u2 y′2.

Before taking another derivative, we set the terms involving derivatives of u1 and u2 to zero:

u′1y1 + u′2y2 = 0.

We’ll see why we can do this in a second. Taking another derivative then gives

y′′p = u′1 y′1 + u1 y′′1 + u′2 y′2 + u2 y′′2.

Recalling that Ly1 = Ly2 = 0, we arrive at

Lyp = u′1 y′1 + u′2 y′2 = f.

We can rewrite the two first-order ODEs that we ended up with as a system:

( y1   y2  ) ( u′1 )   ( 0 )
( y′1  y′2 ) ( u′2 ) = ( f ).

Note that the determinant of the matrix on the left is the Wronskian W(x) of y1 and y2, and is nonzero by the assumption that these functions are linearly independent. Solving, either by inverting this matrix or by elimination, gives

u′1(x) = −y2(x) f(x)/W(x),
u′2(x) = y1(x) f(x)/W(x),

leading to a particular solution of the form

yp = −y1(x) ∫^x y2(s) f(s)/W(s) ds + y2(x) ∫^x y1(s) f(s)/W(s) ds.

Clearly, yp is only defined up to the addition of an arbitrary linear combination of y1 and y2, reflected by the constants of integration in the two integrals.
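The whole recipe can be sketched in sympy; the equation y′′ − y′ − 2y = e^{3x} below is my own example, chosen so that the integrals close in elementary terms:

```python
import sympy as sp

x = sp.symbols("x")
y1, y2 = sp.exp(-x), sp.exp(2*x)      # homogeneous solutions of y'' - y' - 2y = 0
f = sp.exp(3*x)

W = sp.simplify(y1*sp.diff(y2, x) - sp.diff(y1, x)*y2)   # Wronskian = 3 e^x
u1 = sp.integrate(-y2*f/W, x)
u2 = sp.integrate(y1*f/W, x)
yp = sp.simplify(u1*y1 + u2*y2)
assert sp.simplify(yp - sp.exp(3*x)/4) == 0

# yp indeed solves the inhomogeneous equation
assert sp.simplify(sp.diff(yp, x, 2) - sp.diff(yp, x) - 2*yp - f) == 0
```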


Note: You’ll note the similarity of spirit between variation of parameters and the methodof reduction of order.

Example: y′′ − y′ − 2y = x/ln x.

1.3.2 Undetermined coefficients

Variation of parameters is somewhat cumbersome to implement for higher-order differential equations. A simpler method involving “educated guessing” to determine a particular solution to Ly = f is often applied when the following conditions are met:

1. the differential operator L has constant coefficients, and

2. the inhomogeneous term f(x) is a linear combination of simple exponentials and polynomials. (More broadly, it applies when f consists of functions that repeat or terminate upon repeated differentiation.)

The basic rules for forming a trial form of particular solution are as follows:

1. If f(x) and its derivatives do not contain any solutions to Ly = 0, then set yp equal to a linear combination of the linearly independent terms contained in f and its first n derivatives.

2. If f(x) or its derivatives contain terms that satisfy Ly = 0, then multiply those terms in the trial solution by x^s, where s is the smallest positive integer such that the trial solution no longer contains solutions to Ly = 0.

Example: y′′ − 3y′ + 2y = 3e−x − 10 cos 3x, y(0) = 1, y′(0) = 2.
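The first example can be sketched with sympy (rule 1 applies, since neither e^{−x} nor cos 3x and sin 3x solve the homogeneous equation, whose characteristic roots are r = 1, 2):

```python
import sympy as sp

x, A, B, C = sp.symbols("x A B C")
L = lambda y: sp.diff(y, x, 2) - 3*sp.diff(y, x) + 2*y
f = 3*sp.exp(-x) - 10*sp.cos(3*x)

# rule 1: the trial solution spans f and its derivatives
trial = A*sp.exp(-x) + B*sp.cos(3*x) + C*sp.sin(3*x)
resid = sp.expand(L(trial) - f)
eqs = [resid.coeff(sp.exp(-x)), resid.coeff(sp.cos(3*x)), resid.coeff(sp.sin(3*x))]
sol = sp.solve(eqs, [A, B, C])
assert sol == {A: sp.Rational(1, 2), B: sp.Rational(7, 13), C: sp.Rational(9, 13)}

yp = trial.subs(sol)
assert sp.simplify(L(yp) - f) == 0
```

The ICs y(0) = 1, y′(0) = 2 then fix the constants multiplying the homogeneous solutions e^x and e^{2x}; the sketch above only finds yp.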

Example: y′′ + 4y = x sin 2x.

Note: The method of undetermined coefficients is often simplified by using complex expo-nentials. (Redo last example.)

Example: Simple harmonic oscillator: mẍ + cẋ + kx = F0 cos(Ωt) with ICs x(0) = x0, ẋ(0) = v0. Cases:

1. undamped, unforced: natural frequency

2. damped, unforced: underdamped, overdamped, critically damped

3. undamped, non-resonant forcing

4. damped, resonant forcing


1.4 Singular points and power series solutions

Recall our general formulation for an nth order homogeneous linear ODE (3):

Ly = 0 with L := d^n/dx^n + pn−1(x) d^{n−1}/dx^{n−1} + · · · + p1(x) d/dx + p0(x).

We saw that if the pj(x) are continuous (and therefore nonsingular), then (3) has n linearly independent solutions yj(x) and the general solution is y = ∑_{j=1}^{n} cj yj. We now consider what happens when one or more of the pj has a singularity.

Definition: If the coefficients p0(x), . . . , pn−1(x) are analytic (i.e., differentiable) in a neighborhood of x = x0 ∈ C, then x0 is said to be an ordinary point of (3).

Note: If x0 is an ordinary point of (3), then all solutions of (3) can be expanded in a Taylor series about x0. The radius of convergence of this Taylor series will be at least as large as the distance (in C) to the nearest singularity of the coefficients pj(x).

Example: (x^2 + 1)y′ + 2xy = 0. Simple poles at x = ±i. The Maclaurin series (x = 0 is an ordinary point) of y = C/(x^2 + 1) converges for |x| < 1.

Definition: A point x0 that is not an ordinary point is said to be a singular point. These can be regular or irregular.

Definition: A singular point x0 is said to be regular if (x − x0)^j pn−j(x) is analytic in a neighborhood of x = x0 for each j = 1, . . . , n. In other words, pn−j(x) can be no more singular than 1/(x − x0)^j.

Example: Bessel’s equation of order ν:

x2y′′ + xy′ + (x2 − ν2)y = 0.

Then p0 = 1 − ν^2/x^2 and p1 = 1/x, so all points are ordinary except x0 = 0, which is a regular singular point for all ν.

Note:

1. If x0 is a RSP, then solutions of (3) may be analytic. If a solution is not analytic, then its singularity is either a pole or a branch point (which can be algebraic or logarithmic).

2. If x0 is a RSP, then (3) always has at least one solution of the form y1 = (x − x0)^α A(x), where α (typically real) is referred to as the indicial exponent and A is analytic at x0 (with radius of convergence at least as large as the distance to the nearest singularity of the coefficients pj in C). Clearly, this solution is


(a) analytic at x0 if α is a nonnegative integer;

(b) singular with a pole if α is a negative integer; or

(c) singular with an algebraic branch point if α ∉ Z.

3. If (3) is at least 2nd order, then there is a second solution that has the form either of

y2 = (x − x0)^β B(x)

or

y2 = (x − x0)^α A(x) ln(x − x0) + (x − x0)^β C(x),

where α and A are the same as above, and B and C are analytic at x0. This can be generalized to continue to the nth solution of an nth-order ODE, of the form

yn = (x − x0)^γ [A0(x) + A1(x) ln(x − x0) + A2(x) (ln(x − x0))^2 + · · · + An−1(x) (ln(x − x0))^{n−1}].

Definition: A singular point that is not regular is referred to as an irregular singular point.

Note: Not much can be said about solutions near ISPs, except that at least one solution does not have one of the forms above. Solutions typically have an essential singularity, e.g., y = e^{1/x}.

Finding series solutions to linear ODEs near ordinary points is straightforward. Simply match terms in a Taylor expansion:

y = ∑_{n=0}^{∞} an (x − x0)^n.

Example: Legendre’s equation:

(1 − x^2) y′′ − 2x y′ + α(α + 1) y = 0. (9)

Show that this has regular singular points (simple poles) at x = ±1 and ordinary points everywhere else. Expanding about x0 = 0 gives

∑_{n=0}^{∞} (n + 2)(n + 1) an+2 x^n − ∑_{n=2}^{∞} n(n − 1) an x^n − 2 ∑_{n=1}^{∞} n an x^n + α(α + 1) ∑_{n=0}^{∞} an x^n = 0.

Recursion relation:

an+2 = −[(α − n)(α + n + 1)/((n + 2)(n + 1))] an.

From n even:

y1(x) = a0 ∑_{n=0}^{∞} (−1)^n bn x^{2n},


bn = α(α − 2) · · · (α − 2n + 2)(α + 1)(α + 3) · · · (α + 2n − 1) / (2n)!.

From n odd:

y2(x) = a1 ∑_{n=0}^{∞} (−1)^n cn x^{2n+1},

cn = (α − 1)(α − 3) · · · (α − 2n + 1)(α + 2)(α + 4) · · · (α + 2n) / (2n + 1)!.

Of course, a0 and a1 are arbitrary since the ODE is linear; we can therefore arbitrarily set them equal to 1.
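The recursion relation is easy to iterate numerically; a short sketch of mine using exact fractions shows the truncation that occurs for integer α (here α = 2, where the even series stops at 1 − 3x^2, proportional to the Legendre polynomial P2):

```python
from fractions import Fraction

def legendre_coeffs(alpha, a0, a1, N):
    """a_{n+2} = -(alpha - n)(alpha + n + 1) / ((n + 2)(n + 1)) * a_n."""
    a = [Fraction(0)] * (N + 1)
    a[0], a[1] = Fraction(a0), Fraction(a1)
    for n in range(N - 1):
        a[n + 2] = -Fraction((alpha - n)*(alpha + n + 1), (n + 2)*(n + 1)) * a[n]
    return a

# alpha = 2, even branch (a0 = 1, a1 = 0): series truncates at 1 - 3x^2
a = legendre_coeffs(2, 1, 0, 8)
assert a[:5] == [1, 0, -3, 0, 0] and not any(a[5:])
```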

To demonstrate how to find a series solution about a regular singular point, we focus on the 2nd-order case, i.e.,

y′′ + [p(x)/(x − x0)] y′ + [q(x)/(x − x0)^2] y = 0

with p and q analytic at x0, i.e.,

p(x) = ∑_{n=0}^{∞} pn (x − x0)^n,  q(x) = ∑_{n=0}^{∞} qn (x − x0)^n.

We pose a solution of the form (referred to as Frobenius form)

y1(x) = (x − x0)^α A(x) = (x − x0)^α ∑_{n=0}^{∞} an (x − x0)^n,

with A analytic at x = x0. Substituting, we get

∑_{n=0}^{∞} (α + n)(α + n − 1) an (x − x0)^{α+n−2} + ∑_{m,n=0}^{∞} (α + n) an pm (x − x0)^{α+m+n−2} + ∑_{m,n=0}^{∞} an qm (x − x0)^{α+m+n−2} = 0.

Equating coefficients of powers of x − x0, we get at lowest order

P(α) a0 = 0 with P(α) := α^2 + (p0 − 1)α + q0.

As we saw for equidimensional equations, the roots of the indicial polynomial P determine α (up to two possibilities, since P is quadratic). The next order provides the recurrence relation

P(α + n) an = −∑_{m=0}^{n−1} ((α + m) pn−m + qn−m) am. (10)

Recall that P has two roots, giving rise to the possibility that the left-hand side of (10) will vanish, indicating that no solution of this form is possible unless the right-hand side vanishes simultaneously.

Assuming α1 ≥ α2, we have cases


Case I: α1 − α2 ≠ 0, 1, 2, 3, . . . . Then (10) provides a well-defined recurrence relation and two solutions of Frobenius form exist.

Case II: α1 = α2. One solution of Frobenius form exists, but the other does not. We know from experience with equidimensional equations that the second solution will contain ln(x − x0).

Case III: α1 − α2 = N ≥ 1. This case breaks into two subcases:

IIIa: The RHS of (10) is nonzero for n = N. Then only one solution can be of Frobenius form and the other has a ln(x − x0) term.

IIIb: The RHS of (10) vanishes for n = N. Then the first (Frobenius) solution truncates and a second solution is found in Frobenius form involving the arbitrary coefficient aN.

Example: Modified Bessel equation of order ν:

x2y′′ + xy′ − (x2 + ν2)y = 0.

Regular singular point at x0 = 0, with indicial equation

P(α) = α^2 − ν^2

and recurrence relation

P(n + α) an = an−2.

These give at least one solution of Frobenius form,

Iν(x) = ∑_{n=0}^{∞} (x/2)^{2n+ν} / (n! Γ(n + ν + 1)).

(Recall that Γ(z) = ∫_0^∞ e^{−t} t^{z−1} dt.) In this example, a second solution in Frobenius form certainly exists if ν ≠ N/2 for any integer N (for other ODEs, it is sometimes possible to have a second Frobenius solution even when the two roots of the indicial equation differ by an integer). To find the second solution in the remaining cases, we need a logarithmic term similar to that found in solutions to equidimensional equations whose indicial polynomial has repeated roots.
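Partial sums of this Frobenius series converge rapidly and can be spot-checked against sympy's built-in besseli; the sample values ν = 1/2 and x = 13/10 below are my own choices:

```python
import sympy as sp

x = sp.symbols("x")
nu = sp.Rational(1, 2)

# partial sum of I_nu(x) = sum_n (x/2)^{2n+nu} / (n! Gamma(n+nu+1))
series = sum((x/2)**(2*n + nu) / (sp.factorial(n)*sp.gamma(n + nu + 1))
             for n in range(10))

val = series.subs(x, sp.Rational(13, 10)).evalf()
ref = sp.besseli(nu, sp.Rational(13, 10)).evalf()
assert abs(val - ref) < 1e-12
```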

Consider ν = 0 (the repeated-roots case). First (Frobenius) solution:

I0(x) = ∑_{n=0}^{∞} (x/2)^{2n} / (n!)^2.

To find the second solution, consider how we found the first solution:

y(x; α) = ∑_{n=0}^{∞} an(α) x^{n+α}. (11)


Letting

L = d^2/dx^2 + (1/x) d/dx − 1,

suppose we define the functions an(α) so that they satisfy the recurrence relation, thereby killing off all but the leading-order term in

Ly(x; α) = a0 x^{α−2} P(α). (12)

Then we find the first solution to Ly = 0 by simply choosing α = α1 = 0 in (11). Now suppose we differentiate (12) with respect to α to get

L (dy/dα) = (da0/dα) x^{α−2} P(α) + a0 ln(x) x^{α−2} P(α) + a0 x^{α−2} P′(α).

If α1 is a multiplicity-2 root of P, then P(α1) = P′(α1) = 0, so the right-hand side vanishes at α = α1 and we have our second solution

y2(x) = (dy/dα)|_{α=α1}.

Turning the crank as before, we eventually find

K0(x) = −(γ + ln(x/2)) I0(x) + ∑_{n=1}^{∞} [(x/2)^{2n}/(n!)^2] ∑_{j=1}^{n} 1/j.

Other cases in which the second solution is not of Frobenius form (e.g., where the indicial roots are separated by a nonzero integer) can be found using similar methods.

1.5 Boundary and eigenvalue problems

So far, we have considered mostly initial value problems (IVPs), i.e., ODEs with initial conditions (ICs). Now we will consider what happens when the conditions are imposed at different values of x, giving boundary conditions (BCs).

Definition: A BVP Lu = f is referred to as linear and homogeneous if

1. L is a linear differential operator,

2. f = 0, i.e., the ODE is homogeneous, and

3. the BCs are linear and homogeneous (i.e., any two functions satisfying the BCs can be added to form a new function that also satisfies the BCs).


Example:

• u′′ + u = 0 with BCs u(0) = u(1) = 0 is a linear homogeneous BVP.

• u′′ + u = 0 with BCs u(0) = 0, u(1) = 1 is not a homogeneous BVP.

• u′′ + u = 0 with BCs u^2(0) = 0, cos(u(1)) = 0 is not a linear BVP.

As we discussed earlier, BVPs can have no solution, a single solution, or an infinite number of solutions in general. For linear homogeneous BVPs, we are guaranteed to have the trivial solution, so these problems either have a single trivial solution or an infinite family of solutions.

Example: Consider u′′ + λu = 0 with u(0) = u(π) = 0. This only has the trivial solution unless λ = n^2, n = 1, 2, . . . . Also consider the more complicated BCs u(0) = 0, u′(π) + hu(π) = 0.

Definition: Values of the parameter λ for which linear homogeneous BVPs similar to the example above have nontrivial solutions are called eigenvalues. The nontrivial solutions associated with each of the eigenvalues are called eigenfunctions.

Much of the motivation for studying eigenvalue problems stems from their usefulness in solving partial differential equations. Consider a PDE that we’ll see later:

ρ(x) c(x) ∂u/∂t = ∂/∂x (K0(x) ∂u/∂x) + α(x) u

with boundary conditions u(0, t) = u(L, t) = 0. We could try looking for solutions that separate variables, i.e.,

u(x, t) = φ(x)h(t)

and we’d arrive at

h′/h = [ d/dx (K0 dφ/dx) + α(x) φ ] / [ ρ(x) c(x) φ ] = −λ,

where λ is an arbitrary constant (i.e., independent of both x and t). We therefore want to find all possible values of λ that satisfy the BVP

d/dx (K0 dφ/dx) + (α + λρc) φ = 0

with BCs φ(0) = φ(L) = 0. This is an eigenvalue problem.

We will focus on the eigenvalues of a particularly useful form of operator in this regard: Sturm–Liouville operators.


Definition: A Sturm–Liouville eigenvalue problem has the form

Lu + λσu = 0, (13)

for a ≤ x ≤ b, where L is the differential operator of the form

L := d/dx (p(x) d/dx) − q(x)

and (13) is accompanied by linear homogeneous BCs at x = a and x = b. The Sturm–Liouville problem is referred to as regular if the following hold:

1. p, q, σ are real and continuous on [a, b],

2. p > 0, σ > 0 on [a, b], and

3. BCs are not mixed (i.e., each BC applies only to x = a or x = b).

Note: The following are examples of regular BCs:

1. u(a) = 0 is a Dirichlet (1st-kind) BC

2. u′(a) = 0 is a Neumann (2nd-kind) BC

3. u′(a) + hu(a) = 0 is a Robin (3rd-kind) BC

The following are examples of irregular BCs:

1. u(a) + u(b) = 0 is a mixed BC

2. u(a) = u(b), u′(a) = u′(b) are periodic BCs (for a 2nd-order eigenvalue problem)

3. u bounded as x → a+ is a typical singularity BC, applied when either p or σ vanishes at x = a

Definition: The inner product of two functions f(x) and g(x) on [a, b] defined with respect to a positive-definite weight function σ(x) > 0 is given by

(f, g) := ∫_a^b σ(x) f(x) g(x) dx.

Functions f and g are said to be orthogonal if (f, g) = 0. The standard L2 inner product has weighting function σ(x) ≡ 1. If we wish to extend this definition to complex-valued functions, the standard (weighted) inner product with real, positive σ(x) becomes

(f, g) := ∫_a^b σ(x) f(x) g̅(x) dx,

where g̅ denotes the complex conjugate of g.


Definition: The formal adjoint of a linear operator L is the operator L† satisfying

(Lu, v) = (u, L†v) + boundary terms (B.T.).

If u and v satisfy BCs that cause the boundary terms above to vanish, then Lu = f and L†v = g are said to satisfy adjoint BVPs. Operator L is said to be self-adjoint if L† = L.

Theorem: Consider the eigenvalue problem given by Sturm-Liouville operator L in (13)accompanied by regular BCs. Then

1. the eigenvalues are real and form an ordered countably infinite sequence λ1 < λ2 < . . . ;

2. the corresponding eigenfunctions φn(x) form a complete and orthogonal system, i.e., any continuous function f(x) with piecewise continuous 1st and 2nd derivatives that satisfies the BCs can be expanded in an absolutely and uniformly convergent series

f(x) = Σ_{n=1}^∞ cn φn(x)

where (assuming the φn are normalized so that (φn, φn) = 1)

cn = ∫_a^b σ(x)f(x)φn(x) dx;

3. the nth eigenfunction φn(x) has n − 1 nodes, i.e., it divides the domain into n subintervals; and

4. each eigenvalue can be computed directly from its eigenfunction using the Rayleigh quotient:

λn = [ −p φn φn′ |_a^b + ∫_a^b ( p(φn′)² − q φn² ) dx ] / ∫_a^b σ φn² dx.

Example: Show the above properties for u′′ + λu = 0, u(0) = u(π) = 0.
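As a quick numerical check of property 4, the sketch below (assuming NumPy is available; the trapezoid helper and sample points are our own choices) applies the Rayleigh quotient to the eigenfunctions sin(nx) of this example, for which p = 1, q = 0, σ = 1 and λn = n².

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoid rule (avoids version-specific numpy helpers)
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

def rayleigh_quotient(phi, dphi, a=0.0, b=np.pi, n_pts=2001):
    # lambda_n = [-p phi phi'|_a^b + int_a^b (p (phi')^2 - q phi^2) dx] / int_a^b sigma phi^2 dx
    # with p = 1, q = 0, sigma = 1 for u'' + lambda u = 0
    x = np.linspace(a, b, n_pts)
    boundary = -(phi(b) * dphi(b) - phi(a) * dphi(a))
    return (boundary + trapezoid(dphi(x)**2, x)) / trapezoid(phi(x)**2, x)

for n in (1, 2, 3):
    lam = rayleigh_quotient(lambda x: np.sin(n * x), lambda x: n * np.cos(n * x))
    print(n, round(lam, 6))   # recovers lambda_n = n^2
```

The boundary term vanishes here because of the Dirichlet conditions, leaving only the two integrals.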

Example: Show how the above properties change for periodic boundary conditions, e.g., u′′ + λu = 0, u(0) = u(π), u′(0) = u′(π).

A variety of eigenvalue problems arise in mathematical physics, resulting in useful expansions in terms of Legendre, Hermite, Jacobi, Laguerre, and Chebyshev polynomials. Let us look in more detail at another singular Sturm-Liouville problem, Bessel's equation of order n = 0:

x²u′′ + xu′ + λx²u = 0, u bounded as x → 0, u(a) = 0. (14)

Here the coefficients are p(x) = x, q(x) = 0, σ(x) = x, so the singularity in the coefficients occurs at x = 0, and if we take as our domain 0 ≤ x ≤ a, this implies that we need only


require boundedness as x → 0 instead of the usual BC at x = 0. Let us consider a Dirichlet condition at x = a, i.e., u(a) = 0. From the Rayleigh quotient we see that all eigenvalues are nonnegative, and in fact they must be positive since trivial eigenfunctions are not permitted. Note then that if we perform a change of variables

z = √λ x, z d/dz = x d/dx,

with u(x) = v(z), we arrive at

z²v′′ + zv′ + z²v = 0.

We must now understand the solutions to this equation. From local analysis near z = 0, we know that of the two linearly independent solutions, one approaches a constant as z → 0 and one has a logarithmic singularity. We denote the first solution by J0(z) and the second one by Y0(z). Note also that, for z ≫ 1 and assuming v and v′ remain bounded, the dominant terms give

v′′ + v ≈ 0,

so that, as z → ∞, we expect sinusoidal oscillations in v. This picture is enough information to understand the eigenfunctions we are looking for, as we let z1 < z2 < z3 < . . . denote the roots (zeros) of J0(z). These zeros are tabulated in standard texts (and online): z1 = 2.40, z2 = 5.52, z3 = 8.65, etc. Note that z3 − z2 = 3.13 ≈ π.

Returning to (14), application of the second BC gives

u(a) = v(√λ a) = J0(√λ a) = 0,

so that √λ a = zn for some n ≥ 1. The eigenvalues therefore form a countably infinite sequence, with

λn = (zn/a)².

The associated eigenfunctions are simply “stretched” copies of J0(z), i.e.,

φn(x) = J0(√λnx) = J0((x/a)zn).

Each of these eigenfunctions has n − 1 interior zeros, and is related to its eigenvalue through the Rayleigh quotient. Any piecewise-smooth function f(x) on [0, a] can be expressed as an eigenfunction expansion

f(x) = Σ_{n=1}^∞ cn φn(x)

where the coefficients cn can be determined using the orthogonality relation

(φm(x), φn(x)) = ∫_0^a σ(x)φm(x)φn(x) dx = ∫_0^a x J0(√λm x) J0(√λn x) dx = δmn ∫_0^a x J0²(√λn x) dx,

where the Kronecker delta δmn = 1 if m = n and zero otherwise. Thus,

cn = (f, φn)/(φn, φn) = ∫_0^a x f(x) J0((x/a)zn) dx / ∫_0^a x J0²((x/a)zn) dx.


Example: Expand f(x) = x in terms of these eigenfunctions.
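A numerical sketch of this expansion (assuming SciPy is available; a = 1 and the truncation at 20 terms are our choices) computes the Fourier–Bessel coefficients of f(x) = x by quadrature:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, jn_zeros

a = 1.0
zeros = jn_zeros(0, 20)                 # z_1, ..., z_20: first 20 zeros of J0

def coeff(z_n):
    # c_n = (f, phi_n) / (phi_n, phi_n) with weight sigma(x) = x and f(x) = x
    num, _ = quad(lambda x: x * x * j0(z_n * x / a), 0.0, a)
    den, _ = quad(lambda x: x * j0(z_n * x / a)**2, 0.0, a)
    return num / den

c = [coeff(z) for z in zeros]

def f_series(x):
    # truncated eigenfunction expansion of f(x) = x
    return sum(cn * j0(zn * x / a) for cn, zn in zip(c, zeros))

print(f_series(0.5))    # approaches f(0.5) = 0.5 as more terms are retained
```

Convergence is slow near x = a because f does not vanish there while every eigenfunction does, analogous to the Gibbs behavior of Fourier sine series.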

1.6 Inhomogeneous BVPs and Green’s functions

We have already seen two methods for solving inhomogeneous ODEs Ly = f: the method of variation of parameters and the method of undetermined coefficients. These methods are equally applicable to IVPs or BVPs, and are very dependent on the functional form of f(x). Many physics and engineering problems take the form of BVPs that require the response of a system to a variety of different functions f(x), where the BCs and the linear operator L are the same throughout. For these problems, an alternative form of solution is appropriate, known as the method of Green's functions.

Definition: The Dirac delta δ(x) is a unit impulse “function” satisfying

1. δ(x) = 0 for x ≠ 0 and

2. ∫_{−∞}^{∞} δ(x) dx = 1.

From these properties it can immediately be shown that

∫_{−∞}^{∞} δ(x − a)f(x) dx = ∫_{a−}^{a+} δ(x − a)f(x) dx = f(a)

and that

δ(x) = d/dx h(x)

where h(x) is the Heaviside function defined by

h(x) = 0 if x < 0, 1/2 if x = 0, 1 if x > 0.

The Dirac delta is not technically a function but can be defined in many ways as a limit of a sequence of functions, for example

δ(x) = lim_{ε→0} [h(x + ε) − h(x − ε)] / (2ε).

Definition: The Green's function for boundary value problem Lu = f with linear homogeneous BCs B1[u] = 0 and B2[u] = 0 is the solution to the associated BVP:

LG(x, ξ) = δ(x − ξ), a < x, ξ < b

with B1[G] = B2[G] = 0.


Theorem: Let G(x, ξ) be the Green's function for the boundary value problem Lu = f with homogeneous BCs B1[u] = B2[u] = 0. Then the BVP solution is given by

u(x) = ∫_a^b G(x, ξ)f(ξ) dξ.

Example: Solve y′′ = f(x) with BCs y(0) = 0, y′(1) = 1.
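To make the theorem concrete, here is a minimal numerical sketch (assuming SciPy) for the closely related fully homogeneous problem y′′ = f with y(0) = 0, y′(1) = 0; the piecewise Green's function and the test forcing are worked out by us, not taken from the notes.

```python
import numpy as np
from scipy.integrate import quad

# For u'' = f with homogeneous BCs u(0) = 0, u'(1) = 0, solving
# G'' = delta(x - xi) with G(0, xi) = 0, G'(1, xi) = 0 piecewise
# (continuity at xi plus a unit jump in G') gives G(x, xi) = -min(x, xi).
def G(x, xi):
    return -min(x, xi)

def solve(f, x):
    # u(x) = int_0^1 G(x, xi) f(xi) dxi; 'points' flags the kink at xi = x
    val, _ = quad(lambda xi: G(x, xi) * f(xi), 0.0, 1.0, points=[x])
    return val

# Check against f = 1, whose exact solution is u(x) = x^2/2 - x.
for x in (0.25, 0.5, 0.75):
    print(solve(lambda xi: 1.0, x), x**2 / 2 - x)
```

Once G is known, any new forcing f requires only a quadrature, which is exactly the appeal of the method.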

If the BVP for u involves inhomogeneous BCs, it is more useful to consider the adjoint Green's function.

Definition: Let u satisfy Lu = f with boundary conditions B1[u] = c1, B2[u] = c2. The adjoint Green's function H(x, ξ) satisfies

L†H(x; ξ) = δ(x− ξ)

with boundary conditions B†1[H] = B†2[H] = 0, where L† is the formal adjoint of L and B†1 and B†2 are the adjoint boundary operators, chosen to force any unknown boundary terms in (1.5) to vanish.

Theorem: Let H be the adjoint Green's function to the BVP given above. Then the solution of the BVP is given by

u(ξ) = (f, H(·; ξ)) := ∫_a^b f(x)H(x; ξ) dx + boundary terms.

Proof: Note that

(f, H(·; ξ)) = (Lu, H(·; ξ)) = (u, L†H(·; ξ)) + B.T. = ∫_a^b u(x)δ(x − ξ) dx + B.T. = u(ξ) + B.T.

Theorem: The Green’s function G(x, ξ) and adjoint Green’s function H(x, ξ) satisfy

H(x, ξ) = G(ξ, x).

Proof: Note that

(G(·; ξ), L†H(·; ξ′)) = (G(·; ξ), δ(· − ξ′)) = G(ξ′, ξ)
= (LG(·; ξ), H(·; ξ′)) = (δ(· − ξ), H(·; ξ′)) = H(ξ, ξ′)

where the boundary terms have all dropped out due to the homogeneous forward and adjoint conditions on G and H.

This means that the original Green's function can still be used to find the solution to Lu = f with inhomogeneous boundary conditions, by noting that

u(ξ) = (f, G(ξ, ·)) = ∫_a^b f(x)G(ξ; x) dx + B.T. (15)


Theorem: If the boundary value problem Lu = f with boundary conditions B1[u] = c1 and B2[u] = c2 is self-adjoint, including both L and the BCs, then H(x; ξ) = G(x; ξ), i.e.,

u(ξ) = ∫_a^b f(x)G(x; ξ) dx + B.T. (16)

Example: y′′ + y′ = x, y(0) = 1, y′(1) = 0.

1.7 Phase plane and nonlinear ODEs

cf. Strogatz, Ch. 6

The solution methods discussed above are for linear ODEs, for which we can guaranteethe existence and uniqueness of solutions (for IVPs, at least) under fairly broad conditions.We now briefly consider techniques for understanding the solutions of nonlinear ODEs.

We start by considering the trajectories traced by solutions of 2nd-order IVPs on the phase plane, plotting curves of y = ẋ versus x that are parametrized by time t.

Example: Consider mẍ + cẋ + kx = 0 with ICs x(0) = 1 and ẋ(0) = 0. Letting u = (x y)T := (x ẋ)T we obtain the 1st-order system

du/dt = Au := [ 0  1 ; −k/m  −c/m ] u

with u(0) = (1 0)T. The behaviour of solutions will depend on the eigenvalues and eigenvectors of A:

Auj = λjuj → duj/dt = λjuj → uj(t) = uj(0)e^{λjt}.

Recall that the eigenvalues of A are

λ1,2 = −c/(2m) ± √( (c/(2m))² − k/m ) = −c/(2m) ± √( (c/(2m))² − ω0² ), (17)

from which we can immediately see the categories of behaviour we saw earlier:

1. undamped: c = 0. All solutions revolve clockwise around the origin x = y = 0.

2. overdamped: (c/(2m))² > ω0². All solutions decay to the origin without oscillations.

3. underdamped: 0 < (c/(2m))² < ω0². All solutions spiral (clockwise) into the origin.

4. critically damped: (c/(2m))² = ω0². Repeated roots. Initial algebraic growth in one direction but ultimate decay to the origin.


More generally, as we saw earlier, any nth-order ODE can be expressed as an n-dimensional, 1st-order ODE for the state vector x:

dx/dt = F(x, t). (18)

We would like to understand how the state x(t) evolves in time t, i.e., its trajectory in the phase plane (the generalization to several dimensions is often called phase space).

Definition: ODE (18) is referred to as autonomous if ∂F/∂t ≡ 0. Otherwise, it is referred to as nonautonomous. A planar autonomous system can be written

ẋ = f(x, y), ẏ = g(x, y). (19)

Theorem: Trajectories of an autonomous system of ODEs never cross.

Proof: Suppose two trajectories cross at (x0, y0) at times t1 and t2, respectively. Then letting τ1 = t1 − t and τ2 = t2 − t defines the same IVP in backward times τ1 and τ2 with IC (x0, y0). Thus the two trajectories give different solutions for the same IVP, which must have a unique solution. Note that if the system is not autonomous, the backward ODEs are not identical and can therefore have different solutions.

Definition: A fixed point or equilibrium point (x0, y0) of (19) satisfies

f(x0, y0) = 0, g(x0, y0) = 0. (20)

The linearization of (19) about fixed point (x0, y0) is the linear system

d/dt (ξ, η)T = A (ξ, η)T, where A := [ ∂f/∂x(x0, y0)  ∂f/∂y(x0, y0) ; ∂g/∂x(x0, y0)  ∂g/∂y(x0, y0) ], (21)

where A is the Jacobian of the velocity field, i.e., of the transformation from (x, y) to (ẋ, ẏ). The linearization (21) is obtained by letting

x = x0 + ξ, y = y0 + η

and expanding f and g in Taylor series, keeping only 1st-order terms in ξ and η. For values of ξ and η small, i.e., near the fixed point, the system behaves nearly linearly and is therefore dictated by the eigenvalues (or, for a 2-d system, the trace and determinant) of A. Recall that λ1λ2 = detA and λ1 + λ2 = trA, so that

λ1,2 = (1/2) trA ± √( ((1/2) trA)² − detA ). (22)

Generic cases:


1. λ1 > λ2 > 0 (trA > 0, (trA)² > 4 detA). Unstable node.

2. 0 > λ1 > λ2 (trA < 0, (trA)² > 4 detA). Stable node.

3. λ1 > 0 > λ2 (detA < 0). Saddle.

4. λ1, λ2 complex conjugates, Re(λ1) > 0 (trA > 0, (trA)² < 4 detA). Unstable spiral.

5. λ1, λ2 complex conjugates, Re(λ1) < 0 (trA < 0, (trA)² < 4 detA). Stable spiral.

Degenerate cases:

1. λ1, λ2 purely imaginary complex conjugates (trA = 0, detA > 0). Center.

2. λ1 = λ2 < 0. Degenerate node or star node, depending on the geometric multiplicity of λ (i.e., how many eigenvectors are associated with λ).
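The trace–determinant classification above is easy to mechanize; the helper below is a hypothetical sketch of ours (assuming NumPy), exercised on the damped-oscillator matrix from (17) with k/m = 1.

```python
import numpy as np

def classify_fixed_point(A):
    # Classify the linear system du/dt = A u using tr A and det A,
    # following the generic and degenerate cases listed above.
    tr, det = float(np.trace(A)), float(np.linalg.det(A))
    disc = tr**2 - 4 * det
    if det < 0:
        return "saddle"
    if disc > 0:
        return "unstable node" if tr > 0 else "stable node"
    if tr == 0:
        return "center"
    if disc < 0:
        return "unstable spiral" if tr > 0 else "stable spiral"
    return "degenerate or star node"

# Damped oscillator (17) with k/m = 1, i.e., A = [[0, 1], [-1, -c/m]]:
print(classify_fixed_point(np.array([[0, 1], [-1, -3]])))   # overdamped: stable node
print(classify_fixed_point(np.array([[0, 1], [-1, -1]])))   # underdamped: stable spiral
print(classify_fixed_point(np.array([[0, 1], [-1, 0]])))    # undamped: center
```

In floating point a true center (trA exactly 0) is a measure-zero case, so for numerically computed Jacobians one would test trA against a tolerance rather than zero.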

Example: Find and classify the fixed points for the damped pendulum equation θ'' + cθ' + sin θ = 0.

1.7.1 Hamiltonian systems

The linearization gives a local picture of what happens near the fixed points. Global information requires the existence of additional structure such as conserved quantities or integrals of motion. An example of this is ODEs for which a conserved quantity known as the Hamiltonian can be formulated.

Definition: Autonomous system (19) is said to be Hamiltonian if there exists a function H(x, y) (called the Hamiltonian) such that

f(x, y) = ∂H/∂y, g(x, y) = −∂H/∂x. (23)

Such a function can be found if ∂f/∂x + ∂g/∂y = 0.

If (19) is Hamiltonian, then on every trajectory (x(t), y(t)), we have

dH/dt = (∂H/∂x)ẋ + (∂H/∂y)ẏ = −gf + fg = 0.

Trajectories of (19) are then level curves of H(x, y).

Definition: A separatrix is a trajectory that separates qualitatively different regions in phase space. Two common types of separatrix are:

1. homoclinic orbits connecting a fixed point to itself, and

2. heteroclinic orbits connecting two different fixed points.


Example: Show that the undamped (nonlinear) pendulum equation is Hamiltonian and find the separatrices. Show what happens when damping is added.

2 Partial differential equations

Most physically realistic models require the study of quantities that depend on several variables (e.g., space and time). A partial differential equation (PDE) expresses a relationship between the quantity of interest and its instantaneous rates of change with respect to the variables upon which it depends, as specified by a model for its behavior. PDEs can obviously manifest more complex behavior than ODEs, but the two are intimately related in various ways:

• many (but not all) solution methods for PDEs involve breaking the problem into a set of ODEs that one can solve using the methods we saw earlier;

• a PDE can often be regarded as an "infinite-dimensional ODE" in a similar sense to how we equated an nth-order ODE in one dimension to a first-order ODE in n dimensions.

2.1 Characteristics and classification

We begin by solving a simple PDE using the ODEs satisfied along special trajectories known as characteristics (or characteristic curves) of the PDE.

Example: ∂u/∂t + c ∂u/∂x = 0 for x ∈ R, t > 0 and with initial condition u(x, 0) = f(x).
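For this first example the characteristics are the lines x − ct = const, along which du/dt = 0, so u(x, t) = f(x − ct). A quick finite-difference sanity check (our own choices of c and f, assuming NumPy):

```python
import numpy as np

c = 2.0
f = lambda x: np.exp(-x**2)          # initial condition u(x, 0) = f(x)
u = lambda x, t: f(x - c * t)        # data carried unchanged along characteristics

# central differences: u_t + c u_x should vanish
x0, t0, h = 0.3, 0.7, 1e-5
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_x = (u(x0 + h, t0) - u(x0 - h, t0)) / (2 * h)
print(u_t + c * u_x)                 # approximately 0
```

The same construction, with curved rather than straight characteristics, handles the variable-coefficient example that follows.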

Example: ∂u/∂t + 3t² ∂u/∂x = 2tu with u(x, 0) = f(x).

Definition: The method of characteristics solves the ODEs that result from considering the evolution of u along characteristic curves (xc(t), yc(t)). Parametrization of the Cauchy data at t = 0 gives u(s, t) in characteristic coordinates, which must then be inverted if possible to find u(x, y).

Example: x ∂u/∂x + y ∂u/∂y = (x + y)u with u = 1 on x = 1, 1 < y < 2.

More generally, we can consider

a(x, y) ∂u/∂x + b(x, y) ∂u/∂y = c(x, y)

with Cauchy data specified along a curve Γ in the xy-plane. Parametrizing characteristics with t, we have

du/dt = (∂u/∂x)ẋ + (∂u/∂y)ẏ


giving the ODE

d/dt (x, y, u)T = ( a(x, y, u), b(x, y, u), c(x, y, u) )T.

Picard's theorem guarantees that if a, b, c are Lipschitz, then (2.1) has a unique solution locally (i.e., near a given point). If the curve Γ on which Cauchy data are given is parametrized by s, then for values of x, y near Γ, the method of characteristics provides a solution u(s, t) along characteristics x(s, t), y(s, t).

Definition: The Jacobian (or Jacobian determinant) of a transformation x = x(s, t), y = y(s, t) is given by

J = det[ ∂x/∂s  ∂x/∂t ; ∂y/∂s  ∂y/∂t ]. (24)

Theorem: The inverse function theorem states that a continuously differentiable transformation x = x(s, t), y = y(s, t) is invertible near P ∈ R² if the Jacobian is nonzero at P.

Along characteristics satisfying ẋ = a and ẏ = b, we must therefore have that

det[ ∂x/∂s  a ; ∂y/∂s  b ] ≠ 0. (25)

One way this determinant can vanish is for the characteristics to be tangential to the curve Γ. When this occurs, the partial derivatives of u are no longer uniquely specified and the Cauchy problem is no longer well-posed.

Consider now the 2nd-order PDE

a ∂²u/∂x² + 2b ∂²u/∂x∂y + c ∂²u/∂y² = f(x, y, u, ∂u/∂x, ∂u/∂y). (26)

The Cauchy data for this problem are given by specifying on curve Γ ⊂ R² that x = x0(s), y = y0(s), u = u0(s) and ∂u/∂n = v0(s), where Γ is parametrized by s1 ≤ s ≤ s2 and ∂u/∂n is the normal derivative to curve Γ. The second-order PDE (26) with Cauchy data as prescribed has characteristics defined by

a ẏ² − 2b ẋẏ + c ẋ² = 0

where (x(t), y(t)) is a characteristic parametrized by t. The discriminant ∆ = b² − ac determines how many characteristics the PDE has, which determines its classification. A PDE is classified as:

1. hyperbolic if it has two real characteristics. This occurs when b² > ac, e.g., uxx − uyy = 0.

2. parabolic if it has a single degenerate characteristic. This occurs when b² = ac, e.g., uy − uxx = 0.


3. elliptic if it has no real characteristics. This occurs when b² < ac, e.g., uxx + uyy = 0.

It is helpful to bear in mind as we explore the solutions of different PDEs that "information propagates along characteristics". Thus, hyperbolic equations often exhibit wave-like behavior where a solution at (x, t1) will depend on information at earlier time t0 < t1 defined on a set of points bounded by the two characteristics passing through (x, t1). Parabolic equations, with a single degenerate real characteristic, retain some aspect of information traveling in a given direction, but the information spreads, or diffuses, with infinite speed as it propagates. Finally, solutions of elliptic equations, with no real characteristics, do not exhibit wave-like behavior at all but rather form surfaces that are inherently global in that they are determined largely by boundary conditions.

2.2 The wave equation and D’Alembert’s solution

The use of characteristics as a method to solve PDEs is typically restricted to hyperbolic equations such as the wave equation,

∂²u/∂t² − v² ∂²u/∂x² = 0 with x ∈ R, t > 0. (27)

The characteristics are defined by

dx/dt = ±v,

and one can formally express the PDE in characteristic coordinates given by the solutions to these equations x±(t) respectively satisfying x+(0) = ξ and x−(0) = η, i.e.,

x = vt + ξ, x = −vt + η. (28)

Inverting, we have ξ = x − vt and η = x + vt, which we use to obtain

∂²u/∂ξ∂η = 0. (29)

We can integrate this directly to obtain

u = F(η) + G(ξ) = F(x + vt) + G(x − vt). (30)

Applying boundary conditions (i.e., Cauchy data)

u(x, 0) = µ(x) (31)
∂u/∂t(x, 0) = ν(x) (32)

gives D'Alembert's solution,

u(x, t) = (1/2)[µ(x + vt) + µ(x − vt)] + (1/(2v)) ∫_{x−vt}^{x+vt} ν(s) ds. (33)


Example: Consider D'Alembert's solution with ICs u(x, 0) = h(1 − |x|), ∂u/∂t(x, 0) = 0. Consider the solution when these ICs are swapped. Consider D'Alembert on the half-plane x ≥ 0, t ≥ 0 with Neumann BC at x = 0.
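D'Alembert's formula (33) is straightforward to evaluate numerically. The sketch below (assuming SciPy, with our own choice of smooth initial data rather than the example's) checks it against the closed form obtained by integrating ν = sin explicitly:

```python
import numpy as np
from scipy.integrate import quad

v = 1.0
mu = lambda x: np.exp(-x**2)         # initial displacement u(x, 0)
nu = lambda x: np.sin(x)             # initial velocity u_t(x, 0)

def dalembert(x, t):
    # (33): average of the shifted initial displacement plus the
    # initial velocity integrated over the domain of dependence
    avg = 0.5 * (mu(x + v * t) + mu(x - v * t))
    integral, _ = quad(nu, x - v * t, x + v * t)
    return avg + integral / (2 * v)

# For nu = sin the integral is cos(x - vt) - cos(x + vt), so compare:
x0, t0 = 0.3, 0.4
exact = 0.5 * (mu(x0 + t0) + mu(x0 - t0)) + 0.5 * (np.cos(x0 - t0) - np.cos(x0 + t0))
print(dalembert(x0, t0), exact)
```

Note how the quadrature limits x ± vt are exactly the two characteristics through (x, t), i.e., the domain of dependence discussed above.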

2.3 The heat equation

The simplest parabolic PDE is the diffusion, or heat, equation. This models the evolution of heat, for instance along a one-dimensional rod:

ρc ∂T/∂t = ∂/∂x ( K0 ∂T/∂x ) + Q for 0 < x < L, t > 0 (34)

where ρ(x) is the density, c(x) is the heat capacity, K0(x) is the thermal conductivity, and Q(x, t) is a source term, such as external heating, applied throughout the rod. This PDE requires initial and boundary conditions, for example,

T(0, t) = 0, T(L, t) = 0, T(x, 0) = f(x). (35)

Note that this PDE is linear and homogeneous, so that any two solutions can be added to form a third solution, by the principle of superposition. This motivates the approach we will be using for the next few classes.

2.4 Separation of variables

Definition: The method of separation of variables is a technique for solving linear differential equations that looks for solutions of separable form, e.g., u(x, t) = φ(x)h(t), in order to identify a family of solutions that provides a complete set of basis functions that can be used to represent all relevant initial conditions. This typically involves identifying the eigenfunctions of a convenient linear operator, often of Sturm-Liouville form.

Example: Solve the above Dirichlet problem for square-integrable f(x).

Definition: Many of these problems require the eigenvalues and eigenfunctions of the operator L := d²/dx² (or, in higher dimensions, the Laplacian operator). On finite domain [0, L], these eigenfunctions are referred to as Fourier modes, and the resulting series for f(x) a Fourier series. Depending on the boundary conditions, one might obtain a Fourier sine series or a Fourier cosine series.

Example: Find the Fourier modes for u′′ + λu = 0 with Dirichlet, Neumann and periodicboundary conditions. Use them to solve the heat equation in each case.

2.5 Inhomogeneities and eigenfunction expansions

Inhomogeneities in the heat equation can often be handled by subtracting off the steady-state solution and expanding everything that remains in the appropriate eigenfunctions.


Example: Solve

∂u/∂t = ∂²u/∂x² + e^{−t} sin(3x)

with u(0, t) = 0, u(π, t) = 1, u(x, 0) = f(x).

A more general and elegant approach, however, is to derive ODEs for the Fourier coefficients using their definitions as projections of the solution u(x, t). This is referred to as a Galerkin method. In the example above with Dirichlet BCs, for instance, the appropriate eigenfunction expansion for u would be

u(x, t) = Σ_{n=1}^∞ an(t) sin(nx).

Rather than inserting this directly into the PDE as we did for the method of separation of variables, we now recall that the coefficients are defined by

an(t) = (2/π) ∫_0^π u(x, t) sin(nx) dx.

Differentiating with respect to t,

dan/dt = (2/π) ∫_0^π (∂u/∂t) sin(nx) dx.

Substituting for ∂u/∂t using the heat equation and integrating by parts gives ODEs for the coefficients an(t), whose initial condition is simply given by the Fourier sine series of f(x), i.e.,

an(0) = (2/π) ∫_0^π f(x) sin(nx) dx.
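For the unforced heat equation u_t = u_xx with these Dirichlet BCs, the Galerkin ODEs reduce to dan/dt = −n²an, so an(t) = an(0)e^{−n²t}. A short sketch (assuming SciPy; the single-mode IC is our choice) assembles the solution from the projected coefficients:

```python
import numpy as np
from scipy.integrate import quad

def sine_coeffs(f, n_max):
    # a_n(0) = (2/pi) * int_0^pi f(x) sin(nx) dx
    return [2 / np.pi * quad(lambda x: f(x) * np.sin(n * x), 0, np.pi)[0]
            for n in range(1, n_max + 1)]

def u(x, t, a0):
    # unforced heat equation: each sine mode decays like exp(-n^2 t)
    return sum(an * np.exp(-n**2 * t) * np.sin(n * x)
               for n, an in enumerate(a0, start=1))

f = lambda x: np.sin(2 * x)          # single-mode initial condition
a0 = sine_coeffs(f, 5)               # a_2 = 1, all other coefficients ~ 0
print(u(0.8, 0.5, a0))               # compare with exp(-4 * 0.5) * sin(1.6)
```

With the forcing e^{−t} sin(3x) of the example, only the ODE for a3 picks up an extra inhomogeneous term; the other modes decay exactly as above.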

2.6 Laplace’s equation

In higher dimensions, the heat equation becomes

∂u/∂t = ∇²u,

and one is often interested in knowing what happens to the solution as t → ∞, i.e., does it approach a steady state and, if so, what is it? The answer to this question involves solving the most commonly encountered elliptic PDE, Laplace's equation. Note that in steady state, the solution is entirely dictated by boundary conditions and initial conditions do not play a role. We will consider solutions to Laplace's equation for a few different geometries.


Example: Consider the heat equation

∂u/∂t = k∇²u on the rectangle 0 < x < L, 0 < y < H

with initial condition u(x, y, 0) = f(x, y) and boundary conditions u(0, y, t) = u(L, y, t) = u(x, 0, t) = 0 and u(x, H, t) = g(x). Identify the BVP satisfied by the steady state and solve.

Note that, in the above problem, we chose the eigenfunctions corresponding to the variable (x, in this case) satisfying homogeneous boundary conditions. More complicated boundary data can be handled by appealing to the principle of superposition, as shown by the example below.

Example: Solve ∇²u = 0 on the same rectangular box as above, but with BCs u(x, 0) = f1(x), u(x, H) = f2(x), u(0, y) = g1(y), u(L, y) = g2(y).

2.6.1 Laplace’s equation in cylindrical coordinates

In some cases, the boundary conditions are imposed by geometry rather than by physical considerations. Consider Laplace's equation on a circular disk of radius a, where the only obvious boundary to consider is that imposed at r = a. It is obviously convenient in this geometry to use polar coordinates r = √(x² + y²) and θ = tan⁻¹(y/x) to look for a solution u(r, θ). The obvious BC is imposed at r = a, but since this is a second-order PDE in two dimensions, we expect the equivalent of four BCs, two on r and two on θ, to have a well-posed boundary value problem. It will turn out that the second condition on r is simply boundedness of u as r → 0, as we saw for Bessel's equation, while the natural BCs on θ are that u must be periodic in θ, i.e., u(r, θ + 2π) = u(r, θ) for all r and θ. The example below makes these ideas more concrete.

Example: Find solution u(r, θ) to Laplace’s equation on a circular disk, with BC u(a, θ) =f(θ).

Note from the solution that

u|_{r=0} = (1/(2π)) ∫_{−π}^{π} f(θ) dθ,

i.e., that the solution at the center of the disk is equal to the average of the boundary data on the circle. This averaging principle applies to all circles inside domains on which u satisfies Laplace's equation, precluding the existence of local extrema of u interior to the domain. It also implies uniqueness of solutions to Laplace's equation (or Poisson's equation, the nonhomogeneous version of Laplace's equation) with a Dirichlet boundary condition, as you will show in your assignment.
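The averaging principle is easy to test numerically; the harmonic function and sample circles below are our own illustration (assuming NumPy):

```python
import numpy as np

u = lambda x, y: x**2 - y**2          # a harmonic function: u_xx + u_yy = 0

def circle_average(x0, y0, radius, n=10_000):
    # average of u over a circle of given radius centered at (x0, y0)
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return float(np.mean(u(x0 + radius * np.cos(theta),
                           y0 + radius * np.sin(theta))))

# the average should match the center value u(1, 2) = -3 for any radius
print(circle_average(1.0, 2.0, 0.5), circle_average(1.0, 2.0, 1.5))
```

Because the circle lies entirely inside a domain where u is harmonic, the radius does not matter, exactly as the averaging principle asserts.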

Note that the eigenfunctions are defined by the homogeneous boundary conditions, i.e.,

u(r, θ) = φ(r)ψ(θ)


gives eigenvalue problem

ψ′′(θ) + λψ = 0

with BCs ψ(−π) = ψ(π) and ψ′(−π) = ψ′(π), with eigenvalues λn = n² and eigenfunctions ψ0 = 1, ψn1 = cos(nθ) and ψn2 = sin(nθ). Unlike regular Sturm-Liouville eigenvalue problems, here we have two linearly independent eigenfunctions for each nonzero eigenvalue λn.

Definition: Solutions to Laplace's equation satisfy the averaging principle, i.e., at any point (r, θ), the value of u at that point is equal to the average value of u on any circle centered at (r, θ). In particular, this implies that u can have no interior maxima or minima, i.e., that all extrema of u must occur on the boundary. It also implies uniqueness of solutions to Laplace's (and Poisson's) equation with Dirichlet boundary conditions.

Example: Find solution u(r, θ, z) to Laplace's equation on a cylinder, with BCs u(a, θ, z) = 0, u(r, θ, 0) = f(r, θ), u(r, θ, H) = 0.

In this example, it is convenient to identify 2-dimensional eigenfunctions, with a separation of variables according to

u(r, θ, z) = φ(r, θ)ψ(z),

giving an eigenvalue problem of

∇₂²φ + λφ = 0

with homogeneous Dirichlet BC φ(a, θ) = 0 (in addition to the usual natural BCs of boundedness as r → 0 and periodicity in θ). For convenience, we have written the 2-dimensional Laplacian as ∇₂², so that ∇² = ∇₂² + ∂²/∂z². This allows us to write the formal solution immediately and save finding the eigenfunctions for later, i.e.,

u(r, θ, z) = Σ_n cn φn(r, θ) sinh(√λn (H − z)).

Now let us turn back to the eigenvalue problem. Recall some properties of the divergence and Laplacian operators over domain D ⊂ R²:

∫∫_D (u∇₂²v − v∇₂²u) dA = ∮_{∂D} (u ∂v/∂n − v ∂u/∂n) ds

∫∫_D u∇₂²u dA = ∮_{∂D} u ∂u/∂n ds − ∫∫_D ‖∇u‖² dA,

where the outward normal derivative to domain D is defined as

∂u/∂n = ∇u · n


with n the unit outward normal to D. The multi-dimensional equivalent to the Rayleigh quotient relating an eigenvalue to its eigenfunction therefore states

λn = [ −∮_{∂D} φn (∂φn/∂n) ds + ∫∫_D ‖∇φn‖² dA ] / ∫∫_D φn² dA,

from which we can make a priori statements regarding the eigenvalues corresponding to particular boundary conditions. For example, homogeneous Dirichlet or Neumann conditions on φ would cause the first term in the numerator to disappear, allowing us to conclude immediately that all eigenvalues are nonnegative. In this case, we also have an orthogonality condition as we did for Sturm-Liouville eigenvalue problems, with inner product

(u, v) := ∫∫_D uv dA.

Recall that, for polar coordinates, dA = r dr dθ, so the weight function for Bessel's equation correctly appears in the inner product.

Returning to the example, we need only consider nonnegative eigenvalues, so let λ = κ² with κ ≥ 0. The given boundary condition is homogeneous Dirichlet data on the circle of radius a, and the natural boundary conditions are boundedness as r → 0 and periodicity in θ. Some analysis shows that the eigenvalues are most conveniently indexed by two indices n = 0, 1, 2, . . . and m = 1, 2, . . . with

λnm = (znm/a)²

where znm is the mth zero of the nth Bessel function of the first kind, Jn(z). The eigenfunctions are

φ0m = J0(√λ0m r)

and

φnm1 = Jn(√λnm r) cos(nθ), φnm2 = Jn(√λnm r) sin(nθ).

The solution to Laplace's equation on a cylinder with homogeneous BCs everywhere but on the base is thus

u(r, θ, z) = Σ_{m=1}^∞ c0m J0(√λ0m r) sinh(√λ0m (H − z))
+ Σ_{n=1}^∞ Σ_{m=1}^∞ Jn(√λnm r)(cnm cos nθ + dnm sin nθ) sinh(√λnm (H − z)).


Applying the condition at z = 0 and using orthogonality of the eigenfunctions (note that the sinh factors appear because the series is evaluated at z = 0), we have

c0m sinh(√λ0m H) = (φ0m, f)/‖φ0m‖²
= ∫_{−π}^{π} ∫_0^a J0(√λ0m r) f(r, θ) r dr dθ / ∫_{−π}^{π} ∫_0^a J0²(√λ0m r) r dr dθ

cnm sinh(√λnm H) = (φnm1, f)/‖φnm1‖²
= ∫_{−π}^{π} ∫_0^a Jn(√λnm r) cos(nθ) f(r, θ) r dr dθ / ∫_{−π}^{π} ∫_0^a Jn²(√λnm r) cos²(nθ) r dr dθ
= ∫_{−π}^{π} ∫_0^a Jn(√λnm r) cos(nθ) f(r, θ) r dr dθ / [ π ∫_0^a Jn²(√λnm r) r dr ]

dnm sinh(√λnm H) = (φnm2, f)/‖φnm2‖²
= ∫_{−π}^{π} ∫_0^a Jn(√λnm r) sin(nθ) f(r, θ) r dr dθ / ∫_{−π}^{π} ∫_0^a Jn²(√λnm r) sin²(nθ) r dr dθ
= ∫_{−π}^{π} ∫_0^a Jn(√λnm r) sin(nθ) f(r, θ) r dr dθ / [ π ∫_0^a Jn²(√λnm r) r dr ]

Simplifications of the integrals involving Bessel functions can often be made using properties found in reference texts such as Abramowitz & Stegun or Gradshteyn & Ryzhik, but we will forgo such simplifications for the purpose of these notes.

2.7 Vibrating membranes (wave equation in higher dimension)

The two-dimensional eigenvalue problem defined above prepares us for a return to the wave equation, now over two-dimensional membranes. As we will see, the musical notes that such a membrane produces are completely defined by the geometry and boundary conditions imposed at the membrane's boundary.

Example: Solve the wave equation,

∂²u/∂t² = c²∇²u

on the rectangular domain 0 ≤ x ≤ L, 0 ≤ y ≤ H with homogeneous Dirichlet BCs and initial conditions u(x, y, 0) = f(x, y), ∂u/∂t(x, y, 0) = g(x, y).

Example: Find the natural frequencies of a snare drum (i.e., the wave equation on a circular domain with homogeneous Dirichlet BCs).
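For the circular drum, homogeneous Dirichlet data at r = a forces Jn(√λ a) = 0, so the natural frequencies are ωnm = c znm/a with znm the mth zero of Jn. A quick sketch (assuming SciPy; the choice c = a = 1 is ours):

```python
from scipy.special import jn_zeros

c, a = 1.0, 1.0
# omega_nm = c * z_nm / a, with z_nm the mth zero of J_n
freqs = sorted(c * z / a for n in range(3) for z in jn_zeros(n, 3))

fundamental = freqs[0]                    # c * z_01 / a, with z_01 ~ 2.405
print([round(f / fundamental, 3) for f in freqs[:4]])
```

The overtone ratios (approximately 1.593, 2.136, 2.295) are not integers, unlike a vibrating string's harmonics, which is why a drum sounds percussive rather than tonal.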

2.8 Transform methods

The method of separation of variables is predicated on the fact that geometry and boundary conditions identify a suitable set of eigenfunctions in which we can expand the solution. If


the domain is infinite or semi-infinite, then we no longer expect a countably infinite set of functions that form a complete set. Rather, we expect a family of functions that depend continuously on a parameter, as illustrated by the following sections.

2.8.1 The Fourier transform

WARNING: In this section, we adopt the physics convention of assuming limits of ±∞ when no limits appear on an integral. Thus,

∫ f(x) dx := ∫_{−∞}^{∞} f(x) dx.

Example: Consider the 1-d heat equation with Dirichlet BCs on domain (−L, L) and let L → ∞. Use IC u(x, 0) = f(x). Note that we can define

c(λn) = cn = (1/(2L)) ∫_{−L}^{L} f(x) e^{−inπx/L} dx.

Definition: We say f(x) is absolutely integrable if ∫ |f(x)| dx < ∞.

Definition: We say f(x) is piecewise continuous if on any finite interval [a, b], f has at most a finite number of discontinuities, i.e., points x such that

lim_{y→x} f(y) ≠ f(x).

Definition: Suppose f is absolutely integrable and piecewise continuous. Then the Fourier transform of f is given by

F(ω) := F[f] = ∫ f(x) e^{−iωx} dx

and satisfies

1. |F(ω)| < ∞ for all ω ∈ R,

2. F(ω) is continuous on R,

3. F(ω) → 0 as ω → ±∞.

The inverse Fourier transform is then given by

F⁻¹[F] = (1/(2π)) ∫ F(ω) e^{iωx} dω

where

F⁻¹[F] = f(x) where f is continuous, and (1/2)( lim_{y→x⁻} f(y) + lim_{y→x⁺} f(y) ) where f has a discontinuity.


Note: The Fourier transform maps derivatives to multipliers. Suppose f → 0 as x → ±∞. Then by integrating by parts, we have that

F[f′] = ∫ f′(x) e^{−iωx} dx = iω ∫ f(x) e^{−iωx} dx = iωF(ω).

Moreover,

F[f⁽ⁿ⁾] = (iω)ⁿ F(ω).

Thus, with its power to map derivatives to multipliers, the Fourier transform can change PDEs to ODEs, and ODEs to algebraic equations.

Example: Solve the heat equation on the line, i.e.,

∂u/∂t = k ∂²u/∂x², with u(x, 0) = f(x).

Definition: The convolution of functions f(x) and g(x) is

(f ∗ g)(x) = ∫ f(x − ξ) g(ξ) dξ.

Note: The Fourier transform maps convolutions to products, i.e., if F(ω) := F[f] and G(ω) := F[g], then

F[f ∗ g] = F(ω)G(ω).

Thus, we can immediately write the solution to the heat equation for t > 0 in terms of a convolution between the initial condition f(x) and the heat kernel

h(x, t) = F⁻¹[e^{−kω²t}] = (1/√(4πkt)) e^{−x²/(4kt)}.
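A numerical sketch of this convolution (assuming SciPy; the Gaussian initial condition is our choice) reproduces the well-known fact that a Gaussian IC stays Gaussian while spreading:

```python
import numpy as np
from scipy.integrate import quad

k, t = 0.5, 1.0
f = lambda x: np.exp(-x**2)                       # initial condition

def heat_kernel(x, t):
    # h(x, t) = exp(-x^2/(4kt)) / sqrt(4 pi k t)
    return np.exp(-x**2 / (4 * k * t)) / np.sqrt(4 * np.pi * k * t)

def u(x, t):
    # u(., t) = h(., t) * f  (convolution over the whole line)
    val, _ = quad(lambda xi: heat_kernel(x - xi, t) * f(xi), -np.inf, np.inf)
    return val

# exact solution for this IC: exp(-x^2/(1 + 4kt)) / sqrt(1 + 4kt)
x0 = 0.7
exact = np.exp(-x0**2 / (1 + 4 * k * t)) / np.sqrt(1 + 4 * k * t)
print(u(x0, t), exact)
```

The exact expression follows from the fact that convolving two Gaussians adds their variances, the IC carrying variance 1/2 and the kernel 2kt.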

As mentioned, Fourier transforms are also useful for solving PDEs on infinite domains.

Example: Solve Laplace's equation on the upper half-plane −∞ < x < ∞, y ≥ 0 with u bounded and u(x, 0) = f(x).

2.9 The Laplace transform

A related technique that is more suited to semi-infinite domains is the Laplace transform. We jump straight to its definition.

Definition: A function f(t) is said to be of exponential order if |f(t)| ≤ Me^{bt} for some constants M and b and all t ≥ 0.


Definition: Suppose f(t) is of exponential order. Then the Laplace transform of f(t) is

F(s) := L[f] := ∫_0^∞ f(t) e^{−st} dt.

It satisfies the following properties:

1. |F(s)| ≤ M/(s − b) for s ∈ R, i.e., F(s) exists if s > b,

2. F(s) → 0 as s → ∞,

3. L[f′] = sF(s) − f(0),

4. L[f⁽ⁿ⁾] = sⁿF(s) − sⁿ⁻¹f(0) − sⁿ⁻²f′(0) − · · · − f⁽ⁿ⁻¹⁾(0),

5. L[tf] = −F′(s).

Note: Laplace transforms have a similar convolution theorem as do Fourier transforms, but where the convolution is defined a little differently:

(f ∗ g)(t) := ∫_0^t f(τ) g(t − τ) dτ.

Then, if F(s) = L[f] and G(s) = L[g], we have that

L[f ∗ g] = F(s)G(s).

Whereas for Fourier transforms the inverse transform is entirely similar to the forward transform, the inverse Laplace transform generally requires performing a contour integral in the complex plane, which is beyond the scope of this course. We will simply identify some useful inverse transforms as a lookup table to use in conjunction with the convolution theorem. Assume for the following that L[f] = F(s).

L[e^{at}f(t)] = F(s − a)

L[f(t − b)H(t − b)] = e^{−sb}F(s) (b ≥ 0, with H the Heaviside step function)

L[e^{at}] = 1/(s − a) (s > a)

L[δ(t − t₀)] = e^{−st₀} (t₀ > 0)

L[tⁿ] = n!/s^{n+1} (n = 1, 2, 3, . . . )
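These entries can be spot-checked with SymPy's symbolic inverse transform. A sketch; the two entries tested are arbitrary choices:

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)
# Spot-check two table entries via the symbolic inverse Laplace transform.
f1 = sp.inverse_laplace_transform(1 / (s - a), s, t)             # expect e^{at}
f2 = sp.inverse_laplace_transform(sp.factorial(3) / s**4, s, t)  # expect t^3
```

(Declaring t positive lets SymPy drop the Heaviside factors it would otherwise attach for t < 0.)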

Example: Use Laplace transforms to solve the signalling problem, i.e., the wave equation

∂²u/∂t² = c² ∂²u/∂x² on x > 0, t > 0,

with BC u(0, t) = f(t) and ICs u(x, 0) = ∂u/∂t(x, 0) = 0. Now add an inhomogeneity (i.e., a forcing term) to find the dynamics of a string falling under its own weight:

∂²u/∂t² = c² ∂²u/∂x² − g on x > 0, t > 0,

with BCs u(0, t) = lim_{x→∞} ∂u/∂x = 0 and ICs u(x, 0) = ∂u/∂t(x, 0) = 0.

In engineering applications, the Laplace transform is most useful for characterizing the impact on a system of different forcing terms “turning on” at different times. Consider the following ODE example:

Example: Find the solution to

y′′ − 7y′ + 6y = et + δ(t− 2) + δ(t− 4), y(0) = 0, y′(0) = 0.

Compare to the solution using Green’s functions.
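One way to sketch the Green's-function comparison symbolically (assuming SymPy is available): the characteristic roots of y″ − 7y′ + 6y are r = 1 and r = 6, so the impulse response with zero initial conditions is g(t) = (e^{6t} − e^{t})/5; the smooth forcing e^t enters through the convolution theorem, and each delta contributes a shifted copy of g:

```python
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
# Impulse response (Green's function) of y'' - 7y' + 6y with zero ICs,
# built from the characteristic roots r = 1 and r = 6:
g = (sp.exp(6 * t) - sp.exp(t)) / 5
# Response to the smooth forcing e^t, via the convolution theorem:
y_c = sp.simplify(sp.integrate(g.subs(t, t - tau) * sp.exp(tau), (tau, 0, t)))
# Each delta forcing contributes a shifted copy of g, switched on by a
# Heaviside factor: y(t) = y_c(t) + g(t-2)H(t-2) + g(t-4)H(t-4).
```

One can check that y_c satisfies the ODE with forcing e^t and both zero initial conditions, matching the Laplace-transform solution between the impulse times.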

2.10 Green’s functions for PDEs

One of the most convenient features of solving inhomogeneous differential equations with transform techniques is the ability to express the answer as a convolution. This is reminiscent of solving inhomogeneous ODEs using Green’s functions, and for good reason, since we can apply the same method to the solution of inhomogeneous PDEs.

Suppose, for example, we wish to solve Poisson’s equation on some bounded domain with Dirichlet BCs, i.e.,

∇²u = f(x) in D

with Dirichlet boundary data given on ∂D. It would be highly desirable to express the solution as a simple expression involving the forcing and the boundary data.

Definition: The Dirac delta in 2d (i.e., for x, ξ ∈ ℝ²) is the “function” δ(x − ξ) satisfying

δ(x − ξ) = 0 for x ≠ ξ and ∫∫_{ℝ²} f(x) δ(x − ξ) dA = f(ξ)

for all continuous functions f. Note that in rectilinear coordinates in 2d, writing ξ = (x₀, y₀), we have δ(x − ξ) = δ(x − x₀)δ(y − y₀).

Definition: The Green’s function for Poisson’s equation (i.e., for the Laplacian operator)in 2-d is the function G(x, ξ), with x, ξ ∈ R2, satisfying

∇2G = δ(x− ξ) in D

and satisfying BC G(x, ξ) = 0 on boundary ∂D.

Theorem: The boundary value problem given by

∇2u(x) = f(x) in D

with u = h(x) on the boundary ∂D has the solution

u(x) = ∫∫_D f(ξ) G(x, ξ) dA_ξ + ∮_{∂D} h(ξ) ∇_ξG(x, ξ) · n ds_ξ.

Proof: Recall Green’s formula in 2d:

∫∫_D (u∇²G − G∇²u) dA = ∮_{∂D} (u∇G − G∇u) · n ds.

Example: Solve Poisson’s equation on the rectangle, i.e.,

∇2u = f(x, y) on 0 < x < L, 0 < y < H,

first with homogeneous Dirichlet boundary data, then with inhomogeneous Dirichlet boundary data. Use eigenfunction expansions either in 1d or 2d to find the Green’s function.
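A numerical sketch of the eigenfunction-expansion idea (assuming NumPy and SciPy are available; the grid size and the manufactured test case are arbitrary choices): the double sine modes sin(nπx/L) sin(mπy/H) diagonalize the Laplacian, so each coefficient of u is the corresponding coefficient of f divided by the eigenvalue −(nπ/L)² − (mπ/H)²:

```python
import numpy as np
from scipy.fft import dstn, idstn

# Solve the Poisson problem with homogeneous Dirichlet BCs by a double
# sine (eigenfunction) expansion on the interior grid points.
L, H, N = 1.0, 1.0, 64
x = np.linspace(0, L, N + 2)[1:-1]          # interior points only
y = np.linspace(0, H, N + 2)[1:-1]
X, Y = np.meshgrid(x, y, indexing="ij")

# Manufactured solution: u = sin(pi x) sin(2 pi y) gives
# f = Laplacian(u) = -(pi^2 + 4 pi^2) u.
u_exact = np.sin(np.pi * X) * np.sin(2 * np.pi * Y)
f = -(np.pi**2 + (2 * np.pi)**2) * u_exact

fhat = dstn(f, type=1)                      # sine-series coefficients of f
n = np.arange(1, N + 1)
lam = -(n[:, None] * np.pi / L)**2 - (n[None, :] * np.pi / H)**2
u = idstn(fhat / lam, type=1)               # divide mode-by-mode, invert
err = np.abs(u - u_exact).max()
```

Because the test solution is a single eigenmode, the spectral solve recovers it essentially exactly; the same division-by-eigenvalue structure is what the 2d eigenfunction expansion of the Green's function encodes.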

2.10.1 Free-space Green’s functions and the Method of Images

We have thus far considered Green’s functions designed (i.e., solved) for specific partial differential operators with specific boundary conditions on specific geometries. A widely used alternative to the above approach is to identify free-space Green’s functions that are defined over all space (e.g., ℝ³) with simple decay conditions as ‖x‖ → ∞. Boundary conditions on bounded geometric regions are then enforced by strategically positioning image points in the region outside the domain of interest.

Definition: The free-space Green’s function for the Laplacian operator in Rn satisfies

∇2G = δ(x− ξ)

with no boundary condition.

Example: Consider n = 2. Find the free-space Green’s function for the Laplacian operatoron the plane. Start by finding G(x,0) and using the divergence theorem to solve for theconstant.

Example: Consider n = 3. Find the free-space Green’s function for the Laplacian in 3-dimensional space. (Follow the same procedure as above.)
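For reference, the standard results of these two calculations are G = (1/2π) ln‖x‖ for n = 2 and G = −1/(4π‖x‖) for n = 3. A quick symbolic check (a sketch assuming SymPy is available) that both are harmonic away from the origin and carry the unit flux demanded by the divergence-theorem argument:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.symbols('r', positive=True)

# Standard free-space Green's functions for the Laplacian:
G2 = sp.log(sp.sqrt(x**2 + y**2)) / (2 * sp.pi)       # n = 2
G3 = -1 / (4 * sp.pi * sp.sqrt(x**2 + y**2 + z**2))   # n = 3

# Both should be harmonic away from the origin:
lap2 = sp.diff(G2, x, 2) + sp.diff(G2, y, 2)
lap3 = sp.diff(G3, x, 2) + sp.diff(G3, y, 2) + sp.diff(G3, z, 2)

# Flux of grad G through a circle (resp. sphere) of radius r should be
# the delta's unit mass, fixing the constants:
flux2 = sp.diff(sp.log(r) / (2 * sp.pi), r) * 2 * sp.pi * r
flux3 = sp.diff(-1 / (4 * sp.pi * r), r) * 4 * sp.pi * r**2
```

Both Laplacians simplify to zero and both fluxes simplify to 1, confirming the normalizing constants.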

Theorem: Let u satisfy Poisson’s equation in Rn, i.e.,

∇2u = f(x)

and G(x, ξ) be the corresponding free-space Green’s function. Then

u(x) = ∫∫ f(ξ) G(x, ξ) dA_ξ if n = 2, and

u(x) = ∫∫∫ f(ξ) G(x, ξ) dV_ξ if n = 3.

Interestingly, the real power of the free-space Green’s function lies in its ability to implement boundary conditions through image points.

Example: Consider Poisson’s equation on the semi-infinite domain y > 0 with BC u(x, 0) = h(x). (See pp. 320–321 in Haberman.)

Example: Consider Poisson’s equation on the rectangular box 0 < x < L, 0 < y < H with Dirichlet BCs. Set up the image points necessary to find the Green’s function (but don’t solve).

Example: Find the Green’s function for the Laplacian with Dirichlet BCs on a circular disk of radius a. Show that

G(x, ξ) = (1/4π) ln[ a²(r² + r₀² − 2rr₀ cos φ) / (r²r₀² + a⁴ − 2rr₀a² cos φ) ],

where r = |x|, r₀ = |ξ|, and φ is the angle between x and ξ.
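Two properties of this formula are easy to verify symbolically (a sketch assuming SymPy is available): G vanishes on the boundary r = a, and G is symmetric under exchanging the source and observation points:

```python
import sympy as sp

r, r0, a, phi = sp.symbols('r r0 a phi', positive=True)
G = sp.log(a**2 * (r**2 + r0**2 - 2 * r * r0 * sp.cos(phi))
           / (r**2 * r0**2 + a**4 - 2 * r * r0 * a**2 * sp.cos(phi))) / (4 * sp.pi)

# On the boundary r = a the log's argument collapses to 1, so G = 0:
G_boundary = sp.simplify(G.subs(r, a))

# Symmetry G(x, xi) = G(xi, x): swapping r and r0 leaves G unchanged.
G_swapped = G.subs({r: r0, r0: r}, simultaneous=True)
```

The boundary check is exactly the Dirichlet condition the image construction is designed to enforce.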
