lecture 19: root-finding and minimization

STAT 545: Intro. to Computational Statistics
Vinayak Rao, Purdue University
November 13, 2019


Root-finding in one-dimension

Given some nonlinear function f : R → R, solve

f(x) = 0

Invariably need iterative methods.

Assume f is continuous (else things are really messy).

The more we know about f (e.g. gradients), the better we can do.

Better: faster (asymptotic) convergence.


Root bracketing

f(a) and f(b) have opposite signs → a root lies in (a, b).

a and b bracket the root.

Finding an initial bracketing can be non-trivial. Typically, start with an initial interval and expand or contract it.

Below, we assume we have an initial bracketing.

Not always possible, e.g. f(x) = (x − a)² (in general, multiple roots/nearby roots lead to trouble).
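As an aside (not from the slides), the "expand an initial interval" idea can be sketched in a few lines of Python; the function name, expansion factor, and iteration cap below are arbitrary choices.

```python
# A minimal sketch of outward bracket expansion, assuming a < b and f continuous.
def expand_bracket(f, a, b, factor=1.6, max_tries=50):
    fa, fb = f(a), f(b)
    for _ in range(max_tries):
        if fa * fb < 0:
            return a, b                  # (a, b) is a valid bracketing
        if abs(fa) < abs(fb):            # expand on the side that looks closer to a root
            a += factor * (a - b)
            fa = f(a)
        else:
            b += factor * (b - a)
            fb = f(b)
    raise ValueError("no sign change found; e.g. f(x) = (x - a)**2 never brackets")
```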


Bisection method

The simplest root-finding algorithm. Given an initial bracketing, it cannot fail, but it is slower than other methods.

Successively halves the bracketing interval (binary search):

• Current interval = (a, b)
• Set c = (a + b)/2
• New interval = (a, c) or (c, b), whichever is a valid bracketing
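A minimal Python sketch of this loop (names and tolerances are mine, not from the slides):

```python
# Bisection: assumes f is continuous, a < b, and f(a), f(b) have opposite signs.
def bisect(f, a, b, tol=1e-10, max_iter=200):
    fa = f(a)
    assert fa * f(b) < 0, "(a, b) must bracket a root"
    for _ in range(max_iter):
        c = 0.5 * (a + b)
        fc = f(c)
        if fc == 0 or (b - a) < tol:
            return c
        if fa * fc < 0:          # root lies in (a, c)
            b = c
        else:                    # root lies in (c, b)
            a, fa = c, fc
    return 0.5 * (a + b)

# Example: the root of x^3 - 2x - 5 in (2, 3)
root = bisect(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
```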


Bisection method (contd)

Let ϵn be the interval length at iteration n; this upper-bounds the error in the root.

ϵn+1 = 0.5 ϵn (Linear convergence)

Linear convergence:

• each iteration reduces the error by a constant factor.
• every (fixed) k iterations reduces the error by one digit.
• the error is reduced exponentially with the number of iterations.

Superlinear convergence:

|ϵn+1| = C × |ϵn|^m with m > 1 (as n → ∞)

Quadratic convergence: the number of significant figures doubles every iteration.
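As a quick numerical check (not from the slides), the order m can be estimated from three successive errors via m ≈ log(ϵn+1/ϵn) / log(ϵn/ϵn−1); the helper below is illustrative.

```python
import math

def estimate_order(errors):
    """Estimate m in |eps_{n+1}| ~ C |eps_n|^m from a list of positive errors."""
    return [math.log(e2 / e1) / math.log(e1 / e0)
            for e0, e1, e2 in zip(errors, errors[1:], errors[2:])]

# Halving errors (bisection-like) give m ~ 1; squaring errors give m ~ 2.
print(estimate_order([0.5 ** n for n in range(1, 8)]))        # all ~1.0
print(estimate_order([0.1 ** (2 ** n) for n in range(4)]))    # all ~2.0
```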


Secant method and false position

Linearly approximate f to find a new approximation to the root.

Secant method:

• always keep the newest point
• Superlinear convergence (m = 1.618, the golden ratio)

|ϵn+1| = C × |ϵn|^1.618 (as n → ∞)

• Bracketing (and thus convergence) not guaranteed.

False position:

• Can choose an old point that guarantees bracketing.
• Convergence analysis is harder.
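A minimal sketch of the secant update (illustrative names; no bracketing is maintained, so it can diverge):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                                # flat secant line: give up
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)        # root of the line through the two points
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)          # always keep the newest point
    return x1
```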


Practical root-finding

In practice, people use more sophisticated algorithms.

Most popular is Brent’s method.

Maintains a bracketing by combining the bisection method with a quadratic approximation.

Lots of book-keeping.
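In code one would typically call a library routine rather than implement this; for example, SciPy's brentq (assuming SciPy is available) takes a function and a bracketing interval:

```python
from scipy.optimize import brentq

# Brent's method on the same example as above; (2, 3) must bracket the root.
root = brentq(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
```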


Newton’s method (a.k.a. Newton-Raphson)

At any point, uses both the function value and the derivative to form a linear approximation.

Taylor expansion: f(x + δ) = f(x) + δ f′(x) + (δ²/2) f′′(x) + · · ·

Assume second- and higher-order terms are negligible. Given xi, choose xi+1 = xi + δ so that f(xi+1) = 0:

0 = f(xi) + δf′(xi)

xi+1 = xi − f(xi)/f′(xi)
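A minimal sketch of this iteration in Python (names and tolerances are mine); it assumes the derivative f′ is available and non-zero along the way:

```python
def newton(f, fprime, x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x = x - step                     # x_{i+1} = x_i - f(x_i)/f'(x_i)
        if abs(step) < tol:
            break
    return x

# Example: sqrt(2) as the root of x^2 - 2, starting from 1.5
root = newton(lambda x: x**2 - 2, lambda x: 2*x, 1.5)
```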



Convergence of Newton’s method

Letting x∗ be the root, we have

xi+1 − x∗ = xi − x∗ − f(xi)/f′(xi)
ϵi+1 = ϵi − f(xi)/f′(xi)

Also since xi = x∗ + ϵi,

f(xi) ≈ f(x∗) + ϵi f′(x∗) + (ϵi²/2) f′′(x∗)

This gives

ϵi+1 = − (f′′(xi) / (2 f′(xi))) ϵi²

Quadratic convergence (assuming f′(x) is non-zero at the root)


Pitfalls of Newton’s method

Away from the root the linear approximation can be bad.

Can give crazy results (go off to infinity, cycles etc.)

However, once we have a decent solution, it can be used to rapidly 'polish' the root.

Often used in combination with some bracketing method.
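One way such a combination might look (an illustrative sketch, not the specific scheme from the lecture): keep a bracket and fall back to bisection whenever the Newton step leaves it.

```python
def safeguarded_newton(f, fprime, a, b, tol=1e-12, max_iter=100):
    fa = f(a)
    assert fa * f(b) < 0, "(a, b) must bracket a root (a < b)"
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        dfx = fprime(x)
        x_new = x - f(x) / dfx if dfx != 0 else a    # 'a' forces the fallback below
        if not (a < x_new < b):                      # Newton step left the bracket:
            x_new = 0.5 * (a + b)                    # ... bisect instead
        f_new = f(x_new)
        if f_new == 0 or abs(x_new - x) < tol:
            return x_new
        if fa * f_new < 0:                           # shrink the bracket around the root
            b = x_new
        else:
            a, fa = x_new, f_new
        x = x_new
    return x
```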


Root-finding for systems of nonlinear equations

Now have N functions F1, F2, · · · , FN of N variables x1, x2, · · · , xN

Find (x1, · · · , xN) such that:

Fi(x1, · · · , xN) = 0 for i = 1 to N

Much harder than the 1-d case.

Much harder than optimization.


Newton’s method

Again, consider a Taylor expansion:

F(x + δx) = F(x) + J(x) · δx + O(δx²)

Here, J(x) is the Jacobian matrix at x, with Jij = ∂Fi/∂xj.

Again, Newton’s method finds δx by solving F(x+ δx) = 0

J(x) · δx = −F(x)

Solve δx = −J(x)⁻¹ · F(x) (e.g. by LU decomposition)

Iterate xnew = xold + δx until convergence.

Can wildly careen through space if not careful.
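A minimal NumPy sketch (illustrative names; F and J are user-supplied) that solves J(x)·δx = −F(x) at each step:

```python
import numpy as np

def newton_system(F, J, x, tol=1e-10, max_iter=50):
    x = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))   # J(x) dx = -F(x), solved via LU under the hood
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Example: intersect the circle x0^2 + x1^2 = 4 with the line x1 = x0
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[1] - x[0]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [-1.0, 1.0]])
x_star = newton_system(F, J, [1.0, 0.5])    # converges to (sqrt(2), sqrt(2))
```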


Global methods via optimization

Recall, we want to solve F(x) = 0 (Fi(x) = 0, i = 1 · · ·N).

Minimize f(x) = (1/2) Σ_{i=1}^N |Fi(x)|² = (1/2) |F(x)|² = (1/2) F(x) · F(x).

Note: It is NOT sufficient to find a local minimum of f.


Global methods via optimization (contd)

We move along δx rather than along the gradient ∇f(x) = J(x)ᵀF(x).

This keeps our global objective in sight.


Note: ∇f · δx = (J(x)ᵀF(x)) · (−J(x)⁻¹F(x)) = −F(x)ᵀF(x) < 0

Thus δx is a descent direction for f (a direction of decrease)


Global and local minimum

Find the minimum of some function f : R^D → R (maximization is just minimizing −f).

No global information (e.g. we have only function evaluations and derivatives).

[Figure: a function with both a global minimum and a local minimum.]

Finding a global minimum is hard! Usually we settle for finding a local minimum (like the EM algorithm).

Conceptually (deceptively?) simpler than EM.


Gradient descent (iterative method)

Let xold be our current value.

Update: xnew = xold − η (df/dx)|x=xold

The steeper the slope, the bigger the move.

η: sometimes called the 'learning rate' (from the neural network literature)

Choosing η is a dark art:

Better methods adapt step-size according to the curvature of f.
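A minimal sketch of fixed-step gradient descent in one dimension (η and the stopping rule below are arbitrary choices):

```python
def gradient_descent(grad, x, eta=0.1, tol=1e-8, max_iter=10000):
    for _ in range(max_iter):
        step = eta * grad(x)
        x = x - step                 # x_new = x_old - eta * df/dx |_{x_old}
        if abs(step) < tol:
            break
    return x

# Example: minimize (x - 3)^2, whose derivative is 2(x - 3); converges to 3
x_min = gradient_descent(lambda x: 2 * (x - 3), x=0.0, eta=0.1)
```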


Gradient descent in higher-dimensions

Steepest descent applies in higher dimensions too:

xnew = xold − η ∇f|xold

At each step, solve a 1-d problem along the gradient

Now, even the optimal step-size η can be inefficient:


Wolfe conditions

Rather than finding the best step-size at each step, find a decent one.

Two failure modes to avoid: big steps with little decrease, and small steps getting us nowhere.

Average decrease is at least some fraction of the initial rate:

f(x + λδx) ≤ f(x) + c1 λ (∇f · δx), with c1 ∈ (0, 1), e.g. c1 = 0.1

Final rate is greater than some fraction of the initial rate:

∇f(x + λδx) · δx ≥ c2 (∇f(x) · δx), with c2 ∈ (0, 1), e.g. c2 = 0.9



Wolfe conditions

A simple way to satisfy Wolfe conditions:

Set δx = −∇f, c1 = c2 = .5

Start with λ = 1, and while condition i is not satisfied, set λ ← βi λ (with β1 ∈ (0, 1), β2 > 1, and β1 β2 < 1).
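A sketch of this scheme in Python (illustrative names; x and gradients are NumPy vectors, and β1 = 0.5, β2 = 1.5 satisfy β1β2 < 1):

```python
import numpy as np

def wolfe_step(f, grad, x, c1=0.5, c2=0.5, beta1=0.5, beta2=1.5, max_iter=100):
    g = grad(x)
    dx = -g                                   # steepest-descent direction
    slope0 = g @ dx                           # initial rate of decrease (negative)
    lam = 1.0
    for _ in range(max_iter):
        x_new = x + lam * dx
        if f(x_new) > f(x) + c1 * lam * slope0:     # condition 1 fails: step too big
            lam *= beta1
        elif grad(x_new) @ dx < c2 * slope0:        # condition 2 fails: step too small
            lam *= beta2
        else:
            break
    return x + lam * dx

# Example: one step on the quadratic f(x) = 0.5 x' A x
A = np.array([[3.0, 0.0], [0.0, 1.0]])
x_next = wolfe_step(lambda x: 0.5 * x @ A @ x, lambda x: A @ x, np.array([1.0, 1.0]))
```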


Conjugate gradient descent

Consider minimizing f(x) = (1/2) xᵀAx − bᵀx:

Steepest descent can take many steps to get to the minimum.
Problem: after minimizing along a direction, the gradient is perpendicular to the previous direction (why?).
• Can 'cancel' out earlier gains

A popular algorithm is conjugate gradient descent. It sequentially updates along directions p1, · · · , pN:

xt+1 = xt + λt+1pt, where λt+1 = argminλ f(xt + λpt)

pt+1 = −∇f(xt+1) + (⟨∇f(xt+1), ∇f(xt+1)⟩ / ⟨∇f(xt), ∇f(xt)⟩) pt


If f(x) = (1/2) xᵀAx − bᵀx with x ∈ R^d, CG takes at most d steps to converge.

Can show the directions satisfy ⟨pt+1, pt⟩_A := pt+1ᵀ A pt = 0 (this is unlike pt+1ᵀ pt = 0 for steepest descent).
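A minimal sketch of conjugate gradient for this quadratic (illustrative names), using the exact line search and the gradient-ratio update from the slide:

```python
import numpy as np

def conjugate_gradient(A, b, x, tol=1e-12):
    x = np.asarray(x, dtype=float)
    g = A @ x - b                          # gradient of 0.5 x'Ax - b'x
    p = -g                                 # first direction: steepest descent
    for _ in range(len(b)):                # at most d steps in exact arithmetic
        if np.linalg.norm(g) < tol:
            break
        lam = (g @ g) / (p @ A @ p)        # argmin_lambda f(x + lambda p)
        x = x + lam * p
        g_new = A @ x - b
        beta = (g_new @ g_new) / (g @ g)   # <grad_new, grad_new> / <grad_old, grad_old>
        p = -g_new + beta * p
        g = g_new
    return x

# Example: minimizing the quadratic is the same as solving A x = b
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_min = conjugate_gradient(A, b, np.zeros(2))
```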