Simple Iteration - Boise State University (calhoun/teaching/Math...)
Numerical Analysis
Iterative Methods
Simple Iteration

Tuesday, October 29, 13
Direct Methods
We have so far focused on solving Ax = b using direct methods:

• Gaussian Elimination
• LU Decomposition
• Variants of LU, including Crout and Doolittle
• Other decomposition methods, including QR

Advantages:

• The algorithm terminates in a fixed number of steps
• Can take advantage of sparsity (to some extent)
Direct Methods
Possible disadvantages?
• We have to store the matrix, or at least know what the entries are
• No obvious way to get a solution that is “close enough” to the exact solution for whatever purpose we have in mind
• No way to reduce the operation count unless we have a special sparse structure.
Iterative methods - scalar case
Can we apply what we know about root-finding to the problem of finding the solution to a linear system of equations?

Newton's method, when applied to a scalar linear equation, converges in one step. With residual r = b - ax and

f(x) = b - ax

a single Newton step gives

x_1 = x_0 - (b - a x_0)/(-a) = b/a
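The one-step convergence above can be checked numerically. This is a minimal sketch; the starting guess and the values of a and b are arbitrary choices for illustration.

```python
# Newton's method on the scalar linear equation f(x) = b - a*x.
# Because f is linear, one Newton step lands exactly on the root x = b/a.
def newton_step(x0, a, b):
    f = b - a * x0    # residual f(x0)
    fprime = -a       # derivative of f(x) = b - a*x
    return x0 - f / fprime

a, b = 3.0, 5.0
x1 = newton_step(2.0, a, b)   # any starting guess works
print(x1, b / a)
```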
Iterative methods - scalar case
We can formulate a fixed point problem with the function g(x) defined as

g(x) = (1/m)(b - ax) + x = (1 - a/m)x + b/m

where we choose m so that |g'(x)| < 1 (choose m = 2a, for example).

For example, with f(x) = 5 - 3x and m = 2a = 6, this gives

g(x) = (1/2)x + 5/6
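The fixed point iteration can be run directly on the slide's example f(x) = 5 - 3x with m = 2a = 6. A minimal sketch; the tolerance and iteration cap are arbitrary choices.

```python
# Fixed point iteration x <- g(x) = (1 - a/m)*x + b/m for f(x) = b - a*x.
# With m = 2a, |g'(x)| = |1 - a/m| = 1/2 < 1, so the iteration converges
# to the root x = b/a.
def fixed_point(a, b, m, x0, tol=1e-12, max_iter=200):
    x = x0
    for k in range(max_iter):
        x_new = (1 - a / m) * x + b / m
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

a, b = 3.0, 5.0
root, iters = fixed_point(a, b, m=2 * a, x0=0.0)
print(root, iters)   # converges toward 5/3
```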
Iterative methods - matrix case
We can apply Newton's Method to the function

F(x) = b - Ax

where x, b ∈ R^n and A ∈ R^(n×n). We get

x_{k+1} = x_k - [F'(x_k)]^{-1} F(x_k) = x_k + A^{-1}(b - A x_k)

This also gives us the answer in exactly one step!
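The matrix case mirrors the scalar one: since F'(x) = -A, one Newton step from any x_0 lands on A^{-1}b. A small sketch with a hypothetical 2×2 system:

```python
import numpy as np

# Newton's method on F(x) = b - A x. Since F'(x) = -A, one Newton step is
# x1 = x0 + A^{-1}(b - A x0), which equals A^{-1} b: one step suffices.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x0 = np.zeros(2)

x1 = x0 + np.linalg.solve(A, b - A @ x0)   # one Newton step
print(x1, np.linalg.solve(A, b))
```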
Iterative methods - matrix case
Applying the fixed point iteration to F(x) = b - Ax, we get

g(x) = x + M^{-1}(b - Ax)

where now M ∈ R^(n×n). The fixed point iteration then looks like

x_{k+1} = x_k + M^{-1}(b - A x_k)

The choice of M will affect the convergence of the scheme.
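The iteration above can be sketched with one easily inverted choice of M, the diagonal of A (the Jacobi choice discussed below). The matrix and iteration count here are hypothetical:

```python
import numpy as np

# Fixed point iteration x_{k+1} = x_k + M^{-1}(b - A x_k) with M = diag(A).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M = np.diag(np.diag(A))

x = np.zeros(2)
for k in range(50):
    x = x + np.linalg.solve(M, b - A @ x)

print(x)   # approaches A^{-1} b
```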
Comparison
Newton step: x_{k+1} = x_k + A^{-1}(b - A x_k)

Fixed point step: x_{k+1} = x_k + M^{-1}(b - A x_k)

The Newton step is no different from using a direct method, since we have to invert A at each step.

The fixed point approach is promising, if we can understand how different choices for M affect convergence of the scheme.
A simple iteration
9
x_{k+1} = x_k + M^{-1}(b - A x_k)

Some classic schemes. Note that for each scheme, M approximates A.

• M = D: the Jacobi iteration
• M = L: the Gauss-Seidel iteration
• M = ω^{-1}D - L: the Successive Over-Relaxation (SOR) method

where D is the diagonal of A and L is the lower triangular portion of A (including the diagonal).

The matrix M for each of these choices is easy to invert!
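The Jacobi and Gauss-Seidel choices of M can be compared in a few lines. A sketch on a hypothetical diagonally dominant system, so both iterations converge:

```python
import numpy as np

# Jacobi (M = diagonal of A) vs Gauss-Seidel (M = lower triangle of A,
# including the diagonal), both run as x_{k+1} = x_k + M^{-1}(b - A x_k).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
exact = np.linalg.solve(A, b)

def iterate(M, n_steps=60):
    x = np.zeros_like(b)
    for _ in range(n_steps):
        x = x + np.linalg.solve(M, b - A @ x)
    return x

x_jacobi = iterate(np.diag(np.diag(A)))   # M = D
x_gs = iterate(np.tril(A))                # M = L (with diagonal)
print(np.allclose(x_jacobi, exact), np.allclose(x_gs, exact))
```

Note that "inverting" M here means one diagonal or triangular solve per step, which is the whole point of these choices.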
A simple iteration scheme
For the scalar case, we have that

g(x) = (1/m)(b - ax) + x = (1 - a/m)x + b/m

which will converge to a root of f(x) = b - ax if

|g'(x)| = |1 - a/m| < 1

and the closer m is to a, the faster the convergence. In fact, if m = a, we get convergence in one step.
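The effect of m on the convergence rate can be seen by counting iterations for a few choices. A sketch; the specific values of m are arbitrary:

```python
# Convergence of x <- (1 - a/m)*x + b/m is governed by |1 - a/m|:
# the closer m is to a, the faster the iteration; m = a converges in one step.
def count_iters(a, b, m, x0=0.0, tol=1e-10, max_iter=10_000):
    x = x0
    for k in range(1, max_iter + 1):
        x = (1 - a / m) * x + b / m
        if abs(x - b / a) < tol:
            return k
    return max_iter

a, b = 3.0, 5.0
one = count_iters(a, b, m=3.0)    # m = a: exact in one step
fast = count_iters(a, b, m=3.5)   # |1 - a/m| is small
slow = count_iters(a, b, m=6.0)   # |1 - a/m| = 0.5
print(one, fast, slow)
```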
A simple iteration scheme
Do we also need a condition like |g'(x)| < 1?

In the matrix case, we have

g(x) = x + M^{-1}(b - Ax) = (I - M^{-1}A)x + M^{-1}b

We also want M to be as close to A as is feasible, while still being easily invertible. If M = A, we converge in one step.
Convergence?
Let A be an n × n matrix. The spectral radius of a matrix is defined as

ρ(A) = max { |λ| : λ is an eigenvalue of A }

The following are equivalent:

• ρ(A) < 1
• A^k → 0 as k → ∞, and
• A^k x → 0 as k → ∞ for any vector x.

If one of the above conditions is true, they are all true.
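The equivalence can be illustrated numerically for an iteration matrix B = I - M^{-1}A with the Jacobi choice of M. The 2×2 system is a hypothetical example:

```python
import numpy as np

# If rho(B) < 1, then B^k -> 0 as k grows.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
M = np.diag(np.diag(A))
B = np.eye(2) - np.linalg.solve(M, A)   # iteration matrix I - M^{-1}A

rho = max(abs(np.linalg.eigvals(B)))
Bk = np.linalg.matrix_power(B, 60)      # high power of B
print(rho, np.linalg.norm(Bk))
```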
Convergence
Let e_k = A^{-1}b - x_k. Then

e_k = (I - M^{-1}A) e_{k-1} = ... = (I - M^{-1}A)^k e_0

Then

||e_k|| ≤ ||(I - M^{-1}A)^k|| ||e_0||

And the iteration converges for every initial guess if and only if

ρ(I - M^{-1}A) < 1
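The error recursion e_k = (I - M^{-1}A)^k e_0 can be verified step by step on a small hypothetical system with the Jacobi choice of M:

```python
import numpy as np

# Check e_k = (I - M^{-1}A)^k e_0 against the actual iterates.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M = np.diag(np.diag(A))
B = np.eye(2) - np.linalg.solve(M, A)   # iteration matrix I - M^{-1}A

x_exact = np.linalg.solve(A, b)
x = np.zeros(2)
e0 = x_exact - x
for k in range(1, 6):
    x = x + np.linalg.solve(M, b - A @ x)
    predicted = np.linalg.matrix_power(B, k) @ e0
    assert np.allclose(x_exact - x, predicted)
print("error recursion verified")
```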
Preconditioned system
In a general splitting method, we write

A = P - Q

This leads to the iterative scheme

P x_{k+1} = Q x_k + b

or

x_{k+1} = P^{-1} Q x_k + P^{-1} b
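The splitting form and the fixed point form are the same iteration. A sketch checking this with the Jacobi splitting P = D, Q = D - A on a hypothetical system:

```python
import numpy as np

# With P = diag(A) and Q = P - A, the splitting update
# P x_{k+1} = Q x_k + b matches x_{k+1} = x_k + P^{-1}(b - A x_k).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
P = np.diag(np.diag(A))
Q = P - A

x_split = np.zeros(2)
x_fixed = np.zeros(2)
for _ in range(30):
    x_split = np.linalg.solve(P, Q @ x_split + b)            # splitting form
    x_fixed = x_fixed + np.linalg.solve(P, b - A @ x_fixed)  # fixed point form

print(np.allclose(x_split, x_fixed))
```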
A simple iteration
Our method

x_{k+1} = (I - M^{-1}A) x_k + M^{-1} b

applied to the preconditioned system

M^{-1}A x = M^{-1} b

can be viewed as a simple splitting method with splitting

M^{-1}A = I - (I - M^{-1}A),   i.e. P = I and Q = I - M^{-1}A

where M is a preconditioner.
Fixed point iteration
In the scalar case, we could have chosen m in the update term (1/m)(b - ax) directly, to ensure convergence of the fixed point method. This led to the requirement that

|1 - a/m| < 1

Of course, there were many other ways that we could have designed the fixed point scheme.
Simple Iteration Algorithm
Given an initial guess x_0, compute r_0 = b - A x_0, and solve M z_0 = r_0.

For k = 1, 2, ...:
  Set x_k = x_{k-1} + z_{k-1}.
  Compute r_k = b - A x_k.
  Solve M z_k = r_k.
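The algorithm above translates directly into code. A sketch with the Jacobi choice M = diag(A) on a hypothetical system; the step count is arbitrary:

```python
import numpy as np

# Simple iteration: r0 = b - A x0, solve M z0 = r0, then repeatedly
# x_k = x_{k-1} + z_{k-1},  r_k = b - A x_k,  solve M z_k = r_k.
def simple_iteration(A, b, M, x0, n_steps=60):
    x = x0.copy()
    r = b - A @ x                  # r0 = b - A x0
    z = np.linalg.solve(M, r)      # solve M z0 = r0
    for _ in range(n_steps):
        x = x + z                  # x_k = x_{k-1} + z_{k-1}
        r = b - A @ x              # r_k = b - A x_k
        z = np.linalg.solve(M, r)  # solve M z_k = r_k
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = simple_iteration(A, b, np.diag(np.diag(A)), np.zeros(2))
print(x)
```

In practice the loop would stop when ||r_k|| falls below a tolerance rather than after a fixed number of steps.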