Solution to linear equations

Page 1: Solution to linear equations

Solution of System of Linear Equations &

Eigen Values and Eigen Vectors

P. Sam Johnson

March 30, 2015

Page 2: Solution to linear equations

Overview

Linear systems

Ax = b    (1)

occur widely in applied mathematics.

They occur as direct formulations of “real world” problems; but more often, they occur as a part of numerical analysis.

There are two general categories of numerical methods for solving (1).

Direct methods are methods with a finite number of steps; and they end with the exact solution provided that all arithmetic operations are exact.

Iterative methods are used in solving all types of linear systems, but they are most commonly used for large systems.

Page 3: Solution to linear equations

Overview

For large systems, iterative methods are faster than the direct methods. Even the round-off errors in iterative methods are smaller.

In fact, an iterative method is a self-correcting process: any error made at any stage of the computation gets automatically corrected in the subsequent steps.

We start with a simple example of a linear system, and we discuss the consistency of systems of linear equations using the concept of rank.

Two iterative methods, the Gauss-Jacobi and Gauss-Seidel methods, are discussed for solving a given linear system numerically.

Page 4: Solution to linear equations

Overview

Eigen values are a special set of scalars associated with a linear system of equations. The determination of the eigen values and eigen vectors of a system is extremely important in physics and engineering, where it is equivalent to matrix diagonalization and arises in such common applications as stability analysis, the physics of rotating bodies, and small oscillations of vibrating systems, to name only a few.

Each eigen value is paired with a corresponding vector, called an eigen vector.

We discuss properties of eigen values and eigen vectors in the lecture. Also we give a method, called Rayleigh's power method, to find the largest eigen value in magnitude (also called the dominant eigen value). The special advantage of the power method is that the eigen vector corresponding to the dominant eigen value is calculated at the same time.

Page 5: Solution to linear equations

An Example

A shopkeeper offers two standard packets because he is convinced that north Indians eat more wheat than rice and south Indians eat more rice than wheat. Packet one P1: 5 kg wheat and 2 kg rice; packet two P2: 2 kg wheat and 5 kg rice. Notation. (m, n): m kg wheat and n kg rice.

Suppose I need 19 kg of wheat and 16 kg of rice. Then I need to buy x packets of P1 and y packets of P2 so that x(5, 2) + y(2, 5) = (19, 16).

Hence I have to solve the following system of linear equations

5x + 2y = 19

2x + 5y = 16.

Suppose I need 34 kg of wheat and 1 kg of rice. Then I must buy 8 packets of P1 and −3 packets of P2. What does this mean? I buy 8 packets of P1, and from these I make three packets of P2 and give them back to the shopkeeper.
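These little systems can be checked numerically. Below is a minimal sketch using NumPy (my choice of tool; the variable names are illustrative, not from the lecture):

import numpy as np

# Columns of A are the packets P1 = (5, 2) and P2 = (2, 5).
A = np.array([[5.0, 2.0],
              [2.0, 5.0]])

# 19 kg wheat, 16 kg rice: 3 packets of P1 and 2 packets of P2.
print(np.linalg.solve(A, np.array([19.0, 16.0])))  # [3. 2.]

# 34 kg wheat, 1 kg rice: the second component is negative.
print(np.linalg.solve(A, np.array([34.0, 1.0])))   # [ 8. -3.]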

Page 6: Solution to linear equations

Consistency of System of Linear Equations – Graphically (1 Equation, 2 Variables)

The solution set of the equation

ax + by = c

is the set of all points satisfying the equation; it forms a straight line in the plane through the point (0, c/b) (when b ≠ 0) with slope −a/b.

Two lines

parallel – no solution

intersect – unique solution

same – infinitely many solutions.

Page 7: Solution to linear equations

Consistency of System of Linear Equations

Consider the system of m linear equations in n unknowns

a11x1 + a12x2 + · · ·+ a1nxn = b1

a21x1 + a22x2 + · · ·+ a2nxn = b2

⋮

am1x1 + am2x2 + · · ·+ amnxn = bm.

Its matrix form is

[ a11 a12 . . . a1n ] [ x1 ]   [ b1 ]
[ a21 a22 . . . a2n ] [ x2 ] = [ b2 ]
[  ⋮    ⋮         ⋮ ] [  ⋮ ]   [  ⋮ ]
[ am1 am2 . . . amn ] [ xn ]   [ bm ].

That is, Ax = b, where A is called the coefficient matrix.

Page 8: Solution to linear equations

To determine whether these equations are consistent or not, we find the ranks of the coefficient matrix A and the augmented matrix

K =

[ a11 a12 . . . a1n   b1 ]
[ a21 a22 . . . a2n   b2 ]
[  ⋮    ⋮         ⋮    ⋮ ]
[ am1 am2 . . . amn   bm ]

= [A b].

We denote the rank of A by r(A).

1. If r(A) ≠ r(K), then the linear system Ax = b is inconsistent and has no solution.

2. If r(A) = r(K) = n, then the linear system Ax = b is consistent and has a unique solution.

3. If r(A) = r(K) < n, then the linear system Ax = b is consistent and has an infinite number of solutions.
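These three cases translate directly into a short computational test. A minimal sketch using NumPy's matrix_rank (the function name classify_system is my own, not from the lecture):

import numpy as np

def classify_system(A, b):
    # K = [A b] is the augmented matrix.
    K = np.hstack([A, b.reshape(-1, 1)])
    rA = np.linalg.matrix_rank(A)
    rK = np.linalg.matrix_rank(K)
    n = A.shape[1]                      # number of unknowns
    if rA != rK:
        return "inconsistent: no solution"
    return "unique solution" if rA == n else "infinitely many solutions"

A = np.array([[5.0, 2.0], [2.0, 5.0]])
print(classify_system(A, np.array([19.0, 16.0])))  # unique solution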

Page 9: Solution to linear equations

Solution of Linear Simultaneous Equations

Simultaneous linear equations occur quite often in engineering and science. The analysis of electronic circuits consisting of invariant elements, analysis of a network under sinusoidal steady-state conditions, determination of the output of a chemical plant, and finding the cost of chemical reactions are some of the problems which depend on the solution of simultaneous linear algebraic equations. The solution of such equations can be obtained by direct or iterative methods (successive approximation methods).

Some direct methods are as follows:

1. Method of determinants – Cramer’s rule

2. Matrix inversion method

3. Gauss elimination method

4. Gauss-Jordan method

5. Factorization (triangularization) method.

Page 10: Solution to linear equations

Iterative Methods

Direct methods yield the solution after a certain amount of computation.

On the other hand, an iterative method is one in which we start from an approximation to the true solution and obtain better and better approximations from a computation cycle repeated as often as necessary to achieve a desired accuracy.

Thus in an iterative method, the amount of computation depends on the degree of accuracy required.

For large systems, iterative methods are faster than the direct methods. Even the round-off errors in iterative methods are smaller. In fact, iteration is a self-correcting process and any error made at any stage of computation gets automatically corrected in the subsequent steps.

Page 11: Solution to linear equations

Diagonally Dominant

Definition

An n × n matrix A is said to be diagonally dominant if, in each row, the absolute value of the leading diagonal element is greater than or equal to the sum of the absolute values of the remaining elements in that row.

The matrix

A =
[ 10  −5  −2 ]
[  4 −10   3 ]
[  1   6  10 ]

is diagonally dominant, and the matrix

A =
[ 2  3 −1 ]
[ 5  8 −4 ]
[ 1  1  1 ]

is not a diagonally dominant matrix.
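The definition is easy to check mechanically. A minimal sketch in Python (the function name and the use of NumPy are mine):

import numpy as np

def is_diagonally_dominant(A):
    # In each row, |a_ii| must be >= the sum of |a_ij| over j != i.
    A = np.abs(np.asarray(A, dtype=float))
    diag = A.diagonal()
    off_diag_sums = A.sum(axis=1) - diag
    return bool(np.all(diag >= off_diag_sums))

print(is_diagonally_dominant([[10, -5, -2], [4, -10, 3], [1, 6, 10]]))  # True
print(is_diagonally_dominant([[2, 3, -1], [5, 8, -4], [1, 1, 1]]))      # False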

Page 12: Solution to linear equations

Diagonal System

Definition

In the system of simultaneous linear equations in n unknowns AX = B, if A is diagonally dominant, then the system is said to be a diagonal system.

Thus the system of linear equations

a1x + b1y + c1z = d1

a2x + b2y + c2z = d2

a3x + b3y + c3z = d3

is a diagonal system if

|a1| ≥ |b1| + |c1|
|b2| ≥ |a2| + |c2|
|c3| ≥ |a3| + |b3|.

Page 13: Solution to linear equations

The process of iteration in solving AX = B will converge quickly if the coefficient matrix A is diagonally dominant.

If the coefficient matrix is not diagonally dominant, we must rearrange the equations in such a way that the resulting coefficient matrix becomes diagonally dominant, if possible, before we apply the iteration method.

Exercises

1. Is the system of equations diagonally dominant? If not, make it diagonally dominant.

3x + 9y − 2z = 10

4x + 2y + 13z = 19

4x − 2y + z = 3.

2. Check whether the system of equations

x + 6y − 2z = 5

4x + y + z = 6

−3x + y + 7z = 5

is a diagonal system. If not, make it a diagonal system.

Page 14: Solution to linear equations

Jacobi Iteration Method (also known as Gauss-Jacobi)

Simple iterative methods can be designed for systems in which the coefficients of the leading diagonal are large as compared to the others. We now describe three such methods.

Gauss-Jacobi Iteration Method

Consider the system of linear equations

a1x + b1y + c1z = d1

a2x + b2y + c2z = d2

a3x + b3y + c3z = d3.

If a1, b2, c3 are large as compared to the other coefficients (if not, rearrange the given linear equations), solve for x, y, z respectively. That is, the coefficient matrix is diagonally dominant.

Page 15: Solution to linear equations

Then the system can be written as

x = (d1 − b1y − c1z)/a1
y = (d2 − a2x − c2z)/b2        (2)
z = (d3 − a3x − b3y)/c3.

Let us start with the initial approximations x0, y0, z0 for the values of x, y, z respectively. In the absence of any better estimates for x0, y0, z0, these may each be taken as zero.

Substituting these on the right sides of (2), we get the first approximations.

This process is repeated till the difference between two consecutive approximations is negligible.
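A minimal sketch of the Gauss-Jacobi iteration for a general diagonally dominant system (the tolerance, iteration cap and function name are my own choices, not part of the lecture):

import numpy as np

def gauss_jacobi(A, b, tol=1e-8, max_iter=100):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b)                 # initial approximations taken as zero
    D = A.diagonal()
    for _ in range(max_iter):
        # Every component is updated from the PREVIOUS iterate, as in (2).
        x_new = (b - (A @ x - D * x)) / D
        if np.max(np.abs(x_new - x)) < tol:   # consecutive iterates agree
            return x_new
        x = x_new
    return x

# The system of Exercise 3 below, which is diagonally dominant as given:
print(gauss_jacobi([[20, 1, -2], [3, 20, -1], [2, -3, 20]], [17, -18, 25]))
# converges to approximately [1, -1, 1]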

Page 16: Solution to linear equations

Exercises

3. Solve, by Jacobi's iterative method, the equations

20x + y − 2z = 17
3x + 20y − z = −18
2x − 3y + 20z = 25.

4. Solve, by Jacobi’s iterative method correct to 2 decimal places, the equations

10x + y − z = 11.19

x + 10y + z = 28.08

−x + y + 10z = 35.61.

5. Solve the equations, by Gauss-Jacobi iterative method

10x1 − 2x2 − x3 − x4 = 3

−2x1 + 10x2 − x3 − x4 = 15

−x1 − x2 + 10x3 − 2x4 = 27

−x1 − x2 − 2x3 + 10x4 = −9.

Page 17: Solution to linear equations

Gauss-Seidel Iteration Method

This is a modification of Jacobi’s Iterative Method.

Consider the system of linear equations

a1x + b1y + c1z = d1

a2x + b2y + c2z = d2

a3x + b3y + c3z = d3.

If a1, b2, c3 are large as compared to the other coefficients (if not, rearrange the given linear equations), solve for x, y, z respectively.

Then the system can be written as

x = (d1 − b1y − c1z)/a1
y = (d2 − a2x − c2z)/b2        (3)
z = (d3 − a3x − b3y)/c3.

Page 18: Solution to linear equations

Here also we start with the initial approximations x0, y0, z0 for x, y, z respectively (each may be taken as zero).

Substituting y = y0, z = z0 in the first of the equations (3), we get

x1 = (d1 − b1y0 − c1z0)/a1.

Then putting x = x1, z = z0 in the second of the equations (3), we get

y1 = (d2 − a2x1 − c2z0)/b2.

Next substituting x = x1, y = y1 in the third of the equations (3), we get

z1 = (d3 − a3x1 − b3y1)/c3

and so on.

Page 19: Solution to linear equations

That is, as soon as a new approximation for an unknown is found, it is immediately used in the next step.

This process of iteration is repeated till the values of x, y, z are obtained to the desired degree of accuracy.

Since the most recent approximations of the unknowns are used while proceeding to the next step, the convergence in the Gauss-Seidel method is roughly twice as fast as in the Gauss-Jacobi method.
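A corresponding sketch of the Gauss-Seidel sweep, under the same assumptions as the Jacobi sketch earlier; the only change is that each new component is used immediately:

import numpy as np

def gauss_seidel(A, b, tol=1e-8, max_iter=100):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Row i is solved for x[i] using the most recent values of the others.
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

print(gauss_seidel([[20, 1, -2], [3, 20, -1], [2, -3, 20]], [17, -18, 25]))
# reaches roughly [1, -1, 1] in fewer sweeps than the Jacobi iteration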

Page 20: Solution to linear equations

Convergence Criteria

The condition for convergence of the Gauss-Jacobi and Gauss-Seidel methods is given by the following rule:

The process of iteration will converge if, in each equation of the system, the absolute value of the largest coefficient is greater than the sum of the absolute values of all the remaining coefficients.

That is, if the coefficient matrix is diagonally dominant, then the process of iteration will converge.

For a given system of linear equations, before finding approximations by the Gauss-Jacobi / Gauss-Seidel methods, the convergence criteria should be verified. If the coefficient matrix is not diagonally dominant, rearrange the equations so that the new coefficient matrix is diagonally dominant. Note that not every system of linear equations can be made diagonally dominant.

Page 21: Solution to linear equations

Exercises

6. Apply the Gauss-Seidel iterative method to solve the equations

20x + y − 2z = 17
3x + 20y − z = −18
2x − 3y + 20z = 25.

7. Solve the equations, by Gauss-Jacobi and Gauss-Seidel methods (and compare the values)

27x + 6y − z = 85

x + y + 54z = 110

6x + 15y + 2z = 72.

8. Apply Gauss-Seidel iterative method to solve the equations

10x1 − 2x2 − x3 − x4 = 3

−2x1 + 10x2 − x3 − x4 = 15

−x1 − x2 + 10x3 − 2x4 = 27

−x1 − x2 − 2x3 + 10x4 = −9.

Page 22: Solution to linear equations

Relaxation Method

Consider the system of linear equations

a1x + b1y + c1z = d1

a2x + b2y + c2z = d2

a3x + b3y + c3z = d3.

We define the residuals Rx, Ry, Rz by the relations

Rx = d1 − a1x − b1y − c1z

Ry = d2 − a2x − b2y − c2z (4)

Rz = d3 − a3x − b3y − c3z .

Page 23: Solution to linear equations

To start with, we assume that x = y = z = 0 and calculate the initial residuals.

Then the residuals are reduced step by step, by giving increments to the variables.

We note from the equations (4) that if x is increased by 1 (keeping y and z constant), then Rx, Ry and Rz decrease by a1, a2, a3 respectively. This is shown in the following table, along with the effects on the residuals when y and z are given unit increments.

        δRx   δRy   δRz
δx = 1  −a1   −a2   −a3
δy = 1  −b1   −b2   −b3
δz = 1  −c1   −c2   −c3

The table is the negative of the transpose of the coefficient matrix.

Page 24: Solution to linear equations

At each step, the numerically largest residual is reduced to almost zero.

How can one reduce a particular residual?

To reduce a particular residual, the value of the corresponding variable is changed.

For example, to reduce Rx by p, x should be increased by p/a1.

When all the residuals have been reduced to almost zero, the increments in x, y, z are added separately to give the desired solution.

Verification Process: If the residuals are not all negligible when the computed values of x, y, z are substituted in (4), then there is some mistake and the entire process should be rechecked.
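One way to mechanize the procedure is sketched below; instead of working from the table of unit increments, the largest residual is cancelled directly at each step (the names and the stopping rule are my own):

import numpy as np

def relaxation(A, d, tol=1e-6, max_steps=1000):
    A = np.asarray(A, dtype=float)
    d = np.asarray(d, dtype=float)
    x = np.zeros_like(d)
    for _ in range(max_steps):
        R = d - A @ x                    # the residuals of (4)
        i = int(np.argmax(np.abs(R)))    # numerically largest residual
        if abs(R[i]) < tol:
            break
        x[i] += R[i] / A[i, i]           # increment that reduces R_i to almost zero
    return x

# The system of Exercise 9 below:
x = relaxation([[9, -2, 1], [1, 5, -3], [-2, 2, 7]], [50, 18, 19])
print(x)   # verification: d - A x should now be negligible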

Page 25: Solution to linear equations

Convergence of the Relaxation Method

The relaxation method can be applied successfully only if there is at least one row in which the diagonal element of the coefficient matrix dominates the other coefficients in the corresponding row.

That is,

|a1| ≥ |b1| + |c1|
|b2| ≥ |a2| + |c2|
|c3| ≥ |a3| + |b3|

should be valid for at least one row.

Page 26: Solution to linear equations

Exercises

9. Solve by relaxation method, the equations

9x − 2y + z = 50

x + 5y − 3z = 18

−2x + 2y + 7z = 19.

10. Solve by relaxation method, the equations

10x − 2y − 3z = 205

−2x + 10y − 2z = 154

−2x − y + 10z = 120.

Page 27: Solution to linear equations

Eigen Values and Eigen Vectors

For any n × n matrix A, consider the polynomial

χA(λ) := |λI − A| =

| λ − a11   −a12      · · ·   −a1n    |
| −a21      λ − a22   · · ·   −a2n    |
|  · · ·     · · ·    · · ·    · · ·  |
| −an1      −an2      · · ·   λ − ann |      (5)

Clearly this is a monic polynomial of degree n. By the fundamental theorem of algebra, χA(λ) has exactly n (not necessarily distinct) roots.

χA(λ) – the characteristic polynomial of A
χA(λ) = 0 – the characteristic equation of A
the roots of χA(λ) – the characteristic roots of A
the distinct roots of χA(λ) – the spectrum of A

Page 28: Solution to linear equations

Few Observations on Characteristic Polynomials

The constant term of χA(λ) is (−1)^n |A|, and the coefficient of λ^(n−1) is −tr(A).

The sum of the characteristic roots of A is tr(A), and the product of the characteristic roots of A is |A|.

Since λI − A^T = (λI − A)^T, the characteristic polynomials of A and A^T are the same.

Since λI − P−1AP = P−1(λI − A)P, similar matrices have the same characteristic polynomials.

Page 29: Solution to linear equations

Eigen Values and Eigen Vectors

Definition

A complex number λ is an eigen value of A if there exists x ≠ 0 in C^n such that Ax = λx. Any such (non-zero) x is an eigen vector of A corresponding to the eigen value λ.

When we say that x is an eigen vector of A, we mean that x is an eigen vector of A corresponding to some eigen value of A.

λ is an eigen value of A iff the system (λI − A)x = 0 has a non-trivial solution.

λ is a characteristic root of A iff λI − A is singular.

Theorem

A number λ is an eigen value of A iff λ is a characteristic root of A.

Page 30: Solution to linear equations

The preceding theorem shows that eigen values are the same as characteristic roots. However, by ‘the characteristic roots of A’ we mean the n roots of the characteristic polynomial of A, whereas ‘the eigen values of A’ would mean the distinct characteristic roots of A.

Equivalent names:

Eigen values – proper values, latent roots, etc.

Eigen vectors – characteristic vectors, latent vectors, etc.

Page 31: Solution to linear equations

Properties of Eigen Values

1. The sum and the product of all eigen values of a matrix A are the sum of the principal diagonal elements and the determinant of A, respectively. Hence a matrix is not invertible if and only if it has a zero eigen value.

2. The eigen values of A and A^T (the transpose of A) are the same.

3. If λ1, λ2, . . . , λn are eigen values of A, then

(a) 1/λ1, 1/λ2, . . . , 1/λn are eigen values of A−1.
(b) kλ1, kλ2, . . . , kλn are eigen values of kA.
(c) kλ1 + m, kλ2 + m, . . . , kλn + m are eigen values of kA + mI.
(d) λ1^p, λ2^p, . . . , λn^p are eigen values of A^p, for any positive integer p.

4. The transformation of A by a non-singular matrix P to P−1AP is called a similarity transformation. Any similarity transformation applied to a matrix leaves its eigen values unchanged.

5. The eigen values of a real symmetric matrix are real.

Page 32: Solution to linear equations

Cayley-Hamilton Theorem

Theorem

For every matrix A, the characteristic polynomial of A annihilates A. That is, every matrix satisfies its own characteristic equation.

Main uses of Cayley-Hamilton theorem

1. To evaluate large powers of A.

2. To evaluate a polynomial in A with large degree even if A is singular.

3. To express A−1 as a polynomial in A when A is non-singular.
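A sketch of the third use. By the theorem, A^n + cn−1 A^(n−1) + · · · + c1 A + c0 I = 0, so A−1 = −(A^(n−1) + cn−1 A^(n−2) + · · · + c1 I)/c0 whenever c0 ≠ 0. The coefficients are obtained here with NumPy's np.poly, an eigenvalue-based routine (my choice of tool, not the lecture's):

import numpy as np

def inverse_via_cayley_hamilton(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    c = np.poly(A)              # [1, cn-1, ..., c1, c0], monic, highest power first
    # Horner evaluation of A^(n-1) + cn-1 A^(n-2) + ... + c1 I:
    P = np.eye(n)
    for k in range(1, n):
        P = P @ A + c[k] * np.eye(n)
    return -P / c[n]            # divide by the constant term c0, nonzero iff A is non-singular

A = np.array([[5.0, 4.0], [1.0, 2.0]])
print(inverse_via_cayley_hamilton(A) @ A)   # close to the 2 x 2 identity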

Page 33: Solution to linear equations

Dominant Eigen Value

We have seen that the eigen values of an n × n matrix A are obtained by solving its characteristic equation

λ^n + cn−1 λ^(n−1) + cn−2 λ^(n−2) + · · · + c0 = 0.

For large values of n, polynomial equations like this one are difficult and time-consuming to solve.

Moreover, numerical techniques for approximating roots of polynomial equations of high degree are sensitive to rounding errors. We now look at an alternative method for approximating eigen values.

As presented here, the method can be used only to find the eigen value of A that is largest in absolute value – we call this eigen value the dominant eigen value of A.

Page 34: Solution to linear equations

Dominant Eigen Vector

Dominant eigen values are of primary interest in many physical applications.

Let λ1, λ2, . . . , and λn be the eigen values of an n × n matrix A.

λ1 is called the dominant eigen value of A if

|λ1| > |λi|, i = 2, 3, . . . , n.

For example, the matrix

[ 1  0 ]
[ 0 −1 ]

has no dominant eigen value.

The eigen vectors corresponding to λ1 are called dominant eigen vectors of A.

Page 35: Solution to linear equations

Power Method

In many engineering problems, it is required to compute the numerically largest eigen value (dominant eigen value) and the corresponding eigen vector (dominant eigen vector).

In such cases, the following iterative method is quite convenient; it is also well suited for machine computations.

Let x1, x2, . . . , xn be the eigen vectors corresponding to the eigen values λ1, λ2, . . . , λn.

We assume that the eigen values of A are real and distinct with

|λ1| > |λ2| > · · · > |λn|.

Page 36: Solution to linear equations

As the vectors x1, x2, . . . , xn are linearly independent, any arbitrary column vector can be written as

x = k1x1 + k2x2 + · · ·+ knxn.

We now operate A repeatedly on the vector x.

Then

Ax = k1λ1x1 + k2λ2x2 + · · · + knλnxn.

Hence

Ax = λ1 [ k1x1 + k2(λ2/λ1)x2 + · · · + kn(λn/λ1)xn ],

and through iteration we obtain

A^r x = λ1^r [ k1x1 + k2(λ2/λ1)^r x2 + · · · + kn(λn/λ1)^r xn ]      (6)

provided λ1 ≠ 0.

Page 37: Solution to linear equations

Since λ1 is dominant, all terms inside the brackets tend to zero except the first term.

That is, for large values of r, the vector

k1x1 + k2(λ2/λ1)^r x2 + · · · + kn(λn/λ1)^r xn

will converge to k1x1, that is, to an eigen vector corresponding to λ1.

The eigen value is obtained as

λ1 = lim_{r→∞} (A^(r+1) x)p / (A^r x)p,    p = 1, 2, . . . , n,

where the index p signifies the pth component of the corresponding vector.

That is, if we take the ratio of any corresponding components of A^(r+1) x and A^r x, this ratio should therefore have the limit λ1.

Page 38: Solution to linear equations

Linear Convergence

Neglecting the terms involving λ3, . . . , λn in (6), we get

A^(r+1) x / (k1 λ1^(r+1)) − x1 = (λ2/λ1) [ A^r x / (k1 λ1^r) − x1 ],

which shows that the convergence is linear, with convergence ratio |λ2/λ1|.

Thus, the convergence is faster if this ratio is small.

Page 39: Solution to linear equations

Working Rule To Find Dominant Eigen Value

Choose x ≠ 0, an arbitrary real vector.

Generally, we choose x as [1, 1, 1]^T or [1, 0, 0]^T.

In other words, for the purpose of computation, we generally take x with 1 as the first component.

We start with a column vector x which is as near the solution as possible and evaluate Ax, which is written as

λ(1) x(1)

after normalization. This gives the first approximation λ(1) to the eigen value, and x(1) is the corresponding eigen vector.

Page 40: Solution to linear equations

Rayleigh's Power Method

Similarly, we evaluate Ax(1) = λ(2) x(2), which gives the second approximation. We repeat this process till [x(r) − x(r−1)] becomes negligible.

Then λ(r) will be the dominant (largest) eigen value and x(r) the corresponding eigen vector (dominant eigen vector).

This iterative method of finding the dominant eigen value of a square matrix is called Rayleigh's power method.

At each stage of the iteration, we take out the largest component from y(r) to find x(r), where

y(r+1) = Ax(r), r = 0, 1, 2, . . . .
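A minimal sketch of the normalize-and-iterate scheme just described (the tolerance and iteration cap are my own choices, not from the lecture):

import numpy as np

def power_method(A, x0, tol=1e-10, max_iter=500):
    A = np.asarray(A, dtype=float)
    x = np.asarray(x0, dtype=float)
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x                        # y(r+1) = A x(r)
        lam = y[np.argmax(np.abs(y))]    # take out the largest component
        y = y / lam                      # normalize so the largest entry is 1
        if np.max(np.abs(y - x)) < tol:  # x(r) and x(r+1) essentially agree
            return lam, y
        x = y
    return lam, x

# Exercise 13 below: the dominant eigen value of this matrix is 2 + sqrt(2).
A = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
print(power_method(A, [1, 0, 0]))   # eigen value near 3.4142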

Page 41: Solution to linear equations

To find the Smallest Eigen Value

To find the smallest eigen value of a given matrix A, apply the power method to the matrix A−1.

Find the dominant eigen value of A−1 and then take its reciprocal.
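A self-contained sketch mirroring the earlier power-method loop (A−1 is formed explicitly here for clarity; in practice one would solve Ay = x at each step rather than invert A):

import numpy as np

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
A_inv = np.linalg.inv(A)

x = np.array([1.0, 0.0, 0.0])
lam = 0.0
for _ in range(200):
    y = A_inv @ x
    lam = y[np.argmax(np.abs(y))]   # dominant eigen value estimate for A inverse
    x = y / lam
print(1.0 / lam)   # near 2 - sqrt(2) = 0.5858..., the smallest eigen value of A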

Exercise

11. Say true or false with justification: If λ is the dominant eigen value of A and β is the dominant eigen value of A − λI, then the smallest eigen value of A is λ + β.

Page 42: Solution to linear equations

Exercises

12. Determine the largest eigen value and the corresponding eigen vector of the matrix

[ 5 4 ]
[ 1 2 ].

13. Find the largest eigen value and the corresponding eigen vector of the matrix

[  2 −1  0 ]
[ −1  2 −1 ]
[  0 −1  2 ]

using the power method. Take [1, 0, 0]^T as an initial eigen vector.

14. Obtain, by the power method, the numerically dominant eigen value and eigen vector of the matrix

[  15 −4 −3 ]
[ −10 12 −6 ]
[ −20  4 −2 ].
