
Inverse Eigenvalue Problems

Natalya Jackson

December 19th, 2010

Abstract

This project explores the theory and applications of Inverse Eigenvalue Problems, including an algorithm for creating symmetric matrices with prescribed eigenvalues and eigenvectors.

Introduction

In this project I will explore the Inverse Eigenvalue Problems, which are mainly about finding algorithms for creating matrices which have desired eigenvalues and eigenvectors. Some applications of this important area of mathematics will be explored as well, but mostly I will focus on one simple algorithm for creating symmetric matrices and the efficiency of its computability.


Eigenvalues and Eigenvectors

According to [1], an eigenvalue is “any number such that a given square matrix minus that number times the identity matrix has a zero determinant.” What does this mean in practical terms? In [2], a description is given first of eigenvectors, which are vectors in the same direction as the result of multiplying the vector by a given matrix. The scalar multiple of the eigenvector which results from multiplying by the matrix is called an eigenvalue. The relationship is illustrated as follows:

Ax = λx (1)

The matrix A, when multiplied by the eigenvector x, produces the same vector that results from multiplying the scalar eigenvalue λ by the eigenvector.
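To make the relationship concrete, here is a small numerical check (an illustrative Python/NumPy sketch; the matrix and eigenpair are chosen for this example only, not taken from the sources):

```python
import numpy as np

# A hypothetical 2x2 matrix with a known eigenpair: eigenvalue 2
# paired with eigenvector (1, 1).
A = np.array([[3.5, -1.5],
              [-1.5, 3.5]])
x = np.array([1.0, 1.0])
lam = 2.0

# Ax should equal lambda*x: A scales x without changing its direction.
print(np.allclose(A @ x, lam * x))  # True
```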

The Inverse Eigenvalue Problems

The Inverse Eigenvalue Problems, or IEPs, are a well-studied yet continually developing branch of Linear Algebra which concern the construction of matrices using spectral data, which may consist of complete or partial information about the eigenvalues and eigenvectors of the desired matrix [4]. There are two basic components to any IEP: solvability and computability. Solvability refers to whether necessary conditions exist for the problem to have a solution, and computability concerns finding an actual procedure for constructing the matrix assuming solvability has been shown. There are theoretical foundations for determining the solvability of an IEP, but the computability will sometimes be


Figure 1: The matrix A does not change the direction of vector x, but stretches it by a factor of λ. Therefore λ and x are an eigenvalue and eigenvector pair for matrix A.


a practical consideration, which is why new algorithms are constantly being sought.

One Interesting IEP

Given an orthonormal set of vectors {p1, p2, . . . , pn} and a collection of real scalars λ1, λ2, . . . , λn, the goal is to create a matrix S such that Spj = λjpj for j = 1, 2, . . . , n.

Heuvers’ Algorithm

Konrad J. Heuvers of Michigan Technological University produced a theorem for a relatively simple algorithm which creates a symmetric matrix from predetermined eigenvectors and eigenvalues [3]. The resulting matrices are symmetric because he starts with eigenvectors which are mutually perpendicular. What follows is a walk-through of his algorithm, a proof, and a numerical example showing how the algorithm works.

Let {p1,p2, . . . ,pn} be an arbitrary orthonormal basis for Rn. This means

that before applying this algorithm one must be sure that all the desired eigenvectors are mutually perpendicular and that they are normalized, i.e. divided by their respective magnitudes to produce unit vectors.

Next, let λ1, λ2, ..., λn be n arbitrary real numbers (the desired eigenvalues)and τ be any real number such that τ ≤ λj for j = 1, 2, ..., n.

Then define the matrix B using the following steps. Start by defining µj in


terms of λj and τ:

µj = √(λj − τ) (2)

Now define each column vector bj as follows:

bj = µjpj (3)

And define the matrix B in terms of its columns:

B = [b1, b2, . . . , bn] (4)

Note the presence of τ in the radicand of the expression given in (2). This is why it is necessary for τ to be no greater than any of the desired eigenvalues, as previously mentioned.

Finally, define the matrix S as shown in the following expression:

S = BBT + τI (5)

Since we began with real eigenvalues and mutually perpendicular eigenvectors, S should be a symmetric matrix. Since a symmetric matrix is equal to its transpose, we can easily check this assumption:

S = BBT + τI

First take the transpose of both sides:

ST = (BBT + τI)T


The transpose of a sum of matrices is the sum of the individual transposes:

= (BBT )T + (τI)T

A scalar can be multiplied by a transpose instead of multiplying first, thentransposing:

= (BBT )T + τ(IT )

The transpose of the identity is simply the identity:

= (BBT )T + τI

The transpose of a product is the product of the transposes, but reversed:

= (BT )TBT + τI

The transpose of a transpose is the original matrix:

= BBT + τI

= S

So we’ve shown that ST = S; therefore the matrix S is symmetric.
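The construction (2)–(5) and this symmetry check are short enough to sketch in code. The following Python/NumPy translation is my own illustration (the function name and argument layout are chosen here, not taken from Heuvers’ paper):

```python
import numpy as np

def heuvers(P, eigvals, tau):
    # mu_j = sqrt(lambda_j - tau), per equation (2)
    mu = np.sqrt(np.asarray(eigvals, dtype=float) - tau)
    # b_j = mu_j * p_j, per (3)-(4): scale each column of P by its mu_j
    B = P * mu
    # S = B B^T + tau*I, per equation (5)
    return B @ B.T + tau * np.eye(P.shape[0])

# An orthonormal basis of R^2, desired eigenvalues 2 and 5, tau = 1:
P = np.array([[np.sqrt(2)/2, -np.sqrt(2)/2],
              [np.sqrt(2)/2,  np.sqrt(2)/2]])
S = heuvers(P, [2.0, 5.0], 1.0)
print(np.allclose(S, S.T))  # True: S equals its transpose
```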


Proof

Note that per the instruction given in (3), the matrix B is constructed as shown:

B = [b1, . . . ,bj , . . . ,bn] = [µ1p1, . . . , µjpj , . . . , µnpn] (6)

And the transpose matrix BT has the rows µjpjT (semicolons separate the rows):

BT = [ µ1p1T ; . . . ; µjpjT ; . . . ; µnpnT ] (7)

If a given vector pj and a corresponding scalar λj are to be an eigenvector and corresponding eigenvalue of a matrix S, then we must show that the following relationship exists:

Spj = λjpj (8)

To show this relationship exists, first establish the construction of Spj, then show that this leads to the above conclusion. Recall from (5) that the matrix S is defined by:

S = BBT + τI


Multiply S and BBT + τI by pj :

Spj = (BBT + τI)pj

Now distribute pj within the parentheses:

= BBTpj + τIpj

Recognize that multiplying the identity matrix by pj leaves simply pj :

= BBTpj + τpj

Recall that B and BT are defined in (6) and (7):

= [µ1p1, . . . , µjpj , . . . , µnpn] [ µ1p1T ; µ2p2T ; . . . ; µnpnT ] pj + τpj

Multiply the rows of BT by the column vector pj:

= [µ1p1, . . . , µjpj , . . . , µnpn] [ µ1p1Tpj ; . . . ; µjpjTpj ; . . . ; µnpnTpj ] + τpj


At this point we have within our equation a column vector of n components, each containing the dot product of a row vector with a column vector. Since each row vector is the transpose of one of our original set of orthonormal vectors, taking the dot product of each vector with one particular pj will produce all zeros (because they are mutually perpendicular) except when taking the dot product of pj with itself, which will give a value of one, because it is a unit vector and the dot product of a vector with itself is the square of its magnitude. Hence we now have the following continuation:

[µ1p1, . . . , µjpj , . . . , µnpn] [ µ1p1Tpj ; . . . ; µjpjTpj ; . . . ; µnpnTpj ] + τpj = [µ1p1, . . . , µjpj , . . . , µnpn] [ 0 ; . . . ; µj ; . . . ; 0 ] + τpj

Multiplying here gives a zero vector for every column except µjpj, which is multiplied by µj and thus leaves only:

= µj²pj + τpj

Factoring out the vector pj gives:

= (µj² + τ)pj


and since in (2) we defined µj by √(λj − τ), squaring this quantity gives µj² + τ = (λj − τ) + τ = λj, leaving:

= λjpj

Since we have shown that the relationship shown in (8) exists, each vector pj and corresponding scalar λj are therefore an eigenvector and eigenvalue pair for the matrix S, which was created using Heuvers’ algorithm. Heuvers’ algorithm is thus valid: the matrix S is a symmetric matrix with eigenvalues λ1, . . . , λn and eigenvectors {p1, . . . , pn}.
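Since the proof holds for any orthonormal basis, the result can also be spot-checked numerically. The following Python/NumPy sketch (my own illustration) uses a QR factorization to generate a random orthonormal basis of R4:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# The QR factorization of a random matrix yields an orthonormal basis
# of R^n in the columns of Q.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lams = np.array([2.0, 3.0, 5.0, 7.0])  # desired eigenvalues
tau = 1.0                              # tau must not exceed any eigenvalue

mu = np.sqrt(lams - tau)               # equation (2)
B = Q * mu                             # equations (3)-(4)
S = B @ B.T + tau * np.eye(n)          # equation (5)

# Each chosen column p_j should satisfy S p_j = lambda_j p_j.
for j in range(n):
    assert np.allclose(S @ Q[:, j], lams[j] * Q[:, j])
print("all eigenpairs verified")
```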

An Example

For the sake of brevity, let’s create a 2 × 2 matrix beginning with two eigenvectors and corresponding eigenvalues. We’ll start with the following arbitrary orthonormal basis for R2:

BR2 = { [ √2/2 ; √2/2 ] , [ −√2/2 ; √2/2 ] } (9)

Recall from (2) that µj = √(λj − τ). For simplicity of computing µ1 and µ2 using this definition, we’ll choose λ1 = 2, λ2 = 5, and τ = 1. The two values are computed:


µ1 = √(λ1 − τ) = √(2 − 1) = 1

And:

µ2 = √(λ2 − τ) = √(5 − 1) = 2

Next we create the matrix B, composed of the columns b1 and b2. Recall from (3) that each vector bj is defined as µjpj. The construction of matrix B is shown below:

B = [b1,b2]

Recall the construction of each of the columns:

= [µ1p1, µ2p2]

Replace each value for µ and each vector p:

= [ 1·[ √2/2 ; √2/2 ] , 2·[ −√2/2 ; √2/2 ] ]


Multiply to get the final matrix B:

= [ √2/2  −√2 ; √2/2  √2 ] (10)


Now we can create our matrix S as follows:

S = BBT + τI

Replace B with the constructed matrix from (10) and transpose it:

= [ √2/2  −√2 ; √2/2  √2 ] [ √2/2  √2/2 ; −√2  √2 ] + 1·[ 1  0 ; 0  1 ]

Perform the matrix multiplication B times BT:

= [ 5/2  −3/2 ; −3/2  5/2 ] + [ 1  0 ; 0  1 ]

Now add the two matrices:

= [ 7/2  −3/2 ; −3/2  7/2 ]

We should check the validity of this solution by finding the eigenvalues and eigenvectors of S to see if they match our initial conditions. First we’ll find the eigenvalues of S, beginning by setting the determinant of S − λI equal to zero:


det(S − λI) = 0

Recall the matrix S we found, and multiply a 2 × 2 identity matrix by λ:

det( [ 7/2  −3/2 ; −3/2  7/2 ] − [ λ  0 ; 0  λ ] ) = 0

Now subtract the two matrices:

det[ 7/2 − λ  −3/2 ; −3/2  7/2 − λ ] = 0

The determinant of this matrix is of the form ad − bc:

(7/2 − λ)² − (−3/2)² = 0

Multiply out the terms:

49/4 − 7λ + λ² − 9/4 = 0

Combine like terms and place in descending order:

λ2 − 7λ+ 10 = 0


Factor the resulting quadratic equation:

(λ− 2)(λ− 5) = 0

The obvious solutions of this equation are:

λ = 2, 5

Note that the eigenvalues of our matrix S are equal to the ones we chose before applying Heuvers’ algorithm. Now we’ll see if the eigenvectors check as well, starting with the first eigenvalue λ = 2:

Sx = 2x

Subtract 2x from both sides:

Sx− 2x = 0

Multiply the vector x on the left by the identity matrix:

Sx− 2Ix = 0

Factor out the vector x:

(S − 2I)x = 0


Recall the matrix S we constructed from the algorithm, and multiply the identity matrix by two:

( [ 7/2  −3/2 ; −3/2  7/2 ] − [ 2  0 ; 0  2 ] ) x = 0

Now subtract the matrices and rewrite as a matrix multiplication:

[ 3/2  −3/2 ; −3/2  3/2 ] [ x ; y ] = [ 0 ; 0 ]

Note that multiplying one times each column and adding them together will produce the zero vector:

[ x ; y ] = [ 1 ; 1 ]

Therefore [1, 1]T is an eigenvector of S associated with the eigenvalue λ = 2. Now we’ll perform the same process with λ = 5:

Sx = 5x

Subtract 5x from both sides:

Sx− 5x = 0


Multiply the vector x on the left by the identity matrix:

Sx− 5Ix = 0

Factor out the vector x:

(S − 5I)x = 0

Recall the matrix S we constructed from the algorithm, and multiply the identity matrix by five:

( [ 7/2  −3/2 ; −3/2  7/2 ] − [ 5  0 ; 0  5 ] ) x = 0

Now subtract the matrices and rewrite as a matrix multiplication:

[ −3/2  −3/2 ; −3/2  −3/2 ] [ x ; y ] = [ 0 ; 0 ]

Note that adding one times the first column to −1 times the second column will produce the zero vector:

[ x ; y ] = [ −1 ; 1 ]


Figure 2: Note that the eigenvectors of the S we created with the algorithm are scalar multiples of the eigenvectors we chose beforehand.


Therefore [−1, 1]T is an eigenvector of the matrix S associated with the eigenvalue λ = 5.

Note that the eigenvectors of the S we created with the algorithm are scalar multiples of the eigenvectors we chose beforehand. Since the nullspaces of S − 2I and S − 5I are both closed under scalar multiplication, the eigenvectors we found confirm the validity of the algorithm.
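As a final check on the worked example, a few lines of Python/NumPy (illustrative only, not part of the original computation) recover the same eigenpairs from S:

```python
import numpy as np

S = np.array([[3.5, -1.5],
              [-1.5, 3.5]])  # the 7/2 and -3/2 entries from the example

vals, vecs = np.linalg.eigh(S)        # eigenvalues in ascending order
print(np.allclose(vals, [2.0, 5.0]))  # True

# Each computed eigenvector is a scalar multiple of the one we chose:
# (1, 1) for lambda = 2 and (-1, 1) for lambda = 5.
v2, v5 = vecs[:, 0], vecs[:, 1]
print(np.isclose(v2[0], v2[1]))       # True: components equal
print(np.isclose(v5[0], -v5[1]))      # True: components opposite
```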


Applications and Practical Considerations

Inverse Eigenvalue Problems are found in applications where the goal is to find the physical parameters of a system based on known behavior, or to construct a system with physical parameters resulting in a desired dynamical behavior [4]. Particle physics, molecular spectroscopy, and geophysics are but a few fields which have applications of IEPs.

Benefits and Drawbacks of Heuvers’ Algorithm

Every algorithm which is proposed as a solution to an Inverse Eigenvalue Problem has its benefits and its drawbacks. Heuvers’ algorithm is simple enough to be understood by an undergraduate linear algebra student, and simplicity is a plus when considering the practicability of using an algorithm to compute the desired matrix. It is also easily proved to work for any n eigenvalues and eigenvectors. However, Heuvers’ algorithm only creates symmetric matrices, due to the requirement of real eigenvalues and mutually perpendicular eigenvectors. Populations are often modeled by matrices, but these matrices are rarely symmetric, so this algorithm wouldn’t necessarily be appropriate for, say, determining the Leslie matrix required for a particular end population growth factor and associated population proportions. Also, the desired vectors must first be normalized, or converted to unit vectors, by dividing each vector by its magnitude.


Computation

When considering the practical applications of an algorithm such as Heuvers’, it is important to recognize the limitations of computing such matrices by hand. This process can be described as tedious at best, impossibly time-consuming at worst. The algorithm itself is very useful, but a computer program which can perform the computations would be even more useful. For this reason I decided to create a function in Matlab which performs the calculations required for Heuvers’ algorithm. I have included comments describing the purpose of each code block:


function [S] = Heuvers(P, lambda, tau)
% This function requires three input values. The columns of the matrix P
% are the desired eigenvectors, the vector lambda contains the
% corresponding eigenvalues, and tau is a scalar.

if nargin==0
    display('Proceeding with example:')
    P = [1/sqrt(2) -1/sqrt(2); 1/sqrt(2) 1/sqrt(2)];
    lambda = [2 5];
    tau = 1;
end
% The above block contains the input values from the numerical example we
% already calculated, to allow a user to demonstrate the program's
% capabilities without manually entering input arguments.

[m,n] = size(P);
nn = length(lambda);

if m~=n
    display('Length of eigenvectors must equal number of eigenvectors')
    return
end
% The above checks whether the eigenvector matrix is square and displays
% an error if it is not.

if nn~=n
    display('Number of eigenvectors must equal number of eigenvalues')
    return
end
% The above checks that the length of the lambda vector equals the number
% of columns in the eigenvector matrix and displays an error if not.

for i = 1:n
    for j = 1:n
        if i == j
            continue
        elseif abs(P(:,i)'*P(:,j)) > 0.0001
            display('Eigenvectors must be mutually perpendicular')
            return
        end
    end
end
% The above loop takes the dot product of each eigenvector with every
% other eigenvector to make sure it is as close to zero as truncation
% will allow, checking for mutual orthogonality and displaying an error
% message if necessary.

for i = 1:n
    if abs(1 - (P(:,i)'*P(:,i))) > 0.0001
        display('Eigenvectors must be unit vectors')
        return
    end
end
% The above loop takes the dot product of each eigenvector with itself to
% make sure it is as close to one as truncation will allow, verifying
% that the eigenvectors are unit vectors and displaying an error if not.

mu = sqrt(lambda - tau);
% A vector mu is created by subtracting tau from each element in the
% lambda vector and then taking the square root of each of the new
% elements, per equation (2).

for i = 1:n
    B(:,i) = mu(i)*P(:,i);
end
% The above loop constructs the matrix B by taking each element in the
% vector mu and multiplying it by its respective column in P, an
% eigenvector, to create the columns of B.

S = B*B' + tau*eye(n);
% The matrix S is constructed according to the algorithm's instructions.

end

Note that two loops include a margin of error, namely 0.0001. If the user is working with entries which require a greater level of accuracy, this margin can be changed; however, truncation of decimals can lead to the program throwing errors when there actually are none.
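The same tolerance idea can be illustrated outside of Matlab. The following Python/NumPy sketch (the function name and layout are my own) collapses the two validation loops into a single Gram-matrix comparison:

```python
import numpy as np

def check_orthonormal(P, tol=1e-4):
    # The Gram matrix P'P collects every pairwise dot product of the
    # columns; for an orthonormal set it should equal the identity
    # matrix to within the tolerance tol.
    n = P.shape[1]
    return bool(np.all(np.abs(P.T @ P - np.eye(n)) < tol))

P = np.array([[np.sqrt(2)/2, -np.sqrt(2)/2],
              [np.sqrt(2)/2,  np.sqrt(2)/2]])
print(check_orthonormal(P))           # True with the default margin
print(check_orthonormal(P, tol=0.0))  # False: no allowance for roundoff
```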

Conclusion

The Inverse Eigenvalue Problems are an important branch of linear algebra and are useful in applications across a broad range of the sciences. The goal is to create a matrix given some set of desired eigenvectors and eigenvalues. Konrad J. Heuvers wrote a very useful algorithm for creating symmetric matrices using


an orthonormal basis for Rn, a set of corresponding eigenvalues, and a chosen scalar. I have given a simple proof and a numerical example, as well as a Matlab function which performs the computation. Heuvers’ algorithm is a useful tool because it is simple to understand and calculate. It only creates symmetric matrices, however; such matrices are powerful in many applications, but this restricts the range of applications which could benefit from this particular algorithm.

References

[1] Trustees of Princeton University, WordNet: A Lexical Database for English, 2010, http://wordnetweb.princeton.edu/perl/webwn?s=eigenvalue

[2] Gilbert Strang, Introduction to Linear Algebra, Fourth Edition, Wellesley-Cambridge Press, 2009.

[3] Konrad J. Heuvers, Symmetric Matrices with Prescribed Eigenvalues and Eigenvectors, Mathematics Magazine, Vol. 55, No. 2 (Mar. 1982), pp. 106–111.

[4] Moody T. Chu and Gene Golub, Inverse Eigenvalue Problems: Theory and Applications, Department of Mathematics, North Carolina State University, 2001, http://www4.ncsu.edu/~mtchu/Research/Lectures/Iep/preface.ps