KCE/DEPT. OF MATHEMATICS/E-MATERIAL
MA 7156 – APPLIED MATHEMATICS FOR PERVASIVE COMPUTING

1.1 Vector spaces: The main structures of linear algebra are vector spaces. A vector space over a field F is a set V together with two binary operations. Elements of V are called vectors and elements of F are called scalars. The first operation, vector addition, takes any two vectors v and w and outputs a third vector v + w. The second operation, scalar multiplication, takes any scalar a and any vector v and outputs a new vector av. The operations of addition and scalar multiplication in a vector space must satisfy the following axioms. In the list below, let u, v and w be arbitrary vectors in V, and a and b scalars in F.

Associativity of addition: u + (v + w) = (u + v) + w
Commutativity of addition: u + v = v + u
Identity element of addition: there exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V.
Inverse elements of addition: for every v ∈ V, there exists an element −v ∈ V, called the additive inverse of v, such that v + (−v) = 0.
Distributivity of scalar multiplication with respect to vector addition: a(u + v) = au + av
Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv
Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v
Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity in F.

The first four axioms are those of V being an abelian group under vector addition. Vector spaces may be diverse in nature, for example containing functions, polynomials or matrices. Linear algebra is concerned with properties common to all vector spaces.
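As a quick numerical illustration (a sketch added here, not part of the original notes), the MATLAB/Octave lines below check several of these axioms in the familiar vector space R^3 over R; the particular vectors u, v, w and scalars a, b are arbitrary choices, and every displayed result should be the zero vector.

u = [1; -2; 3];  v = [0; 4; -1];  w = [2; 2; 2];   % arbitrary vectors in R^3
a = 3;  b = -5;                                    % arbitrary scalars
u + (v + w) - ((u + v) + w)      % associativity of addition
u + v - (v + u)                  % commutativity of addition
a*(u + v) - (a*u + a*v)          % distributivity over vector addition
(a + b)*v - (a*v + b*v)          % distributivity over field addition
a*(b*v) - (a*b)*v                % compatibility of scalar multiplication
1*v - v                          % identity element of scalar multiplication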

1.2 BASIC VECTOR ANALYSIS METHODS: Linear transformations

As in the theory of other algebraic structures, linear algebra studies mappings between vector spaces that preserve the vector-space structure. Given two vector spaces V and W over a field F, a linear transformation (also called linear map, linear mapping or linear operator) is a map T: V → W that is compatible with addition and scalar multiplication:
T(u + v) = T(u) + T(v), T(av) = aT(v)
for any vectors u, v ∈ V and any scalar a ∈ F. Additionally, for any vectors u, v ∈ V and scalars a, b ∈ F:
T(au + bv) = T(au) + T(bv) = aT(u) + bT(v)

When a bijective linear mapping exists between two vector spaces (that is, every vector from the second space is associated with exactly one in the first), we say that the two spaces are isomorphic. Because an isomorphism preserves linear structure, two isomorphic vector spaces are "essentially the same" from the linear algebra point of view. One essential question in linear algebra is whether a mapping is an isomorphism or not, and this question can be answered by checking whether the determinant is nonzero. If a mapping is not an isomorphism, linear algebra is interested in finding its range (or image) and the set of elements that get mapped to zero, called the kernel of the mapping. Linear transformations have geometric significance; for example, 2 × 2 real matrices describe standard planar mappings that preserve the origin.

Subspaces, span, and basis: Again, in analogy with theories of other algebraic objects, linear algebra is interested in subsets of vector spaces that are themselves vector spaces; these subsets are called linear subspaces. For example, both the range and kernel of a linear mapping are subspaces, and are thus often called the range space and the null space; these are important examples of subspaces. Another important way of forming a subspace is to take a linear combination of a set of vectors v1, v2, ..., vk:
a1 v1 + a2 v2 + ... + ak vk
where a1, a2, ..., ak are scalars. The set of all linear combinations of vectors v1, v2, ..., vk is called their span, which forms a subspace. A linear combination of any system of vectors with all zero coefficients is the zero vector of V. If this is the only way to express the zero vector as a linear combination of v1, v2, ..., vk, then these vectors are linearly independent. Given a set of vectors that spans a space, if any vector w is a linear combination of the other vectors (so that the set is not linearly independent), then the span would remain the same if we removed w from the set. Thus, a set of linearly dependent vectors is redundant in the sense that a linearly independent subset will span the same subspace. Therefore, we are mostly interested in a linearly independent set of vectors that spans a vector space V, which we call a basis of V. Any set of vectors that spans V contains a basis, and any linearly independent set of vectors in V can be extended to a basis. It turns out that if we accept the axiom of choice, every vector space has a basis; nevertheless, this basis may be unnatural, and indeed may not even be constructible. For instance, there exists a basis for the real numbers considered as a vector space over the rationals, but no explicit basis has been constructed.

Theorem: If T is a linear transformation from V to W and u and v are in V, then:
1) T(0) = 0
2) T(−v) = −T(v)
3) T(u − v) = T(u) − T(v)
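As a concrete check (an illustrative sketch, not taken from the notes), any matrix M defines a linear transformation T(v) = Mv by matrix-vector multiplication; the MATLAB/Octave lines below verify the two defining properties, the combined property, and the three statements of the theorem for an arbitrarily chosen 2 × 2 matrix, vectors and scalars. Every displayed result should be a zero vector.

M = [2 1; 0 3];                      % arbitrary matrix representing T
u = [1; 4];  v = [-2; 5];  a = 7;  b = -3;
M*(u + v) - (M*u + M*v)              % T(u+v) = T(u) + T(v)
M*(a*v) - a*(M*v)                    % T(av) = a T(v)
M*(a*u + b*v) - (a*(M*u) + b*(M*v))  % T(au+bv) = aT(u) + bT(v)
M*[0; 0]                             % T(0) = 0
M*(-v) + M*v                         % T(-v) = -T(v)
M*(u - v) - (M*u - M*v)              % T(u-v) = T(u) - T(v)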

1.3 Matrix norms: A matrix norm is a natural extension of the notion of a vector norm to matrices. In what follows, K will denote the field of real or complex numbers, and K^(m×n) will denote the vector space of all matrices with m rows and n columns and entries in K. Throughout, A* denotes the conjugate transpose of a matrix A. A matrix norm is a vector norm on K^(m×n). That is, if ||A|| denotes the norm of the matrix A, then
||A|| ≥ 0;
||A|| = 0 if and only if A = 0;
||αA|| = |α| ||A|| for all α in K and all matrices A in K^(m×n);
||A + B|| ≤ ||A|| + ||B|| for all matrices A and B in K^(m×n).
Additionally, in the case of square matrices (thus, m = n), some (but not all) matrix norms satisfy the following condition, which is related to the fact that matrices are more than just vectors:
||AB|| ≤ ||A|| ||B|| for all matrices A and B in K^(n×n).
A matrix norm that satisfies this additional property is called a sub-multiplicative norm.

Induced norm: If vector norms on K^m and K^n are given (K is the field of real or complex numbers), then one defines the corresponding induced norm or operator norm on the space of m-by-n matrices as
||A|| = max { ||Ax|| : x in K^n, ||x|| = 1 } = max { ||Ax|| / ||x|| : x in K^n, x ≠ 0 }.

The operator norm corresponding to the p-norm for vectors is
||A||p = max over x ≠ 0 of ||Ax||p / ||x||p.

These are different from the entrywise p-norms and the Schatten p-norms for matrices, which are also usually denoted by ||A||p. For example, if the matrix A is defined by
A = [ -3  5  7
       2  6  4
       0  2  8 ],
then we have
||A||1 = max(|−3| + 2 + 0, 5 + 6 + 2, 7 + 4 + 8) = max(5, 13, 19) = 19 (the maximum absolute column sum), and
||A||∞ = max(|−3| + 5 + 7, 2 + 6 + 4, 0 + 2 + 8) = max(15, 12, 10) = 15 (the maximum absolute row sum).
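The same two numbers can be reproduced in MATLAB/Octave (a minimal sketch; norm(A, 1) and norm(A, inf) are the built-in induced 1-norm and infinity-norm):

A = [-3 5 7; 2 6 4; 0 2 8];
norm(A, 1)          % maximum absolute column sum = 19
norm(A, inf)        % maximum absolute row sum    = 15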

1.4 Jordan canonical form: In linear algebra, a Jordan normal form (often called Jordan canonical form) of a linear operator on a finite-dimensional vector space is an upper triangular matrix of a particular form called a Jordan matrix, representing the operator with respect to some basis. Such a matrix has each non-zero off-diagonal entry equal to 1, immediately above the main diagonal (on the superdiagonal), with identical diagonal entries to the left and below them. If the vector space is over a field K, then a basis with respect to which the matrix has the required form exists if and only if all eigenvalues of the matrix lie in K, or equivalently if the characteristic polynomial of the operator splits into linear factors over K. This condition is always satisfied if K is the field of complex numbers. The diagonal entries of the normal form are the eigenvalues of the operator, with the number of times each one occurs being given by its algebraic multiplicity. For example:

Problem (Example): The 3 × 3 matrices whose only eigenvalue is a fixed scalar λ separate into three similarity classes, one for each way of partitioning 3 into Jordan block sizes. The three classes have these canonical representatives: the diagonal matrix λI (three 1 × 1 blocks), the form with one 2 × 2 block and one 1 × 1 block, and the form consisting of a single 3 × 3 block.

In particular, this matrix

belongs to the similarity class represented by the middle one, because we have adopted the convention of ordering the blocks of subdiagonal ones from the longest block to the shortest.

The next example shows how to calculate the Jordan normal form of a given matrix; it is important to do the computation exactly instead of rounding the results.

Problem (Example): Consider the matrix

The characteristic polynomial of A is χ(λ) = (λ − 1)(λ − 2)(λ − 4)².

This shows that the eigenvalues are 1, 2, 4 and 4, according to algebraic multiplicity. The eigenspace corresponding to the eigenvalue 1 can be found by solving the equation Av = v. It is spanned by the column vector v = (1, 1, 0, 0)^T. Similarly, the eigenspace corresponding to the eigenvalue 2 is spanned by w = (1, 1, 0, 1)^T. Finally, the eigenspace corresponding to the eigenvalue 4 is also one-dimensional (even though 4 is a double eigenvalue) and is spanned by x = (1, 0, 1, 1)^T. So the geometric multiplicity (i.e., the dimension of the eigenspace of the given eigenvalue) of each of the three eigenvalues is one. Therefore, the two eigenvalues equal to 4 correspond to a single Jordan block, and the Jordan normal form of the matrix A is the direct sum of three Jordan blocks, one for each eigenvalue:
J = [ 1  0  0  0
      0  2  0  0
      0  0  4  1
      0  0  0  4 ].

There are three chains. Two have length one: {v} and {w}, corresponding to the eigenvalues 1 and 2, respectively. There is one chain of length two corresponding to the eigenvalue 4. To find this chain, calculate the kernel of (A − 4I)².

Pick a vector in ker(A − 4I)² that is not in the kernel of A − 4I, e.g., y = (1, 0, 0, 0)^T. Now, (A − 4I)y = x and (A − 4I)x = 0, so {y, x} is a chain of length two corresponding to the eigenvalue 4. The transition matrix P such that P⁻¹AP = J is formed by putting these vectors next to each other as columns (v, w, then x and y).

A computation shows that the equation P⁻¹AP = J indeed holds.
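The matrices of this example appear only as figures in the source, so the MATLAB sketch below uses a small hypothetical matrix B simply to show how such a decomposition can be checked; jordan requires the Symbolic Math Toolbox, and P, J here are whatever it returns.

B = [5 4; -1 1];          % hypothetical matrix: double eigenvalue 3, only one eigenvector
[P, J] = jordan(B)        % J = [3 1; 0 3]; requires the Symbolic Math Toolbox
inv(P)*B*P                % reproduces J, i.e. P^(-1)*B*P = J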

If we had interchanged the order in which the chain vectors appeared, that is, changing the order of v, w and {x, y} together, the Jordan blocks would be interchanged; the resulting matrices are still equivalent Jordan forms of A.

1.5 Generalized Eigenvectors: A generalized eigenvector of an n × n matrix A is a vector which satisfies criteria that are more relaxed than those for an (ordinary) eigenvector: a nonzero vector x is a generalized eigenvector associated with the eigenvalue λ if (A − λI)^k x = 0 for some positive integer k. Let V be an n-dimensional vector space; let φ be a linear map in L(V), the set of all linear maps from V into itself; and let A be the matrix representation of φ with respect to some ordered basis.

Problem (Example): The matrix

has two eigenvalues whose algebraic multiplicities (2 and 3) are larger than their geometric multiplicities (1 and 1). The generalized eigenspaces of A are calculated below: the first eigenvalue has one ordinary eigenvector and one generalized eigenvector, and the second eigenvalue has one ordinary eigenvector and two generalized eigenvectors.

This results in a basis for each of the generalized eigenspaces of A. Together, the two chains of generalized eigenvectors span the space of all 5-dimensional column vectors.
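Since the 5 × 5 matrix of this example is shown only as a figure in the source, the sketch below uses a hypothetical 2 × 2 matrix B to illustrate the defining computation for a chain of generalized eigenvectors.

% Hypothetical example: B has the single eigenvalue 5 with algebraic
% multiplicity 2 but geometric multiplicity 1.
B = [5 1; 0 5];
v1 = [1; 0];                  % ordinary eigenvector:    (B - 5*I)*v1 = 0
v2 = [0; 1];                  % generalized eigenvector: (B - 5*I)*v2 = v1
(B - 5*eye(2))*v1             % zero vector
(B - 5*eye(2))*v2             % equals v1
(B - 5*eye(2))^2 * v2         % zero vector, so {v1, v2} is a chain of length two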

An "almost diagonal" matrixinJordan normal form, similar tois obtained as follows:

J = M⁻¹AM, where M is a generalized modal matrix for A, the columns of M are a canonical basis for A, and AM = MJ.

1.6 Singular Value Decomposition: The singular value decomposition (SVD) is a factorization of a real or complex matrix. It has many useful applications in signal processing and statistics. Formally, the singular value decomposition of an m × n real or complex matrix M is a factorization of the form M = UΣV*, where U is an m × m real or complex unitary matrix, Σ is an m × n rectangular diagonal matrix with non-negative real numbers on the diagonal, and V* (the conjugate transpose of V, or simply the transpose of V if V is real) is an n × n real or complex unitary matrix. The diagonal entries Σii of Σ are known as the singular values of M. The m columns of U and the n columns of V are called the left-singular vectors and right-singular vectors of M, respectively.

The singular value decomposition and the eigendecomposition are closely related. Namely: the left-singular vectors of M are eigenvectors of MM*; the right-singular vectors of M are eigenvectors of M*M; and the non-zero singular values of M (found on the diagonal entries of Σ) are the square roots of the non-zero eigenvalues of both M*M and MM*.

Theorem (The Singular Value Decomposition, SVD): Let A be an m × n matrix with m ≥ n. Then there exist orthogonal matrices U (m × m) and V (n × n) and a diagonal matrix Σ = diag(σ1, . . . , σn) (m × n) with σ1 ≥ σ2 ≥ . . . ≥ σn ≥ 0, such that A = UΣV^T holds. If σr > 0 is the smallest singular value greater than zero, then the matrix A has rank r.

Problem (Example): Consider the 4 × 5 matrix

A singular value decomposition of this matrix is given by M = UΣV*.

Notice Σ is zero outside of the diagonal, and one diagonal element is zero. Furthermore, because the matrices U and V are unitary, multiplying each by its conjugate transpose yields an identity matrix, as the sketch below illustrates. In this case, because U and V are real valued, each of them is an orthogonal matrix.
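A minimal MATLAB/Octave sketch of these checks (the 4 × 5 matrix used here is an illustrative stand-in, since the example's matrix appears only as a figure in the source):

M = [1 0 0 0 2;
     0 0 3 0 0;
     0 0 0 0 0;
     0 2 0 0 0];              % illustrative 4x5 matrix
[U, S, V] = svd(M);
norm(U*S*V' - M)              % ~0: M = U*Sigma*V'
U'*U                          % 4x4 identity: U is orthogonal
V'*V                          % 5x5 identity: V is orthogonal
diag(S)'                      % singular values in decreasing order; one of them is zero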

This particular singular value decomposition is not unique. Choosing a different unitary matrix V for which M = UΣV* still holds

gives another decomposition that is also a valid singular value decomposition.

1.7 Pseudo inverse: A generalized inverse of a matrix A is a matrix that has some properties of the inverse matrix of A but not necessarily all of them. Formally, given a matrix A and a matrix G, G is a generalized inverse of A if it satisfies the condition AGA = A. The purpose of constructing a generalized inverse is to obtain a matrix that can serve as the inverse in some sense for a wider class of matrices than invertible ones. A generalized inverse exists for an arbitrary matrix, and when a matrix has an inverse, then this inverse is its unique generalized inverse. Some generalized inverses can be defined in any mathematical structure that involves associative multiplication, that is, in a semigroup.

Example: Given any m × n matrix A (real or complex), the pseudo-inverse A+ of A is the unique n × m matrix satisfying the following properties:
AA+A = A,
A+AA+ = A+,
(AA+)^T = AA+,
(A+A)^T = A+A.

1.8 Least Square Approximations: The method of least squares is a standard approach in regression analysis to the approximate solution of overdetermined systems, i.e., sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the errors made in the results of every single equation.

Problem (Example): A crucial application of least squares is fitting a straight line to m points. Start with three points: find the closest line to the points (0, 6), (1, 0), and (2, 0). No straight line b = C + Dt goes through those three points. We are asking for two numbers C and D that satisfy three equations. Here are the equations at t = 0, 1, 2 to match the given values b = 6, 0, 0:
t = 0: the first point is on the line b = C + Dt if C + D·0 = 6
t = 1: the second point is on the line b = C + Dt if C + D·1 = 0
t = 2: the third point is on the line b = C + Dt if C + D·2 = 0
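A minimal MATLAB/Octave sketch for these two sections (not part of the original notes): it checks the four Moore–Penrose conditions with the built-in pinv, and then solves the three inconsistent equations above in the least-squares sense. The same matrix serves as the design matrix with rows [1 t] for t = 0, 1, 2, and the solution works out to C = 5 and D = −3, i.e. the closest line is b = 5 − 3t.

A = [1 0; 1 1; 1 2];          % design matrix: rows are [1 t] for t = 0, 1, 2
Ap = pinv(A);                 % Moore-Penrose pseudo-inverse
norm(A*Ap*A - A)              % ~0:  A A+ A   = A
norm(Ap*A*Ap - Ap)            % ~0:  A+ A A+  = A+
norm((A*Ap)' - A*Ap)          % ~0:  (A A+)^T = A A+
norm((Ap*A)' - Ap*A)          % ~0:  (A+ A)^T = A+ A

b = [6; 0; 0];                % values to match at t = 0, 1, 2
x = A \ b                     % least-squares solution [C; D] = [5; -3]
x2 = Ap * b                   % the pseudo-inverse gives the same solution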

1.9 QR algorithm: The QR algorithm is an eigenvalue algorithm: that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix.

Problem (Example): Find the eigenvectors for:

>> A = [4 2/3 -4/3 4/3; 2/3 4 0 0; -4/3 0 6 2; 4/3 0 2 6];
>> [q r] = slow_qr(A);
>> q1 = q
q1 =
   -0.8944    0.1032   -0.1423   -0.4112
   -0.1491   -0.9862    0.0237    0.0685
    0.2981   -0.0917   -0.8758   -0.3684
   -0.2981    0.0917   -0.4606    0.8310
>> A1 = r*q;
>> [q r] = slow_qr(A1);
>> q2 = q1*q
q2 =
    0.7809   -0.0770    0.0571   -0.6173
    0.2082    0.9677   -0.0131    0.1415
   -0.4165    0.1697    0.7543   -0.4782
    0.4165   -0.1697    0.6539    0.6085

>> A2 = r*q;
>> [q r] = slow_qr(A2);
>> q3 = q2*q
q3 =
   -0.7328    0.0424   -0.0159   -0.6789
   -0.2268   -0.9562    0.0043    0.1850
    0.4536   -0.2048   -0.7187   -0.4856
   -0.4536    0.2048   -0.6952    0.5187
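In the transcript above, slow_qr is a course-supplied QR factorization routine, and q1, q2, q3 accumulate the product of the successive Q factors. The same iteration can be written with the built-in qr, as in the sketch below (an illustrative sketch, not part of the original transcript): each step replaces the current matrix by RQ, which is similar to A, so for this symmetric matrix the iterates approach a diagonal matrix of eigenvalues while the accumulated Q product approaches a matrix of eigenvectors.

A = [4 2/3 -4/3 4/3; 2/3 4 0 0; -4/3 0 6 2; 4/3 0 2 6];
Ak = A;
Qacc = eye(4);                 % accumulated product of the Q factors
for k = 1:50
    [Q, R] = qr(Ak);           % orthogonal Q, upper triangular R
    Ak = R*Q;                  % similar to A at every step
    Qacc = Qacc*Q;             % columns approximate the eigenvectors of A
end
sort(diag(Ak))'                % approximate eigenvalues of A
sort(eig(A))'                  % built-in eigenvalues, for comparison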
