Linear Algebra Review
CS479/679 Pattern Recognition – Dr. George Bebis
n-dimensional Vector
• An n-dimensional vector v is denoted as follows: v = (x1, x2, ..., xn)^T (a column vector)
• The transpose v^T is denoted as follows: v^T = (x1, x2, ..., xn) (a row vector)
Inner (or dot) product
• Given v^T = (x1, x2, ..., xn) and w^T = (y1, y2, ..., yn), their dot product is defined as follows:
v · w = v^T w = x1 y1 + x2 y2 + ... + xn yn (a scalar)
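A minimal sketch of the dot product in code (assuming NumPy; the vectors are illustrative, not from the slides):

import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

# v . w = 1*4 + 2*5 + 3*6 = 32 (a scalar)
print(v @ w)         # 32.0
print(np.dot(v, w))  # equivalent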
Orthogonal / Orthonormal vectors
• A set of vectors x1, x2, ..., xn is orthogonal if xi^T xj = 0 for all i ≠ j
• A set of vectors x1, x2, ..., xn is orthonormal if xi^T xj = 0 for all i ≠ j and xi^T xi = 1 for every i
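A quick numerical check (NumPy assumed): vectors stacked as the columns of a matrix X are orthonormal exactly when X^T X equals the identity matrix:

import numpy as np

# Columns: two orthonormal vectors in R^2 (an illustrative choice)
X = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

G = X.T @ X                       # matrix of all pairwise dot products
print(np.allclose(G, np.eye(2)))  # True: the set is orthonormal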
Linear combinations
• A vector v is a linear combination of the vectors v1, ..., vk if:
v = c1 v1 + c2 v2 + ... + ck vk
where c1, ..., ck are constants.
• Example: any vector in R^3 can be expressed as a linear combination of the unit vectors i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1)
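For instance, as a small sketch (NumPy assumed; the vector is illustrative):

import numpy as np

i = np.array([1.0, 0.0, 0.0])
j = np.array([0.0, 1.0, 0.0])
k = np.array([0.0, 0.0, 1.0])

# v = 2i - 1j + 3k: a linear combination with constants c = (2, -1, 3)
v = 2*i - 1*j + 3*k
print(v)  # [ 2. -1.  3.]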
Space spanning
• A set of vectors S = {v1, v2, ..., vk} spans a space W if every vector in W can be written as a linear combination of the vectors in S
– Example: the unit vectors i, j, and k span R^3
Linear dependence
• A set of vectors v1, ..., vk is linearly dependent if at least one of them is a linear combination of the others:
vj = c1 v1 + ... + c(j-1) v(j-1) + c(j+1) v(j+1) + ... + ck vk
(i.e., vj does not appear on the right side)
Linear independence
• A set of vectors v1, ..., vk is linearly independent if no vector can be represented as a linear combination of the remaining vectors, i.e.:
c1 v1 + c2 v2 + ... + ck vk = 0 only if c1 = c2 = ... = ck = 0
• Example: in R^2, the vectors (1, 0) and (0, 1) are linearly independent, while (1, 0), (0, 1), and (1, 1) are linearly dependent, since (1, 1) = (1, 0) + (0, 1)
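A practical test in code (a sketch, NumPy assumed): stack the vectors as columns and compare the matrix rank to the number of vectors; they are independent exactly when the two match:

import numpy as np

# Columns v1, v2, v3, where v3 = v1 + v2: a dependent set
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])
print(np.linalg.matrix_rank(V))  # 2 (not 3) -> linearly dependent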
Vector basis
• A set of vectors (v1, ..., vk) forms a basis in some vector space W if:
(1) (v1, ..., vk) are linearly independent
(2) (v1, ..., vk) span W
• Standard bases: for R^2, e1 = (1, 0) and e2 = (0, 1); for R^3, the unit vectors i, j, k; for R^n, e1, e2, ..., en, where ei has a 1 in the i-th position and 0s elsewhere
Matrix Operations
• Matrix addition/subtraction – the matrices must be of the same size (the operation is element-wise)
• Matrix multiplication: C = AB, where A is m x n, B is q x p, and the product C is m x p
Condition: n = q (the inner dimensions must agree)
cij = ai1 b1j + ai2 b2j + ... + ain bnj
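A shape-oriented sketch of the multiplication rule (NumPy assumed; the sizes are illustrative):

import numpy as np

A = np.ones((2, 3))  # m x n = 2 x 3
B = np.ones((3, 4))  # q x p = 3 x 4; the condition n = q holds (3 = 3)
C = A @ B
print(C.shape)       # (2, 4), i.e., m x p
# np.ones((2, 3)) @ np.ones((4, 5)) would raise a ValueError: n != q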
Identity Matrix
• The identity matrix I has 1s on the main diagonal and 0s elsewhere; IA = AI = A
Matrix Transpose
• The transpose A^T is obtained by interchanging the rows and columns of A: (A^T)ij = Aji; also, (AB)^T = B^T A^T
Symmetric Matrices
• A matrix A is symmetric if A^T = A (A must be square)
• Example: A = [[1, 2], [2, 5]] is symmetric, since interchanging its rows and columns leaves it unchanged
Determinants
• 2 x 2: det(A) = a11 a22 - a12 a21
• 3 x 3: det(A) = a11(a22 a33 - a23 a32) - a12(a21 a33 - a23 a31) + a13(a21 a32 - a22 a31)
• n x n: cofactor expansion along a row or column, e.g., det(A) = sum over j of (-1)^(1+j) a1j det(M1j), where M1j is A with row 1 and column j removed
• Properties: det(AB) = det(A) det(B); det(A^T) = det(A); det(A^-1) = 1/det(A)
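The 2 x 2 formula and the product property can be checked numerically (a sketch, NumPy assumed):

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
# a11*a22 - a12*a21 = 1*4 - 2*3 = -2
print(np.linalg.det(A))      # -2.0 (up to floating-point error)

B = np.array([[2.0, 0.0],
              [1.0, 1.0]])
# det(AB) = det(A) det(B) = -2 * 2 = -4
print(np.linalg.det(A @ B))  # -4.0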
Matrix Inverse
• The inverse A^-1 of a matrix A has the property: AA^-1 = A^-1A = I
• A^-1 exists only if A is square and det(A) ≠ 0
• Terminology:
– Singular matrix: A^-1 does not exist
– Ill-conditioned matrix: A is “close” to being singular
Matrix Inverse (cont’d)
• Properties of the inverse: (A^-1)^-1 = A; (AB)^-1 = B^-1 A^-1; (A^T)^-1 = (A^-1)^T
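A quick numerical check of the inverse and its properties (a sketch, NumPy assumed):

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # det(A) = -2 != 0, so A^-1 exists
A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))      # True: AA^-1 = I

B = np.array([[2.0, 0.0],
              [1.0, 1.0]])
# (AB)^-1 = B^-1 A^-1 (note the reversed order)
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ A_inv))  # True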
Matrix trace
• The trace of an n x n matrix A is the sum of its diagonal elements: tr(A) = a11 + a22 + ... + ann
• Properties: tr(A + B) = tr(A) + tr(B); tr(AB) = tr(BA); tr(A^T) = tr(A)
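These identities are easy to verify numerically (a sketch, NumPy assumed):

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(np.trace(A))                  # 5.0 = 1 + 4
print(np.isclose(np.trace(A @ B),
                 np.trace(B @ A)))  # True: tr(AB) = tr(BA)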
Rank of matrix
• Equal to the dimension of the largest square sub-matrix of A that has a non-zero determinant.
• Example: any 3 x 3 matrix with a non-zero determinant has rank 3
Rank of matrix (cont’d)
• Alternative definition: the maximum number of linearly independent columns (or rows) of A.
• Example: if one of the four columns of a 4 x 4 matrix is a linear combination of the other three, at most three of its columns are linearly independent (i.e., the rank is not 4!)
Rank of matrix (cont’d)
• For an m x n matrix A, rank(A) ≤ min(m, n); an n x n matrix A is invertible if and only if rank(A) = n
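In code, rank is computed directly (a sketch, NumPy assumed; the matrix is illustrative):

import numpy as np

# A 4 x 4 matrix whose fourth column equals column 1 + column 2:
# only three columns are independent, so the rank is 3, not 4
A = np.array([[1.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]])
print(np.linalg.matrix_rank(A))  # 3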
Eigenvalues and Eigenvectors
• The vector v is an eigenvector of matrix A and λ is an eigenvalue of A if:
Av = λv (assume non-zero v)
i.e., the linear transformation implied by A cannot change the direction of the eigenvectors v, only their magnitude.
Computing λ and v
• To find the eigenvalues λ of a matrix A, find the roots of the characteristic polynomial:
det(A - λI) = 0
• Example: for A = [[2, 1], [1, 2]], det(A - λI) = (2 - λ)^2 - 1 = 0, giving λ1 = 3 (eigenvector (1, 1)) and λ2 = 1 (eigenvector (1, -1))
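The same example can be reproduced numerically (a sketch, NumPy assumed; np.linalg.eig returns the eigenvalues together with unit-norm eigenvectors as columns):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
evals, evecs = np.linalg.eig(A)
print(evals)   # [3. 1.] (the order is not guaranteed in general)

v = evecs[:, 0]  # eigenvector paired with evals[0]
print(np.allclose(A @ v, evals[0] * v))  # True: Av = lambda v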
Properties
• Eigenvalues and eigenvectors are only defined for square matrices (i.e., m = n)
• Eigenvectors are not unique (e.g., if v is an eigenvector, so is kv for any scalar k ≠ 0)
• Suppose λ1, λ2, ..., λn are the eigenvalues of A; then: det(A) = λ1 λ2 ... λn and tr(A) = λ1 + λ2 + ... + λn
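A numerical check of the two identities (a sketch, NumPy assumed; the matrix is illustrative):

import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
evals = np.linalg.eigvals(A)  # [2. 3.] for this triangular matrix

print(np.isclose(np.prod(evals), np.linalg.det(A)))  # True: det(A) = product
print(np.isclose(np.sum(evals), np.trace(A)))        # True: tr(A) = sum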
Matrix diagonalization
• Given an n x n matrix A, find P such that P^-1AP = Λ, where Λ is diagonal
• Take P = [v1 v2 ... vn], where v1, v2, ..., vn are the eigenvectors of A; then:
AP = [Av1 Av2 ... Avn] = [λ1v1 λ2v2 ... λnvn] = PΛ, so P^-1AP = Λ = diag(λ1, λ2, ..., λn)
Matrix diagonalization (cont’d)
• Example: for A = [[2, 1], [1, 2]] above, take P = [[1, 1], [1, -1]] (eigenvectors as columns); then P^-1AP = Λ = diag(3, 1)
• This works only if P^-1 exists (i.e., A must have n linearly independent eigenvectors, so that rank(P) = n)
• If A is diagonalizable, then the corresponding eigenvectors v1, v2, ..., vn form a basis in R^n
• Are all n x n matrices diagonalizable (P^-1AP = Λ)? No: only those with n linearly independent eigenvectors are.
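The diagonalization can be verified in code (a sketch, NumPy assumed), using the eigenvectors of A as the columns of P:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
evals, P = np.linalg.eig(A)        # columns of P are eigenvectors of A

Lambda = np.linalg.inv(P) @ A @ P  # P^-1 A P
print(np.allclose(Lambda, np.diag(evals)))  # True: the result is diagonal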
Matrix decomposition
• Let us assume that A is diagonalizable; then A can be decomposed as follows: A = PΛP^-1
Special case: symmetric matrices
• The eigenvalues of a symmetric matrix are real, and its eigenvectors are orthogonal (they can be chosen to be orthonormal).
• With orthonormal eigenvectors, P is an orthogonal matrix: P^-1 = P^T
• Then A = PDP^T = λ1 v1 v1^T + λ2 v2 v2^T + ... + λn vn vn^T (where D = Λ is the diagonal matrix of eigenvalues)
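For a symmetric matrix, np.linalg.eigh returns real eigenvalues and orthonormal eigenvectors, so the decomposition needs only a transpose (a sketch, NumPy assumed):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric: A^T = A
evals, P = np.linalg.eigh(A)  # real eigenvalues, orthonormal eigenvectors

print(np.allclose(P.T, np.linalg.inv(P)))  # True: P^-1 = P^T
D = np.diag(evals)
print(np.allclose(A, P @ D @ P.T))         # True: A = P D P^T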