Chapter 10: Real Inner Products and Least Squares
10.1 Introduction

To any two vectors u and v of the same dimension having real components, we associate a scalar called the inner product, denoted ⟨u, v⟩, obtained by multiplying the corresponding components of u and v and then summing the results.

If u = (u1, u2, …, un) and v = (v1, v2, …, vn) are vectors in Rn, then the inner product is computed by the following formula:

⟨u, v⟩ = u1v1 + u2v2 + … + unvn

Example: If u = (3, 1, 2) and v = (2, −2, 1), then ⟨u, v⟩ = 3(2) + 1(−2) + 2(1) = 6.
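The formula above can be sketched in plain Python (this helper is illustrative, not part of the original text):

```python
def inner(u, v):
    """Inner product <u, v> = u1*v1 + u2*v2 + ... + un*vn."""
    assert len(u) == len(v), "vectors must have the same dimension"
    return sum(ui * vi for ui, vi in zip(u, v))

# The example from the text: u = (3, 1, 2), v = (2, -2, 1)
print(inner((3, 1, 2), (2, -2, 1)))  # 3(2) + 1(-2) + 2(1) = 6
```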
Properties of the Inner Product

(I1) ⟨u, u⟩ is positive if u ≠ 0; ⟨u, u⟩ = 0 if and only if u = 0.
(I2) ⟨u, v⟩ = ⟨v, u⟩
(I3) ⟨u, kv⟩ = k⟨u, v⟩ for any scalar k
(I4) ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩
(I5) ⟨0, v⟩ = ⟨v, 0⟩ = 0
The magnitude of a vector u is denoted by ||u|| and is defined by

||u|| = ⟨u, u⟩^(1/2)

A unit vector is a vector whose magnitude is unity. A nonzero vector is normalized by dividing it by its magnitude; a normalized vector is therefore always a unit vector.
Examples on the board.
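As a minimal illustration (not from the original text), magnitude and normalization can be computed directly from the definition:

```python
import math

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def magnitude(u):
    """||u|| = <u, u>^(1/2)."""
    return math.sqrt(inner(u, u))

def normalize(u):
    """Divide a nonzero vector by its magnitude; the result is a unit vector."""
    m = magnitude(u)
    return tuple(x / m for x in u)

v = normalize((3, 4, 0))   # magnitude of (3, 4, 0) is 5
print(v)                   # (0.6, 0.8, 0.0)
print(magnitude(v))        # a normalized vector has magnitude 1 (up to roundoff)
```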
Orthogonal Vectors
Definition 1
Two vectors u and v are called orthogonal (or perpendicular) if ⟨u, v⟩ = 0.
A set of vectors is called an orthogonal set if each vector in the set is orthogonal to every other vector in the set.
Example: u1 = (0, 1, 0), u2 = (1, 0, 1), u3 = (1, 0, −1) form an orthogonal set since ⟨u1, u2⟩ = ⟨u1, u3⟩ = ⟨u2, u3⟩ = 0.
Projections

Given a nonzero vector a, any vector x can be written as x = u + v, where u is the projection of x onto a and v is orthogonal to a:

u = proj_a x = (⟨a, x⟩ / ⟨a, a⟩) a,    v = x − (⟨a, x⟩ / ⟨a, a⟩) a
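The projection formula can be sketched as follows (an illustrative helper, with an example vector pair chosen here for clarity):

```python
def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj(a, x):
    """Projection of x onto a: (<a, x> / <a, a>) a."""
    c = inner(a, x) / inner(a, a)
    return tuple(c * ai for ai in a)

a, x = (1, 0, 0), (2, 3, 4)
u = proj(a, x)                               # component of x along a
v = tuple(xi - ui for xi, ui in zip(x, u))   # component orthogonal to a
print(u, v)        # (2.0, 0.0, 0.0) (0.0, 3.0, 4.0)
print(inner(a, v)) # 0.0 -- v is orthogonal to a
```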
10.2 Orthonormal Vectors

• Definition 2: A set of vectors is orthonormal if it is an orthogonal set having the property that every vector is a unit vector.
• Example:
– Recall that u1 = (0, 1, 0), u2 = (1, 0, 1), u3 = (1, 0, −1) is an orthogonal set, but it is not orthonormal.
– The magnitudes of the vectors are ||u1|| = 1, ||u2|| = √2, ||u3|| = √2.
– Normalizing u1, u2, and u3 yields
v1 = u1/||u1|| = (0, 1, 0), v2 = u2/||u2|| = (1/√2, 0, 1/√2), v3 = u3/||u3|| = (1/√2, 0, −1/√2)
– The set S = {v1, v2, v3} is orthonormal since ⟨v1, v2⟩ = ⟨v1, v3⟩ = ⟨v2, v3⟩ = 0 and ||v1|| = ||v2|| = ||v3|| = 1.
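Both conditions in the example can be checked numerically; a short sketch (using the normalized vectors from the text):

```python
import math

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

s = 1 / math.sqrt(2)
v1, v2, v3 = (0, 1, 0), (s, 0, s), (s, 0, -s)

# pairwise inner products vanish (orthogonality) ...
for a, b in [(v1, v2), (v1, v3), (v2, v3)]:
    print(round(inner(a, b), 12))   # 0.0
# ... and every vector has unit magnitude (normality)
for v in (v1, v2, v3):
    print(round(math.sqrt(inner(v, v)), 12))  # 1.0
```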
• Theorem 1: An orthonormal set of vectors is linearly independent.
Proof on the board.
• Theorem 2: For every linearly independent set of vectors {x1, x2, …, xn}, there exists an orthonormal set of vectors {q1, q2, …, qn} such that each qj (j = 1, 2, …, n) is a linear combination of x1, x2, …, xn.
Proof of Theorem 2: Gram-Schmidt orthonormalization process
y1 = x1

y2 = x2 − (⟨x2, y1⟩ / ⟨y1, y1⟩) y1

y3 = x3 − (⟨x3, y1⟩ / ⟨y1, y1⟩) y1 − (⟨x3, y2⟩ / ⟨y2, y2⟩) y2

In general,

yj = xj − Σ_{k=1}^{j−1} (⟨xj, yk⟩ / ⟨yk, yk⟩) yk   (j = 2, 3, …, n)

Finally, normalize each yj to obtain the orthonormal set: qj = yj / ||yj||.
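The process above translates directly into code; a minimal sketch (helper names are my own, not from the text):

```python
import math

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(xs):
    """Orthonormalize a linearly independent list of vectors (Theorem 2).

    y_j = x_j - sum_{k<j} (<x_j, y_k> / <y_k, y_k>) y_k, then q_j = y_j / ||y_j||.
    """
    ys = []
    for x in xs:
        y = list(x)
        for yk in ys:
            c = inner(x, yk) / inner(yk, yk)   # projection coefficient
            y = [yi - c * yki for yi, yki in zip(y, yk)]
        ys.append(y)
    # normalize each y_j to a unit vector q_j
    return [tuple(yi / math.sqrt(inner(y, y)) for yi in y) for y in ys]

qs = gram_schmidt([(1, 1, 1), (0, 1, 1), (0, 0, 1)])
for q in qs:
    print(tuple(round(c, 6) for c in q))
```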
Gram-Schmidt orthonormalization process: example
• Apply the Gram-Schmidt process to transform the basis vectors
u1 = (1, 1, 1), u2 = (0, 1, 1), u3 = (0, 0, 1)
into an orthogonal basis {v1, v2, v3}; then normalize the orthogonal basis vectors to obtain an orthonormal basis {q1, q2, q3}.
• Solution:
– Step 1: v1 = u1 = (1, 1, 1)
– Step 2:
v2 = u2 − proj_{v1} u2 = u2 − (⟨u2, v1⟩ / ⟨v1, v1⟩) v1 = (0, 1, 1) − (2/3)(1, 1, 1) = (−2/3, 1/3, 1/3)
– Step 3:
v3 = u3 − proj_{v1} u3 − proj_{v2} u3 = u3 − (⟨u3, v1⟩ / ⟨v1, v1⟩) v1 − (⟨u3, v2⟩ / ⟨v2, v2⟩) v2
= (0, 0, 1) − (1/3)(1, 1, 1) − ((1/3)/(2/3))(−2/3, 1/3, 1/3) = (0, −1/2, 1/2)
– Thus, v1 = (1, 1, 1), v2 = (−2/3, 1/3, 1/3), v3 = (0, −1/2, 1/2) form an orthogonal basis. The magnitudes of these vectors are
||v1|| = √3, ||v2|| = √6/3, ||v3|| = 1/√2
so an orthonormal basis is
q1 = v1/||v1|| = (1/√3, 1/√3, 1/√3), q2 = v2/||v2|| = (−2/√6, 1/√6, 1/√6), q3 = v3/||v3|| = (0, −1/√2, 1/√2)
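Step 3 can be double-checked with exact rational arithmetic; a sketch using Python's `fractions` module (the variable names mirror the example):

```python
from fractions import Fraction as F

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

v1 = (F(1), F(1), F(1))
v2 = (F(-2, 3), F(1, 3), F(1, 3))
u3 = (F(0), F(0), F(1))

c1 = inner(u3, v1) / inner(v1, v1)   # <u3, v1>/<v1, v1> = 1/3
c2 = inner(u3, v2) / inner(v2, v2)   # (1/3)/(2/3) = 1/2
v3 = tuple(x - c1 * a - c2 * b for x, a, b in zip(u3, v1, v2))
print(v3)  # (Fraction(0, 1), Fraction(-1, 2), Fraction(1, 2)), i.e. (0, -1/2, 1/2)
```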