Transcript of m1p2 (7/27/2019)

Algebra I lecture notes
Imperial College London, Mathematics 2005/2006


    Contents

1 Groups
    1.1 Definition and examples
        1.1.1 Group table
    1.2 Subgroups
        1.2.1 Criterion for subgroups
    1.3 Cyclic subgroups
        1.3.1 Order of an element
    1.4 More on the symmetric groups Sn
        1.4.1 Order of a permutation
    1.5 Lagrange's Theorem
        1.5.1 Consequences of Lagrange's Theorem
    1.6 Applications to number theory
        1.6.1 Groups
    1.7 Applications of the group Zp*
        1.7.1 Mersenne Primes
        1.7.2 How to find Mersenne Primes
    1.8 Proof of Lagrange's Theorem

2 Vector Spaces and Linear Algebra
    2.1 Definition of a vector space
    2.2 Subspaces
    2.3 Solution spaces
    2.4 Linear Combinations
    2.5 Span
    2.6 Spanning sets
    2.7 Linear dependence and independence
    2.8 Bases
    2.9 Dimension
    2.10 Further Deductions

3 More on Subspaces
    3.1 Sums and Intersections
    3.2 The rank of a matrix
        3.2.1 How to find row-rank(A)
        3.2.2 How to find column-rank(A)

4 Linear Transformations
    4.1 Basic properties
    4.2 Constructing linear transformations
    4.3 Kernel and Image
    4.4 Composition of linear transformations
    4.5 The matrix of a linear transformation
    4.6 Eigenvalues and eigenvectors
        4.6.1 How to find evals / evecs of T
    4.7 Diagonalisation
    4.8 Change of basis

5 Error-correcting codes
    5.1 Introduction
    5.2 Theory of Codes
        5.2.1 Error Correction
    5.3 Linear Codes
        5.3.1 Minimum distance of linear code
    5.4 The Check Matrix
    5.5 Decoding


    Introduction

(1) Groups are used throughout maths and science to describe symmetry; e.g. every physical object, algebraic equation or system of differential equations, ..., has a group associated with it.

(2) Vector spaces: we have seen and studied some of these already, e.g. Rⁿ.


    Chapter 1

    Groups

    1.1 Definition and examples

Definition 1.1. Let S be a set. A binary operation ∗ on S is a rule which assigns to any ordered pair (a, b) (a, b ∈ S) an element a ∗ b ∈ S. In other words, it is a function from S × S → S.

Eg 1.1.

1) S = Z, a ∗ b = a + b

2) S = C, a ∗ b = ab

3) S = R, a ∗ b = a − b

4) S = R, a ∗ b = min(a, b)

5) S = {1, 2, 3}, a ∗ b = a (e.g. 1 ∗ 1 = 1, 2 ∗ 3 = 2)

Given a binary operation ∗ on a set S and a, b, c ∈ S, we can form a ∗ b ∗ c in two ways:

(a ∗ b) ∗ c and a ∗ (b ∗ c)

These may or may not be equal.

Eg 1.2. In 1), (a ∗ b) ∗ c = a ∗ (b ∗ c). In 3), (3 ∗ 5) ∗ 4 = (3 − 5) − 4 = −6, whereas 3 ∗ (5 ∗ 4) = 3 − (5 − 4) = 2.

Definition 1.2. A binary operation ∗ is associative if for all a, b, c ∈ S

(a ∗ b) ∗ c = a ∗ (b ∗ c)

    Associativity is important.


Eg 1.3. Solve 5 + x = 2. We add −5 to get −5 + (5 + x) = −5 + 2. Now we use associativity! We rebracket to (−5 + 5) + x = −5 + 2. Thus 0 + x = −5 + 2, so x = −3. To do this, we needed

1) associativity of +

2) the existence of 0 (with 0 + x = x)

3) the existence of −5 (with −5 + 5 = 0)

Generally, suppose we have a binary operation ∗ and an equation

a ∗ x = b   (a, b ∈ S constants, x ∈ S unknown)

To be able to solve, we need

1) associativity

2) the existence of e ∈ S such that e ∗ x = x for all x ∈ S

3) the existence of a′ ∈ S such that a′ ∗ a = e

Then we can solve:

a ∗ x = b
a′ ∗ (a ∗ x) = a′ ∗ b
(a′ ∗ a) ∗ x = a′ ∗ b
e ∗ x = a′ ∗ b
x = a′ ∗ b

A group will be a structure in which we can solve equations like this.

Definition 1.3. A group (G, ∗) is a set G with a binary operation ∗ satisfying the following axioms (for all a, b, c ∈ G):

(1) if a, b ∈ G then a ∗ b ∈ G (closure)

(2) (a ∗ b) ∗ c = a ∗ (b ∗ c) (associativity)

(3) there exists e ∈ G such that e ∗ x = x ∗ e = x for all x ∈ G (identity axiom)

(4) for every x ∈ G there exists x′ ∈ G such that x ∗ x′ = x′ ∗ x = e (inverse axiom)


Proposition 1.1. In a group G, the identity element is unique, and each x ∈ G has a unique inverse.

Proof.

1) Suppose e, e′ are identity elements. So

e ∗ x = x ∗ e = x and e′ ∗ x = x ∗ e′ = x for all x

Then

e = e ∗ e′ = e′

2) Let x ∈ G and suppose x′, x′′ are inverses of x. That means

x′ ∗ x = x ∗ x′ = e and x′′ ∗ x = x ∗ x′′ = e

Then

x′ = x′ ∗ e = x′ ∗ (x ∗ x′′) = (x′ ∗ x) ∗ x′′ = e ∗ x′′ = x′′ □

Notation 1.1. e is the identity element of G, and x⁻¹ is the inverse of x. Instead of "(G, ∗) is a group", we write "G is a group under ∗". We often drop the ∗ in a ∗ b and write just ab.

Eg 1.5. In (Z, +), x⁻¹ = −x; Z is a group under addition. In (Q \ {0}, ×), x⁻¹ = 1/x.

Definition 1.4. We say (G, ∗) is a finite group if |G| is finite, and an infinite group if |G| is infinite.

Eg 1.6. All groups in example 2.4 are infinite except the last, which has size (order) 4.


Eg 1.7. Let F = R or C. Say a matrix (aij) is a matrix over F if all aij ∈ F. The set of all n × n matrices over F under matrix multiplication is not a group (there is a problem with the inverse axiom). But let us define GL(n, F) to be the set of all n × n invertible matrices over F.

Definition 1.5. Denote by GL(n, F) the set of all invertible n × n matrices over the field F:

GL(n, F) = {(aij) | 1 ≤ i, j ≤ n, aij ∈ F, (aij) invertible}

Claim: GL(n, F) is a group under matrix multiplication.

Proof. Write G = GL(n, F).

Closure: Let A, B ∈ G, so A and B are invertible. Now

(AB)⁻¹ = B⁻¹A⁻¹

since

(AB)(B⁻¹A⁻¹) = A(BB⁻¹)A⁻¹ = AIA⁻¹ = I
(B⁻¹A⁻¹)(AB) = B⁻¹(A⁻¹A)B = B⁻¹IB = I

So AB ∈ G.

Associativity: proved in M1GLA.

Identity: is the identity matrix In.

Inverses: the inverse of A is A⁻¹ (since AA⁻¹ = A⁻¹A = I). Note A⁻¹ ∈ G as it has an inverse, namely A. □

1) GL(1, R) is the set of all (a) with a ∈ R, a ≠ 0. This is just the group (R \ {0}, ×).

2) GL(2, C) is the set of matrices [a b; c d] (rows listed in order) with a, b, c, d ∈ C and ad − bc ≠ 0.

Note 1.1. Usually, in a group (G, ∗), a ∗ b is not the same as b ∗ a.

Definition 1.6. Let (G, ∗) be a group. If a ∗ b = b ∗ a for all a, b ∈ G, we call G an abelian group.

Eg 1.8. (Z, +) is abelian, as a + b = b + a for all a, b ∈ Z. So are the other groups in 2.4, and so is GL(1, F). But GL(2, R) is not abelian, since

[1 1; 0 2] [0 1; 1 0] = [1 1; 2 0]

whereas

[0 1; 1 0] [1 1; 0 2] = [0 2; 1 1]


Groups of permutations

Definition 1.7. Let S be a set. A permutation of S is a function f : S → S which is a bijection (both an injection and a surjection).

Eg 1.9. S = {1, 2, 3, 4}, f : 1 → 2, 2 → 3, 3 → 4, 4 → 1 is a permutation.

Notation 1.2.

f = ( 1 2 3 4 ; 2 4 3 1 )

(top row: symbols; bottom row: images) is the permutation 1 → 2, 2 → 4, 3 → 3, 4 → 1. Let

g = ( 1 2 3 4 ; 3 1 2 4 )

The composition f ∘ g is defined by (f ∘ g)(s) = f(g(s)). Here

f ∘ g = ( 1 2 3 4 ; 3 2 4 1 )

Recall the inverse function f⁻¹ is the inverse of f. Here

f⁻¹ = ( 1 2 3 4 ; 4 1 3 2 )

Notice f⁻¹ ∘ f = ( 1 2 3 4 ; 1 2 3 4 ) = e, the identity function.

Proposition 1.2. Let S = {1, 2, 3, ..., n} and let G be the set of all permutations of S. Then (G, ∘), where ∘ is function composition, is a group, i.e. G is a group under composition.

Proof. Notation for f ∈ G is

f = ( 1 2 ⋯ n ; f(1) f(2) ⋯ f(n) )

Closure: By M1F, if f, g are bijections S → S then f ∘ g is a bijection.

Associativity: Let f, g, h ∈ G and apply to s ∈ S. Then

(f ∘ (g ∘ h))(s) = f((g ∘ h)(s)) = f(g(h(s)))
((f ∘ g) ∘ h)(s) = (f ∘ g)(h(s)) = f(g(h(s)))

So f ∘ (g ∘ h) = (f ∘ g) ∘ h.


Identity: is e = ( 1 2 ⋯ n ; 1 2 ⋯ n ), since e ∘ f = f ∘ e = f.

Inverses: the inverse of f is

f⁻¹ = ( f(1) f(2) ⋯ f(n) ; 1 2 ⋯ n )

and f⁻¹ ∘ f = f ∘ f⁻¹ = e. □

Definition 1.8. The group of all permutations of {1, 2, ..., n} is written Sn and called the symmetric group of degree n.

Eg 1.10. S2 = { e, ( 1 2 ; 2 1 ) }. So |S2| = 2.

Eg 1.11. S3 = { ( 1 2 3 ; 1 2 3 ), ( 1 2 3 ; 2 1 3 ), ( 1 2 3 ; 3 2 1 ), ( 1 2 3 ; 1 3 2 ), ( 1 2 3 ; 2 3 1 ), ( 1 2 3 ; 3 1 2 ) }, so |S3| = 6.

Proposition 1.3. Sn is a finite group of size n!.

Proof. For f = ( 1 2 ⋯ n ; f(1) f(2) ⋯ f(n) ), the number of choices for f(1) is n, for f(2) is n − 1, for f(3) is n − 2, ..., for f(n) is 1. The total number of permutations is n(n − 1)(n − 2) ⋯ 1 = n!. □

Notation 1.3. (Multiplicative notation for groups)

If (G, ∗) is a group, we'll usually write just ab instead of a ∗ b. We can define powers:

a² = a ∗ a
a³ = a ∗ a ∗ a
aⁿ = a ∗ a ∗ ⋯ ∗ a (n factors)

When we write "Let G be a group", we mean the binary operation ∗ is understood, and we're writing ab instead of a ∗ b, etc.

Eg 1.12. In (Z, +), ab = a ∗ b = a + b and a² = a ∗ a = 2a; in general aⁿ = na.

    1.1.1 Group table

Definition 1.9. Let G be a group, with elements a, b, c, .... Form a group table:

    | a   b   c   ⋯
  a | a²  ab  ac
  b | ba  b²  bc
  ⋮ |


Eg 1.13. S3 = {e, a, a², b, ab, a²b}, where

e = ( 1 2 3 ; 1 2 3 )
a = ( 1 2 3 ; 2 3 1 )
a² = ( 1 2 3 ; 3 1 2 )
b = ( 1 2 3 ; 2 1 3 )
ab = ( 1 2 3 ; 3 2 1 )
a²b = ( 1 2 3 ; 1 3 2 )

Group table:

      | e    a    a²   b    ab   a²b
  e   | e    a    a²   b    ab   a²b
  a   | a    a²   e    ab   a²b  b
  a²  | a²   e    a    a²b  b    ab
  b   | b    a²b  ab   e    a²   a
  ab  | ab   b    a²b  a    e    a²
  a²b | a²b  ab   b    a²   a    e

Useful relations: a³ = e, ba = a²b, b² = e.
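The table can be regenerated by machine from the two-row forms of a and b. A sketch of mine (the tuple encoding and names are ad hoc, not from the notes):

```python
def mult(f, g):
    """fg = apply g, then f; permutations as tuples p with p[i-1] = f(i)."""
    return tuple(f[g[i] - 1] for i in range(len(g)))

e, a, b = (1, 2, 3), (2, 3, 1), (2, 1, 3)
a2 = mult(a, a)
names = {e: "e", a: "a", a2: "a2", b: "b",
         mult(a, b): "ab", mult(a2, b): "a2b"}

# the relations quoted under the table
assert mult(mult(a, a), a) == e    # a^3 = e
assert mult(b, b) == e             # b^2 = e
assert mult(b, a) == mult(a2, b)   # ba = a^2 b

order = [e, a, a2, b, mult(a, b), mult(a2, b)]
for x in order:
    print(names[x], ":", [names[mult(x, y)] for y in order])
```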

    1.2 Subgroups

Definition 1.10. Let (G, ∗) be a group. A subgroup of (G, ∗) is a subset of G which is itself a group under ∗.

Eg 1.14.

(Z, +) is a subgroup of (R, +). (Q \ {0}, ×) is not a subgroup of (R, +). ({1, −1}, ×) is a subgroup of ({1, −1, i, −i}, ×). ({1, i}, ×) is not a subgroup of ({1, −1, i, −i}, ×) (closure fails: i × i = −1).


    1.2.1 Criterion for subgroups

Proposition 1.4. Let G be a group.¹ Let H ⊆ G. Then H is a subgroup of G if the following conditions are true:

(1) e ∈ H (where e is the identity of G)

(2) if h, k ∈ H then hk ∈ H (H is closed)

(3) if h ∈ H then h⁻¹ ∈ H

Proof. Assume (1)-(3). Check the group axioms for H:

Closure: true by (2)

Associativity: true by associativity for G

Identity: by (1)

Inverses: by (3) □

Eg 1.15. Let G = GL(2, R) (2 × 2 invertible matrices over R). Let

H = { [1 n; 0 1] | n ∈ Z }

Claim: H is a subgroup of G.

Proof. Check (1)-(3) of the previous proposition:

(1) e = [1 0; 0 1] ∈ H

(2) Let h = [1 n; 0 1], k = [1 p; 0 1]. Then hk = [1 n + p; 0 1] ∈ H.

(3) Let h = [1 n; 0 1]. Then h⁻¹ = [1 −n; 0 1] ∈ H. □

¹So ∗ is understood, and we write ab instead of a ∗ b.
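A numerical spot-check of (1)-(3) for this H (a sketch of mine; `mat_mult` and `H` are ad-hoc helper names, not from the notes):

```python
def mat_mult(h, k):
    """2x2 matrix product over the integers."""
    return [[sum(h[i][t] * k[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def H(n):
    """The element [1 n; 0 1] of H."""
    return [[1, n], [0, 1]]

assert H(0) == [[1, 0], [0, 1]]        # (1) the identity lies in H
assert mat_mult(H(3), H(5)) == H(8)    # (2) closure: H(n)H(p) = H(n+p)
assert mat_mult(H(3), H(-3)) == H(0)   # (3) the inverse of H(n) is H(-n)
print("subgroup criterion checks pass")
```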


    1.3 Cyclic subgroups

Let G be a group and let a ∈ G. Recall a¹ = a, a² = aa, .... Negative powers:

a⁰ = e
a⁻² = a⁻¹a⁻¹
a⁻ⁿ = a⁻¹ ⋯ a⁻¹ (n factors)

Note 1.2. All the powers aⁿ (n ∈ Z) lie in G (by closure).

Lemma 1.1. For any m, n ∈ Z,

aᵐaⁿ = aᵐ⁺ⁿ

Proof. For m, n > 0:

aᵐaⁿ = (a ⋯ a)(a ⋯ a)   [m factors, then n factors]   = aᵐ⁺ⁿ

For m ≥ 0, n < 0:

aᵐaⁿ = (a ⋯ a)(a⁻¹ ⋯ a⁻¹)   [m factors, then −n factors]   = aᵐ⁺ⁿ

after cancelling. Similarly for m < 0, n ≥ 0. Finally, when m, n < 0:

aᵐaⁿ = (a⁻¹ ⋯ a⁻¹)(a⁻¹ ⋯ a⁻¹)   [−m factors, then −n factors]   = aᵐ⁺ⁿ □

Proposition 1.5. Let G be a group and let a ∈ G. Define

A = {aⁿ | n ∈ Z} = {..., a⁻², a⁻¹, e, a, a², ...}

Then A is a subgroup of G.

Proof. Check (1)-(3) of 2.4:


(1) e = a⁰ ∈ A

(2) if aᵐ, aⁿ ∈ A then aᵐaⁿ = aᵐ⁺ⁿ ∈ A

(3) if aⁿ ∈ A then (aⁿ)⁻¹ = a⁻ⁿ ∈ A □

Definition 1.11. Write A = ⟨a⟩, called the cyclic subgroup of G generated by a. So for each element a ∈ G we get a cyclic subgroup ⟨a⟩ of G.

Eg 1.16. (1) G = (Z, +). What is the cyclic subgroup ⟨3⟩?

Well, 3¹ = 3, 3² = 3 + 3 = 6, 3ⁿ = 3n, 3⁻¹ = −3, 3⁻ⁿ = −3n. So ⟨3⟩ = {3n | n ∈ Z}. Similarly ⟨1⟩ = {n | n ∈ Z} = Z.

(2) G = S3, a = ( 1 2 3 ; 2 3 1 ). What is ⟨a⟩?

Well,

a⁰ = e
a¹ = a
a² = ( 1 2 3 ; 3 1 2 )
a³ = e
a⁴ = a
a⁵ = a²
...
a⁻¹ = a³a⁻¹ = a²
a⁻² = a
...

Hence ⟨a⟩ = {aⁿ | n ∈ Z} = {e, a, a²}.

Now consider ⟨b⟩, where b = ( 1 2 3 ; 2 1 3 ). Here b⁰ = e, b¹ = b, b² = e, .... So ⟨b⟩ = {e, b}.

(3) All the cyclic subgroups of S3 = {e, a, a², b, ab, a²b}:

⟨e⟩ = {e}
⟨a⟩ = {e, a, a²}
⟨a²⟩ = {e, a, a²}
⟨b⟩ = {e, b}
⟨ab⟩ = {e, ab}
⟨a²b⟩ = {e, a²b}
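These subgroups can be listed by repeated multiplication until the powers cycle. A sketch of mine (not part of the notes):

```python
def mult(f, g):
    # permutations as tuples p with p[i-1] = f(i); fg = apply g, then f
    return tuple(f[g[i] - 1] for i in range(len(g)))

def cyclic_subgroup(a):
    """Collect e, a, a^2, ... until the powers of a start repeating."""
    e = tuple(range(1, len(a) + 1))
    powers, x = [e], a
    while x != e:
        powers.append(x)
        x = mult(a, x)
    return powers

a, b = (2, 3, 1), (2, 1, 3)       # a = (1 2 3), b = (1 2) in cycle notation
print(len(cyclic_subgroup(a)))    # 3: <a> = {e, a, a^2}
print(len(cyclic_subgroup(b)))    # 2: <b> = {e, b}
print(len(cyclic_subgroup(mult(a, b))))   # 2: <ab> = {e, ab}
```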


Definition 1.12. Say a group G is a cyclic group if there exists an element a ∈ G such that

G = ⟨a⟩ = {aⁿ | n ∈ Z}

Call a a generator for G.

Eg 1.17.

(1) (Z, +) = ⟨1⟩, so (Z, +) is cyclic with generator 1.

(2) ({1, −1, i, −i}, ×) is cyclic with generator i, since ⟨i⟩ = {i⁰, i¹, i², i³} = {1, i, −1, −i}. Another generator is −i, but 1 and −1 are not generators.

(3) S3 is not cyclic, as none of its 5 cyclic subgroups is the whole of S3.

For any n ∈ N there exists a cyclic group of size n (having n elements), namely

Cn = {x ∈ C | xⁿ = 1},

the complex n-th roots of unity, under multiplication. By M1F, we know Cn = {1, ω, ω², ..., ωⁿ⁻¹}, where ω = e^(2πi/n). So Cn = ⟨ω⟩, a cyclic subgroup of (C \ {0}, ×).

    1.3.1 Order of an element

Definition 1.13. Let G be a group and let a ∈ G. The order of a, written o(a), is the smallest positive integer k such that aᵏ = e. So o(a) = k means aᵏ = e and aⁱ ≠ e for i = 1, ..., k − 1. If no such k exists, we say a has infinite order and we write o(a) = ∞.

Eg 1.18.

(1) e has order 1, and is the only such element.

(2) G = S3, a = ( 1 2 3 ; 2 3 1 ). Then a¹ = a, a² = ( 1 2 3 ; 3 1 2 ), a³ = ( 1 2 3 ; 1 2 3 ) = e. So o(a) = 3.

For b = ( 1 2 3 ; 2 1 3 ), b¹ ≠ e, b² = e, so o(b) = 2. Full list:

o(e) = 1
o(a) = 3
o(a²) = 3
o(b) = 2
o(ab) = 2
o(a²b) = 2

(3) G = (Z, +). What is o(3)? In G, e = 0 and 3ⁿ = n·3. So 3ⁿ ≠ e for any n ∈ N, so o(3) = ∞.

(4) G = GL(2, C), A = [i 0; 0 e^(2πi/3)]. Then

Aᵏ = [iᵏ 0; 0 e^(2πik/3)]

The smallest k for which this is the identity is 12, so o(A) = 12.
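Example (4) can be confirmed by brute force: on the diagonal, i has multiplicative order 4 and e^(2πi/3) has order 3, so we search for the first k killed by both. A sketch of mine (not from the notes):

```python
from math import lcm

# A = diag(i, e^{2 pi i/3}): A^k = I exactly when i^k = 1 (i.e. 4 | k)
# and e^{2 pi i k/3} = 1 (i.e. 3 | k).
k = 1
while not (k % 4 == 0 and k % 3 == 0):
    k += 1
print(k)               # 12
assert k == lcm(4, 3)  # o(A) = lcm(4, 3) = 12
```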

Proposition 1.6. Let G be a group and a ∈ G. The number of elements in the cyclic subgroup generated by a is equal to o(a):

|⟨a⟩| = o(a)

Proof.

(1) Suppose o(a) = k, finite. This means aᵏ = e, but aⁱ ≠ e for 1 ≤ i ≤ k − 1. Write A = ⟨a⟩ = {aⁿ | n ∈ Z}. Then A contains

e, a, a², ..., aᵏ⁻¹

These are all different elements of G, since for 1 ≤ i < j ≤ k − 1,

aⁱ = aʲ ⟹ a⁻ⁱaⁱ = a⁻ⁱaʲ ⟹ e = aʲ⁻ⁱ ⟹ o(a) ≤ j − i < k, contradiction.

Hence A contains e, a, ..., aᵏ⁻¹, all distinct, so

|A| ≥ k

We now show that every element of A is one of e, a, ..., aᵏ⁻¹. Let aⁿ ∈ A. Write

n = qk + r, 0 ≤ r ≤ k − 1

Then

aⁿ = a^(qk+r) = a^(qk)aʳ = (aᵏ)^q aʳ = e^q aʳ = aʳ

So aⁿ = aʳ ∈ {e, a, a², ..., aᵏ⁻¹}. We've shown

A = {e, a, a², ..., aᵏ⁻¹}

so |A| = k = o(a).


(2) Suppose o(a) = ∞. This means

aⁱ ≠ e for i ≥ 1

If i < j then aⁱ ≠ aʲ, since

aⁱ = aʲ ⟹ e = aʲ⁻ⁱ,

a contradiction. Then

A = {..., a⁻², a⁻¹, e, a, a², ...}

and all these are different elements of G. So |A| = ∞ = o(a). □

Eg 1.19.

(1) G = S3, a = ( 1 2 3 ; 2 3 1 ). Then ⟨a⟩ = {e, a, a²}, of size 3, and o(a) = 3.

(2) G = (Z, +). Then ⟨3⟩ = {3n | n ∈ Z} is infinite, and o(3) = ∞.

(3) Cn = ⟨ω⟩ = {1, ω, ..., ωⁿ⁻¹}, of size n, and o(ω) = n.

1.4 More on the symmetric groups Sn

Eg 1.20. Let f = ( 1 2 3 4 5 6 7 8 ; 4 5 6 3 2 7 1 8 ) ∈ S8. What are f², f⁵?

We need better notation to see the answers quickly. Observe that the numbers 1 → 4 → 3 → 6 → 7 → 1 are in a cycle, as are the numbers 2 → 5 → 2, and 8 → 8. We will write f = (1 4 3 6 7)(2 5)(8). These are the cycles of f. Each symbol in the first cycle goes to the next, except for the last, 7, which goes back to the first, 1. The cycles are disjoint: they have no symbols in common. Call this the cycle notation for f.

Definition 1.14. In general, an r-cycle is a permutation

(a1 a2 ... ar)

which sends a1 → a2 → ⋯ → ar → a1.

Eg 1.21. We can easily go from cycle notation back to the original, e.g.

g = (1 5 3)(2 4)(6 7) ∈ S7
g = ( 1 2 3 4 5 6 7 ; 5 4 1 2 3 7 6 )


Proposition 1.7. Every permutation f in Sn can be expressed in cycle notation, i.e. as a product of disjoint cycles.

Proof. The following procedure works. Start with 1, and write down the sequence

1, f(1), f²(1), ..., f^(r−1)(1)

until the first repeat, f^r(1). Then in fact f^r(1) = 1, since if f^r(1) = f^i(1) with i ≥ 1, then

f^r(1) = f^i(1) ⟹ f^(r−i)(1) = 1

so f^(r−i)(1), with r − i < r, would already be a repeat, a contradiction, as f^r(1) is the first repeat. So we have the r-cycle

(1 f(1) ⋯ f^(r−1)(1)),

the first cycle of f.

Second cycle: pick a symbol i not in the first cycle, and write

i, f(i), f²(i), ..., f^(s−1)(i)

where f^s(i) = i. Then this is the second cycle of f. This cycle is disjoint from the first, since if not, say f^j(i) = k with k in the first cycle, then f^(s−j)(k) = f^s(i) = i would be in the first cycle, a contradiction.

Now carry on: pick a symbol not in the first two cycles and repeat to get the third cycle, and carry on until we have used all the symbols 1, ..., n. So

f = (1 f(1) ⋯ f^(r−1)(1))(i f(i) ⋯ f^(s−1)(i)) ⋯

a product of disjoint cycles. □

Note 1.3. Cycle notation is not quite unique, e.g. (1 2 3 4) can be written as (2 3 4 1), and (1 2)(3 4 5) = (3 4 5)(1 2). The notation is unique apart from such changes.

    Eg 1.22.

    1. The elements of S3 in cycle notation

e = (1)(2)(3)
a = (1 2 3)
a² = (1 3 2)
b = (1 2)(3)
ab = (1 3)(2)
a²b = (2 3)(1)


    2. For disjoint cycles, order of multiplication does not matter, e.g.

    (1 2)(3 4 5) = (3 4 5)(1 2)

    For non-disjoint cycles it does matter, e.g.

(1 2)(1 3) ≠ (1 3)(1 2)

    3. Multiplication is easy using cycle notation, e.g.

f = (1 2 3 5 4) ∈ S5, g = (2 4)(1 5 3) ∈ S5

    then

f ∘ g = (1 4 3 2)(5)
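The procedure in the proof of Proposition 1.7 translates directly into code. A sketch of mine (not from the notes), with permutations stored as dicts:

```python
def cycles(f):
    """Disjoint-cycle decomposition of f, given as a dict on {1,..,n}."""
    seen, result = set(), []
    for start in sorted(f):
        if start in seen:
            continue
        cycle, s = [], start
        while s not in seen:       # follow start, f(start), f^2(start), ...
            seen.add(s)
            cycle.append(s)
            s = f[s]
        result.append(tuple(cycle))
    return result

# f from Eg 1.20
f = {1: 4, 2: 5, 3: 6, 4: 3, 5: 2, 6: 7, 7: 1, 8: 8}
print(cycles(f))   # [(1, 4, 3, 6, 7), (2, 5), (8,)]
```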

Definition 1.15. Let g = (1 2 3)(4 5)(6 7)(8)(9) ∈ S9. The cycle-shape of g is

(3, 2, 2, 1, 1)

i.e. the sequence of numbers giving the cycle-lengths of g in descending order. Abbreviate:

(3, 2², 1²)

Eg 1.23. How many permutations of each cycle-shape are there in S4?

cycle-shape | e.g.          | number in S4
(1⁴)        | e             | 1
(2, 1²)     | (1 2)(3)(4)   | C(4,2) = 6
(3, 1)      | (1 2 3)(4)    | C(4,3) × 2 = 8
(4)         | (1 2 3 4)     | 3! = 6
(2²)        | (1 2)(3 4)    | 3

Total: 24 = 4!.
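The counts in the table can be verified exhaustively; a sketch of mine tallying cycle-shapes over all of S4:

```python
from itertools import permutations
from collections import Counter

def shape(p):
    """Cycle-shape of p (tuple form, p[i-1] = f(i)), lengths descending."""
    seen, lengths = set(), []
    for start in range(1, len(p) + 1):
        if start in seen:
            continue
        length, s = 0, start
        while s not in seen:
            seen.add(s)
            length += 1
            s = p[s - 1]
        lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

counts = Counter(shape(p) for p in permutations(range(1, 5)))
print(sorted(counts.items()))
# shapes (1,1,1,1): 1, (2,1,1): 6, (2,2): 3, (3,1): 8, (4,): 6
```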

1.4.1 Order of a permutation

Recall that the order o(f) of f ∈ Sn is the smallest positive integer k such that fᵏ = e.


Eg 1.24. f = (1 2 3 4), a 4-cycle. Then

f¹ = f
f² = (1 3)(2 4)
f³ = (1 4 3 2)
f⁴ = e

So o(f) = 4. Similarly, if f = (1 2 ... r) then o(f) = r.

Eg 1.25. g = (1 2 3)(4 5 6 7). What is o(g)?

g² = (1 2 3)(4 5 6 7)(1 2 3)(4 5 6 7) = (1 2 3)²(4 5 6 7)²   (disjoint cycles commute)

Similarly

gⁱ = (1 2 3)ⁱ(4 5 6 7)ⁱ

To make gⁱ = e, we need i to be divisible by 3 (to get rid of (1 2 3)ⁱ) and by 4 (to get rid of (4 5 6 7)ⁱ). So o(g) = lcm(3, 4) = 12.

The same argument gives:

Proposition 1.8. The order of a permutation in cycle notation is the least common multiple of the cycle-lengths.

Eg 1.26. The order of (1 2)(3 4 5 6) is lcm(2, 4) = 4. The order of (1 3)(3 4 5 6) is not 4 (the cycles are not disjoint).

Eg 1.27. Take a pack of 8 cards. Shuffle by dividing into two halves and interlacing, so that if the original order is 1, 2, ..., 8 then the new order is 1, 5, 2, 6, 3, 7, 4, 8. How many shuffles bring the cards back to the original order?

This is the permutation s ∈ S8:

s = ( 1 2 3 4 5 6 7 8 ; 1 5 2 6 3 7 4 8 )

In cycle notation s = (1)(2 5 3)(4 6 7)(8). So the order of s is o(s) = lcm(3, 3, 1, 1) = 3, so 3 shuffles are required.
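The shuffle count is easy to confirm by simulation (a sketch of mine, not part of the notes):

```python
# s from Eg 1.27: one shuffle sends arrangement (c1,...,c8) to (s[c1],...,s[c8])
s = {1: 1, 2: 5, 3: 2, 4: 6, 5: 3, 6: 7, 7: 4, 8: 8}

start = tuple(range(1, 9))
deck, shuffles = tuple(s[c] for c in start), 1
while deck != start:
    deck = tuple(s[c] for c in deck)
    shuffles += 1
print(shuffles)   # 3 = lcm of the cycle lengths of (1)(2 5 3)(4 6 7)(8)
```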

1.5 Lagrange's Theorem

Recall that G being a finite group means G has a finite number of elements. The size of G is |G|, e.g. |S3| = 6.

Theorem 1.1. Let G be a finite group. If H is any subgroup of G, then |H| divides |G|.

Eg 1.28. Subgroups of S3 have size 1, 2, 3 or 6.

Note 1.4. It does not work the other way round: if a is a number dividing |G|, then there may well not exist a subgroup of G of size a.


1.5.1 Consequences of Lagrange's Theorem

Corollary 1. If G is a finite group and a ∈ G then o(a) divides |G|.

Proof. Let H = ⟨a⟩, the cyclic subgroup of G generated by a. By 1.6, |H| = o(a), so by Lagrange, o(a) divides |G|. □

Corollary 2. Let G be a finite group and let n = |G|. If a ∈ G, then aⁿ = e.

Proof. Let k = o(a). By Corollary 1, k divides n. Say n = kr. So aⁿ = (aᵏ)ʳ = eʳ = e. □

Corollary 3. If |G| is a prime number, then G is cyclic.

Proof. Let |G| = p, prime. Pick a ∈ G with a ≠ e. By Lagrange, the cyclic subgroup ⟨a⟩ has size dividing p. It contains e and a, so has size ≥ 2, therefore has size p. As |G| = p, this implies G = ⟨a⟩, cyclic. □

Eg 1.29. Subgroups of S3: these have size 1 (just {e}), 2 or 3 (cyclic, by Corollary 3), or 6 (S3 itself). So we know all the subgroups of S3.

    1.6 Applications to number theory

Definition 1.16. Fix a positive integer m ∈ N. For any integer r, the residue class of r modulo m, denoted [r]m, is

[r]m = {km + r | k ∈ Z}

Eg 1.30.

[0]5 = {5k | k ∈ Z}
[1]5 = {..., −9, −4, 1, 6, 11, ...}
[−2]5 = [3]5 = [8]5

Since every integer is congruent to one of 0, 1, 2, ..., m − 1 modulo m,

[0]m ∪ [1]m ∪ ⋯ ∪ [m − 1]m = Z

and every integer is in exactly one of these residue classes.

Proposition 1.9.

[a]m = [b]m ⟺ a ≡ b mod m

Proof.

(⟹) Suppose [a]m = [b]m. As a ∈ [a]m, this implies a ∈ [b]m, so a ≡ b mod m.

(⟸) Suppose a ≡ b mod m. Now

x ≡ a mod m ⟺ x ≡ b mod m

(as ≡ is an equivalence relation). So

x ∈ [a]m ⟺ x ∈ [b]m

Therefore [a]m = [b]m. □

Eg 1.31. [17]9 = [−19]9.

Definition 1.17. Write Zm for the set of all the residue classes

[0]m, [1]m, ..., [m − 1]m

From now on we'll usually drop the subscript m and write [r] = [r]m.

Definition 1.18. Define +, × on Zm by

[a] + [b] = [a + b]
[a] × [b] = [ab]

This is well defined, as

[a] = [a′], [b] = [b′] ⟹ [a + b] = [a′ + b′], [ab] = [a′b′]

Eg 1.32. In Z5:

[2] + [4] = [1]
[3] + [3] = [1]
[3] × [3] = [4]

    1.6.1 Groups

Eg 1.33. (Zm, +) is a group. What about (Zm, ×)? The identity would be [1]. So [0] will have no inverse (as [0] × [a] = [0]). So let

Zm* = Zm \ {[0]}

For which m is (Zm*, ×) a group?

Eg 1.34.

Z2* = {[1]}. This is a group.

Z3* = {[1], [2]}:

  ×  | [1] [2]
 [1] | [1] [2]
 [2] | [2] [1]

Compare with S2 to see it is a group.

Z4*:

  ×  | [1] [2] [3]
 [1] | [1] [2] [3]
 [2] | [2] [0] ...

Here [2] ∈ Z4*, but [2] × [2] = [0] ∉ Z4*, so closure fails.

Theorem 1.2. (Zm*, ×) is a group iff m is a prime number.

Proof.

(⟹) Suppose Zm* is a group. If m is not a prime, then

m = ab, 1 < a, b < m

so [a], [b] ∈ Zm* (neither is [0]), but

[a] × [b] = [ab] = [m] = [0] ∉ Zm*

This contradicts closure. So m is prime.

(⟸) Suppose m is a prime; write m = p. We show that Zp* is a group.

Closure: Let [a], [b] ∈ Zp*. Then [a], [b] ≠ [0], so p ∤ a and p ∤ b. Then p ∤ ab (as p is prime; a result from M1F). So

[a] × [b] = [ab] ≠ [0]

Thus [a] × [b] ∈ Zp*.


Associativity:

([a] × [b]) × [c] = [ab] × [c] = [(ab)c]
[a] × ([b] × [c]) = [a] × [bc] = [a(bc)]

These are equal as (ab)c = a(bc) for a, b, c ∈ Z.

Identity: is [1], as [a] × [1] = [1] × [a] = [a].

Inverses: Let [a] ∈ Zp*. We want to find [a′] such that [a] × [a′] = [a′] × [a] = [1], i.e.

[aa′] = [1], i.e. aa′ ≡ 1 mod p

Here's how. Well, [a] ≠ [0], so p ∤ a. As p is prime, hcf(p, a) = 1. By M1F, there exist integers s, t ∈ Z with

sp + ta = 1

Then

ta = 1 − sp ≡ 1 mod p

So

[t] × [a] = [1]

Then [t] ∈ Zp* ([t] ≠ [0]) and [t] = [a]⁻¹. □

So Zp* (p prime):

(1) is abelian

(2) has p − 1 elements

Eg 1.35. Z5* = {[1], [2], [3], [4]}. Is Z5* cyclic? Well,

[2]² = [4], [2]³ = [3], [2]⁴ = [1]

So Z5* = ⟨[2]⟩.

Eg 1.36. In the group Z31*, what is [7]⁻¹? From the proof above, we want to find s, t with

7s + 31t = 1


Use the Euclidean algorithm:

31 = 4 × 7 + 3
7 = 2 × 3 + 1

So

1 = 7 − 2 × 3
  = 7 − 2(31 − 4 × 7)
  = 9 × 7 − 2 × 31

So [7]⁻¹ = [9].
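The back-substitution above is the extended Euclidean algorithm; a sketch in Python (mine; `ext_gcd` is an ad-hoc name, not from the notes):

```python
def ext_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) and g = s*a + t*b."""
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

g, s, t = ext_gcd(7, 31)
print(g, s, t)            # 1 9 -2, i.e. 9*7 - 2*31 = 1
assert (s * 7) % 31 == 1  # so [7]^(-1) = [9] in Z31*
```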

1.7 Applications of the group Zp*

Theorem 1.3 (Fermat's Little Theorem). Let p be a prime, and let n be an integer not divisible by p. Then

n^(p−1) ≡ 1 mod p

Proof. Work in the group

Zp* = {[1], ..., [p − 1]}

As p ∤ n, [n] ≠ [0], so [n] ∈ Zp*. Now Corollary 2 says: if |G| = k then aᵏ = e for all a ∈ G. Hence

[n]^(p−1) = identity of Zp* = [1]

Since

[n]^(p−1) = [n] × ⋯ × [n] = [n^(p−1)],

we get

[n^(p−1)] = [1] ⟺ n^(p−1) ≡ 1 mod p

(from Prop. 1.9). □

Corollary 4. Let p be prime. Then for all integers n,

nᵖ ≡ n mod p


Proof. If p ∤ n then by FLT

n^(p−1) ≡ 1 mod p ⟹ nᵖ ≡ n mod p

If p | n then both nᵖ and n are congruent to 0 mod p. □

Eg 1.37. p = 5: 13⁴ ≡ 1 mod 5. p = 17: 62¹⁶ ≡ 1 mod 17.

Eg 1.38. Find the remainder when 6⁸² is divided by 17.

6¹⁶ ≡ 1 mod 17 (FLT)
(6¹⁶)⁵ = 6⁸⁰ ≡ 1 mod 17
6⁸² = 6⁸⁰ × 6² ≡ 6² ≡ 2 mod 17
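Python's built-in three-argument pow does modular exponentiation and confirms these computations (a quick check of mine, not part of the notes):

```python
p = 17
assert pow(6, p - 1, p) == 1   # FLT: 6^16 = 1 (mod 17)
assert pow(6, 80, p) == 1      # (6^16)^5
assert pow(6, 82, p) == 2      # the remainder found above
print(pow(6, 82, p))           # 2
```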

    Second application.

    1.7.1 Mersenne Primes

Definition 1.19. A prime number p is called a Mersenne prime if p = 2ⁿ − 1 for some n ∈ N.

Eg 1.39.

2² − 1 = 3
2³ − 1 = 7
2⁴ − 1 = 15
2⁵ − 1 = 31
2⁷ − 1 = 127

The largest known primes are Mersenne primes. The largest known (as of 2/2/06) is

2^30402457 − 1

Connection with perfect numbers:

Definition 1.20. A positive integer N is perfect if N is equal to the sum of its positive divisors other than N itself (including 1, but not N).

Eg 1.40.

6 = 1 + 2 + 3
28 = 1 + 2 + 4 + 7 + 14


Theorem 1.4 (Euler).

(1) If 2ⁿ − 1 is prime then 2ⁿ⁻¹(2ⁿ − 1) is perfect.

(2) Every even perfect number is of this form.

Proof. (1) Sheet 4. (2) Harder; look it up. □

It is still unsolved: is there an odd perfect number?

1.7.2 How to find Mersenne Primes

Proposition 1.10. If 2ⁿ − 1 is prime, then n must be prime.

Proof. Suppose n is not prime. So

n = ab, 1 < a, b < n

Then

2ⁿ − 1 = 2^(ab) − 1 = (2ᵃ − 1)(2^(a(b−1)) + 2^(a(b−2)) + ⋯ + 2ᵃ + 1)

(using xᵇ − 1 = (x − 1)(x^(b−1) + ⋯ + x + 1) with x = 2ᵃ). So 2ⁿ − 1 has a factor 2ᵃ − 1 > 1, so is not prime. Hence 2ⁿ − 1 prime implies n prime. □

Eg 1.41. We know

2² − 1, 2³ − 1, 2⁵ − 1, 2⁷ − 1

are prime. Next cases:

2¹¹ − 1, 2¹³ − 1, 2¹⁷ − 1

Are these prime? We will answer this using the group Zp*. We will need:

Proposition 1.11. Let G be a group, and let a ∈ G. Suppose aⁿ = e. Then o(a) | n.


Proof. Let k = o(a). Write

n = qk + r, 0 ≤ r < k

Then

e = aⁿ = a^(qk+r) = a^(qk)aʳ = (aᵏ)^q aʳ = e^q aʳ = aʳ

So aʳ = e. Since k is the smallest positive integer such that aᵏ = e, and 0 ≤ r < k, this forces r = 0. Hence k = o(a) divides n. □

Proposition 1.12. Let N = 2ᵖ − 1, p prime. Let q be prime, and suppose q | N. Then q ≡ 1 mod p.

Proof. q | N means N ≡ 0 mod q, i.e.

2ᵖ ≡ 1 mod q

This means that

[2]ᵖ = [1] in Zq*

We know that Zq* is a group of order q − 1. We also know (by Prop. 1.11) that o([2]) in Zq* divides p, so is 1 or p, as p is prime.

If o([2]) = 1, then

[2] = [1] in Zq*

that is,

2 ≡ 1 mod q ⟹ 1 ≡ 0 mod q

so q | 1, a contradiction.

Hence we must have o([2]) = p. By Corollary 1,

o([2]) divides |Zq*| = q − 1

That is, p divides q − 1:

q − 1 ≡ 0 mod p ⟹ q ≡ 1 mod p □


Test for a Mersenne prime N = 2ᵖ − 1:

List all the primes q with q ≡ 1 mod p and q ≤ √N, and check, one by one, to see if any divide N. If none of them divide N, then N is prime.

Eg 1.42. p = 11, N = 2¹¹ − 1 = 2047, √N < 50. Which primes q less than 50 have q ≡ 1 mod 11? We check through all numbers congruent to 1 mod 11:

12, 23, 34, 45

The only prime less than 50 that can possibly divide 2047 is 23. Now we check to see if 23 | 2¹¹ − 1, i.e. if 2¹¹ ≡ 1 mod 23:

2⁵ = 32 ≡ 9 mod 23
2¹⁰ = (2⁵)² ≡ 9² = 81 ≡ 12 mod 23
2¹¹ ≡ 2 × 12 = 24 ≡ 1 mod 23

Conclusion: 2¹¹ − 1 is not a prime; it has a factor 23.

Eg 1.43. 2¹³ − 1 is prime: Exercise sheet.

1.8 Proof of Lagrange's Theorem

Now we have to prove Lagrange's Theorem.

Theorem 1.5. Let G be a finite group of order |G|, with a subgroup H of order |H| = m. Then m divides |G|.

Note 1.5. The idea: write H = {h₁, ..., hₘ}. Then we divide G into blocks:

  H  | Hx  | Hy  | ⋯
  h₁ | h₁x | h₁y | ⋯
  h₂ | h₂x | h₂y | ⋯
  ⋮  |
  (blocks 1, 2, 3, ..., r)

    We want the blocks to have the following three properties

    (1) Each block has m distinct elements

    (2) No element of G belongs to two blocks

(3) Every element of G belongs to (exactly) one block


    Then |G| is the total number of elements listed in the blocks, i.e. rm, so m | |G|.

    Definition 1.21. For x ∈ G and H a subgroup of G, define the right coset

    Hx = {hx | h ∈ H} = {h1x, h2x, . . . , hmx}

    The official name for a block is a right coset.

    Note 1.6. Hx ⊆ G.

    Eg 1.44. G = S3, H = ⟨a⟩ = {e, a, a²}, a = (1 2 3).

    H = He = Ha = Ha², e.g. Ha² = {ea², aa², a²a²} = {a², e, a} = H.

    Take b = (1 2), so b² = e. Then

    Hb = {eb, ab, a²b} = {b, ab, a²b}

    (the elements e, a, a² are sent to b, ab, a²b respectively)

    Lemma 1.2. For any x in G, |Hx| = m.

    Proof. By definition, we have

    Hx = {h1x, h2x, . . . , hmx}

    These elements are all different, as

    hᵢx = hⱼx  ⟹  hᵢxx⁻¹ = hⱼxx⁻¹  ⟹  hᵢ = hⱼ

    So |Hx| = m. □

    Lemma 1.3. If x, y ∈ G then either Hx = Hy or Hx ∩ Hy = ∅.

    Proof. Suppose

    Hx ∩ Hy ≠ ∅

    We will show this implies Hx = Hy. We can choose an element a ∈ Hx ∩ Hy. Then

    a = hᵢx and a = hⱼy


    for some hᵢ, hⱼ ∈ H. Then

    hᵢx = hⱼy  ⟹  x = hᵢ⁻¹hⱼy

    Then for any h ∈ H, hx = h(hᵢ⁻¹hⱼ)y.

    As H is a subgroup, hhᵢ⁻¹hⱼ ∈ H. Hence hx ∈ Hy.

    This shows Hx ⊆ Hy. Similarly,

    hᵢx = hⱼy  ⟹  y = hⱼ⁻¹hᵢx

    so for any h ∈ H, hy = h(hⱼ⁻¹hᵢ)x ∈ Hx.

    So Hy ⊆ Hx. We conclude Hx = Hy. □

    Lemma 1.4. Let x ∈ G. Then x lies in the right coset Hx.

    Proof. As H is a subgroup, e ∈ H. So x = ex ∈ Hx. □

    Theorem 1.6. Let G be a finite group of order |G|, with a subgroup H of order |H| = m. Then m divides |G|.

    Proof. By 1.4, G is equal to the union of all the right cosets of H, i.e.

    G = ⋃_{x∈G} Hx

    Some of these right cosets will be equal (e.g. G = S3, H = ⟨a⟩; then H = He = Ha = Ha²).

    Let the list of different right cosets be

    Hx1, . . . , Hxᵣ

    Then
    G = Hx1 ∪ Hx2 ∪ · · · ∪ Hxᵣ

    and Hxᵢ ≠ Hxⱼ if i ≠ j (e.g. in G = S3, G = H ∪ Hb). By 1.3, Hxᵢ ∩ Hxⱼ = ∅ if i ≠ j. Picture:

    G = Hx1 ∪ Hx2 ∪ · · · ∪ Hxᵣ (disjoint) (1.1)


    So |G| = |Hx1| + · · · + |Hxᵣ|. By 1.2,

    |Hxᵢ| = m = |H|

    So

    |G| = rm = r|H|

    Therefore |H| divides |G|. □

    Proposition 1.13. Let G be a finite group, and H a subgroup of G. Let

    r = |G| / |H|

    Then there are exactly r different right cosets of H in G, say

    Hx1, . . . , Hxᵣ

    They are disjoint, and
    G = Hx1 ∪ · · · ∪ Hxᵣ

    Definition 1.22. The integer r = |G| / |H| is called the index of H in G, written

    r = |G : H|

    Eg 1.45.

    (1) G = S3, H = ⟨a⟩ = {e, a, a²}. Index |G : H| = 6/3 = 2. There are 2 right cosets H, Hb, and G = H ∪ Hb.

    (2) G = S3, K = ⟨b⟩ = {e, b} where b = (1 2)(3). Index |G : K| = 6/2 = 3. So there are 3 right cosets; they are

    Ke = K = {e, b}
    Ka = {a, ba} = {a, a²b}
    Ka² = {a², ba²} = {a², ab}
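    These coset computations can be checked by brute force. A small sketch (the representation is ours: permutations of {0, 1, 2} as tuples, with the product (h·x)(i) = h(x(i)); the notes' (1 2 3) is cycle notation on {1, 2, 3}):

```python
from itertools import permutations

def mult(h, x):
    """Product h*x of two permutations of {0,1,2}: (h*x)(i) = h(x(i))."""
    return tuple(h[x[i]] for i in range(3))

S3 = list(permutations(range(3)))   # the 6 elements of S3
K = [(0, 1, 2), (1, 0, 2)]          # {e, b} with b the transposition (1 2)

# distinct right cosets Kx as x runs over the group
cosets = {frozenset(mult(h, x) for h in K) for x in S3}

print(len(cosets))                  # |G : K| = 6/2 = 3 distinct cosets
```

    The three cosets are disjoint, each of size |K| = 2, and cover S3, exactly as the proof of Lagrange's Theorem requires.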


    Chapter 2

    Vector Spaces and Linear Algebra

    Recall

    Rⁿ = {(x1, x2, . . . , xn) | xᵢ ∈ R}

    Basic operations on Rⁿ:

    addition (x1, . . . , xn) + (y1, . . . , yn) = (x1 + y1, . . . , xn + yn)

    scalar multiplication λ(x1, . . . , xn) = (λx1, . . . , λxn), λ ∈ R

    These operations satisfy the following rules:

    Addition rules

    A1 u + (v + w) = (u + v) + w   associativity

    A2 v + 0 = 0 + v = v identity

    A3 v + (−v) = 0   inverses
    A4 u + v = v + u   abelian

    (These say (Rn, +) is an abelian group)

    Scalar multiplication rules

    S1 λ(v + w) = λv + λw

    S2 (λ + μ)v = λv + μv

    S3 λ(μv) = (λμ)v

    S4 1v = v

    These are easily proved for Rn:

    Eg 2.1.


    A1

    u + (v + w) = (u1, . . . , un) + ((v1, . . . , vn) + (w1, . . . , wn))
    = (u1, . . . , un) + (v1 + w1, . . . , vn + wn)
    = (u1 + (v1 + w1), . . . )
    = ((u1 + v1) + w1, . . . )
    = ((u1, . . . ) + (v1, . . . )) + (w1, . . . )
    = (u + v) + w

    S3

    λ(μv) = λ(μv1, . . . , μvn)
    = (λ(μv1), . . . , λ(μvn))
    = ((λμ)v1, . . . , (λμ)vn)   (assoc. of (R, ·))
    = (λμ)v

    2.1 Definition of a vector space

    A vector space will be a set of objects with addition and scalar multiplication defined, satisfying the above axioms. We want to let the scalars be either R or C (or a lot of other things). So let

    F = either R or C

    Definition 2.1. A vector space over F is a set V of objects called vectors, together with a set of scalars F and with

    • a rule for adding any two vectors v, w ∈ V to get a vector v + w ∈ V
    • a rule for multiplying any vector v ∈ V by any scalar λ ∈ F to get a vector λv ∈ V
    • a zero vector 0 ∈ V
    • for any v ∈ V, a vector −v ∈ V

    such that the axioms A1-A4 and S1-S4 are satisfied.

    There are many different types of vector spaces.

    Eg 2.2.

    (1) Rn is a vector space over R

    (2) Cⁿ = {(z1, . . . , zn) | zᵢ ∈ C} with addition u + v and scalar multiplication λv (λ ∈ C) is a vector space over C.


    (3) Let m, n ∈ N. Define

    Mm,n = set of all m × n matrices with real entries

    (So in this example, vectors are matrices.) Adopt the usual rules for addition and scalar multiplication of matrices: for A = (aᵢⱼ), B = (bᵢⱼ), λ ∈ R,

    A + B = (aᵢⱼ + bᵢⱼ)

    λA = (λaᵢⱼ)

    The zero vector is the m × n zero matrix 0, and −A = (−aᵢⱼ). Then Mm,n becomes a vector space over R (check the axioms).

    (4) A non-example: Let V = R², with the usual addition and a new scalar multiplication: λ ∗ (x1, x2) = (λx1, 0). Let's check the axioms:

    A1-A4 hold

    S1 λ ∗ (v + w) = λ ∗ v + λ ∗ w   holds
    S2 (λ + μ) ∗ v = λ ∗ v + μ ∗ v   holds
    S3 λ ∗ (μ ∗ v) = (λμ) ∗ v   holds
    S4 1 ∗ v = v   fails. To show this, we need to produce just one v for which it fails, e.g.

    1 ∗ (17, 259) = (17, 0) ≠ (17, 259)

    (5) Functions. Let

    V = set of all functions f : R → R

    So vectors are functions.

    • Addition: f + g is the function x ↦ f(x) + g(x)
    • Scalar multiplication: λf is the function x ↦ λf(x) (λ ∈ R)
    • Zero vector is the function 0 : x ↦ 0
    • Inverses: −f is the function x ↦ −f(x)

    Check the axioms

    A1 using associativity of R:

    (f + (g + h))(x) = f(x) + (g + h)(x)
    = f(x) + (g(x) + h(x))
    = (f(x) + g(x)) + h(x)
    = (f + g)(x) + h(x)
    = ((f + g) + h)(x)


    Conclude V is a vector space over R.

    (6) Polynomials. Recall a polynomial over R is an expression

    p(x) = a0 + a1x + · · · + anxⁿ

    with all aᵢ ∈ R. Let

    V = set of all polynomials over R

    • Addition: if p(x) = Σ aᵢxⁱ and q(x) = Σ bᵢxⁱ then

    p(x) + q(x) = Σ (aᵢ + bᵢ)xⁱ

    • Scalar multiplication: if p(x) = Σ aᵢxⁱ, aᵢ ∈ R, and λ ∈ R, then

    λp(x) = Σ λaᵢxⁱ

    • Zero vector is 0, the polynomial with all coefficients 0

    • Negative of p(x) = Σ aᵢxⁱ is

    −p(x) = Σ −aᵢxⁱ

    Now check A1-A4, S1-S4. So V is a vector space over R.

    Consequences of the axioms

    Proposition 2.1. Let V be a vector space over F and let v ∈ V, λ ∈ F.

    (1) 0v = 0

    (2) λ0 = 0

    (3) if λv = 0 then λ = 0 or v = 0

    (4) (−λ)v = −(λv) = λ(−v)

    Proof.

    (1) Observe

    0v = (0 + 0)v
    = 0v + 0v   by S2

    Adding −(0v) to both sides,

    0v + (−(0v)) = (0v + 0v) + (−(0v))
    0 = 0v

    (2)

    λ0 = λ(0 + 0)
    = λ0 + λ0   by S1

    so, adding −(λ0) to both sides, 0 = λ0.

    Parts (3), (4): Exercise sheet 5. □


    2.2 Subspaces

    Definition 2.2. Let V be a vector space over F, and let W ⊆ V. Say W is a subspace of V if W is itself a vector space, with the same addition and scalar multiplication as V.

    Criterion for subspaces

    Proposition 2.2. W is a subspace of a vector space V if the following hold:

    (1) 0 ∈ W
    (2) if v, w ∈ W then v + w ∈ W
    (3) if w ∈ W and λ ∈ F then λw ∈ W

    Proof. Assume (1), (2), (3). We show W is a vector space.

    • Addition and scalar multiplication on W are defined by (2), (3).
    • Zero vector: 0 ∈ W by (1).
    • Negative: −w = (−1)w ∈ W by (3).

    Finally, A1-A4, S1-S4 hold for W since they hold for V. □

    Eg 2.3.

    1. V is a subspace of itself.

    2. {0} is a subspace of any vector space.

    3. Let V = R² and

    W = {(x1, x2) | x1 + 2x2 = 0}

    Claim: W is a subspace of R².

    Proof. Check (1)-(3) from the proposition

    (1) 0 ∈ W since 0 + 2 · 0 = 0

    (2) Let v = (v1, v2) ∈ W, w = (w1, w2) ∈ W. So

    v1 + 2v2 = w1 + 2w2 = 0

    hence

    (v1 + w1) + 2(v2 + w2) = 0

    so v + w = (v1 + w1, v2 + w2) ∈ W.

    (3) Let v = (v1, v2) ∈ W, λ ∈ R. Then

    v1 + 2v2 = 0
    λv1 + 2λv2 = 0

    so λv = (λv1, λv2) ∈ W.


    So W is a subspace by 2.2. □

    4. The same proof shows that any line through 0 (i.e. px1 + qx2 = 0) is a subspace of R².

    Note 2.1. A line not through the origin is not a subspace (no zero vector).
    The only subspaces of R² are: lines through 0, R² itself, and {0}.

    5. Let V = vector space of polynomials over R. Define

    W = polynomials of degree at most 3

    (recall deg(p(x)) = highest power of x appearing in p(x)).
    Claim: W is a subspace of V.

    Proof.

    (1) 0 ∈ W
    (2) if p(x), q(x) ∈ W then deg(p), deg(q) ≤ 3, hence deg(p + q) ≤ 3, so p + q ∈ W.
    (3) if p(x) ∈ W and λ ∈ R, then λp(x) has degree at most 3, so λp(x) ∈ W.

    □

    2.3 Solution spaces

    A vast collection of subspaces of Rⁿ is provided by the following.

    Proposition 2.3. Let A be an m × n matrix with real entries and let

    W = {x ∈ Rⁿ | Ax = 0}

    (the set of solutions of the system of linear equations Ax = 0). Then W is a subspace of Rⁿ.

    Proof. We check the 3 conditions of 2.2.

    (1) 0 ∈ W (as A0 = 0)
    (2) if v, w ∈ W then Av = Aw = 0. Hence A(v + w) = 0, so v + w ∈ W
    (3) if v ∈ W and λ ∈ R (so Av = 0), then A(λv) = λ(Av) = λ0 = 0, so λv ∈ W

    □

    Definition 2.3. The system Ax = 0 is a homogeneous system of linear equations, and W is called the solution space.

    Eg 2.4.


    1. m = 1, n = 2, A = (a b). Then

    W = {x ∈ R² | ax1 + bx2 = 0}

    which is a line through 0.

    2. m = 1, n = 3, A = (a b c). Then

    W = {x ∈ R³ | ax1 + bx2 + cx3 = 0}

    a plane through 0.

    3. m = 2, n = 4,

    A = ( 1 2 1 0 )
        ( 1 0 1 2 )

    Here

    W = {x ∈ R⁴ | x1 + 2x2 + x3 = 0, x1 + x3 + 2x4 = 0}

    4. Try a non-linear equation:

    W = {(x1, x2) ∈ R² | x1x2 = 0}

    Is this a subspace? The answer is no. To show this, we need a single counterexample to one of the conditions of 2.2, e.g.:
    (1, 0), (0, 1) ∈ W, but (1, 0) + (0, 1) = (1, 1) ∉ W.

    2.4 Linear Combinations

    Definition 2.4. Let V be a vector space over F and let v1, v2, . . . , vk be vectors in V. A vector v ∈ V of the form

    v = λ1v1 + λ2v2 + · · · + λkvk

    is called a linear combination of v1, . . . , vk.

    Eg 2.5.

    1. V = R². Let v1 = (1, 1). The linear combinations of v1 are the vectors

    v = λv1 = (λ, λ)   (λ ∈ R)

    These form the line through the origin and v1, i.e. x1 − x2 = 0.

    2. V = R2. Let

    v1 = (1, 0)

    v2 = (0, 1)

    The linear combinations of v1, v2 are

    λ1v1 + λ2v2 = (λ1, λ2)

    So every vector in R2 is a linear combination of v1, v2.


    3. V = R³. Let v1 = (1, 1, 1), v2 = (2, 2, 1). A typical linear combination is

    λ1v1 + λ2v2 = (λ1 + 2λ2, λ1 + 2λ2, λ1 + λ2)

    This gives all vectors in the plane containing the origin, v1, v2, which is x1 − x2 = 0. So e.g. (1, 0, 0) is not a linear combination of v1, v2.

    2.5 Span

    Definition 2.5. Let V be a vector space over F, and let v1, . . . , vk be vectors in V. Define the span of v1, . . . , vk, written

    Sp(v1, . . . , vk)

    to be the set of all linear combinations of v1, . . . , vk. In other words

    Sp(v1, . . . , vk) = {λ1v1 + · · · + λkvk | λᵢ ∈ F} ⊆ V

    Eg 2.6.

    1. V = R², any v1 ∈ V. Then

    Sp(v1) = all vectors λv1 (λ ∈ R)
    = the line through 0 and v1

    2. In R², Sp((1, 0), (0, 1)) = R².

    3. In R³, v1 = (1, 1, 1), v2 = (2, 2, 1):

    Sp(v1, v2) = plane containing 0, v1, v2
    = plane x1 = x2

    4. In R³,

    Sp(v1 = (1, 0, 0), v2 = (0, 1, 0), v3 = (0, 0, 1)) = the whole of R³

    5. V = R3. Let

    w1 = (1, 0, 0)

    w2 = (1, 1, 0)

    w3 = (1, 1, 1)

    Claim: Sp(w1, w2, w3) = R3.


    Proof. Observe

    v1 = w1
    v2 = w2 − w1
    v3 = w3 − w2

    Hence any linear combination of v1, v2, v3 is also a linear combination of w1, w2, w3 (i.e. (λ1, λ2, λ3) = λ1v1 + λ2v2 + λ3v3 = λ1w1 + λ2(w2 − w1) + λ3(w3 − w2) ∈ Sp(w1, w2, w3)). □

    6. V = vector space of polynomials over R. Let

    v1 = 1

    v2 = x

    v3 = x²

    Then

    Sp(v1, v2, v3) = {λ1v1 + λ2v2 + λ3v3 | λᵢ ∈ R}
    = {λ1 + λ2x + λ3x² | λᵢ ∈ R}
    = set of all polynomials of degree ≤ 2

    Eg 2.7. In general, if v1, v2 are vectors in R³ not on the same line through 0 (i.e. v2 ≠ λv1 for any λ), then

    Sp(v1, v2) = the plane through 0, v1, v2

    Proposition 2.4. Let V be a vector space and v1, . . . , vk ∈ V. Then

    Sp(v1, . . . , vk)

    is a subspace of V.

    Proof. Check the conditions of 2.2

    (1) Taking all λᵢ = 0 (using 2.1),

    0v1 + 0v2 + · · · + 0vk = 0 + · · · + 0 = 0

    So 0 is a linear combination of v1, . . . , vk, so 0 ∈ Sp(v1, . . . , vk).

    (2) Let v, w ∈ Sp(v1, . . . , vk), so

    v = λ1v1 + · · · + λkvk
    w = μ1v1 + · · · + μkvk

    Then v + w = (λ1 + μ1)v1 + · · · + (λk + μk)vk ∈ Sp(v1, . . . , vk).


    (3) Let v ∈ Sp(v1, . . . , vk) and λ ∈ F, so

    v = λ1v1 + · · · + λkvk

    Then

    λv = (λλ1)v1 + · · · + (λλk)vk ∈ Sp(v1, . . . , vk)

    2

    2.6 Spanning sets

    Definition 2.6. Let V be a vector space and W a subspace of V. We say vectors v1, . . . , vk span W if

    (1) v1, . . . , vk ∈ W and

    (2) W = Sp(v1, . . . , vk)

    Call the set {v1, . . . , vk} a spanning set of W.

    Eg 2.8.

    • {(1, 0, 0), (1, 1, 0), (1, 1, 1)} is a spanning set for R³.

    • (1, 1, 1), (2, 2, 1) span the plane x1 − x2 = 0.

    • Let

    W = {x ∈ R⁴ | Ax = 0},   A = ( 1 1 3 1 )
                                 ( 2 3 1 1 )
                                 ( 1 0 8 2 )

    Find a (finite) spanning set for W. Solve the system:

    ( 1 1 3 1 | 0 )      ( 1  1  3  1 | 0 )      ( 1 1  3  1 | 0 )
    ( 2 3 1 1 | 0 )  →   ( 0  1 −5 −1 | 0 )  →   ( 0 1 −5 −1 | 0 )
    ( 1 0 8 2 | 0 )      ( 0 −1  5  1 | 0 )      ( 0 0  0  0 | 0 )

    Echelon form:

    x1 + x2 + 3x3 + x4 = 0
    x2 − 5x3 − x4 = 0


    General solution:

    x4 = a
    x3 = b
    x2 = a + 5b
    x1 = −a − 3b − (a + 5b) = −2a − 8b

    i.e. x = (−2a − 8b, a + 5b, b, a). So W = {(−2a − 8b, a + 5b, b, a) | a, b ∈ R}. Define two vectors (take a = 1, b = 0 and vice versa):

    w1 = (−2, 1, 0, 1)   (a = 1, b = 0)
    w2 = (−8, 5, 1, 0)   (a = 0, b = 1)

    Claim: W = Sp(w1, w2).

    Proof. Observe

    (−2a − 8b, a + 5b, b, a) = a(−2, 1, 0, 1) + b(−8, 5, 1, 0)
    = aw1 + bw2

    This gives a general method of finding spanning sets of solution spaces. □
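    A quick machine check of this spanning set (plain Python, no libraries; the helper name is ours): both wᵢ solve Ax = 0, and so does any combination aw1 + bw2.

```python
A = [[1, 1, 3, 1],
     [2, 3, 1, 1],
     [1, 0, 8, 2]]

w1 = [-2, 1, 0, 1]    # a = 1, b = 0
w2 = [-8, 5, 1, 0]    # a = 0, b = 1

def apply_matrix(A, x):
    """Compute the product Ax as a list."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Both spanning vectors really do solve Ax = 0 ...
assert apply_matrix(A, w1) == [0, 0, 0]
assert apply_matrix(A, w2) == [0, 0, 0]

# ... and so does a sample combination a*w1 + b*w2 (here a = 3, b = -2)
v = [3 * p - 2 * q for p, q in zip(w1, w2)]
assert apply_matrix(A, v) == [0, 0, 0]
print("spanning set verified")
```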

    2.7 Linear dependence and independence

    Definition 2.7. Let V be a vector space over F. We say a set of vectors v1, . . . , vk in V is a linearly independent set if the following condition holds:

    λ1v1 + · · · + λkvk = 0  ⟹  all λᵢ = 0

    Usually we just say the vectors v1, . . . , vk are linearly independent vectors.
    We say the set {v1, . . . , vk} is linearly dependent if the opposite is true, i.e. if we can find scalars λᵢ such that

    (1) λ1v1 + · · · + λkvk = 0
    (2) at least one λᵢ ≠ 0

    Eg 2.9.

    1. V = R², v1 = (1, 1). Then {v1} is a linearly independent set, as

    λv1 = 0 ⟹ (λ, λ) = (0, 0) ⟹ λ = 0


    2. V = R²: the set {0} is linearly dependent, e.g.

    2 · 0 = 0

    3. In R², let v1 = (1, 1), v2 = (2, 1). Is {v1, v2} linearly independent? Consider the equation

    λ1v1 + λ2v2 = 0

    i.e.
    (λ1, λ1) + (2λ2, λ2) = (0, 0)

    i.e.

    λ1 + 2λ2 = 0
    λ1 + λ2 = 0

    ⟹ λ1 = λ2 = 0. So yes, they are linearly independent.

    4. In R³, let

    v1 = (1, 0, −1)
    v2 = (2, 2, 1)
    v3 = (1, 4, 5)

    Are v1, v2, v3 linearly independent? Consider the system

    x1v1 + x2v2 + x3v3 = 0 (2.1)

    This is the system of linear equations

    (  1 2 1 )
    (  0 2 4 ) x = 0
    ( −1 1 5 )

    (i.e. (v1 v2 v3) x = 0, the matrix with columns v1, v2, v3). Solve:

    (  1 2 1 | 0 )      ( 1 2 1 | 0 )      ( 1 2 1 | 0 )
    (  0 2 4 | 0 )  →   ( 0 2 4 | 0 )  →   ( 0 2 4 | 0 )
    ( −1 1 5 | 0 )      ( 0 3 6 | 0 )      ( 0 0 0 | 0 )

    Solution: x = (3a, −2a, a) (any a). So

    3v1 − 2v2 + v3 = 0

    So v1, v2, v3 are linearly dependent. Geometrically, v1, v2 span a plane in R³ and v3 = −3v1 + 2v2 ∈ Sp(v1, v2) is in this plane.
    In general: in R³, three vectors are linearly dependent iff they are coplanar.
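    The dependence just found can be rediscovered by a brute-force search over small integer coefficients (a sketch; by scaling, any multiple of (3, −2, 1) also works):

```python
from itertools import product

v1, v2, v3 = (1, 0, -1), (2, 2, 1), (1, 4, 5)

def lincomb(coeffs, vecs):
    """Form the linear combination sum(c_i * v_i) coordinate-wise."""
    return tuple(sum(c * v[i] for c, v in zip(coeffs, vecs))
                 for i in range(3))

# search nonzero integer coefficient triples in {-3,...,3}^3
deps = [c for c in product(range(-3, 4), repeat=3)
        if any(c) and lincomb(c, (v1, v2, v3)) == (0, 0, 0)]

print(deps)   # contains (3, -2, 1), matching 3v1 - 2v2 + v3 = 0
```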


    5. V = vector space of polynomials over R. Let

    p1(x) = 1 − x²
    p2(x) = 2 + 2x + x²
    p3(x) = 1 + 4x + 5x²

    Are p1, p2, p3 linearly dependent? Consider the equation

    λ1p1 + λ2p2 + λ3p3 = 0

    Equating coefficients:

    λ1 + 2λ2 + λ3 = 0
    2λ2 + 4λ3 = 0
    −λ1 + λ2 + 5λ3 = 0

    We showed in the previous example that a solution is

    λ1 = 3, λ2 = −2, λ3 = 1

    So

    3p1 − 2p2 + p3 = 0

    So they are linearly dependent.

    6. V = vector space of functions R → R. Let

    f1(x) = sin x, f2(x) = cos x

    So f1, f2 ∈ V. Are f1, f2 linearly independent? Sheet 6.

    Two basic results about linearly independent sets.

    Proposition 2.5. Any subset of a linearly independent set of vectors is linearly independent.

    Proof. Let S be a linearly independent set of vectors, and T ⊆ S. Label the vectors in S, T:

    T = {v1, . . . , vt}
    S = {v1, . . . , vt, vt+1, . . . , vs}

    Suppose

    λ1v1 + · · · + λtvt = 0

    Then

    λ1v1 + · · · + λtvt + 0vt+1 + · · · + 0vs = 0

    As S is linearly independent, all coefficients must be 0, so all λᵢ = 0. Thus T is linearly independent. □


    Proposition 2.6. Let V be a vector space and v1, . . . , vk ∈ V. Then the following two statements are equivalent (i.e. (1) ⟺ (2)):

    (1) v1, . . . , vk are linearly dependent

    (2) there exists i such that vᵢ is a linear combination of v1, . . . , vᵢ₋₁.

    Proof.

    (1) ⟹ (2) Suppose v1, . . . , vk is linearly dependent, so there exist λᵢ such that

    λ1v1 + · · · + λkvk = 0

    and λⱼ ≠ 0 for some j. Choose the largest j for which λⱼ ≠ 0. So

    λ1v1 + · · · + λⱼvⱼ = 0

    Then

    λⱼvⱼ = −λ1v1 − · · · − λⱼ₋₁vⱼ₋₁

    So

    vⱼ = −(λ1/λⱼ)v1 − · · · − (λⱼ₋₁/λⱼ)vⱼ₋₁

    which is a linear combination of v1, . . . , vⱼ₋₁.

    (2) ⟹ (1) Assume vᵢ is a linear combination of v1, . . . , vᵢ₋₁, say

    vᵢ = λ1v1 + · · · + λᵢ₋₁vᵢ₋₁

    Then

    λ1v1 + · · · + λᵢ₋₁vᵢ₋₁ − vᵢ + 0vᵢ₊₁ + · · · + 0vk = 0

    Not all the coefficients in this equation are zero (the coefficient of vᵢ is −1). So v1, . . . , vk are linearly dependent.

    □

    Eg 2.10. v1 = (1, 0, −1), v2 = (2, 2, 1), v3 = (1, 4, 5) in R³. These are linearly dependent: 3v1 − 2v2 + v3 = 0. And v3 = −3v1 + 2v2 is a linear combination of the previous ones.

    Proposition 2.7. Let V be a vector space and v1, . . . , vk ∈ V. Suppose vᵢ is a linear combination of v1, . . . , vᵢ₋₁. Then

    Sp(v1, . . . , vk) = Sp(v1, . . . , vᵢ₋₁, vᵢ₊₁, . . . , vk)

    (i.e. throwing out vᵢ does not change Sp(v1, . . . , vk))


    Proof. Let

    vᵢ = λ1v1 + · · · + λᵢ₋₁vᵢ₋₁   (λⱼ ∈ F)

    Now consider

    v = μ1v1 + · · · + μkvk ∈ Sp(v1, . . . , vk)

    Then

    v = μ1v1 + · · · + μᵢ₋₁vᵢ₋₁
    + μᵢ(λ1v1 + · · · + λᵢ₋₁vᵢ₋₁)
    + μᵢ₊₁vᵢ₊₁ + · · · + μkvk

    So v is a linear combination of

    v1, . . . , vᵢ₋₁, vᵢ₊₁, . . . , vk

    Therefore Sp(v1, . . . , vk) ⊆ Sp(v1, . . . , vᵢ₋₁, vᵢ₊₁, . . . , vk). The reverse inclusion is clear, so the two spans are equal. □

    Eg 2.11. v1 = (1, 0, −1), v2 = (2, 2, 1), v3 = (1, 4, 5). Here

    v3 = −3v1 + 2v2

    So Sp(v1, v2, v3) = Sp(v1, v2).

    2.8 Bases

    Definition 2.8. Let V be a vector space. We say a set of vectors {v1, . . . , vk} in V is a basis of V if

    (1) V = Sp(v1, . . . , vk)

    (2) {v1, . . . , vk} is a linearly independent set.

    Informally, a basis is a spanning set from which we cannot throw any of the vectors away.

    Eg 2.12.

    1. {v1 = (1, 0), v2 = (0, 1)} is a basis of R².

    Proof.

    (1) (x1, x2) = x1v1 + x2v2, so R² = Sp(v1, v2)

    (2) v1, v2 are linearly independent, as

    λ1v1 + λ2v2 = 0 ⟹ (λ1, λ2) = (0, 0) ⟹ λ1 = λ2 = 0

    □


    2. (1, 0, 0), (1, 1, 0), (1, 1, 1) is a basis of R³.

    Proof.

    (1) They span R³ (previous example).

    (2)

    x1v1 + x2v2 + x3v3 = 0

    leads to the system

    ( 1 1 1 | 0 )
    ( 0 1 1 | 0 )
    ( 0 0 1 | 0 )

    with the only solution x1 = x2 = x3 = 0, so v1, v2, v3 are linearly independent.

    □

    Theorem 2.1. Let V be a vector space with a spanning set v1, . . . , vk (i.e. V = Sp(v1, . . . , vk)). Then there is a subset of {v1, . . . , vk} which is a basis of V.

    Proof. Consider the list

    v1, . . . , vk

    We throw away vectors in this list which are linear combinations of the previous vectors in the list, and end up with a basis. The process:

    Casting Out Process

    First, throw away any zero vectors in the list.

    First, throw away any zero vectors in the list.

    Start at v2: if it is a linear combination of v1 (i.e. v2 = λv1), then delete it; if not, leave it there.

    Now consider v3: if it is a linear combination of the remaining previous vectors, deleteit; if not, leave it there.

    Continue, moving from left to right, deleting any vi, which is a linear combination of

    previous vectors in the list.

    We end up with a subset {w1, . . . , wm} of {v1, . . . , vk} such that

    (1) V = Sp(w1, . . . , wm) (by 2.7)

    (2) no wᵢ is a linear combination of previous ones.

    Then {w1, . . . , wm} is a linearly independent set by 2.6. Therefore {w1, . . . , wm} is a basis of V. □

    Eg 2.13.


    1. V = R³, v1 = (1, 0, −1), v2 = (2, 2, 1), v3 = (1, 4, 5). Let W = Sp(v1, v2, v3). Find a basis of W.

    1) Is v2 a linear combination of v1? No: leave it in.

    2) Is v3 a linear combination of v1, v2? Yes: v3 = −3v1 + 2v2.
    So cast out v3: a basis for W is {v1, v2}.

    2. Here's a meatier example of the Casting Out Process. Let V = R⁴ and

    v1 = (1, −2, −3, 1)
    v2 = (2, 2, 2, 1)
    v3 = (5, 2, 1, 3)
    v4 = (11, 2, −1, 7)
    v5 = (2, 8, −2, 3)

    Let W = Sp(v1, . . . , v5), a subspace of R⁴. Find a basis of W.

    The all-in-one-go method: form the 5 × 4 matrix with rows v1, . . . , v5 and reduce it to echelon form:

    (  1 −2 −3  1 )  v1          ( 1 −2 −3  1 )  v1
    (  2  2  2  1 )  v2          ( 0  6  8 −1 )  v2 − 2v1
    (  5  2  1  3 )  v3    →     ( 0 12 16 −2 )  v3 − 5v1
    ( 11  2 −1  7 )  v4          ( 0 24 32 −4 )  v4 − 11v1
    (  2  8 −2  3 )  v5          ( 0 12  4  1 )  v5 − 2v1

         ( 1 −2 −3   1 )  v1
         ( 0  6  8  −1 )  v2 − 2v1
    →    ( 0  0  0   0 )  v3 − 5v1 − 2(v2 − 2v1)
         ( 0  0  0   0 )  v4 − 11v1 − 4(v2 − 2v1)
         ( 0  0 −12  3 )  v5 − 2v1 − 2(v2 − 2v1)

    So v3 is a linear combination of v1 and v2: cast it out. And v4 is a linear combination of previous ones: cast it out.
    The row vectors in echelon form are linearly independent: so the last row v5 + 2v1 − 2v2 is not a linear combination of the first two rows, so v5 is not a linear combination of v1, v2.
    Conclude: a basis of W is {v1, v2, v5}.
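    The Casting Out Process above can be sketched directly in code: keep a vector only if it enlarges the span of those already kept, with rank computed by exact Gaussian elimination over the rationals. (All names here are ours, not from the notes.)

```python
from fractions import Fraction

def rank(rows):
    """Row rank via Gaussian elimination with exact rational arithmetic."""
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]          # swap pivot row up
        for i in range(len(M)):
            if i != r and M[i][c] != 0:      # clear column c elsewhere
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def cast_out(vectors):
    """Keep each vector iff it is NOT a linear combination of those kept."""
    kept = []
    for v in vectors:
        if rank(kept + [list(v)]) > rank(kept):
            kept.append(list(v))
    return kept

vs = [(1, -2, -3, 1), (2, 2, 2, 1), (5, 2, 1, 3), (11, 2, -1, 7), (2, 8, -2, 3)]
print(cast_out(vs))   # casts out v3 and v4, keeping [v1, v2, v5]
```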

    To help with spanning calculations:

    Eg 2.14. Let v1 = (1, 2, −1), v2 = (2, 0, 1), v3 = (0, 1, 3), v4 = (1, 2, 3). Do v1, v2, v3, v4 span the whole of R³?


    Let b ∈ R³. Then b ∈ Sp(v1, v2, v3, v4) iff the system x1v1 + x2v2 + x3v3 + x4v4 = b has a solution for x1, x2, x3, x4 ∈ R. This system is

    (  1 2 0 1 | b1 )      ( 1  2 0 1 | b1       )      ( 1  2  0  1 | b1             )
    (  2 0 1 2 | b2 )  →   ( 0 −4 1 0 | b2 − 2b1 )  →   ( 0 −4  1  0 | b2 − 2b1       )
    ( −1 1 3 3 | b3 )      ( 0  3 3 4 | b3 + b1  )      ( 0  0 15 16 | 3b2 + 4b3 − 2b1 )

    (the last row is 4·(row 3) + 3·(row 2)). The echelon form has a pivot in every row, so this system has a solution for any b ∈ R³. Hence Sp(v1, . . . , v4) = R³.

    2.9 Dimension

    Definition 2.9. A vector space V is finite-dimensional if it has a finite spanning set, i.e. there is a finite set of vectors v1, . . . , vk such that V = Sp(v1, . . . , vk).

    Eg 2.15. Rⁿ is finite-dimensional. To show this, let

    e1 = (1, 0, 0, . . . , 0)

    e2 = (0, 1, 0, . . . , 0)...

    en = (0, 0, 0, . . . , 1)

    Then for any x = (x1, . . . , xn) ∈ Rⁿ,

    x = x1e1 + x2e2 + · · · + xnen

    So Rⁿ = Sp(e1, . . . , en) is finite-dimensional.

    Note 2.2. {e1, . . . , en} is linearly independent, since λ1e1 + · · · + λnen = 0 implies (λ1, . . . , λn) = 0, so all λᵢ = 0. So {e1, . . . , en} is a basis for Rⁿ, called the standard basis.

    Eg 2.16. Let V be the vector space of polynomials over R.

    Claim: V is not finite-dimensional.

    Proof. By contradiction. Assume V has a finite spanning set p1, . . . , pk. Let deg(pᵢ) = nᵢ and let n = max(n1, . . . , nk). Any linear combination λ1p1 + · · · + λkpk (λᵢ ∈ R) has degree ≤ n. So the polynomial x^(n+1) is not a linear combination of vectors from our assumed spanning set; contradiction. □

    Proposition 2.8. Any finite-dimensional vector space has a basis.

    Proof. Let V be a finite-dimensional vector space. Then V has a finite spanning set. Thiscontains a basis of V by Theorem 2.1. 2


    Definition 2.10. The dimension of V is the number of vectors in any basis of V.¹ Written dim V.

    Eg 2.17. Rⁿ has basis e1, . . . , en, so dim Rⁿ = n.

    Eg 2.18. Let v ∈ R², v ≠ 0, and let L be the line through 0 and v. So L is a subspace:

    L = {λv | λ ∈ R}

    So L = Sp(v) and {v} is a basis of L. So dim L = 1.

    Eg 2.19. Let v1, v2 ∈ R³ with v1, v2 ≠ 0 and v2 ≠ λv1. Then Sp(v1, v2) = P is a plane through 0, v1, v2. As v2 ≠ λv1, {v1, v2} is linearly independent, so is a basis of P. So dim P = 2.

    Major result:

    Theorem 2.2. Let V be a finite-dimensional vector space. Then all bases of V have the same number of vectors.

    Proof. Based on:

    Lemma 2.1 (Replacement Lemma). Let V be a vector space. Suppose v1, . . . , vk and x1, . . . , xr are vectors in V such that

    • v1, . . . , vk span V
    • x1, . . . , xr are linearly independent

    Then

    (1) r ≤ k, and

    (2) there is a subset {w1, . . . , w_{k−r}} of {v1, . . . , vk} such that x1, . . . , xr, w1, . . . , w_{k−r} span V (i.e. we can replace r of the v's by the x's and still span V).

    Eg 2.20. V = R³.

    • e1, e2, e3 span R³
    • x1 = (1, 1, 1) is linearly independent

    According to 2.1(2), we can replace one of the eᵢ's by x1 and get a spanning set {x1, eᵢ, eⱼ}. How? Consider the spanning set

    x1, e1, e2, e3

    This set is linearly dependent, since x1 = e1 + e2 + e3. By 2.6, one of the vectors is therefore a linear combination of previous ones; in this case

    e3 = x1 − e1 − e2

    So cast out e3: the spanning set is {x1, e1, e2}.

    ¹The following theorem shows the uniqueness of this number.


    Proof (of Lemma 2.1). Consider S1 = {x1, v1, . . . , vk}. This spans V. It is linearly dependent, as x1 is a linear combination of the spanning set v1, . . . , vk. So by 2.6, one of the vectors in S1 is a linear combination of previous ones. This vector is not x1, so it is some vᵢ. By 2.7, V = Sp(x1, v1, . . . , vk) with vᵢ omitted.
    Now let S2 = {x1, x2, v1, . . . , vk} (vᵢ omitted). This spans V and is linearly dependent, as x2 is a linear combination of the others. By 2.6, there exists a vector in S2 which is a linear combination of previous ones. It is not x1 or x2, as x1, x2 are linearly independent. So it is some vⱼ. By 2.7, V = Sp(x1, x2, v1, . . . , vk) with vᵢ, vⱼ omitted. Continue like this, adding x's and deleting v's.
    If r > k, then eventually we delete all the v's and get V = Sp(x1, . . . , xk). Then x_{k+1} is a linear combination of x1, . . . , xk. This can't happen, as x1, . . . , x_{k+1} is a linearly independent set. Therefore r ≤ k.
    The process ends when we've used up all the x's, giving

    V = Sp(x1, . . . , xr, the k − r remaining v's) □

    (Proof of 2.2 continued.) Let {v1, . . . , vk} and {x1, . . . , xr} be bases of V. Both are spanning sets for V and both are linearly independent. Now v1, . . . , vk span and x1, . . . , xr is linearly independent, so by the previous lemma, r ≤ k. Similarly, x1, . . . , xr span and v1, . . . , vk is linearly independent, so by the previous lemma again, k ≤ r.
    Hence r = k. So all bases of V have the same number of vectors. □

    2.10 Further Deductions

    Proposition 2.9. Let dim V = n. Any spanning set for V of size n is a basis of V.

    Proof. Let {v1, . . . , vn} be the spanning set. By 2.1, this set contains a basis of V. By 2.2, all bases of V have size n. Therefore {v1, . . . , vn} is a basis of V. □

    Eg 2.21. Is (1, 2, 3), (0, 2, 5), (1, 0, 6) a basis of R³?

    ( 1 2 3 )      ( 1  2 3 )      ( 1 2 3 )
    ( 0 2 5 )  →   ( 0  2 5 )  →   ( 0 2 5 )
    ( 1 0 6 )      ( 0 −2 3 )      ( 0 0 8 )

    The rows of this echelon form are linearly independent, so we can't cast out any vectors. So they form a basis.


    Proposition 2.10. If {x1, . . . , xr} is a linearly independent set in V, then there is a basis of V containing x1, . . . , xr.

    Proof. Let v1, . . . , vn be a basis of V. By 2.1(2), there exists {w1, . . . , w_{n−r}} ⊆ {v1, . . . , vn} such that

    V = Sp(x1, . . . , xr, w1, . . . , w_{n−r})

    Then x1, . . . , xr, w1, . . . , w_{n−r} is a spanning set of size n, hence is a basis by 2.9. □

    Eg 2.22. Let v1 = (1, 0, 1, 2), v2 = (1, 1, 2, 5) ∈ R⁴. Find a basis of R⁴ containing v1, v2.
    Claim: v1, v2, e1, e2 is a basis of R⁴.

    Proof. Clearly we can get all the standard basis vectors e1, e2, e3, e4 as linear combinations of v1, v2, e1, e2. So v1, v2, e1, e2 span R⁴, so they are a basis by 2.9. □
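    One way to verify such a claim mechanically: n vectors in Rⁿ form a basis iff the matrix with those rows has nonzero determinant (the recursive determinant helper below is our own, not from the notes):

```python
def det(M):
    """Determinant by Laplace expansion along the first row (exact for ints)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

v1, v2 = [1, 0, 1, 2], [1, 1, 2, 5]
e1, e2 = [1, 0, 0, 0], [0, 1, 0, 0]

print(det([v1, v2, e1, e2]))   # nonzero, so {v1, v2, e1, e2} is a basis of R^4
```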

    Proposition 2.11. Let W be a subspace of V. Then

    (1) dim W ≤ dim V

    (2) if W ≠ V, then dim W < dim V

    Proof.

    (1) Let w1, . . . , wr be a basis of W. This set is linearly independent, so by Proposition 2.10 there is a basis of V containing it, say w1, . . . , wr, v1, . . . , vs. Then dim V = r + s ≥ r = dim W.

    (2) If dim W = dim V, then s = 0 and w1, . . . , wr is a basis of V, so V = Sp(w1, . . . , wr) = W.

    □

    Eg 2.23 (The subspaces of R³). Let W be a subspace of R³. Then dim W ≤ dim R³ = 3. Possibilities:

    • dim W = 3. Then W = R³.

    • dim W = 2. Then W has a basis {v1, v2}, so W = Sp(v1, v2), which is a plane through 0, v1, v2.

    • dim W = 1. Then W has a basis {v1}, so W = Sp(v1), which is a line through 0, v1.

    • dim W = 0. Then W = {0}.

    Conclude: the subspaces of R³ are {0}, R³, and lines and planes containing 0.

    Proposition 2.12. Let dim V = n. Any set of n vectors which is linearly independent is a basis of V.


    Proof. Let v1, . . . , vn be linearly independent. By Proposition 2.10 there is a basis containing v1, . . . , vn. As all bases have n vectors, v1, . . . , vn must be a basis. □

    Eg 2.24. Is the set (1, 1, 1, 0), (2, 0, 1, 2), (0, 3, 1, 1), (2, 2, 1, 0) a basis of R⁴?

    v1 ( 1 1 1 0 )      ( 1  1  1 0 )      ( 1  1  1 0 )      ( 1  1  1  0 ) w1
    v2 ( 2 0 1 2 )  →   ( 0 −2 −1 2 )  →   ( 0 −2 −1 2 )  →   ( 0 −2 −1  2 ) w2
    v3 ( 0 3 1 1 )      ( 0  3  1 1 )      ( 0  0 −1 8 )      ( 0  0 −1  8 ) w3
    v4 ( 2 2 1 0 )      ( 0  0 −1 0 )      ( 0  0 −1 0 )      ( 0  0  0 −8 ) w4

    (row operations: R2 − 2R1, R4 − 2R1; then R3 → 2R3 + 3R2; then R4 − R3)

    The vectors w1, w2, w3, w4 are linearly independent (clear, as they are in echelon form). By 2.12, w1, . . . , w4 are a basis of R⁴; therefore v1, . . . , v4 span R⁴ (as the w's are linear combinations of the v's); therefore v1, . . . , v4 is a basis of R⁴, by 2.9.

    Proposition 2.13. Let dim V = n. Then any set of n + 1 or more vectors in V is linearly dependent.

    Proof. Let S be a set of n + 1 or more vectors. If S were linearly independent, it would be contained in a basis by Proposition 2.10, which is impossible as all bases have n vectors. So S is linearly dependent. □

    Eg 2.25 (A fact about matrices). Let V = M2,2, the vector space of all 2 × 2 matrices over R (usual addition A + B and scalar multiplication λA of matrices). Basis: let

    E11 = ( 1 0 )   E12 = ( 0 1 )   E21 = ( 0 0 )   E22 = ( 0 0 )
          ( 0 0 )         ( 0 0 )         ( 1 0 )         ( 0 1 )

    Claim: E11, E12, E21, E22 is a basis of V = M2,2.

    Proof.


    • Span:

    ( a b ) = aE11 + bE12 + cE21 + dE22
    ( c d )

    • Linear independence: λ1E11 + λ2E12 + λ3E21 + λ4E22 = 0 implies

    ( λ1 λ2 ) = ( 0 0 )
    ( λ3 λ4 )   ( 0 0 )

    so all λᵢ = 0.

    □

    So dim V = 4.
    Now let A ∈ V = M2,2. Consider I, A, A², A³, A⁴. These are 5 vectors in V, so they are linearly dependent by 2.13. This means there exist λᵢ ∈ R (at least one non-zero) such that

    λ4A⁴ + λ3A³ + λ2A² + λ1A + λ0I = 0

    This means, if we write

    p(x) = λ4x⁴ + λ3x³ + λ2x² + λ1x + λ0

    then p(x) ≠ 0 and p(A) = 0. So we've proved the following:

    Proposition 2.14. For any 2 × 2 matrix A there exists a nonzero polynomial p(x) of degree ≤ 4 such that p(A) = 0.

    Note 2.3. This generalizes to n × n matrices.
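    Proposition 2.14 only asserts existence of a polynomial of degree ≤ 4. In fact, by the Cayley–Hamilton theorem (beyond these notes), the degree-2 characteristic polynomial x² − tr(A)x + det(A) already works, which certainly satisfies the degree ≤ 4 bound. A quick check on a sample matrix, as an illustration:

```python
def matmul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def char_poly_at_A(A):
    """Evaluate p(x) = x^2 - tr(A)*x + det(A) at the matrix A."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    A2 = matmul(A, A)
    I = [[1, 0], [0, 1]]
    return [[A2[i][j] - tr * A[i][j] + det * I[i][j] for j in range(2)]
            for i in range(2)]

A = [[2, 1], [7, -3]]
print(char_poly_at_A(A))   # the zero matrix: p(A) = 0
```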

    Summary so far

    V a finite-dimensional vector space (i.e. V has a finite spanning set):

    • A basis of V is a linearly independent spanning set
    • All bases have the same size, called dim V (Theorem 2.2)
    • Every spanning set contains a basis (Theorem 2.1)

    Write dim V = n:

    • Any spanning set of size n is a basis (Proposition 2.9)
    • Any linearly independent set of size n is a basis (Proposition 2.12)
    • Any linearly independent set is contained in a basis (Proposition 2.10)
    • Any set of n + 1 or more vectors is linearly dependent (Proposition 2.13)
    • Any subspace W of V has dim W ≤ n, and dim W = n ⟺ W = V (Proposition 2.11)


    Chapter 3

    More on Subspaces

    3.1 Sums and Intersections

Definition 3.1. V a vector space. Let U, W be subspaces of V. The intersection of U and W is

    U ∩ W = {v | v ∈ U and v ∈ W}

The sum of U and W is

    U + W = {u + w | u ∈ U and w ∈ W}

Note 3.1. U + W contains

- all the vectors u ∈ U (as u + 0 ∈ U + W)
- all the vectors w ∈ W
- many more vectors (usually)

Eg 3.1. V = R^2, U = Sp((1, 0)), W = Sp((0, 1)). Then U + W contains all vectors λ1(1, 0) + λ2(0, 1) = (λ1, λ2). So U + W is the whole of R^2.

Proposition 3.1. U ∩ W and U + W are subspaces of V.

Proof. Use the subspace criterion, Proposition 2.3.

U + W:

(1) As U, W are subspaces, both contain 0, so 0 = 0 + 0 ∈ U + W.
(2) Let u1 + w1, u2 + w2 ∈ U + W (where ui ∈ U, wi ∈ W). Then (u1 + w1) + (u2 + w2) = (u1 + u2) + (w1 + w2) ∈ U + W.
(3) Let u + w ∈ U + W, λ ∈ F. Then

    λ(u + w) = λu + λw ∈ U + W


U ∩ W: Sheet 8.2.

What about the dimensions of U + W and U ∩ W? First:

Proposition 3.2. Let U = Sp(u1, . . . , ur), W = Sp(w1, . . . , ws). Then

    U + W = Sp(u1, . . . , ur, w1, . . . , ws)

Proof. Let u + w ∈ U + W. Then (for some λi, μi ∈ F)

    u = λ1 u1 + · · · + λr ur
    w = μ1 w1 + · · · + μs ws

So

    u + w = λ1 u1 + · · · + λr ur + μ1 w1 + · · · + μs ws ∈ Sp(u1, . . . , ur, w1, . . . , ws)

So

    U + W ⊆ Sp(u1, . . . , ur, w1, . . . , ws)

Conversely, all the ui, wi are in U + W. As U + W is a subspace, it therefore contains Sp(u1, . . . , ur, w1, . . . , ws). Hence

    U + W = Sp(u1, . . . , ur, w1, . . . , ws)   □

Eg 3.2. In the above example, U + W = Sp((1, 0), (0, 1)) = R^2.

Eg 3.3. Let U = {x ∈ R^3 | x1 + x2 + x3 = 0}, W = {x ∈ R^3 | −x1 + 2x2 + x3 = 0}, subspaces of R^3. Find bases of U, W, U ∩ W, U + W.

For U the general solution is (−a − b, b, a), so a basis for U is {(−1, 0, 1), (−1, 1, 0)}. For W the general solution is (2b + a, b, a), so a basis of W is {(1, 0, 1), (2, 1, 0)}.

U ∩ W: this is the set of x ∈ R^3 with

    (  1 1 1 ) x = 0
    ( −1 2 1 )

Solve:

    (  1 1 1 | 0 )  →  ( 1 1 1 | 0 )
    ( −1 2 1 | 0 )     ( 0 3 2 | 0 )

The general solution is (a, 2a, −3a). A basis for U ∩ W is {(1, 2, −3)}.


U + W: By Proposition 3.2,

    U + W = Sp((−1, 0, 1), (−1, 1, 0), (1, 0, 1), (2, 1, 0))

Check that we can cast out only 1 vector. So U + W has dimension 3, so

    U + W = R^3

So

    dim U = dim W = 2
    dim(U ∩ W) = 1
    dim(U + W) = 3
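Assuming the sign reconstruction above, the claimed bases can be spot-checked by substituting into the two defining equations; a small sketch:

```python
# Spot-check of Eg 3.3 (reconstructed signs assumed):
# U = {x : x1 + x2 + x3 = 0},  W = {x : -x1 + 2*x2 + x3 = 0}.

def in_U(x):
    return x[0] + x[1] + x[2] == 0

def in_W(x):
    return -x[0] + 2 * x[1] + x[2] == 0

# basis vectors of U and of W satisfy their defining equations
assert all(in_U(v) for v in [(-1, 0, 1), (-1, 1, 0)])
assert all(in_W(v) for v in [(1, 0, 1), (2, 1, 0)])

# the basis vector of U ∩ W lies in both subspaces
assert in_U((1, 2, -3)) and in_W((1, 2, -3))
print("all membership checks pass")
```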

Theorem 3.1. Let V be a finite-dimensional vector space and let U, W be subspaces of V. Then

    dim(U + W) = dim U + dim W − dim(U ∩ W)

Proof. Let

    dim U = m,  dim W = n,  dim(U ∩ W) = r

Aim: to prove dim(U + W) = m + n − r. Start with a basis {x1, . . . , xr} of U ∩ W. By 2.10, we can extend this to bases

    BU = {x1, . . . , xr, u1, . . . , u_{m−r}}, a basis of U
    BW = {x1, . . . , xr, w1, . . . , w_{n−r}}, a basis of W

Let

    B = BU ∪ BW = {x1, . . . , xr, u1, . . . , u_{m−r}, w1, . . . , w_{n−r}}

Claim: B is a basis of U + W.

Proof.

(1) Span: B spans U + W by Proposition 3.2.

(2) Linear independence: We show that B is linearly independent. Suppose

    α1 x1 + · · · + αr xr + β1 u1 + · · · + β_{m−r} u_{m−r} + γ1 w1 + · · · + γ_{n−r} w_{n−r} = 0

i.e.

    Σ_{i=1}^{r} αi xi + Σ_{i=1}^{m−r} βi ui + Σ_{i=1}^{n−r} γi wi = 0        (3.1)


Let

    v = Σ_{i=1}^{n−r} γi wi

Then v ∈ W. Also

    v = − Σ αi xi − Σ βi ui ∈ U

So v is in U ∩ W. As x1, . . . , xr is a basis of U ∩ W,

    v = Σ_{i=1}^{r} δi xi   (δi ∈ F)

As v = Σ γi wi, this gives

    Σ_{i=1}^{r} (−δi) xi + Σ_{i=1}^{n−r} γi wi = 0

Since BW = {x1, . . . , xr, w1, . . . , w_{n−r}} is linearly independent, this forces (for all i)

    δi = 0,  γi = 0

(i.e. v = 0). Then by (3.1)

    Σ_{i=1}^{r} αi xi + Σ_{i=1}^{m−r} βi ui = 0

Since BU = {x1, . . . , xr, u1, . . . , u_{m−r}} is linearly independent, this forces (for all i)

    αi = 0,  βi = 0

So we've shown that in (3.1) all coefficients αi, βi, γi are zero, showing that B = BU ∪ BW is linearly independent. Hence B is a basis of U + W. □

Then we have proved that

    dim(U + W) = r + (m − r) + (n − r) = m + n − r   □

Eg 3.4. V = R^4. Suppose U, W are subspaces with dim U = 2, dim W = 3. Then dim(U + W) ≥ 3 (as it contains W) and dim(U + W) ≤ 4 (as U + W ⊆ R^4). Possibilities:

- dim(U + W) = 3. Then U + W = W and so U ⊆ W.
- dim(U + W) = 4 (in other words U + W = R^4). Then

      dim(U ∩ W) = dim U + dim W − dim(U + W) = 1

  For example this happens for:

      U = Sp(e1, e2) = {(x1, x2, 0, 0) | xi ∈ R}
      W = Sp(e1, e3, e4) = {(x1, 0, x2, x3) | xi ∈ R}
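The dimension formula can be spot-checked numerically on this example. A sketch; the `rank` helper below is my own (Gaussian elimination over the rationals), not a routine from the notes:

```python
from fractions import Fraction

def rank(rows):
    """Row-rank of a matrix (given as a list of rows) via Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in rows]
    rk = 0
    ncols = len(M[0]) if M else 0
    for col in range(ncols):
        piv = next((i for i in range(rk, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        M[rk], M[piv] = M[piv], M[rk]
        for i in range(len(M)):
            if i != rk and M[i][col] != 0:
                f = M[i][col] / M[rk][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[rk])]
        rk += 1
    return rk

e1, e2, e3, e4 = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
U = [e1, e2]        # spanning set of U, dim U = 2
W = [e1, e3, e4]    # spanning set of W, dim W = 3
assert rank(U) == 2 and rank(W) == 3
# U + W is spanned by the union of the spanning sets (Proposition 3.2);
# note that in Python, U + W below is just list concatenation.
assert rank(U + W) == 4
# U ∩ W = Sp(e1) has dimension 1, and indeed 4 = 2 + 3 - 1
assert rank(U + W) == rank(U) + rank(W) - 1
print("dimension formula checked")
```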

    3.2 The rank of a matrix

Definition 3.2. Let A be an m × n matrix with real entries. Define

    row-space(A) = subspace of R^n spanned by the rows of A
    column-space(A) = subspace of R^m spanned by the columns of A

Eg 3.5.

    A = ( 3 1 2 )
        ( 0 1 1 )

    row-space(A) = Sp((3, 1, 2), (0, 1, 1)) ⊆ R^3
    column-space(A) = Sp((3, 0), (1, 1), (2, 1)) ⊆ R^2

(writing the columns of A as vectors in R^2).

Definition 3.3. Let A be an m × n matrix. Define

    row-rank(A) = dim row-space(A)
    column-rank(A) = dim column-space(A)

Eg 3.6. In the above example

    row-rank(A) = column-rank(A) = 2

    3.2.1 How to find row-rank(A)

Procedure:

(1) Reduce A to an echelon form Ā by row operations, say (schematically)

    Ā = ( 0 ... 1 * ... * ... * )
        ( 0 ... 0 ... 1 * ... * )
        (          ...          )
        ( 0 ... 0 ... 0 ... 1 * )

Then (we will prove this)

    row-space(A) = row-space(Ā)


(2) Then row-rank(A) = the number of nonzero rows in the echelon form Ā, and these nonzero rows are a basis for row-space(A).

Proof.

(1) The rows of Ā are linear combinations of the rows of A (since they are obtained by row operations ri → ri + λrj, etc.). Therefore

    row-space(Ā) ⊆ Sp(rows of A) = row-space(A)

By reversing the row operations that go from A to Ā, we see that the rows of A are linear combinations of the rows of Ā, so

    row-space(A) ⊆ row-space(Ā)

Therefore row-space(A) = row-space(Ā).

(2) Let the nonzero rows of the echelon form Ā be v1, . . . , vr, and let the leading 1 of vj lie in column ij:

    Ā = ( 0 ... 1 * ... * )   ← v1
        ( 0 ... 0 ... 1 * )   ← v2
        (       ...       )
        ( 0 ... 0 ... 0 1 )   ← vr

Then

    row-space(Ā) = Sp(v1, . . . , vr)

Also v1, . . . , vr are linearly independent, since

    λ1 v1 + · · · + λr vr = 0

implies

    λ1 = 0 (since λ1 is the i1 coordinate of the LHS)
    λ2 = 0 (since λ2 is the i2 coordinate of the LHS)
    ...
    λr = 0 (since λr is the ir coordinate of the LHS)

Therefore v1, . . . , vr is a basis for row-space(Ā), hence for row-space(A). So

    row-rank(A) = r = number of nonzero rows of Ā   □


Eg 3.7. Find the row-rank of

    A = (  1 2  5 )
        (  2 1  0 )
        ( −1 4 15 )

Reduce to echelon form:

    A → ( 1  2   5 )
        ( 0 −3 −10 )
        ( 0  6  20 )

      → ( 1  2   5 )
        ( 0 −3 −10 )
        ( 0  0   0 )

Row-rank is 2.
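The rank-2 conclusion can be seen directly: assuming the reconstructed signs above, the third row of A is a combination of the first two, so the rows span a 2-dimensional space. A one-line check:

```python
# Spot-check of Eg 3.7 (reconstructed signs assumed):
# 3*(1, 2, 5) - 2*(2, 1, 0) = (-1, 4, 15), so the three rows of A
# span only a 2-dimensional space and row-rank(A) = 2.
v1, v2, v3 = (1, 2, 5), (2, 1, 0), (-1, 4, 15)
combo = tuple(3 * a - 2 * b for a, b in zip(v1, v2))
print(combo == v3)  # True
```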

Eg 3.8. Find the dimension of

    W = Sp((1, 1, 0, 1), (2, 3, 1, 0), (0, 1, 2, 3)) ⊆ R^4

Observe that W = row-space(A), where

    A = ( 1 1 0 1 )
        ( 2 3 1 0 )
        ( 0 1 2 3 )

So dim W = row-rank(A).

    A → ( 1 1 0  1 )
        ( 0 1 1 −2 )
        ( 0 1 2  3 )

      → ( 1 1 0  1 )
        ( 0 1 1 −2 )
        ( 0 0 1  5 )

So dim W = row-rank(A) = 3.

    3.2.2 How to find column-rank(A)?

    Clearly

    column-rank(A) = row-rank(A^T)


Eg 3.9. Find the column-rank of

    A = (  1 2  5 )
        (  2 1  0 )
        ( −1 4 15 )

    A^T = ( 1 2 −1 )   →   ( 1  2 −1 )
          ( 2 1  4 )       ( 0 −3  6 )
          ( 5 0 15 )       ( 0  0  0 )

So column-rank(A) = 2.

Theorem 3.2. For any matrix A,

    row-rank(A) = column-rank(A)

Proof. Let

    A = ( a11 ... a1n )   ← v1
        (     ...     )
        ( am1 ... amn )   ← vm

so vi = (ai1, . . . , ain). Let

    k = row-rank(A) = dim Sp(v1, . . . , vm)

Let w1, . . . , wk be a basis for row-space(A). Say

    w1 = (b11, . . . , b1n)
    ...
    wk = (bk1, . . . , bkn)

Each vi ∈ Sp(w1, . . . , wk), so (for some λij ∈ F)

    v1 = λ11 w1 + · · · + λ1k wk
    ...
    vm = λm1 w1 + · · · + λmk wk

Equating coordinates:

    i-th coord of v1 : a1i = λ11 b1i + · · · + λ1k bki
    ...
    i-th coord of vm : ami = λm1 b1i + · · · + λmk bki


This says

    i-th column of A = ( a1i )  =  b1i ( λ11 ) + · · · + bki ( λ1k )
                       ( ... )         ( ... )              ( ... )
                       ( ami )         ( λm1 )              ( λmk )

Hence each column of A is a linear combination of the k column vectors

    l1 = ( λ11 ), . . . , lk = ( λ1k )
         ( ... )               ( ... )
         ( λm1 )               ( λmk )

So column-space(A) is spanned by these k vectors. So

    column-rank(A) = dim column-space(A) ≤ k = row-rank(A)

So we've shown that

    column-rank(A) ≤ row-rank(A)

Applying the same to A^T:

    column-rank(A^T) ≤ row-rank(A^T)

i.e.

    row-rank(A) ≤ column-rank(A)

Hence row-rank(A) = column-rank(A). □

Eg 3.10. (Illustrating the proof.) Let

    A = (  1 2 1 0 )   ← v1
        ( −1 1 0 2 )   ← v2
        (  0 3 1 2 )   ← v3

As v3 = v1 + v2, a basis of the row-space is w1, w2 where

    w1 = v1,  w2 = v2

Write each vi as a linear combination of w1, w2:

    v1 = w1 = 1w1 + 0w2
    v2 = w2 = 0w1 + 1w2
    v3 = w1 + w2 = 1w1 + 1w2


So the column vectors l1, l2 are

    l1 = ( 1 )    l2 = ( 0 )
         ( 0 )         ( 1 )
         ( 1 )         ( 1 )

These span column-space(A). Check:

    (  1 )              ( 2 )               ( 1 )
    ( −1 ) = l1 − l2,   ( 1 ) = 2l1 + l2,   ( 0 ) = l1
    (  0 )              ( 3 )               ( 1 )
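These column relations are easy to verify mechanically; a sketch (the reconstructed entries of A are assumed):

```python
# Check the column relations of Eg 3.10: with l1 = (1,0,1), l2 = (0,1,1),
# every column of A is a combination of l1 and l2.
A = [(1, 2, 1, 0),
     (-1, 1, 0, 2),
     (0, 3, 1, 2)]
cols = list(zip(*A))               # columns of A as tuples
l1, l2 = (1, 0, 1), (0, 1, 1)

def combo(a, b):
    """Return a*l1 + b*l2."""
    return tuple(a * x + b * y for x, y in zip(l1, l2))

assert cols[0] == combo(1, -1)     # column 1 = l1 - l2
assert cols[1] == combo(2, 1)      # column 2 = 2*l1 + l2
assert cols[2] == combo(1, 0)      # column 3 = l1
assert cols[3] == combo(0, 2)      # column 4 = 2*l2
print("all columns lie in Sp(l1, l2)")
```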

Definition 3.4. The rank of a matrix A is its row-rank (equivalently, by Theorem 3.2, its column-rank), written

    rank(A) or rk(A)

Proposition 3.3. Let A be n × n. Then the following four statements are equivalent:

(1) rank(A) = n
(2) the rows of A are a basis of R^n
(3) the columns of A are a basis of R^n
(4) A is invertible

Proof.

(1) ⟺ (2):

    rank(A) = n
    ⟺ dim row-space(A) = n
    ⟺ the n rows of A span R^n
    ⟺ the n rows are a basis of R^n (2.9)


(1) ⟺ (3): Similarly.

(1) ⟺ (4):

    rank(A) = n
    ⟺ the echelon form of A has n nonzero rows
    ⟺ A can be reduced to In
    ⟺ A is invertible (M1GLA, 7.5)

□


    Chapter 4

    Linear Transformations

Linear transformations are functions from one vector space to another which preserve addition and scalar multiplication, i.e. for a linear transformation T, if

    v1 ↦ w1 = T(v1)
    v2 ↦ w2 = T(v2)

then

    v1 + v2 ↦ w1 + w2 = T(v1) + T(v2)

and

    λv1 ↦ λw1 = λT(v1)

Definition 4.1. Let V, W be vector spaces. A function T : V → W is a linear transformation if

1) T(v1 + v2) = T(v1) + T(v2) for all v1, v2 ∈ V
2) T(λv) = λT(v) for all v ∈ V, λ ∈ F

Eg 4.1.

(1) Define T : R^1 → R^1 by

    T(x) = sin x

Then T is not a linear transformation: e.g.

    T(π) = sin π = 0
    2T(π/2) = 2 sin(π/2) = 2

So 2T(π/2) ≠ T(2 · π/2) = T(π).

(2) T : R^2 → R^1,

    T(x1, x2) = x1 + x2

T is a linear transformation:


1)

    T((x1, x2) + (y1, y2)) = T(x1 + y1, x2 + y2)
                           = x1 + x2 + y1 + y2
                           = T(x1, x2) + T(y1, y2)

2)

    T(λ(x1, x2)) = T(λx1, λx2)
                 = λx1 + λx2
                 = λ(x1 + x2)
                 = λT(x1, x2)

(3) T : R^2 → R^1,

    T(x1, x2) = x1 + x2 + 1

T is not linear: e.g.

    T(2(1, 0)) = T(2, 0) = 3
    2T(1, 0) = 4

(4) V the vector space of polynomials. Define T : V → V by

    T(p(x)) = p′(x)

e.g. T(x^3 − 3x) = 3x^2 − 3. Then T is a linear transformation:

1)

    T(p(x) + q(x)) = (p + q)′(x) = p′(x) + q′(x)
                   = T(p(x)) + T(q(x))

2)

    T(λp(x)) = λp′(x) = λT(p(x))

Basic examples:

Proposition 4.1. Let A be an m × n matrix over R. Define T : R^n → R^m by (for all x ∈ R^n, column vectors)

    T(x) = Ax

Then T is a linear transformation.


Proof.

1)

    T(v1 + v2) = A(v1 + v2)
               = Av1 + Av2
               = T(v1) + T(v2)

2)

    T(λv) = A(λv)
          = λAv
          = λT(v)

□
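Both linearity conditions can be spot-checked numerically for any particular matrix; a sketch using a sample 2 × 3 matrix (my own choice, not prescribed by the notes):

```python
# Spot-check of Proposition 4.1: the map x -> Ax preserves
# addition and scalar multiplication, for one sample A and sample vectors.
def apply(A, x):
    """Compute Ax for a matrix A (list of row tuples) and vector x."""
    return tuple(sum(a * xi for a, xi in zip(row, x)) for row in A)

A = [(3, 1, 2), (1, 0, 1)]          # a sample 2x3 matrix
v1, v2, lam = (1, 2, 3), (4, 0, -1), 7

vsum = tuple(a + b for a, b in zip(v1, v2))
assert apply(A, vsum) == tuple(a + b for a, b in zip(apply(A, v1), apply(A, v2)))
assert apply(A, tuple(lam * a for a in v1)) == tuple(lam * a for a in apply(A, v1))
print("T(x) = Ax preserves addition and scalar multiplication")
```

Of course a finite check is only an illustration; the proof above covers all v1, v2, λ at once.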

Eg 4.2.

1. Define T : R^3 → R^2 by

    T(x1, x2, x3) = (x1 − 3x2 + x3, −x1 + x2 − 2x3)

Then

    T(x) = (  1 −3  1 ) x
           ( −1  1 −2 )

So T is a linear transformation.

2. Rotation ρθ : R^2 → R^2 is given by

    ρθ(x) = ( cos θ  −sin θ ) x
            ( sin θ   cos θ )

so ρθ is a linear transformation.

    4.1 Basic properties

Proposition 4.2. Let T : V → W be a linear transformation. Then

(i) T(0_V) = 0_W
(ii) T(λ1v1 + · · · + λkvk) = λ1T(v1) + · · · + λkT(vk)

Proof.


(i)

    T(0_V) = T(0 · v) = 0 · T(v) = 0_W

(ii)

    T(λ1v1 + · · · + λkvk) = T(λ1v1 + · · · + λ_{k−1}v_{k−1}) + T(λkvk)
                           = T(λ1v1 + · · · + λ_{k−1}v_{k−1}) + λkT(vk)

Repeat to get (ii). □

    4.2 Constructing linear transformations

Eg 4.3. Find a linear transformation T : R^2 → R^3 which sends

    e1 = (1, 0) ↦ w1 = (1, 1, 2)
    e2 = (0, 1) ↦ w2 = (0, 1, 3)

We are forced to define

    T(x1, x2) = T(x1e1 + x2e2)
              = x1T(e1) + x2T(e2)
              = x1w1 + x2w2

So the only possible choice for T is

    T(x1, x2) = (x1, x1 + x2, 2x1 + 3x2)

This is a linear transformation, as it is

    T(x) = ( 1 0 ) x
           ( 1 1 )
           ( 2 3 )

And it does send e1 ↦ w1, e2 ↦ w2.


In general:

Proposition 4.3. Let V, W be vector spaces, and let v1, . . . , vn be a basis of V. For any n vectors w1, . . . , wn in W there is a unique linear transformation T : V → W such that

    T(v1) = w1, . . . , T(vn) = wn

Proof. Let v ∈ V. Write

    v = λ1v1 + · · · + λnvn

By 4.2(ii), the only possible choice for T(v) is

    T(v) = T(λ1v1 + · · · + λnvn)
         = λ1T(v1) + · · · + λnT(vn)
         = λ1w1 + · · · + λnwn

So this is our definition of T : V → W: if v = λ1v1 + · · · + λnvn, then

    T(v) = λ1w1 + · · · + λnwn

We show this function T is a linear transformation:

1) Let v = Σ λivi, w = Σ μivi. Then v + w = Σ (λi + μi)vi, so

    T(v + w) = Σ (λi + μi)wi
             = Σ λiwi + Σ μiwi
             = T(v) + T(w)

2) Let v = Σ λivi, μ ∈ F. Then

    T(μv) = T(Σ μλivi)
          = Σ μλiwi
          = μ Σ λiwi
          = μT(v)

So T is a linear transformation sending vi ↦ wi for all i, and is unique. □


Remark 4.1. This shows that once we know what a linear transformation does to the vectors in a basis, we know what it does to all vectors.

Eg 4.4. V = vector space of polynomials over R of degree ≤ 2. Basis of V: 1, x, x^2. Pick 3 vectors in V: w1 = 1 + x, w2 = x − x^2, w3 = 1 + x^2. By 4.3 there exists a unique linear transformation T : V → V sending

    1 ↦ w1,  x ↦ w2,  x^2 ↦ w3

Then, by 4.2,

    T(a + bx + cx^2) = aT(1) + bT(x) + cT(x^2)
                     = a(1 + x) + b(x − x^2) + c(1 + x^2)
                     = a + c + (a + b)x + (c − b)x^2
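The same computation can be phrased as coefficient arithmetic; a sketch, with a polynomial a + bx + cx^2 stored as the tuple (a, b, c) (my own encoding, not from the notes):

```python
# Eg 4.4 as coefficient arithmetic: T(a + b*x + c*x^2)
# = (a + c) + (a + b)*x + (c - b)*x^2, with polynomials as (a, b, c).
def T(p):
    a, b, c = p
    return (a + c, a + b, c - b)

w1 = (1, 1, 0)    # 1 + x
w2 = (0, 1, -1)   # x - x^2
w3 = (1, 0, 1)    # 1 + x^2

# T sends the basis 1, x, x^2 to w1, w2, w3 as required
assert T((1, 0, 0)) == w1
assert T((0, 1, 0)) == w2
assert T((0, 0, 1)) == w3
print("T sends 1, x, x^2 to w1, w2, w3")
```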

    4.3 Kernel and Image

Definition 4.2. Let T : V → W be a linear transformation. Define the image Im(T) to be

    Im(T) = {T(v) | v ∈ V} ⊆ W

The kernel Ker(T) is

    Ker(T) = {v ∈ V | T(v) = 0} ⊆ V

Eg 4.5. T : R^3 → R^2,

    T(x) = ( 3 1 2 ) x = ( 3x1 + x2 + 2x3 )
           ( 1 0 1 )     ( x1 + x3 )

Call the 2 × 3 matrix A. Then

    Ker(T) = {x ∈ R^3 | T(x) = 0}
           = {x ∈ R^3 | Ax = 0}
           = solution space of Ax = 0

    Im(T) = set of all vectors (3x1 + x2 + 2x3, x1 + x3)
          = set of all vectors x1(3, 1) + x2(1, 0) + x3(2, 1)
          = column-space of A


Proposition 4.4. Let T : V → W be a linear transformation. Then

i) Ker(T) is a subspace of V
ii) Im(T) is a subspace of W

Proof.

i) Use 2.3:

1) 0 ∈ Ker(T) since T(0_V) = 0_W by 4.2.
2) Let v, w ∈ Ker(T). Then T(v) = T(w) = 0, so

    T(v + w) = T(v) + T(w) = 0 + 0 = 0

So v + w ∈ Ker(T).
3) Let v ∈ Ker(T), λ ∈ F. Then

    T(λv) = λT(v) = λ · 0 = 0

So λv ∈ Ker(T).

ii)

1) 0 ∈ Im(T) as 0 = T(0).
2) Let w1, w2 ∈ Im(T), so w1 = T(v1), w2 = T(v2). Then

    w1 + w2 = T(v1) + T(v2) = T(v1 + v2)

so w1 + w2 ∈ Im(T).
3) Let w ∈ Im(T), λ ∈ F. Then w = T(v) and

    λw = λT(v) = T(λv)

so λw ∈ Im(T). □


Eg 4.6. Let Vn = vector space of polynomials of degree ≤ n. Define T : Vn → V_{n−1} by

    T(p(x)) = p′(x)

Then T is a linear transformation.

    Ker(T) = {p(x) | T(p(x)) = 0}
           = {p(x) | p′(x) = 0}
           = V0 (the constant polynomials)

and Im(T) = V_{n−1}. This has basis 1, x, x^2, . . . , x^{n−1}, so dimension n.
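The differentiation map is easy to realise on coefficient lists; a sketch, with p = a0 + a1 x + · · · + an x^n stored as [a0, a1, . . . , an] (my own encoding):

```python
# Differentiation as a map on coefficient lists: d/dx sends
# [a0, a1, ..., an] to [a1, 2*a2, ..., n*an], dropping one degree.
def deriv(p):
    return [i * p[i] for i in range(1, len(p))]

# d/dx (x^3 - 3x) = 3x^2 - 3, as in Eg 4.1(4)
assert deriv([0, -3, 0, 1]) == [-3, 0, 3]
# constants map to the zero polynomial: they form Ker(T) = V0
assert deriv([5]) == []
print("derivative map checks pass")
```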

Proposition 4.5. Let T : V → W be a linear transformation. If v1, . . . , vn is a basis of V, then

    Im(T) = Sp(T(v1), . . . , T(vn))

Proof. Let T(v) ∈ Im(T). Write

    v = λ1v1 + · · · + λnvn

Then, by 4.2,

    T(v) = λ1T(v1) + · · · + λnT(vn) ∈ Sp(T(v1), . . . , T(vn))

This shows

    Im(T) ⊆ Sp(T(v1), . . . , T(vn))

All the T(vi) ∈ Im(T), so as Im(T) is a subspace, Sp(T(v1), . . . , T(vn)) ⊆ Im(T). Therefore Sp(T(v1), . . . , T(vn)) = Im(T). □

Important class of kernels and images:

Proposition 4.6. Let A be an m × n matrix, and define T : R^n → R^m by (x ∈ R^n)

    T(x) = Ax

Then

1) Ker(T) = solution space of the system Ax = 0
2) Im(T) = column-space(A)
3) dim Im(T) = rank(A)

Proof.


1)

    Ker(T) = {x | T(x) = 0}
           = {x ∈ R^n | Ax = 0}
           = solution space of Ax = 0

2) Take the standard basis e1, . . . , en of R^n. By 4.5,

    Im(T) = Sp(T(e1), . . . , T(en))

Here

    T(ei) = Aei = i-th column of A

So Im(T) = Sp(columns of A) = column-space(A).

3) dim Im(T) = dim column-space(A) = rank(A). □

Eg 4.7. T : R^3 → R^3,

    T(x) = (  1 2 3 ) x
           ( −1 0 1 )
           (  1 4 7 )

Find bases for Ker(T) and Im(T).

Ker(T):

    (  1 2 3 | 0 )      ( 1 2 3 | 0 )      ( 1 2 3 | 0 )
    ( −1 0 1 | 0 )  →   ( 0 2 4 | 0 )  →   ( 0 2 4 | 0 )
    (  1 4 7 | 0 )      ( 0 2 4 | 0 )      ( 0 0 0 | 0 )

General solution (a, −2a, a). Basis for Ker(T): (1, −2, 1).


Im(T): Im(T) = column-space(A), whose dimension is rank(A) = 2. So a basis is

    (  1 )   ( 2 )
    ( −1 ) , ( 0 )
    (  1 )   ( 4 )
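Assuming the reconstructed signs in A, the kernel vector can be verified directly, and the dimensions match the Rank-Nullity Theorem below: dim Ker(T) + dim Im(T) = 1 + 2 = 3 = dim R^3. A sketch of the kernel check:

```python
# Spot-check of Eg 4.7 (reconstructed signs assumed):
# A * (1, -2, 1) = 0, so (1, -2, 1) lies in Ker(T).
A = [(1, 2, 3), (-1, 0, 1), (1, 4, 7)]
x = (1, -2, 1)
Ax = tuple(sum(a * xi for a, xi in zip(row, x)) for row in A)
print(Ax)  # (0, 0, 0)
```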

Theorem 4.1 (Rank-Nullity Theorem) [1]. Let V, W be vector spaces, and T : V → W be a linear transformation. Then

    dim Ker(T) + dim Im(T) = dim V

Proof. Let r = dim Ker(T). Let u1, . . . , ur be a basis of Ker(T). By 2.10 we can extend this to a basis

    u1, . . . , ur, v1, . . . , vs

of V. So dim V = r + s. We want to show that dim Im(T) = s. By 4.5,

    Im(T) = Sp(T(u1), . . . , T(ur), T(v1), . . . , T(vs))

Each T(ui) = 0, as ui ∈ Ker(T). So

    Im(T) = Sp(T(v1), . . . , T(vs))   (∗)

Claim: T(v1), . . . , T(vs) is a basis of Im(T).

Proof. Span is shown by (∗). Suppose

    λ1T(v1) + · · · + λsT(vs) = 0

Then by 4.2

    T(λ1v1 + · · · + λsvs) = 0

So λ1v1 + · · · + λsvs ∈ Ker(T). So

    λ1v1 + · · · + λsvs = μ1u1 + · · · + μrur

(as u1, . . . , ur is a basis of Ker(T)). That is,

    μ1u1 + · · · + μrur − λ1v1 − · · · − λsvs = 0

As u1, . . . , ur, v1, . . . , vs is a basis of V, it is linearly independent, and so

    λi = μi = 0 for all i

This shows T(v1), . . . , T(vs) is linearly independent, hence a basis of Im(T). □

So dim Im(T) = s and

    dim Ker(T) + dim Im(T) = r + s = dim V.   □

[1] dim Ker(T) is sometimes called the nullity of T.


    Consequences for linear equations

Proposition 4.7. Let A be an m × n matrix, and

    W = solution space of Ax = 0 = {x ∈ R^n | Ax = 0}

Then

    dim W = n − rank(A)

Proof. Define