Transcript of web.math.ku.dk/~jakobsen/replie/compact-work.pdf

Notes for the course on Representations of Lie Groups: Compact (Matrix Lie) Groups

Hans Plesner Jakobsen, Department of Mathematics, February 2008

1. The Haar integral (and measure) on locally compact groups

We begin with a number of definitions:

Definition 1.1. Let f be a continuous function on a metric space X (or, more generally, on a topological space). The support of f, supp f, is given as

supp f = the closure of { x ∈ X | f(x) ≠ 0 }.

Furthermore, C_c(X) denotes the space of (complex) continuous functions whose support is compact.

Notice that the space of real-valued continuous functions with compact support is a real (but not complex) subspace of C_c(X).

Definition 1.2. A linear functional L on a vector space V is a linear map L : V → C. If V = C_c(X), the linear functional L is positive if

∀f ≥ 0 : L(f) ≥ 0.

The following theorem represents positive linear functionals on C_c(X), the space of continuous complex-valued functions of compact support. The Borel sets in the following statement refer to the σ-algebra generated by the open sets. A non-negative countably additive Borel measure µ on a locally compact Hausdorff space X is called regular if

• µ(K) < ∞ for every compact set K;
• For every Borel set E, µ(E) = inf{ µ(U) : E ⊆ U, U open };
• The relation µ(E) = sup{ µ(K) : K ⊆ E, K compact } holds whenever E is open or when E is Borel and µ(E) < ∞.

Theorem 1.3. Let X be a locally compact Hausdorff space. For any positive linear functional L on C_c(X), there is a unique regular Borel measure µ on X such that

L(f) = ∫_X f(x) dµ(x) for all f ∈ C_c(X).


In the case of a compact Hausdorff space, this is the so-called Riesz Representation Theorem.

Consider now a compact Lie group K. A positive linear functional I on C(K) is called a left invariant Haar integral if

∀f ∈ C(K), ∀g ∈ K : I(L(g)f) = I(f).

Similarly, it is a right invariant Haar integral if

∀f ∈ C(K), ∀g ∈ K : I(R(g)f) = I(f).

Theorem 1.4. On a compact Lie group K, there is a unique (up to multiplication by a positive constant) left invariant Haar integral I. It is also right invariant. We write I(f) = ∫_K f dµ = ∫_K f(g) dµ(g).

The following equations hold; the first two are just the stated invariance; the third is an additional identity¹. We set f̌(g) = f(g⁻¹). For all f ∈ C(K) and all h ∈ K:

∫_K f(hg) dµ(g) = ∫_K f(g) dµ(g)   (I(L(h⁻¹)f) = I(f))   left invariance,

∫_K f(gh) dµ(g) = ∫_K f(g) dµ(g)   (I(R(h)f) = I(f))   right invariance,

∫_K f(g⁻¹) dµ(g) = ∫_K f(g) dµ(g)   (I(f̌) = I(f))   inversion invariance.

The measure, moreover, satisfies that the measure of K is finite. It is customary, and we shall also do this, to normalize the Haar measure such that µ(K) = 1. The Haar measure satisfies furthermore that if f is a non-negative continuous function which is strictly positive at some point, then I(f) > 0. We denote the space of square integrable functions on K by L²(K, dµ) or just by L²(K). The inner product² is given by

(1) ⟨f₁, f₂⟩ = ∫_K f₁(g) \overline{f₂(g)} dµ(g).

Since K is compact, and since the measure is finite, it is clear that C(K) ⊂ L²(K).

¹ The left regular and the right regular representations are given in (19) and (20), respectively.
² Incidentally, our inner products are all linear in the left variable!


2. The Haar measure on some groups

2.1. On U(n). The Cayley transform C,

C : h → (1 + ½ih)(1 − ½ih)⁻¹,

maps the space H(n) of n × n hermitian matrices onto an open dense subset of U(n). Using it, one may define an integral on U(n) for each p ∈ R by

∫_{U(n)} f(u) du = ∫_{H(n)} f((1 + ½ih)(1 − ½ih)⁻¹) |det(1 + ½ih)|^p dh.

For a specific value of p (p = −n as far as I remember!) this is the Haar integral. In the formula, dh is the Lebesgue measure on the real vector space H(n) of dimension n².
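The unitarity of the Cayley transform is easy to check numerically: since h* = h, the two factors 1 + ½ih and 1 − ½ih are adjoints of each other and commute. A small sketch (Python with numpy; the matrix size and random seed are arbitrary choices, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
h = (a + a.conj().T) / 2                      # a random hermitian matrix

eye = np.eye(n)
u = (eye + 0.5j * h) @ np.linalg.inv(eye - 0.5j * h)   # C(h)

# (1 + ih/2)* = 1 - ih/2 and the factors commute, hence u* u = 1:
assert np.allclose(u.conj().T @ u, eye)
assert abs(abs(np.linalg.det(u)) - 1.0) < 1e-9
```

Note that 1 − ½ih is always invertible: its eigenvalues are 1 − iλ/2 with λ real, all of modulus at least 1.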

2.2. The Haar measure on SU(2). Recall that SU(2) = S³ as a topological space. Specifically, write elements g of SU(2) as

g = | x₁ + ix₄    x₂ + ix₃ |
    | −x₂ + ix₃   x₁ − ix₄ |.

Let us recall the definition of spherical coordinates in R⁴:

x₁ = r cos(θ),   x₂ = r sin(θ) cos(ψ),
x₃ = r sin(θ) sin(ψ) cos(φ),   x₄ = r sin(θ) sin(ψ) sin(φ).

The Jacobian determinant ("change-of-coordinates" determinant) has absolute value

| det ∂(x₁, x₂, x₃, x₄)/∂(r, θ, ψ, φ) | = r³ sin²(θ) sin(ψ).

The condition on g ∈ SU(2) is that r² = x₁² + x₂² + x₃² + x₄² = 1. Thus, it follows that

(2) I(f) = (1/2π²) ∫_{θ,ψ,φ=0}^{π,π,2π} f(θ, ψ, φ) sin²(θ) sin(ψ) dθ dψ dφ.

If f is constant in ψ, φ, i.e. if there exists a function F such that ∀θ, ψ, φ : f(θ, ψ, φ) = F(θ), we obtain

(3) (1/2π²) ∫_{θ,ψ,φ=0}^{π,π,2π} f(θ, ψ, φ) sin²(θ) sin(ψ) dθ dψ dφ = (2/π) ∫_{θ=0}^{π} F(θ) sin²(θ) dθ.


This is relevant if, for instance, f is a class function³.
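The normalization in (2) can be confirmed numerically: the θ-, ψ- and φ-integrals factor, and their product is 2π², so the prefactor 1/2π² gives total mass 1. A sketch using a simple midpoint rule (Python; the rule and step count are arbitrary choices):

```python
import math

def riemann(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

i_theta = riemann(lambda t: math.sin(t) ** 2, 0.0, math.pi)   # pi/2
i_psi   = riemann(math.sin, 0.0, math.pi)                     # 2
i_phi   = 2 * math.pi                                         # constant integrand

total = i_theta * i_psi * i_phi / (2 * math.pi ** 2)
assert abs(total - 1.0) < 1e-6     # mu(SU(2)) = 1

# the reduced weight (2/pi) sin^2(theta) in (3) is itself a probability density:
assert abs((2 / math.pi) * i_theta - 1.0) < 1e-6
```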

2.3. The Haar measure on GL(n,R). The Haar measure on the group (R \ {0}, ·) is given by

∫_{x≠0} f(x) dx/|x|.

More generally, the Haar measure on GL(n,R) is given as follows: Let dx = dx₁₁ ⋯ dxₙ₁ dx₁₂ ⋯ dxₙ₂ ⋯ dxₙₙ be the restriction of the Lebesgue measure on R^{n²} to the open subset { x = {x_ij}, i, j = 1, …, n | det(x) ≠ 0 }. Then the Haar measure is given as

∫ f(x) dx/|det(x)|ⁿ.
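The invariance can be seen as follows: x → gx is a linear map of R^{n²} whose Jacobian determinant is det(g)ⁿ, while |det(gx)|ⁿ = |det(g)|ⁿ |det(x)|ⁿ, so the two factors cancel in dx/|det(x)|ⁿ. A numerical check of the Jacobian claim (Python/numpy sketch; in column-major coordinates the map x → gx has matrix I_n ⊗ g):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
g = rng.normal(size=(n, n))           # generically invertible

# X -> gX is linear on R^{n^2}; column-by-column it acts as I_n (x) g
jac = np.kron(np.eye(n), g)
assert np.isclose(np.linalg.det(jac), np.linalg.det(g) ** n)
```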

3. Unitarity and complete reducibility

Definition 3.1. A representation Π of a Matrix Lie Group is unitary if Π : G → U(n) for some n ∈ N. Equivalently, Π is a representation in a finite-dimensional Hilbert space (V, ⟨·,·⟩) and each Π(g) preserves the inner product ⟨·,·⟩:

∀v, w ∈ V, ∀g ∈ G : ⟨Π(g)v, Π(g)w⟩ = ⟨v, w⟩.

Definition 3.2. A representation Π in V is completely decomposable or completely reducible if there exists a direct sum decomposition

V = ⊕_{k=1}^r V_k

such that each V_k is an invariant subspace for Π and such that the resulting representation of G on V_k is irreducible for each k. Denote these representations by Π_k, k = 1, 2, …, r, and set V_k = V_{Π_k}. We then may write

(4) V_Π = ⊕_{k=1}^r V_{Π_k}, and Π = ⊕_{k=1}^r Π_k.

Notice that the representations Π_k are not necessarily all inequivalent. The matrices representing the operators Π(g) consist of r diagonal blocks when expressed according to the decomposition (4). We let [Π : Π_k] denote the multiplicity of Π_k in Π. This is the number of times Π_k (or representations equivalent to it) occurs in the sum (4).

³ A function f on a group G which satisfies ∀g, h ∈ G : f(hgh⁻¹) = f(g), i.e. which is constant on conjugacy classes.


Proposition 3.3. If Π is a unitary representation on V and if W is an invariant subspace, then

W⊥ = { v ∈ V | ∀w ∈ W : ⟨w, v⟩ = 0 }

is an invariant subspace.

Proof: We assume that it is known that W⊥ indeed is a subspace and that V = W ⊕ W⊥ (otherwise: prove it!). We move directly to invariance: Let v ∈ W⊥, w ∈ W. Then

⟨Π(g)v, w⟩ = ⟨Π(g)v, Π(g)Π(g⁻¹)w⟩ = ⟨v, Π(g⁻¹)w⟩ = 0.

Here, the first equality is just a re-writing, the second is unitarity, and the third follows from the invariance of W. The claim is thus verified. □

The key observation now is the following:

Proposition 3.4. If G is a compact matrix Lie Group, and Π is a finite-dimensional representation on V_Π, then there is an inner product ⟨·,·⟩ on V_Π which is invariant, i.e. such that Π is unitary with respect to this.

Proof: This is where the Haar measure dµ(g) comes in! Let us namely start with an arbitrary inner product ⟨⟨·,·⟩⟩ on V (such exist of course in abundance). Define

⟨v, w⟩ = ∫_G ⟨⟨Π(g)v, Π(g)w⟩⟩ dµ(g).

That this indeed is a positive-definite inner product is left as an exercise. The crucial computation is, for h ∈ G:

⟨Π(h)v, Π(h)w⟩ = ∫_G ⟨⟨Π(g)Π(h)v, Π(g)Π(h)w⟩⟩ dµ(g)
  = ∫_G ⟨⟨Π(gh)v, Π(gh)w⟩⟩ dµ(g)
  = ∫_G ⟨⟨Π(g)v, Π(g)w⟩⟩ dµ(g)
  = ⟨v, w⟩.

Here, the first equality is the definition, the second is saying that Π is a homomorphism, the third is the right invariance of the Haar measure, and the last is the definition. □

Combining Proposition 3.3 with Proposition 3.4 we obtain the following result (which for finite groups is Maschke's Theorem):

Theorem 3.5. Any finite-dimensional representation of a compact Matrix Lie Group is completely reducible.
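The averaging in the proof of Proposition 3.4 can be tried out on a finite group, where the Haar integral is the normalized counting measure (Section 6). A Python/numpy sketch with S₃ acting by permutation matrices on C³ and an arbitrary positive-definite starting inner product:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

def perm_matrix(p):
    """Permutation matrix sending e_j to e_{p(j)}."""
    n = len(p)
    m = np.zeros((n, n))
    m[np.array(p), np.arange(n)] = 1.0
    return m

group = [perm_matrix(p) for p in itertools.permutations(range(3))]  # S_3 on C^3

a = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
m0 = a.conj().T @ a + np.eye(3)     # an arbitrary positive-definite matrix

# Haar average (here: counting measure) of Pi(g)* m0 Pi(g) over the group
m = sum(g.conj().T @ m0 @ g for g in group) / len(group)

# the averaged inner product <v, w> = w* m v is now invariant: h* m h = m
for h in group:
    assert np.allclose(h.conj().T @ m @ h, m)
```

The invariance is just a re-indexing of the sum: replacing g by gh permutes the group elements and leaves the average unchanged.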


Notice that we only used that there was a right invariant Haar measure on G and that the total measure of G was finite.

For completeness, as well as additional information, we state

Theorem 3.6 (Schur's Lemma; special version).

• If Π_V, Π_W are irreducible representations of a group G, and Φ : V → W is an intertwiner, i.e. if Φ is linear and

(5) ∀g ∈ G : Φ ◦ Π_V(g) = Π_W(g) ◦ Φ,

then either Φ is a bijection (and then the two representations are equivalent) or Φ = 0.

• If Π_V is irreducible and if a linear map Ψ : V → V commutes with Π_V, i.e.

(6) ∀g ∈ G : Ψ ◦ Π_V(g) = Π_V(g) ◦ Ψ,

then there is a λ ∈ C such that Ψ = λ · I_V, where I_V is the identity operator on V.

• Suppose G is a compact (Matrix Lie) Group. Then the converse to the previous point holds. That is, if Π_V is a finite-dimensional representation on V with the property that the only operators that commute with Π_V are the operators λ · I_V, then Π_V is irreducible.

Proof: (Sketched) The first two points are proved in Hall's book. As for the third, begin by assuming that the representation is unitary (Proposition 3.4). If W is an invariant subspace, consider the orthogonal projection P_W onto W. Use Proposition 3.3 to prove that this operator commutes with the representation. Hence P_W = 0 or P_W = I_V. □

4. Matrix coefficients and characters

Definition 4.1. Let Π be a finite-dimensional complex representation of a group G on a vector space V. A function on G of the form

(7) F_{v,v*} : G ∋ g → (v*, Π(g)v)

for any v ∈ V and any v* ∈ V* is a matrix coefficient (of Π). We let

(8) M_Π = Span{ (v*, Π(·)v) | v* ∈ V*, v ∈ V }.

In the case of a Matrix Lie Group, these functions are clearly continuous.


Remark 4.2. If the vector space V = V_Π has an inner product ⟨·,·⟩ then, without additional assumptions about unitarity,

(9) M_Π = Span{ ⟨Π(·)v₁, v₂⟩ | v₁, v₂ ∈ V }.

The matrix coefficients are of fundamental importance, as we shall see below. However, we first introduce

Definition 4.3. The character χ_Π of a representation Π of a group G on a finite-dimensional vector space V is the function from G to C given by

χ_Π(g) = Tr_{V_Π} Π(g).

The following is obvious:

Proposition 4.4. Let {e₁, e₂, …, eₙ} be a basis of V, and let {ε₁, ε₂, …, εₙ} be the corresponding dual basis of V*. Then

χ_Π(g) = Σ_{k=1}^n (ε_k, Π(g)e_k) = Σ_{k=1}^n (Π(g))_{kk}.

We will from now on set Π_{kk}(g) = (Π(g))_{kk}. Notice that χ_Π is a sum of matrix coefficients of Π.

In view of Theorem 3.5, the following is clear:

Lemma 4.5. The character χ_Π of a finite-dimensional representation is the sum of the characters of the irreducible representations that occur in its decomposition (cf. Definition 3.2):

χ_Π = Σ_k χ_{Π_k}.

4.1. The character of the dual representation. Recall from the chapter on Linear Algebra, specifically (40) on p. 10, that the dual representation in matrix form is given as

(10) Π′(g) = Π(g⁻¹)ᵗ.

Since we by Proposition 3.4 may assume that Π is unitary, it follows that we may assume that Π(g⁻¹) = Π(g)* (the adjoint, i.e. conjugate transposed, matrix) and hence that Π′(g) = \overline{Π(g)} (the matrix whose entries are the complex conjugates of the entries, in the same positions, of Π(g)). In particular we get


Proposition 4.6.

χ_{Π′}(g) = Σ_i \overline{Π_{ii}(g)} = \overline{χ_Π(g)}.

4.2. The character of a tensor product. Let Π, Σ be two finite-dimensional representations of G in V_Π, W_Σ, respectively.

Proposition 4.7.

χ_{Π⊗Σ} = χ_Π · χ_Σ.

Proof: Let {e₁, e₂, …, eₙ} and {f₁, f₂, …, f_m} be bases of V_Π and W_Σ, respectively. Since (Π ⊗ Σ)(g)(e_i ⊗ f_j) = Π(g)e_i ⊗ Σ(g)f_j, it follows that the coefficient of e_i ⊗ f_j in (Π ⊗ Σ)(g)(e_i ⊗ f_j) is given as Π_{ii}(g) · Σ_{jj}(g). But then

Tr_{V_Π ⊗ W_Σ}(Π ⊗ Σ)(g) = Σ_{i,j} Π_{ii}(g) · Σ_{jj}(g) = (Σ_i Π_{ii}(g)) (Σ_j Σ_{jj}(g)).

The claim follows immediately from this. □
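In matrix terms the proof says Tr(A ⊗ B) = Tr(A)·Tr(B) for the Kronecker product. A one-line numerical check (Python/numpy; the random matrices merely stand in for Π(g) and Σ(g)):

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))   # stand-in for Pi(g)
b = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # stand-in for Sigma(g)

# (Pi (x) Sigma)(g) acts as the Kronecker product, and its trace factors:
assert np.isclose(np.trace(np.kron(a, b)), np.trace(a) * np.trace(b))
```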

4.3. Schur orthogonality.

Lemma 4.8. Let Π_V and Π_W be complex representations of the compact Matrix Lie Group G. Let ⟨·,·⟩ be an inner product on V (not assumed invariant). If v₁ ∈ V, w₁ ∈ W then the map T : V → W given by

T(v) = ∫_G ⟨Π_V(g)v, v₁⟩ Π_W(g⁻¹)w₁ dµ(g)

is G-equivariant (an intertwining map between Π_V and Π_W).

Proof: The right invariance of the Haar measure gives that

T(Π_V(h)v) = ∫_G ⟨Π_V(gh)v, v₁⟩ Π_W(g⁻¹)w₁ dµ(g) = Π_W(h)T(v). □

Theorem 4.9 (Schur orthogonality 1).⁴ Suppose that Π_V, Π_W are irreducible representations of the compact group G. Either every matrix coefficient of Π_V is orthogonal in L²(G) to every matrix coefficient of Π_W, or the two representations are equivalent.

⁴ See e.g. D. Bump: "Lie Groups" (Graduate Texts in Mathematics, Springer Verlag 2004).


Proof: Let us assume that there are invariant inner products ⟨·,·⟩_V and ⟨·,·⟩_W on V and W, respectively. We wish to prove that if some matrix coefficients are not orthogonal, then the representations are equivalent. It suffices to take matrix coefficients of the form (9): ⟨Π_V(g)v₁, v₂⟩_V and ⟨Π_W(g)w₁, w₂⟩_W. Our assumption is that for some v₁, v₂, w₁, w₂

(11) 0 ≠ ∫_G ⟨Π_V(g)v₁, v₂⟩_V \overline{⟨Π_W(g)w₁, w₂⟩_W} dµ(g)

(12) = ∫_G ⟨Π_V(g)v₁, v₂⟩_V ⟨Π_W(g⁻¹)w₂, w₁⟩_W dµ(g)

(13) = ⟨T(v₁), w₁⟩_W,

where T is defined as in Lemma 4.8 in terms of the vectors v₂, w₂. But this means that T is not zero, hence, by Schur's Lemma, T is an isomorphism, i.e. the representations are equivalent. □

Theorem 4.10 (Schur orthogonality 2).⁵ Let Π_V be an irreducible representation of a compact group G with invariant inner product ⟨·,·⟩_V. Then, for all v₁, v₂, v₃, v₄ ∈ V:

(14) ∫_G ⟨Π_V(g)v₁, v₂⟩_V \overline{⟨Π_V(g)v₃, v₄⟩_V} dµ(g) = (1/dim V) ⟨v₁, v₃⟩⟨v₄, v₂⟩.

Theorem 4.11 (Schur orthogonality 3). Let Π_V, Π_W be irreducible representations of the compact Matrix Lie Group G. Then

(15) ⟨χ_V, χ_W⟩_{L²(G)} = ∫_G χ_V(g) \overline{χ_W(g)} dµ(g) = 1 if Π_V ≡ Π_W, and 0 otherwise.

Proof: As remarked after Proposition 4.4, the characters are sums of matrix coefficients, hence if Π_V and Π_W are inequivalent it follows from Theorem 4.9 that ⟨χ_V, χ_W⟩_{L²(G)} = 0. In case Π_V = Π_W it follows from Proposition 4.6 and Proposition 4.7 that

⟨χ_V, χ_V⟩_{L²(G)} = ∫_G χ_{Π⊗Π′}(g) dµ(g).

Now notice that if Π is any finite-dimensional representation, then χ_Π is a sum of the characters of the irreducible summands:

χ_Π = Σ_i χ_{Π_i},

⁵ For a proof see e.g. Bump's book p. 12 and p. 15.


where the Π_i are the irreducible summands (the same representation may occur several times!). Since the character of the trivial representation is the constant function 1 and since this has integral 1 w.r.t. the normalized Haar measure, it follows that

∫_G χ_Π dµ(g) = [Π : 1],

where [Π : 1] denotes the number of times the trivial representation 1 occurs in the decomposition of Π into irreducible representations. We now show that [Π_V ⊗ Π′_V : 1] = 1, and then we are done.

We first have to investigate what it means for the trivial representation to occur in Π_V ⊗ Π′_V: This means precisely that there is a non-zero vector w ∈ V ⊗ V* which is fixed by all Π(g) ⊗ Π′(g). And the multiplicity m = [Π_V ⊗ Π′_V : 1] is the number of mutually orthogonal such vectors w₁, …, w_m. We want to show that m = 1: Let w be any one of the w₁, …, w_m (we assume that there is at least one! - see later). We define a map T_w : V → V by

∀v ∈ V, v* ∈ V* : (v*, T_w(v)) = ⟨v ⊗ v*, w⟩,

where ⟨·,·⟩ denotes an invariant inner product in V ⊗ V*. We claim that T_w commutes with the representation Π: for all g ∈ G, v ∈ V, and v* ∈ V*,

(v*, T_w(Π(g)v)) = ⟨Π(g)v ⊗ Π′(g)Π′(g⁻¹)v*, w⟩
= ⟨v ⊗ Π′(g⁻¹)v*, (Π(g⁻¹) ⊗ Π′(g⁻¹))w⟩ = ⟨v ⊗ Π′(g⁻¹)v*, w⟩
= (Π′(g⁻¹)v*, T_w(v)) = (v*, Π(g)T_w(v)),

where the second equality is unitarity of the invariant inner product and the third uses that w is fixed. So, by Schur's Lemma, T_w is a scalar multiple λ of the identity. This constant is clearly non-zero, and we may absorb it into w. But then we may assume:

(16) ∀v ∈ V, v* ∈ V* : (v*, v) = ⟨v ⊗ v*, w⟩.

This, on the other hand, completely determines w: If two vectors have the same inner products with all vectors v ⊗ v* (which, after all, span V ⊗ V*), then they must be equal. Actually, equation (16) uniquely determines a vector w, as can be seen by inserting e_i ⊗ ε_j for v ⊗ v* for all choices of i, j. The result is that

w = Σ_i e_i ⊗ ε_i.

This proves that the trivial representation occurs exactly once. □
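Theorem 4.11 can be checked by hand for the cyclic group Z_n, whose irreducible representations are the one-dimensional characters χ_k(m) = e^{2πikm/n} and whose normalized Haar measure is the normalized counting measure (cf. Section 6). A Python sketch:

```python
import cmath

n = 6   # the cyclic group Z_6

def chi(k, m):
    """The k-th character of Z_n evaluated at the element m."""
    return cmath.exp(2j * cmath.pi * k * m / n)

def inner(k, l):
    """<chi_k, chi_l> with respect to the normalized counting (= Haar) measure."""
    return sum(chi(k, m) * chi(l, m).conjugate() for m in range(n)) / n

for k in range(n):
    for l in range(n):
        expected = 1.0 if k == l else 0.0
        assert abs(inner(k, l) - expected) < 1e-12
```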


4.4. Representations from actions; the left (and the right) regular representation. Let G be a group which acts on a set M, i.e. such that there is a map G × M ∋ (g₀, m) → g₀ ⋆ m satisfying

(17) (g₁g₂) ⋆ m = g₁ ⋆ (g₂ ⋆ m).

We can turn any such action into a representation U_a of G in the vector space F(M) of complex valued functions on M by the formula

(18) (U_a(g)f)(m) = f(g⁻¹ ⋆ m).

Let G be any group. The left regular representation L is defined in the vector space F(G) of complex valued functions on G from the left action g₀ ⋆_L g = g₀g of the group on itself. Thus,

(19) (L(g₀)f)(g) = f(g₀⁻¹g).

The right regular representation R is defined in the vector space F(G) of complex valued functions on G from the ("right") action g₀ ⋆_R g = g g₀⁻¹ of the group on itself. Thus,

(20) (R(g₀)f)(g) = f(gg₀).

In the case of a compact group one usually considers L, R in L²(G, dµ) and calls these representations, which by means of the invariance of the Haar measure are unitary, the left, resp. right, regular representations.
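Definitions (19) and (20) can be exercised on the finite group Z_n (written additively). The sketch below (Python; the test function is arbitrary) checks that L and R are homomorphisms and that the left and right actions commute with each other:

```python
n = 7
group = range(n)

def L(g0, f):
    """(L(g0)f)(g) = f(g0^{-1} g); in additive Z_n the inverse of g0 is -g0."""
    return {g: f[(g - g0) % n] for g in group}

def R(g0, f):
    """(R(g0)f)(g) = f(g g0); additively, g + g0."""
    return {g: f[(g + g0) % n] for g in group}

f = {g: g * g + 1 for g in group}    # an arbitrary test function

a, b = 3, 5
# both are homomorphisms: L(a+b) = L(a)L(b), R(a+b) = R(a)R(b)
assert L((a + b) % n, f) == L(a, L(b, f))
assert R((a + b) % n, f) == R(a, R(b, f))
# the two actions commute with each other
assert L(a, R(b, f)) == R(b, L(a, f))
```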

The crucial observations are the following:

Proposition 4.12. The matrix coefficients F_{v,v*} in (7) belong to L²(G, dµ) and satisfy

(21) L(g₀)F_{v,v*} = F_{v,Π′(g₀)v*};  R(g₀)F_{v,v*} = F_{Π(g₀)v,v*}.

Proof: The functions F_{v,v*} are continuous, hence bounded and measurable, hence belong to L²(G). The two identities are proved in much the same way, so we only do one of them:

(L(g₀)F_{v,v*})(g) = (v*, Π(g₀⁻¹g)v) = (v*, Π(g₀⁻¹)Π(g)v) = (Π′(g₀)v*, Π(g)v) = F_{v,Π′(g₀)v*}(g). □

Lemma 4.13. Let Π_V be an irreducible finite-dimensional representation of the compact (Matrix Lie) Group G. Then Π_V ⊗ Π′_V is an irreducible representation of G × G.

Proof: (Sketched) We use Theorem 3.6. Suppose an operator T on V ⊗ V* commutes with Π_V ⊗ Π′_V. Write T in block form corresponding to (60) in the Notes on Linear Algebra. (We are not assuming that T is of the form A ⊗ B for some A, B!) Using the irreducibility of the two representations, it follows from the fact that T commutes in particular with all operators of the form Π(g) ⊗ I_{V*} that the blocks of T must be scalar multiples λ_{ij} of the identity in V*. Then use the operators I_V ⊗ Π′(g) to conclude that λ_{ij} = λ · δ_{ij} for some λ ∈ C. □

Combining Proposition 4.12 and Lemma 4.13, the following is obtained:

Proposition 4.14. Let Π = Π_V be an irreducible finite-dimensional representation of the compact (Matrix Lie) Group G. The linear map Ψ_Π : V ⊗ V* → L²(G, dµ) given by its action on the tensors v ⊗ v* as

(22) (Ψ_Π(v ⊗ v*))(g) = F_{v,v*}(g) = (v*, Π(g)v),

is an intertwiner between the representations of G × G given by, respectively,

(g₁, g₂) → Π(g₁) ⊗ Π′(g₂)

on V ⊗ V* and

(g₁, g₂) → R(g₁) ⊗ L(g₂)

on L²(G, dµ), that is

(23) ∀(g₁, g₂) ∈ G × G : Ψ_Π ◦ (Π ⊗ Π′)(g₁, g₂) = (R ⊗ L)(g₁, g₂) ◦ Ψ_Π.

5. Further notes on representations of compact groups

5.1. The Peter-Weyl Theorem, proof in the case of compact matrix groups. We begin by introducing the space Ĝ of equivalence classes of irreducible finite-dimensional representations of the compact (Matrix Lie) Group G.

Theorem 5.1. Let G be a compact group. Then, as a representation of G under the left regular representation L,

L²(G) = ⊕_{Π∈Ĝ} (Π ⊕ ⋯ ⊕ Π)   (dim(V_Π) summands; orthogonal direct sum),

and

L ⊗ R⁶ = ⊕_{Π∈Ĝ} Π ⊗ Π′   (as representations of G × G on L²(G)).

⁶ This notation is a little unfortunate, since it is not a usual tensor product representation, but just refers to the representation given in (23). The tensor products on the right hand side of the equation are "correct" tensor product representations!

To prove this fundamental theorem in the matrix case, we use another fundamental theorem, namely the Stone-Weierstrass Theorem. For this


we need the concept of a *-algebra A of continuous functions on a topological space X. By this we mean a vector space A of continuous functions which has the additional property that ∀f, g ∈ A : f · g ∈ A and f* = f̄ ∈ A. We say that A separates points if for all x ≠ y in X there exists f ∈ A with f(x) ≠ f(y). The theorem can now be formulated:

Theorem 5.2 (Stone-Weierstrass). Let A be a *-algebra of continuous functions on a compact topological space K which separates points and does not vanish identically at any point.⁷ Then A is uniformly dense in the space C(K) of continuous functions on K:

Ā^{‖·‖∞} = C(K).

Since uniform convergence on the finite measure space K implies L² convergence, we of course have

Corollary 5.3. Under the same assumptions,

Ā^{‖·‖₂} = L²(K).

The application we are interested in is the following:

Corollary 5.4. Suppose that K is a compact subset of Rⁿ. Equip K with the induced (subset) topology. Let A = P_C^K(Rⁿ) be the set of restrictions to K of real polynomials with complex coefficients. Then the closure of P_C^K(Rⁿ) in ‖·‖∞ is C(K), and hence its closure in ‖·‖₂ is L²(K).

Proof: This follows from Theorem 5.2 since it is clear that polynomials are continuous on Rⁿ and hence that their restrictions are continuous on K. Moreover, it is obviously a *-algebra, and it separates points x ≠ y ∈ K since if x ≠ y then for some coordinate number, say i, x_i ≠ y_i. But the polynomial P_i(r₁, …, rₙ) = r_i then separates x and y. □

The Gram-Schmidt orthonormalization process applied to a space of polynomial functions clearly results in a basis of functions which still are polynomials. It then follows from the preceding corollary that

Corollary 5.5. There is an orthonormal basis of L²(K) consisting of restrictions of polynomials.

⁷ This means that there is no point k ∈ K such that f(k) = 0 for all f ∈ A.

We now consider a compact matrix group K ⊂ GL(n,R) or K ⊂ GL(n,C). In the first case we consider K ⊂ R^{n²} = Rⁿ × ⋯ × Rⁿ (n factors),


and in the second case K ⊂ R^{2n²} = Cⁿ × ⋯ × Cⁿ (n factors), where we view (when it is convenient) C = R². Let us refer to these cases as the real and, respectively, the complex case. In both cases we work with polynomials with complex coefficients. In the real case we work with polynomials in variables x₁, x₂, …, xₙ where each x_i ∈ Rⁿ, i = 1, 2, …, n, and in the complex case we work with polynomials in variables z₁, z₂, …, zₙ, z̄₁, z̄₂, …, z̄ₙ where each z_i ∈ Cⁿ, i = 1, 2, …, n, and the variables z, z̄ are viewed as independent real variables on C. Since in any real situation we have K ⊂ GL(n,R) ⊂ GL(n,C), we will from now on only work with the complex situation. Observe that for g ∈ K, left multiplication by g in K, M_g : k → g · k, becomes

(z₁, …, zₙ, z̄₁, …, z̄ₙ) → (gz₁, …, gzₙ, ḡz̄₁, …, ḡz̄ₙ)   (under M_g),

where, by definition, g acts on the conjugate variables through ḡ z̄ = \overline{gz}.

Now observe that the set P_all = P(z₁, …, zₙ, z̄₁, …, z̄ₙ) of all polynomials in the variables z₁, …, zₙ, z̄₁, …, z̄ₙ has the form

P_all = P(z₁, …, zₙ, z̄₁, …, z̄ₙ) = P(z₁) · P(z₂) ⋯ P(zₙ) · P(z̄₁) · P(z̄₂) ⋯ P(z̄ₙ).

We obtain a representation U_K of K on P_all as in (18). Specifically, in suggestive notation,

(24) ∀g ∈ K : (U_K(g)p)(z, z̄) = p(g⁻¹ ⋆ z, g⁻¹ ⋆ z̄),

where g ⋆ z = gz and g ⋆ z̄ = \overline{gz} = ḡ z̄.

Next observe that any polynomial in P(z₁) can be written as a sum of products of the form ∏_{i=1}^r ⟨z₁, v_i⟩ for vectors v₁, …, v_r ∈ Cⁿ. Let ρ_K denote the defining representation of K. The preceding analysis shows that any polynomial representation transforms according to a sub-representation of (possibly a sum of) representations of the form (ρ_K)^{⊗r} ⊗ (ρ′_K)^{⊗s} for certain non-negative integers r, s. (If e.g. r = 0, (ρ_K)^{⊗r} becomes the trivial 1-dimensional representation.) To be completely precise, (ρ_K)^{⊗r} is the representation of K occurring in the space P_r(z).

Now, to connect these spaces, consider more explicitly the restriction map R from P_all to C(K), the space of continuous functions on K:

(25) P_all ∋ p → Rp ∈ C(K) : ∀k ∈ K : (Rp)(k) = p(k).

Proposition 5.6. R is an intertwiner between the representation U_K on P_all and the left regular representation L on C(K).

The proof of this is left to the reader. Please observe that one must also explain why Rp is a continuous function.


EXAMPLES: In case K = S¹ = { z ∈ C | |z|² = zz̄ = 1 }, a polynomial p in z has the form p(z) = Σ_{k=0}^n a_k zᵏ and a polynomial q in z̄ has the form q(z̄) = Σ_{ℓ=0}^r b_ℓ z̄^ℓ. Here the coefficients a_k and b_ℓ are all complex numbers. The restrictions to K are in this case

(26) p(e^{iθ}) = Σ_{k=0}^n a_k e^{ikθ},   q(e^{−iθ}) = Σ_{ℓ=0}^r b_ℓ e^{−iℓθ}.

It is known from the theory of Fourier Series that both types of restrictions are needed to expand a continuous function.

In case K = SU(2), we have the following perhaps unusual way of describing the group:

(27) K = SU(2) = { (z₁ z₃; z₂ z₄) ∈ GL(2,C) | z₁z̄₁ + z₂z̄₂ = 1, z₃z̄₃ + z₄z̄₄ = 1, z₁z̄₃ + z₂z̄₄ = 0, and z₁z₄ − z₂z₃ = 1 },

where (z₁ z₃; z₂ z₄) denotes the matrix with columns (z₁, z₂) and (z₃, z₄). Upon closer scrutiny it is revealed that in this case, because we also have that SU(2) = { (α −β̄; β ᾱ) ∈ GL(2,C) | |α|² + |β|² = 1 }, it suffices to consider polynomials in the variables z₁, z₂, z₃, z₄. That is, no conjugations are needed (because the right hand column involves the conjugates of the left hand column).

In case K = O(3), we have the following way of describing the group:

(28) K = O(3) = { O(Z₁, Z₂, Z₃) = (Z₁ Z₂ Z₃) ∈ GL(3,C) | ∀i = 1, 2, 3 : Z_i is a column vector in C³, O(Z₁, Z₂, Z₃)ᵗ O(Z₁, Z₂, Z₃) = 1, and moreover ∀i = 1, 2, 3 : Z̄_i = Z_i }.

Clearly, also in this case one only needs the polynomials, not the conjugates of such.

Because of Corollary 5.5 we have (essentially) proved the Peter-Weyl Theorem in the case of compact matrix groups. What remains is a little "cleaning up" regarding the multiplicities. This can be done in analogy with the case of a finite group and is left to the reader. Instead, let us notice that we have in fact proved more:

Proposition 5.7. For a compact matrix group K, any Π ∈ K̂ occurs in

(ρ_K)^{⊗r} ⊗ (ρ′_K)^{⊗s}

for some non-negative integers r, s.

EXAMPLE: Recall that the inverse of a matrix may be computed from the so-called Cramer's Rule. Specifically, if a matrix A is invertible,

(A⁻¹)_{ij} = C_{ji} / det A,

where C_{rs} is the determinant of the matrix obtained from A by placing zeros in row r and column s except that the position rs should have a 1. This is the rs "co-factor". Since a matrix U in SU(n) satisfies det U = 1 and Ū = (Uᵗ)⁻¹, it follows by considering the inverse of Uᵗ (!) that the last column in U satisfies

Ū_{jn} = det D_{jn},

where D_{jn} is the matrix obtained from U by replacing the jth row and nth column by zeros except for a 1 in the position jn. This implies that

Ū^n_col = F(U¹_col, U²_col, …, U^{n−1}_col),

where F is a multilinear function. It follows from this in the same way as it did for SU(2) that one only needs to consider polynomials, not conjugates of polynomials (or: vice versa!). This leads immediately to Corollary 5.8.

It is also proved in a more advanced exercise that for SU(n), the dual representation ρ′_K occurs in (ρ_K)^{⊗(n−1)}.⁸

Corollary 5.8. For K = SU(n), any Π ∈ K̂ occurs in (ρ_K)^{⊗r} for some non-negative integer r.

Notice that the representations of K that occur in C(K) (or L²(K)) are those that are in the image of the intertwiner R. If we have the full description of the representations in P_all, this information is equivalent to knowing the answer to the following question - which is unknown in complete generality to me:

QUESTIONS: What is the kernel K of R? We know from abstract algebra that it is a finitely generated ideal. But when is the group K equal to the zero set of this ideal? And, if not, is the zero set a group? (See the first two examples below.) What can be said about the quotient P_all/K?

⁸ Notice that this implies that the trivial representation occurs in (ρ_K)^{⊗n}.


EXAMPLE: In the K = S¹ example from before, one can see that if a polynomial

Σ_{k,ℓ=0}^{N,R} a_{k,ℓ} zᵏ z̄^ℓ

restricts to 0 on S¹, then there exists a polynomial q such that

Σ_{k,ℓ=0}^{N,R} a_{k,ℓ} zᵏ z̄^ℓ = q(z, z̄)(zz̄ − 1).

In other words, the kernel is generated, as an ideal, by (zz̄ − 1).

If we now consider the representation theory of S¹ we can see (as expected) that P_all/K has a basis consisting of the equivalence classes [zⁿ], n = 0, 1, 2, …, together with the classes [z̄ᵏ], k = 1, 2, …. Let p_n(z) = zⁿ for n = 0, 1, 2, … and q_k(z̄) = z̄ᵏ for k = 1, 2, …. Then the representation U_K = U_{S¹} is given as follows:

(29) (U_{S¹}(e^{iθ})p_n)(z) = p_n(e^{−iθ}z) = e^{−inθ}zⁿ = e^{−inθ}p_n(z),

(30) (U_{S¹}(e^{iθ})q_k)(z̄) = q_k(e^{iθ}z̄) = e^{ikθ}z̄ᵏ = e^{ikθ}q_k(z̄).

In other words, we get the 1-dimensional representations of S¹:

e^{iθ} → e^{irθ}, r = 0, ±1, ±2, ….

These are precisely the irreducible finite-dimensional representations of S¹.
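The eigenvalue computations (29) and (30) can be checked directly by realizing q_k on C as z → z̄ᵏ, so that the same substitution z → e^{−iθ}z implements the action on both types of classes. A Python sketch:

```python
import cmath

def U(theta, f):
    """(U(e^{i theta}) f)(z) = f(e^{-i theta} z), the action (18) for K = S^1."""
    return lambda z: f(cmath.exp(-1j * theta) * z)

n, k = 3, 2
p = lambda z: z ** n                     # p_n(z) = z^n
q = lambda z: z.conjugate() ** k         # q_k realized as z -> zbar^k

theta = 0.7
for m in range(8):                       # sample points on the unit circle
    z = cmath.exp(2j * cmath.pi * m / 8)
    # (29): p_n is an eigenvector with eigenvalue e^{-i n theta}
    assert abs(U(theta, p)(z) - cmath.exp(-1j * n * theta) * p(z)) < 1e-12
    # (30): q_k is an eigenvector with eigenvalue e^{+i k theta}
    assert abs(U(theta, q)(z) - cmath.exp(1j * k * theta) * q(z)) < 1e-12
```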

EXAMPLE/EXERCISE: Let r, s be positive relatively prime integers. Carry out an analysis of the group

(31) K₁ = { diag(e^{irθ}, e^{isθ}) | θ ∈ R }

along the lines of the previous example.

EXAMPLE: Let Π_N and Π_R be two irreducible representations of SU(2) in spaces P^(2)_N and P^(2)_R of homogeneous polynomials of degree N and R, respectively, in 2 complex variables. Then, as representation spaces,

(32) P^(2)_N ⊗ P^(2)_R = ⊕_{i=0}^{min{N,R}} P^(2)_{|N−R|+2i} = P^(2)_{|N−R|} ⊕ · · · ⊕ P^(2)_{N+R}.

In particular,

(33) Π_N ⊗ Π_R = ⊕_{i=0}^{min{N,R}} Π_{|N−R|+2i} = Π_{|N−R|} ⊕ · · · ⊕ Π_{N+R}.
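A quick dimension count supports (33): the dimension (N+1)(R+1) of the tensor product must equal the sum of the dimensions of the proposed summands. The following sketch (my own illustration) checks this for small N, R:

```python
def cg_summands(N, R):
    """Degrees |N-R|+2i appearing in Pi_N (x) Pi_R, per equation (33)."""
    return [abs(N - R) + 2 * i for i in range(min(N, R) + 1)]

def dims_match(N, R):
    # dim Pi_d = d + 1, and dim(P_N (x) P_R) = (N+1)(R+1).
    return (N + 1) * (R + 1) == sum(d + 1 for d in cg_summands(N, R))

print(all(dims_match(N, R) for N in range(8) for R in range(8)))  # True
```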


Proof: The restriction map S (restriction to the "diagonal"): P^(2)_N ⊗ P^(2)_R ↦ P^(2)_{N+R} given by

(34) (S(p_N ⊗ q_R))(z_1, z_2) = p_N(z_1, z_2) q_R(z_1, z_2)

for p_N ∈ P^(2)_N and q_R ∈ P^(2)_R is clearly a non-trivial intertwiner into P^(2)_{N+R}. (Contemplate this.) We extend the map S to all of P^(4)_{N+R} in the obvious way. Let us for a moment denote the variables of p_N by w_1, w_2 and those of q_R by w_3, w_4. The subspace K_2 (ideal) of P^(4)_{N+R} generated by the second order homogeneous polynomial w_1w_4 − w_2w_3 (the similarity with the formula for a determinant is intended!) is clearly mapped to zero by the map S. The map E: P^(4)_{N+R−2} ↦ P^(4)_{N+R} given by

(35) P^(4)_{N+R−2} ∋ p ↦ (w_1w_4 − w_2w_3)p ∈ P^(4)_{N+R}

is injective because there is Unique Factorization in P^(4)_{N+R}. The subspace E(P^(2)_{N−1} ⊗ P^(2)_{R−1}) ⊂ P^(2)_N ⊗ P^(2)_R has dimension NR, P^(2)_N ⊗ P^(2)_R has dimension (N + 1)(R + 1) = NR + N + R + 1, and P^(2)_{N+R} has dimension N + R + 1.

It follows that there is an equivalence of representation spaces

(36) P^(2)_N ⊗ P^(2)_R = (P^(2)_{N−1} ⊗ P^(2)_{R−1}) ⊕ P^(2)_{N+R},

where, since we write ⊕, the summand P^(2)_{N+R} actually should be interpreted as an irreducible subspace of P^(2)_N ⊗ P^(2)_R isomorphic to this summand. We use here the complete reducibility of finite-dimensional representations of compact Matrix Lie Groups. The proof is now almost complete; a downward induction argument finishes the job. □
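The downward induction at the end of the proof can be made concrete: iterating (36) until one factor becomes the trivial representation must reproduce the list of summands in (32). A small sketch (my own illustration) of that bookkeeping:

```python
def decompose(N, R):
    """Degrees of the irreducible summands of P_N (x) P_R, via iterating (36)."""
    if min(N, R) == 0:          # P_0 is trivial: nothing more to split off
        return [N + R]
    return decompose(N - 1, R - 1) + [N + R]

def matches_32(N, R):
    expected = [abs(N - R) + 2 * i for i in range(min(N, R) + 1)]
    return sorted(decompose(N, R)) == sorted(expected)

print(all(matches_32(N, R) for N in range(7) for R in range(7)))  # True
```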

6. The representation theory of finite (or compact) groups

The representation theory of finite groups is covered by the above, since a finite group has a Haar measure:

Proposition 6.1. The counting measure on a finite group G is left as well as right invariant.

Proof: Multiplication by a fixed element h ∈ G, be it from the left or from the right, simply introduces a permutation of the group elements; it clearly preserves the magnitude of any subset. □

We will later list some special cases of the previous results.
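For a concrete instance of Proposition 6.1, one can realize S_3 as permutation tuples and check that translating a subset by any fixed group element, on either side, preserves its cardinality (my own illustration; the choice of S_3 and of the subset E is arbitrary):

```python
from itertools import permutations

def compose(g, h):
    """(g h)(i) = g(h(i)) for permutations stored as tuples."""
    return tuple(g[h[i]] for i in range(len(h)))

G = list(permutations(range(3)))   # the six elements of S_3
E = G[:4]                          # an arbitrary subset of the group

# Left and right translates of E have the same number of elements as E.
invariant = all(
    len({compose(h, g) for g in E}) == len(E) == len({compose(g, h) for g in E})
    for h in G
)
print(invariant)  # True
```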


7. The representation theory of SU(2) (once again)

7.1. Special representations of SU(2) - the group point of view. We consider the representation Π_n on the space P_n of homogeneous polynomials of degree n in the complex variables z, w.⁹ Then

(37) (Π_n(( a b ; c d ))p)(z, w) = p(dz − bw, −cz + aw).

If we let p_{j,n}(z, w) = z^j w^{n−j} for j = 0, 1, . . . , n, we obtain in particular

(Π_n(( e^{iθ} 0 ; 0 e^{−iθ} ))p_{j,n}) = e^{(n−2j)iθ} p_{j,n}.

It follows that the character is given by

(38) χ_{Π_n}( e^{iθ} 0 ; 0 e^{−iθ} )

(39)   = e^{niθ} + e^{(n−2)iθ} + · · · + e^{(−n+2)iθ} + e^{−niθ}

(40)   = (e^{(n+1)iθ} − e^{−(n+1)iθ}) / (e^{iθ} − e^{−iθ}) = sin((n + 1)θ) / sin θ.

We know that any element g = ( x_1 + ix_4  x_2 + ix_3 ; −x_2 + ix_3  x_1 − ix_4 ) ∈ SU(2) is conjugate to a diagonal element ( e^{iθ} 0 ; 0 e^{−iθ} ) where, by considering traces, x_1 = cos θ, which means that the θ we use here is the same as in Section 2.2. Then, by the formula for the Haar measure,

(41) ⟨χ_{Π_n}, χ_{Π_r}⟩ = ∫_{SU(2)} χ_{Π_n} χ̄_{Π_r} dg
     = (1/(2π²)) ∫_{θ,ψ,φ} (sin((n+1)θ)/sin θ)(sin((r+1)θ)/sin θ) sin²θ sin ψ dψ dφ dθ
     = (2/π) ∫_0^π sin((n+1)θ) sin((r+1)θ) dθ = δ_{n,r}.
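The last integral can be checked numerically (my own sanity check; the midpoint rule is exact for these trigonometric integrands):

```python
import numpy as np

M = 1000
dt = np.pi / M
t = (np.arange(M) + 0.5) * dt     # midpoint grid on (0, pi)

def pairing(n, r):
    """(2/pi) * integral over [0, pi] of sin((n+1)t) sin((r+1)t) dt."""
    return (2.0 / np.pi) * np.sum(np.sin((n + 1) * t) * np.sin((r + 1) * t)) * dt

# The characters chi_n, n = 0, ..., 5, pair to the identity matrix.
gram = np.array([[pairing(n, r) for r in range(6)] for n in range(6)])
print(np.allclose(gram, np.eye(6)))  # True
```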

7.2. Decomposition of tensor products.

Lemma 7.1. Assume that n ≥ m. Then

χ_{Π_n} · χ_{Π_m} = χ_{Π_{n+m}} + χ_{Π_{n+m−2}} + · · · + χ_{Π_{n−m}}.

⁹More precisely, we consider holomorphic polynomials, i.e., expressions of the form ∑_i a_i z^i w^{n−i}.


Proof: We have that

χ_{Π_n} · χ_{Π_m} ( e^{iθ} 0 ; 0 e^{−iθ} ) = (e^{niθ} + e^{(n−2)iθ} + · · · + e^{−niθ}) · (e^{miθ} + e^{(m−2)iθ} + · · · + e^{−miθ}).

It is easy to see that e^{(n+m)iθ} appears exactly once in this product, e^{(n+m−2)iθ} appears twice, e^{(n+m−4)iθ} appears three times, . . . , e^{(n−m)iθ} appears m + 1 times, e^{(n−m−2)iθ} appears m + 1 times, e^{(n−m−4)iθ} appears m + 1 times, . . . , e^{(−n+m)iθ} appears m + 1 times, . . . , e^{−(n+m−4)iθ} appears three times, e^{−(n+m−2)iθ} appears twice, and e^{−(n+m)iθ} appears exactly once. The claim follows from this, once one realizes that the decomposition in Lemma 4.5 is, by Theorem 4.11, unique. □

Corollary 7.2.

Π_n ⊗ Π_m = Π_{n+m} ⊕ Π_{n+m−2} ⊕ · · · ⊕ Π_{n−m}.
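The multiplicity count in the proof of Lemma 7.1 is easy to automate: encode χ_{Π_n} on the diagonal torus by the multiset of exponents {−n, −n+2, . . . , n} and compare both sides of the lemma (my own illustration):

```python
from collections import Counter

def chi(n):
    """Exponent multiset of chi_n on the diagonal torus: n, n-2, ..., -n."""
    return Counter(range(-n, n + 1, 2))

def product(n, m):
    """Exponent multiset of chi_n * chi_m."""
    out = Counter()
    for a, ca in chi(n).items():
        for b, cb in chi(m).items():
            out[a + b] += ca * cb
    return out

def lemma_holds(n, m):          # requires n >= m
    rhs = Counter()
    for k in range(n - m, n + m + 1, 2):
        rhs += chi(k)
    return product(n, m) == rhs

print(all(lemma_holds(n, m) for n in range(7) for m in range(n + 1)))  # True
```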

8. Unitarity; the algebra point of view.

We will in this section consider a number of situations where one has a representation π of a 3-dimensional Lie algebra g = Span_R{L_1, L_2, L_3} (but the discussion easily generalizes to higher-dimensional Lie algebras). Let us denote the representation space by V. We are not assuming that V necessarily is finite-dimensional. We will assume that there are 3 operators S, A, C that are complex linear combinations of the generators of the Lie algebra in the given representation, and which satisfy

(42) [S,A] = 2 · s · A; [S,C] = −2 · s · C; and [A,C] = S

for some real constant s. We are not directly assuming irreducibility, but we assume furthermore that there is a non-zero vector v_0 that satisfies

(43) A(v0) = 0 and S(v0) = λ · v0,

for some λ ∈ R. We are interested in the space

V_C = Span_C{C^n(v_0) | n = 0, 1, 2, . . . }.

The space is "created" by C and its non-negative powers, and one often calls C a creation operator. Likewise, A is called an annihilation operator. There is no universal name for S; it depends on the specific situation. The vector v_0 is sometimes called the highest weight vector, while in physics, again depending on the situation, it might be called the vacuum vector.

We are interested in those situations where there exists an inner product ⟨·, ·⟩ (positive definite, of course, but with no assumptions at the moment about completeness) on V_C.

Page 21: f X x f x 6= 0 } f 0 : f K E U,Uweb.math.ku.dk/~jakobsen/replie/compact-work.pdfRepresentations of Lie Groups: Compact (Matrix Lie) Groups Hans Plesner Jakobsen DepartmentofMathematics

21

Definition 8.1. If T is a linear operator V_C ↦ V_C, the formal adjoint, T†, is defined by

∀n, r ∈ N ∪ {0}: ⟨T†C^n(v_0), C^r(v_0)⟩ = ⟨C^n(v_0), TC^r(v_0)⟩.

Definition 8.2. The representation π of g is unitarizable if the operators π(L_k); k = 1, 2, 3 are formally skew adjoint, i.e.

∀k = 1, 2, 3: π(L_k)† = −π(L_k).

In the situations we are going to investigate, the assumption of unitarizability implies

(44) either C† = −A, or C† = A.

Let us cover both cases by writing

(45) C† = ε · A

for some fixed ε = ±1. Now observe

Proposition 8.3. With the above assumptions there is at most one inner product on V_C in which the representation is formally skew adjoint and such that v_0 is a unit vector in the corresponding norm.

Proof: We need to compute ⟨C^n(v_0), C^r(v_0)⟩ for all n, r in N ∪ {0}. Assume that n ≥ r and observe (verify #1):

(46) ⟨C^n(v_0), C^r(v_0)⟩ = ε^n⟨v_0, [A^n, C^r](v_0)⟩.

Then observe that (42) implies that

(47) [A^n, C^r]v_0 = A^{n−r}L_{n,r} · v_0, (verify #2)

where L_{n,r} = L_{n,r}(λ) is a real constant, and where, as usual, A^0 := 1. But this means that we at most get something non-zero when n = r. The case n < r can be handled similarly (verify #3). Finally, the constant L_{n,r} can be computed explicitly. □

Remark 8.4. Let v_n = C^n(v_0) for all n = 0, 1, 2, . . . . It follows from the proof above that

(48) r ≠ n ⇒ ⟨v_r, v_n⟩ = 0.

If s ≠ 0¹⁰, this could also have been proved by observing that these vectors are eigenvectors for S (verify #4).

To proceed, we need to compute [A^n, C^n].

¹⁰This extra assumption was added on June 14 (why is it necessary?)


Proposition 8.5.

(49) [A^n, C^n] = ∑_{k=0}^{n} C^k P_{n,k}(S) A^k,

where, for each k = 0, 1, . . . , n, P_{n,k}(S) is a polynomial in S, and where (verify #5)

(50) P_{n,0}(S) = n! · S · (S − s) · · · (S − (n − 1)s).

Corollary 8.6. (verify #6)

(51) ⟨v_n, v_n⟩ > 0 ⇔ ε^n · λ · (λ − s) · · · · · (λ − (n − 1)s) > 0.

Example 8.7. To get started, one may notice that

[A, C] = S, [A², C] = AS + SA = 2A(S + s),
[A, C²] = 2C(S − s), [A², C²] = 2S(S − s) + 4C(S − 2s)A.

Also notice that (verify #7)

(52) ∀n: [A, C^n] = n · C^{n−1}(S − (n − 1)s).
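Identity (52) can be tested on concrete matrices. Below (my own illustration), S, A, C are taken from the standard (N+1)-dimensional irreducible representation of sl(2), which satisfies (42) with s = 1:

```python
import numpy as np

N, s = 6, 1
d = N + 1
# Basis v_0, ..., v_N with S v_j = (N - 2j) v_j, A v_j = (N - j + 1) v_{j-1},
# C v_{j-1} = j v_j (the standard sl(2) raising/lowering matrices).
S = np.diag([N - 2 * j for j in range(d)]).astype(float)
A = np.zeros((d, d))
C = np.zeros((d, d))
for j in range(1, d):
    A[j - 1, j] = N - j + 1
    C[j, j - 1] = j

def comm(P, Q):
    return P @ Q - Q @ P

# The triple satisfies (42) with s = 1 ...
assert np.allclose(comm(S, A), 2 * s * A)
assert np.allclose(comm(S, C), -2 * s * C)
assert np.allclose(comm(A, C), S)

# ... and (52): [A, C^n] = n C^(n-1) (S - (n-1) s).
ok = all(
    np.allclose(
        comm(A, np.linalg.matrix_power(C, n)),
        n * np.linalg.matrix_power(C, n - 1) @ (S - (n - 1) * s * np.eye(d)),
    )
    for n in range(1, d)
)
print(ok)  # True
```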

We will now use these general observations to examine some specific Lie algebras:

8.1. The Heisenberg algebra. The Heisenberg Lie algebra h is generated by elements R, U, W satisfying

(53) [R,U ] = W, [W,R] = 0 = [W,U ].

Usually one considers the elements P, Q, Z in the set i·h given by P = −iR, Q = −iU, and Z = −iW. In terms of these, the commutators are

(54) [Q,P ] = iZ, [Z,Q] = 0 = [Z, P ].

One defines

(55) A = (1/√2)(Q + iP), S = Z, and C = (1/√2)(Q − iP).

Then the triple A, S, C satisfies (42) with s = 0 (verify #8)¹¹. Moreover, it follows that in a unitarizable representation, one must have

(56) C† = A, which implies that here, ε = 1.

So (verify #9)

(57) There is unitarity if and only if λ ≥ 0.
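Behind (57) is the norm recursion forced by C† = εA together with (52): ⟨v_n, v_n⟩ = ε·n·(λ − (n−1)s)·⟨v_{n−1}, v_{n−1}⟩. A small sketch (my own illustration) of this recursion, specialized at the end to the Heisenberg case s = 0, ε = 1, where the norms become n!·λ^n:

```python
def norms(lam, s, eps, nmax):
    """Squared norms N_n = <v_n, v_n> via N_n = eps * n * (lam - (n-1)s) * N_{n-1}."""
    out = [1]                      # N_0 = <v_0, v_0> = 1
    for n in range(1, nmax + 1):
        out.append(eps * n * (lam - (n - 1) * s) * out[-1])
    return out

# Heisenberg case s = 0, eps = +1: N_n = n! * lam^n, positive for all n
# exactly when lam > 0 (lam = 0 gives only null vectors beyond v_0).
print(norms(2, 0, 1, 4))                       # [1, 2, 8, 48, 384]
print(all(x > 0 for x in norms(2, 0, 1, 8)))   # True
print(any(x < 0 for x in norms(-1, 0, 1, 8)))  # True
```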

¹¹In this example, we denote by X both a Lie algebra element and its value in a representation.


8.2. The Lie algebra su(2). This Lie algebra is generated by the three elements

(58) ( 0 1 ; −1 0 ), ( i 0 ; 0 −i ), ( 0 i ; i 0 ).

In the complexification one finds the elements (verify #10)

(59) Y = ( 0 0 ; 1 0 ), X = ( 0 1 ; 0 0 ), H = ( 1 0 ; 0 −1 ).

In a representation¹² π, set C = π(Y), A = π(X), S = π(H). This triple then satisfies (42). Now one has (verify #11) s = 1 and ε = 1, and any finite-dimensional representation of su(2) is unitarizable according to the procedure given in this section (verify #12).

Before proceeding, observe that one has (verify #13)

Proposition 8.8. The Lie algebras su(1, 1), sl(2,R), and sp(1,R) are isomorphic.

8.3. The Lie algebra su(1, 1). This Lie algebra is generated by the three elements

(60) ( 0 1 ; 1 0 ), ( i 0 ; 0 −i ), ( 0 i ; −i 0 ).

In the complexification one finds the elements (verify #14)

(61) Y = ( 0 0 ; 1 0 ), X = ( 0 1 ; 0 0 ), H = ( 1 0 ; 0 −1 ).

In a representation¹³ π, set C = π(Y), A = π(X), S = π(H). This triple then satisfies (42) (verify #15). In contrast to the su(2) case, however, one now has s = 1 and ε = −1 (verify #16), and there are infinite-dimensional unitarizable representations provided the constant λ satisfies a certain condition, which is left for the reader to determine (verify #17).

¹²June 19: The word unitary has been removed. What we mean is that we are looking for unitarizable representations π.
¹³June 19: The word unitary has been removed. What we mean is that we are looking for unitarizable representations π.


8.4. The Heisenberg algebra AND the Lie algebra su(1, 1). Let C, S, A be the triple from the discussion of the Heisenberg algebra. Set

(62) C̃ = (i/2)C², Ã = (i/2)A², S̃ = −(CA + 1/2).

Then this triple satisfies all the properties needed to define a unitarizable module for su(1, 1), provided that C, A, S come from a unitarizable representation of the Heisenberg algebra (verify #18)¹⁴. There are two independent highest weight vectors (not necessarily both of norm 1), namely v_0 and C(v_0), where v_0 is the highest weight vector for the representation of the Heisenberg algebra (verify #19). More precisely, set

V_C^even = Span{C^n(v_0) | n = 0, 2, 4, · · · } and
V_C^odd = Span{C^n(v_0) | n = 1, 3, 5, · · · }.

Then these spaces are invariant under the su(1, 1) representation defined by the elements C̃, Ã, S̃ (verify #20). The vectors v_0 and C(v_0) fit into (43) for this triple; in particular, they are eigenvectors for S̃, each with its own eigenvalue. Computing these eigenvalues is left for the reader (verify #21).
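The commutation relations claimed for the triple (62) can be verified in the λ = 1 polynomial realization of the Heisenberg triple, A = d/dx, C = multiplication by x, S = [A, C] = 1 (my own sketch; the names At, Ct, St for the tilded operators are mine; polynomials are stored as coefficient lists, and the check is that [S̃, Ã] = 2Ã, [S̃, C̃] = −2C̃, [Ã, C̃] = S̃, i.e. (42) with s = 1):

```python
def deriv(p):
    """Derivative of a polynomial stored as p[n] = coefficient of x^n."""
    return [n * p[n] for n in range(1, len(p))] or [0]

def At(p):   # A-tilde = (i/2) d^2/dx^2
    return [0.5j * c for c in deriv(deriv(p))]

def Ct(p):   # C-tilde = (i/2) x^2
    return [0, 0] + [0.5j * c for c in p]

def St(p):   # S-tilde = -(x d/dx + 1/2)
    return [-(n + 0.5) * c for n, c in enumerate(p)]

def add(p, q, scale=1):
    m = max(len(p), len(q))
    p = p + [0] * (m - len(p)); q = q + [0] * (m - len(q))
    return [a + scale * b for a, b in zip(p, q)]

def comm(F, G, p):
    return add(F(G(p)), G(F(p)), -1)

def eq(p, q):
    m = max(len(p), len(q))
    return all(abs(a - b) < 1e-12
               for a, b in zip(p + [0] * (m - len(p)), q + [0] * (m - len(q))))

xn = lambda n: [0] * n + [1]   # the monomial x^n
ok = all(
    eq(comm(St, At, xn(n)), add([0], At(xn(n)), 2))       # [St, At] = 2 At
    and eq(comm(St, Ct, xn(n)), add([0], Ct(xn(n)), -2))  # [St, Ct] = -2 Ct
    and eq(comm(At, Ct, xn(n)), St(xn(n)))                # [At, Ct] = St
    for n in range(8)
)
print(ok)  # True
```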

¹⁴This is what is sometimes called "the Harmonic Representation" or "the Metaplectic Representation". It is connected to the Stone-von Neumann Uniqueness Theorem and to the fact that Sp(1,R) acts by automorphisms on the Heisenberg algebra.


9. Higher dimensional examples, e.g. SU(3)

One is here led, by the same sort of reasoning, to the complexification and to the operators

(63) e_1 = ( 0 1 0 ; 0 0 0 ; 0 0 0 ), e_2 = ( 0 0 0 ; 0 0 1 ; 0 0 0 ),
(64) f_1 = ( 0 0 0 ; 1 0 0 ; 0 0 0 ), f_2 = ( 0 0 0 ; 0 0 0 ; 0 1 0 ),
(65) h_1 = ( 1 0 0 ; 0 −1 0 ; 0 0 0 ), h_2 = ( 0 0 0 ; 0 1 0 ; 0 0 −1 ).

(There are two more generators of su(3), but they are not as important.) A finite-dimensional irreducible unitary representation (π, H_π) of SU(3) is given by a pair (n_1, n_2) of non-negative integers (and any such pair leads to such a representation) and is determined by a highest weight vector v_0 satisfying

dπ(e_1)v_0 = dπ(e_2)v_0 = 0,
dπ(h_1)v_0 = n_1 v_0 ; dπ(h_2)v_0 = n_2 v_0,
H_π = Span{dπ(f_1)^{r_1} dπ(f_2)^{s_1} · · · dπ(f_1)^{r_k} dπ(f_2)^{s_k}(v_0)}.

An irreducible representation of U(3) is also an irreducible representation of SU(3), and it is simply given by such a representation tensored with a 1-dimensional representation of the form U(3) ∋ u ↦ (det u)^p for p ∈ Z. We here assume that p ≥ 0. One usually represents such a representation by a triple (p + n_1 + n_2, p + n_2, p), where (n_1, n_2) gives the SU(3) representation. Furthermore, it is convenient to give it in the form of a "Young Diagram" with three rows of, respectively, p + n_1 + n_2, p + n_2, and p boxes.

Figure 1. Young Diagram (9, 4, 2) corresponding to the irreducible representation of SU(3) given by (n_1, n_2) = (5, 2). As an irreducible representation of U(3) it has an extra factor of (det u)² (because of the 2 boxes in the third row).
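The translation between the two labelings is simple bookkeeping (my own helper functions, with hypothetical names): the U(3) triple is (p + n_1 + n_2, p + n_2, p), and for Figure 1 one recovers (9, 4, 2) from (n_1, n_2) = (5, 2) and p = 2:

```python
def to_triple(n1, n2, p):
    """U(3) triple (row lengths of the Young diagram) from the SU(3) label and p."""
    return (p + n1 + n2, p + n2, p)

def from_triple(a, b, c):
    """Recover (n1, n2, p) from the U(3) triple (a, b, c)."""
    return (a - b, b - c, c)

print(to_triple(5, 2, 2))  # (9, 4, 2)
```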