U.U.D.M. Project Report 2008:8

Degree project in mathematics (Examensarbete i matematik), 30 hp

Supervisor and examiner (Handledare och examinator): Volodymyr Mazorchuk. May 2008

Department of Mathematics Uppsala University

Simple weight sl(2)-modules and their tensor products

Antoine Durdek


May 20, 2008

Contents

1 Theory about the sl(2) Lie Algebra
  1.1 Definitions
    1.1.1 Lie Algebra
    1.1.2 Universal enveloping algebra
    1.1.3 The Lie algebra sl(2)
    1.1.4 Casimir operator
  1.2 Finite dimensional sl(2)-modules
    1.2.1 Structure of a module generated by a primitive element
    1.2.2 Decomposition of finite dimensional modules

2 Application
  2.1 About the tensor product
    2.1.1 The general answer
    2.1.2 An example
    2.1.3 Generalization
  2.2 An example of an infinite dimensional sl(2)-module
    2.2.1 Construction of an infinite dimensional sl(2)-module
    2.2.2 Study of this module
    2.2.3 Tensor product with W1
    2.2.4 Action of C on this module
    2.2.5 Generalization

1 Theory about the sl(2) Lie Algebra

1.1 Definitions

1.1.1 Lie Algebra

Definition 1 A Lie algebra g is a vector space together with a skew-symmetric bilinear map

[ , ] : g × g → g (1.1)

satisfying the Jacobi identity:

[X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0 (1.2)

for all vectors X, Y and Z in g. The product [x, y] is called the Lie bracket, or just bracket.

Remark 1 1. Note that a Lie algebra g is in general neither associative nor unital.

2. We can define in a very natural way the notion of a Lie subalgebra.

Let us say that h is a subalgebra of g if h is a vector subspace of g and h is closed under the bracket operation. Then h itself has the structure of a Lie algebra.

Example 1 1. The first example that we can give is maybe the easiest one. Let E be a vector space and define [x, y] = 0 for all x, y ∈ E.

This definition satisfies all the points of the definition of a Lie algebra.

Such a Lie algebra is called an abelian algebra.

2. Let (A, •) be an associative algebra (in the usual sense) and define

[x, y] = x • y − y • x for all x, y ∈ A

This bracket operation is clearly bilinear and skew-symmetric. As a first calculation, let us verify that A is a Lie algebra by checking the Jacobi identity:

[x, [y, z]] + [y, [z, x]] + [z, [x, y]]

= [x, y • z − z • y] + [y, z • x − x • z] + [z, x • y − y • x]

= (x • y • z − x • z • y − y • z • x + z • y • x) +(y • z • x − y • x • z − z • x • y + x • z • y) +(z • x • y − z • y • x − x • y • z + y • x • z)

= 0

3. For example, we can take the associative algebra of matrices of size n, M_n(C), with the matrix product. The corresponding Lie algebra is called gl(n). For a special Lie subalgebra of gl(n) we will look at the subspace of matrices with trace zero, sl(n). We can verify very quickly that this subspace is closed under the bracket operation, using only the properties of the trace from linear algebra.

Let A, B be two elements of sl(n); we just need to show Tr([A, B]) = 0:

Tr([A, B]) = Tr(AB − BA)
= Tr(AB) − Tr(BA)
= Tr(AB) − Tr(AB)
= 0
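The trace argument above is easy to confirm numerically. The following sketch (our own illustration, using numpy and randomly generated trace-free matrices; the helper name random_sl is ours) checks that the commutator of two elements of sl(n) again has trace zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_sl(n):
    """Random complex n x n matrix, projected onto the trace-zero subspace."""
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return M - np.trace(M) / n * np.eye(n)

A, B = random_sl(n), random_sl(n)
bracket = A @ B - B @ A  # the Lie bracket on matrices

# Tr([A, B]) = Tr(AB) - Tr(BA) = 0, so sl(n) is closed under the bracket
assert abs(np.trace(bracket)) < 1e-10
```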

What is a Lie algebra of dimension one or two over the complex field C?

• Let L be a Lie algebra of dimension one. Then L = ⟨l⟩, and the bracket operation forces L to be abelian, since [l, l] = 0 by skew-symmetry. Thus we have just:

L = {λl | λ ∈ C}

Then L is isomorphic to the Lie algebra associated with (C, ·).

• Let now L be a Lie algebra of dimension two, L = ⟨l_1, l_2⟩. We have of course [l_i, l_i] = 0 for i = 1, 2, and then L is determined by the single bracket [l_1, l_2]. We have two different cases.

Case 1: [l_1, l_2] = 0

Then all the brackets are zero, and L is abelian. We can conclude by saying that L is isomorphic to C^2.

Case 2: [l_1, l_2] ≠ 0

Then the bracket gives us a non-zero element of L, say [l_1, l_2] = al_1 + bl_2 ∈ L. We can work a little bit with this equation in order to have a nicer way to describe this Lie algebra.

The image of the bracket is one-dimensional, so we can change the basis in order to have the result of the bracket equal to the first vector of the basis. We know that [l_1, l_2] = al_1 + bl_2 ≠ 0. Then, without loss of generality^{2}, we have a ≠ 0. Let us then choose a new basis by writing:

l'_1 := al_1 + bl_2
l'_2 := (1/a) l_2

thus

[l'_1, l'_2] = [al_1 + bl_2, (1/a) l_2]
= [al_1, (1/a) l_2]
= [l_1, l_2]
= al_1 + bl_2
= l'_1

This bracket completely determines L, which is the unique non-abelian Lie algebra of dimension two over C.
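The unique non-abelian two-dimensional Lie algebra can be realized by concrete 2 × 2 matrices. In the sketch below (the particular matrices are our own choice, not taken from the text), we verify the defining relation [l'_1, l'_2] = l'_1 numerically:

```python
import numpy as np

# Concrete realization of the non-abelian 2-dimensional Lie algebra:
# l1 plays the role of l'_1, l2 the role of l'_2.
l1 = np.array([[0., 1.], [0., 0.]])
l2 = np.array([[-0.5, 0.], [0., 0.5]])

def bracket(a, b):
    return a @ b - b @ a

# The defining relation [l'_1, l'_2] = l'_1
assert np.allclose(bracket(l1, l2), l1)
```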

Now we would like to classify different kinds of Lie algebras, but first we need a few definitions. We can define for a Lie algebra the notion of an ideal, as for a classical algebra. We say that a Lie subalgebra h of g is an ideal if it satisfies:

[x, y] ∈ h for all x ∈ h and y ∈ g

Moreover, as for an associative algebra, we have the equivalence:

g/h is a Lie algebra ⇔ h is an ideal of g

We can define two descending chains of subalgebras, called the lower central series {D_k g} and the derived series {D^k g}. We define them by induction:

D_{1}g = [g, g]

D_{k}g = [g, D_{k−1}g]

2: If a = 0 then b ≠ 0, and after permuting l_1 and l_2 we have a ≠ 0.

and

D^{1}g = [g, g]

D^{k}g = [D^{k−1}g, D^{k−1}g]

Definition 2 Let g be a Lie algebra.

• We say that g is simple if g contains no nontrivial ideal and dim g ≥ 2
• We say that g is nilpotent if D_k g = 0 for some k
• We say that g is solvable if D^k g = 0 for some k
• We say that g is semisimple if g has no nonzero solvable ideals

Example 2 1. An example of a simple Lie algebra is sl(2), and more generally sl(n) with n ≥ 2.

2. A nilpotent algebra is the algebra of strictly upper-triangular matrices of size n, U_n(C). Of course, abelian Lie algebras are nilpotent algebras too (with k = 1).

3. An example of a solvable Lie algebra is given by the algebra of upper-triangular matrices of size n. Of course, any nilpotent algebra is a solvable algebra.

4. A semisimple algebra is isomorphic to a product of simple algebras.

1.1.2 Universal enveloping algebra

We have seen in the first example (2) how to build a Lie algebra from an associative algebra. In this section, we want to reverse the problem, i.e. starting from a Lie algebra L over the complex numbers, we want to obtain an associative algebra U(L) with unity, with the property that the bracket operation in U(L) corresponds to the bracket in L. This algebra is called the universal enveloping algebra and is much bigger than L. We will build such an algebra up to isomorphism.

Construction of the Universal enveloping algebra

We can construct the universal enveloping algebra very explicitly. It is the quotient of the tensor algebra of L by a two-sided ideal. The tensor algebra T(L) of a Lie algebra is defined as follows:

T(L) = ⊕_{i=0}^{∞} T_i(L)

where T_i(L) = L^{⊗i} = L ⊗ · · · ⊗ L (i times) and T_0(L) = C.

We define the multiplication m on T(L) by:

m : T (L) × T (L) −→ T (L)

(a, b) 7−→ m(a, b) = a ⊗ b

where a ∈ T_i(L) and b ∈ T_j(L), so that m(a, b) ∈ T_{i+j}(L). We define the two-sided ideal I by its generators:

I = ⟨ a ⊗ b − b ⊗ a − [a, b] | for all a and b in L ⟩

Then we can write : U(L) = T (L)/I.

Moreover, the canonical injection L ↪ T(L) defines a morphism ι : L → U(L).

The universal enveloping algebra built as above satisfies the universal property^{3}. Recall that the universal property can be stated as follows:

U(L) is the unique associative algebra such that for each associative algebra A and every Lie algebra morphism φ : L → A, there is a unique associative algebra morphism ψ : U(L) → A such that the diagram commutes, i.e.

φ = ψ ◦ ι

where ι : L → U(L) is the canonical map from above.

3: This explains why we call this algebra the "universal" enveloping algebra.

1.1.3 The Lie algebra sl(2)

In this section, we will present in detail the Lie algebra sl(2), which will be the example in all the following sections.

First, we can exhibit a basis of sl(2). Recall that sl(2) is the space of square matrices of size 2 with trace zero. The dimension of sl(2) is three^{4}, hence a basis is, for example:

H = ( 1   0 )     X = ( 0  1 )     Y = ( 0  0 )
    ( 0  −1 )         ( 0  0 )         ( 1  0 )

And we have the following relations:

[H, X] = 2X,  [H, Y] = −2Y,  [X, Y] = H  (1.3)

The relations (1.3) are the only relations we need to verify to have an sl(2)-structure. To study the structure of sl(2), we need its representations.
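The relations (1.3) can be checked directly on the matrices above; a minimal numpy sketch:

```python
import numpy as np

H = np.array([[1., 0.], [0., -1.]])
X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])

def bracket(a, b):
    return a @ b - b @ a

# The defining relations (1.3) of sl(2)
assert np.allclose(bracket(H, X), 2 * X)
assert np.allclose(bracket(H, Y), -2 * Y)
assert np.allclose(bracket(X, Y), H)
```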

Definition 3 A representation of sl(2) is a Lie algebra morphism ρ : sl(2) → gl(V) for some vector space V over C. The dimension of V is called the degree of ρ.

We know from representation theory that the notion of a representation of g and the notion of a g-module are equivalent. Here we will only pursue the module point of view. Recall that a g-module is a vector space V endowed with a bilinear g-action, g × V → V. We go from one to the other by defining

ρ(l)(v) := lv

where the left side corresponds to the representation and the right side to the g-module structure. These preliminaries justify the fact that we will study sl(2)-modules.

Let V be an sl(2)-module. We define V^λ, the eigenspace of V relative to H for the eigenvalue λ, i.e. the set of all x ∈ V such that Hx = λx.

An element of V^λ is called an element of weight λ.

Theorem 4 1. The sum Σ_{λ∈C} V^λ is direct.

4: The four entries of the matrix and one condition for the trace, so 4 − 1 = 3.

2. If v ∈ V^λ, then Xv ∈ V^{λ+2} and Yv ∈ V^{λ−2}.

Proof

1. To prove the first part of the theorem, we just have to show that the intersection of two spaces V^λ for different eigenvalues is {0}. This is clearly true, because eigenvectors corresponding to different eigenvalues are linearly independent, i.e. if v ≠ 0, then v ∉ V^λ ∩ V^μ for λ ≠ μ.

2. Let v ∈ V^{λ}. Then Hv = λv. Thus we have:

HXv = [H, X]v + XHv

= 2Xv + Xλv

= (λ + 2)Xv

In the same way, we have HYv = (λ − 2)Yv, which ends the proof.

1.1.4 Casimir operator

We will define an element in the universal enveloping algebra U(sl(2)) called the Casimir operator and denoted by C:

C = H^2 + 1 + 2XY + 2YX

Of course, we can write it in different ways:

C = H^{2}+ 1 + 2XY + 2Y X

= H^{2}+ 1 + 2XY + 2([Y, X] + XY )

= H^{2}+ 1 + 2XY + 2(−H + XY )

= (H − 1)^{2}+ 4XY

In the same way, we can show that C = (H + 1)^2 + 4YX.

We will now prove that C is in the center of the universal enveloping algebra.

To prove this assertion, we just need to show that C commutes with every element of a basis of sl(2); then it commutes with any element of U(sl(2)).

HXY = [H, X]Y + XHY
= 2XY + X[H, Y] + XYH
= 2XY − 2XY + XYH
= XYH

In a similar way, we can easily obtain HYX = YXH. Hence,
H · C = H · (H^{2}+ 1 + 2XY + 2Y X)

= H^{3}+ H + 2HXY + 2HY X

= H^{3}+ H + 2XY H + 2Y XH

= (H^{2}+ 1 + 2XY + 2Y X) · H

= C · H

So H and C commute.

X · C = X · (H^{2}+ 1 + 2XY + 2Y X)

= XH^{2}+ X + 2X^{2}Y + 2XY X

= [X, H] H + HXH + X + 2(X [X, Y ] + XY X) + 2XY X

= −2XH + H [X, H] + H^{2}X + X + 2(XH + [X, Y ] X + Y X^{2}) + 2XY X

= −2XH − 2HX + H^{2}X + X + 2(XH + HX + Y X^{2}) + 2XY X

= H^{2}X + X + 2Y X^{2}+ 2XY X

= (H^{2}+ 1 + 2XY + 2Y X) · X

= C · X

So C and X commute too. We can finish by proving that C and Y commute, by exactly the same calculation as above for X · C. So we have C in the center of our universal enveloping algebra. We will not need it, but let us mention that in general C can be constructed by choosing a basis of our Lie algebra and a non-degenerate bilinear form (the Killing form, for example), and constructing the dual basis with respect to this form. Then we just define C as:

C = Σ_i x_i · x_i^*

By Schur's lemma, C acts proportionally to the identity on every irreducible finite dimensional module.
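On the two-dimensional defining representation, the two expressions for C can be compared directly, and Schur's lemma can be observed: C acts as a scalar matrix. A numpy sketch (our own illustration):

```python
import numpy as np

H = np.array([[1., 0.], [0., -1.]])
X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])
I = np.eye(2)

C1 = (H - I) @ (H - I) + 4 * X @ Y   # C = (H-1)^2 + 4XY
C2 = (H + I) @ (H + I) + 4 * Y @ X   # C = (H+1)^2 + 4YX

assert np.allclose(C1, C2)
# C commutes with the whole basis of sl(2)
for Z in (H, X, Y):
    assert np.allclose(C1 @ Z, Z @ C1)
# Schur's lemma in action: C is the scalar (m+1)^2 = 4 on this 2-dimensional module
assert np.allclose(C1, 4 * I)
```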

1.2 Finite dimensional sl(2)-modules

1.2.1 Structure of module generated by a primitive element

Definition 5 The subalgebra of sl(2) generated by H and X is called the Borel subalgebra and is denoted B.

Theorem 6 B is solvable.

Proof

B = ⟨X, H⟩

In order to prove that B is solvable, we need to show that there exists a k such that D^k B = 0. Recall that D^1 B = [B, B] = {[x, y] | x, y ∈ B}. In
this case we have that :

[x, y] = [λ1H + µ1X, λ2H + µ2X]

= λ_{1}λ_{2}[H, H] + λ_{1}µ_{2}[H, X] + λ_{2}µ_{1}[X, H] + µ_{1}µ_{2}[X, X]

= 0 + 2λ_{1}µ_{2}X − 2λ_{2}µ_{1}X + 0

= λ · X

for some λ ∈ C. Then D^{1}B = {λ · X|λ ∈ C} and thus D^{2}B = 0

Definition 7 Let V be an sl(2)-module and let λ ∈ C. A non-zero element v ∈ V is called primitive of weight λ if it satisfies:

Xv = 0, Hv = λv (1.4)

Theorem 8 A non-zero element v ∈ V is primitive if and only if the line generated by v is stable under the Borel subalgebra B.

Proof

Let v be primitive. Then we have:

Xv = 0, Hv = λv

And thus, the line generated by v is stable under B. Conversely, let C · v be stable under the action of B. Then we have Hv = λv and Xv = μv for some λ, μ ∈ C. Now,

2μv = 2Xv = [H, X]v
= HXv − XHv
= H(μv) − X(λv)
= λμv − μλv = 0

Thus 2μ = 0, which means that Xv = 0 and thus that v is primitive.

Theorem 9 Let V be an sl(2)-module and v ∈ V be a primitive element of weight λ. Denote v_n = Y^n v / n! for n ≥ 0 and v_{−1} = 0. Then we have, for all n ≥ 0:

1. Hv_n = (λ − 2n)v_n
2. Yv_n = (n + 1)v_{n+1}
3. Xv_n = (λ − n + 1)v_{n−1}

Proof

1.

n! · Hv_n = HY^n v
= [H, Y]Y^{n−1}v + YHY^{n−1}v
= −2Y^n v + Y[H, Y]Y^{n−2}v + Y^2 HY^{n−2}v
= · · ·
= −2nY^n v + Y^n Hv
= −2n · n! · v_n + λ · n! · v_n
= (λ − 2n) · n! · v_n
Then, dividing by n!, we have the expression for Hv_n.

2.

Y v_{n} = Y · Y^{n}v/n!

= (n + 1)Y^{n+1}v/(n + 1)!

= (n + 1)v_{n+1}

3. For this case, we proceed by induction on n.

Xv_0 = Xv = 0

because v is primitive, consistent with the convention that v_{−1} = 0. For the induction step:

nXv_n = XYv_{n−1}
= [X, Y]v_{n−1} + YXv_{n−1}
= Hv_{n−1} + (λ − n + 2)Yv_{n−2}
= (λ − 2n + 2 + (λ − n + 2)(n − 1))v_{n−1}
= n(λ − n + 1)v_{n−1}

Then, dividing by n, we obtain the expression we were looking for.

Theorem 10 There are only two different cases:

1. All the vectors (v_n)_{n∈N} are non-zero and linearly independent.

2. There exists an integer m such that λ = m. Moreover, v_i = 0 for all i ≥ m + 1, and v_0, · · · , v_m are linearly independent.

Proof First of all, the non-zero vectors among the (v_n)_n are linearly independent, because the v_n have different weights (they belong to different eigenspaces). Now we have two different cases: either all v_n ≠ 0, or there is an integer m such that v_{m+1} = 0; these are the two cases of the theorem. Furthermore, in the second case, we have v_i = 0 for all i ≥ m + 1 because of the action of Y. The action of X ends the proof:

0 = Xv_{m+1} = (λ − m)v_m

and since v_m is a non-zero vector, λ − m = 0, i.e. λ = m.

Theorem 11 Let V be a finite dimensional sl(2)-module and let v ∈ V be a primitive element.

Then the submodule of V generated by v is an irreducible sl(2)-module.

Proof V is finite dimensional, so of course we are in the second case of the preceding theorem. Let m be the weight of v and denote by W = ⟨v⟩ the submodule of V generated by v.

We claim that W contains v_i for all i between 0 and m. Indeed, W is an sl(2)-module, therefore it is stable under the action of H, X and Y. By theorem (9), the relations (more precisely, the action of X and Y) imply that W contains all of these vectors.

In order to prove that W is irreducible, let W' be a non-zero submodule of W. Because W' is non-zero, it contains one of the v_i. Once again we use theorem (9) to say that it contains v_{i+1}, · · · , v_m by the action of Y and v_{i−1}, · · · , v_0 by the action of X. Hence W' = W, which means that W is irreducible.

1.2.2 Decomposition of finite dimensional modules

In this section, we will see an important result about sl(2)-modules: the fact that every finite dimensional module can be decomposed into a (direct) sum of irreducible modules. Before that, we will describe the structure of an irreducible sl(2)-module of dimension m + 1.

Definition 12 Let m ≥ 0 and let W_m be a vector space with basis v_0, · · · , v_m. We equip W_m with a module structure, modelled on the irreducible module seen in the preceding section, by the following relations:

1. Hv_n = (m − 2n)v_n
2. Yv_n = (n + 1)v_{n+1}
3. Xv_n = (m − n + 1)v_{n−1}

with the convention that v_{−1} = 0 = v_{m+1}.
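The relations of Definition 12 can be encoded as explicit (m + 1) × (m + 1) matrices, and the sl(2) relations (1.3) checked mechanically; a sketch (the helper name wm_module is our own):

```python
import numpy as np

def wm_module(m):
    """Matrices of H, X, Y on W_m in the basis v_0, ..., v_m (Definition 12)."""
    d = m + 1
    H = np.diag([m - 2 * n for n in range(d)]).astype(float)
    X = np.zeros((d, d))
    Y = np.zeros((d, d))
    for n in range(d):
        if n >= 1:
            X[n - 1, n] = m - n + 1   # X v_n = (m - n + 1) v_{n-1}
        if n + 1 < d:
            Y[n + 1, n] = n + 1       # Y v_n = (n + 1) v_{n+1}
    return H, X, Y

for m in range(5):
    H, X, Y = wm_module(m)
    assert np.allclose(H @ X - X @ H, 2 * X)
    assert np.allclose(H @ Y - Y @ H, -2 * Y)
    assert np.allclose(X @ Y - Y @ X, H)
```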

Theorem 13 Wm is an irreducible sl(2)-module of dimension m + 1.

Proof To show this we just need to check the sl(2) relations (1.3).

[H, Y ]v_{n} = (n + 1)Hv_{n+1}− (m − 2n)Y v_{n}

= (n + 1)(m − 2n − 2)v_{n+1}− (m − 2n)(n + 1)v_{n+1}

= −2(n + 1)v_{n+1}

= −2Y vn

[H, X] v_{n} = (m − n + 1)Hv_{n−1}− (m − 2n)Xv_{n}

= (m − n + 1)(m − 2n + 2)v_{n−1}− (m − 2n)(m − n + 1)v_{n−1}

= 2(m − n + 1)v_{n−1}

= 2Xv_{n}

[X, Y]v_n = (n + 1)Xv_{n+1} − (m − n + 1)Yv_{n−1}
= (n + 1)(m − n)v_n − (m − n + 1)n v_n
= (m − 2n)v_n
= Hv_n

And thus the relations hold, so W_m is an sl(2)-module of dimension m + 1. Irreducibility follows as in theorem 11, since v_0 is a primitive element which generates all of W_m.

Theorem 14 Every irreducible sl(2)-module of dimension m + 1 is isomorphic to W_m.

Proof Let V be an irreducible sl(2)-module of dimension m + 1. We claim that there exists a primitive element in V. Indeed, let x ∈ V be an eigenvector of H and consider the sequence x, Xx, X^2 x, · · · As we proved in theorem (4), Xx (and likewise X^2 x, · · ·) is again an eigenvector of H with a different eigenvalue. If all these vectors were non-zero, we would have an infinite sequence of linearly independent vectors in a finite dimensional module. So of course one of the X^k x is the null vector. Take the smallest k such that X^k x = 0; then v := X^{k−1}x is our primitive element!

Theorem (10) shows that the weight of v is an integer m'. Let W be the submodule of V generated by v. We know from the preceding section that dim W = m' + 1 and that W is irreducible. But V is irreducible too, and W is a non-zero submodule of V. Then of course V = W, and hence m' = m. The relations in the definition then show that V is isomorphic to W_m.

Example 3 1. W_0 is the module with only one basis vector v_0 and the following relations:

Hv_0 = 0, Yv_0 = 0, Xv_0 = 0

It is the trivial sl(2)-module.

2. W_1 is the irreducible module of dimension 2. If we denote the basis vectors of W_1 by w_1, w_{−1} instead of v_0, v_1 respectively, it is easier to write the action of H. Let V^1 and V^{−1} be the corresponding eigenspaces. Then we have:

Hw_1 = w_1,  Hw_{−1} = −w_{−1}
Xw_1 = 0,   Xw_{−1} = w_1
Yw_1 = w_{−1},  Yw_{−1} = 0

Then we can picture the action of sl(2) on W_1 through the eigenspaces: X maps V^{−1} to V^1, Y maps V^1 to V^{−1}, and H preserves each eigenspace.

To conclude, we can say that W_1 is isomorphic to C^2 seen as an sl(2)-module. Let us recall that if v ∈ C^2 is an eigenvector of H with eigenvalue λ, then Xv (resp. Yv) is again an eigenvector with eigenvalue λ + 2 (resp. λ − 2), provided it is non-zero.

3. sl(2), viewed as an sl(2)-module via the adjoint representation, is isomorphic to W_2. We will detail this example a little further in the following paragraph.

We recall that sl(2)-modules and representations of sl(2) are equivalent notions, which justifies that sl(2) can be seen as an sl(2)-module. We define the adjoint representation as follows:

ad : sl(2) → gl(sl(2)),  ad(g)(h) := [g, h]

The adjoint representation is indeed a representation, through the following equality, which shows that ad preserves the bracket operation:

[ad(x), ad(y)](z) = ad(x) ◦ ad(y)(z) − ad(y) ◦ ad(x)(z)

= ad(x)([y, z]) − ad(y)([x, z])

= [x, [y, z]] + [y, [z, x]]

= −[z, [x, y]] by the Jacobi identity

= [[x, y], z]

= ad([x, y])(z)

It is enough to consider the action of H to determine the isomorphism φ between sl(2) and W_2. Indeed, on the one hand, we have the relations:

ad(H)(H) = [H, H] = 0
ad(H)(X) = 2X
ad(H)(Y) = −2Y

and thus, in the basis (H, X, Y), ad(H) is the diagonal matrix diag(0, 2, −2). On the other hand, we have the action of H on W_2:

Hv_0 = 2v_0
Hv_1 = 0
Hv_2 = −2v_2

and thus φ maps H ↦ v_1, X ↦ v_0 and Y ↦ v_2.

We come now to the main result about the complete reducibility of sl(2)- modules.

Theorem 15 Every finite dimensional sl(2)-module V can be written as a direct sum of irreducible modules. In other words, for each finite dimensional module V there exist integers (m_i)_{i∈I} such that

V = ⊕_{i∈I} W_{m_i}

Proof Let V be a finite dimensional sl(2)-module.

We can decompose V into generalized eigenspaces for the Casimir operator C. Then we have:

V = ⊕_{θ∈C} V_θ

where

V_θ = {v ∈ V | ∃k ∈ N s. th. (C − θ)^k v = 0}

These spaces are all sl(2)-modules. Indeed, C commutes with every element of sl(2), which means that V_θ is stable under the action of sl(2), as we can see:

Let v ∈ Vθ and Z ∈ sl(2)

(C − θ)^{k}Zv = Z(C − θ)^{k}v

= Z · 0

= 0

⇒ Zv ∈ V_{θ}

and the relations are satisfied because V_θ is a subspace of V.

Now we can say that an irreducible submodule of V_θ has dimension √θ. In fact, let v be a primitive element in V_θ; then

(C − θ)v = ((H + 1)^2 + 4YX − θ)v
= (H + 1)^2 v − θv
= 0

Thus (H + 1)v = √θ v, i.e. Hv = (√θ − 1)v.

Then v is of weight √θ − 1, which means that ⟨v⟩ ≅ W_{√θ−1}, of dimension √θ. For reasons of notation we will write k := √θ − 1 (or θ = (k + 1)^2).

Then we just need to prove that we can decompose V_θ into irreducible modules; indeed, V is a sum of the V_θ, so if we can write each of them as a sum of irreducible modules, the proof will be finished. We proceed by induction on the number n of copies of W_k in V_θ.

• Case n = 2

Let v be a primitive element and set W := ⟨v⟩. Let x be a primitive element of V_θ/W. We have W ≅ V_θ/W ≅ W_k. We define a basis of W (resp. V_θ/W) by the sequence (v_0, Yv_0, · · · , Y^k v_0) = (v_0, v_1, · · · , v_k) (resp. (x_0, Yx_0, · · · , Y^k x_0) = (x_0, x_1, · · · , x_k)). Then we write [C]_{k−2i} for the matrix of C on the eigenspace V^{k−2i} = ⟨v_i, x_i⟩. We just need to show that for each i ∈ {0, . . . , k}, [C]_{k−2i} is diagonalizable. Because of our definition, the matrix [Y^i] is the identity for each i. Then we have:

[Y^k C] = [Y^k] · [C]_k = [C]_k

and, because C and Y commute,

[Y^k C] = [CY^k] = [C]_{−k}[Y^k] = [C]_{−k}

But in the special cases of [C]_k and [C]_{−k}, we can make the matrices explicit. Indeed, we have:

[C]_k = [(H + 1)^2 + 4YX]_k = [(H + 1)^2]_k
[C]_{−k} = [(H − 1)^2 + 4XY]_{−k} = [(H − 1)^2]_{−k}

We claim that [H]_λ = [H]_{λ+2} − 2^{5}. Then we can conclude:

[C]_{−k} = [(H − 1)^2]_{−k}
= [(H − 2k − 1)^2]_k
= [C]_k
= [(H + 1)^2]_k

Thus [(H − 2k − 1)^2]_k = [(H + 1)^2]_k. Expanding both sides and cancelling [H^2]_k gives

[−2(2k + 1)H + (2k + 1)^2]_k = [2H + 1]_k
[(4k + 4)H]_k = (4k^2 + 4k) · id
[H]_k = k · id  (because k ≠ −1)

⟹ [C]_k = [(H + 1)^2]_k
⟹ [C]_k = (k + 1)^2 · id
⟹ [C]_k = θ · id

5: Just write the equality HY = YH − 2Y in terms of matrices.

Then C acts as the scalar θ, and thus V_θ ≅ W ⊕ V_θ/W, i.e. V_θ ≅ W_k ⊕ W_k.

• Case n ≥ 3

Let W be a submodule of V_θ containing (n − 1) copies of W_k, so that V_θ/W ≅ W_k. By the induction hypothesis we can write W as:

W = W_k ⊕ · · · ⊕ W_k  ((n − 1) times)

Then we can repeat the first step (n − 1) times with the quotient module and the copies, and we obtain that V_θ/W splits off from the other modules; thus we have the decomposition of V_θ.

2 Application

2.1 About the tensor product

We have seen in the theoretical part that we can decompose every finite dimensional sl(2)-module into a (direct) sum of irreducible modules W_i (of dimension i + 1). We will illustrate this result by an example: the general decomposition of the tensor product of two irreducible modules.

2.1.1 The general answer

Theorem 16 Let n and m be two integers such that n ≥ m. Let W_n and W_m be the irreducible sl(2)-modules and V = W_n ⊗ W_m the tensor product associated with these two modules. Then

W_m ⊗ W_n ≅ W_{n+m} ⊕ W_{n+m−2} ⊕ · · · ⊕ W_{n−m} = ⊕_{k=0}^{m} W_{n+m−2k}

Proof First of all, let (v_0, . . . , v_n) (resp. (w_0, . . . , w_m)) be a basis of W_n (resp. W_m). Let us recall that W_i is the sl(2)-module of dimension i + 1 with:

• basis (w_0, ..., w_i)
• H(w_j) = (i − 2j) · w_j
• X(w_j) = (i − j + 1) · w_{j−1}
• Y(w_j) = (j + 1) · w_{j+1}

We know from the theory that the eigenvalues of H on the sl(2)-module W_n are the integers −n, −n + 2, . . . , n − 2, n (with the v_i as eigenvectors).

We can exhibit a basis of V = W_n ⊗ W_m:

(v_i ⊗ w_j) for all (i, j) such that 0 ≤ i ≤ n and 0 ≤ j ≤ m

We can first look at the action of H on v_i ⊗ w_j:

H(v_{i}⊗ w_{j}) = H(v_{i}) ⊗ w_{j} + v_{i}⊗ H(w_{j})

= (n − 2i) · v_{i}⊗ w_{j}+ (m − 2j) · v_{i}⊗ w_{j}

= (n + m − 2(i + j)) · vi⊗ wj

Then we see that v_i ⊗ w_j is again an eigenvector of H, with eigenvalue the sum of the eigenvalues of v_i and w_j. Thus the eigenvalues of W_n ⊗ W_m lie in the set {−n − m, −n − m + 2, . . . , n + m}. We just need to compute the multiplicity of each eigenvalue by a dimension argument^{1}. Recall that n ≥ m. The multiplicity of the eigenvalue n + m − 2k for k ∈ {0, . . . , n + m} is:

k + 1 if k ≤ m,  m + 1 if m ≤ k ≤ n,  n + m − k + 1 if k ≥ n

To obtain this result, we just count the different basis vectors v_i ⊗ w_j which have this eigenvalue, i.e. those with i + j = k. Thus, by induction, one shows that W_n ⊗ W_m is isomorphic to W_{n+m} ⊕ W_{n+m−2} ⊕ . . . ⊕ W_{n−m}. We can notice that this isomorphism of course preserves the dimension:

(n + 1) · (m + 1) = Σ_{k=0}^{m} (n + m − 2k + 1)

We will not make the isomorphism explicit, nor describe how to obtain this decomposition.

2.1.2 An example

Let n = 2 and m = 2. Then we have W_2 ⊗ W_2 ≅ W_4 ⊕ W_2 ⊕ W_0.

In this case we can describe the isomorphism between these sl(2)-modules explicitly. We just need to find an element with eigenvalue 4 (or −4) and to look at the module generated by this element, in the sense that we look at the action of H, X and Y. The module we obtain is clearly isomorphic to W_4, and the complement of this module is isomorphic to W_2 ⊕ W_0. An eigenvector w such that H(w) = 4 · w is, for example, w_0 ⊗ w_0 (with the same notation as before). Then ⟨w⟩ ≅ W_4. A generator of the W_2 component is, for example, w_0 ⊗ w_1 − w_1 ⊗ w_0.
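This decomposition can be read off numerically from the multiplicities of the H-eigenvalues on W_2 ⊗ W_2; the following self-contained sketch uses numpy's Kronecker product for the action of H on the tensor product:

```python
import numpy as np
from collections import Counter

def h_matrix(m):
    """H acting on W_m: H v_n = (m - 2n) v_n."""
    return np.diag([m - 2 * n for n in range(m + 1)]).astype(float)

H2 = h_matrix(2)
I3 = np.eye(3)
# H acts on a tensor product by the Leibniz rule: H (x) I + I (x) H
H_tensor = np.kron(H2, I3) + np.kron(I3, H2)

mult = Counter(int(round(e)) for e in np.linalg.eigvalsh(H_tensor))
# multiplicities 4:1, 2:2, 0:3, -2:2, -4:1, i.e. W_4 + W_2 + W_0
assert mult == {4: 1, 2: 2, 0: 3, -2: 2, -4: 1}
```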

2.1.3 Generalization

In the first section, we have seen the decomposition of the tensor product of two irreducible sl(2)-modules. For arbitrary finite dimensional modules, the decomposition of their tensor product follows directly from theorem 16 together with the additivity of the tensor product, i.e. in other words we can write

1: We have a basis of eigenvectors.

explicitly:

V ⊗ W = (⊕_{i=0}^{n} W_{l_i}) ⊗ (⊕_{j=0}^{m} W_{k_j})
= ⊕_{i,j} W_{l_i} ⊗ W_{k_j}
= ⊕_{i,j} ⊕_{p=0}^{τ(i,j)} W_{|l_i − k_j| + 2p}

where τ(i, j) := (l_i + k_j − |l_i − k_j|)/2 = min(l_i, k_j).

All the sums in the preceding calculation are of course direct, as we have seen in theorem 16.
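A quick dimension count supports this formula: for any highest weights l and k, the summands W_{|l−k|+2p}, p = 0, …, τ = min(l, k), have total dimension (l + 1)(k + 1). A pure-Python sketch (the helper name summands is our own):

```python
# dim W_j = j + 1; check that the summands of W_l (x) W_k
# given by the formula above have total dimension (l + 1)(k + 1)
def summands(l, k):
    tau = min(l, k)  # tau(i, j) = (l + k - |l - k|) / 2
    return [abs(l - k) + 2 * p for p in range(tau + 1)]

for l in range(8):
    for k in range(8):
        assert sum(j + 1 for j in summands(l, k)) == (l + 1) * (k + 1)
```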

2.2 An example of an infinite dimensional sl(2)-module

2.2.1 Construction of an infinite dimensional sl(2)-module

We will now define a vector space with some relations. In this section, we will just show that this space is an sl(2)-module.

Definition 17 Let V(λ, c) be an infinite dimensional vector space with basis (v_i)_{i∈Z}. We define on V(λ, c) the following relations:

• H(v_i) = (λ + 2i) · v_i
• Y(v_i) = v_{i−1}
• X(v_i) = α(i) · v_{i+1}, where α(i) := (c − (λ + 2i + 1)^2)/4

The values of λ and c are complex; they are just elements of our field.
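These relations can be checked numerically on basis vectors. In the sketch below (our own illustration; the parameter values are arbitrary), a vector of V(λ, c) is modelled as a dictionary mapping an index i to the coefficient of v_i, and the relations (1.3), together with C(v_i) = c · v_i, are verified:

```python
# A sketch of V(lambda, c): vectors are dicts {index j: coefficient of v_j},
# and H, X, Y act by the relations of Definition 17.
lam, c = 0.7, 3.1  # arbitrary sample parameters

def alpha(j):
    return (c - (lam + 2 * j + 1)**2) / 4

def H(v): return {j: (lam + 2 * j) * a for j, a in v.items()}
def Y(v): return {j - 1: a for j, a in v.items()}
def X(v): return {j + 1: alpha(j) * a for j, a in v.items()}

def sub(v, w):
    return {j: v.get(j, 0) - w.get(j, 0) for j in set(v) | set(w)}

def close(v, w):
    return all(abs(a) < 1e-9 for a in sub(v, w).values())

for i in range(-3, 4):
    vi = {i: 1.0}
    assert close(sub(H(X(vi)), X(H(vi))), {i + 1: 2 * alpha(i)})  # [H,X] = 2X
    assert close(sub(H(Y(vi)), Y(H(vi))), {i - 1: -2.0})          # [H,Y] = -2Y
    assert close(sub(X(Y(vi)), Y(X(vi))), H(vi))                  # [X,Y] = H
    # Casimir C = (H-1)^2 + 4XY acts by the scalar c
    assert abs((lam + 2 * i - 1)**2 + 4 * alpha(i - 1) - c) < 1e-9
```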

Theorem 18 1. V (λ, c) is an sl(2)-module.

2. C(v_i) = c · v_i, where C is the Casimir operator.

Proof

1. To prove that we have an sl(2)-module, we just need to check that the sl(2)-relations (1.3) hold.

HY (v_{i}) − Y H(v_{i}) = H(v_{i−1}) − Y ((λ + 2i) · v_{i})

= (λ + 2i − 2) · v_{i−1}− (λ + 2i) · v_{i−1}

= −2 · v_{i−1}

= −2Y (v_{i})

HX(v_{i}) − XH(v_{i}) = H(α(i) · v_{i+1}) − X((λ + 2i) · v_{i})

= (λ + 2i + 2)α(i) · v_{i+1}− α(i)(λ + 2i) · v_{i+1}

= 2α(i) · v_{i+1}

= 2X(vi)

XY(v_i) − YX(v_i) = X(v_{i−1}) − Y(α(i) · v_{i+1})
= α(i − 1) · v_i − α(i) · v_i
= [α(i − 1) − α(i)] · v_i
= ((λ + 2i + 1)^2 − (λ + 2i − 1)^2)/4 · v_i
= (2(2λ + 4i)/4) · v_i
= (λ + 2i) · v_i
= H(v_i)

Thus we have [H, X] = 2X, [H, Y] = −2Y and [X, Y] = H, which means that V(λ, c) is an sl(2)-module.

2.

C(v_{i}) = ((H − 1)^{2}+ 4XY )(v_{i})

= (H − 1)(H − 1)(v_{i}) + 4XY (v_{i})

= (H − 1)((λ + 2i − 1) · v_{i}) + 4X(v_{i−1})

= (λ + 2i − 1)^{2} · v_{i}+ 4α(i − 1)(v_{i})

= (λ + 2i − 1)^{2} · vi+ (c − (λ + 2i − 1)^{2})vi

= c · v_{i}

2.2.2 Study of this module

The aim of this section is to determine whether V(λ, c) is irreducible or not; of course this will depend on the values of λ and c.

First of all, it is easy to show that the eigenspaces of H are the subspaces generated by each v_i, and that they are all one-dimensional. For notation, we will write V_i = ⟨v_i⟩ = V^{λ+2i}, the eigenspace with eigenvalue λ + 2i. The action of C on these subspaces is diagonal in the sense that C(V_i) ⊂ V_i^{2}. Similarly, Y carries V_i into V_{i−1}, and the action of Y is bijective for each i (recall that Y(v_i) = v_{i−1}). The only question is how X acts.

Generally, X acts bijectively from V_i to V_{i+1}; we just need α(i) ≠ 0. If α(i) ≠ 0 for every i, then the action of X is bijective everywhere, which means that V(λ, c) is irreducible. So we need to know the conditions on λ and c under which α(k) = 0 for some k:

α(k) = 0 ⇔ (c − (λ + 2k + 1)^2)/4 = 0 ⇔ c = (λ + 2k + 1)^2

So we have a relation between c and λ. For each value of c there are infinitely many values of λ for which there is a k ∈ Z such that X(v_k) = 0. But because k is an integer, there are some special cases where we can find two roots, and therefore two vectors v_{k_1} and v_{k_2} such that X(v_{k_i}) = 0 for i = 1, 2. We can summarize the different cases:

• c is the square of an integer, i.e. there exists n ∈ N* such that c = n^2. Then if λ is of the form n − 1 − 2k for some k ∈ Z, we find two vectors v_k and v_{k−n} as announced, i.e. X(v_k) = 0 and X(v_{k−n}) = 0.

• c is a non-zero complex number which is not the square of an integer. Writing c = |c|e^{iθ}, if λ = √|c| e^{i(θ/2+π)} − 1 − 2k or λ = √|c| e^{iθ/2} − 1 − 2k for some k ∈ Z, we have X(v_k) = 0 and X(v_i) ≠ 0 for all i ≠ k.

• For all other possibilities of c and λ, we have X(v_i) ≠ 0 for all i ∈ Z.
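The first case of the list can be tested numerically: for c = n^2 and λ = n − 1 − 2k, the coefficient α vanishes exactly at the indices k and k − n. A sketch (the sample values n = 3, k = 1 are an arbitrary choice of ours):

```python
def alpha(i, lam, c):
    return (c - (lam + 2 * i + 1)**2) / 4

n, k = 3, 1
c = n**2             # c is the square of an integer
lam = n - 1 - 2 * k  # the first case of the list above

zeros = [i for i in range(-20, 20) if alpha(i, lam, c) == 0]
assert zeros == [k - n, k]  # exactly the two vectors v_{k-n} and v_k
```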

In the first case, V(λ, c) is not irreducible. If we look at the submodule W generated by any vector v_i with i ≤ k − n, we find that W is equal to ⟨v_i | i ≤ k − n⟩; it is stable under the action of sl(2) because X(v_{k−n}) = 0. Then we look at the quotient V' := V(λ, c)/W. The subspace W' of V' generated by the images of the vectors v_i with k − n < i ≤ k is a submodule too, for the same reason, and the quotient V'/W' is again stable under the action of sl(2) and irreducible. Then V(λ, c) has three irreducible subquotients, of which two are infinite dimensional and one is finite dimensional (namely W', which is isomorphic to W_{n−1}, of dimension n).

2: We have this inclusion because C acts by multiplication by the complex number c.

In the second case, we only have one vector v_k such that X(v_k) = 0. The idea is exactly the same as above: we find that V(λ, c) has two (infinite dimensional) irreducible subquotients, W and W', where W = ⟨v_i | i ≤ k⟩ and W' = V(λ, c)/W.

In the last case, as we said in the introduction of this section, there is no vector such that X(v_i) = 0; hence the action of X is bijective, which means that V(λ, c) is irreducible.

2.2.3 Tensor product with W1

In this section, we will study the tensor product of V(λ, c) and W_1, the irreducible sl(2)-module of dimension 2. The aim is to understand the structure of this representation. Let us recall that W_1 was described earlier^{3} and that the action of sl(2) on W_1 can be summarized as:

      w_{−1}     w_1
H    −w_{−1}     w_1
X     w_1        0
Y     0          w_{−1}

First, we will look at the action of H, X and Y on the two vectors of rank
i, vi⊗ w_{1} and vi⊗ w_{−1}:

H(v_{i}⊗ w_{1}) = H(v_{i}) ⊗ w_{1}+ v_{i}⊗ H(w_{1})

= (λ + 2i + 1) · v_{i}⊗ w_{1}

H(vi⊗ w−1) = H(vi) ⊗ w−1+ vi⊗ H(w−1)

= (λ + 2i − 1) · v_{i}⊗ w−1

X(v_{i}⊗ w_{1}) = X(v_{i}) ⊗ w_{1}+ v_{i}⊗ X(w_{1})

= α(i) · v_{i+1}⊗ w_{1}

X(v_{i}⊗ w−1) = X(v_{i}) ⊗ w−1+ v_{i}⊗ X(w−1)

= α(i) · v_{i+1}⊗ w−1+ v_{i}⊗ w_{1}
Y (v_{i}⊗ w_{1}) = Y (v_{i}) ⊗ w_{1}+ v_{i}⊗ Y (w_{1})

= vi−1⊗ w1 + vi⊗ w−1

Y (v_{i}⊗ w−1) = Y (v_{i}) ⊗ w−1+ v_{i}⊗ Y (w−1)

= v_{i−1}⊗ w−1

We define on V the spaces V^{λ+2i+1}, the eigenspaces of H. As we can see from the above relations, the spaces V^{λ+2i+1}, i ∈ Z, are of dimension two. Indeed,

V^{λ+2i+1} = ⟨v_i ⊗ w_1, v_{i+1} ⊗ w_{−1}⟩

3: See example 3, part 2.

For reasons of notation, for each i we will write (and order) the two basis vectors of V^{λ+2i+1} as:

w_{i,1} := v_i ⊗ w_1
w_{i,2} := v_{i+1} ⊗ w_{−1}

Then we can rewrite the action of X and Y on V^{λ+2i+1}:

X : V^{λ+2i+1} → V^{λ+2i+3}
w_{i,1} ↦ α(i) · w_{i+1,1}
w_{i,2} ↦ α(i + 1) · w_{i+1,2} + w_{i+1,1}

Y : V^{λ+2i+1} → V^{λ+2i−1}
w_{i,1} ↦ w_{i−1,1} + w_{i−1,2}
w_{i,2} ↦ w_{i−1,2}

Hence, in terms of matrices (in the ordered bases (w_{i,1}, w_{i,2})), we have:

X = ( α(i)   1      )        Y = ( 1  0 )
    ( 0      α(i+1) )            ( 1  1 )

Then we can sum up the situation as a chain of weight spaces

· · · ⇄ V^{λ+2i−1} ⇄ V^{λ+2i+1} ⇄ V^{λ+2i+3} ⇄ · · ·

where X moves one step to the right, Y one step to the left, and H preserves each space.
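The 2 × 2 block description can be sanity-checked numerically: composing the blocks gives back the relation [X, Y] = H on each weight space, and the block of the Casimir C = (H + 1)^2 + 4YX has eigenvalues (√c ± 1)^2, depending only on c, anticipating theorem 19 below. In this sketch the parameter values are arbitrary choices of ours:

```python
import numpy as np

lam, c = 0.3, 2.7  # arbitrary sample parameters

def alpha(i):
    return (c - (lam + 2 * i + 1)**2) / 4

def Xb(i):   # block of X: V^{lam+2i+1} -> V^{lam+2i+3}
    return np.array([[alpha(i), 1.0], [0.0, alpha(i + 1)]])

def Yb():    # block of Y: V^{lam+2i+1} -> V^{lam+2i-1}
    return np.array([[1.0, 0.0], [1.0, 1.0]])

for i in range(-3, 4):
    # [X, Y] = H on the block: XY - YX = (lam + 2i + 1) * Id
    comm = Xb(i - 1) @ Yb() - Yb() @ Xb(i)
    assert np.allclose(comm, (lam + 2 * i + 1) * np.eye(2))
    # Casimir block: (H+1)^2 + 4YX, with H = (lam + 2i + 1) * Id here
    C = (lam + 2 * i + 2)**2 * np.eye(2) + 4 * Yb() @ Xb(i)
    ev = np.sort(np.linalg.eigvals(C).real)
    expected = np.sort([(np.sqrt(c) - 1)**2, (np.sqrt(c) + 1)**2])
    assert np.allclose(ev, expected)
```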

2.2.4 Action of C on this module

Now in this section we will study the action of the Casimir operator C on the module V(λ, c) ⊗ W_1. First of all, we have the following result:

Theorem 19 1. For each i ∈ Z, C ∈ End(V^{λ+2i+1}), i.e. C acts linearly from V^{λ+2i+1} to V^{λ+2i+1}.

2. If c ≠ 0, then the action of C is bijective.

3. Moreover, if c ≠ 0, then the eigenvalues of C on V^{λ+2i+1} depend only on c (and not on λ or i).

Proof