Representations of SO(3) in C[x, y, z]

OTTO SELLERSTAM

Degree project in technology, first cycle, 15 credits
KTH Royal Institute of Technology, School of Engineering Sciences
Stockholm, Sweden 2019

Abstract

This paper will present a concrete study of representations of the special orthogonal group, SO(3), a group of great importance in physics. Specifically, we will study a natural representation of SO(3) in the space of polynomials in three variables with complex coefficients, C[x, y, z]. We will find that this special case provides all irreducible representations of SO(3), and also present some corollaries about spherical harmonics. Some preparatory theory regarding abstract algebra, linear algebra, topology and measure theory will also be presented.

Sammanfattning

Denna rapport kommer att redogöra för en konkret undersökning av representationer av den speciella ortogonala gruppen, SO(3), en grupp med viktiga tillämpningar inom fysik. Mer specifikt kommer vi att studera en naturlig representation av SO(3) i rummet av polynom i tre variabler med komplexa koefficienter, C[x, y, z]. Vi kommer att se att alla irreducibla representationer av SO(3) dyker upp ur detta specialfall, samt presentera några följdsatser angående klotytefunktioner. Förberedande teori angående abstrakt algebra, linjär algebra, topologi och måtteori kommer även att redovisas.

Contents

1 Introduction
2 Some Algebraic Structures
3 Linear Algebra
  3.1 Basic concepts
  3.2 Multilinear algebra
4 Topology
  4.1 Point-set topology
    4.1.1 Topological spaces and continuity
    4.1.2 Compactness
  4.2 Topological groups
5 Measure Theory
  5.1 Construction of the Lebesgue integral
    5.1.1 Lᵖ-spaces
  5.2 Borel algebras and the Haar measure
6 Representation Theory
  6.1 Basic definitions
  6.2 Some important theorems
  6.3 Character theory
  6.4 Canonical decomposition of a representation
  6.5 From finite to compact groups
7 The 3D Rotation Group SO(3)
  7.1 Parameterization
  7.2 Conjugacy classes of SO(3)
  7.3 The Haar measure on SO(3)
8 Representations of SO(3)
  8.1 Representations in C[x, y, z]
    8.1.1 Introduction
    8.1.2 The natural representation
    8.1.3 Characters of SO(3)
    8.1.4 An inner product on C[x, y, z]
  8.2 Tensor products of irreducible representations

1 Introduction

The goal of representation theory is to reduce the study of complicated algebraic structures to the study of vector spaces; elements are represented as bijective linear operators on a vector space, which often helps to concretize the underlying algebraic structure. Some of the most common structures to study using representation theory include groups, associative algebras and Lie algebras.

In this paper we will discuss an analysis of the 3D rotation group from a representation-theoretic perspective. We will go through some fundamental concepts of abstract and linear algebra, which will be used to define representations and to present some basic ideas from the representation theory of finite groups. SO(3) is, however, not a finite group, so we will also need to discuss a framework for translating the ideas from the finite case to the infinite case. To discuss this framework, we also need to develop some concepts from topology and measure theory, since notions such as compactness and integration with respect to a measure are required.


2 Some Algebraic Structures

We start off by defining some basic algebraic structures, together with some examples.

Definition 1 (Group). A group is a set G equipped with a binary operation ⋆ : G × G → G which satisfies the following axioms for all a, b, c ∈ G.

1. Associativity: a ⋆ (b ⋆ c) = (a ⋆ b) ⋆ c

2. Identity: there exists an element e ∈ G such that a ⋆ e = e ⋆ a = a

3. Inverse: there exists an element a⁻¹ ∈ G such that a⁻¹ ⋆ a = a ⋆ a⁻¹ = e

When we want to emphasize both the underlying set G and the group operation ⋆, we will write (G, ⋆). A group where the elements commute, i.e. a ⋆ b = b ⋆ a for all a, b ∈ G, is called an abelian group, named after Niels Henrik Abel.

Example 1. We provide some examples of groups. Notice that the first two examples are abelian, while the matrix groups in the last two examples are in general not abelian.

1. The integers Z under addition, (Z, +), form a group. Note however that the integers under multiplication, (Z, ·), do not, since the multiplicative inverse of for example 2 is 1/2, which is not an integer.

2. The rational numbers without the number 0 form a group under multiplication, denoted (Q − {0}, ·).

3. The general linear group GL(n), consisting of all invertible n × n matrices with real entries, is a group under matrix multiplication. We will later on consider this group to be a subset of R^(n²).

4. The orthogonal group O(n) = {A ∈ GL(n) : AᵀA = AAᵀ = I} ⊂ GL(n) and the special orthogonal group SO(n) = {A ∈ O(n) : det(A) = 1} ⊂ GL(n) form subgroups of the general linear group, i.e. they are subsets of GL(n) that also form groups; a small numerical check of these membership conditions follows below.
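As a quick sanity check of the defining conditions of SO(3), the following sketch verifies numerically that a rotation about the z-axis is special orthogonal; the angle, the scaling test, and the tolerance are arbitrary choices made only for illustration.

```python
import numpy as np

def in_SO3(A, tol=1e-10):
    """Check the defining conditions of SO(3): A^T A = I and det(A) = 1."""
    orthogonal = np.allclose(A.T @ A, np.eye(3), atol=tol)
    special = np.isclose(np.linalg.det(A), 1.0, atol=tol)
    return orthogonal and special

theta = 0.7  # arbitrary rotation angle about the z-axis
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

print(in_SO3(Rz))          # True: rotations are special orthogonal
print(in_SO3(2.0 * Rz))    # False: scaling breaks orthogonality
```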

Next we define a field, a structure which is a bit more complex than a group, since its definition involves two binary operations instead of one, together with more axioms.

Definition 2 (Field). A field is a set F together with two binary operations, addition + : F × F → F and multiplication · : F × F → F, that satisfy the following axioms for all a, b, c ∈ F.

1. Commutativity of addition: a + b = b + a

2. Associativity of addition: (a + b) + c = a + (b + c)

3. Additive identity: there exists an element 0 ∈ F such that a + 0 = a

4. Additive inverse: there exists an element −a ∈ F such that a + (−a) = 0

5. Commutativity of multiplication: a · b = b · a

6. Associativity of multiplication: (a · b) · c = a · (b · c)

7. Multiplicative identity: there exists an element 1 ∈ F such that 1 · a = a

8. Multiplicative inverse: for a ≠ 0, there exists an element denoted a⁻¹ such that a⁻¹ · a = 1

9. Distributivity: a · (b + c) = (a · b) + (a · c)

Notice that this definition can be summarized by saying that a field is an abelian group under addition with additive identity denoted 0, that the nonzero elements form an abelian group under multiplication with multiplicative identity denoted 1, and that multiplication distributes over addition.

Example 2. Next, we provide some examples of fields.

1. The rational numbers Q, the real numbers R, and the complex numbers C, together with ordinary addition and multiplication, each form a field.

The reader should be comfortable with the notion of a vector space, but we include the definition for the sake of completeness.

Definition 3 (Vector space). A vector space over a field F is a set V together with two binary operations, addition + : V × V → V and scalar multiplication · : F × V → V, that satisfy the following axioms for all x, y, z ∈ V and a, b ∈ F.¹

1. Commutativity of addition: x + y = y + x

2. Associativity of addition: (x + y) + z = x + (y + z)

3. Additive identity: there exists a vector 0 such that 0 + x = x

4. Additive inverse: there exists a vector −x such that −x + x = 0

5. First distributive law: (a + b) · x = a · x + b · x

6. Second distributive law: a · (x + y) = a · x + a · y

7. Multiplicative identity: 1 · x = x, where 1 denotes the multiplicative identity in F

8. Relation to ordinary multiplication: (ab) · x = a · (b · x)

Example 3. Again, we include some examples, also for the sake of completeness.

1. One of the simplest examples of a vector space is R³, the space in which we live, consisting of 3-tuples with vector addition and scalar multiplication defined coordinate-wise.

2. More generally, by letting F be an arbitrary field we can construct the set Fⁿ, where n is some positive integer, of n-tuples of elements of F.

Next we define a special kind of vector space with additional structure.

Definition 4. Let A be a vector space over a field F equipped with an additional binary operation × : A × A → A. We call A an algebra if it satisfies the following for all x, y, z ∈ A and a, b ∈ F.

1. Right distributivity: (x + y) × z = x × z + y × z

2. Left distributivity: x × (y + z) = x × y + x × z

3. Compatibility with scalars: (ax) × (by) = (ab)(x × y)

A substructure is a subset of some algebraic structure that is itself the same type of structure. Examples of substructures are

1. Subgroups: a subset of a group that is a group

2. Subfields: a subset of a field that is a field

3. Subspaces: a subset of a vector space that is a vector space

4. Subalgebras: a subset of an algebra that is an algebra

For algebras, we also have a stricter class of substructures: a left ideal of an algebra A is a subalgebra I ⊂ A such that aI = {ax : x ∈ I} ⊂ I for all a ∈ A. Analogously, a right ideal of A is a subalgebra I ⊂ A such that Ia = {xa : x ∈ I} ⊂ I for all a ∈ A. A two-sided ideal is an ideal that is both a left and a right ideal. For an algebra A we need an ideal I, and not merely a subalgebra, in order for the quotient algebra A/I to be well defined.

A central concept in the study of algebraic structures is the concept of homomorphisms. A homomorphism is a structure-preserving map between two algebraic structures of the same type, in the sense that it preserves the operations of the structures. We will define group homomorphisms and vector space homomorphisms, but it should be fairly clear how the concept carries over to the case of fields and algebras.

¹We will only consider the case where the field F is equal to the complex numbers C.

Definition 5 (Group homomorphism). Let (G, ⋆) and (H, ∗) be groups. A function f : G → H is called a group homomorphism if

f(a ⋆ b) = f(a) ∗ f(b) for all a, b ∈ G.

Definition 6 (Vector space homomorphism). Let V and W be two vector spaces over the same field F. A function f : V → W is called a vector space homomorphism (or linear map) if

f(u + v) = f(u) + f(v) and f(cu) = c f(u)

for all u, v ∈ V and c ∈ F.

The set of all homomorphisms between two algebraic structures A and B is denoted Hom(A, B). A bijective homomorphism is called an isomorphism, and the set of all isomorphisms from A to B is denoted Iso(A, B). A homomorphism from an object to itself is called an endomorphism, and the set of all endomorphisms on A is denoted End(A). A bijective endomorphism is called an automorphism, and the set of all automorphisms on A is denoted Aut(A). In conclusion, we have that

1. Hom(A, B) = {f : A → B : f is a homomorphism}

2. Iso(A, B) = {f ∈ Hom(A, B) : f is bijective}

3. End(A) = Hom(A, A)

4. Aut(A) = Iso(A, A).

If there exists an isomorphism between two algebraic structures A and B, we say that they are isomorphic, and denote this by A ≅ B. In some sense, two isomorphic structures are the same: even if they look different, they behave in the same way. As an analogy, a regular chess board and a themed one are practically the same. They represent the same game and behave in the same way, even though they might look different. A Monopoly board, however, does not behave in the same way as a chess board.


3 Linear Algebra

3.1 Basic concepts

In this section we will simply repeat some basic definitions and results from linear algebra that should be familiar to the reader. We start by describing a way to build new vector spaces with the help of old ones.

Definition 7. Let V and W be two vector spaces over the same field F. The external direct sum of V and W, denoted V ⊕ W, is the set of pairs (v, w), with v ∈ V and w ∈ W, with the following operations:

(v, w) + (v′, w′) = (v + v′, w + w′) and c(v, w) = (cv, cw)

for all v, v′ ∈ V, w, w′ ∈ W and c ∈ F.

Note that the direct sum of two vector spaces is yet another vector space.²

Definition 8. Let V be a vector space, and let W₁, W₂ ⊂ V be subspaces of V such that any vector v ∈ V can be written uniquely as

v = w₁ + w₂

where w₁ ∈ W₁ and w₂ ∈ W₂. Then V is said to be the internal direct sum of W₁ and W₂.

Theorem 1. Let V be a vector space that is equal to the internal direct sum of two subspaces W₁, W₂ ⊂ V. Then V is isomorphic to the external direct sum of W₁ and W₂.

Proof. We construct a linear map L : W₁ ⊕ W₂ → V by L(w₁, w₂) = w₁ + w₂. It is clear from the definition of the internal direct sum that this map is both surjective and injective. It is therefore an isomorphism. ∎

Since the internal and external direct sums are isomorphic and therefore practically the same, we will write W₁ ⊕ W₂ for both the internal and the external direct sum of W₁ and W₂.

Suppose that V is a vector space and that W ⊂ V is a subspace of V. A subspace W′ ⊂ V is called a complement of W in V if V = W ⊕ W′. The mapping p which sends each v ∈ V to its component w ∈ W is called the projection of V onto W associated with the decomposition V = W ⊕ W′. It follows that

range(p) = W and p(w) = w for all w ∈ W.

Conversely, if p ∈ End(V) is any linear operator on V satisfying the two properties above, then V is the direct sum of W and the kernel of p, that is,

V = W ⊕ ker(p).

We therefore have a bijective correspondence between the projections of V onto W and the complements of W in V.
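To make this correspondence concrete, here is a small numerical sketch (the subspace and the test vector are arbitrary choices for illustration): projecting R³ onto the xy-plane W along the z-axis gives an operator p whose range is W and whose kernel is the z-axis, so that R³ = W ⊕ ker(p).

```python
import numpy as np

# Projection of R^3 onto the xy-plane W along the z-axis.
p = np.diag([1.0, 1.0, 0.0])

v = np.array([2.0, -1.0, 5.0])
w, k = p @ v, v - p @ v             # component in W and component in ker(p)

print(np.allclose(p @ p, p))        # True: p is idempotent, so p(w) = w on its range
print(w, k, np.allclose(w + k, v))  # the decomposition v = w + k is recovered
```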

Proposition 1. Let V be a vector space, and let V₁, V₂ ⊂ V be subspaces such that V = V₁ ⊕ V₂. Let L : V → W be an injective linear transformation to some other vector space W. Then

L(V) = L(V₁) ⊕ L(V₂).

Proof. It is clear that any element w in L(V) can be written as w = w₁ + w₂, where w₁ ∈ L(V₁) and w₂ ∈ L(V₂). We show that L(V₁) ∩ L(V₂) = {0}.

Let a ∈ L(V₁) ∩ L(V₂). Then there exist v₁ ∈ V₁ such that a = L(v₁) and v₂ ∈ V₂ such that a = L(v₂). This means that L(v₁) = L(v₂). Since L is injective it must be the case that v₁ = v₂, but V₁ ∩ V₂ = {0}, so v₁ = v₂ = 0, and therefore also a = 0. ∎

Next we examine how we can remove a subspace from a space.

²In fact, the direct sum is the coproduct in the category of vector spaces.

Definition 9 (Quotient space). Let V be a vector space over a field F, and let W be a subspace of V. We define the coset of x with respect to W as

x + W = {x + w : w ∈ W}.

The quotient space V/W is defined to be the space of all cosets, V/W := {x + W : x ∈ V}, with addition and scalar multiplication defined by

1. (x + W) + (x′ + W) = (x + x′) + W

2. c(x + W) = cx + W

for all x, x′ ∈ V and c ∈ F. It is easy to check that these operations are well-defined, and that dim(V/W) = dim(V) − dim(W).

Definition 10 (Norm). Let the field F be either the real numbers R or the complex numbers C, and let V be a vector space over F. A norm on V is a non-negative scalar-valued function ‖·‖ : V → [0, ∞) that satisfies the following properties for all x, y ∈ V and a ∈ F.

1. Triangle inequality: ‖x + y‖ ≤ ‖x‖ + ‖y‖

2. Absolute scalability: ‖ax‖ = |a| · ‖x‖

3. Positive-definiteness: ‖x‖ = 0 ⟺ x = 0

A vector space equipped with a norm is called a normed vector space.

Definition 11 (Inner product). Let the field F be either the real numbers R or the complex numbers C, and let V be a vector space over F. An inner product on V is a map ⟨·, ·⟩ : V × V → F that satisfies the following axioms for all x, y, z ∈ V and a ∈ F.

1. Conjugate symmetry: ⟨x, y⟩ is the complex conjugate of ⟨y, x⟩.

2. Linearity in the second argument: ⟨x, ay⟩ = a⟨x, y⟩ and ⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩

3. Positive-definiteness: ⟨x, x⟩ ≥ 0, and ⟨x, x⟩ = 0 ⟺ x = 0

A vector space equipped with an inner product is called an inner product space.

Note that every inner product space V is also a normed vector space under the induced norm ‖x‖ := √⟨x, x⟩ for all x ∈ V. However, the converse is not always true; not every norm is induced by an inner product.

Let V be an inner product space equipped with the inner product ⟨·, ·⟩, and let W ⊂ V be a subspace of V. The subspace of all elements which are orthogonal to all elements in W is called the orthogonal complement of W in V, and is denoted by W⊥. Written out, we have that

W⊥ = {x ∈ V : ⟨x, y⟩ = 0 for all y ∈ W}.

It is clear that W⊥ is a complement of W in V, that is, V = W ⊕ W⊥.

Suppose that we have two inner product spaces V and W, both over the same field F, endowed with inner products ⟨·, ·⟩_V and ⟨·, ·⟩_W respectively. There is a natural way to use the (external) direct sum to create a new inner product space V ⊕ W, by assigning it the inner product ⟨·, ·⟩_{V⊕W} : (V ⊕ W) × (V ⊕ W) → F defined by

⟨(v, w), (v′, w′)⟩_{V⊕W} = ⟨v, v′⟩_V + ⟨w, w′⟩_W.

This extends naturally to the direct sum of more than two inner product spaces. Notice that if V = V₁ ⊕ V₂ is the internal direct sum of two subspaces V₁ and V₂, each equipped with its own inner product, then the definition above is the same as saying

⟨v, v′⟩_V = ⟨v₁, v′₁⟩_{V₁} + ⟨v₂, v′₂⟩_{V₂}

for all v, v′ ∈ V, where v₁, v′₁ ∈ V₁ and v₂, v′₂ ∈ V₂ are the projections of v and v′ onto V₁ and V₂ respectively.

Definition 12. Let V and W be two finite-dimensional inner product spaces with inner products ⟨·, ·⟩_V and ⟨·, ·⟩_W respectively, and let A : V → W be a linear transformation between these spaces. The adjoint of A is a linear transformation A* : W → V such that

⟨x, Ay⟩_W = ⟨A*x, y⟩_V

for all x ∈ W and y ∈ V.

It is possible to show that the adjoint of a linear transformation always exists, and that it is unique.³ However, we will not prove this here.

Theorem 2. Let V and W be two finite-dimensional inner product spaces, and let A : V → W be a linear transformation. The relationship between the image of A and the kernel of its adjoint is given by

ker(A*) = A(V)⊥.

Proof. We have the following equivalences:

x ⊥ A(V) ⟺ ⟨x, Ay⟩_W = 0 for all y ∈ V
          ⟺ ⟨A*x, y⟩_V = 0 for all y ∈ V
          ⟺ A*x = 0.

The last equivalence holds because if ⟨A*x, y⟩_V = 0 for all y ∈ V, then it is in particular zero for the specific choice y = A*x. ∎
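For complex matrices with the standard inner products, the adjoint is the conjugate transpose. The sketch below (using an arbitrary random matrix, purely for illustration) checks the defining identity ⟨x, Ay⟩ = ⟨A*x, y⟩ and the relation ker(A*) = A(V)⊥, by verifying that a vector orthogonal to the range of A is sent to zero by A*.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
A_adj = A.conj().T   # the adjoint with respect to the standard inner products

# Defining identity <x, Ay> = <A* x, y>; np.vdot conjugates its first argument,
# matching the convention of linearity in the second argument.
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(2) + 1j * rng.standard_normal(2)
print(np.allclose(np.vdot(x, A @ y), np.vdot(A_adj @ x, y)))  # True

# ker(A*) = (range A)^perp: project x onto the orthogonal complement of range(A).
x_perp = x - A @ np.linalg.pinv(A) @ x
print(np.allclose(A_adj @ x_perp, 0))  # True
```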

Definition 13 (Banach space). Let the field F be either the real numbers R or the complex numbers C, and let V be a vector space over F equipped with a norm ‖·‖. A sequence {x_i}_{i=1}^∞ in V is said to be a Cauchy sequence if for every positive real number ε > 0 there exists a positive integer N such that for all positive integers m, n ≥ N it holds that

‖x_m − x_n‖ < ε.

V is said to be a Banach space if for every Cauchy sequence {x_i}_{i=1}^∞ in V there exists an element x in V such that

lim_{n→∞} ‖x_n − x‖ = 0.

Recall that every inner product space is also a normed space under the induced norm.

Definition 14 (Hilbert space). An inner product space V with inner product ⟨·, ·⟩ that is also a Banach space with respect to the induced norm ‖x‖ = √⟨x, x⟩ is called a Hilbert space.

We state the following proposition without proving it.

Proposition 2. Every finite-dimensional normed vector space is a Banach space.

Note that this also means that every finite dimensional inner product space is a Hilbert space.

³This is not necessarily true if we allow the vector spaces to be infinite-dimensional.

Example 4. Let V and W be two finite-dimensional normed vector spaces over R or C with norms ‖·‖_V and ‖·‖_W respectively, and let Hom(V, W) denote the vector space of all linear maps from V to W. We can equip this space with a norm called the operator norm. For T ∈ Hom(V, W), define the operator norm of T as

‖T‖_op = sup_{v ∈ V, v ≠ 0} ‖Tv‖_W / ‖v‖_V.

Since both V and W are finite-dimensional, so is Hom(V, W).⁴ By Proposition 2, Hom(V, W) equipped with the operator norm is therefore a Banach space.
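For the Euclidean norms on Rⁿ and Rᵐ, the operator norm of a matrix equals its largest singular value. The sketch below (with an arbitrary matrix) compares that value with a brute-force maximization of ‖Tv‖/‖v‖ over many random directions.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 2))   # a linear map R^2 -> R^3

# For Euclidean norms, ||T||_op is the largest singular value of T.
op_norm = np.linalg.norm(T, 2)

# Brute-force lower bound: maximize ||Tv|| / ||v|| over random vectors.
vs = rng.standard_normal((10000, 2))
ratios = np.linalg.norm(vs @ T.T, axis=1) / np.linalg.norm(vs, axis=1)

print(op_norm, ratios.max())      # the two estimates agree to a few decimals
```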

3.2 Multilinear algebra

In this section our main goal will be to define a new way to build new vector spaces out of old ones, namely the tensor product. The definition of the tensor product follows the one given in [Roman, 2008]. We begin by defining the notion of a bilinear map.

Definition 15. Let U, V and W be vector spaces over a field F, and let U × V denote the Cartesian product of U and V as sets.⁵ A function f : U × V → W is said to be bilinear if it is linear in each variable separately, that is, if

f(αu + βu′, v) = αf(u, v) + βf(u′, v) and f(u, αv + βv′) = αf(u, v) + βf(u, v′)

for all u, u′ ∈ U, v, v′ ∈ V and α, β ∈ F.

We will now define a type of universality that will help us define the tensor product.

Definition 16. Let U × V be the Cartesian product of two vector spaces over a field F, as sets. A pair (T, t : U × V → T ), where T is a vector space, is said to be universal for bilinearity if for any vector space W , and every bilinear map f : U × V → W , there is a unique linear transformation τ : T → W such that

f = τ ◦ t.

This can be summarized by a commutative diagram: the map t goes from U × V to T, the bilinear map f goes from U × V to W, and the unique linear map τ : T → W closes the triangle, f = τ ∘ t.

Definition 17. Let V and W be vector spaces over a field F. Any pair (T, t : V × W → T) that is universal for bilinearity is called the tensor product of V and W. The vector space T is denoted by V ⊗ W, and the image t(v, w) of any two vectors v ∈ V, w ∈ W is denoted v ⊗ w.

This definition of the tensor product is rather abstract, but we will also provide a more concrete, though coordinate-dependent, construction of the tensor product.

Again, let V and W be vector spaces, and let B = {e_i : i ∈ I} and D = {f_j : j ∈ J} be bases for V and W respectively, for some index sets I and J. A bilinear map t on V × W is uniquely determined by its values on the "basis pairs" (e_i, f_j): if v = Σ_{i∈I} α_i e_i ∈ V and w = Σ_{j∈J} β_j f_j ∈ W, then

t(v, w) = t(Σ_{i∈I} α_i e_i, Σ_{j∈J} β_j f_j) = Σ_{i∈I} Σ_{j∈J} α_i β_j t(e_i, f_j).

⁴This can be seen by identifying Hom(V, W) with M_{dim(V) × dim(W)}, the space of all dim(V) × dim(W) matrices.

⁵It is important to note that U × V is just the Cartesian product of sets, and that we therefore do not have any algebraic structure on U × V. For example, expressions like (x, y) + (z, w) and α(x, y) are meaningless.

For each of the images t(e_i, f_j) we now invent a new symbol, written e_i ⊗ f_j, and define T to be the vector space with basis

E = {e_i ⊗ f_j : i ∈ I, j ∈ J}.

One can indeed verify that this construction satisfies the definition of the tensor product: if g : V × W → U is bilinear, then the function τ : T → U defined by τ(e_i ⊗ f_j) = g(e_i, f_j) is the unique linear map satisfying the definition.

An obvious consequence of the previous construction is the following.

Proposition 3. For two finite-dimensional vector spaces V and W, we have that dim(V ⊗ W) = dim(V) · dim(W).
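In coordinates, the tensor product of two vectors can be realized with the Kronecker product; the sketch below (with arbitrary vectors) checks the dimension formula and the bilinearity of the map (v, w) ↦ v ⊗ w.

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])        # dim V = 3
w = np.array([4.0, 5.0])             # dim W = 2

vw = np.kron(v, w)
print(vw.size == v.size * w.size)    # True: dim(V ⊗ W) = dim(V) · dim(W)

# Bilinearity in the first argument: (2v + v') ⊗ w = 2(v ⊗ w) + v' ⊗ w.
v2 = np.array([0.0, 1.0, -1.0])
print(np.allclose(np.kron(2 * v + v2, w), 2 * np.kron(v, w) + np.kron(v2, w)))
```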

We can now define some important algebras, starting with the tensor algebra T(V): an algebra on a vector space V over a field F whose multiplication is the tensor product ⊗. It is defined as

T(V) = ⨁_{n=0}^∞ V^{⊗n},

where

V^{⊗n} = V ⊗ V ⊗ ··· ⊗ V (n times) and V^{⊗0} := F.

Now let I ⊂ T(V) be the ideal generated by elements of the form v ⊗ v for v ∈ V, that is, the smallest ideal of T(V) containing all v ⊗ v with v ∈ V. We then define the exterior algebra of V, denoted Λ(V), to be the quotient algebra

Λ(V) := T(V)/I.

The exterior product of two elements of Λ(V) is the product induced by the tensor product ⊗ on T(V). More explicitly, first define the canonical surjection

π : T(V) → Λ(V), v ↦ v + I.

Then for any a, b ∈ Λ(V) there are v, w ∈ T(V) such that a = π(v) = v + I and b = π(w) = w + I. The exterior product of a and b is then defined by

a ∧ b := π(v ⊗ w) = v ⊗ w + I.

It follows from the definition of the exterior product that it is anticommutative, which in turn implies that a ∧ a = 0 for all a ∈ Λ(V).

The subspace Λᵏ(V) ⊂ Λ(V) spanned by elements of the form v₁ ∧ v₂ ∧ ··· ∧ vₖ, where each vᵢ ∈ V, is called the k-th exterior power of V. Suppose that V is finite-dimensional of dimension n and that {eᵢ}ⁿᵢ₌₁ is a basis for V. Then the set {e_i1 ∧ e_i2 ∧ ··· ∧ e_ik : 1 ≤ i1 < i2 < ··· < ik ≤ n} is a basis for Λᵏ(V). This shows that the dimension of Λᵏ(V) is

dim(Λᵏ(V)) = the binomial coefficient (n choose k).

Lastly we define the symmetric algebra Sym(V) by letting J ⊂ T(V) be the ideal generated by elements of the form v ⊗ w − w ⊗ v for all v, w ∈ V:

Sym(V) := T(V)/J.

Just as with the exterior power, we can construct the k-th symmetric power of V by

Symᵏ(V) = Tᵏ(V)/J,

where Tᵏ(V) = V^{⊗k}. We do not use any special symbol for multiplication in this algebra; instead we simply write s₁s₂ for the product of two elements s₁, s₂ ∈ Sym(V).

If we again suppose that {eᵢ}ⁿᵢ₌₁ is a basis for V, then a basis for Symᵏ(V) is given by {e_i1 e_i2 ··· e_ik : 1 ≤ i1 ≤ i2 ≤ ··· ≤ ik ≤ n}. This shows that the dimension of the k-th symmetric power of a finite-dimensional vector space V is

dim(Symᵏ(V)) = the binomial coefficient (n + k − 1 choose k), where n = dim(V).

Now define φ : V ⊗ V → Sym²(V) ⊕ Λ²(V) via

φ(e_i ⊗ e_j) = (e_i ⊗ e_j + J, e_i ⊗ e_j + I).

One can show that this defines an isomorphism, which gives that V ⊗ V ≅ Sym²(V) ⊕ Λ²(V).
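A quick dimension count is consistent with this decomposition: n² = n(n+1)/2 + n(n−1)/2. The sketch below checks the binomial formulas for the second symmetric and exterior powers against this identity for a few small n.

```python
from math import comb

for n in range(1, 7):
    dim_sym2 = comb(n + 1, 2)             # dim Sym^2(V) = (n + 2 - 1 choose 2)
    dim_ext2 = comb(n, 2)                 # dim Λ^2(V)   = (n choose 2)
    assert dim_sym2 + dim_ext2 == n * n   # dim(V ⊗ V) = n^2
print("dimension count matches for n = 1..6")
```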

An interesting property of the tensor product that we will use later on is that it distributes over the direct sum. The proof of the following proposition is taken from [Dummit and Foote, 2004].

Proposition 4. Let U, V and W be vector spaces. Then there are unique isomorphisms

(U ⊕ V) ⊗ W ≅ (U ⊗ W) ⊕ (V ⊗ W) and U ⊗ (V ⊕ W) ≅ (U ⊗ V) ⊕ (U ⊗ W).

Proof. We start with the first isomorphism. Consider the map

(U ⊕ V) × W → (U ⊗ W) ⊕ (V ⊗ W), ((u, v), w) ↦ (u ⊗ w, v ⊗ w).

This map is clearly bilinear, so by the universal property of the tensor product it induces a unique linear transformation

τ : (U ⊕ V) ⊗ W → (U ⊗ W) ⊕ (V ⊗ W), (u, v) ⊗ w ↦ (u ⊗ w, v ⊗ w).

We now want to define a map in the other direction. Consider the maps

U × W → (U ⊕ V) ⊗ W, (u, w) ↦ (u, 0) ⊗ w

and

V × W → (U ⊕ V) ⊗ W, (v, w) ↦ (0, v) ⊗ w.

Again, both of these maps are bilinear, so they induce unique linear maps φ₁ : U ⊗ W → (U ⊕ V) ⊗ W and φ₂ : V ⊗ W → (U ⊕ V) ⊗ W, where φ₁(u ⊗ w) = (u, 0) ⊗ w and φ₂(v ⊗ w) = (0, v) ⊗ w. Therefore, the map

τ′ : (U ⊗ W) ⊕ (V ⊗ W) → (U ⊕ V) ⊗ W, (u ⊗ w₁, v ⊗ w₂) ↦ φ₁(u ⊗ w₁) + φ₂(v ⊗ w₂) = (u, 0) ⊗ w₁ + (0, v) ⊗ w₂

is a well-defined linear transformation. In fact, τ and τ′ are inverses of each other, which proves the first isomorphism in the proposition. The other one is proved analogously. ∎

Remark. Notice the power of the coordinate-free definition of the tensor product by the universal property.

Now suppose that we have two linear maps between vector spaces, L₁ : V₁ → W₁ and L₂ : V₂ → W₂. It is natural to ask whether we can build a new linear map from L₁ and L₂, with inputs from V₁ ⊗ V₂ and outputs in W₁ ⊗ W₂. This can be done by defining the tensor product of linear maps:

L₁ ⊗ L₂ : V₁ ⊗ V₂ → W₁ ⊗ W₂, v₁ ⊗ v₂ ↦ L₁(v₁) ⊗ L₂(v₂).

Let L : V → W be a linear map. In analogy with the tensor product of vector spaces we define the linear map L^{⊗n} as

L^{⊗n} : V^{⊗n} → W^{⊗n}, v₁ ⊗ v₂ ⊗ ··· ⊗ vₙ ↦ L(v₁) ⊗ L(v₂) ⊗ ··· ⊗ L(vₙ).

The map L^{⊗n} can then be used to construct induced linear maps L^{∧n} : Λⁿ(V) → Λⁿ(W) and L^{Sn} : Symⁿ(V) → Symⁿ(W) by

L^{∧n}(v₁ ∧ v₂ ∧ ··· ∧ vₙ) = L(v₁) ∧ L(v₂) ∧ ··· ∧ L(vₙ) = L^{⊗n}(v₁ ⊗ v₂ ⊗ ··· ⊗ vₙ) + I

and

L^{Sn}(v₁v₂ ··· vₙ) = L(v₁)L(v₂) ··· L(vₙ) = L^{⊗n}(v₁ ⊗ v₂ ⊗ ··· ⊗ vₙ) + J,

where I and J are the relevant ideals from the definitions of the two algebras.⁶,⁷

⁶It is clear that if dim(V) = n, then dim(Λⁿ(V)) = 1, i.e. Λⁿ(V) is one-dimensional. Let L ∈ End(V) be an operator on V. Since Λⁿ(V) is one-dimensional, the operator L^{∧n} corresponds to a scalar. After some closer analysis, one can conclude that this scalar is the usual determinant of L, that is, L^{∧n} = det(L) · id. This can be taken as the definition of the determinant of a linear operator, thereby giving a coordinate-free definition.

⁷This can all be made more general and natural in the language of category theory.
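In coordinates, the tensor product of linear maps is again realized by the Kronecker product. The sketch below (arbitrary small matrices and vectors, for illustration only) verifies the defining identity (L₁ ⊗ L₂)(v₁ ⊗ v₂) = L₁(v₁) ⊗ L₂(v₂).

```python
import numpy as np

rng = np.random.default_rng(2)
L1 = rng.standard_normal((3, 2))   # L1 : V1 -> W1
L2 = rng.standard_normal((4, 3))   # L2 : V2 -> W2
v1 = rng.standard_normal(2)
v2 = rng.standard_normal(3)

lhs = np.kron(L1, L2) @ np.kron(v1, v2)   # (L1 ⊗ L2)(v1 ⊗ v2)
rhs = np.kron(L1 @ v1, L2 @ v2)           # L1(v1) ⊗ L2(v2)
print(np.allclose(lhs, rhs))              # True
```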


4 Topology

4.1 Point-set topology

4.1.1 Topological spaces and continuity

Continuity is the foundation of topology, and as a topological property it only relies on the concept of open sets. We will begin the preliminaries with the definition of a topological space. The following section is mostly based on the material presented in [Basener, 2006].

Definition 18. Let X be a set, and let τ be a collection of subsets of X. τ is a topology on X if it satisfies the following.

1. Any (finite or infinite) union of members of τ belongs to τ

2. Any finite intersection of members of τ is in τ

3. Both X and the empty set ∅ are in τ

The members of τ are called open sets, and the set X together with the topology τ forms a topological space.

This is the topological definition of open sets. When we want to emphasize the specifics of a topology, we will write a topological space as the tuple (X, τ ), otherwise we will just refer to the topological space by the symbol of the set (in this case X).

One often defines the open sets of Rⁿ with the help of open balls: a subset O ⊂ Rⁿ is defined to be open if for every x ∈ O there exists an r > 0 such that B_r(x) = {y ∈ Rⁿ : |y − x| < r} ⊂ O. One can show that this definition does indeed satisfy the three axioms in Definition 18.

We will now define the notion of continuity in a topological setting.

Definition 19. Let (X, τ_X) and (Y, τ_Y) be two topological spaces. A function f : X → Y is said to be continuous if the inverse image of every open set is open, that is, if

f⁻¹(O) = {x ∈ X : f(x) ∈ O} ∈ τ_X for all O ∈ τ_Y.

This definition is a generalization of the usual ε-δ definition often used in calculus.

Example 5. We will provide some examples of topological spaces.

1. Let X be a set, and define τ = P(X), where P(X) denotes the power set of X. This is called the discrete topology on X, and a space with the discrete topology is called a discrete space. Any function from a discrete space to another topological space is continuous.

2. A set X together with the topology τ = {X, ∅} is called an indiscrete space.

3. Let (X, τ_X) be a topological space, and let Y be a subset of X. We can define a natural topology on Y called the subspace topology, in which a subset of Y is open if and only if it is the intersection of an open subset of X with Y. That is,

   τ_Y = {Y ∩ O : O ∈ τ_X}.

4. Let (M, d) be a metric space. The metric topology is the topology induced by the metric, obtained by defining the open sets to be unions of open balls

   B_r(x) = {y ∈ M : d(x, y) < r}, where x ∈ M and r > 0.

With this general definition of continuity, it is very easy to prove that the composition of two continuous functions is continuous.

Proposition 5. Let X, Y and Z be topological spaces, and let f : X → Y and g : Y → Z be two continuous functions. Then the composition g ◦ f is also continuous.

Proof. Let O ⊂ Z be an open subset of Z. Then g⁻¹(O) is open, hence f⁻¹(g⁻¹(O)) is also open. Therefore (g ∘ f)⁻¹(O) = f⁻¹(g⁻¹(O)) ⊂ X is open. ∎

Next we define closed sets in a topological setting.

Definition 20. Let X be a topological space. A subset A of X is closed if its complement Aᶜ = X − A is open.

Proposition 6. Let X and Y be topological spaces. A function f : X → Y is continuous if and only if f⁻¹(A) ⊂ X is closed for all closed subsets A of Y.

Proof. First let f : X → Y be continuous, and let A be a closed subset of Y, i.e. Aᶜ is open. Then f⁻¹(Aᶜ) is open, hence f⁻¹(Aᶜ)ᶜ = f⁻¹(A) is closed.

Now let f : X → Y be a function such that f⁻¹(A) ⊂ X is closed for all closed subsets A of Y, and let O be an open subset of Y. We know that f⁻¹(Oᶜ) = f⁻¹(O)ᶜ is closed, hence f⁻¹(O) is open. ∎

Definition 21. Let X be a topological space, and let x be a point in X. We say that a subset N ⊂ X is a neighborhood of x if there is an open set O ⊂ X such that

x ∈ O ⊂ N.

We call N an open neighborhood if N itself is open.

Intuitively we think of a neighborhood of a point x as all the points "near" x. Notice, however, that the concept of "near" does not really make sense in an arbitrary topological space, where no metric is defined.

Before we move on to our next definition, we need to define a special function. Let A₁, A₂, . . . , Aₙ be sets. The i-th canonical projection is the map pᵢ : A₁ × A₂ × ··· × Aₙ → Aᵢ given by pᵢ(x₁, x₂, . . . , xₙ) = xᵢ. With the help of this canonical projection we next define the topology with which we typically endow Cartesian products of topological spaces.

Definition 22. Let X₁, X₂, . . . , Xₙ be topological spaces. The product topology on the Cartesian product X = X₁ × X₂ × ··· × Xₙ is the topology generated by sets of the form O₁ × O₂ × ··· × Oₙ, where each Oᵢ is open in Xᵢ; equivalently, it is the coarsest topology for which every canonical projection pᵢ : X → Xᵢ is continuous.

Definition 23. A topological space X is said to be a Hausdorff space if for any two distinct points x and y in X, there exist open neighborhoods N_x and N_y of x and y respectively, such that N_x ∩ N_y = ∅.

Observe that every metric space is a Hausdorff space.

4.1.2 Compactness

Here we will generalize the notion of compactness, a concept the reader should recognize from calculus. The definition is rather long, but we will work through it.

Definition 24. Let (X, τ) be a topological space and let E ⊂ X be a subset of X. A collection C of open subsets of X is said to be an open cover of E if the union of all elements in C contains E, that is,

E ⊂ ⋃_{A ∈ C} A.

A subcollection of an open cover that is itself a cover is called a subcover. That is, if C′ ⊂ C is a cover of E, then C′ is a subcover of C. A subcover is called a finite subcover if it contains only finitely many open sets.

A subset F of X is said to be compact if every open cover of F has a finite subcover. The topological space (X, τ) is itself compact if every open cover of X has a finite subcover.

We include a famous theorem, for the sake of completeness. However, we do not prove it.

Theorem 3 (Heine-Borel Theorem). A subset of Euclidean space Rn is compact if and only if it is closed and bounded.

Theorem 4. A closed subset of a compact space is compact.

Proof. Let X be a compact space and let A be a closed subset of X. Let C = {O_α} be a collection of open sets in X that covers A. Observe that X − A is open. The collection of open sets in X consisting of the sets in C together with the set X − A is an open cover of X. Since X is compact, this cover has a finite subcover {O₁, O₂, . . . , Oₙ, X − A}. Then C′ = {O₁, O₂, . . . , Oₙ} is a finite subfamily of C whose union contains A. ∎

Although R is not compact, every point in R has a compact neighbourhood. We can generalize this property to arbitrary topological spaces.

Definition 25. A topological space X is locally compact if every point in X has a compact neighbourhood.

4.2 Topological groups

A topological group is a set with both topological and algebraic properties that fit nicely together.

Definition 26. A topological group is a group (G, ⋆) that is also a Hausdorff space such that the group operations of multiplication and taking inverses are continuous. That is, the functions

µ : G × G → G, (a, b) ↦ ab

and

ι : G → G, a ↦ a⁻¹,

where G × G is viewed as a topological space with the product topology, are continuous.

The concept of homomorphisms carries over to the case of topological groups; however, we now also require the homomorphism to be continuous.

Example 6. Some examples of simple topological groups are the following.

1. Any group can be made into a topological group trivially by equipping it with the discrete topology. A group equipped with the discrete topology is called a discrete group.

2. Rn with the usual Euclidean topology (i.e. the topology induced by the ordinary Euclidean metric) is a topological group under addition.

Proposition 7. The matrix group GL(n) is a topological group.

Proof. First recall that a function f : Rⁿ → Rᵐ given by

f(x) = (f₁(x), f₂(x), . . . , fₘ(x))

is continuous if and only if each fᵢ is continuous. Now let A, B ∈ GL(n) be matrices. The ij-th entry of the product AB is given by the polynomial Σₖ₌₁ⁿ a_{ik} b_{kj}. Thus, matrix multiplication is continuous.

By Cramer's rule (see [Lay, 2006]), the ij-th entry of A⁻¹ is given by

(ji-th cofactor of A) / det(A),

which is a continuous function of the entries of A on GL(n); hence taking inverses is continuous. ∎
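As a quick numerical illustration (with an arbitrary invertible matrix), the entries of A⁻¹ computed from cofactors and the determinant agree with numpy's inverse, which is why each entry of A⁻¹ depends continuously on the entries of A.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])

def cofactor(A, i, j):
    """(i, j) cofactor: signed determinant of A with row i and column j removed."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

n = A.shape[0]
# Entry (i, j) of the inverse is the (j, i) cofactor divided by det(A).
inv_by_cofactors = np.array([[cofactor(A, j, i) for j in range(n)]
                             for i in range(n)]) / np.linalg.det(A)

print(np.allclose(inv_by_cofactors, np.linalg.inv(A)))  # True
```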

We now take O(n), SO(n) ⊂ GL(n) to be subsets of GL(n) equipped with the subspace topology. This makes multiplication and taking inverses continuous in O(n) and SO(n) automatically. Thus we have the following corollary.

Corollary 1. The groups O(n) and SO(n) are topological groups.

Next we will consider compact groups, i.e. topological groups that are compact as topological spaces. It is easy to see that GL(n) is not compact: the matrix cI is in GL(n) for all c ∈ R − {0}, and hence GL(n) is not bounded as a subset of R^(n²). However, we have the following proposition.

Proposition 8. The topological groups O(n) and SO(n) are compact.

Proof. To prove that O(n) is compact, we need to show that it is closed and bounded as a subset of R^(n×n). For a matrix A ∈ O(n), the ij-th entry of the equation AAᵀ = I reads

Σₖ₌₁ⁿ a_{ik} a_{jk} = δ_{ij}.   (1)

For each i, j, let S_{ij} denote the set of solutions to the polynomial equation above. Each S_{ij} is closed, since it is the inverse image of a point under a continuous function.⁸ Thereby O(n) is closed, since it is the intersection of all the S_{ij}: O(n) = ⋂_{i,j} S_{ij}.

To show that O(n) is bounded, we observe that putting i = j in equation (1) gives

Σₖ₌₁ⁿ a_{ik}² = 1,

which implies that |a_{ij}| ≤ 1 for all i, j.

The group SO(n) is a closed subset of O(n), since it is the preimage of 1 under the continuous function det : O(n) → R, and hence it is also compact. ∎
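The following sketch builds an orthogonal matrix via a QR factorization (an arbitrary construction, purely for illustration) and checks the two facts used above: the rows satisfy equation (1), so every entry is bounded by 1 in absolute value, and the determinant is ±1.

```python
import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # Q is an orthogonal matrix

print(np.allclose(Q @ Q.T, np.eye(4)))         # equation (1): rows are orthonormal
print(np.max(np.abs(Q)) <= 1 + 1e-12)          # every entry is bounded by 1
print(np.isclose(abs(np.linalg.det(Q)), 1.0))  # det Q = ±1, so Q ∈ O(4)
```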

A topological group that is locally compact as a topological space is called a locally compact group.

We can also give other algebraic structures topological qualities.

Definition 27. Let the field F either be the real numbers R or the complex numbers C. A topological vector space is a vector space V over F that is endowed with a topology such that vector addition + : V × V → V and scalar multiplication · : F × V → V are continuous functions.

⁸This comes from the fact that R^(n×n) is a Hausdorff space, in which every singleton set is closed. This can be proven as follows: let A be a Hausdorff space, and let x ∈ A be a point in A. Then for every point y distinct from x there is an open set O_y containing y but not x. Then

{x}ᶜ = ⋃_{y ∈ A−{x}} O_y

is open, hence {x} is closed.

5 Measure Theory

In this section we will cover some very basic measure theory and the construction of the Lebesgue integral.

We will also define the Haar measure and state some important theorems; however, we will not provide proofs for the sake of brevity.9

Definition 28 (σ-algebra). Let S be some set, and let P(S) denote its power set. A subset 𝒮 ⊂ P(S) is called a σ-algebra if it satisfies the following three properties.

1. S is in 𝒮

2. If A is in 𝒮, then so is its complement Aᶜ

3. If each Aᵢ of a countable family (Aᵢ)ᵢ₌₁^∞ is in 𝒮, then the countable union ⋃ᵢ₌₁^∞ Aᵢ is also in 𝒮

Notice that it follows from properties 1 and 2 that ∅ is also in 𝒮. The elements of a σ-algebra are called measurable sets. If S is a set and 𝒮 is a σ-algebra over S, then the tuple (S, 𝒮) is called a measurable space.

Definition 29 (Measurable function). Let (S, 𝒮) and (T, 𝒯) be two measurable spaces. A function f : S → T is called a measurable function if the inverse image of every measurable set is measurable. That is, if

f⁻¹(E) = {x ∈ S : f(x) ∈ E} ∈ 𝒮 for all E ∈ 𝒯.

If f : S → T is a measurable function we will often write f : (S, 𝒮) → (T, 𝒯) to emphasize the dependence on the measurable spaces.

Note the resemblance between the definition of a measurable function and that of a topologically continuous function. Next we define a way to measure the size of different sets, by way of a function called a measure.

Definition 30 (Measure space). Let (S, 𝒮) be a measurable space. A function µ : 𝒮 → [0, ∞] is called a measure if it satisfies the following properties.

1. For all E in 𝒮, it holds that µ(E) ≥ 0

2. The measure of the empty set is zero: µ(∅) = 0

3. For every countable collection {Eᵢ}_{i∈I}, where I is some countable index set, of pairwise disjoint sets in 𝒮, it holds that

µ(⋃_{i∈I} Eᵢ) = Σ_{i∈I} µ(Eᵢ)

If µ is a measure, the triple (S, 𝒮, µ) is called a measure space. A measure µ such that µ(S) = 1 is called a probability measure.

If (S, 𝒮, µ) is a measure space, a property P is said to hold µ-almost everywhere if there exists a set N ∈ 𝒮 such that µ(N) = 0 and all x ∈ S − N have the property P.

5.1 Construction of the Lebesgue integral

After all these definitions, one may wonder how all of this actually connects to the theory of integration.

This is all answered by the Lebesgue integral, a construction that extends the ordinary Riemann integral to a larger class of functions. This version of the integral can be constructed in several ways, but we will construct it using so called simple functions, which are linear combinations of indicator functions:

⁹The main point of this section is that there is an essentially unique way to integrate on a compact group.

Definition 31 (Indicator function). Let S be a set, and let A ⊂ S be a subset of S. The indicator function of A is the function 1_A : S → {0, 1} defined by

1_A(x) := 1 if x ∈ A, and 1_A(x) := 0 if x ∉ A.

The idea is then to start with a measure space (S, 𝒮, µ) and define the integral of a measurable function f : S → C with respect to the measure µ. Let E ∈ 𝒮 be some measurable set. We start by assigning a value to the integral of an indicator function 1_A, where A ∈ 𝒮 is some measurable set, as

∫_E 1_A(x) dµ(x) := µ(A ∩ E).

As previously mentioned, a linear combination of indicator functions Σᵢ₌₁ⁿ aᵢ 1_{Aᵢ}, where the aᵢ are real numbers and the sets Aᵢ ∈ 𝒮 are measurable, is called a simple function. We extend the definition of the integral above to non-negative simple functions: when all the coefficients aᵢ are non-negative, we set

∫_E (Σᵢ₌₁ⁿ aᵢ 1_{Aᵢ}(x)) dµ(x) = Σᵢ₌₁ⁿ aᵢ ∫_E 1_{Aᵢ}(x) dµ(x) = Σᵢ₌₁ⁿ aᵢ µ(Aᵢ ∩ E),

where the convention 0 · ∞ = 0 is used. We can now define the Lebesgue integral for an arbitrary non-negative measurable function f : S → [0, ∞) as

∫_E f(x) dµ(x) := sup { ∫_E φ(x) dµ(x) : 0 ≤ φ ≤ f, φ is simple }.

To handle the case of an arbitrary real-valued function f : S → R, we decompose f as f = f⁺ − f⁻, where

f⁺(x) = f(x) if f(x) > 0, and 0 otherwise,
f⁻(x) = −f(x) if f(x) < 0, and 0 otherwise.

We say that the Lebesgue integral of f exists if at least one of ∫_E f⁺(x) dµ(x) and ∫_E f⁻(x) dµ(x) is finite. In this case we define the integral of f to be

∫_E f(x) dµ(x) := ∫_E f⁺(x) dµ(x) − ∫_E f⁻(x) dµ(x).

For complex-valued functions f = g + ih we simply define

∫_E f(x) dµ(x) := ∫_E g(x) dµ(x) + i ∫_E h(x) dµ(x).

As it turns out, this integral is equivalent to the Riemann integral for Riemann integrable functions.
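As a toy illustration of the simple-function step, the sketch below integrates a simple function against a finite measure on a five-point set, using exactly the formula Σ aᵢ µ(Aᵢ ∩ E); the set, the weights, and the coefficients are made up for the example.

```python
# A toy measure space: S = {0, 1, 2, 3, 4} with a weight (measure) on each point.
mu_weights = {0: 0.5, 1: 1.0, 2: 2.0, 3: 0.25, 4: 1.25}

def mu(A):
    """Measure of a subset A of S: the sum of the point weights."""
    return sum(mu_weights[x] for x in A)

# Simple function phi = 3 * 1_{A1} + 7 * 1_{A2}.
A1, A2 = {0, 1, 2}, {2, 3}
coefficients = [(3.0, A1), (7.0, A2)]

E = {1, 2, 3, 4}   # the measurable set we integrate over

integral = sum(a * mu(A & E) for a, A in coefficients)
print(integral)    # 3*mu({1,2}) + 7*mu({2,3}) = 3*3.0 + 7*2.25 = 24.75
```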

5.1.1 Lᵖ-spaces

In this subsection we construct Lᵖ-spaces rigorously using the Lebesgue integral. We begin by letting (S, 𝒮, µ) be a measure space, and p be a real number with p ∈ [1, ∞). Let 𝓛ᵖ(S, 𝒮, µ) denote the set of all measurable functions f : S → C such that

‖f‖_p := ( ∫_S |f(x)|ᵖ dµ(x) )^(1/p) < ∞,

with vector space operations given by pointwise addition and scalar multiplication. This space, when equipped with the function ‖·‖_p, forms a seminormed vector space.¹⁰ To make it a normed space, we take the quotient space with respect to the kernel of ‖·‖_p. We therefore define

N = {f : f = 0 µ-almost everywhere} = ker(‖·‖_p)

in order to construct the Lᵖ-space as a quotient

Lᵖ(S, 𝒮, µ) = 𝓛ᵖ(S, 𝒮, µ)/N.

In Lᵖ(S, 𝒮, µ), two functions f and g are identified if f = g µ-almost everywhere. Lᵖ(S, 𝒮, µ) is a Banach space for all p ∈ [1, ∞), and for the special case p = 2 we can define an inner product compatible with the norm ‖·‖₂ by

⟨f, g⟩_{L²} = ∫_S conj(f(x)) g(x) dµ(x),

where conj denotes complex conjugation (consistent with our convention of linearity in the second argument). This shows that L²(S, 𝒮, µ) is also a Hilbert space.

Notice that the elements of Lᵖ(S, 𝒮, µ) are not functions, but cosets of the form f + N. It is, however, customary to abuse notation and speak of the elements as functions. We will do so for the rest of this text.

5.2 Borel algebras and the Haar measure

Our goal is now to define a special kind of measure that will be of particular interest: the Haar measure.

But before we define it, we need to go over a few preliminary definitions and results. The definitions are taken from [Gleason, 2010].

Suppose that we have a set X, and a collection of subsets of X (a topology on X, for example). It is possible to generate a σ-algebra from the collection of subsets.

Definition 32 (Generated σ-algebra). Let X be a set. For any collection ℰ of subsets of X, we define the σ-algebra generated by ℰ to be the smallest σ-algebra that contains ℰ. This σ-algebra is denoted σ(ℰ), and is constructed as the intersection of all σ-algebras that contain ℰ:

σ(ℰ) := ⋂ {𝒜 ⊂ P(X) : 𝒜 is a σ-algebra and ℰ ⊂ 𝒜}.

Definition 33 (Borel σ-algebra). Let (X, τ) be a topological space. The Borel σ-algebra on X is the smallest σ-algebra containing all open subsets of X, that is, the σ-algebra σ(τ) generated by τ. A subset A ⊂ X is called a Borel subset of X if A ∈ σ(τ).

If (X, τ ) is a topological space, and µ is a measure on the measurable space (X, σ(τ )), we will call the measure space (X, σ(τ ), µ) a topological measure space. If (X, τ ) is Hausdorff, we will call a measure µ on (X, σ(τ )) a Borel measure and (X, σ(τ ), µ) a Borel measure space. A measurable function on a Borel space is called Borel measurable.

Definition 34 (Regular measure). Let (S, 𝒮, µ) be a Borel measure space. Then µ is said to be a regular Borel measure if

1. If K ⊂ S is a compact subset of S, then µ(K) < ∞

2. If A ∈ 𝒮, then µ(A) = inf{µ(O) : A ⊂ O, O is open}¹¹

3. If O ⊂ S is open, then µ(O) = sup{µ(K) : K ⊂ O, K is compact}¹²

¹⁰A seminormed vector space is a vector space V equipped with a function ‖·‖ that satisfies all the properties of a norm except that ‖v‖ = 0 need not imply v = 0, for v ∈ V.

¹¹This is referred to as outer regularity.

¹²This is referred to as inner regularity.

We finally arrive at the definition of the Haar measure.

Definition 35 (Haar measure). Let G be a topological group. A left (right) Haar measure on G is a nonzero regular Borel measure µ on G such that µ(gA) = µ(A) (respectively µ(Ag) = µ(A)) for all g ∈ G and all measurable subsets A ⊂ G.

Now we state some existence and uniqueness theorems for the Haar measure on locally compact groups.

Theorem 5 (Existence). Let G be a locally compact group. Then there exists a left Haar measure on G.

Theorem 6 (Uniqueness). Let G be a locally compact group, and let µ and µ0 be two left Haar measures on G. Then µ = aµ0 for some positive real number a ∈ R+.

This theorem tells us that the left Haar measure on a locally compact group G is essentially unique, in the sense that two left Haar measures only differ by a positive multiplicative constant. Proofs can be found in [Gleason, 2010].

Using the theory of Lebesgue integration, one can define an integral for all Borel measurable functions f on a topological group G with respect to a Haar measure µ:

∫_G f(g) dµ(g).

If µ is a left Haar measure on G, then we have that

∫_G f(hg) dµ(g) = ∫_G f(g) dµ(g)   (2)

for all h ∈ G. This is clear for indicator functions, since

∫_G 1_A(hg) dµ(g) = ∫_G 1_{h⁻¹A}(g) dµ(g) = µ(h⁻¹A) = µ(A) = ∫_G 1_A(g) dµ(g)

by left invariance. We will make use of this integral frequently.
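The sketch below illustrates left invariance numerically: it averages an arbitrary test function over Haar-distributed random rotations (using scipy's special_ortho_group sampler, which draws from the Haar measure on SO(3)) and compares with the average of the left-translated function; up to Monte Carlo error the two agree.

```python
import numpy as np
from scipy.stats import special_ortho_group

rng = np.random.default_rng(4)
samples = special_ortho_group.rvs(dim=3, size=20000, random_state=rng)  # Haar on SO(3)

def f(R):
    """An arbitrary test function on SO(3)."""
    return R[0, 0] ** 2 + np.trace(R)

h = special_ortho_group.rvs(dim=3, random_state=rng)   # a fixed group element

avg_f = np.mean([f(R) for R in samples])               # approximates ∫ f(g) dµ(g)
avg_f_shifted = np.mean([f(h @ R) for R in samples])   # approximates ∫ f(hg) dµ(g)

print(avg_f, avg_f_shifted)   # approximately equal, illustrating left invariance
```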


6 Representation Theory

The theory presented in this section mainly follows [Serre, 1977].

6.1 Basic definitions

Definition 36 (Representation). Let V be a vector space over the field of complex numbers C, let Aut(V) denote the group of automorphisms of V, and suppose that G is a group.¹³ A representation of G in V is a homomorphism ρ : G → Aut(V). In other words, we associate with each element g ∈ G an operator ρ(g) ∈ Aut(V) such that

ρ(gh) = ρ(g) ∘ ρ(h) for all g, h ∈ G.

(We will often write ρ_g instead of ρ(g) to avoid cluttering with parentheses.) Observe that this implies that

ρ(e) = id and ρ(g⁻¹) = ρ(g)⁻¹,

where e ∈ G denotes the identity element of G, and id ∈ Aut(V) is the identity map. When ρ is given, V is called a representation space of G; however, we will often abuse notation and language and call V itself a representation of G when there is no chance of confusion.
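As a minimal concrete example (not taken from the thesis, just an illustration), the cyclic group Z/nZ can be represented in C² by rotation matrices, ρ(k) being rotation by 2πk/n; the sketch checks the homomorphism property ρ(g + h) = ρ(g)ρ(h) on a few random pairs.

```python
import numpy as np

n = 5  # the cyclic group Z/5Z

def rho(k):
    """Representation of Z/nZ in C^2: k acts as rotation by 2*pi*k/n."""
    t = 2 * np.pi * (k % n) / n
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

rng = np.random.default_rng(5)
for _ in range(5):
    g, h = rng.integers(0, n, size=2)
    # Homomorphism property: rho(g + h) = rho(g) @ rho(h); the group operation is addition mod n.
    assert np.allclose(rho((g + h) % n), rho(g) @ rho(h))
print("rho is a homomorphism on the sampled pairs")
```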

In the rest of this section, we will only consider representations in finite-dimensional vector spaces. This is not too limiting, as one can often decompose an infinite-dimensional vector space into finite-dimensional subspaces. We will now define how we can view two different representations as being practically equal.

Definition 37. Let ρ : G → Aut(V) and φ : G → Aut(W) be two representations of the same group G. These representations are said to be isomorphic if there exists a vector space isomorphism T ∈ Iso(V, W) that "transforms" one representation into the other, that is, it satisfies the identity

T ∘ ρ_g = φ_g ∘ T for all g ∈ G.

We denote two isomorphic representations by ρ ≅ φ.

We will see that most notions for representations only depend on the equivalence classes of representations.

Suppose that G is a group, that V is a vector space, and that ρ : G → Aut(V) is a representation of G in V. A vector subspace W ⊂ V is said to be G-invariant if it is stable under the action of G, that is, if ρ_g(w) ∈ W for all g ∈ G and w ∈ W. Suppose that W ⊂ V is a G-invariant subspace. Then the restriction ρ_g|_W of ρ_g to W is an element of Aut(W), hence the function

ρ_W : G → Aut(W), g ↦ ρ_g|_W

is a representation of G in W. This representation is called a subrepresentation of ρ.

Definition 38 (Direct sum of representations). Suppose that two representations ρ : G → Aut(V) and φ : G → Aut(W) are given. Then their (external) direct sum

ρ ⊕ φ : G → Aut(V ⊕ W)

is defined by

(ρ ⊕ φ)_g(v, w) = (ρ_g(v), φ_g(w)) for all g ∈ G.

Now let ρ : G → Aut(V) be a representation. If V₁, V₂ ⊂ V are two G-invariant subspaces of V such that V is equal to the (internal) direct sum V = V₁ ⊕ V₂, then it is easy to show that ρ is isomorphic to the (external) direct sum of its restrictions to V₁ and V₂, that is, ρ ≅ ρ_{V₁} ⊕ ρ_{V₂}. Simply let T : V → V₁ ⊕ V₂ be defined by T(v) = (v₁, v₂), where v = v₁ + v₂ with vᵢ ∈ Vᵢ. It is clear that T is an isomorphism, and that T ∘ ρ_g = (ρ_{V₁} ⊕ ρ_{V₂})_g ∘ T for all g ∈ G. We can also define the tensor product of two representations.

¹³Note that in the case where V is finite-dimensional, Aut(V) can be identified with the general linear group GL(n) (where n = dim(V)) once a basis for V has been chosen.

Definition 39. Let ρ : G → Aut(V) and φ : G → Aut(W) be two representations. The tensor product representation is given by the homomorphism

ρ ⊗ φ : G → Aut(V ⊗ W), g ↦ ρ(g) ⊗ φ(g),

where ρ(g) ⊗ φ(g) is the tensor product of linear maps, as defined in the section on multilinear algebra.

A representation ρ ∈ Hom(G, Aut(V)) of a group G in a vector space V also induces representations of G in Λᵏ(V) and Symᵏ(V). Simply define ρ^{∧k} ∈ Hom(G, Aut(Λᵏ(V))) and ρ^{Sk} ∈ Hom(G, Aut(Symᵏ(V))) by

(ρ^{∧k})(g) = (ρ(g))^{∧k} and (ρ^{Sk})(g) = (ρ(g))^{Sk},

where (ρ(g))^{∧k} and (ρ(g))^{Sk} are the induced linear maps discussed in the section on multilinear algebra.

It is now time to define a special class of representations, which, in some way, constitute the fundamental building blocks in representation theory.

Definition 40 (Irreducible representations). A representation ρ : G → Aut(V ) of a group G is said to be irreducible if the only G-invariant subspaces of V are {0} and V .

Suppose that ρ : G → Aut(V) is a representation, and that W ⊂ V is a G-invariant subspace of V. If the restriction of ρ to W, ρ_W, is irreducible, we will often abuse notation and call the vector space W itself irreducible. This is to avoid clutter when there is no confusion about the underlying representation.

It is clear that all one-dimensional representations are irreducible, since there are no proper non-zero subspaces.

Definition 41 (Completely reducible). Let G be a group, and let V be a vector space. A representation ρ : G → Aut(V) is said to be completely reducible if V = ⨁ᵢ₌₁ⁿ Vᵢ, where each Vᵢ is a non-trivial G-invariant subspace of V and each ρ_{Vᵢ} is irreducible.

Definition 42. Let G be a group, and let V be a vector space. A representation ρ : G → Aut(V) is said to be decomposable if V = V₁ ⊕ V₂ where V₁ and V₂ are two non-trivial G-invariant subspaces. Otherwise, ρ is called indecomposable.

6.2 Some important theorems

It is time to state and prove some important theorems about group representations that will be used throughout this paper.

Proposition 9. Let ρ : G → Aut(V) be isomorphic to a decomposable representation. Then ρ is decomposable.

Proof. Let φ : G → Aut(W) be a decomposable representation that is isomorphic to ρ, and let T : V → W be an isomorphism such that T ∘ ρ_g = φ_g ∘ T for all g ∈ G. Suppose that W₁, W₂ are two non-trivial G-invariant subspaces of W such that W = W₁ ⊕ W₂. It is then clear that V = T⁻¹(W₁) ⊕ T⁻¹(W₂), and that both T⁻¹(W₁) and T⁻¹(W₂) are G-invariant. ∎

We then have the following two propositions for the other types of representations. The proofs of these propositions are analogous to that of Proposition 9, and are therefore omitted for the sake of brevity.

Proposition 10. Let ρ : G → Aut(V ) be isomorphic to an irreducible representation. Then ρ is irreducible.

Proposition 11. Let ρ : G → Aut(V ) be isomorphic to a completely reducible representation. Then ρ is completely reducible.

Before we state our next theorem, recall the bijective correspondence between the projections of a vector space V onto a subspace W ⊂ V and the complements of W in V . We will from now on also assume that all groups are finite, unless stated otherwise.
