
An Introduction to Orthogonal Geometry

Emil Eriksson

Bachelor’s Thesis, 15 ECTS

Bachelor’s degree in Mathematics, 180 ECTS


Abstract

In this thesis, we use algebraic methods to study geometry. We give an introduction to orthogonal geometry, and prove a connection to the Clifford algebra.

Sammanfattning

I denna uppsats använder vi algebraiska metoder för att studera geometri. Vi ger en introduktion till ortogonal geometri och visar en koppling till Cliffordalgebran.


Acknowledgments

First, I wish to thank my supervisor and mathematical mentor Per Åhag, who generously shared both his time and wisdom to support me through the development of this thesis. I would also like to thank my examiner Olow Sande and peer reviewer Stig Lundgren for reading this thesis and providing invaluable feedback.

Finally, I wish to thank my partner Vania for proofreading this text and for her endless support.


Contents

Acknowledgments
1. Introduction
2. Vector Spaces
2.1. Homomorphisms and Matrices
3. Orthogonal Geometry
3.1. Metric Structures
3.2. Quadratic Forms
3.3. Orthogonal Geometry
4. The Clifford Algebra
4.1. The Lipschitz Group
5. References


1. Introduction

In the 1820s, Nikolai Lobachevsky and János Bolyai started a revolution. Independently of each other, they had both discovered the existence of non-Euclidean geometry. Following this, the first consistent non-Euclidean geometry was constructed by Eugenio Beltrami in 1868. Thereby, Beltrami had solved a problem that had remained unsolved for over 2000 years, providing definitive proof that the fifth postulate of Euclid’s Elementa [8] is independent of the other four.

Another crucial part in this revolution was played by Bernhard Riemann, who introduced spherical geometry as well as geometries of higher dimensions. As a result, it became clear that there are many different kinds of non-Euclidean geometries.

In an attempt to bring some internal structure to these new geometries, Felix Klein published what would be known as the Erlanger Programm [13] in 1893. Therein, a method for classifying geometries using group theory was presented.

In this method, each geometry is associated with a group of transformations on the whole space. Should the space have a metric structure, we can speak of the group of distance-preserving transformations called the isometry group. The structural relationships between these groups are what determine the relationships between the different geometries.

We are going to use these ideas to study orthogonal geometry, which is a general geometry including both Euclidean and hyperbolic geometry. Our final goal is to show an important connection between orthogonal geometry and a certain associative algebra, called the Clifford algebra. This algebra was introduced in 1878 by William Clifford in his paper Applications of Grassmann’s Extensive Algebra [4]. There, Clifford introduced what he called a geometric algebra, which generalizes the quaternions discovered by William Hamilton in 1843 [11]. As the title suggests, this algebra is built upon the idea of inner and outer products, introduced in Hermann Grassmann’s Ausdehnungslehre [10].

The Clifford algebra was essential to Élie Cartan’s discovery of the mathematical theory of spinors in 1913 [16]. The spin group, described mathematically as a subgroup of the Clifford algebra, soon became fundamental in quantum mechanics. The theory of spinors in mathematical physics was developed by Wolfgang Pauli [14] and Paul Dirac [6], among others. In more recent years, the Clifford algebra, under its original name of geometric algebra, has been proposed as a unified language for mathematics and physics by David Hestenes and Garret Sobczyk [12], influenced by Marcel Riesz [15]. These theories have also generated numerous applications in computer science [7].

The traditional applications of the Clifford algebra in physics are based on its relationships with the isometry group of physical space. Since this space is usually described as having a Euclidean or, in certain cases, hyperbolic geometry, and is sometimes extended to higher dimensions, it is meaningful to study the relationships with the more general orthogonal geometry.

In the present thesis we study vector spaces V over general fields F, where V has an orthogonal geometry. In this case, the isometry group of V is the orthogonal group O(V). In the final section we construct the Clifford algebra, and our final result will be a proof of Theorem 4.26, where we connect O(V) to a subgroup of the Clifford algebra called the Lipschitz group, Γ(V). We achieve this by showing that the factor group SΓ(V)/F* is isomorphic to the special orthogonal group, SO(V):

Theorem 4.26. Let SΓ(V) be all α ∈ C^+(V) that have an inverse, and for which α ◦ X ◦ α⁻¹ ∈ V for all X ∈ V. Then SΓ(V)/F* is isomorphic to SO(V),

where C^+(V) is an important subalgebra of the Clifford algebra C(V), called the even Clifford algebra.

The proof of this theorem, which is given as the conclusion of this thesis, relies heavily on the Cartan–Dieudonné Theorem (Theorem 3.44). It states that, in orthogonal geometry, all elements of O(V) are products of symmetries. This was first proven for vector spaces over R or C by Élie Cartan at the beginning of the twentieth century, and a version of the theorem is included in his book The Theory of Spinors [17]. Jean Dieudonné would later provide a generalized proof for vector spaces over an arbitrary field [5]. In this thesis, however, we will follow the proof given by Emil Artin in Geometric Algebra [1].

In order to arrive at Theorem 4.26, we begin by introducing vector spaces and showing some important properties needed to study orthogonal geometry in Section 2. We then move on to Section 3, which is dedicated to the study of orthogonal geometry, and ends with a proof of the Cartan–Dieudonné Theorem. Finally, in Section 4, we introduce the Clifford algebra and prove Theorem 4.26.

This thesis relies heavily on the excellent book Geometric Algebra [1] by the famous mathematician Emil Artin. A few definitions have been updated to modern standards using Juliusz Brzezinski’s Linjär och multilinjär algebra [3], and John B. Fraleigh’s A First Course In Abstract Algebra [9]. The last theorem has been rephrased in terms of the Lipschitz group using [3].

The reader of this thesis should be familiar with abstract and linear algebra at the undergraduate level. Should there be any uncertainty regarding the meaning of fundamental concepts, the reader may consult [9] or any other introductory textbook in the subject.


2. Vector Spaces

We begin by introducing vector spaces over fields and showing some important prop- erties that we will need later on in our study of orthogonal geometry.

Definition 2.1. A right vector space V over a field F is an additive, abelian group together with a composition Aa of an element A ∈ V and a ∈ F such that Aa ∈ V , and such that the following rules hold:

(1) (A + B)a = Aa + Ba,
(2) A(a + b) = Aa + Ab,
(3) (Aa)b = A(ab),
(4) A · 1 = A,

where A, B ∈ V , a, b ∈ F , and 1 is the multiplicative identity element of F . A left vector space can be defined in a similar way by using the composition aA.

We should clarify some of the symbols we are going to use right away.

Definition 2.2. If S is a set of vectors in V, we denote the space spanned by S by ⟨S⟩.

Definition 2.3. By the dimension of V we mean the number of elements n in a basis of V , and we write dim V = n.

A detailed description of the span and basis of a vector space can be found in Section 30 of [9].

If we have a subspace U of V, we can consider the factor space V/U. Its elements are the cosets A + U; for each A ∈ V, the coset A + U consists of A added to every vector U_i of U, or in symbols: A + U = {A + U_i | U_i ∈ U}.

Theorem 2.4. Let U be a subspace of V . The factor space V /U is also a vector space.

Proof. Since V is abelian we can consider the additive factor group V/U whose elements are the cosets A + U. Addition is given by (A_1 + U) + (A_2 + U) = (A_1 + A_2) + U and is obviously abelian.

We now define the composition of an element A + U of V /U and an element a ∈ F by:

(A + U ) · a = Aa + U

This composition is well defined because if A + U = B + U, then A − B ∈ U, hence (A − B)a ∈ U, which shows Aa + U = Ba + U. The rules of Definition 2.1 are easily checked. □

We should move towards a definition of homomorphisms of vector spaces. From group theory, we recall that the natural homomorphism and isomorphism of groups have a special meaning. They are characterized by the following important theorem.

Theorem 2.5 (The Fundamental Homomorphism Theorem). Let G and G′ be groups, and let g be an element of G. Let φ : G → G′ be a group homomorphism with kernel K. Then φ(G) is a group, and the map µ : G/K → φ(G) given by µ(gK) = φ(g) is an isomorphism. If γ : G → G/K is the homomorphism given by γ(g) = gK, then φ(g) = µγ(g) for each g ∈ G.

Proof. See [9, p. 140]. 

The isomorphism µ and the homomorphism γ are referred to as the natural isomorphism and homomorphism, respectively. There may be other isomorphisms and homomorphisms for these same groups, but the maps µ and γ have a special status with respect to φ, and are uniquely determined by Theorem 2.5. We see that the following diagram commutes.

[Commutative diagram: γ : G → G/K, µ : G/K → φ(G), and φ = µγ : G → φ(G).]

We can now consider the natural map γ : V → V /U that maps A ∈ V onto the coset A + U . γ is an additive group homomorphism of V onto V /U . However, since V /U is a vector space, we also have

γ(Aa) = Aa + U = (A + U ) · a = γ(A) · a, and we can define homomorphisms of vector spaces as:

Definition 2.6. Let V and W be two right vector spaces over F . A map φ : V → W is called a homomorphism of V into W , if

(1) φ(A + B) = φ(A) + φ(B), and
(2) φ(Aa) = φ(A) · a,

where A, B ∈ V and a ∈ F . As for groups, we call φ an isomorphism if φ is bijective.

The notion of a kernel U of φ follows from the homomorphism of the additive groups. It is the set of all A ∈ V for which φ(A) = 0. If A ∈ U, then φ(Aa) = φ(A) · a = 0 so that Aa ∈ U. This shows that U is not only a subgroup, but also a subspace of V.

Example 2.7. Let U be an arbitrary subspace of V, and γ : V → V/U be the natural group homomorphism. It is clear that γ is a vector space homomorphism of V onto V/U. The zero element of V/U is the image of 0, hence U itself. The kernel consists of all A ∈ V for which

γ(A) = A + U = U.

It is therefore the given subspace U . 

One should mention the special case U = 0. Each coset A + U is now the set with the single element A, and may be identified with A. Strictly speaking, we only have a natural isomorphism V/0 ≅ V, but we may write V/0 = V.

Theorem 2.8. Let φ : V → W be a homomorphism with kernel U. There is a natural isomorphism µ mapping V/U onto the image space φ(V).


Proof. Since φ is a homomorphism of the additive groups, we already have the natural group homomorphism γ : V → V /U , and the natural group isomorphism µ : V /U → φ(V ) from Theorem 2.5. Since γ(A) = A + U and µ(A + U ) = φ(A), we have

µ((A + U )a) = µ(Aa + U ) = φ(Aa) = φ(A)a = µ(A + U )a.

All three maps are consequently homomorphisms between the vector spaces, and µ is an isomorphism. Therefore, the diagram commutes.

[Commutative diagram: γ : V → V/U, µ : V/U → φ(V), and φ = µγ : V → φ(V).] □

We can add subspaces of V together, and in some circumstances they will make up the whole space. We define a certain sum.

Definition 2.9. Let U and W be subspaces of V . We use the notation U +W to describe the space spanned by the union U ∪ W . The elements of this space are the vectors A + B, where A ∈ U , and B ∈ W .

Definition 2.10. Let U be a subspace of V . We say that a subspace W of V is supplementary to U if V = U + W and U ∩ W = 0. The sum is direct, and we write

V = U ⊕ W.

We have the following theorem:

Theorem 2.11. To every subspace U of V , one can find supplementary spaces W . Each of these W is naturally isomorphic to the space V /U .

Proof. Let U be a subspace of V, and let S be a basis of U. Complete S to a basis S ∪ T of V, where S and T are disjoint sets. Put W = ⟨T⟩; then U + W = V and obviously V = U ⊕ W.

Now, suppose that U and W are given subspaces of V . Let φ be the natural map φ : V → V /U . We construct the natural homomorphism ψ : W → V /U by letting ψ be the restriction of φ to W . The image ψ(W ) consists of all cosets A + U with A ∈ W . The union of these cosets forms the space W + U , the cosets A + U are, therefore, the stratification of W + U by cosets of the subspace U of W + U . This shows that ψ(W ) = (U + W )/U .

We now turn to the kernel of ψ. For all elements A ∈ W , we have ψ(A) = φ(A).

However, φ has the kernel U in V so that ψ has U ∩ W as kernel. Therefore, we can construct the natural isomorphism W/(U ∩ W ) → (U + W )/U .

Finally, if V = U ⊕ W , we have W/(U ∩ W ) = W/0 = W and (U + W )/U = V /U .

W is therefore naturally isomorphic to V /U . 


The dimension of the space V , in relation to the dimension of its subspaces, gives us a way to define some important geometrical notions in a precise way.

Definition 2.12. The dimension of the space V /U is called the codimension of U : codim U = dim V /U.

Definition 2.13. Spaces of dimension one are called lines, of dimension two planes, and spaces of codimension one are called hyperplanes.

2.1. Homomorphisms and Matrices. Now, let us take a closer look at the vector space homomorphisms and try to find some internal structure. We will show that the set of all homomorphisms of V into V forms a ring. In order to do this, we will start by constructing the additive, abelian group. This can be done in a slightly more general setting.

Theorem 2.14. Let V and V′ be right vector spaces over a field F, and let hom(V, V′) be the set of all homomorphisms of V into V′. One can define an addition such that hom(V, V′) becomes an abelian group.

Proof. Suppose f, g ∈ hom(V, V′) and let f + g be the map which sends the vector X ∈ V onto the vector f(X) + g(X) of V′. In other words,

(f + g)(X) = f(X) + g(X).

It is easily checked that f + g is a homomorphism, and that the addition is associative and commutative. The map which sends every vector X ∈ V onto the 0 vector of V′ is obviously the 0 element of hom(V, V′), and shall also be denoted by 0. If f ∈ hom(V, V′), then the map −f which sends X onto −(f(X)) is a homomorphism, and, indeed, f + (−f) = 0. hom(V, V′) is therefore an abelian group. □

In order to find some additional structure, we must limit ourselves to the case when V′ = V. An element of hom(V, V) maps V into V, and such a homomorphism is called an endomorphism of V.

Theorem 2.15. One can define a multiplication, by function composition, such that hom(V, V ) becomes a ring.

Proof. From Theorem 2.14 we know that hom(V, V ) is an abelian group under addition.

Let f, g ∈ hom(V, V) and consider the composite map gf : V → V given by gf(X) = g(f(X)). Then gf is an element of hom(V, V), and the operation is associative.

Since

(g 1 + g 2 )f (X) = g 1 f (X) + g 2 f (X) = (g 1 f + g 2 f )(X), and

g(f 1 + f 2 )(X) = g(f 1 (X) + f 2 (X)) = gf 1 (X) + gf 2 (X) = (gf 1 + gf 2 )X,

we see that both distributive laws hold, and hom(V, V ) is therefore a ring. 

Remark 2.16. The ring hom(V, V) also has a multiplicative identity element, namely the identity map. The maps f that are bijective have an inverse map f⁻¹ that is also in hom(V, V). Therefore, these maps f form a group under multiplication. If dim V is finite, this group is called the general linear group GL(V).


Now suppose that dim V = n is finite. There is a well known isomorphism between the ring hom(V, V ) and the ring of all n × n matrices with elements in F . This allows us to use results defined in terms of matrices, such as determinants, without the abstract concepts that are otherwise required.

Theorem 2.17.

a) The ring hom(V, V ) is isomorphic to the ring of all n × n matrices with elements in F . The isomorphism depends on the choice of basis.

b) Let A and D be two n × n matrices with elements in F, and let f, g ∈ hom(V, V). Let g be an element that carries a selected new basis into the old one, and suppose that g ↦ D describes g in terms of the old basis. If f ↦ A is the description of any f in terms of the old basis, then DAD⁻¹ is the description of this same f in terms of the new basis.

Proof.

a) Let A 1 , A 2 , · · · , A n be a basis of V and define an arbitrary mapping f (A i ) = B i . If X = A 1 x 1 + A 2 x 2 + · · · + A n x n ∈ V then

f(X) = B_1 x_1 + B_2 x_2 + · · · + B_n x_n.    (2.1)

Conversely, choose any n vectors B_i ∈ V and define a map f by (2.1). We see that f ∈ hom(V, V), and that f(A_i) = B_i. Therefore, f is completely determined by the images B_i of the basis elements A_i, and the B_i can be any system of n vectors of V. If we let j = 1, 2, · · · , n, we can express each B_j in the basis A_ν as

f(A_j) = B_j = Σ_{ν=1}^{n} A_ν a_{νj}.

Therefore, we see that f is described by an n × n matrix (a ij ), where i is the index of the rows and j the index of the columns. Now, let g ∈ hom(V, V ) be given by the matrix (b ij ), which means that

g(A_j) = Σ_{ν=1}^{n} A_ν b_{νj}.

Adding f and g we get

(f + g)(A_j) = Σ_{ν=1}^{n} A_ν (a_{νj} + b_{νj}),

and if we multiply f and g we get

(fg)(A_j) = f(Σ_{ν=1}^{n} A_ν b_{νj}) = Σ_{ν=1}^{n} f(A_ν) b_{νj} = Σ_{ν=1}^{n} (Σ_{µ=1}^{n} A_µ a_{µν}) b_{νj} = Σ_{µ=1}^{n} A_µ (Σ_{ν=1}^{n} a_{µν} b_{νj}).

We see that f + g is described by the matrix (a_{ij} + b_{ij}), and fg by (Σ_{ν=1}^{n} a_{iν} b_{νj}).

If we define addition between matrices as

(a ij ) + (b ij ) = (a ij + b ij ),


and matrix multiplication as

(a_{ij}) · (b_{ij}) = (Σ_{ν=1}^{n} a_{iν} b_{νj}),

the correspondence f ↦ (a_{ij}) becomes an isomorphism between hom(V, V) and the ring of all n × n matrices. This definition of addition and multiplication of matrices is, of course, equal to our usual matrix operations. We see that this isomorphism depends on the choice of the basis A_i for V.

b) For the next part of the theorem, let g be another element of hom(V, V), but suppose that g is bijective. Let (b_{ij}) be the matrix associated with the element gfg⁻¹ of hom(V, V), meaning that

gfg⁻¹(A_j) = Σ_{ν=1}^{n} A_ν b_{νj}.

If we apply g⁻¹ to this equation, it becomes

f(g⁻¹(A_j)) = Σ_{ν=1}^{n} g⁻¹(A_ν) · b_{νj}.

Since g⁻¹ is any bijective map of V, the vectors g⁻¹(A_ν) are another basis of V, and g can be chosen in such a way that g⁻¹(A_ν) is any given basis of V. Looking at the equation from this point of view, we see that the matrix (b_{ij}) is the one which would describe f if we had chosen g⁻¹(A_ν) as basis of V. It follows that the matrix describing f in terms of the new basis is the same as the one describing gfg⁻¹ in terms of the old basis A_ν. In this statement, g was the map that carries the new basis g⁻¹(A_ν) into the old one,

g(g⁻¹(A_ν)) = A_ν.

This g is, therefore, a fixed map once the new basis is given. Now suppose that f ↦ A and g ↦ D are the descriptions of f and g in terms of the original basis. Then gfg⁻¹ ↦ DAD⁻¹. The attitude should be that g is fixed, determined by the old and the new basis, and that f ranges over hom(V, V). □

We can now use this isomorphism to define the determinant in a simple way, and, since we are exclusively dealing with vector spaces over fields, we do not need a more general definition.

Definition 2.18. Let f ∈ hom(V, V ), and let A be the matrix describing f . The determinant of f is the determinant of A, i.e.

det f = det A.

Theorem 2.19. Let f and g be elements of hom(V, V ). The determinant of f is well defined, independent of the choice of basis, and satisfies

det f g = det f · det g.


Proof. If we use a new basis, A has to be replaced by DAD⁻¹, and the determinant becomes det D · det A · (det D)⁻¹ by the multiplication theorem of determinants. The det D cancels and we see that the map given by

f ↦ det f = det A

is well defined. If g corresponds to the matrix B, then fg corresponds to the matrix AB, and the multiplication theorem shows det fg = det f · det g. □
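To make Theorem 2.17 b) and Theorem 2.19 concrete, the following small numerical sketch (in Python with numpy; it is not part of the thesis, and the matrices are arbitrary examples) checks that describing f in a new basis amounts to conjugation by the basis-change matrix, and that the determinant is unchanged by this and is multiplicative.

import numpy as np

# Theorem 2.17 b): if f is described by A and the basis change g by D (old basis),
# then D A D^(-1) describes the same f in the new basis.
# Theorem 2.19: det f is independent of the basis and multiplicative.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
D = np.array([[1.0, 1.0],
              [1.0, 2.0]])                       # invertible basis change
A_new = D @ A @ np.linalg.inv(D)                 # description of f in the new basis
print(np.isclose(np.linalg.det(A_new), np.linalg.det(A)))   # True
B = np.array([[0.0, 1.0],
              [4.0, 5.0]])                       # describes another endomorphism
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))      # True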

The determinant can also be defined without referring to matrices. Such a definition can be found in Algèbre [2] by Nicolas Bourbaki.


3. Orthogonal Geometry

In this section, we study the generalization of Euclidean and hyperbolic geometry called orthogonal geometry. We start by defining a general metric structure on a vector space and gather enough knowledge to arrive at a formal definition. Henceforth, V is a vector space of finite dimension n over a field F with characteristic ≠ 2.

3.1. Metric Structures. The first step towards defining a geometry of a vector space is to define a metric structure describing a notion of length and angles between vectors.

We start by making a two-sided vector space from a left vector space V by defining Xa = aX, a ∈ F, X ∈ V.

Since V has become a left space as well as a right space, we can consider bilinear maps of V and V into F.

Definition 3.1. A bilinear form is a map φ(X, Y ) : V × V → F that satisfies the following rules:

(1) φ(X_1 + X_2, Y) = φ(X_1, Y) + φ(X_2, Y),
(2) φ(X, Y_1 + Y_2) = φ(X, Y_1) + φ(X, Y_2),
(3) φ(aX, Y) = a · φ(X, Y),
(4) φ(X, aY) = a · φ(X, Y),

where X, Y ∈ V and a ∈ F . If φ is a bilinear form we often write φ(X, Y ) = XY , and think of XY ∈ F as the product of X and Y . We also say that the bilinear form defines a metric structure on V . If XY = Y X, we say that the bilinear form is symmetric.

In order to gain a more intuitive understanding of this, one may think of X 2 = XX as something like the length of the vector X, and of XY as something related to the angle between X and Y .

Example 3.2. Suppose that A_1, A_2, · · · , A_n is a basis of V, and X, Y ∈ V. Then

X = Σ_{i=1}^{n} x_i A_i,   Y = Σ_{j=1}^{n} y_j A_j,

and we can write the product XY as

XY = Σ_{i,j=1}^{n} g_{ij} x_i y_j,    (3.1)

where g_{ij} = A_i A_j ∈ F. We see that if we know the g_{ij}, we know XY. □

We see that the terms g_{ij}, and hence the metric structure, depend on the choice of basis. If we want to change the basis of V, we can express the new basis in terms of the old one.

Example 3.3. Let B_1, B_2, · · · , B_n be a new basis of V. We can write each B_j in terms of the old basis as B_j = Σ_{ν=1}^{n} A_ν a_{νj} with certain a_{ij} ∈ F. The B_j will form a basis of V if and only if the determinant of the matrix (a_{ij}) is ≠ 0.


The product of these new basis vectors can, therefore, be expressed as

ḡ_{ij} = B_i B_j = Σ_{ν,µ=1}^{n} A_ν a_{νi} A_µ a_{µj} = Σ_{ν,µ=1}^{n} a_{νi} g_{νµ} a_{µj}.

We can write this using matrices as

(ḡ_{ij}) = (a_{ji})(g_{ij})(a_{ij}),    (3.2)

where (a_{ji}) is the transpose of (a_{ij}). □
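As a small numerical sketch of formula (3.2) (Python with numpy; not part of the thesis, and the Gram matrix and basis change are arbitrary examples), the Gram matrix transforms by congruence, so its determinant changes exactly by a square factor.

import numpy as np

G = np.array([[1.0, 2.0],
              [2.0, 0.0]])          # g_ij = A_i A_j for some symmetric bilinear form
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # column j expresses the new basis vector B_j in the old basis
G_new = P.T @ G @ P                 # formula (3.2)
print(G_new)
# The discriminant is only determined up to a square factor:
print(np.isclose(np.linalg.det(G_new),
                 np.linalg.det(G) * np.linalg.det(P) ** 2))   # True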

Next, we have to define what we mean by orthogonality in our metric structure.

Definition 3.4. If A, B ∈ V, and XY : V × V → F is a bilinear form, we say that A is left orthogonal to B if the product AB = 0. Similarly, if BA = 0, we say that A is right orthogonal to B. If U and W are subspaces of V, we say that U is left orthogonal to W if AB = 0 for all A ∈ U and B ∈ W. The set of all vectors in V that are left orthogonal to W forms a subspace of V, denoted W^L.

The following subspace is of special importance.

Definition 3.5. The subspace V^L of V, consisting of the vectors of V that are left orthogonal to all vectors of V, is called the left kernel of our bilinear form. The right kernel V^R is defined in a similar way.

One can show that both kernels must have the same dimension. A proof can be found on page 21 in [1]. We are especially interested in the case when the left kernel V^L is the zero subspace of V. This implies, of course, that V^L = V^R = 0. We should use some special terminology.

Definition 3.6. We call a vector space V with a metric structure non-singular if the kernels of the bilinear form are the zero subspace.

Definition 3.7. Let A 1 , A 2 , · · · , A n be a basis of V , and let g ij = A i A j ∈ F , where i, j = 1, 2, · · · , n. The determinant G = det(g ij ) is called the discriminant of V .

There is a familiar property of the discriminant, regarding singularity.

Theorem 3.8. If V is a vector space with a metric structure, then V is non-singular if and only if the discriminant G ≠ 0.

Proof. Let A_j be a basis of V, where j = 1, 2, · · · , n. A vector X = Σ_{ν=1}^{n} x_ν A_ν will be in V^L if and only if XY = 0 for all Y, and then XA_j = 0 for all j. On the other hand, if XA_j = 0 for all j we get X · (Σ_{µ=1}^{n} y_µ A_µ) = 0. X will therefore be in V^L if and only if

Σ_{ν=1}^{n} g_{νj} x_ν = 0   for all j = 1, 2, · · · , n.    (3.3)

If V is non-singular, then V^L = V^R = 0, which implies that (3.3) should only have the trivial solution. This is the case if and only if the determinant of the matrix (g_{ij}) is different from 0, i.e., G ≠ 0. □
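As a hedged illustration of Theorem 3.8 (Python with numpy, with ad hoc Gram matrices that are not from the thesis), the left kernel is the solution space of (3.3), i.e. the null space of the Gram matrix, and it vanishes exactly when the discriminant is nonzero.

import numpy as np

G_singular = np.array([[1.0, 1.0],
                       [1.0, 1.0]])    # det = 0: the vector (1, -1) lies in the radical
G_regular  = np.array([[1.0, 2.0],
                       [2.0, 0.0]])    # det = -4: the radical is the zero subspace
for G in (G_singular, G_regular):
    kernel_dim = G.shape[0] - np.linalg.matrix_rank(G)      # dimension of the solution space of (3.3)
    print(round(float(np.linalg.det(G)), 6), kernel_dim)    # 0.0 1, then -4.0 0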


Let us now return to the expression (3.2) and denote the discriminant of V, as defined by the basis B_ν, by Ḡ. If we take the determinants of (3.2), we get

Ḡ = G · (det(a_{ij}))².

The discriminant of V is therefore only uniquely determined up to a square factor.

Definition 3.9. Let V and W be vector spaces over F , and σ : V → W be a homo- morphism of V into W . Suppose that metric structures are defined on both V and W . We call σ an isometry of V into W if σ preserves products in the following sense:

XY = (σX)(σY ) for all X, Y ∈ V.

The most important case is that of isometries that are isomorphisms of V onto V. In this case, σ⁻¹ is also such an isometry, and if σ and τ are isometries of V onto V, then so is στ. We can therefore state the following definition:

Definition 3.10. Let V be a space with a metric structure. The isometries of V onto V form a group that we will call the group of V .

Theorem 3.11. Let V be a non-singular vector space. Let σ be an endomorphism V → V , A i a basis of V , and σA i = B i . Then σ will be an isometry if and only if A i A j = B i B j for all i, j = 1, 2, · · · , n.

Proof. Suppose that σ is an isometry. We must then have A_i A_j = B_i B_j. Now suppose instead that B_i B_j = A_i A_j. If X = Σ_{ν=1}^{n} x_ν A_ν and Y = Σ_{µ=1}^{n} y_µ A_µ, then

σX = Σ_{ν=1}^{n} x_ν (σA_ν) = Σ_{ν=1}^{n} x_ν B_ν,   and   σY = Σ_{µ=1}^{n} y_µ B_µ.

We get

XY = Σ_{ν,µ=1}^{n} x_ν y_µ A_ν A_µ = Σ_{ν,µ=1}^{n} x_ν y_µ B_ν B_µ = σX · σY.

We see that σ is an isometry if the kernels of σ are 0, but this is given since V is non-singular. □

Theorem 3.12. If V is non-singular and σ an isometry of V onto V , then det σ = ±1.

Proof. Let σA_i = B_i. If we write B_j = Σ_{ν=1}^{n} A_ν a_{νj}, then the matrix (a_{ij}) is the one describing the endomorphism σ in the sense of Section 2.1.

Recall Example 3.2. Since σ is an isometry we have A_i A_j = B_i B_j = ḡ_{ij} = g_{ij}. From Example 3.3 we have

(g_{ij}) = (a_{ji})(g_{ij})(a_{ij}).

Taking the determinants, we get

G = (det(a_{ij}))² · G = (det σ)² · G.

Since V is non-singular, we know that G ≠ 0. Therefore, det σ = ±1. □
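Theorems 3.11 and 3.12 can be checked numerically in the familiar Euclidean plane, where the Gram matrix is the identity. The sketch below (Python with numpy; the matrices are standard examples, not taken from the thesis) verifies the isometry condition in matrix form and the fact that det σ = ±1.

import numpy as np

G = np.eye(2)                                    # Gram matrix of the Euclidean plane
theta = 0.7
rotation  = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
reflexion = np.array([[1.0,  0.0],
                      [0.0, -1.0]])              # symmetry with respect to the first axis
for sigma in (rotation, reflexion):
    # sigma is an isometry iff it preserves all products of basis vectors: sigma^T G sigma = G
    print(np.allclose(sigma.T @ G @ sigma, G), round(float(np.linalg.det(sigma))))
# prints: True 1 (the rotation) and True -1 (the reflexion)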


In the case when det σ = ±1, we will use some special terminology.

Definition 3.13. If det σ = +1, σ is called a rotation. If det σ = −1, we call σ a reflexion.

3.2. Quadratic Forms. We are now almost ready to define orthogonal geometry, but we should first take a quick look at something called quadratic forms. This will provide us with a deeper understanding of the motivation behind the study of orthogonal geometry.

We start with a definition.

Definition 3.14. A quadratic form is a map Q : V → F , which satisfies the two conditions:

(1) Q(aX) = a²Q(X),

(2) φ(X, Y ) = Q(X + Y ) − Q(X) − Q(Y ),

where X, Y ∈ V, a ∈ F and φ(X, Y) is a bilinear form that we will denote X ◦ Y.

Example 3.15. Let V be a vector space with a metric structure, and consider the expression X² = XX. This is an example of a quadratic form. We find from (3.1) that

X² = Σ_{i,j=1}^{n} g_{ij} x_i x_j.

This is an expression which depends quadratically on the x_ν. The coefficient of x_i² is g_{ii}, and the coefficient of x_i x_j is g_{ij} + g_{ji} if i ≠ j.

If we let Q(X) = X², this Q(X) satisfies condition (1). For X ◦ Y we find that

X ◦ Y = (X + Y)² − X² − Y² = XY + YX,

which will satisfy condition (2). However, we note that X ◦ Y is not the original bilinear form. We notice that we can select the g_{ij} in such a way that X² becomes a given quadratic form. Moreover, this is possible in several ways. □

Theorem 3.16. Let V be a vector space with a metric structure, let A_1, A_2, · · · , A_n be a basis of V, and let X = Σ_{i=1}^{n} x_i A_i. All quadratic forms of V can be expressed as polynomials of the form

Q(X) = Σ_{i,j=1}^{n} h_{ij} x_i x_j

that depend quadratically on the x_ν, and where h_{ij} ∈ F.

Proof. Suppose that Q(X) is a quadratic form. If we put X = X 1 + X 2 + · · · + X r−1 and Y = X r in condition (2), we find

Q(X_1 + X_2 + · · · + X_r) = Q(X_1 + X_2 + · · · + X_{r−1}) + Q(X_r) + Σ_{i=1}^{r−1} (X_i ◦ X_r).

Now use induction on r to get


Q(X_1 + X_2 + · · · + X_r) = Σ_{i=1}^{r} Q(X_i) + Σ_{1≤i<j≤r} (X_i ◦ X_j).

Then

Q(X) = Σ_{i=1}^{n} x_i² Q(A_i) + Σ_{1≤i<j≤n} x_i x_j (A_i ◦ A_j).

This shows that Q(X) indeed depends quadratically on the x_i. □

Theorem 3.17. The quadratic form Q(X) determines a unique symmetric bilinear form.

Proof. Let XY be X ◦ Y from Example 3.15. From condition (2) of Definition 3.14 we have

X ◦ Y = Q(X + Y) − Q(X) − Q(Y).    (3.4)

Since Q(2X) = 4Q(X), putting Y = X gives us

X ◦ X = 2Q(X).

We also notice that

X ◦ Y = Y ◦ X.

Since char F ≠ 2, we can write Q(X) = ½(X ◦ X), which shows that the quadratic form differs inessentially from X ◦ X. The bilinear form X ◦ Y is symmetric.

Is this bilinear form unique? Suppose that there is another symmetric bilinear form XY = YX for which we also have Q(X) = ½ X · X. We would then get

Q(X + Y) − Q(X) − Q(Y) = ½(X + Y)(X + Y) − ½ XX − ½ YY = X · Y,

which shows X · Y = X ◦ Y.

The quadratic form Q(X) therefore uniquely determines a symmetric bilinear form such that

X · X = 2Q(X). □

Remark 3.18. We should clarify why we exclude the case when char F = 2. This is because in that case we have 2 · a = 0, which implies that a = −a for all a ∈ F, meaning that every element is its own additive inverse. For a = 1 = −1, we get aX = X = −X.

If we now put Y = X in (3.4) we would get X ◦ X = 0.
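A small worked instance of Definition 3.14 and Theorem 3.17 (Python over the rationals; the particular quadratic form is an arbitrary choice, not from the thesis) shows how polarization recovers a symmetric bilinear form with X ◦ X = 2Q(X).

from fractions import Fraction

def Q(v):
    # Q(x, y) = x^2 + 3xy - 2y^2 for a 2-dimensional vector v = (x, y)
    x, y = v
    return x * x + 3 * x * y - 2 * y * y

def circ(u, v):
    # X ◦ Y = Q(X + Y) - Q(X) - Q(Y), condition (2) of Definition 3.14
    w = (u[0] + v[0], u[1] + v[1])
    return Q(w) - Q(u) - Q(v)

u = (Fraction(1), Fraction(0))
v = (Fraction(0), Fraction(1))
print(circ(u, v), circ(v, u))     # 3 3 : the form is symmetric
print(circ(u, u), 2 * Q(u))       # 2 2 : X ◦ X = 2Q(X), so Q(X) = ½(X ◦ X) when char F ≠ 2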

3.3. Orthogonal Geometry. In this section, we finally give a definition of orthogonal geometry and explore some important properties. We will introduce the idea of symmetry, and, lastly, we show that in orthogonal geometry, every isometry can be expressed as a product of symmetries (Theorem 3.44).

Consider an arbitrary bilinear form XY. As before, we say that a vector A is orthogonal to a vector B if AB = 0. Now, a natural question to ask is: For which metric structures does AB = 0 imply BA = 0? We have the following theorem:

Theorem 3.19. Let XY : V × V → F be an arbitrary bilinear form, and let A, B, C ∈ V. If AB = 0 implies BA = 0, then either AB = BA or A² = B² = 0.


Proof. Suppose that this is true for V . We then see that

A((AC)B − (AB)C) = (AC)(AB) − (AB)(AC) = 0.

Hence,

((AC)B − (AB)C)A = 0, or

(AC)(BA) = (CA)(AB).

We put C = A and obtain

A² · (BA) = A² · (AB), giving us two possible cases:

Case 1). If A² ≠ 0, then BA = AB.

Case 2). If AB ≠ BA, then A² = 0, and similarly B² = 0. □

This thesis mainly focuses on the case where AB = BA, and the geometry that this property implies.

Definition 3.20. A vector space V with a metric structure is said to have an orthogonal geometry if

XY = Y X for all X, Y ∈ V.

We recognize this as a symmetric bilinear form, and, since we are only concerned about the case when char F ≠ 2, this geometry is entirely satisfactory. This is because the symmetric bilinear forms are in one-to-one correspondence with quadratic forms, and one may simply say that X² is the quadratic form connected with our bilinear form.

It is worth noting that, from our quick study of quadratic forms, we see that the formula (X + Y)² = X² + Y² + 2XY holds in this geometry, which one may, in special cases, recognize as the Pythagorean theorem or the law of cosines.

Remark 3.21. It is easy to show that, if V contains two vectors A and B such that AB ≠ BA, then C² = 0 for any vector C ∈ V (see page 111 in [1]). This means that there is also a metric structure where X² = 0 for all X ∈ V, and we say that V, in this case, has a symplectic geometry. In such a geometry XY = −YX, and the bilinear form is said to be skew symmetric. The study of symplectic geometry is therefore equivalent to the study of skew symmetric bilinear forms.

From now on, V will stand for a non-singular space of dimension n, with an orthogonal geometry. Left orthogonality is now equal to right orthogonality, implying that the right and left kernels are the same. They are the space V^⊥. There is, again, some associated terminology.

Definition 3.22. The kernel V^⊥ of V is called the radical of V, and is denoted by rad V.


If U is a subspace of V, then the orthogonal geometry of V induces, by restriction, an orthogonal geometry on U. U itself has a radical consisting of those vectors of U which are in U^⊥. In other words,

rad U = U ∩ U^⊥,   U ⊂ V.    (3.5)

Definition 3.23. If V is the direct sum

V = U 1 ⊕ U 2 ⊕ · · · ⊕ U r

of subspaces that are mutually orthogonal, we shall say that V is the orthogonal sum of the U i , and use the notation

V = U 1 ⊥ U 2 ⊥ · · · ⊥ U r .

We take a closer look at what it means for V to be composed of an orthogonal sum.

Theorem 3.24. Let V be a vector space that is a direct sum of subspaces U_i. Suppose that a geometric structure is given to each subspace. There is, then, a unique way to extend these structures to one on V, such that V becomes the orthogonal sum of the U_i.

Proof. Let

X = Σ_{i=1}^{r} A_i,   Y = Σ_{i=1}^{r} B_i

be vectors of V, with A_i, B_i ∈ U_i. We now have to define

XY = A_1 B_1 + A_2 B_2 + · · · + A_r B_r.    (3.6)

It is easy to check that (3.6) defines a bilinear form, and that V will have an orthogonal geometry if all the U_i have an orthogonal geometry. The geometry of V induces on each U_i its original geometry, and U_i and U_j are orthogonal if i ≠ j. □

Theorem 3.25. Suppose that V = U_1 + U_2 + · · · + U_r, where U_i is orthogonal to U_j if i ≠ j. V is then an orthogonal sum if each U_i is non-singular.

Proof. Let X = Σ_{i=1}^{r} A_i, where A_i ∈ U_i, and assume X ∈ rad V. We must then have XB_i = 0 for all B_i ∈ U_i, which gives A_i B_i = 0 or A_i ∈ rad U_i. Conversely, if each A_i ∈ rad U_i, then X ∈ rad V. In other words, if the U_i are mutually orthogonal, then

rad V = rad U 1 + rad U 2 + · · · + rad U r .

Should each U_i be non-singular, i.e., rad U_i = 0, we obtain rad V = 0, and V is non-singular. Moreover, in this case our sum is direct. Indeed, if Σ_{i=1}^{r} A_i = 0, we obtain A_i B_i = 0 for any B_i ∈ U_i. Hence, A_i ∈ rad U_i = 0. We therefore have

V = U 1 ⊥ U 2 ⊥ · · · ⊥ U r ,

if each U i is non-singular. 

Theorem 3.26. Each space U that is supplementary to rad V gives rise to an orthogonal splitting. U is non-singular and naturally isomorphic to V/rad V.


Proof. Consider the subspace rad V of V , and let U be a supplementary subspace such that

V = rad V ⊕ U.

Since rad V is orthogonal to V , and therefore to U , we get an orthogonal splitting V = rad V ⊥ U.

We deduce

rad V = rad(rad V ) ⊥ rad U = rad V ⊥ rad U.

Since this last sum is direct, we must have rad U = 0, and U is therefore non-singular.

We now show the isomorphism. Let the cosets (X + rad V ) and (Y + rad V ) be elements of the factor space V /rad V . We can define a product of the cosets as

(X + rad V) · (Y + rad V) = XY.    (3.7)

From Theorem 2.11, we know that there is a natural isomorphism U → V/rad V, mapping the vector X of U onto the coset (X + rad V). Equation (3.7) means that this map is an isometry of U onto V/rad V, and the theorem is thus proved. □

Remark 3.27. The geometry on V does not induce a geometry on a factor space in general. However, it does so for the space V/rad V.

We take a look at the properties of the isometries of orthogonal subspaces.

Definition 3.28. Let V = U_1 ⊥ U_2 ⊥ · · · ⊥ U_r and V′ = U_1′ ⊥ U_2′ ⊥ · · · ⊥ U_r′ be orthogonal splittings of two spaces V and V′, and suppose that an isometry σ_i of U_i into U_i′ is given for each i. If X = Σ_{i=1}^{r} A_i with A_i ∈ U_i is a vector of V, we can define a map σ of V into V′ by

σX = σ_1 A_1 + σ_2 A_2 + · · · + σ_r A_r,

which is an isometry and shall be denoted by

σ = σ_1 ⊥ σ_2 ⊥ · · · ⊥ σ_r.

We shall call it the orthogonal sum of the maps σ_i. However, one must check that σ is a homomorphism of V into V′, and that scalar products are preserved.

Again, the bijective isometries σ : V → V are of special importance.

Theorem 3.29. Let V = U_1 ⊥ U_2 ⊥ · · · ⊥ U_r and let each σ_i be an isometry of U_i onto U_i. The orthogonal sum

σ = σ 1 ⊥ σ 2 ⊥ · · · ⊥ σ r

is an isometry of V onto V and we have

det σ = det σ 1 · det σ 2 · · · det σ r . If

τ = τ 1 ⊥ τ 2 ⊥ · · · ⊥ τ r , where τ i are also isometries of U i onto U i , then

στ = σ 1 τ 1 ⊥ σ 2 τ 2 ⊥ · · · ⊥ σ r τ r .

Proof. Cf. [1, p. 117]. 


The relationships between the dimensions of the subspaces and their orthogonal complements are essential to our further studies of the isometries of V.

Theorem 3.30. Suppose that V is a non-singular vector space with orthogonal geometry, and let U be any subspace of V . We then have

a) U^⊥⊥ = U,
b) dim U + dim U^⊥ = dim V,
c) rad U = rad U^⊥ = U ∩ U^⊥, and
d) dim U^⊥ = codim U.

Proof. For a) and b), see page 117 in [1]. c) follows from b) and (3.5). For d), see page 23 in [1]. □
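Statement b) of Theorem 3.30 can be illustrated numerically: U^⊥ is the solution space of the homogeneous system built from a basis of U and the Gram matrix. The sketch below (Python with numpy; the geometry and the subspace are ad hoc choices, and the helper null_space is ours) computes U^⊥ and checks that the dimensions add up.

import numpy as np

def null_space(A, tol=1e-12):
    # orthonormal basis, as columns, of {x : A x = 0}, computed from the SVD
    _, s, vt = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return vt[rank:].T

G = np.eye(4)                                 # standard dot product on a 4-dimensional space
U = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]]).T        # columns span a 2-dimensional subspace U
U_perp = null_space(U.T @ G)                  # X is in U^⊥ iff (basis of U)^T G X = 0
print(U.shape[1] + U_perp.shape[1] == G.shape[0])    # True: dim U + dim U^⊥ = dim V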

Theorem 3.31. The subspace U of V will be non-singular if and only if U^⊥ is non-singular. Should U be non-singular, we have

V = U ⊥ U^⊥.

Proof. Suppose that U is non-singular. Formula c) of Theorem 3.30 shows that U^⊥ is non-singular, and that the sum U + U^⊥ is direct. We get V = U ⊥ U^⊥.

If V = U ⊥ W, then W ⊂ U^⊥ and dim W = n − dim U = dim U^⊥. Therefore, W = U^⊥ and rad U = U ∩ U^⊥ = 0. □

The definition of orthogonal geometry allows for V to be a hyperbolic space, meaning that there are vectors X ∈ V such that X² = 0. We will give a formal definition after an important, motivating example. But first, a definition.

Definition 3.32. A space is called isotropic if all products between vectors of the space are 0. An isotropic subspace U is called maximal isotropic if U is not a proper subspace of some isotropic subspace of V. A vector A is called isotropic if A² = 0.

We see that the zero subspace of a space and the radical of a space are examples of isotropic subspaces.

The following example plays an important role in the general theory.

Example 3.33. We assume that dim V = 2, that V is non-singular, and that V contains an isotropic vector N ≠ 0. If A is any vector that is not contained in the line ⟨N⟩, then V = ⟨N, A⟩. We shall try to determine another isotropic vector M such that NM = 1.

If we let x, y ∈ F, putting M = xN + yA gives NM = yNA. If NA were 0, then N ∈ rad V, but we have assumed V to be non-singular. Therefore, NA ≠ 0, and we can determine y uniquely so that NM = 1. We can determine x from

M² = 0 = 2xyNA + y²A².

This is also possible since 2yNA ≠ 0, and leads to a uniquely determined x. We now have

V = ⟨N, M⟩,   N² = M² = 0,   and   NM = 1.

Conversely, if V = ⟨N, M⟩ is an arbitrary plane, we may impose an orthogonal geometry on it from the symmetric bilinear form XY. Recall Example 3.2. We can determine XY by setting g_{11} = g_{22} = 0 and g_{12} = 1 (hence g_{21} = 1). Then N² = M² = 0 and NM = 1.

If B = xN + yM ∈ rad V, then BM = 0, and therefore x = 0. Also NB = 0, which gives y = 0. Therefore, B = 0 and rad V is the zero subspace, which means that V is non-singular.

Suppose V = ⟨N, M⟩ has an orthogonal geometry. Can there be any other isotropic vectors in V? The vector X = xN + yM will be isotropic if X² = 2xy = 0. However, we either have y = 0, meaning that X = xN, or x = 0, meaning that X = yM. The vectors xN and yM are therefore the only isotropic vectors of V. □
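The computation in Example 3.33 can be carried out concretely. The sketch below (Python with exact rational arithmetic; the values of NA and A² are arbitrary choices with NA ≠ 0, not taken from the thesis) solves for M = xN + yA and checks that N, M is a hyperbolic pair.

from fractions import Fraction as Fr

N_N, N_A, A_A = Fr(0), Fr(2), Fr(3)     # the products N·N, N·A and A·A (N isotropic, NA ≠ 0)
y = 1 / N_A                             # from NM = y·NA = 1
x = -y * A_A / (2 * N_A)                # from M² = 2xy·NA + y²·A² = 0
M_M = x * x * N_N + 2 * x * y * N_A + y * y * A_A    # expand M² by bilinearity
N_M = x * N_N + y * N_A
print(N_M, M_M)                          # 1 0 : N, M is a hyperbolic pair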

We can now state the definition.

Definition 3.34. A non-singular plane which contains an isotropic vector shall be called a hyperbolic plane. It can always be spanned by a pair of vectors N, M that satisfy

(1) N² = M² = 0,
(2) NM = 1.

We shall call any such ordered pair N, M a hyperbolic pair. If V is a non-singular plane with orthogonal geometry, and N ≠ 0 is an isotropic vector of V, there exists precisely one M in V such that N, M is a hyperbolic pair. The vectors xN and yM, where x, y ∈ F, are then the only isotropic vectors of V.

Definition 3.35. An orthogonal sum of hyperbolic planes P 1 , P 2 , · · · , P r shall be called a hyperbolic space:

H 2r = P 1 ⊥ P 2 ⊥ · · · ⊥ P r . It is non-singular and, of course, of even dimension 2r.

Since V can be expressed as a sum of orthogonal subspaces, we can imagine that these subspaces can also be expressed as sums of orthogonal subspaces. This could be iterated until we end up with an expression consisting only of orthogonal spaces of dim = 1. We therefore have:

Theorem 3.36. A space V with orthogonal geometry is an orthogonal sum of lines

V = ⟨A_1⟩ ⊥ ⟨A_2⟩ ⊥ · · · ⊥ ⟨A_n⟩.

The A i are called an orthogonal basis of V . V is non-singular if and only if none of the A i are isotropic.

Proof. Cf. [1, p. 119]. 
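Theorem 3.36 can be made constructive: an orthogonal basis is obtained by repeated congruence operations on the Gram matrix, using char F ≠ 2 to create a non-isotropic vector whenever the current diagonal entry vanishes. The sketch below (Python with exact rational arithmetic; the function names and the example Gram matrix are ours, not the thesis's) is one way to carry this out.

from fractions import Fraction as Fr

def add_column(G, P, src, dst, factor=Fr(1)):
    # replace basis vector dst by (dst + factor·src); update basis matrix P and Gram matrix G
    n = len(G)
    for i in range(n):
        P[i][dst] += factor * P[i][src]
    for i in range(n):
        G[i][dst] += factor * G[i][src]
    for j in range(n):
        G[dst][j] += factor * G[src][j]

def orthogonal_basis(G):
    n = len(G)
    G = [row[:] for row in G]
    P = [[Fr(int(i == j)) for j in range(n)] for i in range(n)]   # columns = current basis
    for k in range(n):
        if G[k][k] == 0:                                          # isotropic basis vector
            j = next((j for j in range(k + 1, n) if G[k][j] != 0), None)
            if j is None:
                continue                                          # A_k is orthogonal to all remaining vectors
            add_column(G, P, src=j, dst=k)                        # A_k + A_j is non-isotropic (char ≠ 2)
        for j in range(k + 1, n):
            add_column(G, P, src=k, dst=j, factor=-G[k][j] / G[k][k])
    return P, G                                                   # G is now diagonal

P, D = orthogonal_basis([[Fr(0), Fr(1)],
                         [Fr(1), Fr(0)]])                         # a hyperbolic plane
print(D)    # diagonal entries 2 and -1/2: an orthogonal basis of the hyperbolic plane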

In order to prove the Cartan–Dieudonné Theorem (Theorem 3.44) in the case when V is hyperbolic, we need to know how to describe rotations in hyperbolic space. We have the following lemma:


Lemma 3.37. Let V = ⟨N_1, M_1⟩ ⊥ ⟨N_2, M_2⟩ ⊥ · · · ⊥ ⟨N_r, M_r⟩ be a hyperbolic space and σ an isometry of V onto V that keeps each vector N_i fixed. σ is then a rotation and of the following form:

σN_i = N_i,   σM_j = Σ_{ν=1}^{r} N_ν a_{νj} + M_j,

where the matrix (a_{ij}) is skew symmetric if the geometry is orthogonal.

Proof. We of course have σN_i = N_i. Let

σM_j = Σ_{ν=1}^{r} N_ν a_{νj} + Σ_{µ=1}^{r} M_µ b_{µj}.

We find

σN_i · σM_j = N_i · σM_j = b_{ij},

and, since we must have σN_i · σM_j = N_i M_j if σ is an isometry, we get b_{ii} = 1 and b_{ij} = 0 for i ≠ j. Thus

σM_j = Σ_{ν=1}^{r} N_ν a_{νj} + M_j.

We must still check that σM_i · σM_j = M_i M_j = 0. This leads to

(Σ_{ν=1}^{r} N_ν a_{νi} + M_i)(Σ_{µ=1}^{r} N_µ a_{µj} + M_j) = a_{ji} + (M_i N_i) a_{ij} = 0.

If V is orthogonal, then a_{ji} = −a_{ij}, and since we assume that the characteristic of F is ≠ 2 we see that (a_{ij}) is skew symmetric. Obviously, det σ = +1, since the matrix of σ in the basis N_1, . . . , N_r, M_1, . . . , M_r is triangular with ones on the diagonal. □

We now turn our focus to isometries of general, non-singular spaces. We denote the identity isometry by 1, or 1_V if we need to specify the space. The map σ that sends each vector X onto −X satisfies (1 + σ)X = X − X = 0, such that 1 + σ = 0, and is therefore denoted by −1, or −1_V (if we need to specify the space).

Now, let V = U ⊥ W. Building on the theory we just learned, we see that we can form the isometry σ = −1_U ⊥ 1_W. If X = A + B with A ∈ U and B ∈ W, then σX = −A + B. We have σX = X if and only if A = 0, or X ∈ W, and σX = −X if and only if B = 0, or X ∈ U. This means that U and W are characterized by σ. Such a map σ satisfies σ² = 1, and we should use a special term:

Definition 3.38. An isometry σ of a non-singular space V is called an involution if σ² = 1.

Theorem 3.39. Every involution is of the form −1 U ⊥ 1 W , resulting from a splitting V = U ⊥ W .

Proof. Let σ² = 1. Then

XY = σX · σY,   and   σX · Y = σ²X · σY = X · σY.


Consequently,

(σX − X)(σY + Y ) = σX · σY + σX · Y − X · σY − XY = 0.

The two subspaces U = (σ − 1)V and W = (σ + 1)V are therefore orthogonal. A vector σX − X of U is reversed by σ, and a vector σX + X of W is preserved by σ. Hence, U ∩ W = 0. Since any vector X can be written as −½(σX − X) + ½(σX + X), we see that V = U ⊥ W and σ = −1_U ⊥ 1_W. □

We can now define symmetry in a precise way.

Definition 3.40. If σ = −1_U ⊥ 1_W and p = dim U, we call p the type of the involution σ. We obviously have det σ = (−1)^p. Since V has orthogonal geometry, p might be any number ≤ n = dim V. An involution of type p = 1 shall be called a symmetry with respect to the hyperplane W. An involution of type 2 shall be called a 180° rotation.

The following example makes it clear why we use the word symmetry.

Example 3.41. Suppose p = dim U = 1, then U = ⟨A⟩ is a non-singular line, and U^⊥ = W is a non-singular hyperplane. The image of a vector xA + B (with B ∈ W) is −xA + B. □
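In Euclidean space the symmetry of Example 3.41 has the familiar closed form τ(X) = X − (2(XA)/A²)A, which fixes the hyperplane ⟨A⟩^⊥ and reverses A. This formula is standard, though not written out in the thesis at this point; the short numerical sketch below (Python with numpy, arbitrary vectors) illustrates it.

import numpy as np

def symmetry(A):
    # matrix of the symmetry with respect to the hyperplane orthogonal to A (Euclidean product)
    A = np.asarray(A, dtype=float)
    return np.eye(len(A)) - 2.0 * np.outer(A, A) / A.dot(A)

A = np.array([1.0, 2.0, 2.0])          # a non-isotropic vector spanning the line U = ⟨A⟩
tau = symmetry(A)
B = np.array([2.0, -1.0, 0.0])         # BA = 0, so B lies in the hyperplane W = ⟨A⟩^⊥
print(np.allclose(tau @ A, -A))        # True: A is reversed
print(np.allclose(tau @ B, B))         # True: the hyperplane is left fixed
print(round(float(np.linalg.det(tau))))    # -1: a symmetry is a reflexion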

The isometries ±1 V are characterized by the following theorem.

Theorem 3.42. Let σ be an isometry of V that keeps all lines of V fixed. Then σ = ±1_V.

Proof. If σ keeps the line ⟨X⟩ fixed, then σX = Xa, and for any Y ∈ ⟨X⟩ we have σY = σ(Xb) = σ(X) · b = Xab = Ya. This a may still depend on the line ⟨X⟩, even if σ keeps every line of V fixed. If ⟨X⟩ and ⟨Y⟩ are different lines, then X and Y are independent vectors. On one hand, we have σ(X + Y) = (X + Y) · c, and on the other σ(X + Y) = σ(X) + σ(Y) = Xa + Yb. A comparison shows that a = c = b, and we know that we have σX = Xa with the same a for all X. Let X and Y be vectors such that XY ≠ 0. Then, XY = σX · σY = Xa · Ya = (XY)a². We see that a² = 1, and a = ±1. □

In the remaining part of this thesis, we only consider isometries of V onto V where V is a non-singular vector space of dimension n with an orthogonal geometry. These isometries form an important group.

Definition 3.43. The group of all isometries of V onto V is called the orthogonal group of V, and is denoted by O(V). The subgroup of all rotations is called the special orthogonal group, and is denoted by SO(V).

In the final theorem of this section, due to Élie Cartan and Jean Dieudonné, we show that each element of O(V ) can be expressed as a product of symmetries.

Theorem 3.44 (The Cartan-Dieudonné Theorem). Let V be a non-singular vector space of dimension n with an orthogonal geometry. Every isometry of V onto V is a product of at most n symmetries with respect to non-singular hyperplanes.

Proof. If σ = 1 or n = 1, the proof is trivial. We use induction on n and have to distinguish four cases.


Case 1). Assume that there exists a non-isotropic vector A, left fixed by the isometry σ. Let H = ⟨A⟩^⊥. Hence, σH = H and dim H = n − 1. We construct an isometry λ : H → H that is the restriction of σ to H, and, by the induction hypothesis, write λ = τ_1 τ_2 · · · τ_r with r ≤ n − 1, where τ_i is a symmetry of the space H with respect to a hyperplane H_i of H.

Next, we extend each τ_i to the whole of V. Put τ̄_i = 1_L ⊥ τ_i, where L = ⟨A⟩. Each τ̄_i leaves the hyperplane L ⊥ H_i of V fixed, and is therefore a symmetry of V (a reflexion).

We have

τ̄_1 τ̄_2 · · · τ̄_r = 1_L ⊥ λ = σ.

Hence, σ can be expressed by at most n − 1 symmetries.

Case 2). Suppose that there exists a non-isotropic vector A such that σA − A is not isotropic. Let H = ⟨σA − A⟩^⊥, and let τ be the symmetry with respect to H. Since σ is an isometry, we have

(σA + A)(σA − A) = (σA)² − A² = 0,

and σA + A ∈ H. We therefore have

τ (σA + A) = σA + A, and

τ (σA − A) = A − σA.

We add these equations and get τ σ(2A) = 2A, which shows that τ σ leaves A fixed.

By case 1), τσ = τ_1 τ_2 · · · τ_r where r ≤ n − 1. Now, multiply this expression by τ from the left. Since τ² = 1, we get

σ = τ τ 1 τ 2 · · · τ r , which is a product of at most n symmetries.

Case 3). Suppose n = 2, and that V contains non-zero isotropic vectors: V = ⟨N, M⟩, where N, M is a hyperbolic pair. Since σ preserves products, we have two cases:

a) σN = aM, σM = a⁻¹N. Hence, σ(N + aM) = aM + N is a fixed non-isotropic vector. We are in case 1).

b) σN = aN, σM = a⁻¹M. We may assume a ≠ 1, since a = 1 means σ = 1. The vectors A = N + M and σA − A = (a − 1)N + (a⁻¹ − 1)M are not isotropic. We are, therefore, in case 2).

Case 4). Now assume that n ≥ 3, that no non-isotropic vector is left fixed, and, finally, that σA − A is isotropic whenever A is not isotropic.

Let N be an isotropic vector. The space ⟨N⟩^⊥ has at least dimension 2 and its radical is the same as that of ⟨N⟩ (i.e., it is ⟨N⟩). By assumption, ⟨N⟩^⊥ contains a non-isotropic vector A. Let ε = ±1. We know that A² ≠ 0 and (A + εN)² = A² ≠ 0.

From our assumption, we can conclude that σA − A, as well as the vectors σ(A + εN) − (A + εN) = (σA − A) + ε(σN − N), are isotropic. The square is, therefore,

2ε(σA − A)(σN − N) + ε²(σN − N)² = 0.


This last equation is written down for ε = 1 and ε = −1 and the two equations are added. We get 2(σN − N)² = 0, or (σN − N)² = 0. We know, therefore, that σX − X will be isotropic whether or not X is isotropic.

Let W be the set of all vectors σX − X. W is then the image of V under the map σ − 1. It contains only isotropic vectors and is, consequently, an isotropic subspace of V . A product of any two of its vectors will be zero.

Now let X ∈ V and Y ∈ W^⊥. Consider

(σX − X)(σY − Y) = 0 = σX · σY − X · σY − (σX − X) · Y.

However, σX − X ∈ W and Y ∈ W^⊥, so that the last term is 0. Furthermore, σX · σY = XY, since σ is an isometry. Hence,

X(Y − σY ) = 0,

which is true for all X ∈ V . This means Y − σY ∈ rad V = 0 or Y = σY .

We see that every vector of W^⊥ is left fixed. We had assumed that a non-isotropic vector is not left fixed. This shows that W^⊥ is an isotropic subspace.

Since both W and W^⊥ are isotropic, we know that dim W ≤ ½n and dim W^⊥ ≤ ½n, but since dim W + dim W^⊥ = n by Theorem 3.30, we know that the equality sign holds.

The space V is therefore a hyperbolic space H_{2r}, n = 2r, and W a maximal isotropic subspace of H_{2r}. The isometry σ leaves every vector of W fixed, and Lemma 3.37 shows σ to be a rotation. For the space V = H_{2r}, we may draw the conclusion that our theorem holds, at least for reflexions.

Now let τ be any symmetry of V = H_{2r}. τσ is then a reflexion of H_{2r}, hence τσ = τ_1 τ_2 · · · τ_s with s ≤ n = 2r, but since τσ is a reflexion, s must be odd, hence s ≤ 2r − 1. We get

σ = τ τ_1 τ_2 · · · τ_s,

which is a product of s + 1 ≤ 2r = n symmetries. □


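As a concrete instance of the Cartan–Dieudonné Theorem (a numerical sketch in Python with numpy, not part of the thesis), a rotation of the Euclidean plane is the product of two symmetries whose mirror lines differ by half the rotation angle; the particular angle is an arbitrary choice.

import numpy as np

def symmetry(A):
    # symmetry with respect to the line orthogonal to A in the Euclidean plane
    A = np.asarray(A, dtype=float)
    return np.eye(len(A)) - 2.0 * np.outer(A, A) / A.dot(A)

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta = 1.1
tau1 = symmetry([1.0, 0.0])
tau2 = symmetry([np.cos(theta / 2.0), np.sin(theta / 2.0)])
print(np.allclose(tau2 @ tau1, rotation(theta)))   # True: two symmetries suffice when n = 2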

References
