
SJÄLVSTÄNDIGA ARBETEN I MATEMATIK

MATEMATISKA INSTITUTIONEN, STOCKHOLMS UNIVERSITET

When Differential Equations meet Galois Theory

by Pinar Larsson

2012 - No 15


When Differential Equations meet Galois Theory

Pinar Larsson

Independent project in mathematics, 30 higher education credits, advanced level

Supervisor: Torsten Ekedahl / Rikard Bögvad


Abstract

Analogous to classical Galois theory, which provides an important link between field extensions and subgroups of the symmetric group, differential Galois theory relates differential field extensions and subgroups of the general linear group. In this thesis we introduce differential extensions and construct the Picard-Vessiot extension field L for a given base field K and a linear homogeneous differential equation. Furthermore, we examine the connection between the differential K-automorphisms of L and GL(V), where V stands for the vector space of solutions of the given linear homogeneous differential equation over the field of constants of K, which in turn is equal to the field of constants of L. Moreover, we examine the algebraic group structure of the differential K-automorphisms of Picard-Vessiot extension fields. Finally, Liouville and generalized Liouville extensions are examined and their connections to differential Galois groups are presented.


Acknowledgements

I would like to thank my supervisor Rikard Bögvad and my former supervisor Torsten Ekedahl, who sadly left us too soon, for their support, guidance and creative tips. I would also like to thank my examiner Boris Shapiro for valuable comments and Torbjörn Tambour for introducing me to Galois theory. I also owe a big thanks to my friends and family who supported me with their company during my studies.


Contents

1 Introduction to Differential Galois Theory

2 Linear Homogeneous Differential Equations and their Wronskian

3 Picard-Vessiot Extensions

4 Differential Galois Group

5 Basic examples

6 Liouville extensions

7 More complicated example (Airy equation)

8 The Differential Galois Correspondence and Generalized Liouville extensions

9 Appendix
9.1 Proof of Lemma 1.1
9.2 Basic facts about connected sets and connected components
9.3 Sketch of proof that a two-dimensional connected subgroup G of SL2(C) has a common eigenvector
9.4 More on SL2(C)
9.5 Proof of Theorem 7.1

10 References


1 Introduction to Differential Galois Theory

In this section we go through some basic definitions and concepts of differential Galois theory.

Definition 1.1. [1] A derivation on a ring $A$ is a homomorphism of abelian groups $D : A \to A$ that satisfies the Leibniz rule, i.e.
$$D(ab) = aD(b) + D(a)b \quad \text{for all } a, b \in A.$$

Definition 1.2. [1] A ring $A$ endowed with a derivation is called a differential ring. In the case when $A$ is a field, it is called a differential field. (The general notation is $a' = D(a)$, $a'' = D(D(a))$ and, formally, $a^{(n)} = D^n(a)$ for all integers $n \ge 0$.)

The following lemma summarizes basic properties of derivations. The proof is rather elementary and is therefore omitted here (for the interested reader, a proof can be found in the Appendix).

Lemma 1.1. [1] Let $(A, D)$ be a differential ring and let $a, b$ be two elements of $A$. The following properties hold:

a) $D(1) = 0$.

b) For any integer $n \ge 1$, $D(a^n) = n a^{n-1} D(a)$.

c) For any integer $n \ge 1$, $D^n(ab) = \sum_{k=0}^{n} \binom{n}{k} D^k(a)\,D^{n-k}(b)$.

d) If $b$ is invertible, then $D\!\left(\frac{a}{b}\right) = \frac{bD(a) - aD(b)}{b^2}$. In particular, $D\!\left(\frac{1}{b}\right) = -\frac{D(b)}{b^2}$.
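
As a quick illustration of property (c), here is a minimal SymPy sketch (SymPy and the sample polynomials are illustrative choices of ours, not part of the source): it checks the generalized Leibniz formula for n = 3, with the ordinary derivative d/dx acting as the derivation on Q[x].

import sympy as sp

x = sp.symbols('x')
a = x**3 + 2*x      # arbitrary sample elements of the differential ring Q[x]
b = x**2 - 1
n = 3
lhs = sp.diff(a*b, x, n)                       # D^n(ab)
rhs = sum(sp.binomial(n, k) * sp.diff(a, x, k) * sp.diff(b, x, n - k)
          for k in range(n + 1))               # sum_k binom(n,k) D^k(a) D^(n-k)(b)
print(sp.expand(lhs - rhs))                    # prints 0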

Definition 1.3. [1] An element of a differential ring whose derivative vanishes is called a constant.

Proposition 1.1. [1] The set of constant elements in a differential ring is a subring. Similarly, the set of constant elements in a differential field is a subfield, which is called the constant field.

Proof. First, let $(A, D)$ be a differential ring and let $a, b \in A$ satisfy $D(a) = D(b) = 0$. Let us check the properties needed for the set of such elements to be a subring:
$$D(a + b) = D(a) + D(b) = 0, \qquad D(ab) = aD(b) + D(a)b = 0,$$
$$D(1) = 0, \qquad D(0) = 0 \ \text{(as $D$ is a homomorphism of abelian groups)}.$$
Thus the set of constants is a subring. Now we prove the same for a differential field $(A, D)$, where in addition we have to check the existence of inverses. Let $a \in A$ with $D(a) = 0$ be invertible. Then
$$D\!\left(\frac{1}{a}\right) = -\frac{D(a)}{a^2} = 0,$$
so the set of constants is a subfield.


Remark 1.1. Throughout this section we use the following notation for the subring/subfield of constants:

$A^D$ = the set of constant elements in a differential ring $A$,
$K^D$ = the set of constant elements in a differential field $K$.

Proposition 1.2. [1] Let $k$ be a field. Set $A = k[T]$ and $K = k(T)$, endowed with the usual derivation. If $k$ has characteristic 0, then $A^D = K^D = k$. If $k$ has characteristic $p > 0$, then $A^D = k[T^p]$ and $K^D = k(T^p)$.

Proof. i. Take $P \in A$, $P = \sum_{n=0}^{N} a_n T^n$. Differentiating $P$, we get $P' = \sum_{n=0}^{N} n a_n T^{n-1}$. Assuming $P' = 0$ implies that $n a_n = 0$ for all $n \ge 0$. Note that this does not imply that $a_n = 0$ for all $n \ge 0$.

In case the characteristic of $k$ is 0, all $a_n$ except $a_0$ are necessarily 0, so that $P = a_0 \in k$. So $A^D$ consists of the elements of $k$, as $a_0$ can be any element of $k$.

In case the characteristic is $p > 0$, there are two subcases for each $n$:

(a) $p \mid n$: Then $n = pt$ for some positive integer $t$, and so $n a_n = p t a_n = 0$. Thus $T$ can appear in $P$ with a nonzero coefficient only with exponents that are multiples of $p$.

(b) $p \nmid n$: This case implies that $a_n = 0$.

(a) and (b) imply that $P \in k[T^p]$, so
$$A^D \subseteq k[T^p]. \qquad (1)$$
Conversely, $P \in k[T^p]$ implies that $P = \sum_{n=0}^{N} a_n T^{pn}$ and so $P' = \sum_{n=0}^{N} p n a_n T^{pn-1}$; as $px = 0$ for all $x \in k$, we get $P' = 0$ and
$$k[T^p] \subseteq A^D. \qquad (2)$$
(1) and (2) together imply that $k[T^p] = A^D$.

ii. Now take $R \in k(T)$ which satisfies $D(R) = 0$. Write $R = \frac{A}{B}$ where $A$ and $B$ are polynomials such that $B \ne 0$ has minimal degree. This relation implies that $A = BR$. Differentiating both sides of $A = BR$:
$$D(A) = B D(R) + D(B) R = D(B) R,$$
which implies that $A' = B' R$. Since $\deg B' < \deg B$, we get $B' = 0$ because $B$ has minimal degree, which implies that $A' = 0$.

Now following the argument in part (i): in case the characteristic of $k$ is 0, $A$ and $B$ are constant polynomials, so $k(T)^D = k$. Similarly, in case the characteristic of $k$ is $p > 0$, $A$ and $B$ are polynomials in the variable $T^p$, which implies that $R \in k(T^p)$, so
$$k(T)^D \subseteq k(T^p). \qquad (3)$$
Conversely, if $R \in k(T^p)$ then $R = \frac{A}{B}$ where $A, B$ are polynomials in the variable $T^p$, and $R' = \frac{A'B - B'A}{B^2} = 0$ as $A' = B' = 0$. So
$$k(T^p) \subseteq k(T)^D. \qquad (4)$$
(3) and (4) together imply that $k(T^p) = k(T)^D$.
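
The characteristic-$p$ phenomenon in Proposition 1.2 can also be seen computationally. A small sketch (assuming SymPy's Poly over a prime modulus reduces coefficients mod p; this is our reading of the library, not a claim from the source): a polynomial in T^p has vanishing derivative, while an exponent not divisible by p contributes a nonzero term.

import sympy as sp

T = sp.symbols('T')
# Over GF(5): a polynomial in T**5 has zero derivative, so it is a constant.
P = sp.Poly(T**10 + 2*T**5 + 3, T, modulus=5)
print(P.diff(T))    # expected: Poly(0, T, modulus=5)
# An exponent not divisible by 5 survives: d/dT(T**7) = 7*T**6 = 2*T**6 mod 5.
Q = sp.Poly(T**7 + T**5, T, modulus=5)
print(Q.diff(T))    # expected: Poly(2*T**6, T, modulus=5)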

Definition 1.4. [1] A ring homomorphism $f : A \to B$ from one differential ring to another is called a differential homomorphism
$$f : (A, D_A) \to (B, D_B)$$
if it satisfies $f(D_A(a)) = D_B(f(a))$ for all $a \in A$. In the case of fields, it is called a differential extension and we say that $D_B$ extends $D_A$.

Lemma 1.2. [1] Let $f : (A, D_A) \to (B, D_B)$ be a differential ring homomorphism. Then the kernel of $f$ is stable under $D_A$.

Proof. Take $a \in \ker f$. Then $D_B(f(a)) = D_B(0) = 0$, as $D_B$ is a homomorphism of abelian groups. Since $0 = D_B(f(a)) = f(D_A(a))$, we get $D_A(a) \in \ker f$.

Definition 1.5. [2] An ideal $I$ which is mapped into itself by the derivation is called a differential ideal. So $\ker f$ is a differential ideal.

Theorem 1.1. [1] Let $(A, D_A)$ be a differential ring and let $I$ be a differential ideal of $A$. Then there exists a unique derivation of the quotient ring $B = A/I$ such that the canonical ring morphism $A \to B$ is a differential homomorphism.

Proof. Define $D_B(a + I) = D_A(a) + I$. We claim that this is well-defined. Let $a + I = b + I$; then $a - b \in I$. Applying $D_A$, we get $D_A(a - b) \in D_A(I) \subseteq I$. This implies that $D_A(a) + I = D_A(b) + I$, which proves the claim. Further, we claim that it is a derivation:
$$D_B\bigl((a + I) + (b + I)\bigr) = D_B(a + b + I) = D_A(a + b) + I = \bigl(D_A(a) + I\bigr) + \bigl(D_A(b) + I\bigr),$$
$$\begin{aligned} D_B\bigl((a + I)(b + I)\bigr) &= D_B(ab + I) \\ &= D_A(ab) + I \\ &= (aD_A(b) + I) + (D_A(a)b + I) \\ &= (a + I)(D_A(b) + I) + (D_A(a) + I)(b + I). \end{aligned}$$
Thus the claim is proved.

Proposition 1.3. [1] Let $(A, D)$ be a differential ring. Assume that $A$ is an integral domain and let $K$ be its field of fractions. Then there exists a unique derivation on $K$ which coincides with $D$ on $A$. Consequently, $K$ has a canonical structure of a differential field.


Proof. Each $x \in K$ is of the form $x = \frac{a}{b}$ with $a, b \in A$. We then set
$$D(x) = D\!\left(\frac{a}{b}\right) = \frac{D(a)b - aD(b)}{b^2}.$$
First we claim that this formula does not depend on the choice of the fraction $\frac{a}{b}$, i.e. $D\!\left(\frac{a}{b}\right) = D\!\left(\frac{at}{bt}\right)$, since $x = \frac{a}{b} = \frac{at}{bt}$ for any nonzero $t \in A$. Indeed, the latter is
$$\frac{D(at)bt - atD(bt)}{(bt)^2} = \frac{D(at)b - aD(bt)}{b^2 t} = \frac{aD(t)b + D(a)tb - abD(t) - aD(b)t}{b^2 t} = \frac{D(a)b - aD(b)}{b^2}.$$
Now consider the map $D : K \to K$. Let us first check that it is well-defined: we need $D\!\left(\frac{a}{b}\right) = D\!\left(\frac{c}{d}\right)$ whenever $\frac{a}{b} = \frac{c}{d}$. Note that $\frac{a}{b} = \frac{c}{d}$ implies $ad = bc$ and $D(ad) = D(bc)$. The previous claim implies that
$$D\!\left(\frac{a}{b}\right) = D\!\left(\frac{ad}{bd}\right) \qquad \text{and} \qquad D\!\left(\frac{bc}{bd}\right) = D\!\left(\frac{c}{d}\right).$$
So $D\!\left(\frac{a}{b}\right) = D\!\left(\frac{ad}{bd}\right) = D\!\left(\frac{bc}{bd}\right) = D\!\left(\frac{c}{d}\right)$, and $D$ is well-defined.

Now let us check that $D$ is a homomorphism of abelian groups. Indeed, for $x = \frac{a}{b}$ and $y = \frac{c}{d}$,
$$\begin{aligned} D(x + y) &= D\!\left(\frac{ad + bc}{bd}\right) \\ &= \frac{D(ad + bc)bd - (ad + bc)D(bd)}{(bd)^2} \\ &= \frac{D(ad)bd + D(bc)bd - adD(bd) - bcD(bd)}{(bd)^2} \\ &= \frac{D(ad)bd - adD(bd)}{(bd)^2} + \frac{D(bc)bd - bcD(bd)}{(bd)^2} \\ &= D\!\left(\frac{ad}{bd}\right) + D\!\left(\frac{bc}{bd}\right) \\ &= D\!\left(\frac{a}{b}\right) + D\!\left(\frac{c}{d}\right) \\ &= D(x) + D(y). \end{aligned}$$

Thus $D$ is a homomorphism of abelian groups. Lastly, we have to show that $D$ is a derivation:
$$\begin{aligned} D(xy) &= D\!\left(\frac{ac}{bd}\right) \\ &= \frac{D(ac)bd - acD(bd)}{(bd)^2} \\ &= \frac{aD(c)bd + D(a)cbd - acbD(d) - acD(b)d}{(bd)^2} \\ &= \frac{D(a)cbd - acD(b)d}{(bd)^2} + \frac{aD(c)bd - acbD(d)}{(bd)^2} \\ &= \frac{D(a)b - aD(b)}{b^2}\cdot\frac{cd}{d^2} + \frac{ab}{b^2}\cdot\frac{D(c)d - cD(d)}{d^2} \\ &= D\!\left(\frac{a}{b}\right)\frac{c}{d} + D\!\left(\frac{c}{d}\right)\frac{a}{b} \\ &= D(x)y + D(y)x, \end{aligned}$$
so $D$ is a derivation.

The next theorem shows us how to extend the derivation to a polynomial ring.

Theorem 1.2. [1] Let $(A, D)$ be a differential ring and consider the ring $A[T]$ of polynomials in one variable $T$ with coefficients in $A$. For any $b \in A[T]$, there is a unique derivation $D_b$ of $A[T]$ with $D_b(T) = b$ such that the canonical ring morphism $A \to A[T]$ is a differential morphism $(A, D) \to (A[T], D_b)$.

Proof. Take $P = \sum_{k=0}^{n} a_k T^k \in A[T]$. Define $B = A[T]$ and suppose that $D_B$ is an arbitrary derivation of $B$ extending $D$. Then:
$$\begin{aligned} D_B(P) &= \sum_{k=0}^{n} D_B(a_k T^k) = \sum_{k=0}^{n} \bigl(D(a_k)T^k + a_k D_B(T^k)\bigr) \\ &= \sum_{k=0}^{n} \bigl(D(a_k)T^k + a_k k T^{k-1} D_B(T)\bigr) = \sum_{k=0}^{n} D(a_k)T^k + \sum_{k=0}^{n} a_k k T^{k-1} D_B(T) \\ &= P^D(T) + D_B(T)\,P'(T), \end{aligned} \qquad (5)$$
where $P^D$ stands for the polynomial obtained by differentiating the coefficients. Formula (5) shows that, in this case, the derivation of $P$ is determined by the image of $T$ under $D_B$.

Now let us examine the converse, i.e. taking $b \in B$ (the fixed element of the theorem, not to be confused with the coefficients $b_k$ below) we will show that
$$D_B(P) = P^D(T) + b\,P'(T)$$
defines a derivation extending the derivation of $A$, with the special property that $D_B(T) = b$.

First let us check that $D_B$ is well-defined. If $P = R$, then
$$D_B(P) = P^D(T) + b\,P'(T) = R^D(T) + b\,R'(T) = D_B(R).$$
Next we check that it is a homomorphism of abelian groups. Indeed,
$$D_B(P + R) = (P + R)^D(T) + b\,(P + R)'(T) = P^D(T) + R^D(T) + b\bigl(P'(T) + R'(T)\bigr) = D_B(P) + D_B(R).$$
Lastly, we check that it satisfies the Leibniz rule. For that, let us first write out the product of two polynomials: take $Q = \sum_{k=0}^{m} b_k T^k$; then $PQ = \sum_{k=0}^{m+n} c_k T^k$, where $c_k = \sum_{i+j=k} a_i b_j$. Thus:
$$D_B(PQ) = \sum_{k=0}^{m+n} D(c_k)T^k + b\sum_{k=0}^{m+n} k\,c_k T^{k-1}.$$
Denote the first sum in this expression by $S_1$ and the second sum (together with the factor $b$) by $S_2$. Then
$$\begin{aligned} S_1 &= \sum_{k=0}^{m+n}\sum_{i+j=k} D(a_i b_j)T^{i+j} = \sum_{k=0}^{m+n}\sum_{i+j=k} \bigl(a_i D(b_j) + D(a_i)b_j\bigr)T^{i+j} \\ &= \sum_{i=0}^{n}\sum_{j=0}^{m} a_i D(b_j)T^{i+j} + \sum_{i=0}^{n}\sum_{j=0}^{m} D(a_i)b_j T^{i+j}, \end{aligned}$$
$$\begin{aligned} S_2 &= b\sum_{k=0}^{m+n}\sum_{i+j=k} (i + j)\,a_i b_j T^{i+j-1} \\ &= b\sum_{i=0}^{n}\sum_{j=0}^{m} i\,a_i b_j T^{i+j-1} + b\sum_{i=0}^{n}\sum_{j=0}^{m} j\,a_i b_j T^{i+j-1}. \end{aligned}$$
Adding $S_1$ and $S_2$, we get
$$S_1 + S_2 = \left(\sum_{i=0}^{n}\sum_{j=0}^{m} a_i D(b_j)T^{i+j} + b\sum_{i=0}^{n}\sum_{j=0}^{m} j\,a_i b_j T^{i+j-1}\right) + \left(\sum_{i=0}^{n}\sum_{j=0}^{m} D(a_i)b_j T^{i+j} + b\sum_{i=0}^{n}\sum_{j=0}^{m} i\,a_i b_j T^{i+j-1}\right).$$

Analysing the last two sums of this expression:
$$\sum_{i=0}^{n}\sum_{j=0}^{m} D(a_i)b_j T^{i+j} + b\sum_{i=0}^{n}\sum_{j=0}^{m} i\,a_i b_j T^{i+j-1} = \left(\sum_{i=0}^{n} D(a_i)T^i\right)\!\left(\sum_{j=0}^{m} b_j T^j\right) + b\left(\sum_{i=0}^{n} i\,a_i T^{i-1}\right)\!\left(\sum_{j=0}^{m} b_j T^j\right) = P^D(T)Q + b\,P'(T)Q = D_B(P)\,Q.$$
Similarly, the first two sums can be analysed and the result equals $P\,D_B(Q)$. Hence $D_B(PQ) = D_B(P)Q + P\,D_B(Q)$, so $D_B$ is a derivation.
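
To make the recipe of Theorem 1.2 concrete, here is a hedged SymPy sketch (the base ring A = Q[t] with D = d/dt and the element b are illustrative choices of ours): it implements D_b(P) = P^D(T) + b P'(T) on A[T] and checks the Leibniz rule on a sample product.

import sympy as sp

t, T = sp.symbols('t T')
b = t*T + 1                       # arbitrary choice of D_b(T) = b in A[T], A = Q[t]

def D_b(P):
    # D_b(P) = P^D(T) + b*P'(T): differentiate coefficients, then add b times d/dT
    return sp.expand(sp.diff(P, t) + b * sp.diff(P, T))

P = t**2 * T**3 + T               # sample elements of A[T]
Q = (t + 1) * T**2
# Leibniz rule: D_b(PQ) = D_b(P)*Q + P*D_b(Q)
print(sp.expand(D_b(P*Q) - (D_b(P)*Q + P*D_b(Q))))    # prints 0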

The next theorem describes how the derivation can be extended in the case of separable algebraic field extensions. For this we need a brief reminder about separable extensions and the primitive element theorem.

Definition 1.6. ([3]) A field extension $K \subseteq L$ is called separable if and only if the minimal polynomial over $K$ of every $\alpha \in L$ has distinct roots, i.e. is separable.

Theorem 1.3. ([1]) The primitive element theorem states that, for a finite separable extension $K \subseteq L$, there exists an element $\alpha \in L$ such that $L = K[\alpha]$.

Theorem 1.4. [1] Let $(K, D)$ be a differential field and let $L$ be a finite separable algebraic extension of $K$. Then there exists a unique derivation on $L$ which extends $D$.

Proof. Since $L$ is a finite separable extension, by the primitive element theorem there exists an $x \in L$ such that $L = K[x]$. Denote the minimal polynomial of $x$ by $P = X^n + a_{n-1}X^{n-1} + \dots + a_0$. By algebraic Galois theory, $L \simeq K[X]/(P)$, see ([3]). Now suppose that $D_L$ is a derivation of $L$ that extends the derivation on $K$. Differentiating the relation $P(x) = 0$ we get
$$0 = D_L(0) = D_L(P(x)) = P^D(x) + P'(x)\,D_L(x).$$
Notice that $P$ is a separable polynomial, i.e. it has only simple zeros, so $P' \ne 0$; since $\deg P' < \deg P$ and $P$ is the minimal polynomial of $x$, we get $P'(x) \ne 0$. Then:
$$D_L(x) = -\frac{P^D(x)}{P'(x)}. \qquad (6)$$
Formula (6) shows that $D_L(x)$ is uniquely determined, and by Theorem 1.2 this implies that $D_L$ (if it exists) is unique. The last thing left to show is that such a derivation actually exists. Note that $(L, D_L) = (K[X]/(P), D_L)$, so we have to show that there exists a derivation on $K[X]$ such that $(P)$ is a differential ideal. Suppose that $\tilde{D}$ is a derivation on $K[X]$. Calculate:
$$\tilde{D}(P) = P^D(X) + P'(X)\tilde{D}(X).$$
Since $P$ is separable, $P$ and $P'$ are coprime and we can find $U, V \in K[X]$ such that $UP + VP' = 1$. If we choose $\tilde{D}(X) = -V P^D(X)$, it follows that
$$\tilde{D}(P) = P^D - P'V P^D = (1 - VP')P^D = (U P^D)\,P.$$
Thus $\tilde{D}(P)$ is a multiple of $P$, so it is an element of $(P)$. It follows that for any $A \in K[X]$, $\tilde{D}(AP) = A\tilde{D}(P) + \tilde{D}(A)P \in (P)$, which implies that $(P)$ is a differential ideal of $(K[X], \tilde{D})$. Hence by Theorem 1.1, $L = K[X]/(P)$ is a differential field whose derivation extends $D$; by the first part, this derivation is unique.
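
A small SymPy check of formula (6), under illustrative assumptions of ours (K = Q(t) with D = d/dt, primitive element x = sqrt(t), minimal polynomial P = X^2 - t): the formula D_L(x) = -P^D(x)/P'(x) should reproduce the ordinary derivative of sqrt(t).

import sympy as sp

t, X = sp.symbols('t X')
x = sp.sqrt(t)                          # primitive element of L = Q(t)(sqrt(t))
P = X**2 - t                            # minimal polynomial of x over K = Q(t)
P_D = sp.diff(P, t)                     # P^D: derivative of the coefficients (here -1)
P_prime = sp.diff(P, X)                 # P' = 2X
DL_x = (-P_D / P_prime).subs(X, x)      # formula (6)
print(sp.simplify(DL_x - sp.diff(x, t)))    # prints 0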


2 Linear Homogeneous Differential Equations and their Wronskian

For a differential field $(K, D)$, we shall be dealing with equations of the following type:
$$D^n(f) + a_{n-1}D^{n-1}(f) + \dots + a_0 f = 0 \qquad (\star)$$
where $a_i \in K$ for all $i$, and $f$ is the unknown. Differential equations of this kind are called linear homogeneous, which means that all terms involving the unknown are on the left-hand side and the right-hand side is zero (see [11]).

Matrix-valued first order linear differential equations are of the form
$$Y' = AY, \qquad A \in M_n(K),$$
where $Y$ is the unknown. Any equation of the form $(\star)$ can be transformed to matrix form by introducing $Y = (f, f', \dots, f^{(n-1)})^t$, so that
$$Y' = \bigl(f', f'', \dots, -(a_{n-1}D^{n-1}(f) + \dots + a_0 f)\bigr)^t = \begin{pmatrix} 0 & 1 & 0 & \dots & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & \dots & \dots & 0 & 1 \\ -a_0 & -a_1 & \dots & \dots & -a_{n-1} \end{pmatrix} Y.$$
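
For concreteness, the following SymPy sketch (an illustrative n = 3 case with hypothetical coefficient functions a0, a1, a2 of our choosing) builds the companion matrix above and verifies that the last row of Y' = AY reproduces the scalar equation (⋆).

import sympy as sp

t = sp.symbols('t')
a0, a1, a2 = (sp.Function(name)(t) for name in ('a0', 'a1', 'a2'))   # coefficients in K
f = sp.Function('f')(t)

# Companion matrix of D^3(f) + a2*D^2(f) + a1*D(f) + a0*f = 0
A = sp.Matrix([[0,   1,   0],
               [0,   0,   1],
               [-a0, -a1, -a2]])
Y = sp.Matrix([f, f.diff(t), f.diff(t, 2)])       # Y = (f, f', f'')^t

residual = (Y.diff(t) - A*Y)[2]                   # last row of Y' - AY
print(sp.expand(residual))                        # D^3(f) + a2*D^2(f) + a1*D(f) + a0*f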

Theorem 2.1. [1] Let $(K, D)$ be a differential field and let $C$ denote its field of constants. Then the set of solutions $Y \in K^n$ of a differential equation $Y' = AY$ with $A \in M_n(K)$ is a $C$-vector space of dimension less than or equal to $n$.

Proof. Consider the derivation $D : K \to K$. For any $a \in C$ and $f \in K$, $D(af) = aD(f) + D(a)f = aD(f)$, so the derivation $D$ is $C$-linear. Define the map
$$\varphi : K^n \to K^n, \qquad \varphi(Y) = Y' - AY.$$
By the above, $\varphi$ is $C$-linear, since $(af)' = af'$, $(af')' = af''$, etc. Now let us determine $\ker\varphi$: $\varphi(Y) = 0 \iff Y' = AY$, so the $Y \in K^n$ satisfying $Y' = AY$ are exactly the elements of the kernel of $\varphi$. Thus all such $Y$ form a $C$-vector space, which we denote by $V$.

We are going to prove that the dimension of $V$ is at most $n$. Take $Y_0, \dots, Y_n \in V$. These $n + 1$ vectors are linearly dependent over $K$, as they are elements of $K^n$ and $K^n$ has dimension $n$ as a $K$-vector space. The connection between this fact and the statement of the theorem is provided by the following lemma.

Lemma 2.1. [1] Let $(K, D)$ be a differential field with field of constants $C$. Let $Y_1, \dots, Y_m$ be solutions of a differential equation $Y' = AY$, for $A \in M_n(K)$. If they are linearly independent over $C$, then they are linearly independent over $K$.

Proof. We use induction on $m$.

Base of induction: If $Y_1$ is linearly independent over $C$, then $Y_1 \ne 0$, and so it is also linearly independent over $K$.

Step of induction: Assume the lemma holds for $m - 1$: if $Y_1, \dots, Y_{m-1}$ are linearly independent over $C$ then they are linearly independent over $K$. Now assume that $Y_1, \dots, Y_m$ are linearly independent over $C$ but not over $K$. Thus there is a relation $a_1 Y_1 + \dots + a_m Y_m = 0$ with $a_i \in K$ not all zero. We may assume $a_m \ne 0$ and divide the relation by it, so without loss of generality $a_m = 1$. Now differentiate the new version of the relation:
$$(a_1' Y_1 + a_1 Y_1') + \dots + (a_{m-1}' Y_{m-1} + a_{m-1} Y_{m-1}') + Y_m' = 0.$$
Since $Y' = AY$, this is equivalent to:
$$(a_1' Y_1 + \dots + a_{m-1}' Y_{m-1}) + A(a_1 Y_1 + \dots + a_{m-1} Y_{m-1} + Y_m) = 0.$$
As $a_1 Y_1 + \dots + a_{m-1} Y_{m-1} + Y_m = 0$, we get $a_1' Y_1 + \dots + a_{m-1}' Y_{m-1} = 0$. By the induction hypothesis, $a_i' = 0$ for all $i$, so the $a_i$ are constants. This implies that $Y_1, \dots, Y_m$ are linearly dependent over $C$, which is a contradiction.

Back to the proof of Theorem 2.1: assume that $\dim V > n$, so that there exist $Y_0, \dots, Y_n \in V$ linearly independent over $C$. Then by the above lemma they are also linearly independent over $K$, which is not true. So $\dim V \le n$.

Definition 2.1. [1] The Wronskian of elements $f_1, \dots, f_n$ from a differential field $(K, D)$ is the determinant
$$W(f_1, \dots, f_n) = \det\begin{pmatrix} f_1 & f_2 & \dots & f_n \\ f_1' & f_2' & \dots & f_n' \\ \vdots & \vdots & & \vdots \\ f_1^{(n-1)} & f_2^{(n-1)} & \dots & f_n^{(n-1)} \end{pmatrix}.$$

Theorem 2.2. [1] Let $(K, D)$ be a differential field with field of constants $C$. Elements $f_1, \dots, f_n$ in $K$ are linearly dependent over $C$ if and only if $W(f_1, \dots, f_n) = 0$.

Proof. ($\Rightarrow$) Assume that $f_1, \dots, f_n$ are linearly dependent over $C$. Then we have a relation
$$f_n = -a_1 f_1 - \dots - a_{n-1} f_{n-1}$$
where $a_i \in C$ (without loss of generality we can assume that $a_n = 1$). Differentiating both sides, we get
$$f_n' = -a_1' f_1 - a_1 f_1' - \dots - a_{n-1}' f_{n-1} - a_{n-1} f_{n-1}' = -a_1 f_1' - \dots - a_{n-1} f_{n-1}',$$
since the $a_i$ are in $C$ and hence $a_i' = 0$. Taking higher order derivatives we get similar relations, so the columns of the Wronskian matrix satisfy a linear relation, which implies $W = 0$.

($\Leftarrow$) Assume that $W = 0$. We use induction on $n$. First consider the case $n = 1$: then $W(f_1) = f_1$, so $W = 0$ implies $f_1 = 0$, which gives linear dependence. As induction hypothesis, assume that the implication holds for $n - 1$. Now assume that $W(f_1, \dots, f_n) = 0$. If $W(f_2, \dots, f_n) = 0$, then by the induction hypothesis $f_2, \dots, f_n$ are linearly dependent over $C$, which directly implies that $f_1, \dots, f_n$ are linearly dependent. So assume that $W(f_2, \dots, f_n) \ne 0$. Since $W(f_1, \dots, f_n) = 0$, there is a relation
$$a_1 f_1^{(j)} + \dots + a_n f_n^{(j)} = 0, \qquad 0 \le j \le n - 1,$$
between the columns of $W$. Since $W(f_2, \dots, f_n) \ne 0$, the columns corresponding to $f_2, \dots, f_n$ cannot satisfy such a relation on their own, so $a_1$ cannot vanish; without loss of generality, assume that $a_1 = 1$. Differentiating the relations for $0 \le j \le n - 2$:
$$f_1^{(j+1)} + \bigl(a_2 f_2^{(j+1)} + a_2' f_2^{(j)}\bigr) + \dots + \bigl(a_n f_n^{(j+1)} + a_n' f_n^{(j)}\bigr) = 0,$$
and subtracting the original relation at $j + 1$, we are left with
$$a_2' f_2^{(j)} + \dots + a_n' f_n^{(j)} = 0 \qquad (0 \le j \le n - 2).$$
By our assumption $W(f_2, \dots, f_n) \ne 0$, all $a_i'$ are zero, so the $a_i$ are constants. Thus $a_1 f_1^{(j)} + \dots + a_n f_n^{(j)} = 0$ has its coefficients in $C$, and since $a_1 \ne 0$, $f_1, \dots, f_n$ are linearly dependent over $C$.
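
A brief SymPy illustration of the Wronskian criterion (a sketch; SymPy's wronskian helper and the chosen functions are assumptions of ours): sin t and cos t, two independent solutions of y'' + y = 0, have nonzero Wronskian, while e^t and 2e^t are dependent over the constants and their Wronskian vanishes.

import sympy as sp

t = sp.symbols('t')
# Independent solutions of y'' + y = 0: nonzero Wronskian
print(sp.simplify(sp.wronskian([sp.sin(t), sp.cos(t)], t)))    # -1
# Constant multiples: Wronskian vanishes
print(sp.simplify(sp.wronskian([sp.exp(t), 2*sp.exp(t)], t)))  # 0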


3 Picard-Vessiot Extensions

In this section we are going to construct a minimal extension of a differential field $K$ such that a given $n$th order differential equation with coefficients in $K$ admits $n$ linearly independent solutions. This notion of differential Galois theory is analogous to the notion of splitting fields in algebraic Galois theory, where one constructs the minimal extension field containing all roots of a given polynomial. Note that all fields of interest have characteristic 0.

Before describing Picard-Vessiot extensions, let us study the question of the existence of a differential extension field for an arbitrary base field $K$ and the differential equation
$$(E): \quad Y^{(n)} = a_{n-1}Y^{(n-1)} + \dots + a_1 Y' + a_0 Y, \qquad a_i \in K.$$

Proposition 3.1. [2] Given $K$ and (E), there exists a differential extension field $L$ containing $K$ such that (E) has $n$ solutions in $L$ which are algebraically independent over $K$.

Proof. We start by adjoining to $K$ the $n^2$ indeterminates $y_{ij}$ and defining the polynomial ring $R = K[y_{11}, y_{12}, \dots, y_{nn}]$. By Theorem 1.2, we can define a unique derivation on $K[y_{11}]$ which extends the derivation on $K$. In the same manner, a unique derivation on $K[y_{11}, y_{12}]$ extending the derivation on $K[y_{11}]$ can be defined. Iterating this process, we obtain a derivation on $R$ extending the derivation $D$ on $K$; on the indeterminates it is defined by
$$D_R(y_{i,j}) = y_{i,j+1} \ \text{ for } j < n, \qquad D_R(y_{i,n}) = \sum_{k=1}^{n} a_{k-1}\,y_{i,k}.$$
Since $R$ is a polynomial ring over a field, it is an integral domain. By construction, the $y_{i,1}$, $1 \le i \le n$, are algebraically independent solutions of (E). Setting $L = Q(R)$, the field of fractions of $R$, proves the statement.

We now give the formal definition of a Picard-Vessiot extension.

Definition 3.1. [4] Let $(K, D)$ be a differential field of characteristic 0 with algebraically closed field of constants $C$, and take $(E): Y' = AY$, where $A \in M_n(K)$. A differential extension $(L, D)$ is a Picard-Vessiot extension if it satisfies the following:

a) There exists a fundamental invertible $n \times n$ matrix $Y = (Y_1, \dots, Y_n)$ with entries in $L$. The columns of $Y$ form a basis for the solutions of (E); in particular $Y' = AY$.

b) $L$ is generated over $K$ by the entries $Y_{ij}$ of this matrix.

c) $C$ is equal to the field of constants of $L$.

We now state the existence and uniqueness of Picard-Vessiot extensions, but the proof given here focuses only on the existence part.

Theorem 3.1. [1] Any linear homogeneous differential equation of the form $(\star)$ admits a Picard-Vessiot extension. Two such extensions are isomorphic as differential extensions of $(K, D)$.


Proof. From the above definition we know that a Picard-Vessiot extension is generated by the entries of a basis of solutions $Y_{ij}$, so we start by considering the ring $R = K[Y_{11}, \dots, Y_{nn}]$ of polynomials in $n^2$ indeterminates. Let $G$ stand for the matrix $(Y_{ij})$, and endow the ring $R$ with the derivation
$$D : R \to R \quad \text{given by} \quad D(G) = AG.$$
Extending $D$ to $R$ in this way, we have added to $K$ a family of solutions of the equation (E). These solutions must be linearly independent, which is equivalent to $\det(G)$ being nonzero; thus $\det(G)$ should be invertible. Now we want to extend the ring $R$ by an unknown variable $T$ which corresponds to the inverse of $\det(G)$. In order to do so, we have to impose the relation $1 - T\det(G) = 0$. As a result we get a new ring $S$, defined by $S = R[T]/(1 - T\det(G))$. Now we want to extend the derivation from $R$ to $S$. From the earlier results it is known that, in order to extend the derivation of $R$ to $R[T]$, we need to define $D(T)$. Furthermore, we have proved that a unique derivation exists on $R[T]/(1 - T\det(G))$ in case $(1 - T\det(G))$ is a differential ideal. So we have to find a definition of $D(T)$ such that $(1 - T\det(G))$ becomes a differential ideal. Our claim is that
$$D(\det(G)) = \mathrm{Tr}\bigl(\mathrm{Com}(G)\,D(G)\bigr),$$
where $\mathrm{Com}(G)$ stands for the comatrix, i.e. the transpose of the cofactor matrix.

The cofactor matrix is the $n \times n$ matrix whose $ij$th entry $C_{ij}$ is $(-1)^{i+j}$ times the determinant of the matrix obtained by deleting the $i$th row and the $j$th column of the original matrix. For the proof of this relation, first note that the determinant of a matrix $A$ can be viewed as a function of its entries. Hence
$$D(\det(A)) = \sum_{i=1}^{n}\sum_{j=1}^{n} \frac{\partial(\det(A))}{\partial a_{ij}}\,D(a_{ij}). \qquad (7)$$

Now consider Laplace's formula $\det(A) = \sum_{j=1}^{n} C_{ij}a_{ij}$ for the determinant of any matrix $A$, expanded along the $i$th row, where $C_{ij}$ is the corresponding entry of the cofactor matrix. Substitute this formula in (7); calculating the inner part of the sum in (7), we get
$$\frac{\partial(\det A)}{\partial a_{ij}} = \frac{\partial\bigl(\sum_{k=1}^{n} C_{ik}a_{ik}\bigr)}{\partial a_{ij}} = \sum_{k=1}^{n} \frac{\partial(C_{ik}a_{ik})}{\partial a_{ij}} = \sum_{k=1}^{n} C_{ik}\,\frac{\partial a_{ik}}{\partial a_{ij}} + \sum_{k=1}^{n} \frac{\partial C_{ik}}{\partial a_{ij}}\,a_{ik}.$$
Since $a_{ik}$ and $a_{ij}$ lie in the same row, $C_{ik}$ does not contain $a_{ij}$; hence $\frac{\partial C_{ik}}{\partial a_{ij}} = 0$. So
$$\frac{\partial(\det A)}{\partial a_{ij}} = \sum_{k=1}^{n} C_{ik}\,\frac{\partial a_{ik}}{\partial a_{ij}},$$
where $\frac{\partial a_{ik}}{\partial a_{ij}} = 1$ if $j = k$ and $0$ otherwise. Thus $\frac{\partial(\det A)}{\partial a_{ij}} = C_{ij}$, and therefore $D(\det(A)) = \sum_{i=1}^{n}\sum_{j=1}^{n} C_{ij}\,D(a_{ij})$. Now consider
$$\mathrm{Com}(A) = \begin{pmatrix} C_{11} & C_{21} & \dots & C_{n1} \\ C_{12} & \dots & \dots & C_{n2} \\ \vdots & & & \vdots \\ C_{1n} & \dots & \dots & C_{nn} \end{pmatrix} \quad \text{and} \quad A' = \begin{pmatrix} a_{11}' & \dots & \dots & a_{1n}' \\ \vdots & & & \vdots \\ a_{n1}' & \dots & \dots & a_{nn}' \end{pmatrix}.$$
Observe that the $jj$th entry of the matrix $\mathrm{Com}(A)A'$ is given by $(\mathrm{Com}(A)A')_{jj} = \sum_{i=1}^{n} C_{ij}\,D(a_{ij})$. Thus:
$$\mathrm{Tr}(\mathrm{Com}(A)A') = \sum_{j=1}^{n}\sum_{i=1}^{n} C_{ij}\,D(a_{ij}) = \sum_{i=1}^{n}\sum_{j=1}^{n} C_{ij}\,D(a_{ij}) = D(\det(A)).$$

Thus we have proved the claim. Using the relation $D(G) = AG$, we can further derive that
$$D(\det(G)) = \mathrm{Tr}(\mathrm{Com}(G)AG) = \mathrm{Tr}(AG\,\mathrm{Com}(G)) = \mathrm{Tr}(A\det(G)) = \mathrm{Tr}(A)\det(G),$$
which implies
$$D(1 - T\det(G)) = -T\,\mathrm{Tr}(A)\det(G) - D(T)\det(G) = -\det(G)\bigl(D(T) + T\,\mathrm{Tr}(A)\bigr).$$
Define $D(T) = -T\,\mathrm{Tr}(A)$. Then, for any $k \in S$,
$$D\bigl(k(1 - T\det(G))\bigr) = kD(1 - T\det(G)) + D(k)(1 - T\det(G)) = D(k)(1 - T\det(G)).$$
Hence $(1 - T\det(G))$ is a differential ideal, which implies that $S$ is endowed with a derivation satisfying $D(G) = AG$.
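
As a sanity check of the determinant formula used above (Jacobi's formula), here is a small SymPy sketch for a 2 x 2 matrix of unspecified functions; Matrix.adjugate() plays the role of Com(G). The setup is our own illustration, not taken from the source.

import sympy as sp

t = sp.symbols('t')
G = sp.Matrix(2, 2, lambda i, j: sp.Function('g%d%d' % (i, j))(t))   # generic matrix of functions
lhs = sp.diff(G.det(), t)                        # D(det(G))
rhs = (G.adjugate() * G.diff(t)).trace()         # Tr(Com(G) * D(G))
print(sp.simplify(lhs - rhs))                    # prints 0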

We are not done yet, since $S$ may contain more constants than the field of constants of $K$. It is enough to present one example where this happens. Take $K = \mathbb{C}(X)$ and $(E): Y'' + Y = 0$. Construct $R = K[Y_{11}, Y_{12}, Y_{21}, Y_{22}]$ with the derivation $D_R(Y_{11}) = Y_{12}$, $D_R(Y_{12}) = -Y_{11}$, $D_R(Y_{21}) = Y_{22}$, $D_R(Y_{22}) = -Y_{21}$. Here $Y_{11}$ and $Y_{21}$ play the role of two linearly independent solutions of (E), say $\sin X$ and $\cos X$, and $Y_{12}, Y_{22}$ of their derivatives. The element $Y_{11}^2 + Y_{12}^2$ (corresponding to $\sin^2 X + \cos^2 X$) satisfies $D_R(Y_{11}^2 + Y_{12}^2) = 2Y_{11}Y_{12} - 2Y_{12}Y_{11} = 0$, so it is a constant of $R$; but it does not lie in $K$, being a nonconstant polynomial in the indeterminates. Hence $R$, and therefore $S$, contains constants outside the field of constants of $K$.

To cope with this problem, we take a maximal differential ideal $J$ of $S$. We claim that such a maximal differential ideal exists. To prove it, Zorn's lemma will be used, which states that if $M$ is a non-empty partially ordered set in which every chain has an upper bound, then $M$ has a maximal element ([5]). (Here, a chain refers to a set of elements of the form $I_0 \subseteq I_1 \subseteq I_2 \subseteq \dots$) To apply Zorn's lemma, let $M$ be the set of all proper differential ideals of $S$, ordered by inclusion. $M$ is non-empty since $(0) \in M$. Let $C$ be a chain in $M$ and let $J$ be the union of all $I \in C$. Then $J$ is an ideal since:

a) If $x, y \in J$, then $x \in I$ and $y \in I'$ for some $I, I' \in C$. Since $C$ is a chain, either $I$ includes $I'$ or vice versa, so $x + y$ is an element of $I$ or of $I'$, hence also an element of $J$.

b) If $x \in J$ then $x \in I$ for some $I$ contained in $J$, and so $ax \in I$ for any $a \in S$, hence $ax \in J$.

$J$ is clearly a proper ideal: since $M$ is the set of all proper differential ideals and $C$ is a chain in it, none of the ideals making up $J$ contains 1. Lastly, $J$ is a differential ideal, which is proved as follows: if $x \in J$ then $x \in I$ for some $I$ contained in $J$; since $I$ is a differential ideal, $x' \in I$ and so $x' \in J$. Thus $J$ is an upper bound of $C$, and by Zorn's lemma $M$ has a maximal element.

We can extend the derivation on $S$ to $S/J$ by the previous results. Since $J$ is a maximal differential ideal, $S/J$ obviously has no non-trivial differential ideal. By the following lemma, the field of fractions $L = Q(S/J)$ has a field of constants which is equal to the field of constants of $K$. Since the other conditions of a Picard-Vessiot extension are satisfied by the construction of $S$, $L$ is a Picard-Vessiot extension for our equation.

Before proceeding with the concluding lemma, we need to prove the Nullstellensatz for algebras.

Theorem 3.2. (Nullstellensatz)[1] Let k be a field and let A be any finitely generated k-algebra. If A is a field, then it is a finite (algebraic) extension of k.

Proof. The theorem will be proved for uncountable fields $k$, since we only deal with such fields below. Take $x_1, \dots, x_n \in A$ such that $A = k[x_1, \dots, x_n]$. By this description, we can write any element of $A$ as a polynomial, not necessarily uniquely, in $x_1, \dots, x_n$. This shows that the dimension of $A$ as a $k$-vector space is at most the cardinality of the set of monomials in $n$ variables; since the latter is a countable set, the dimension of $A$ as a $k$-vector space is at most countable. Assume that one of the $x_i$ is transcendental over $k$. This gives an isomorphism between $k[x_i]$ and $k[X]$. Since $A$ is a field, it contains $k(x_i)$ and hence a copy of $k(X)$. Now we claim that the elements $\frac{1}{X - a}$ for $a \in k$ are linearly independent over $k$. Indeed, since every rational function has a unique partial fraction decomposition, nontrivial linear combinations of these elements are non-vanishing rational functions. Since $k$ is uncountable, this is an uncountable linearly independent family, so the dimension of $k(X)$ as a $k$-vector space is uncountable; thus the inclusion $k(X) \subset A$ is not possible. Therefore all the $x_i$ are algebraic over $k$, and thus $A$ is algebraic over $k$. Now we prove that $k \subset A$ is a finite extension by induction on $n$. Let $n = 1$: $x_1$ is algebraic over $k$, so by [3], $k \subset k[x_1]$ is finite, of degree equal to the degree of the minimal polynomial of $x_1$ over $k$. Assume that the claim holds for all extensions $k \subset k[x_1, \dots, x_i]$ with $2 \le i \le n - 1$; in particular $[k[x_1, \dots, x_{n-1}] : k]$ is finite. By the tower law (see [3]), $[A : k] = [A : k[x_1, \dots, x_{n-1}]]\,[k[x_1, \dots, x_{n-1}] : k]$, so we are done if we can prove that $[A : k[x_1, \dots, x_{n-1}]]$ is finite. Since $x_n$ is algebraic over $k$, it is algebraic over $k[x_1, \dots, x_{n-1}]$. Hence, again by [3], $[A : k[x_1, \dots, x_{n-1}]]$ equals the degree of its minimal polynomial, and so we are done.

Lemma 3.1. [1] Let $(K, D)$ be a differential field of characteristic 0 and denote its field of constants by $C$. Consider a morphism $(K, D) \to (A, D)$ of differential rings and assume that $A$ has no differential ideals except $(0)$ and $A$. Then the following statements hold:

a) The ring $A$ is an integral domain. Denote by $L$ its field of fractions, with its natural derivation.

b) The field of constants of $L$ is contained in $A$.

c) If $A$ is a finitely generated $K$-algebra, and if $C$ is algebraically closed, then the field of constants of $L$ is equal to $C$.

Proof. a) First of all let us show that there are no nonzero nilpotent elements in $A$, i.e. no nonzero element $x \in A$ such that $x^n = 0$ for some $n$. Set $I = \{x \mid x^n = 0 \text{ for some } n\}$. $I$ is an ideal. Indeed:

i. For $x, y \in I$ with $m, n$ the respective powers that make them 0, $(x + y)^{m+n} = 0$ by the binomial theorem, thus $x + y \in I$.

ii. For $a \in A$, $(ax)^m = a^m x^m = 0$, so $ax \in I$.

Thus $I$ is an ideal. Now let us check that it is a differential ideal. For that aim take $x \in I$ with $x^n = 0$, $n \ge 1$. Differentiating, we get $n x^{n-1}x' = 0$, and since $K \subset A$ and $A$ has characteristic 0, $x^{n-1}x' = 0$. Now, by induction, it will be proved that for any integer $k$ with $0 \le k \le n$ we have $x^{n-k}(x')^{2k} = 0$.

For the case $k = 1$ we have $x^{n-1}(x')^2 = (x^{n-1}x')x' = 0$. Now assume the claim is true for $k - 1$, i.e. $x^{n-k+1}(x')^{2k-2} = 0$. Differentiating this identity we get
$$(n - k + 1)x^{n-k}(x')^{2k-1} + (2k - 2)x^{n-k+1}(x')^{2k-3}x'' = 0.$$
Multiplying by $x'$ we obtain
$$(n - k + 1)x^{n-k}(x')^{2k} + (2k - 2)x^{n-k+1}(x')^{2k-2}x'' = 0,$$
which, by the induction hypothesis, implies that $(n - k + 1)x^{n-k}(x')^{2k} = 0$. We have $n - k + 1 \ne 0$ since $k \le n$, and as $K$ has characteristic 0, $x^{n-k}(x')^{2k} = 0$. For $k = n$ this gives $(x')^{2n} = 0$, so $x' \in I$, which implies that $I$ is a differential ideal. Since 1 is not nilpotent, $1 \notin I$, so $I \ne A$ and by assumption $I = (0)$.

The second step is to show that there are no zero-divisors. For that aim, take $0 \ne a \in A$ and let $I = \{b \in A \mid ab = 0\}$. $I$ is an ideal since:

i. If $b, c \in I$ then $a(b + c) = ab + ac = 0$.

ii. If $r \in A$, then $a(rb) = (ab)r = 0$.

Differentiating $ab = 0$, we get
$$a'b + ab' = 0.$$
Multiplying the previous identity by $a$ we get
$$aa'b + a^2b' = 0,$$
which implies that $a^2b' = 0$. Multiplying further by $b'$ we get
$$(ab')^2 = 0.$$
It was proved that there are no nonzero nilpotent elements in $A$, which implies that $ab' = 0$. This shows that $b' \in I$, and so $I$ is a differential ideal. We know that $a \ne 0$, which implies that $1$ is not an element of $I$. So $I \ne A$ and hence, by assumption, $I = (0)$. Thus the ring $A$ is an integral domain.

b) Denote by $C'$ the field of constants of $L$. Obviously $C'$ is a subfield of $L$ containing $C$. For $x \in C'$, define $I = \{a \in A \mid ax \in A\}$. $I$ is an ideal since:

i. For $a, b \in I$, $(a + b)x = ax + bx \in A$.

ii. For $r \in A$ and $a \in I$, $(ra)x = r(ax) \in A$ since $ax \in A$.

In order to show that it is a differential ideal, differentiate $b = ax$ for $a \in I$: one gets $b' = a'x + ax'$. Since $x \in C'$, $x' = 0$, thus $b' = a'x$. Since $A$ is a differential ring, $b' \in A$, thus $a'x \in A$, which implies that $a' \in I$; so $I$ is a differential ideal. Furthermore, since $L$ is the field of fractions of $A$ and $C' \subset L$, we can write $x = \frac{a}{b}$ with $a, b \in A$, so $bx = a \in A$ and thus $I \ne (0)$. Hence $I = A$. In particular $1 \in I$, so $x = 1 \cdot x \in A$ for all $x \in C'$.

c) By part (b), we know that $C' \subset A$. Take a maximal ideal $\mathfrak{m}$ of $A$. Then $A/\mathfrak{m}$ is a field. It is assumed in the present lemma that $A$ is a finitely generated $K$-algebra, so there are $x_1, \dots, x_n \in A$ such that $A = K[x_1, \dots, x_n]$. Thus the images of $x_1, \dots, x_n$ in $A/\mathfrak{m}$ generate $A/\mathfrak{m}$ as a $K$-algebra, which implies that $A/\mathfrak{m}$ is also a finitely generated $K$-algebra. Hence by Theorem 3.2, $A/\mathfrak{m}$ is an algebraic extension of $K$; in particular, all elements of $A/\mathfrak{m}$ are algebraic over $K$. Now take the morphism
$$C' \to A \to A/\mathfrak{m}, \qquad (8)$$
which is a morphism of fields sending $1 \in C'$ to $1 \in A/\mathfrak{m}$. Since the kernel is an ideal in the field $C'$, and the only proper ideal of a field is $(0)$, the kernel has to be $(0)$. Hence the morphism is injective. Since (8) is an injection, it can be concluded that $C'$ is also algebraic over $K$.

The following lemma concludes this proof of Theorem 3.1.

Lemma 3.2. [1] Let $(K, D) \to (L, D)$ be a differential homomorphism of differential fields of characteristic 0. Denote by $C$ the field of constants in $K$. Let $x \in L$. The following are equivalent:

a) $x$ is algebraic over $C$;

b) $x$ is constant in $L$ and algebraic over $K$.

Proof. We show the two implications.

(a) $\Rightarrow$ (b): Assume that $x$ is algebraic over $C$. Then it is also algebraic over $K$, as $C \subseteq K$. Thus what is left to show is that $x$ is a constant. Let $P = X^n + a_{n-1}X^{n-1} + \dots + a_0$, with $a_i \in C$, be the minimal polynomial of $x$ over $C$, so that $P(x) = 0$. Differentiating $P(x) = 0$, we get
$$n x^{n-1}x' + (n-1)a_{n-1}x^{n-2}x' + \dots + a_1 x' = P'(x)\,x' = 0$$
by the chain rule (the coefficients are constants and contribute no further terms). $K$ has characteristic zero, so $P'(x)$ could vanish only if $x$ were a root of $P'$; this is impossible since $\deg P' < \deg P$ and $P$ is the minimal polynomial of $x$. This implies that $x' = 0$; hence $x$ is a constant.

(b) $\Rightarrow$ (a): Assume now that $x \in L$ is a constant which is algebraic over $K$. Let $P = X^n + a_{n-1}X^{n-1} + \dots + a_0$ be the minimal polynomial of $x$ over $K$. Note that $P(x) = 0$; differentiating, we get
$$n x^{n-1}x' + \bigl(a_{n-1}'x^{n-1} + (n-1)a_{n-1}x^{n-2}x'\bigr) + \dots + a_0' = P^D(x) + P'(x)\,x' = 0.$$
As $x$ is constant, $x' = 0$, so what is left is $P^D(x) = 0$. Since $\deg P^D < \deg P$ and $P$ is the minimal polynomial of $x$, we get $P^D = 0$, i.e. $a_k' = 0$ for every $k$. Thus $P \in C[X]$ and so $x$ is algebraic over $C$.

Combining the previous lemma with the proof of Theorem 3.1: the elements of $C'$ are constants of $L$ and algebraic over $K$, hence $C'$ is algebraic over $C$. Thus for every $x \in C'$ there is a polynomial $P$ with coefficients in $C$ having $x$ as a root. On the other hand, since $C$ is algebraically closed, all roots of $P$ lie in $C$, so $x \in C$. As a result, the inclusion $C \to C'$ is an isomorphism, hence $C = C'$. This proves that $L$ and $K$ have the same field of constants, and thus Theorem 3.1 is proved.

Analogously to splitting fields in algebraic Galois theory, the Picard-Vessiot extension is the smallest field fulfilling the corresponding conditions.

Proposition 3.2. [2] Let $k \subseteq K$ be a differential field extension such that $K$ contains a full set of solutions of the differential equation (E) of order $n$. Let $L$ be a field that also contains a full set of solutions of (E) and fulfils $k \subseteq L \subsetneq K$. Then $K$ contains a constant which is not in $L$.

Proof. Let $V$ be the vector space of elements in $K$ which are solutions of (E). Choose $y_1, \dots, y_n \in V$ with nonzero Wronskian. Consider the differential subfield $K_1$ obtained by adjoining $y_1, \dots, y_n$ to $k$, in the fashion used in the proof of Theorem 3.1. Let $V_1$ be the intersection of $V$ and $K_1$, i.e. the solutions of (E) in $K_1$. Then $V_1$ has dimension $n$ over the constants $C_1$ of $K_1$. We continue by choosing another set of solutions, possibly different from the $y_i$: take $z_1, \dots, z_n \in V_1$ with nonzero Wronskian. In the same manner, let $K_0$ be the differential subfield obtained by adjoining $z_1, \dots, z_n$ to $k$, and let $V_0$ be the intersection of $V$ and $K_0$, i.e. the vector space of solutions of (E) in $K_0$. $V_0$ also has dimension $n$ over the constants $C_0$ of $K_0$. Since their Wronskian is nonzero, the $z_i$ span $V_1$ over $C_1$, i.e. the $z_i$ form a basis for $V_1$ as a vector space over $C_1$. We claim that it is still possible for $K_1$ to properly include $K_0$: indeed, consider the case where the $z_i$ are already contained in $k$ and $K$ is obtained by adjoining the $y_i$; then $K_1 = K$ and $K_0 = k$, and $k$ is a proper subfield of $K$. In the case $K_0 \subsetneq K_1$, we show that $V_0 \subsetneq V_1$: indeed, if they were equal, then $K_1$ would be equal to $K_0$, since both subfields are obtained by adjoining the respective $V_i$. Together with the fact that $V_0$ spans $V_1$ over $C_1$, we can conclude that $C_0$ is a proper subfield of $C_1$ (otherwise $V_0$ and $V_1$ would coincide), which proves the assertion.

Lemma 3.3. [2] Let $k \subseteq K$ be a Picard-Vessiot extension for the equation (E). If $k \subseteq L \subseteq K$ is a subextension such that $L$ contains a full set of solutions of (E), then $L = K$.

Proof. The fields $k$ and $K$ have the same constants according to the definition of a Picard-Vessiot extension, and hence so does $L$, since $k \subseteq L \subseteq K$. Hence the case $L \subsetneq K$ is impossible by Proposition 3.2. Therefore $K = L$.

The following theorem expresses the normality property of Picard-Vessiot extensions, and it has a useful corollary.

Theorem 3.3. [6] Let $L_1$ and $L_2$ be Picard-Vessiot extensions of $K$ for a homogeneous linear differential equation (E) of order $n$, and let $K \subset L$ be a differential extension such that $K$ and $L$ have the same field of constants. If $\sigma_i : L_i \to L$, $i = 1, 2$, are differential $K$-morphisms, then $\sigma_1(L_1) = \sigma_2(L_2)$.

Proof. Let $V_i$ be the vector space of solutions in the respective $L_i$, and let $V$ be the vector space of solutions in $L$. By previous results, $V$ has dimension less than or equal to $n$, while the $V_i$ have dimension $n$. Since the $\sigma_i$ are differential $K$-morphisms, they are equal to the identity on $K$, and we have $\sigma_i(V_i) \subseteq V$. Thus, by counting dimensions, $\sigma_1(V_1) = \sigma_2(V_2) = V$. Since $L_1$ and $L_2$ are Picard-Vessiot extensions, each $L_i$ is generated over $K$ by $V_i$, so $\sigma_i(L_i)$ is generated over $K$ by $V$; hence $\sigma_1(L_1) = \sigma_2(L_2)$.

Corollary 3.1. [6] Consider differential fields $K \subset L \subset M$. Assume that $L$ is a Picard-Vessiot extension of $K$, and assume further that $M$ and $K$ have the same field of constants. Then any differential $K$-automorphism of $M$ sends $L$ onto itself.

Proof. Consider the identity automorphism $\iota$ of $M$; then $\iota(L) = L$. By Theorem 3.3, $\sigma(L) = \iota(L) = L$ for every differential $K$-automorphism $\sigma$ of $M$, so we are done.


4 Differential Galois Group

In this section, we will examine the properties of the differential Galois group connected to our Picard-Vessiot extension.

Definition 4.1. [1] Let $K \subseteq L$ be an extension of differential fields. The differential Galois group of $L$ over $K$, denoted by $G(L/K)$, is defined to be the set of automorphisms $\sigma : L \to L$ such that:

a) $\sigma(x) = x$ for all $x \in K$;

b) $\sigma(y)' = \sigma(y')$ for all $y \in L$.

In algebraic Galois theory, Galois groups are realized as subgroups of permutation groups. Analogously, differential Galois groups are realized as subgroups of $GL(V)$, the group of $C$-linear automorphisms of the vector space of solutions. (Note that $GL(V) \simeq GL_n(C)$ when $V$ has finite dimension $n$.)

Proposition 4.1. [1] Let $\sigma \in G(L/K)$. For any solution $Y$ of the differential equation (E), $\sigma(Y)$ is again a solution of (E), and the induced map $\sigma|_V : V \to V$ is an isomorphism of $C$-vector spaces. Moreover, the map $\rho : G(L/K) \to GL(V)$ defined by $\sigma \mapsto \sigma|_V$ is an injective group morphism.

Proof. Take $\sigma \in G(L/K)$ and a solution $Y = (y_1, \dots, y_n)^t$ of (E). Using the properties of $G(L/K)$, one sees that
$$\sigma(Y)' = \sigma(Y') = \sigma(AY) = \sigma(A)\sigma(Y) = A\,\sigma(Y),$$
which shows that $\sigma(Y)$ is also a solution of (E). The map $\sigma|_V$ is $C$-linear since $C \subset K$ and $\sigma$ is $K$-linear. It is a bijection since its inverse is the restriction of $\sigma^{-1}$ to $V$.

Now let us examine the map $\rho$. First, it is a group morphism:
$$\rho(\sigma_1)\rho(\sigma_2) = \sigma_1|_V\,\sigma_2|_V = (\sigma_1\sigma_2)|_V = \rho(\sigma_1\sigma_2).$$
Next, it is injective: take $\sigma \in G(L/K)$ with $\rho(\sigma) = \mathrm{Id}_V$. Then $\sigma(Y_j) = Y_j$ for all $j$. By the definition of $L$, it is generated over $K$ by the coordinates of the $Y_j$, and $\sigma|_K = \mathrm{id}_K$. Hence $\sigma(y) = y$ for all $y \in L$, so $\sigma$ is the identity, which proves that $\rho$ is injective.

Now we will examine the elements of the differential Galois group of a Picard-Vessiot extension a bit closer. In Proposition 4.1 we have seen that there is an injection from the differential Galois group into $GL(V)$. Due to that injection, elements of the differential Galois group are determined by elements $T$ of $GL(V)$. We have previously mentioned that, when $n$ is finite, $GL(V)$ is isomorphic to $GL_n(C)$; this isomorphism is determined by the choice of basis. For the rest of the discussion we write $[t_{ij}]$ for the matrix corresponding to $T$ in the chosen basis of $V$.

Let $S$, $J$ be as in Theorem 3.1 and consider the projection $\pi : S \to S/J$. This projection takes the basis elements of $S$ to their images in $S/J$, and hence their Wronskian maps to the Wronskian of the image of the basis. Since the Wronskian is a unit, its image under the projection is nonzero, which implies that the images of the basis elements in $S/J$ are linearly independent. As they are $n$ in number, they form a basis for the image in $S/J$ of the vector space of solutions, so the restriction of $\pi$ to $V$ is an isomorphism. Furthermore, since $V \subset S/J$, we can conclude that $S/J$ is stable under the action of any element of the differential Galois group. Hence, since $L$ is the quotient field of $S/J$, any automorphism of $S/J$ fixing $K$ extends to $L$, and therefore the elements of the differential Galois group are determined by their action on $S/J$. Finally, due to the isomorphism of the vector spaces, we can examine the action of $GL(V)$ on $S/J$ via its action on $S$.

In the following lemma, we will examine the action of the group elements on the Wronskian.

Lemma 4.1. [2] Let $L$ be the Picard-Vessiot extension of the field $K$, where $C$ is the field of constants. Furthermore, let $y_1, \dots, y_n$ be a basis for the $C$-vector space $V \subset L$, and let $w$ be its Wronskian. For $\sigma \in G(L/K)$, $\sigma(w) = \det\rho(\sigma)\cdot w$, where $\det\rho(\sigma)$ is the determinant, in this basis, of $\sigma$ restricted to $V$.

Proof. Since $\sigma$ stabilizes $V$, we can write the action of $\sigma \in G(L/K)$ on the basis elements as $\sigma(y_i) = \sum_j c_{ji}y_j$ with $c_{ji} \in C$. It follows that $\sigma(y_i') = \sigma(y_i)' = \sum_j c_{ji}y_j'$, since the $c_{ji}$ are constants. Iterating this, the action on higher order derivatives is $\sigma(y_i^{(q)}) = \sum_j c_{ji}y_j^{(q)}$. Write $\bar{y}$ for the $n \times n$ matrix with columns $\bar{y}_i = (y_i, y_i', \dots, y_i^{(n-1)})^t$, so that $\sigma(\bar{y}) = (\sigma(\bar{y}_1), \dots, \sigma(\bar{y}_n))$. Since $w = \det(\bar{y})$ and $\sigma$ is a differential automorphism, $\sigma(w) = \det(\sigma(\bar{y}))$.

We can represent the action on the Wronskian matrix in the following way:
$$\begin{pmatrix} \sigma(y_1) & \dots & \sigma(y_n) \\ \vdots & & \vdots \\ \sigma(y_1^{(n-1)}) & \dots & \sigma(y_n^{(n-1)}) \end{pmatrix} = \begin{pmatrix} c_{11}y_1 + \dots + c_{n1}y_n & \dots & c_{1n}y_1 + \dots + c_{nn}y_n \\ \vdots & & \vdots \\ c_{11}y_1^{(n-1)} + \dots + c_{n1}y_n^{(n-1)} & \dots & c_{1n}y_1^{(n-1)} + \dots + c_{nn}y_n^{(n-1)} \end{pmatrix},$$
which can be decomposed as
$$\begin{pmatrix} y_1 & \dots & y_n \\ \vdots & & \vdots \\ y_1^{(n-1)} & \dots & y_n^{(n-1)} \end{pmatrix} \begin{pmatrix} c_{11} & \dots & c_{1n} \\ \vdots & & \vdots \\ c_{n1} & \dots & c_{nn} \end{pmatrix}.$$
Calling the matrix of constants $M$, the above relation implies that
$$\sigma(w) = w\cdot\det M,$$
where $\det M = \det\rho(\sigma)$, which proves the lemma.
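
A quick SymPy sketch of this scaling behaviour (an illustrative 2 x 2 case of ours, with sin t, cos t standing in for a basis of solutions of y'' + y = 0 and the symbols c11, ..., c22 playing the role of the constant matrix M):

import sympy as sp

t = sp.symbols('t')
c11, c12, c21, c22 = sp.symbols('c11 c12 c21 c22')    # entries of the constant matrix M
y1, y2 = sp.sin(t), sp.cos(t)                         # basis of solutions of y'' + y = 0
z1 = c11*y1 + c21*y2                                  # sigma(y_i) = sum_j c_{ji} y_j
z2 = c12*y1 + c22*y2
w = sp.wronskian([y1, y2], t)
w_sigma = sp.wronskian([z1, z2], t)
print(sp.simplify(w_sigma - (c11*c22 - c12*c21) * w))   # prints 0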


Combining the above lemma with the previous discussion, one can define the action of $GL(V)$ on $S$ as
$$T : S \to S, \qquad y_{qi} \mapsto \sum_p t_{pi}\,y_{qp}, \qquad w^{-1} \mapsto \bigl(\det[t_{ij}]\,w\bigr)^{-1}, \qquad (9)$$
where $y_{qi} = y_i^{(q)}$. We will proceed by examining the action of the group elements on $J$. Certainly, any automorphism of $S$ of this form that stabilizes $J$ descends to $S/J$ and hence corresponds to an element of the differential Galois group; the claim is that, conversely, every element of the differential Galois group, viewed as an element of $GL(V)$ acting on $S$, stabilizes $J$. In order to prove it, we will use the following method presented in [2].

Consider a set of indeterminates $\{x_{ij} \mid 1 \le i, j \le n\}$ over $C$ and let $d$ be their determinant. Then $C[x_{ij}][d^{-1}]$ is a differential ring if we equip it with the zero derivation. We can define
$$M = S[x_{ij}][d^{-1}] = S \otimes_C C[x_{ij}][d^{-1}].$$
Observe that this is a differential $K$-algebra, with the $x_{ij}$ being new constants. Suppose that we take $K(x_{ij})$ as our base field. We can use the linear transformation with matrix $[x_{ij}]$ to define a $K(x_{ij})$-algebra automorphism $\theta$ of $K(x_{ij})[y_{qp}][w^{-1}]$. By the previous discussion, this action stabilizes $M$ and, as in (9), it defines a differential automorphism $\theta$ on $M$:
$$\theta : M \to M, \qquad \theta(y_{qi}) = \sum_p x_{pi}\,y_{qp}, \qquad \theta(w^{-1}) = (dw)^{-1}.$$
We can also define
$$\bar{M} = (S/J)[x_{ij}][d^{-1}] = (S/J) \otimes_C C[x_{ij}][d^{-1}],$$
and hence the map
$$\Pi = \pi \otimes 1 : M \to \bar{M}.$$
Furthermore, consider the inclusion map $i : S \to M$. Lastly, for a matrix $t = [t_{ij}] \in M_n(C)$ we can define the maps
$$\mathrm{ev}_t : C[x_{ij}][d^{-1}] \to C \qquad \text{and} \qquad \mathrm{Ev}_t : \bar{M} \to S/J$$
by mapping $x_{ij}$ to $t_{ij}$. Now take an element $T$ of $GL(V)$ which represents an automorphism $\sigma_T$ of $S$ with transformation matrix $[t_{ij}]$. The maps defined above satisfy the relation
$$\mathrm{Ev}_T \circ \Pi \circ \theta \circ i = \pi \circ \sigma_T.$$
Now suppose $T$ comes from the differential Galois group, and consider the induced automorphism $\mu$ of $S/J$, which also has the matrix $t = [t_{ij}] \in GL_n(C)$. This lets us write the relation
$$\mathrm{Ev}_T \circ \Pi \circ \theta \circ i = \mu \circ \pi.$$
Since $\ker\pi = J$, we have $\mu(\pi(J)) = 0 = \pi(\sigma_T(J))$. Hence $\sigma_T(J) \subseteq J$. Note that $\mu$ is the automorphism induced by $\sigma_T$. Hence we can conclude that $J$ is stabilized by any such automorphism of $S$ over $K$, and hence by any automorphism of $S/J$ over $K$. Since the automorphisms of $L$ are determined by their action on $S/J$, we have proved our claim. Thus the differential Galois group of a Picard-Vessiot extension is isomorphic to the subgroup of $GL(V)$ that stabilizes $J$.

We will finish this section by examining the algebraic group structure of the Galois group. Before we start, we give some definitions that will be used in what follows.

Definition 4.2. [6][22] For a field $K$, let $\mathbb{A}^n(K)$ denote the set of $n$-tuples $(a_1, \dots, a_n)$ with $a_i \in K$. Defining addition and scalar multiplication in the usual way, $\mathbb{A}^n(K)$ can be considered as a vector space; the underlying set is called the affine $n$-space over $K$. A set $V$ in $\mathbb{A}^n(K)$ given as the set of zeros of a finite collection of polynomials in $K[X_1, \dots, X_n]$ is called an affine variety. Its affine coordinate ring is defined to be
$$K[V] = K[X_1, \dots, X_n]/(f_1, \dots, f_m),$$
where the $f_i$ are the polynomials vanishing at all points lying in $V$.

Definition 4.3. [6] An algebraic group is an algebraic variety on which the multiplication and the inverse are given by regular functions. Furthermore, if a subgroup of an algebraic group is itself an algebraic variety, it is called a Zariski closed subgroup.

Definition 4.4. [7] An action of a group $G$ on a ring $R$ is given by $\sigma\cdot a = \phi(\sigma)(a)$, where $a \in R$, $\sigma \in G$, and $\phi : G \to \mathrm{Aut}(R)$ is an injective homomorphism. Suppose that $G$ is an algebraic group over $K$ and $R$ is a finitely generated ring over $K$, where $K$ is an algebraically closed field. Then the action of $G$ on $R$ is called rational if:

i) $G$ acts on $R$ in the sense of a group action;

ii) for each $a \in R$ and each connected component $G_i$ of $G$, there is an expression $\sigma\cdot a = \sum_j f_j(\sigma)\,b_j$ for some rational functions $f_j \in K(G_i)$ and elements $b_j \in R$, valid for all $\sigma \in G_i$.

Definition 4.5. [12] For an algebraic group $G$ and a $C$-algebra $R$, we say that $G$ acts rationally on $R$ if the following hold:

i) There is a map
$$G \times R \to R, \qquad (g, r) \mapsto r^g,$$
satisfying $r^{gg'} = (r^g)^{g'}$ for all $r \in R$ and $g, g' \in G$, and such that $r \mapsto r^g$ is a $C$-algebra automorphism for every $g \in G$;

ii) every element of $R$ is contained in a finitely generated $G$-invariant subspace of $R$.

Theorem 4.1. [2] Let $K$ be a differential field with algebraically closed constant field $C$, and let $(E): Y' = AY$ be a differential equation of order $n$. Let $L$ be the Picard-Vessiot extension of $K$ for (E), defined as the fraction field of $S/J$ where $J$ is a maximal differential ideal. Then the action of $GL_n(C)$ on $S$ defined by $[t_{rs}]\cdot y_{qi} = \sum_p t_{pi}\,y_{qp}$ is rational, and $GL_n(C)_J = \{t \mid tJ \subseteq J\}$ is a Zariski closed subgroup.

For if there were an efficient procedure, we could use that the satisfiability problem for dual clause formulas is easy (see next section 2.2.6), to get an efficient procedure