EXAMENSARBETEN I MATEMATIK

MATEMATISKA INSTITUTIONEN, STOCKHOLMS UNIVERSITET

Existence of Solutions to Ordinary Differential Equations

by Alma Mašić

2006 - No 6


Existence of Solutions to Ordinary Differential Equations

Alma Mašić

Examensarbete i matematik, 20 poäng (degree project in mathematics, 20 credits)
Supervisor: Andrzej Szulkin


Abstract

Differential equations are applied in many sciences and can be used in many ways. This report treats ordinary differential equations and the existence of solutions to them. Along with the necessary theory, the report focuses on two important theorems: Picard's Theorem, which asserts the existence of a unique solution, and Peano's Theorem, which gives the existence of at least one solution. The first can be proved by successive iterations, but also by the Contraction Mapping Principle, and the latter by approximations (and the Schauder Fixed Point Theorem).


Acknowledgements

I have had the pleasure of collaborating with some very kind and professional people throughout my work on this report. The result would have been quite different if you had not helped me.

Therefore, I would like to thank my supervisor, prof. Andrzej Szulkin, for all the help, time, patience and devotion. Thank you for supporting my idea and agreeing to keep helping me while I was abroad. I have surely trained your skills in explaining mathematics ’electronically’.

I would also like to thank my co-supervisors, prof. Mirjana Vuković and prof. Fikret Vajzović, for the time and will to help and explain. You have been very kind to me and helped me fulfill my dream of spending some time at the University of Sarajevo, Bosnia-Herzegovina, which turned out to be a lovely experience.

Finally, I would like to thank my family, friends and everyone else who has supported me. Your encouragement has meant a lot!

/Alma


Table of Contents

Acknowledgements . . . i
Table of Contents . . . iii
Introduction . . . 1
1 Picard's Theorem . . . 2
1.1 Picard's Theorem for a Single Equation . . . 2
1.2 Picard's Theorem for Systems . . . 6
1.3 Uniqueness of Extended Solutions . . . 6
2 Peano's Theorem . . . 7
2.1 Arzelà-Ascoli's Theorem . . . 7
2.2 Peano's Theorem for a Single Equation . . . 9
2.3 Peano's Theorem for Systems . . . 10
2.4 On Extending Solutions . . . 11
2.5 On Maximal Solutions . . . 12
3 Picard's Theorem by the Contraction Mapping Principle . . . 16
3.1 Linear Vector Spaces . . . 16
3.2 Contraction Mapping Principle . . . 17
3.3 Picard's Theorem . . . 18
4 Peano's Theorem by the Schauder Fixed Point Theorem . . . 20
4.1 Compact Operators . . . 20
4.2 Schauder's Fixed Point Theorem . . . 21
4.3 Peano's Theorem . . . 24
4.4 Kneser's Theorem . . . 26
A Bibliographical Notes . . . 31
Bibliography . . . 32


Introduction

Differential equations are probably the most widely applicable part of mathematics and can be found everywhere from physics to economics. Such an equation relates a dependent variable (a function of one or more independent variables) to its derivatives; a differential equation thus expresses the connection between the rate of change of a quantity and the quantity itself. Since almost everything around us changes, it is easy to see why differential equations can be used in so many ways and areas.

An ordinary differential equation contains derivatives with respect to only one independent variable. The order of a differential equation is the order of the highest derivative appearing in it. The dependent variable is often denoted by y and the independent one by x (or t). When solving differential equations one may have conditions that the solution must satisfy; initial or boundary conditions are the most common.

In this report, I will take a look at ordinary differential equations, mostly of first order and in the form of initial value problems, and the existence of their solutions. Peano proved that, under certain conditions, there exists at least one solution to such a problem. Picard, on the other hand, proved that, under some additional conditions, there exists a unique solution in a neighborhood of the initial point. These two important theorems can be proved in different ways, by different lemmas and theories, which is what I will show.

Note that there is a reference table with references for each section on page 31 and a complete bibliography on page 32.


Chapter 1

Picard’s Theorem

1.1 Picard’s Theorem for a Single Equation

Consider the first order differential equation

$$y'(x) = f(x, y(x)),$$

where $f$ is a function of two variables defined in the rectangle

$$R = \{(x, y) :\ -a \le x \le a,\ |y - y_0| \le b\},$$

$y_0 \in \mathbb{R}$. The maximum norm of $f$ in $C(R)$, the latter being the set of all continuous functions defined on $R$, is

$$\|f\| = \sup_{(x,y)\in R} |f(x, y)| \quad (= \max |f(x, y)|, \text{ because } R \text{ is compact}).$$

We assume that $f$ is continuous in $R$, with $\|f\| = M$, and that it satisfies a Lipschitz condition with respect to $y$:

$$|f(x, y_1) - f(x, y_2)| \le L|y_1 - y_2| \qquad \forall\ (x, y_1), (x, y_2) \in R.$$

Theorem 1.1 (Picard's Theorem). Under the conditions specified above, let $T = \min(a, b/M)$. Then the initial value problem

$$y'(x) = f(x, y(x)), \qquad y(0) = y_0 \tag{1.1}$$

has a unique solution $\varphi(x)$ defined in the interval $-T < x < T$.

Remark 1.2. Notice that $T$ is independent of the Lipschitz constant $L$. This fact is crucial later on when we prove Peano's Theorem.

Remark 1.3. It might as well have been $y(x_0) = y_0$, but we will use $x_0 = 0$ for simplicity.


Proof. We construct a sequence of functions $\{\varphi_k(x)\}_{k \ge 0}$, each defined on the open interval $(-T, T)$. Start with the constant function $\varphi_0(x) = y_0$ for all $x \in (-T, T)$, and set

$$\varphi_{k+1}(x) = y_0 + \int_0^x f(s, \varphi_k(s))\,ds. \tag{1.2}$$

Note that $\varphi_k(0) = y_0$ for all $k$.

Since $f$ is bounded on $R$, (1.2) gives the following inequality when $x \in (-T, T)$:

$$|\varphi_{k+1}(x) - y_0| \le M \left| \int_0^x ds \right| = M|x| \le MT \le M \cdot \frac{b}{M} = b.$$

Together with the above, the definition of $R$ implies that $(x, \varphi_{k+1}(x)) \in R$ for all $k$.

In a similar way we get

$$|\varphi_1(x) - \varphi_0(x)| = |\varphi_1(x) - y_0| \le M|x|.$$

The recursion formula (1.2) gives

$$\varphi_{k+1}(x) - \varphi_k(x) = \int_0^x \big(f(s, \varphi_k(s)) - f(s, \varphi_{k-1}(s))\big)\,ds,$$

and the Lipschitz condition gives

$$|f(s, \varphi_k(s)) - f(s, \varphi_{k-1}(s))| \le L|\varphi_k(s) - \varphi_{k-1}(s)|,$$

so that

$$|\varphi_{k+1}(x) - \varphi_k(x)| \le L \left| \int_0^x |\varphi_k(s) - \varphi_{k-1}(s)|\,ds \right|. \tag{1.3}$$

Assume now that

$$|\varphi_k(x) - \varphi_{k-1}(x)| \le \frac{M L^{k-1} |x|^k}{k!}, \tag{1.4}$$

which is true for $k = 1$. Then (1.3) gives

$$|\varphi_{k+1}(x) - \varphi_k(x)| \le M L^k \left| \int_0^x \frac{|s|^k}{k!}\,ds \right| = \frac{M L^k |x|^{k+1}}{(k+1)!},$$

from which (1.4) follows by induction.

Since the series

$$\sum \frac{M}{L} \cdot \frac{(LT)^{k+1}}{(k+1)!} =: \sum \frac{C\,A^{k+1}}{(k+1)!},$$

$C$ being a constant, converges for any $A > 0$, we conclude that the series

$$\sum_{k=0}^{\infty} |\varphi_{k+1}(x) - \varphi_k(x)|$$

converges uniformly on $(-T, T)$, according to the Weierstrass criterion [8, Thm 7.10]. Therefore the sequence $\{\varphi_k\}$, where $\varphi_{k+1} = y_0 + \sum_{j=0}^{k} (\varphi_{j+1} - \varphi_j)$, converges uniformly¹ to a continuous limit function $\varphi(x) \in C(-T, T)$. Passing to the limit in (1.2), it is clear that this limit function satisfies the integral equation

$$\varphi(x) = y_0 + \int_0^x f(s, \varphi(s))\,ds \tag{1.5}$$

on $(-T, T)$. The integrand is continuous, wherefore the integral is differentiable, making the right hand side continuously differentiable. Hence $\varphi(x) \in C^1(-T, T)$, and a differentiation of the integral equation gives

$$\varphi'(x) = f(x, \varphi(x)).$$

Hence we have found a solution to the differential equation (1.1) on $(-T, T)$, and since $\varphi_k(0) = y_0$ for every $k$, we also have $\varphi(0) = y_0$.

It remains to prove that the solution $\varphi(x)$ is unique. Let $\omega(x)$ be an arbitrary solution to (1.1), i.e. $\omega(x) = y_0 + \int_0^x f(s, \omega(s))\,ds$. First, we have

$$|\omega(x) - \varphi_0(x)| = |\omega(x) - y_0| = \left| \int_0^x f(s, \omega(s))\,ds \right| \le M|x|.$$

Further on we get

$$|\omega(x) - \varphi_1(x)| = \left| \int_0^x f(s, \omega(s))\,ds - \int_0^x f(s, \varphi_0(s))\,ds \right| \le \left| \int_0^x L|\omega(s) - \varphi_0(s)|\,ds \right| \le LM \left| \int_0^x |s|\,ds \right| = \frac{LM|x|^2}{2!}.$$

Proceeding by induction we obtain

$$|\omega(x) - \varphi_k(x)| \le \frac{M L^k |x|^{k+1}}{(k+1)!}.$$

If we now let $k \to \infty$, we get $|\omega(x) - \varphi(x)| \le 0$, that is, $\varphi = \omega$. This proves the uniqueness of the solution.

Remark 1.4. The method of approximations used in the proof of Picard's Theorem is called Picard's method of successive approximations and can be used to solve initial value problems numerically [9, Sec. 68]. However, this numerical algorithm is not very efficient.

¹ $\lim_{k\to\infty} \varphi_{k+1}(x) = y_0 + \lim_{k\to\infty} \sum_{j=0}^{k} (\varphi_{j+1} - \varphi_j) = y_0 + \sum_{j=0}^{\infty} (\varphi_{j+1} - \varphi_j)$

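The successive approximations (1.2) can be implemented directly. Below is a minimal numerical sketch (my own illustration, not taken from the report): the test problem $y' = y$, $y(0) = 1$, with exact solution $e^x$, is an assumed choice, and the integral is evaluated with the trapezoidal rule on a uniform grid.

```python
import math

def picard_iterates(f, y0, T, n_iter=8, n_grid=201):
    """Successive approximations phi_{k+1}(x) = y0 + int_0^x f(s, phi_k(s)) ds
    on [0, T], with the integral evaluated by the trapezoidal rule."""
    xs = [T * i / (n_grid - 1) for i in range(n_grid)]
    h = xs[1] - xs[0]
    phi = [y0] * n_grid                       # phi_0: the constant function y0
    for _ in range(n_iter):
        vals = [f(x, y) for x, y in zip(xs, phi)]
        nxt, acc = [y0], 0.0
        for i in range(1, n_grid):
            acc += 0.5 * h * (vals[i - 1] + vals[i])  # trapezoidal step
            nxt.append(y0 + acc)
        phi = nxt
    return xs, phi

# Assumed test problem (not from the text): y' = y, y(0) = 1, exact solution e^x.
xs, phi = picard_iterates(lambda x, y: y, y0=1.0, T=1.0)
err = max(abs(p - math.exp(x)) for x, p in zip(xs, phi))
```

For this problem the $k$th iterate is (up to quadrature error) the $k$th partial sum of the exponential series, which matches the factorial error bound (1.4).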

Remark 1.5. Consider a pair $c_1, c_2$ with

$$-b < c_1 < c_2 < b.$$

By Theorem 1.1 there exist unique solutions $\varphi_1(x)$ and $\varphi_2(x)$ with initial conditions $\varphi_v(0) = c_v$ for $v = 1, 2$. If these two solutions intersected in a point $(\hat{x}, \hat{y})$, we would have $\varphi_1(\hat{x}) = \hat{y} = \varphi_2(\hat{x})$; both $\varphi_1$ and $\varphi_2$ would then solve the initial value problem through $(\hat{x}, \hat{y})$, contradicting the uniqueness in Picard's Theorem. This implies that the graphs of the two functions never intersect, and hence

$$\varphi_1(x) < \varphi_2(x)$$

as long as both solutions exist. We conclude that the solutions of the initial value problem with different initial conditions yield a family of pairwise disjoint curves:

$$x \mapsto \varphi_c(x) \qquad \forall\ c \in (-b, b).$$

Here $\varphi_c$ is the solution with the initial condition $\varphi_c(0) = c$. Concerning this family of curves, the Lipschitz continuity of $f$ and a method similar to the uniqueness proof give:

Proposition 1.6. For every pair $c_1, c_2$ as above, it follows that

$$|\varphi_1(x) - \varphi_2(x)| \le |c_1 - c_2| \cdot e^{L|x|}$$

for $x$ in a neighborhood of $0$.

Proof. $\varphi_1, \varphi_2$ do not intersect; relabeling if necessary, we may assume $\varphi_1 > \varphi_2$. Let

$$\sigma(x) = \varphi_1(x) - \varphi_2(x) = c_1 - c_2 + \int_0^x \big(f(s, \varphi_1(s)) - f(s, \varphi_2(s))\big)\,ds \qquad (\sigma(x) > 0).$$

Then

$$\sigma'(x) = f(x, \varphi_1(x)) - f(x, \varphi_2(x)) \implies |\sigma'(x)| \le L|\varphi_1(x) - \varphi_2(x)| = L\sigma(x).$$

Consider the function $\psi(x) = \log(\sigma(x))$. By the above, $|\psi'(x)| = |\sigma'(x)|/\sigma(x) \le L$. The Mean Value Theorem² gives

$$|\psi(x) - \psi(0)| \le L|x| \implies \left| \log \frac{\sigma(x)}{\sigma(0)} \right| \le L|x| \implies \frac{\sigma(x)}{\sigma(0)} \le e^{L|x|}.$$

Since $\sigma(0) = c_1 - c_2$, we now have

$$\sigma(x) \le |c_1 - c_2| e^{L|x|} \iff |\varphi_1(x) - \varphi_2(x)| \le |c_1 - c_2| e^{L|x|}.$$

Thus, thanks to the Lipschitz continuity of $f$, one has good control over the solutions of the initial value problem (1.1) as the initial condition varies: solutions that start close together remain rather close to one another.

² $\psi(x) - \psi(0) = \psi'(\xi) \cdot x$ for some $\xi$ between $0$ and $x$.
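Proposition 1.6 is easy to check numerically. The following sketch is my own illustration: the right-hand side $f(x, y) = \sin y$ (which has Lipschitz constant $L = 1$, since $|\cos y| \le 1$) and the use of Euler's method are assumed choices, not taken from the report.

```python
import math

def euler(f, c, T, n):
    """Explicit Euler for y' = f(x, y), y(0) = c, on [0, T]; returns xs, ys."""
    h = T / n
    xs, ys = [0.0], [c]
    for i in range(n):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append((i + 1) * h)
    return xs, ys

f = lambda x, y: math.sin(y)    # |df/dy| = |cos y| <= 1, so L = 1
c1, c2, L = 0.1, 0.2, 1.0
xs, y1 = euler(f, c1, 2.0, 2000)
_, y2 = euler(f, c2, 2.0, 2000)
# the bound of Proposition 1.6, with a little slack for discretization error
ok = all(abs(a - b) <= (c2 - c1) * math.exp(L * x) * 1.001
         for x, a, b in zip(xs, y1, y2))
```

The two discrete trajectories also never cross, mirroring Remark 1.5: the Euler step $y \mapsto y + h\sin y$ is monotone in $y$ for small $h$, so the initial ordering is preserved.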


1.2 Picard's Theorem for Systems

Assume now that the function $f$ is defined in the rectangle

$$R = \{(x, y) \in \mathbb{R} \times \mathbb{R}^n :\ -a \le x \le a,\ |y - y_0| \le b\},$$

where $|y| = \sqrt{y_1^2 + \ldots + y_n^2}$ denotes the Euclidean length. Let $M = \|f\| = \max_{(x,y)\in R} |f(x, y)|$, where $f = (f_1, \ldots, f_n)$, and assume that $f \in C(R, \mathbb{R}^n)$ is Lipschitz continuous with respect to $y$. We then have the following theorem:

Theorem 1.7 (Picard's Theorem for Systems). Under the conditions specified above, let $T = \min(a, b/M)$. Then the initial value problem

$$y'(x) = f(x, y_1(x), \ldots, y_n(x)), \qquad y(0) = y_0$$

has a unique solution $\varphi(x)$ defined in the interval $(-T, T)$.

The same proof as in Theorem 1.1 also holds for a system of differential equations. Note that, in this proof, $\varphi_k(x) = (\varphi_k^1(x), \ldots, \varphi_k^n(x))$ and

$$\varphi_{k+1}(x) = y_0 + \int_0^x \big(f_1(s, \varphi_k(s)), \ldots, f_n(s, \varphi_k(s))\big)\,ds.$$

1.3 Uniqueness of Extended Solutions

Definition 1.8. A function $f$ is locally Lipschitz continuous if, for every point in the domain of $f$, there exists a neighborhood of that point on which $f$ is Lipschitz continuous.

It is easy to see that a locally Lipschitz continuous function is Lipschitz continuous on every compact subset of its domain.

Proposition 1.9. Let $U \subset \mathbb{R} \times \mathbb{R}^n$ be open and let $f: U \to \mathbb{R}^n$ be locally Lipschitz. Suppose two solutions $v(x), w(x)$ of $y' = f(x, y)$ are defined on the same open interval $J$ containing $x_0$ and satisfy $v(x_0) = w(x_0)$. Then $v(x) = w(x)$ for all $x \in J$, i.e. the solution is unique.

Proof. We know from Theorem 1.7 that $v(x) = w(x)$ in some open interval around $x_0$. The union $J^*$ of all such open intervals is the largest open interval around $x_0$ on which $v = w$. But $J^*$ must contain $J$. For, if not, $J^*$ has an endpoint $x_1 \in J$; we suppose $x_1$ is the right-hand endpoint, the other case being similar. By continuity, $v(x_1) = w(x_1)$. But, by Theorem 1.7, $v = w$ in some interval $J_0$ around $x_1$. Then $v = w$ in $J^* \cup J_0$, which is larger than $J^*$. This contradiction proves the proposition.

Remark 1.10. In Theorem 2.12 it will be shown that a solution can be extended until it reaches the boundary. According to Proposition 1.9, this solution is unique as long as f is locally Lipschitz continuous.


Chapter 2

Peano’s Theorem

2.1 Arzelà-Ascoli's Theorem

First we will give some definitions:

Definition 2.1. A metric space X is compact if any sequence {xm}, m = 1, 2, . . . in X has a subsequence which converges to an element of X.

Definition 2.2. Let $\{x_m\}$ be a sequence in a metric space $(X, \rho)$. The sequence is said to be dense if for any $\varepsilon > 0$ and any $x \in X$ there exists an $m$ such that $\rho(x_m, x) < \varepsilon$.

Definition 2.3. A family $\{f_\alpha\}$ of real or complex-valued functions defined on a set $Z$ is said to be uniformly bounded if there is a constant $C$ such that

$$|f_\alpha(x)| \le C \qquad \forall\ \alpha \text{ and } \forall\ x \in Z.$$

Definition 2.4. A family $\{f_\alpha\}$ of real or complex-valued continuous functions defined on a subset $Z$ of a metric space $(X, \rho)$ is said to be equicontinuous if for any $\varepsilon > 0$ there exists a $\delta > 0$ such that, for all $\alpha$,

$$|f_\alpha(x) - f_\alpha(y)| < \varepsilon \quad \text{if } \rho(x, y) < \delta.$$

Lemma 2.5. Every compact metric space is separable (i.e. has a countable dense subset).

Proof. For every $n \ge 1$, consider the open cover of $X$ by all open balls of radius $1/n$. It has a finite subcover $B(x_1, 1/n), \ldots, B(x_k, 1/n)$ (compactness in the sense of Definition 2.1 implies total boundedness: otherwise one could pick points with pairwise distances at least $1/n$, giving a sequence with no convergent subsequence). Do this for all $n$ and collect the centres in a set. This set is countable and dense.

Next we will prove the following important theorem:

Theorem 2.6 (Arzelà-Ascoli's Theorem). Let $K$ be a family of real or complex-valued functions, uniformly bounded and equicontinuous on a compact metric space $X$. Then any sequence $\{f_n\}$ of functions of $K$ has a subsequence that is uniformly convergent in $X$ to a continuous function.


Proof. Let $\{x_m\}$ be a dense sequence in $X$ (see Lemma 2.5). Take a sequence $\{f_n\}$ of functions of $K$ and consider its values at the first point $x_1$ of $\{x_m\}$. The sequence $\{f_n(x_1)\}$ is bounded and hence has a convergent subsequence, which we write $\{f_{n,1}(x_1)\}$. (In other words: the subsequence $\{f_{n,1}(x)\}$ of $\{f_n\}$ converges at the point $x = x_1$.) Next, consider the values of this subsequence at the second point $x_2$. The sequence $\{f_{n,1}(x_2)\}$ is also bounded, hence it has a convergent subsequence $\{f_{n,2}(x_2)\}$; the sequence $\{f_{n,2}\}$ converges at $x = x_1, x_2$.

We proceed in this way step by step. In the $k$th step, we extract a convergent subsequence $\{f_{n,k}(x_k)\}$ of the bounded sequence $\{f_{n,k-1}(x_k)\}$. A summary of the sequences is given in Table 2.1.

$$\begin{array}{cccc}
f_{1,1}(x), & f_{2,1}(x), & \ldots, & f_{n,1}(x), \ldots \\
f_{1,2}(x), & f_{2,2}(x), & \ldots, & f_{n,2}(x), \ldots \\
\vdots & & & \\
f_{1,k}(x), & f_{2,k}(x), & \ldots, & f_{n,k}(x), \ldots \\
\vdots & & &
\end{array}$$

Table 2.1: (Sub)sequences.

The sequences in the table have the following features:

– They are all subsequences of $\{f_n\}$.

– Each of them, except the first, is a subsequence of the sequence above it; the $k$th sequence in the table converges at the points $x = x_1, x_2, \ldots, x_k$.

Consider now the diagonal sequence $\{f_{n,n}\}$ of the double sequence $\{f_{n,k}\}$, and write $g_n = f_{n,n}$. Then $\{g_n(x_k)\}$ is convergent for every $k$, since, except for its first $k-1$ terms, it is a subsequence of $\{f_{n,k}(x_k)\}$. Indeed, for an arbitrary but fixed $k$, the diagonal sequence is, from its $k$th member on, a subsequence of the $k$th row of Table 2.1, which converges at $x = x_1, \ldots, x_k$; since $k$ is arbitrary, $\{g_n\}$ converges at all the points $x_1, x_2, \ldots$. It remains to upgrade this to uniform convergence on all of $X$.

Since the family $\{g_n\}$ is equicontinuous, for any $\varepsilon > 0$ there is a $\delta > 0$ such that

$$|g_n(x) - g_n(y)| < \frac{\varepsilon}{3} \quad \text{whenever } \rho(x, y) < \delta.$$

For any $x \in X$ we now write

$$|g_n(x) - g_m(x)| \le |g_n(x) - g_n(x_k)| + |g_n(x_k) - g_m(x_k)| + |g_m(x_k) - g_m(x)|,$$

where $x_k$ is a point of the sequence with $\rho(x, x_k) < \delta$. Then

$$|g_n(x) - g_m(x)| < \frac{2\varepsilon}{3} + |g_n(x_k) - g_m(x_k)|. \tag{2.1}$$

Now, since $X$ is compact, we can take a finite covering of $X$ by balls $B_1, \ldots, B_p$ of radius $\delta$ and choose in each ball $B_j$ a point $x_{\alpha_j}$ from the sequence $\{x_m\}$; since $\{x_m\}$ is dense, every ball contains a point of the sequence, no matter how small its radius. Then, with $h = \max(\alpha_1, \ldots, \alpha_p)$, we have finitely many points $x_1, \ldots, x_h$ such that for any $x \in X$ there is a point $x_k$, $1 \le k \le h$, with $\rho(x, x_k) < \delta$.

Since the diagonal sequence converges at each of the finitely many points $x_k$, $1 \le k \le h$, for each such $k$ there is a positive integer $n_k$ such that

$$|g_m(x_k) - g_n(x_k)| < \frac{\varepsilon}{3} \quad \text{if } m \ge n \ge n_k.$$

Using this in (2.1), we get $|g_n(x) - g_m(x)| < \varepsilon$ if $m \ge n \ge \bar{n}$, where $\bar{n} = \max(n_1, \ldots, n_h)$. Thus $\{g_n\}$ is uniformly Cauchy, and hence uniformly convergent.

Denote now by $f(x)$ the uniform limit of $\{g_n(x)\}$. Then, for any $\varepsilon > 0$,

$$|f(x) - f(y)| \le |f(x) - g_n(x)| + |g_n(x) - g_n(y)| + |g_n(y) - f(y)| < \frac{2\varepsilon}{3} + |g_n(x) - g_n(y)|$$

if $n$ is sufficiently large. Fix such an $n$. The equicontinuity of $\{g_n\}$ then implies that $|g_n(x) - g_n(y)| < \varepsilon/3$ if $\rho(x, y) < \delta$. Hence $|f(x) - f(y)| < \varepsilon$ if $\rho(x, y) < \delta$. Thus $f(x)$ is continuous, and the proof of the theorem is complete.

Remark 2.7. Arzelà-Ascoli's Theorem also holds for vector-valued functions $f_n = (f_n^1, \ldots, f_n^m) \in \mathbb{R}^m$. It is easy to see that the sequence $\{f_n(x_1)\}$ is then bounded in $\mathbb{R}^m$ and hence has a convergent subsequence; the proof is similar to that of Theorem 2.6.

2.2 Peano's Theorem for a Single Equation

The following theorem guarantees the existence of a solution, but not necessarily a unique one. Assume now that $f$ is a continuous function in the rectangle

$$R = \{(x, y) :\ -a \le x \le a,\ |y - y_0| \le b\},$$

with maximum norm bounded by $M$, just as in Theorem 1.1.

Theorem 2.8 (Peano's Theorem). Under the conditions specified above, let $T = \min(a, b/M)$. Then the initial value problem

$$y'(x) = f(x, y(x)), \qquad y(0) = y_0 \tag{2.2}$$

has at least one solution $Y(x)$ defined in $(-T, T)$.


Proof. The function $f$ can be uniformly approximated by a sequence $\{f_k\}$ [6, Sec. I.3], where every $f_k$ is Lipschitz continuous in $y$. The Lipschitz constants $L_k$ may increase as $k \to \infty$. Theorem 1.1 shows that for every $k$ there exists a unique solution $Y_k$ to the initial value problem

$$Y_k'(x) = f_k(x, Y_k(x)), \qquad Y_k(0) = y_0, \tag{2.3}$$

$x \in (-T, T)$, $T$ being independent of $k$ (and in particular of $L_k$).

The graph of every function $Y_k$ stays in $R$, and we may assume that $\|f_k\| \le M$ for every $k$. Hence (2.3) implies that the first order derivatives of this sequence are bounded in absolute value by $M$, and we have

$$|Y_k(x) - Y_k(\hat{x})| = \left| \int_{\hat{x}}^{x} Y_k'(s)\,ds \right| = \left| \int_{\hat{x}}^{x} f_k(s, Y_k(s))\,ds \right| \le M|x - \hat{x}| < \varepsilon \tag{2.4}$$

if $|x - \hat{x}| < \delta = \varepsilon/M$. This means that $\{Y_k\}$ is an equicontinuous family of functions. Taking $\hat{x} = 0$ gives

$$|Y_k(x) - y_0| \le M|x| \le MT, \tag{2.5}$$

hence $\{Y_k\}$ is uniformly bounded. The conditions of Arzelà-Ascoli's Theorem are thereby satisfied, giving a subsequence which converges uniformly to a continuous $Y(x)$. Passing to integrals and limits in (2.3) we get

$$Y(x) = y_0 + \int_0^x f(s, Y(s))\,ds.$$

A differentiation shows that $Y(x)$ is the required solution.

Remark 2.9. Notice that the Lipschitz constants $L_k$ are needed only as an intermediate step, to solve the problem for $f_k$, and play no role afterwards. Hence it is sufficient for $f$ to be merely continuous in Peano's Theorem (while it is assumed Lipschitz continuous in Picard's).

Remark 2.10. The solution in Peano's Theorem is in general not unique. Consider the differential equation $y' = 3y^{2/3}$. With $y_0 = 0$ there exists the trivial null solution, i.e. $y(x) = 0$ for every $x$. But there also exists the solution $y(x) = x^3$, which satisfies the initial condition $y(0) = 0$. This non-uniqueness will be studied in more detail later on.
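The two solutions named in Remark 2.10 can be verified directly. The snippet below is my own check (the function names and the sample points are mine): it compares a central-difference derivative of each candidate against the right-hand side $f(x, y) = 3y^{2/3}$.

```python
# Two candidate solutions of y' = 3 y^(2/3), y(0) = 0 (Remark 2.10):
sol_zero = lambda x: 0.0          # the trivial null solution
sol_cube = lambda x: x ** 3       # the nontrivial solution

f = lambda x, y: 3.0 * abs(y) ** (2.0 / 3.0)   # |y| keeps the root real

def residual(y, pts=(0.0, 0.5, 1.0, 1.5), h=1e-6):
    """Max deviation of the central-difference derivative from f(x, y(x))."""
    return max(abs((y(x + h) - y(x - h)) / (2 * h) - f(x, y(x))) for x in pts)
```

Both residuals are at the level of the finite-difference error, confirming that two distinct functions satisfy the same initial value problem.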

2.3 Peano's Theorem for Systems

Assume now that the function $f$ is defined in a rectangle $R$, with $y \in \mathbb{R}^n$ and $|y| = \sqrt{y_1^2 + \ldots + y_n^2}$, just as in Theorem 1.7, and that its maximum norm is bounded by $M$.

Theorem 2.11 (Peano's Theorem for Systems). Under the conditions specified above, let $T = \min(a, b/M)$. Then the initial value problem

$$y'(x) = f(x, y_1(x), \ldots, y_n(x)), \qquad y(0) = y_0$$

has at least one solution $Y(x)$ defined in $(-T, T)$.

The same proof as in Theorem 2.8 also holds for a system of differential equations. Note that $f$ is approximated by $f_k = (f_k^1, \ldots, f_k^n)$.

2.4 On Extending Solutions

Next, we investigate what happens to a solution as the limits of its domain of existence are approached. We prove the result only for the right-hand limit; the other case is similar.

Theorem 2.12. Let $f \in C(U, \mathbb{R}^n)$, $U$ open in $\mathbb{R} \times \mathbb{R}^n$, and let $y = y(x)$ be a solution of

$$y'(x) = f(x, y(x)), \qquad y(x_0) = y_0 \tag{2.6}$$

on a maximal interval of existence $(\omega_-, \omega_+)$. Then, for any compact set $K \subset U$ such that $(x_0, y_0) \in K$, we have $(x, y(x)) \notin K$ for all $x$ near $\omega_\pm$.

Remark 2.13. The extension of $y(x)$ over a maximal interval of existence need not be unique and, correspondingly, $\omega_\pm$ depend on the extension. To say that, given any compact set $K$, $(x, y(x)) \notin K$ for all $x$ near $\omega_\pm$ means that $(x, y(x))$ tends to $\partial U$ or to infinity as $x \to \omega_\pm$. The interval $(\omega_-, \omega_+)$ is connected, and $\omega_- = -\infty$ or $\omega_+ = \infty$ (or both) is allowed.

Proof. According to our assumptions, $(\omega_-, \omega_+)$ is the maximal interval of existence of $y(x)$. Consider $x_0 \le x < \omega_+$. We claim that $(x, y(x))$ cannot remain in $K$ for all $x \in [x_0, \omega_+)$.

If the claim is false, we have $(x, y(x)) \in K$ for all $x \in [x_0, \omega_+)$. Since $f$ is continuous and $K$ compact, there exists $M > 0$ such that $|f(x, y)| \le M$ on $K$.

Take a sequence $x_k \to \omega_+$. If $x_m > x_l$ are two elements of the sequence, we have

$$|y(x_m) - y(x_l)| = \left| \int_{x_l}^{x_m} y'(s)\,ds \right| = \left| \int_{x_l}^{x_m} f(s, y(s))\,ds \right| \le \int_{x_l}^{x_m} |f(s, y(s))|\,ds \le M|x_m - x_l|.$$

For every $\varepsilon > 0$ there exists $k_0$ such that $M|x_m - x_l| \le \varepsilon$ if $l, m > k_0$ (since $x_l, x_m \to \omega_+$). Hence $\{y(x_k)\}$ is a Cauchy sequence, $y(x_k) \to \bar{y}$, and $(\omega_+, \bar{y}) \in K$ since $K$ is closed. Take now another sequence $\tilde{x}_k \to \omega_+$. Then

$$|y(\tilde{x}_k) - y(x_k)| = \left| \int_{x_k}^{\tilde{x}_k} f(s, y(s))\,ds \right| \le M|\tilde{x}_k - x_k| \to 0 \quad \text{as } k \to \infty,$$

hence $y(\tilde{x}_k) \to \bar{y}$ as well. We conclude that

$$\lim_{x \to \omega_+} y(x) = \bar{y} =: y(\omega_+).$$

Since $y(x)$ is a solution of (2.6), we know that

$$y(x) = y_0 + \int_{x_0}^{x} f(s, y(s))\,ds. \tag{2.7}$$

Taking limits as $x \to \omega_+$ we get

$$y(\omega_+) = \lim_{x \to \omega_+} y(x) = y_0 + \lim_{x \to \omega_+} \int_{x_0}^{x} f(s, y(s))\,ds = y_0 + \int_{x_0}^{\omega_+} f(s, y(s))\,ds,$$

because the integrand is continuous and bounded. We have just shown that (2.7) also holds for $x = \omega_+$; hence (2.6) and (2.7) hold for all $x \in [x_0, \omega_+]$.

We now solve (2.6) with the initial condition at $\omega_+$ given by the value $y(\omega_+)$. According to Peano's Theorem there exists a solution on $(\omega_+ - \delta, \omega_+ + \delta)$ for some $\delta > 0$, and hence on $(\omega_-, \omega_+ + \delta)$. This contradicts the maximality of $(\omega_-, \omega_+)$, hence the claim is true.

So far we have only shown that there exists some $x \in (x_0, \omega_+)$ with $(x, y(x)) \notin K$. We need to show that $(x, y(x)) \notin K$ for all $x$ near $\omega_+$.

If this is false, there exist $x_k \to \omega_+$ such that $(x_k, y(x_k)) \in K$. Since $K$ is compact, $\{(x_k, y(x_k))\}$ has a convergent subsequence; passing to a subsequence, we may assume $y(x_k) \to \bar{y}$. Let $K_0 \subset U$ be a compact neighborhood of $(\omega_+, \bar{y})$ and $M_0 := \max_{K_0} |f|$. For all sufficiently large $k$ we have $(x_k, y(x_k)) \in K_0$. According to the first part of this proof, applied to (2.6) with $y(x_k)$ as the initial value, there exists $x > x_k$ such that $(x, y(x)) \notin K_0$. It follows that there exist $\tilde{x}_k > x_k$, $\tilde{x}_k \to \omega_+$, such that $(\tilde{x}_k, y(\tilde{x}_k)) \in \partial K_0$ and $(x, y(x)) \in K_0$ for all $x \in [x_k, \tilde{x}_k]$. But then

$$|y(\tilde{x}_k) - y(x_k)| \le M_0(\tilde{x}_k - x_k) \to 0,$$

and $y(\tilde{x}_k) \to \bar{y}$, since $y(x_k) \to \bar{y}$. We thus have $(\tilde{x}_k, y(\tilde{x}_k)) \in \partial K_0$ and $(\tilde{x}_k, y(\tilde{x}_k)) \to (\omega_+, \bar{y})$, which is impossible, since the distance from $(\omega_+, \bar{y})$ to $\partial K_0$ is positive. This contradiction shows that $(x, y(x)) \notin K$ for all $x$ near $\omega_+$, and the proof of the theorem is complete.
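A standard illustration of Theorem 2.12 (my own, not from the report) is $y' = y^2$, $y(0) = 1$ on $U = \mathbb{R} \times \mathbb{R}$: the unique solution $y(x) = 1/(1-x)$ has $\omega_+ = 1$, and its graph leaves every compact set as $x \to 1$. A rough Euler sketch:

```python
def euler(f, x0, y0, x_end, n):
    """Explicit Euler for y' = f(x, y), y(x0) = y0; returns the y-values."""
    h = (x_end - x0) / n
    x, y = x0, y0
    ys = [y0]
    for _ in range(n):
        y += h * f(x, y)
        x += h
        ys.append(y)
    return ys

# y' = y^2, y(0) = 1 has the exact solution y = 1/(1 - x), so omega_+ = 1:
ys = euler(lambda x, y: y * y, 0.0, 1.0, 0.9, 90000)
err_mid = abs(ys[50000] - 2.0)   # exact value at x = 0.5 is 2
escape = ys[-1]                  # y(0.9); exact value is 10
```

Here $U$ is all of $\mathbb{R}^2$, so the graph cannot reach $\partial U$; instead $y(x) \to \infty$ as $x \to \omega_+$, exactly the second alternative of Remark 2.13.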

2.5 On Maximal Solutions

Definition 2.14. Let $f \in C(U, \mathbb{R})$, where $U \subset \mathbb{R}^2$ is open. A $C^1$-function $Y(x)$ satisfying the differential equation $y'(x) = f(x, y(x))$ is called a Peano solution to the equation determined by $f$. Let $P(f, y_0)$ denote the set of Peano solutions with initial condition $y(0) = y_0$, and let

$$\Gamma_{y_0}(x) = \{Y(x) : Y \in P(f, y_0)\}.$$

Proposition 2.15. Let $R$ and $T$ be as in Theorem 2.8. Then the set $\Gamma_{y_0}(x)$ is a point or a closed interval for any $x \in (-T, T)$.

Proof. Let $\{Y_k\}$ be a sequence of Peano solutions with the common initial value $y_0$; then $Y_k(x) \in \Gamma_{y_0}(x)$. This sequence of functions is equicontinuous and uniformly bounded on $(-T, T)$ (see the proof of Theorem 2.8) and therefore has a uniformly convergent subsequence whose limit $Y(x)$ is again a solution. Hence $Y(x) \in \Gamma_{y_0}(x)$, and it follows that $\Gamma_{y_0}(x)$ is a closed set.

It remains to prove that this closed set is connected. We do this by showing that if $Y_1, Y_2 \in P(f, y_0)$ are solutions with $Y_1(x_0) < Y_2(x_0)$, and if $\hat{y} \in (Y_1(x_0), Y_2(x_0))$, then there exists a solution $\hat{Y} \in P(f, y_0)$ with $\hat{Y}(x_0) = \hat{y}$.

We begin with a solution $Z$ of the equation with $Z(x_0) = \hat{y}$. Such a solution exists at least until it reaches the boundary of $R$, according to Theorem 2.12. It lies between $Y_1$ and $Y_2$ to start with, and if we let $x$ move from $x_0$ towards $0$ (where $x_0 > 0$, say) two things can happen. Either $Z$ stays between $Y_1$ and $Y_2$ until $x = 0$, and then we can put $\hat{Y} = Z$; or it reaches one of the solutions, say $Y_1$, at some $x = x_1 \in (0, x_0)$. In the latter case we let $\hat{Y} = Y_1$ for $0 \le x \le x_1$ and $\hat{Y} = Z$ for $x_1 < x \le x_0$. In this way, $\hat{Y}$ lies in the region bounded by $Y_1$ and $Y_2$ the whole time, hence it cannot reach the boundary of $R$. We have constructed $\hat{Y}$ as required.

Definition 2.16. Let $R$ and $T$ be as above. Then $Y^0 \in P(f, y_0)$ is a maximal solution on $(-T, T)$ if, for every solution $Y \in P(f, y_0)$,

$$Y(x) \le Y^0(x) \qquad \forall\ x \in (-T, T).$$

The definition of a minimal solution is analogous.

Theorem 2.17 (Maximal Solutions). Let $R$ and $T$ be as above. Then $P(f, y_0)$ contains a maximal and a minimal solution.

Proof. Let $\{Y_k\}$ be a sequence of solutions to

$$y' = f(x, y) + \frac{1}{k}, \qquad y(0) = y_0 \qquad (k \to \infty).$$

They exist for $x \in (-T_k, T_k)$, where $T_k = \min\!\left(a, \frac{b}{M + 1/k}\right) \le T$. By the same method as in the proof of Peano's Theorem, we show that the $Y_k$ are equicontinuous, with

$$|Y_k(x) - Y_k(\hat{x})| \le \left(M + \frac{1}{k}\right)|x - \hat{x}|, \qquad x, \hat{x} \in (-T_k, T_k),$$

and uniformly bounded, with

$$|Y_k(x) - y_0| \le \left(M + \frac{1}{k}\right) T_k.$$

Now we can use the Arzelà-Ascoli Theorem, which gives a convergent subsequence. In the limit ($k \to \infty$) we then have a solution $Y^0$ of the initial value problem.

It remains to show that $Y^0$ is a maximal solution, i.e. that $Y \le Y^0$ holds for every other solution $Y$. First we show this for $x \ge 0$. We have $Y'(0) < Y_k'(0)$, since $Y'(0) = f(0, y_0)$, $Y_k'(0) = f(0, y_0) + \frac{1}{k}$ and $Y(0) = Y_k(0)$. This implies that $Y(x) < Y_k(x)$ for small $x > 0$. After that, the two solutions can never meet at a point $\bar{x} > 0$: at a first meeting point we would need $Y'(\bar{x}) \ge Y_k'(\bar{x})$, which is impossible, since there $Y'(\bar{x}) = f(\bar{x}, Y(\bar{x})) < f(\bar{x}, Y_k(\bar{x})) + \frac{1}{k} = Y_k'(\bar{x})$. Hence

$$Y(x) \le Y_k(x) \tag{2.8}$$

for $x \in [0, T_k)$. Since (a subsequence of) $Y_k \to Y^0$ and $T_k \to T$, it follows from (2.8) that $Y(x) \le Y^0(x)$, and so $Y^0$ is a maximal solution. (For $x < 0$, consider the equation $y' = f(x, y) - \frac{1}{k}$.)

To show that there exists a minimal solution, consider for $x \ge 0$ the equation $y' = f(x, y) - \frac{1}{k}$ (and for $x < 0$, $y' = f(x, y) + \frac{1}{k}$) and proceed in an analogous way.
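The perturbation scheme in the proof can be watched numerically. In the sketch below (my own; the equation $y' = 2\sqrt{|y|}$ with $y(0) = 0$, the step size and the values of $k$ are assumed choices) Euler's method is applied to the perturbed equations $y' = 2\sqrt{|y|} + 1/k$; the perturbed approximations decrease as $k$ grows, toward the maximal solution $x^2$ for $x \ge 0$.

```python
import math

def euler_perturbed(k, T=1.0, n=1000):
    """Explicit Euler for y' = 2*sqrt(|y|) + 1/k, y(0) = 0, on [0, T]."""
    h = T / n
    y = 0.0
    for _ in range(n):
        y += h * (2.0 * math.sqrt(abs(y)) + 1.0 / k)
    return y   # approximation of Y_k(T)

vals = [euler_perturbed(k) for k in (10, 100, 1000)]   # decreasing in k
```

The strict ordering of the three values reproduces inequality (2.8): a larger perturbation $1/k$ pushes the solution strictly upward at every step.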

Example 2.18. Consider the initial value problem $y' = 2|y|^{1/2}$, $y(0) = 0$. One solution is

$$y_1(x) = \begin{cases} x^2 & \text{for } x \ge 0 \\ -x^2 & \text{for } x \le 0. \end{cases}$$

But for any $x_0 > 0$ we also have the solution $y_2$, and for $x_0 < 0$ the solution $y_3$:

$$y_2(x) = \begin{cases} (x - x_0)^2 & \text{for } x \ge x_0 \\ 0 & \text{for } x \le x_0, \end{cases} \qquad y_3(x) = \begin{cases} 0 & \text{for } x \ge x_0 \\ -(x - x_0)^2 & \text{for } x \le x_0. \end{cases}$$

Considering all the solutions, we find that the maximal and the minimal solutions are

$$y_{\max}(x) = \begin{cases} x^2 & \text{for } x \ge 0 \\ 0 & \text{for } x \le 0 \end{cases} \qquad \text{and} \qquad y_{\min}(x) = \begin{cases} 0 & \text{for } x \ge 0 \\ -x^2 & \text{for } x \le 0. \end{cases}$$

To show that $y = x^2$ is the maximal solution for $x \ge 0$, assume the contrary, i.e. that there exists a solution $\bar{y}$ with $\bar{y}(\hat{x}) = \hat{y} > \hat{x}^2$ for some $\hat{x} > 0$. By solving the differential equation $y'(x) = 2\sqrt{|y|}$ with the initial condition $\bar{y}(\hat{x}) = \hat{y}$, we get the solution $\bar{y}(x) = (x + \sqrt{\hat{y}} - \hat{x})^2$. (It is unique because $f$ is Lipschitz continuous away from $y = 0$; see the argument below.) Since $\hat{y} > \hat{x}^2$ gives $\sqrt{\hat{y}} > \hat{x}$, this means that $\bar{y}(0) = (\sqrt{\hat{y}} - \hat{x})^2 > 0$, and hence $\bar{y}$ is not a solution of the initial value problem (2.2). A similar argument shows that if $\hat{x} < 0$ and $\hat{y} > 0$, then $\bar{y}(0) > 0$. The proof for the minimal solution is analogous.

If $0 < \hat{y} < \hat{x}^2$, then there does exist a solution $\bar{y}$ such that $\bar{y}(\hat{x}) = \hat{y}$. By solving the differential equation as above, we once again get the function $\bar{y}(x) = (x + \sqrt{\hat{y}} - \hat{x})^2$. But now, since $0 < \hat{y} < \hat{x}^2$, we have $\bar{y}(\hat{x} - \sqrt{\hat{y}}) = 0$. If we let $x_0 = \hat{x} - \sqrt{\hat{y}}$, so that $0 < x_0 < \hat{x}$, it suffices to put $\bar{y} = 0$ for $x < x_0$, and we obtain a solution that satisfies the initial condition $y(0) = 0$. Hence $\Gamma_0(\hat{x})$ is the interval $[0, \hat{x}^2]$, which illustrates the conclusion of Proposition 2.15.

Finally, we show that $f(y) = 2\sqrt{|y|}$ is Lipschitz continuous everywhere except near $0$. For each $y_1 > y_2 \ge a > 0$ we have

$$2\sqrt{y_1} - 2\sqrt{y_2} = \frac{2(y_1 - y_2)}{\sqrt{y_1} + \sqrt{y_2}} \le \frac{1}{\sqrt{a}}(y_1 - y_2),$$

hence $f$ is Lipschitz continuous for $y \ge a > 0$ and, by a similar argument, for $y \le -a < 0$. But when examining Lipschitz continuity around $0$ we would need

$$|f(y) - f(0)| = 2\sqrt{|y|} \le L|y| \iff \frac{2}{L} \le \sqrt{|y|} \iff \frac{4}{L^2} \le |y| \qquad \forall\ y \ne 0 \text{ close to } 0.$$

Taking $0 < |y| < \frac{4}{L^2}$ gives a contradiction. Hence no Lipschitz constant exists in any neighborhood of $y = 0$.


Chapter 3

Picard's Theorem by the Contraction Mapping Principle

3.1 Linear Vector Spaces

Definition 3.1. A linear vector space (or linear space) $X$ over $\mathbb{R}$ (or $\mathbb{C}$) is a collection of elements (vectors) $\{x, y, z, \ldots\}$ with the following properties. There exists an operation, addition, on $X$, which to every pair $x, y \in X$ assigns a third vector $x + y \in X$; and every vector $x \in X$ can be multiplied by an arbitrary $a \in \mathbb{R}$ (or $\mathbb{C}$), the product being a new vector $a \cdot x \in X$. Both operations satisfy the following rules of addition and multiplication, for all vectors $x, y, z$ and all $a, b \in \mathbb{R}$ (or $\mathbb{C}$):

(i) $x + y = y + x$

(ii) $(x + y) + z = x + (y + z)$

(iii) there exists a null vector $0$ such that $x + 0 = x$ for all $x \in X$

(iv) for every vector $x$ there exists a vector $-x$ such that $x + (-x) = 0$

(v) $a(bx) = (ab)x$

(vi) $1 \cdot x = x$

(vii) $(a + b)x = ax + bx$

(viii) $a(x + y) = ax + ay$


Definition 3.2. A linear space $X$ is a normed linear space if to each $x \in X$ there corresponds a real number $\|x\|$, called the norm of $x$, which satisfies:

1. $\|x\| > 0$ for $x \ne 0$, and $\|0\| = 0$;

2. $\|x + y\| \le \|x\| + \|y\|$ (triangle inequality);

3. $\|ax\| = |a| \cdot \|x\|$ for all $a \in \mathbb{R}$ (or $\mathbb{C}$) and $x \in X$.

Definition 3.3. A sequence $\{x_n\}$ in a normed linear space $X$ is a Cauchy sequence if for every $\varepsilon > 0$ there is an $N(\varepsilon) > 0$ such that

$$\|x_n - x_m\| < \varepsilon \quad \text{if } n, m \ge N(\varepsilon).$$

The space $X$ is complete if every Cauchy sequence in $X$ converges to an element of $X$.

Definition 3.4. A complete normed linear space is a Banach space.

3.2 Contraction Mapping Principle

Definition 3.5. Let X be a metric space and let F be a transformation (mapping), F : X → X. F is said to be a contraction on X if there is a q, 0 ≤ q < 1, such that

ρ(F (x), F (y)) ≤ qρ(x, y) ∀ x, y ∈ X.

A fixed point of a transformation F : X → X is a point x ∈ X such that F (x) = x. The following is a theorem asserting the existence of a fixed point.

Theorem 3.6 (Contraction Mapping Principle). Let X be a complete metric space and let F : X → X be a contraction . Then there exists a unique solution ¯x to the equation

x = F (x),

i.e. there exists a unique fixed point of F in X. Moreover, if x0 ∈ X is arbitrary and xn= F (xn−1), then the sequence {xn}n=1 converges to ¯x and the estimate

ρ(¯x, xn) ≤ qn

1 − q ρ(x1, x0), (3.1) where q is the contraction constant for F on X, holds.

Proof. We prove that {xn}n=1 is a Cauchy sequence. First we see that ρ(xn, xn−1) = ρ(F (xn−1), F (xn−2)) ≤ qρ(xn−1, xn−2) ≤ . . . ≤ qn−1ρ(x1, x0).

(28)

Further, for m > n we have

ρ(xm, xn) ≤ ρ(xm, xm−1) + ρ(xm−1, xm−2) + . . . + ρ(xn+1, xn) ≤

≤ (qm−1+ qm−2+ . . . + qn)ρ(x1, x0) =

= qn1 − qm−n

1 − q ρ(x1, x0) ≤ qn

1 − q ρ(x1, x0).

Hence

ρ(xm, xn) ≤ qn

1 − q ρ(x1, x0) (m > n) (3.2) from which follows, since qn → 0 for n → ∞ (q < 1), that the sequence {xn}n=1 is a Cauchy sequence. Since X is a complete metric space, there exists an ¯x ∈ X such that limn→∞xn= ¯x. For this limit, the estimate (3.1) holds by letting m → ∞ in (3.2).

Being a contraction, F is a continuous mapping, and therefore

x̄ = lim_{n→∞} x_n = lim_{n→∞} F(x_{n−1}) = F(lim_{n→∞} x_{n−1}) = F(x̄).

Hence x̄ ∈ X is a fixed point of F. If now x̄ = F(x̄) and ȳ = F(ȳ), then

ρ(x̄, ȳ) = ρ(F(x̄), F(ȳ)) ≤ q ρ(x̄, ȳ),

i.e. ρ(x̄, ȳ) = 0 (since q < 1), which proves the uniqueness of the solution.
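The proof is constructive: iterating F from any starting point converges at a geometric rate governed by (3.1). A minimal numerical sketch in Python, assuming the illustrative choice F(x) = cos x on [0, 1] (a contraction with q = sin 1; this example is not from the text):

```python
import math

# F(x) = cos(x) maps [0, 1] into itself and |F'(x)| = |sin x| <= sin(1) < 1,
# so F is a contraction on [0, 1] with constant q = sin(1) ~ 0.841.
F = math.cos
q = math.sin(1.0)

xs = [0.5]                         # arbitrary starting point x_0
for _ in range(500):               # x_n = F(x_{n-1})
    xs.append(F(xs[-1]))

x_bar = xs[-1]                     # numerically converged fixed point
rho_10 = abs(xs[1] - xs[0])        # rho(x_1, x_0)

# Check the a priori estimate (3.1): rho(x_bar, x_n) <= q^n/(1 - q) * rho(x_1, x_0)
for n in range(1, 60):
    assert abs(x_bar - xs[n]) <= q**n / (1 - q) * rho_10

print(round(x_bar, 6))             # prints 0.739085
```

The a priori bound (3.1) is what makes the iteration practical: it tells in advance how many steps guarantee a prescribed accuracy, before any iterate is computed.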

3.3 Picard’s Theorem

A classical application of the Contraction Mapping Principle is the proof of Picard's Theorem (in R^n). The theorem has previously been proved by successive iterations (see Theorems 1.1 and 1.7).

Theorem 3.7 (Picard's Theorem).

Let D be an open set in R × R^n and let f : D → R^n, f = f(x, y_1, …, y_n), be continuous and locally Lipschitz continuous with respect to the y-variables.

Then for any (x_0, y_0) ∈ D there exists α > 0 such that the initial value problem

y′(x) = f(x, y(x)),  y(x_0) = y_0  (3.3)

has a unique solution ϕ(x) on the interval [x_0 − α, x_0 + α].

Proof. First, we rewrite the initial value problem into an equivalent fixed-point problem by defining F(y) as

F(y)(x) = y_0 + ∫_{x_0}^{x} f(s, y(s)) ds,

x ∈ [x_0 − α, x_0 + α], y ∈ C([x_0 − α, x_0 + α], R^n). Then y is a solution to the initial value problem if and only if y(x) = F(y)(x) (see (1.5)). Notice that also F(y) ∈ C([x_0 − α, x_0 + α], R^n).

Now we wish to solve the equation

F(y) = y

in a closed subset X of the Banach space C([x_0 − α, x_0 + α], R^n) for a suitably small α > 0.

We need two properties of F and X: F must map X into X, and F must be a contraction on X. Since f is only locally Lipschitz continuous, we first find a neighborhood of (x_0, y_0) on which the Lipschitz condition holds.

Choose first a, b such that

R_1 := [x_0 − a, x_0 + a] × {y ∈ R^n : ‖y − y_0‖ ≤ b} ⊂ D.

This set R_1 is compact, and therefore f is bounded and uniformly Lipschitz continuous on it, i.e. there are constants M, L such that

‖f(s, y)‖ ≤ M,  ‖f(s, y_1) − f(s, y_2)‖ ≤ L ‖y_1 − y_2‖  for (s, y_1), (s, y_2) ∈ R_1.

Put

X = {y ∈ C([x_0 − α, x_0 + α], R^n) : ‖y(x) − y_0‖ ≤ b for all x ∈ [x_0 − α, x_0 + α]}

for some α ≤ a. Let I_α := [x_0 − α, x_0 + α]. Then

sup_{x ∈ I_α} ‖F(y)(x) − y_0‖ = sup_{x ∈ I_α} ‖∫_{x_0}^{x} f(s, y(s)) ds‖ ≤ αM  (3.4)

and

sup_{x ∈ I_α} ‖F(y_1)(x) − F(y_2)(x)‖ = sup_{x ∈ I_α} ‖∫_{x_0}^{x} (f(s, y_1(s)) − f(s, y_2(s))) ds‖
≤ L sup_{x ∈ I_α} |∫_{x_0}^{x} ‖y_1(s) − y_2(s)‖ ds|
≤ αL sup_{x ∈ I_α} ‖y_1(x) − y_2(x)‖.  (3.5)

If we choose α so small that αM ≤ b and αL < 1, then F maps X into itself (the first condition, see (3.4)) and is a contraction with q = αL < 1 (the second condition, see (3.5)). By the Contraction Mapping Principle, F has a unique fixed point ϕ in X, i.e.

ϕ(x) = F(ϕ)(x) ≡ y_0 + ∫_{x_0}^{x} f(s, ϕ(s)) ds.

This is the unique solution of the initial value problem (3.3) on the interval [x_0 − α, x_0 + α].

Remark 3.8. Because of Theorem 2.12 and Proposition 1.9, Picard's Theorems 1.1, 1.7 and 3.7 are equivalent; one can extend (−T, T) and [x_0 − α, x_0 + α] to the same maximal interval of existence.
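The successive approximations behind Theorems 1.1, 1.7 and 3.7 can be carried out symbolically for simple right-hand sides. A Python sketch, assuming the illustrative problem y′ = y, y(0) = 1 (not an example from the text); each Picard iterate F(y)(x) = 1 + ∫_0^x y(s) ds is then a Taylor polynomial of e^x:

```python
import math

# Picard iteration for y' = y, y(0) = 1, i.e. y -> 1 + integral_0^x y(s) ds.
# Polynomials are coefficient lists [c0, c1, ...] meaning c0 + c1*x + c2*x^2 + ...

def picard_step(coeffs):
    # integrate term by term: c_k x^k -> c_k/(k+1) x^{k+1}, then add y_0 = 1
    integral = [0.0] + [c / (k + 1) for k, c in enumerate(coeffs)]
    integral[0] = 1.0
    return integral

y = [1.0]                     # phi_0(x) = 1
for _ in range(10):
    y = picard_step(y)

# After n steps the iterate is the degree-n Taylor polynomial of e^x
assert all(abs(c - 1.0 / math.factorial(k)) < 1e-12 for k, c in enumerate(y))

x = 0.5
value = sum(c * x**k for k, c in enumerate(y))
print(abs(value - math.exp(x)) < 1e-8)   # prints True
```

The convergence of these polynomials to e^x on a bounded interval is exactly the convergence of the fixed-point iterates guaranteed by the Contraction Mapping Principle in the proof above.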


Chapter 4

Peano's Theorem by the Schauder Fixed Point Theorem

4.1 Compact Operators

Definition 4.1. A subset X of a linear space is convex if for all x, y ∈ X,

λx + (1 − λ)y ∈ X for 0 ≤ λ ≤ 1,

that is, X contains the line segment joining x and y.

Definition 4.2. A subset of a metric space is called relatively compact if its closure in this space is compact.

Definition 4.3. Let X, V be normed linear spaces and let K ⊂ X. A mapping F : K → V is called a compact operator if F is continuous on K and F(M) is a relatively compact set in V for any bounded set M ⊂ K.

Definition 4.4. The set of all compact operators on K into V is denoted by C(K, V). If the image of F ∈ C(K, V) is a subset of a finite-dimensional subspace of V, then we say that F is a finite-dimensional operator and write F ∈ C_f(K, V).

Lemma 4.5. Let X, V be normed linear spaces and let K ⊂ X be bounded.

If F ∈ C(K, V), then there exists a sequence {F_n}_{n=1}^∞ ⊂ C_f(K, V) which converges uniformly to F on K.

Proof. Since the closure of F(K) is compact, there is a finite 1/n-net y_1, …, y_m ∈ F(K) of F(K), i.e. for every y ∈ F(K) there is an i such that d(y, y_i) < 1/n. The functions

ψ_k(x) = max{0, 1/n − ‖F(x) − y_k‖}

are continuous on K and Σ_{k=1}^{m} ψ_k(x) > 0 for every x ∈ K (because ‖F(x) − y_k‖ < 1/n for some k). Therefore the functions

μ_k(x) := ψ_k(x) / Σ_{j=1}^{m} ψ_j(x),  k = 1, …, m,

form a continuous partition of unity¹ on K. Put

F_n(x) = Σ_{k=1}^{m} μ_k(x) y_k,  x ∈ K.

Then F_n ∈ C_f(K, V) and

‖F(x) − F_n(x)‖ = ‖Σ_{k=1}^{m} μ_k(x)(F(x) − y_k)‖ ≤ Σ_{k=1}^{m} μ_k(x) ‖F(x) − y_k‖ < 1/n

for every x ∈ K (the terms with μ_k(x) ≠ 0 satisfy ‖F(x) − y_k‖ < 1/n), hence F_n converges uniformly to F.
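The finite-dimensional approximation in the proof can be carried out numerically. A Python sketch, assuming an illustrative compact map F(t) = (cos t, sin t) on K = [0, 2π] and a net sampled from its image (these choices are not from the text): the weights ψ_k and μ_k are built exactly as in the lemma, and the resulting F_n stays within 1/n of F.

```python
import math

# Illustrative continuous map on K = [0, 2*pi] with relatively compact image in R^2
def F(t):
    return (math.cos(t), math.sin(t))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

n = 10
# A finite 1/n-net of F(K): images of m sample points; the chord spacing
# 2*sin(pi/m) ~ 0.063 is well below 1/n = 0.1, so every F(t) is near the net.
m = 100
net = [F(2 * math.pi * k / m) for k in range(m)]

def F_n(t):
    # psi_k(t) = max(0, 1/n - ||F(t) - y_k||), then normalize to mu_k
    psi = [max(0.0, 1.0 / n - dist(F(t), y)) for y in net]
    s = sum(psi)
    return (sum(p * y[0] for p, y in zip(psi, net)) / s,
            sum(p * y[1] for p, y in zip(psi, net)) / s)

# ||F(t) - F_n(t)|| < 1/n for every t, as in the lemma
samples = [2 * math.pi * j / 1000 for j in range(1000)]
worst = max(dist(F(t), F_n(t)) for t in samples)
print(worst < 1.0 / n)   # prints True
```

Note that F_n(t) is a convex combination of the net points y_k that lie within 1/n of F(t), which is why the error bound follows from convexity of balls.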

4.2 Schauder’s Fixed Point Theorem

Fixed point theorems in higher dimensions have been difficult to prove.

Brouwer's Fixed Point Theorem has been generalized to Banach spaces by Schauder. Since Schauder's proof uses Brouwer's Fixed Point Theorem, we will only state the latter result, without proving it.

Theorem 4.6 (Brouwer’s Fixed Point Theorem). Let K be a non-empty, convex, closed and bounded subset of Rn. Assume that F : K → K is continuous. Then F has a fixed point in K.

A proof can be found in [2, Thm 5.1.3].

Theorem 4.7 (Schauder’s Fixed Point Theorem). Let K be a non-empty, closed, convex and bounded subset of a normed linear space X. Assume that F ∈C (K, X) and F (K) ⊂ K. Then there exists a fixed point of F in K.

Proof. Let {F_n}_{n=1}^∞ be a sequence constructed in the proof of Lemma 4.5, i.e. F_n(x) = Σ_{k=1}^{m} μ_k(x) y_k, x ∈ K. Denote the set of all linear combinations of y_1, …, y_m by

X_n := Lin{y_1, …, y_m}.

Then F_n(K) ⊂ X_n because F_n(x) is a linear combination of y_1, …, y_m for each x ∈ K. Since y_1, …, y_m ∈ F(K) ⊂ K, convexity gives F_n(x) = Σ_{k=1}^{m} μ_k(x) y_k ∈ K. Hence

F_n(K) ⊂ K ∩ X_n.

¹A partition of unity on a set K is a (finite or infinite) system {μ_k} of functions on K such that

0 ≤ μ_k(x) ≤ 1,  Σ_k μ_k(x) = 1  for all x ∈ K.


The restriction of F_n to K ∩ X_n satisfies the assumptions of the Brouwer Fixed Point Theorem, and hence there is x_n ∈ K ∩ X_n such that

F_n(x_n) = x_n.

Since {x_n} is bounded and F is compact, the sequence {F(x_n)} is relatively compact. Hence there is a subsequence {F(x_{n_k})}_{k=1}^∞ which converges to some x in the closure of F(K), which is contained in the closure of K, i.e. in K itself. The estimate

‖F(x_{n_k}) − x_{n_k}‖ = ‖F(x_{n_k}) − F_{n_k}(x_{n_k})‖ < 1/n_k

implies that also lim_{k→∞} x_{n_k} = x. Since F is continuous,

lim_{k→∞} F(x_{n_k}) = F(x).

Together with lim_{k→∞} F(x_{n_k}) = x this gives F(x) = x, which proves the theorem.

Next we will give a lemma which is needed in the proof of Theorem 4.10.

Lemma 4.8. Consider the following boundary value problem:

y″ = g(x),  y(0) = y(1) = 0,  (4.1)

with g continuous on [0, 1]. Let y(x) = ∫_0^1 G(x, s) g(s) ds, where

G(x, s) = s(x − 1) for 0 ≤ s ≤ x ≤ 1,  G(x, s) = x(s − 1) for 0 ≤ x ≤ s ≤ 1.

Then y is a solution to (4.1).

Remark 4.9. G(x, s) is called the Green function.

Proof. Notice that the function y can also be written as

y = ∫_0^1 G(x, s) g(s) ds = ∫_0^1 sx g(s) ds − ∫_0^x s g(s) ds − ∫_x^1 x g(s) ds.

Each solution to the boundary value problem (4.1) can be written in the form y = y_h + y_p, where y_h is a solution to the homogeneous equation and y_p a particular solution to the inhomogeneous equation. It is easy to see that y_h = c_1 x + c_2 for some constants c_1, c_2. By using variation of constants [9, Sec. 19] we set

y_p = v_1(x) x + v_2(x)

(33)

and obtain

y_p′ = v_1′(x) x + v_2′(x) + v_1(x).

If we now let v_1′(x) x + v_2′(x) = 0, we will have y_p″ = v_1′(x) = g(x). This gives us the following system of equations:

v_1′(x) x + v_2′(x) = 0,
v_1′(x) = g(x).

By integrating the second equation, we get² v_1(x) = −∫_x^1 g(s) ds. From the first equation we have v_2′(x) = −x g(x), which gives us v_2(x) = −∫_0^x s g(s) ds.

Now we have

y = y_h + y_p = c_1 x + c_2 − ∫_x^1 x g(s) ds − ∫_0^x s g(s) ds.

The first boundary condition y(0) = 0 gives c_2 = 0. The second condition y(1) = 0 gives c_1 − ∫_0^1 s g(s) ds = 0, i.e. c_1 = ∫_0^1 s g(s) ds. Hence

y = ∫_0^1 xs g(s) ds − ∫_x^1 x g(s) ds − ∫_0^x s g(s) ds,

which proves the lemma.
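The Green-function formula can be checked numerically. A Python sketch, assuming the illustrative choice g ≡ 1, for which the exact solution of (4.1) is y(x) = (x² − x)/2, and a simple midpoint quadrature (both choices are assumptions, not from the text):

```python
# Verify y(x) = integral_0^1 G(x, s) g(s) ds for g = 1, whose exact solution
# of y'' = 1, y(0) = y(1) = 0 is y(x) = (x^2 - x)/2.

def G(x, s):
    # the Green function of Lemma 4.8
    return s * (x - 1) if s <= x else x * (s - 1)

def solve(g, x, n=20000):
    # midpoint rule on [0, 1]
    h = 1.0 / n
    return sum(G(x, (j + 0.5) * h) * g((j + 0.5) * h) for j in range(n)) * h

g = lambda s: 1.0
for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
    exact = (x * x - x) / 2
    assert abs(solve(g, x) - exact) < 1e-6

print("Green function check passed")
```

The same quadrature works for any continuous g; only the exact solution used for comparison is specific to g ≡ 1.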

The following theorem is an example of how Schauder’s Fixed Point Theorem can be applied to differential equations, in this case, to a boundary value problem.

Theorem 4.10. Let f = f(x, y) be a continuous function on [0, 1] × R and let a > 0, b < 8 be such that |f(x, y)| ≤ a + b|y| for all (x, y) ∈ [0, 1] × R.

Then the boundary value problem

y″ = f(x, y),  y(0) = y(1) = 0

has a solution.

Proof. According to Lemma 4.8 this problem is equivalent to the integral equation

y(x) = ∫_0^1 G(x, s) f(s, y(s)) ds =: F(y)(x)

and we want to show that F has a fixed point. First we need to find a constant R and prove that F maps the ball B(0; R) ⊂ C[0, 1] into itself. We have³

‖F(y)‖_{C[0,1]} ≤ (a + b‖y‖_{C[0,1]}) max_{x∈[0,1]} ∫_0^1 |G(x, s)| ds ≤ (a + b‖y‖_{C[0,1]})/8.

²∫_1^x g(s) ds = −∫_x^1 g(s) ds.

³∫_0^1 G(x, s) ds = ∫_0^x s(x − 1) ds + ∫_x^1 x(s − 1) ds = (x² − x)/2, so max_{x∈[0,1]} ∫_0^1 |G(x, s)| ds = 1/8.


If R ≥ a/(8 − b) and y ∈ B(0; R), then ‖F(y)‖_{C[0,1]} ≤ (a + bR)/8 ≤ R, and hence F(B(0; R)) ⊂ B(0; R).

The ball B(0; R) is closed, bounded and convex. It remains to show that F is a continuous and compact mapping.

Since G is uniformly continuous, for each ε > 0 there exists δ > 0 such that |G(x, s) − G(x̄, s)| < ε when |x − x̄| < δ; hence

|F(y)(x) − F(y)(x̄)| = |∫_0^1 (G(x, s) − G(x̄, s)) f(s, y(s)) ds| ≤ ε(a + bR)

for all x, x̄ ∈ [0, 1] with |x − x̄| < δ and all y ∈ B(0; R). This means that F(B(0; R)) is an equicontinuous family. We also have

|F(y)(x)| = |∫_0^1 G(x, s) f(s, y(s)) ds| ≤ ∫_0^1 |G(x, s)| |f(s, y(s))| ds ≤ a + bR

for all x ∈ [0, 1], which shows uniform boundedness. According to the Arzelà–Ascoli Theorem 2.6, each sequence in F(B(0; R)) has a convergent subsequence. Hence the closure of F(B(0; R)) is compact.

From the uniform continuity of f on [0, 1] × [−R, R] and since |G| ≤ 1, for each ε > 0 there is δ > 0 such that ‖y − ȳ‖_{C[0,1]} < δ implies

|F(y)(x) − F(ȳ)(x)| ≤ ∫_0^1 |G(x, s)| |f(s, y(s)) − f(s, ȳ(s))| ds < ε.

This means that F is a continuous mapping. Hence all of the conditions of the Schauder Fixed Point Theorem are satisfied, and thus F has a fixed point in B(0; R).
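Schauder's theorem gives existence but no algorithm. For a right-hand side that is moreover Lipschitz in y with constant L < 8, the operator F is even a contraction with factor L/8 (since max_x ∫|G| = 1/8), so simple iteration converges. A Python sketch, assuming the illustrative choice f(x, y) = x − 4y (so a = 1, b = L = 4 < 8); its exact solution y = x/4 − sin 2x/(4 sin 2) is used only for checking. These choices are illustrations, not from the text:

```python
import math

# Solve y'' = x - 4*y, y(0) = y(1) = 0 by iterating
# y <- F(y)(x) = integral_0^1 G(x, s) f(s, y(s)) ds.
# f is 4-Lipschitz in y and max_x integral |G| = 1/8, so F contracts with factor 1/2.

def G(x, s):
    return s * (x - 1) if s <= x else x * (s - 1)

f = lambda x, y: x - 4 * y

n = 200
xs = [j / n for j in range(n + 1)]
y = [0.0] * (n + 1)                       # start from y_0 = 0

for _ in range(60):                       # fixed-point iteration
    # equal-weight sum equals the trapezoid rule here, since G vanishes at s = 0, 1
    y = [sum(G(x, s) * f(s, ys) / n for s, ys in zip(xs, y)) for x in xs]

# Exact solution of y'' + 4*y = x with these boundary values:
exact = lambda x: x / 4 - math.sin(2 * x) / (4 * math.sin(2))
err = max(abs(yj - exact(xj)) for xj, yj in zip(xs, y))
print(err < 1e-3)   # prints True
```

The contraction property here is a feature of this particular linear example; under the mere growth bound of Theorem 4.10, only existence via Schauder is guaranteed, not convergence of the iteration.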

Remark 4.11. If f has a sublinear growth in y, i.e. there are α ∈ (0, 1) and a, b such that

|f(x, y)| ≤ a + b|y|^α,

then |f(x, y)| ≤ a′ + |y| for a suitable a′, hence the boundary value problem has a solution according to Theorem 4.10.

4.3 Peano’s Theorem

Recall that the proof of Peano’s Theorem 2.8 was based on approximations of the function f . Now we will prove Peano’s Theorem (in Rn) by using the Schauder Fixed Point Theorem.

Theorem 4.12 (Peano's Theorem).

Let D be an open set in R × R^n and let f : D → R^n, f = f(x, y_1, …, y_n), be continuous. Then for any (x_0, y_0) ∈ D there exists at least one solution to the initial value problem

y′(x) = f(x, y(x)),  y(x_0) = y_0.  (4.2)
