
School of Education, Culture and Communication

Division of Applied Mathematics

BACHELOR THESIS IN MATHEMATICS / APPLIED MATHEMATICS

Introduction to some modes of convergence

Theory and applications

by

Milosz Bolibrzuch

Kandidatarbete i matematik / tillämpad matematik

DIVISION OF APPLIED MATHEMATICS

MÄLARDALEN UNIVERSITY
SE-721 23 VÄSTERÅS, SWEDEN


School of Education, Culture and Communication

Division of Applied Mathematics

Bachelor thesis in mathematics / applied mathematics

Date:

28-09-2017

Project name:

Introduction to some modes of convergence: theory and applications

Author: Milosz Bolibrzuch
Supervisor: Dr. Linus Carlsson
Reviewer: Dr. Richard Bonner
Examiner: Prof. Anatoliy Malyarenko

Comprising:


Abstract

This thesis aims to provide a brief exposition of some chosen modes of convergence, namely uniform convergence, pointwise convergence and L1 convergence. The theoretical discussion is complemented by simple applications to scientific computing. The latter include solving differential equations with various methods and estimating the convergence, as well as modelling problematic situations to investigate odd behaviors of usually convergent methods.


Acknowledgements

I would like to thank my supervisor, Dr. Linus Carlsson, who provided assistance in developing, writing and reviewing the thesis at all stages. He also helped me solve multiple problems and understand many new concepts, which allowed me to learn a great deal more mathematics than I would have been able to on my own in such a short time.


Contents

Abstract
Acknowledgements
List of Figures

1 Introduction
  1.1 Thesis overview
  1.2 Literature review
  1.3 Notation

2 Modes of convergence: basic theory exposition
  2.1 Introduction
  2.2 Important definitions
  2.3 Pointwise convergence
  2.4 Uniform convergence
  2.5 Convergence at a point
  2.6 L1 convergence

3 Numerical applications with MATLAB
  3.1 Introduction
  3.2 Important concepts
  3.3 Uniform convergence in estimating solutions to simple differential equations
    3.3.1 Computing convergence of error by comparison with a known solution
    3.3.2 Computing convergence of error without access to a known solution

4 Problem areas in numerical computation
  4.1 Introduction
  4.2 Non-smooth function
  4.3 Singularity

Conclusion
Index
A MATLAB code
  A.1 Script 1
  A.2 Script 2
  A.3 Script 3
  A.4 Script 4
  A.5 Script 5
  A.6 Script 6


List of Figures

2.1 The sequence of isosceles triangles with the properties defined in Example 2, converging pointwise to 0. For clarity of visualisation, the sequence starts with the second triangle.
2.2 The sequence of isosceles triangles as defined in Example 7. They seem to converge to 0 in area, but never converge at the points 0 or 1.
3.1 Numerical solution to the problem y' = xy, y(0) = 1, for x between 0 and 1.
3.2 Convergence of error of the solution to the problem y' = xy, y(0) = 1, computed by comparison to the analytic solution, for the forward difference and central difference methods. Plotted in log-log scale, logarithm base 2.
3.3 Estimated solution to the equation y' = sin(xy) + y, y(0) = 1, for x between 0 and 1.
3.4 The empirical convergence of error for the central difference method in estimating the solution of y' = sin(xy) + y, y(0) = 1. Plot in log-log scale, logarithm base 2.
4.1 The lesser error in the trapezoidal method, with the cusp of the function falling exactly on a grid point.
4.2 The larger error in the trapezoidal method, with a cusp of the function falling in the middle of a subinterval.
4.3 The empirical convergence of the trapezoidal method for solving the equation from Section 4.2. Plot in log-log scale, logarithm base 2. Second order method slope included for reference.
4.4 The convergence of error in the central difference scheme when solving y' = (2/3)x^(-1/3), y(0) = 0, as described in Section 4.3. Plot in log-log scale with base 2.

Chapter 1

Introduction

1.1 Thesis overview

This thesis consists of three main parts. The first and major part is a theoretical introduction to some modes of convergence, containing definitions, additional descriptions and examples. The examples were constructed with the aim of rendering the concepts simpler to grasp. The modes included in the thesis are pointwise convergence, uniform convergence, convergence at a point and convergence in the L1 norm.

In the second part we investigate the convergence of some chosen numerical methods. In particular, we employ the central difference method and the forward difference method to solve simple differential equations. Included are definitions and descriptions of some useful concepts, such as the O(h) notation and log-log scale plotting. A simple solver, which was constructed in MATLAB for this thesis, is used.

The last part contains examples of problematic areas in numerical computation, such as non-smooth functions and singularities. It includes introducing the trapezoidal rule alongside the finite difference methods and investigating what happens to the convergence of these methods when the aforementioned problems occur. Once again, we employ the MATLAB solver to analyse the issues numerically and visualise the results.

Additionally, the scripts containing the MATLAB code used in the project are included in the Appendix A: MATLAB code.

1.2 Literature review

The idea of convergence is far from a new concept, and it might be extremely difficult if not impossible to pinpoint its precise origins. It might, however, be possible to find the first distinction between divergent and convergent series, which according to [2] was made in the 17th century by James Gregory. The distinction between the modes of convergence introduced in this thesis would come at a later date. The first major influence on the birth of the concept of uniform convergence was by Cauchy in 1821, who provided an erroneous proof of the claim that a convergent sum of continuous functions is always continuous. This led to the discovery of uniform convergence by Stokes and Seidel later in the same century. Karl Weierstrass also discovered this notion, and was perhaps the first one to do so, according to [6] (although the first use of the term "convergence in a uniform way" was by Gudermann in [5]). Presently, more refined notions can be found in analysis textbooks such as [11].

Addressing the numerical methods, many of them are in fact ancient in the literal sense. It was recently discovered (see [10]) that the trapezoidal rule, one of the methods used in this thesis, was already known in a similar form to the ancient Babylonians. The analysis of error and stability of these methods as we know it today, however, was developed mostly in the 20th century. For more on this, see e.g. [13].

1.3 Notation

In this section we briefly introduce the notation used in the thesis.

d(a, b)          the distance between a and b
∃a               there exists an a
∀a               for all a
a ∈ A            a is an element of the set A
sup_{a∈A} f(a)   the supremum of f on the set A
A ⊂ B            the set A is a subset of the set B
A ∪ B            the union of A and B
supp(f)          the support of a function f
O(f)             asymptotic notation
R                the set of real numbers
Q                the set of rational numbers
N                the set of natural numbers
C^k              the class of functions for which the first k derivatives exist and are continuous
(a, b)           an open interval from a to b
[a, b]           a closed interval from a to b
|a|              the absolute value of a
‖u‖              a norm of u
u_h              an approximation of u obtained with a step size of h
(R, |·|)         the real line with the distance defined as the absolute value

Chapter 2

Modes of convergence: basic theory exposition

2.1 Introduction

This chapter will cover a few selected modes of convergence, specifically convergence at a point, pointwise convergence, uniform convergence and L1 convergence. We will introduce the necessary theory to discuss these concepts in terms of their properties, differences between them, potential problems within the way they are defined and subsequently their utility.

Before we move forward with the aforementioned topics, it is important to draft a few initial notes. In particular, it will prove particularly useful to outline a few important definitions which provide the necessary context for the discussion about convergence.

In all of the examples given in this part we let the function range be in (R, |·|).

2.2 Important definitions

Firstly, we need to introduce some concept of a space. As we will repeatedly find out during this exposition, the convergence of a sequence depends very much on the type of space within which it is being considered. Topological spaces are the most general such concept and appear in many areas of modern mathematics. They are key to discussing convergence, continuity and other ideas. In particular, we are interested in metric spaces, which are a special case of topological spaces. A metric space is, in essence, a topological space equipped with a metric (also known as a distance function).

Definition 1. Let M be any non-empty set and let d : M × M → R be a function. If d satisfies the conditions

1. d(x, y) ≥ 0 (non-negativity),
2. d(x, y) = 0 ⇔ x = y (indiscernibility),
3. d(x, y) = d(y, x) (symmetry),
4. d(x, y) + d(y, z) ≥ d(x, z) (triangle inequality),


then d is a metric and the pair (M, d) is called a metric space.
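As a quick check of the definition (our own illustration), consider the real line with d(x, y) = |x - y|, i.e. the space (R, |·|) used throughout this thesis. Conditions 1-3 are immediate from the properties of the absolute value, and the triangle inequality follows from
\[
d(x, z) = |x - z| = |(x - y) + (y - z)| \le |x - y| + |y - z| = d(x, y) + d(y, z).
\]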

Now that we have defined a metric space, we can follow it up with another essential definition that will help us discuss convergence-related concepts: the Cauchy sequence. Cauchy sequences are useful when considering convergence in various spaces; for example, we can define a property of a space (completeness) by whether every Cauchy sequence converges in that space.

Definition 2. A sequence {x_j}_{j=1}^∞ in a metric space M = (M, d) is called a Cauchy sequence if for every ε > 0 there exists an integer N = N(ε) such that
\[
d(x_m, x_n) < \varepsilon \quad \text{for every } m, n > N.
\]
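For a concrete instance (our own illustration), the sequence x_j = 1/j is a Cauchy sequence in (R, |·|): given ε > 0, choose an integer N > 2/ε; then for all m, n > N,
\[
d(x_m, x_n) = \left| \frac{1}{m} - \frac{1}{n} \right| \le \frac{1}{m} + \frac{1}{n} < \frac{2}{N} < \varepsilon.
\]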

We also want to be able to use the idea of a support of a function. In essence, support is a subset of the function’s domain where the function does not vanish. In particular, we are interested in the closed support. For that, we need to first define the closure of a set.

Definition 3. For a subset A of a topological space, the closure of A is the intersection of all closed sets containing A.

Definition 4. Let X be a topological space and f a function defined on that space. Then the support of f is the closure of the subset of X where f is non-zero, i.e.
\[
\operatorname{supp}(f) = \overline{\{x \in X : f(x) \neq 0\}}.
\]

Finally, we need to introduce supremum and infimum. These concepts are similar to maximum and minimum which the reader should be familiar with, but there is a key difference. Firstly, we need to introduce lower and upper bounds.

Definition 5. Let A be a subset of the real numbers. Then A is bounded from above if there exists a real number M, called an upper bound of A, such that x ≤ M for every x ∈ A. In the same fashion, A is bounded from below if there exists a real number m, called a lower bound of A, such that x ≥ m for every x ∈ A. The set A is called bounded if it is bounded both from below and from above.

Definition 6. The supremum of a set A is its least upper bound. The infimum of a set A is its greatest lower bound. If the set is not bounded above, we say that sup A = ∞. If the set is not bounded below, we say that inf A = −∞.

In contrast to the maximum, the supremum need not be an element of the given set; for example, sup(0, 1) = 1, although (0, 1) has no maximum. This means that when a maximum exists, the supremum must also exist (and equals it), but not necessarily the other way around. The same is of course true for the relation between minimum and infimum. For a deeper introduction to metric spaces, we refer the reader to any standard textbook in the field, e.g. [8].

2.3 Pointwise convergence

Pointwise convergence is one of the first theoretical notions of convergence to have been well defined. This is at least partly due to the fact that the way we define pointwise convergence seems at first glance like the most obvious and natural way to define convergence for functions. However, as we will clearly demonstrate later in this text (see Examples 1, 2), thinking about pointwise convergence for functions can often lead to somewhat confusing results. This is why today we would often contrast it with the idea of uniform convergence, a stronger notion, which we will explore in Section 2.4. To begin discussing pointwise convergence, we shall first introduce a formal definition.

Definition 7. Let f_n be a sequence of functions defined on a metric space X with range in a metric space (Y, d). Then f_n converges pointwise to a limit function f if and only if
\[
\forall x \in X,\ \forall \varepsilon > 0,\ \exists M > 0
\]
such that d(f_n(x), f(x)) < ε when n > M.

This definition of convergence will quite often prove sufficient. However, we should observe that according to this formulation it is only necessary to show that the function sequence converges at every fixed point x. This directly brings about one rather awkward result, which we shall demonstrate with an example.

Example 1. Let us consider the function sequence f_n(x) = x^n defined on the closed set X = [0, 1] on the real line. It is readily seen that f_n converges pointwise, according to the aforementioned definition, to f(x) = 0 for all x ∈ X except x = 1, where it converges pointwise to f(x) = 1. This is a case of a sequence of continuous functions converging to a discontinuous limit: an interesting and rather unintuitive result.
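To make this concrete, here is a minimal MATLAB sketch (our own illustration, separate from the appendix scripts) that plots a few members of the sequence; the graphs visibly flatten towards 0 on [0, 1) while the value at x = 1 stays fixed at 1.

%% Illustration of Example 1: f_n(x) = x^n on [0,1].
% The pointwise limit is 0 on [0,1) and 1 at x = 1, so a sequence of
% continuous functions converges to a discontinuous limit.
x = linspace(0, 1, 1000);
figure
hold on
for n = [1 2 5 20 100]
    plot(x, x.^n)
end
xlabel('x'), ylabel('f_n(x)')
legend('n = 1', 'n = 2', 'n = 5', 'n = 20', 'n = 100')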

There is yet another staple situation in which pointwise convergence exhibits odd behavior. It is directly connected with the construction of an object called a Dirac delta measure, often referred to as Dirac delta function, or simply a delta function, despite formally not being a function. It is what we call a generalized function or a measure. Measure theory, however, is out of the scope of this paper, and thus we will try to present this concept without delving too deep. The reader interested in learning more can be referred to [4].

The main properties of the delta function are as follows: it has the value of zero everywhere on the real line except for the origin, where it spikes into infinity, and the integral of this object over the real line is one. This is why it cannot be precisely defined as a function; a function with value of zero everywhere except at one point must consequently have the integral equal to zero. Furthermore, it is not possible to define a function as having infinite value at one specified point. One way to define the delta function is through integration; such definition will also allow us to pinpoint the aforementioned odd behavior concerning pointwise convergence. Let us now consider the following scenario.

Example 2. Let f be any continuous function defined on the real line and T_n be any function sequence defined on the same metric space, with the properties

1. T_n(x) ≥ 0 and T_n is continuous,
2. \(\lim_{n\to\infty} \bigcap_{i=1}^{n} \operatorname{supp}(T_i) = \{0\}\),
3. \(\int_{-\infty}^{\infty} T_n = 1\).

Figure 2.1: The sequence of isosceles triangles with the properties defined in Example 2, converging pointwise to 0. For clarity of visualisation, the sequence starts with the second triangle.

A much studied example of such functions are the so-called bump functions. A bump function is a function that is both smooth and compactly supported, meaning that its support is closed and bounded. Another, easy to imagine example of such a function sequence could be a sequence of isosceles triangles on the real line with height 2n and base length 1/n, starting at the origin (see Figure 2.1). If we think about the limiting behavior of sequences formulated this way, it is readily seen that we are able to construct the Dirac delta function by going to the limit. Furthermore, it will hold that

\[
\lim_{n\to\infty} \int_{-\infty}^{\infty} f(x) T_n(x)\,dx = f(0).
\]

This can be quickly demonstrated with a proof.

Proof.
\[
\int_{-\infty}^{\infty} f(x)T_n(x)\,dx = \int_{\operatorname{supp}(T_n)} f(x)T_n(x)\,dx.
\]
By the first mean value theorem for definite integrals (see e.g. [1]), the above expression is equal to
\[
f(a_n) \int_{\operatorname{supp}(T_n)} T_n(x)\,dx = f(a_n), \quad \text{for some } a_n \in \operatorname{supp}(T_n).
\]
Since supp(T_n) → {0} as n → ∞ and f is continuous, we have lim_{n→∞} f(a_n) = f(0).

The relation we have just proven is one way to formulate the definition of a delta function and contains its most basic properties. Now, concerning pointwise convergence, we are able to show that T_n(x) → 0 pointwise.

Proof. To prove that T_n(x) → 0, we first divide the problem into two cases. The first case is trivial.

1. If a point x_0 ∈ R \ (0, 1), then T_n(x_0) = 0 for every n. Thus,
\[
\lim_{n\to\infty} T_n(x_0) = 0.
\]

The second case is a bit more tricky, as we are looking at the non-trivial subset of the domain.

2. If a point x_0 ∈ (0, 1), then x_0 > 0 and thus 1/x_0 ∈ R. Hence, there must exist a natural number N such that 1/N < x_0. Therefore, supp(T_n) ⊂ [0, 1/N] for all n > N, so that T_n(x_0) = 0 for all n > N. Finally, we obtain
\[
\lim_{n\to\infty} T_n(x_0) = \lim_{\substack{n\to\infty \\ n > N}} T_n(x_0) = \lim_{n\to\infty} 0 = 0.
\]

This result brings about a particular dissonance. We can immediately observe one odd consequence, as we have
\[
\lim_{n\to\infty} \int_{-\infty}^{\infty} T_n(x)\,dx = 1 \neq 0 = \int_{-\infty}^{\infty} \lim_{n\to\infty} T_n(x)\,dx.
\]

As we can see, we are not able to freely switch the order of operations in this case, which is often a desired result.

Remark. In Example 2, the condition on the support of T_n is not necessary to construct a Dirac delta. We introduce it for clarity of exposition.

The above examples are some of the reasons why it is not always optimal to rely only on pointwise convergence. This is why we often refer to another, stronger mode, called uniform convergence.

2.4 Uniform convergence

As previously mentioned, pointwise convergence is often contrasted with a stronger mode, called uniform convergence. Uniform convergence directly implies pointwise convergence, but not the other way around. In fact, we will demonstrate that in the previous, problematic examples of pointwise convergence, the function sequences in question do not attain uniform convergence (see e.g. [4]).

Firstly, however, we should introduce a proper definition for uniform convergence.

Definition 8. Let f_n be a sequence of real-valued functions defined on R. Then f_n converges uniformly on R to a limit function f if and only if
\[
\forall \varepsilon > 0,\ \exists M > 0, \text{ such that } \sup_{x \in \mathbb{R}} |f_n(x) - f(x)| < \varepsilon \text{ when } n > M.
\]

For a more general definition of uniform convergence, which we shall not need in this thesis, see e.g. [11]. Now let us take a closer look at the introduced definition. One key differentiating factor between uniform and pointwise convergence is that the former does not allow considering fixed points x. Instead, we have to determine whether the sequence converges as a whole, or, as the name suggests, in a uniform fashion on its entire domain. We will soon find out that this more rigorous approach yields better results when it comes to the problem areas of pointwise convergence; in particular, continuity and the interchanging of limit processes.

Firstly, however, we ought to introduce some examples.

Example 3. Let us consider the function sequence f_n(x) = 1/(x + 1/n) on the interval (0, ∞). It is readily seen that this function sequence converges pointwise to f(x) = 1/x.

Proof. First, we fix a point x ∈ (0, ∞) and let ε > 0. Then we have
\[
|f_n(x) - f(x)| = \left| \frac{1}{x + \frac{1}{n}} - \frac{1}{x} \right| = \frac{\left| x - \left( x + \frac{1}{n} \right) \right|}{x \left( x + \frac{1}{n} \right)} = \frac{1}{n} \cdot \frac{1}{x \left( x + \frac{1}{n} \right)} < \frac{1}{n} \cdot \frac{1}{x^2} < \varepsilon, \quad \text{if } n > \frac{1}{\varepsilon x^2}.
\]
Thus, by Definition 7, f_n converges pointwise to 1/x.

However, this function sequence does not converge uniformly on the real line. One way to demonstrate that is by choosing a subsequence and disproving the uniform convergence by definition for that subsequence.

Proof. Given ε > 0, we will show that the sequence does not converge uniformly to f(x) = 1/x. We pick out the points x_n = 1/n in (0, 1) and evaluate the n-th function there. This gives us
\[
\sup_{x \in (0,1)} |f_n(x) - f(x)| \ge \left| f_n\!\left(\frac{1}{n}\right) - f\!\left(\frac{1}{n}\right) \right| = \left| \frac{1}{\frac{1}{n} + \frac{1}{n}} - \frac{1}{\frac{1}{n}} \right| = \left| \frac{n}{2} - n \right| = \frac{n}{2} > \varepsilon, \quad \text{if } n > 2\varepsilon.
\]
Thus, by Definition 8, f_n does not converge uniformly to 1/x.

Remark. It might be interesting to point out that, had we looked at another interval, namely (1, ∞), this function sequence would actually converge uniformly, since
\[
|f_n(x) - f(x)| < \frac{1}{n} \cdot \frac{1}{x^2} < \frac{1}{n} \cdot 1 < \varepsilon, \quad \text{if } n > \frac{1}{\varepsilon}.
\]

With that out of the way, let us consider Examples 1 and 2 in light of our new definition of convergence.

Example 4. In the first example we looked at f_n(x) = x^n on the interval [0, 1] and we were able to show that it converges pointwise to a discontinuous limit despite being a sequence of continuous functions. In this part, we will demonstrate that f_n does not converge uniformly on that same interval.

Proof. To begin, we pick ε = 1/2. Then we look at
\[
\sup_{x \in [0,1]} |f_n(x) - f(x)|.
\]
Since for x = 1 this distance is always fixed and equal to 0, we can exclude that point and write
\[
\sup_{x \in [0,1)} |f_n(x) - f(x)| = \sup_{x \in [0,1)} |x^n - 0| > \left( \left( \frac{3}{4} \right)^{1/n} \right)^{n} = \frac{3}{4} > \frac{1}{2}.
\]
Thus, by Definition 8, f_n does not converge uniformly to the discontinuous limit
\[
f(x) =
\begin{cases}
0 & \text{if } 0 \le x < 1 \\
1 & \text{if } x = 1.
\end{cases}
\]

Example 5. In the next example we are looking at the more intriguing case of a function sequence that seemingly vanishes, yet its integral never goes to zero. That prevents us from freely interchanging the order of limit operations on the strength of pointwise convergence alone (see Example 2). We proved that the sequence converges pointwise to 0. Here, we want to show that it does not converge uniformly.

Proof. Recall that uniform convergence directly implies pointwise convergence. We know that T_n(x) converges pointwise to 0. Thus, if T_n converges uniformly, it has to converge uniformly to 0. Following this, if T_n converges uniformly, then for any ε > 0 we have, for all sufficiently large n,
\[
|T_n(x) - 0| < \varepsilon \quad \forall x.
\]
Observe that if we choose the sequence of points x_n = 1/(2n), the apexes of the triangles, we have T_n(x_n) = 2n. Then
\[
|T_n(x_n) - 0| = |2n| \ge 2 \quad \forall n \in \{1, 2, 3, \ldots\}.
\]
Now, by choosing ε = 1 we have |T_n(x_n) − 0| ≥ 2 > ε for every n, and thus we have disproved uniform convergence for T_n.

In general terms, uniform convergence guarantees a lot of intuitive results; continuous functions converging to continuous limits, ability to interchange the order of limit operations and so on. This is why whenever it is possible to prove uniform convergence, it will always be more beneficial than only proving pointwise convergence. Obviously, proving uniform convergence is not always possible; in some situations we can only prove weaker modes of convergence, and that has merits of its own. We will end this section with a well known theorem. For its proof, see e.g. [11].

Theorem 1 (Uniform convergence theorem). If a sequence f_n of continuous functions defined on a metric space M converges uniformly to a function f defined on M, then f is continuous.

2.5 Convergence at a point

The weakest of the modes of convergence chosen to be discussed in this thesis is convergence at a point. It is also arguably the most basic. The formal definition follows.

Definition 9. Let f_n be a sequence of functions defined on a metric space X with range in a metric space (Y, d). Then f_n converges at a point x_0 to a limit a if and only if
\[
\forall \varepsilon > 0,\ \exists M > 0
\]
such that d(f_n(x_0), a) < ε when n > M.

In fact, we can quickly observe that considering convergence of a function sequence at a point is the same as considering the convergence of a sequence of numbers. For a fixed point x_0, the function sequence f_n(x_0) is precisely that - a sequence of numbers, which we can more conveniently (and appropriately) denote simply as {y_n}.

It is readily seen that this mode of convergence does not imply any previously introduced modes; however, both uniform and pointwise convergence by definition imply convergence at any point.

2.6 L1 convergence

The L1 space is a particular case of a mathematical notion called L^p spaces, or Lebesgue spaces. We will only concern ourselves with the L1 space consisting of real-valued functions whose domain is the whole real line. Considering such objects in detail is out of the scope of this paper. We shall only use it to look at the objects
\[
\|f\|_{L^p} = \left( \int_{\mathbb{R}} |f|^p \right)^{1/p}.
\]
In particular, with p = 1, the expression becomes
\[
\|f\|_{L^1} = \int_{\mathbb{R}} |f|.
\]
Remark. The objects above are not norms when considering the Riemann integral, but we keep the standard notation, ‖·‖.

We will consider convergence in L1 to be convergence under these conditions. It is yet another interesting concept to look at when attempting to establish a basic understanding of the relations between the modes of convergence. One sensible way to look at this type of convergence, prior to attaining mathematically rigorous knowledge on the topic, is to think of objects converging in terms of area. As we will shortly demonstrate, this does not exactly translate to the previously described modes. Indeed, it is possible to find examples of pointwise and even uniformly convergent function sequences which do not converge in L1, as well as examples of L1 convergent objects that do not converge even pointwise. Furthermore, we will demonstrate an example of an L1 convergent function sequence which fails to converge at certain fixed points of the domain. Firstly, we will need a working definition of what we mean by L1 convergence.

Definition 10. Let f_n be a sequence of real-valued functions defined on R. Then f_n converges in L1 on R to a limit function f if and only if
\[
\forall \varepsilon > 0,\ \exists M > 0
\]
such that
\[
\int_{\mathbb{R}} |f_n(x) - f(x)| < \varepsilon \quad \text{when } n > M.
\]

Equipped with this definition, let us look for examples that show the relations between L1 convergence and the other modes. First, let us try to construct two examples of function sequences which converge in L1, but neither uniformly nor pointwise.

Example 6. We start with a function f(x) such that
\[
f(x) =
\begin{cases}
0 & \text{if } x \in \mathbb{R} \setminus \mathbb{Q} \\
1 & \text{if } x \in \mathbb{Q}.
\end{cases}
\]
In other words, f vanishes at all irrational numbers, but is equal to 1 at all rational numbers. We know that the irrational numbers are uncountably infinite, while the rational numbers are a countable set (see e.g. [14]). This means that we can interpret taking an integral of f(x) over the real line as taking an integral of g(x) = 0 with an infinite (but countable) number of holes that take on the value 1. These points are just that, points, and thus cannot affect the value of the integral, which remains 0.

Now, we want to construct a sequence of functions using f. We will shift it back and forth by π, letting
\[
f_{2n}(x) = f(x), \qquad f_{2n+1}(x) = f(x - \pi).
\]
Observe that \(\int_{-\infty}^{\infty} f_n(x) = 0\) for all n ∈ N, thus this function sequence obviously converges to 0 in L1. Notice, however, that at all points that start off as rational numbers, as well as at those irrational numbers that can be expressed as p/q + π, the values of f_n will keep oscillating between 0 and 1 forever. Hence we know that this function sequence cannot converge pointwise.

Example 7. Recall the setting of Example 2. In this example we will be using a similar setting; however, this time we want our function sequence, which we will call T_n, to converge to 0 in terms of area, while not converging uniformly or pointwise at the same time. Again, it is easy to imagine a sequence of isosceles triangles. We want all of our triangles to have a fixed height of 1, their base length to go to 0 as n → ∞, and the support of T_n to go to {0} ∪ {1}. To complete the example, we let T_1 be centered at x_0 = 1, T_2 centered at 0, T_3 centered at 1 again, and so on in perpetuity (see Figure 2.2). It is readily seen that this sequence converges in L1 to 0. However, we can observe that it converges neither uniformly nor pointwise at x_0 = 1 or x_0 = 0. At these points, the values of the functions in the sequence will forever oscillate between 0 and 1. It might be worth pointing out that the sequence converges uniformly on all closed subsets of the domain which do not contain 0 or 1, as at all the other points the values sooner or later arrive at 0 and stay there.

Example 8. Finally, we want to briefly go over the opposite situation: a uniformly convergent sequence which does not converge in L1. This is a bit tricky, since we are stepping slightly out of the scope of this thesis. There exist functions that are not in L1, but we are not able to go into much detail on them due to this being an undergraduate level thesis. A reader interested in gaining a deeper understanding can be referred to [3, p. 20], where one such example is given and elaborated on. For this thesis we only need to acknowledge that such functions exist and can be uniform limits. For instance, we may take some f not in L1 and use it to create a simple sequence f_n = f + 1/n. Obviously, it converges uniformly to f, but there can be no convergence in L1, since \(\int_{\mathbb{R}} |f_n - f| = \int_{\mathbb{R}} 1/n\) is infinite for every n.

Figure 2.2: The sequence of isosceles triangles as defined in Example 7. It can be easily observed that they seem to converge to 0 in area, but never converge at the points 0 or 1.


Chapter 3

Numerical applications with MATLAB

3.1 Introduction

Acknowledging that this is an undergraduate level thesis and that we are not able to delve very deep into the theoretical aspects of convergence, it might be a good idea to make up for that with some applications to scientific computing. Practical implementation is often a very useful tool for better understanding and exercising freshly obtained theoretical knowledge, especially when the knowledge in question is quite abstract.

This is why this thesis includes a chapter dedicated to providing numerical and computational examples where some of the previously introduced theoretical concepts can be used, in an attempt to form a link between theory and application and make both more accessible to the reader. In the following chapters we restrict ourselves to real-valued functions defined on subsets of the real line.

3.2 Important concepts

As detailed above, this thesis tackles the topic of modes of convergence from two related, yet not identical perspectives, attempting to provide the reader with a solid outlook on the matter. Having covered the underlying mathematics necessary to understand these concepts, we will now be going through a series of numerical problems. In many of them we will be using appropriate iterations of a simple solver script (see the Appendix A), which we have constructed in MATLAB for this thesis. This solver uses primarily finite difference methods for solving differential equations, both central and forward. The relationships used for these methods are as follows.

\[
\text{Central:}\quad f'(x) = \frac{f(x+h) - f(x-h)}{2h} + O(h^2).
\]
\[
\text{Forward:}\quad f'(x) = \frac{f(x+h) - f(x)}{h} + O(h).
\]

They are based on Taylor series; the derivations for the central and forward methods are given in [9]. This solver will be used to approximate solutions to chosen differential equations with initial values, as well as to compute definite integrals. It contains code to compute and graph empirical errors to help assess the convergence. Particular versions of the script and the underlying algorithms will be explained in more detail in the sections corresponding to the problems for which those versions were developed.
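For intuition, here is a compressed version of that derivation (a sketch; the full argument appears in [9]). Expanding f around x gives
\[
f(x \pm h) = f(x) \pm h f'(x) + \frac{h^2}{2} f''(x) \pm \frac{h^3}{6} f'''(x) + O(h^4),
\]
and subtracting the two expansions cancels the even-order terms, so that
\[
\frac{f(x+h) - f(x-h)}{2h} = f'(x) + \frac{h^2}{6} f'''(x) + O(h^4) = f'(x) + O(h^2),
\]
while using only the first expansion in the same way yields the forward difference with its O(h) error.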

The last section will focus on demonstrating some of the potential problems that one can encounter when trying to assess the convergence of a numerical method. This part will also incorporate MATLAB scripts to help solidify and visualise the issues. Before we go any further, however, let us stop to consider some basic notions which we will require in order to expand into numerics.

Convergence within the numerical context usually involves analysing an error of some sort. It would be of value to introduce a common error notation at this point, commonly referred to as big O notation (O(h)), also known as asymptotic notation. We have already used it above, as a part of the description for the central difference method. The big O notation describes limiting behavior of a function given arguments tending towards infinity or a particular value. In our case we will mostly be talking about arguments tending to zero, as the main purpose of this notation, within our context, is to express a bound on the difference between a function and its approximation. This allows us to formulate the error in a simple form by omitting the terms that become much less significant for small values of h. Let us attempt to define this notation a bit more precisely (for more on this, see e.g. [7]).

Definition 11. Let f and g be functions defined on an open interval containing 0. Then
\[
f(h) = O(g(h)) \quad \text{for sufficiently small } h
\]
if and only if there exist positive numbers ε and N such that |f(h)| ≤ N|g(h)| when 0 < |h| < ε.

Here are some basic rules for O(h) calculations that follow from the definition:
\[
O(h^p) \pm O(h^p) = O(h^p),
\]
\[
O(h^p) \pm O(h^q) = O(h^{\min(p,q)}),
\]
\[
O(h^p) \cdot O(h^q) = O(h^{p+q}),
\]
\[
C \cdot O(h^p) = O(h^p).
\]
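As a small worked instance of the second rule (our own illustration), take e(h) = 3h^2 + 5h^3. For 0 < h < 1 we have
\[
|e(h)| = 3h^2 + 5h^3 \le 3h^2 + 5h^2 = 8h^2,
\]
so e(h) = O(h^2), with N = 8 and ε = 1 in Definition 11, in agreement with O(h^2) ± O(h^3) = O(h^{min(2,3)}).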

We should now briefly state what we mean by the order of convergence. For an error bounded as
\[
|u_h - u| \le C h^p,
\]
the order of convergence is p, given some constant C and sufficiently small h.

Another notion worth introducing before delving deeper into numerical problems is the log-log plot. The idea behind these plots, which employ a logarithmic scale on both axes, is to make presenting and estimating the convergence rate of the error simpler. In general, log-log plots are used to present relationships of the type y = ax^k, called monomials, as straight lines on the graph. Taking the logarithm of both sides of a monomial, we get
\[
\log y = \log a + k \log x.
\]
In particular, a common representation of the error would be (see e.g. [12])
\[
|u_h - u| = C h^p + O(h^{p+1}) = C h^p \left( 1 + \frac{O(h^{p+1})}{C h^p} \right),
\]
\[
\log |u_h - u| = \log(C h^p) + \log\left( 1 + O(h^{p+1-p}) \right),
\]
\[
\log |u_h - u| = \log |C| + p \log h + O(h).
\]

This last relation makes it significantly easier to read the convergence of a given method off of the plot, which is why we will employ this technique for most graphs to come in this thesis. When h is small the graph ends up looking like a straight line with its slope equal to the convergence rate p.
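As a concrete illustration, the following minimal MATLAB sketch (our own, separate from the appendix scripts) fits a straight line to synthetic error data in log-log coordinates; the slope returned by polyfit approximates the order p.

%% Minimal sketch: reading a convergence rate off a log-log plot.
% Synthetic errors behaving like C*h^p with C = 3 and p = 2; fitting
% a line to the logarithms recovers the slope, i.e. the order p.
h = 1./2.^(1:10);                  % decreasing step sizes
err = 3*h.^2;                      % model error data
Q = polyfit(log2(h), log2(err), 1);
Q(1)                               % slope, approximately 2
plot(log2(h), log2(err))           % a straight line with slope p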

3.3 Uniform convergence in estimating solutions to simple differential equations

In this section, our goal is to present an instance where uniform convergence is the most desired mode. We will try to accomplish that by approximating solutions to differential equations. In this type of problem it is obviously important that all the points converge to the right values, or else the solution would simply be incorrect.

3.3.1 Computing convergence of error by comparison with a known solution

Firstly, we shall investigate an example in which we are able to easily obtain the error estimates by comparing with the real values. To demonstrate this in practice we shall use the aforementioned solver and the example of the following initial value problem:

\[
y' = xy, \qquad y(0) = 1.
\]
This iteration of the solver utilizes the following algorithm:

Outer while loop through step sizes until n reaches the maximum n
    Decrease step size to 1/2^n
    Declare initial values

    Inner for loop, stepping through the interval
        Use the central difference method to estimate the value of y
            at the current step and store it
        Use the forward difference method to estimate the value of y
            at the current step and store it
        Compute the real value of y at the current step and store it
    End inner loop

    Compute the error as the maximum difference between the
        central difference approximations of y and the real values
        at the current step size
    Compute the error as the maximum difference between the
        forward difference approximations of y and the real values
        at the current step size
End outer loop

Plot the errors in log-log form

We use the step size 1/2^n for n going from 1 to some chosen number (for plotting we chose a maximum n of 17). The maximum is used as a substitute for the supremum from the uniform convergence definition, since we are in the MATLAB environment, which is based on matrices. Finally, we adjust the results to the conventional log-log form before plotting (using log2). Included are the plot of the estimated solution and the plot of uniform convergence for both methods used by the solver in this case, the central difference method and the forward difference method. For the script in MATLAB, see the Appendix A, Script 1.
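The real values used for comparison in Script 1 come from the closed form of this initial value problem, which separation of variables gives:
\[
\frac{dy}{y} = x\,dx \quad \Rightarrow \quad \ln y = \frac{x^2}{2} + c \quad \Rightarrow \quad y(x) = e^{x^2/2},
\]
where the constant is fixed by y(0) = 1; this is the exp(x^2/2) evaluated in the script.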

Figure 3.1: Numerical solution to the problem y' = xy, y(0) = 1 for x between 0 and 1.

The first plot, when compared with the real solution, looks virtually indistinguishable from it. Using the MATLAB function polyfit we find that the slope of the central difference method's convergence is 1.9972, while for the forward difference method it is 1.0126. These results are in line with the analytic calculations; see the derivations in [9] and their result summarized in Section 3.2.

Figure 3.2: Convergence of error of the solution to the problem y' = xy, y(0) = 1. Computed by comparison to the analytic solution, for the forward difference method and the central difference method. Plotted in log-log scale, logarithm base 2.

This result tells us that we were indeed able to attain uniform convergence in this case.

3.3.2 Computing convergence of error without access to a known solution

The above example was of a problem to which the solution can be found analytically, which makes assessment of the error substantially easier. However, in most cases the real value is not available for comparison. There are two main approaches to consider in such situations:

1. Compute a single reference value with a very small step size h and then proceed as if that reference value were the solution. This is a very straightforward method, but not too practical when computing the value for a very small step size takes a long time.

2. Look at the differences between consecutive estimates (for decreasing step sizes).

In this section, we will be utilizing the latter approach as more sensible in our context; the short calculation below shows that it recovers the same order of convergence.
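A short calculation (a standard argument, added here for completeness) shows why consecutive differences expose the same order p as a comparison with the true solution would: if u_h = u + Ch^p + O(h^{p+1}), then
\[
u_h - u_{h/2} = C h^p \left( 1 - 2^{-p} \right) + O(h^{p+1}),
\]
so the differences decay like h^p and the slope read off a log-log plot is unchanged.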

To present an example, we will use the following problem:
\[
y' = \sin(xy) + y, \qquad y(0) = 1.
\]

This equation does not have an analytic solution that we know of (although, to be precise, we have not disproved the existence of one), so we will use the solver to obtain the approximation and assess convergence. In this iteration, we follow nearly the same procedure as in the first example, but include more loops to compare the initially saved values (see the Appendix A, Script 2). MATLAB polyfit indicates that the slope of the line is 1.9973, which indicates a second order of convergence. This is precisely what we would expect when using this method. Included are the relevant plots (see Figures 3.3 and 3.4).

Figure 3.3: Estimated solution to the equation y' = sin(xy) + y, y(0) = 1 for x between 0 and 1.

Figure 3.4: The empirical convergence of error for the central difference method in estimating the solution of y' = sin(xy) + y, y(0) = 1. Plot in log-log scale, logarithm base 2.


Chapter 4

Problem areas in numerical computation

4.1 Introduction

In this section we will discuss and present examples of potential major issues that could arise while using the aforementioned methods. The convergence that we are hoping to obtain can be weakened or impossible to read from the computations in some particular circumstances. We will focus on two potential problem classes and present examples using MATLAB.

4.2 Non-smooth function

For a large portion of numerical schemes, it is required that the function in the problem be smooth enough. Different methods might require different degrees of smoothness (differentiability). In this instance we will investigate an issue with the trapezoidal rule, caused by the function having a cusp on the interval on which we would like to integrate.

To do this, we will first introduce the basic premise of the trapezoidal rule. This method allows for an approximation of the area under a curve on a given interval. In principle, the definite integral is approximated by a trapezoid as
\[
\int_a^b f(x)\,dx \approx (b - a)\,\frac{f(a) + f(b)}{2}.
\]

When using the rule, the interval is usually divided into M subintervals of length h = (b − a)/M. In this case the approximation, through what is now the composite trapezoidal rule, can be expressed as
\[
T(f, h) = \frac{h}{2}\left( f(a) + f(b) \right) + h \sum_{k=1}^{M-1} f(x_k).
\]

To express the error term E_T(f, h), we can state
\[
\int_a^b f(x)\,dx = T(f, h) + E_T(f, h).
\]
In this case, provided that the function f ∈ C^2[a, b], there exists a value c with a < c < b such that the error term has the form
\[
E_T(f, h) = -\frac{(b - a)\, f''(c)\, h^2}{12}.
\]

The proof of this result is given in [9]. This shows that the expected order of convergence of this particular method is p = 2.
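As a quick sanity check of this rate (our own sketch, separate from the appendix scripts), the built-in trapz applied to a smooth integrand shows the expected second-order behavior:

%% Sanity check: the composite trapezoidal rule is second order on
% a smooth integrand. Integrate exp(x) on [0,1]; the exact value is e - 1.
exact = exp(1) - 1;
for m = 1:10
    h(m) = 1/2^m;
    X = 0:h(m):1;
    err(m) = abs(trapz(X, exp(X)) - exact);
end
Q = polyfit(log2(h), log2(err), 1);
Q(1)    % slope, approximately 2, i.e. order p = 2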

However, as we plan to demonstrate, this might not hold for functions with non-smooth parts. In particular, let us consider the following object, inspired by a very similar example given in [12]:
\[
\int_0^1 (x - a)^{2/3}\,dx, \qquad 0 < a < 1.
\]

The integrand in this case is a function with a cusp at the point x = a.

Figure 4.1: The lesser error in the trapezoidal method, with the cusp of the function falling exactly on a grid point.

This can cause issues with estimating the convergence of the error. Although the error is bounded by O(h^2), depending on the grid settings the error caused by this problem can be more or less significant. If the interval is divided into subintervals in such a way that the point a falls precisely on, or very close to, the start of one of the subintervals, then the error will be quite mild (see Figure 4.1). However, if the point a falls in the middle of a subinterval, the error for that subinterval will be significantly higher than for all the other subintervals, nearing the bound (see Figure 4.2). This means that attempting to compute the convergence by comparing consecutive approximations will not work properly in this case, as demonstrated below.

We will use MATLAB to visualise this problem. We need to fix the parameter a; let a = 1/√2, which along with step sizes of h = 1/2^n will allow for the situations described above to take place. We use a simple algorithm written in the MATLAB language (see the Appendix A, Script 4), looping through the decreasing step sizes and measuring the convergence by defining the error as the difference between consecutive results.

Figure 4.2: The larger error in the trapezoidal method, with a cusp of the function falling in the middle of a subinterval.

We obtain the following log-log plot, with the slopes of methods of order 1 and 2 added for comparison. Since the magnitude of the order oscillates between approximations, we are unable to get the usual straight line from which we could read the slope (see Figure 4.3).

4.3 Singularity

It is not uncommon to encounter a mathematical setting in which there are some singularities. Values of functions escaping into infinity at some points can obviously cause major havoc within a numerical algorithm. In this subsection, we shall consider the following initial value problem:

\[
y' = \frac{2}{3} x^{-1/3}, \qquad y(0) = 0.
\]

This problem has a fairly simple analytic solution, y = x^{2/3}; indeed, the derivative of x^{2/3} is exactly (2/3)x^{−1/3}, which blows up as x → 0⁺. We will use the previously introduced MATLAB solver, based on the central difference method, to demonstrate the problems in practice. For the script, see the Appendix A, Script 3. It is readily seen that computing close to zero in this case will yield misleading results, and thus destroy the convergence. An attempt to solve this problem with our solver returns the plot of the convergence of error shown below. As we can observe from the log-log plot, the central difference method is much less useful in this case, yielding an order of convergence of merely 1.0370 according to the approximation by the MATLAB polyfit function, instead of the expected 2 (see Figure 4.4).

Figure 4.3: The empirical convergence of the trapezoidal method for solving the equation from Section 4.2. Plot in log-log scale, logarithm base 2. Second order method slope included for reference.

Figure 4.4: The convergence of error in the central difference scheme when solving y' = (2/3)x^(-1/3), y(0) = 0, as described in Section 4.3. Plot in log-log scale with base 2.


Conclusion

In conclusion, we have presented the theoretical background for considering different modes of convergence. A big limiting factor was the undergraduate level of mathematics; knowledge of measure theory would have allowed this paper to become more precise as well as more extensive. We have covered some interesting numerical examples, but there are undoubtedly many more interesting properties to investigate. There are at least two possible directions for expanding on this topic: one would involve delving into the theory of computation to analyse less trivial algorithms and their convergence, and the other would be to apply measure theory and present a more coherent explanation of the mathematical theory, perhaps incorporating Monte Carlo methods and stochastic convergence.


Bibliography

[1] Robert A. Adams and Christopher Essex. Calculus: A Complete Course. 7th ed. Toronto: Pearson Canada, 2010.

[2] Walter William Rouse Ball. A Short Account of the History of Mathematics. Courier Corporation, 1908.

[3] Gerald B. Folland. Real Analysis: Modern Techniques and Their Applications. John Wiley & Sons, 2013.

[4] D. H. Griffel. Applied Functional Analysis. New York: Ellis Horwood, 1981. ISBN: 0-13-043324-1.

[5] Christoph Gudermann. "Theorie der Modular-Functionen und der Modular-Integrale". In: Journal für die reine und angewandte Mathematik 18 (1838), pp. 1–54.

[6] Godfrey Harold Hardy. "Sir George Stokes and the concept of uniform convergence". In: Proc. Cambridge Philos. Soc. Vol. 19. 1918, pp. 148–156.

[7] Donald E. Knuth. "The Art of Computer Programming. Volume 1: Fundamental Algorithms. Volume 2: Seminumerical Algorithms". In: Bull. Amer. Math. Soc. (1997).

[8] Erwin Kreyszig. Introductory Functional Analysis with Applications. Vol. 1. New York: Wiley, 1989.

[9] John H. Mathews, Kurtis D. Fink et al. Numerical Methods Using MATLAB. Vol. 4. London, UK: Pearson, 2004.

[10] Mathieu Ossendrijver. "Ancient Babylonian astronomers calculated Jupiter's position from the area under a time-velocity graph". In: Science 351.6272 (2016), pp. 482–484.

[11] Walter Rudin et al. Principles of Mathematical Analysis. Vol. 3. New York: McGraw-Hill, 1964.

[12] Olof Runborg. Verifying Numerical Convergence Rates. 2012. URL: http://www.csc.kth.se/utbildning/kth/kurser/DN2255/ndiff13/ConvRate.pdf

[13] Vidar Thomée. "From finite differences to finite elements: A short history of numerical analysis of partial differential equations". In: Journal of Computational and Applied Mathematics 128.1 (2001), pp. 1–54.

[14] Howard J. Wilcox and David L. Myers. An Introduction to Lebesgue Integration and Fourier Series. Courier Corporation, 2012.

Index

big O notation
bump function
Cauchy sequence
central difference method
compactly supported
convergence at a point
countable set
Dirac delta function
distance
forward difference method
indiscernibility
infimum
integral convergence
irrational number
isosceles triangle
L1 convergence
Lp space
log-log plot
lower bound
maximum
mean value theorem for definite integrals
metric
metric space
minimum
monomial
non-negativity
pointwise convergence
rational number
support
supremum
symmetry
Taylor series
topological space
trapezoidal rule
triangle inequality
uncountable set
uniform convergence
upper bound


Appendix A

MATLAB code

The code for the MATLAB scripts constructed for use in this thesis can all be found in this appendix.

A.1 Script 1

The first script is used to solve a differential equation in Section 3.3 and plot the estimated convergence.

%% Solver for a differential equation with known solution,
% central/forward. This version of the solver focuses on
% estimating the convergence of a problem with a known solution,
% specifically y' = xy, y(0) = 1. It accomplishes the following:
% 1. Plots the convergence in the conventional, easy to read,
%    logarithmic scale, for both the forward difference and central
%    difference. Analytic calculations indicate that the first method
%    should have order of convergence one and the second order of
%    convergence two, and the numerical results confirm it.
% 2. Checks the empirical order of convergence with the polyfit
%    MATLAB function.
%%
close all
clear all
n = 2;
max_m = 17;
m = 1;
error = zeros(1,max_m);
error1 = zeros(1,max_m);

while m <= max_m
    n = 2.^m;
    h = 1/n;
    H(m) = h;
    y = zeros(1,n+2);    % central difference estimates
    x = zeros(1,n+2);    % forward difference estimates
    real = zeros(1,n+2); % analytic solution exp(x^2/2)
    y(1) = 1;
    y(2) = 1;
    x(1) = 1;
    x(2) = y(2);
    real(1) = 1;
    real(2) = y(2);
    for i = 1:n-2
        x(i+2) = x(i+1) + h*(i+1)*h*x(i+1);
        y(i+2) = y(i) + 2*y(i+1)*h*h*(i+1);
        real(i+2) = exp((((i+2)*h).^2)/2);
    end
    error1(m) = max(abs(x-real));
    error(m) = max(abs(y-real));
    m = m+1;
end
%% Prepare log-log plot in conventional form
Hflip = fliplr(H);
errorflip = fliplr(error);
errorflip1 = fliplr(error1);
plot(log2(Hflip), log2(errorflip))
hold on
plot(log2(Hflip), log2(errorflip1))
legend('Central', 'Forward');
%% Check empirical rates of convergence
Q = polyfit(log2(Hflip(1:12)), log2(errorflip(1:12)),1)
R = polyfit(log2(Hflip(1:12)), log2(errorflip1(1:12)),1)
%% To plot the solution of the equation, uncomment the following:
% plot(y(1:n))
% plot(save(1:n))
% hold on
% plot(real(1:n))

A.2 Script 2

The second script is used to estimate the solution as well as the convergence of the problem described in the second part of Section 3.3.

%% This is a solver for differential equations.
% It uses the central difference method to estimate the solution and
% convergence of an equation with no known solution.
% This version tries to estimate y' = sin(xy)+y, y(0) = 1.
%%
close all
clear all
n = 2;
max_m = 17;
m = 1;
error1 = zeros(1,n.^(max_m-1));
error2 = zeros(1,max_m-1);
save = zeros(n.^(max_m),max_m);  % stored estimates (shadows the built-in save)

while m <= max_m
    n = 2.^m;
    h = 1/n;
    H(m) = h;
    y = zeros(1,n+2);

    y(1) = 1;
    % We use the derivative at the first point as the second point, in
    % this case it's 1.
    y(2) = 1+h; % the derivative at x=0 is 1 so y(2) is approx y(1)+h*1
    save(1,m) = y(1);
    save(2,m) = y(2);

    for i = 1:n
        % is x(i+1)=h*(i+1)
        % when i=1 we calculate x=h
        x_i_plus_1 = h*i;
        y(i+2) = y(i) + 2*h*(sin(y(i+1)*x_i_plus_1)+y(i+1));
        save(i+2,m) = y(i+2);
    end
    m = m+1;
end

m = 1;
n = 2;

while m <= (max_m - 1)
    n = 2.^m;
    for i = 1:n
        % error1(i) = abs(save(n+1,m)-save((2*n)+1,m+1)); % pointwise
        error1(i) = abs(save(i+1,m)-save((2*i)+1,m+1)); % uniform
    end
    error2(m) = max(error1);
    m = m+1;
end
%% Plotting convergence in the conventional log-log form
Hflip = fliplr(H(1:max_m-1));
errorflip = fliplr(error2);
plot(log2(Hflip), log2(errorflip))
%% Alternatively, plot the solution estimate.

A.3 Script 3

This script is for Section 4.3, where we compute the convergence of error in a setting with a singularity to demonstrate error propagation.

%% Error propagation
% This version attempts to show the problems with having a non-smooth
% function within the problem. We use (x^2)^(-1/3) as f(x) and solve the
% DE: y' = f(x), y(0) = 1. We are using the central difference method.
%%
close all
clear all
n = 2;
max_m = 17;
m = 1;
save = zeros(n.^(max_m),max_m);  % stored estimates (shadows the built-in save)

while m <= max_m
    n = 2.^m;
    h = 1/n;
    H(m) = h;
    y = zeros(1,n+2);
    y(1) = 1;
    y(2) = 1+h;
    save(1,m) = y(1);
    save(2,m) = y(2);

    for i = 1:n
        x_i_plus_1 = h*i;
        y(i+2) = y(i) + 2*h*abs((x_i_plus_1)-(1/sqrt(2)));
        save(i+2,m) = y(i+2);
    end
    m = m+1;
end

m = 1;
n = 1;

while m <= max_m - 1
    n = 2.^m;
    for i = 1:n
        % error1(i) = abs(save(n+1,m)-save((2*n)+1,m+1)); % pointwise
        error1(i) = abs(save(i+1,m)-save((2*i)+1,m+1)); % uniform
    end
    error2(m) = max(error1);
    m = m+1;
end

Hflip = fliplr(H(1:max_m-1));
errorflip = fliplr(error2);
plot(log2(Hflip), log2(errorflip))

A.4 Script 4

This script attempts to present a computation for the problem in Section 4.2.

%% Plotting convergence of error in trapezoidal method.
% Goal: show that convergence is NOT of order 2 (the expected order) when
% computing for a function that has a cusp and making the grid points
% fall on and off the cusp (big error-small error oscillation).
% Using the built-in trapz function.
%%
m = 1;
max_m = 17;
error = zeros(1,m);
while m <= max_m
    n = 2.^m;
    h = 1/n;
    H(m) = h;
    X = 0:h:1;
    Y = ((X - 1/sqrt(2)).^2).^(1/3);
    value(m) = trapz(X,Y);
    if m >= 2
        error(m) = abs(value(m)-value(m-1));
    end
    m = m+1;
end

%% Plotting the convergence of error in conventional log-log form
Hflip = fliplr(H(1:max_m-1));
errorflip = fliplr(error(2:max_m));
plot(log2(Hflip), log2(errorflip))

A.5 Script 5

This script generates the sequence of triangles used in Section 2.3.

%% Generating a sequence of isosceles triangles converging pointwise.
%%
n = 2;
max_n = 8;
while n <= max_n
    X = [0 1/(2*n) 1/n];
    Y = [0 2*n 0];
    plot(X,Y);
    hold on
    n = n+1;
end

A.6 Script 6

This script generates the sequence of triangles used in Section 2.6.

%% Generate a sequence of isosceles triangles converging in L^1.
clear all
n = 1;
max_n = 5;
figure
axis equal
while n <= max_n
    if mod(n,2) == 0
        % Even terms: triangle centered at 0.
        X = [0-1/n 0 0+1/n];
        Y = [0 1 0];
    else
        % Odd terms: triangle centered at 1.
        X = [1-1/n 1 1+1/n];
        Y = [0 1 0];
    end
    ax(n) = subplot(max_n,1,n);
    plot(X,Y);
    title(['T_' num2str(n)])
    hold on
    n = n+1;
end

linkaxes([ax(1:n-1)],'xy');
