
Comparative Analysis of Adaptive Domain Decomposition Algorithms for a Time-Spectral Method

VILHELM DINEVIK

KTH ROYAL INSTITUTE OF TECHNOLOGY

SCHOOL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE

Adaptive Domain Decomposition Algorithms for a Time-Spectral Method

VILHELM DINEVIK

Master in Aerospace Engineering
Date: November 18, 2020

Supervisors: Jan Scheffel and Kristoffer Lindvall
Examiner: Per Brunsell

School of Electrical Engineering and Computer Science, Division of Fusion Plasma Physics

Swedish title: Jämförelseanalys av Adaptiva Domänuppdelningsalgoritmer för en Tidsspektral Metod

Abstract

Time-spectral solvers for partial differential equations (PDEs) have been explored in various forms during the last few decades. The generalized weighted residual method (GWRM) is one such method, with high accuracy and efficiency. The GWRM has so far been implemented almost exclusively with a uniform grid of subdomains in the spatial domain.

Recent research has indicated that an adaptive grid can yield a significant improvement in the accuracy and efficiency of the GWRM. In this thesis a comparison is performed between a uniform grid and three different adaptive grid decomposition methods. Three initial-value PDEs are used to benchmark these methods: the one-dimensional Burger's equation, the 4th order Fisher-Kolmogorov equation and the non-linear Schrödinger equation. It was found that the average adaptive algorithm is the most efficient of the algorithms evaluated in this thesis. The average adaptive algorithm solved the Fisher-Kolmogorov equation up to 1.6 times faster than the uniform algorithm, and its error was up to a factor of 22.5 smaller than the uniform algorithm's when solving the one-dimensional Burger's equation. The uniform algorithm needed 25 spatial subdomains to reach errors of the same order of magnitude as the average adaptive algorithm achieved using only 12 spatial subdomains. The average subdomain decomposition algorithm is a fast, robust and efficient method, which can be applied to a variety of different problems to further increase the efficiency of the GWRM.

Sammanfattning

Time-spectral solutions of partial differential equations (PDEs) have been explored in many different ways during the last few decades. The generalized weighted residual method (GWRM) is one such method, which has achieved high accuracy and efficiency. The method has so far been implemented almost exclusively with a uniform subdomain decomposition in the spatial domain.

Recent research indicates that the GWRM can achieve significantly improved precision and efficiency if adaptive spatial and temporal domains are implemented. In this thesis a comparison is performed between a uniform spatial subdomain decomposition and three different adaptive subdomain decomposition algorithms. These algorithms are tested on three different PDEs: the one-dimensional Burger's equation, the 4th order Fisher-Kolmogorov equation and the non-linear Schrödinger equation. The conclusion is that the averaging adaptive algorithm is the most efficient method. It solved the equations up to 1.6 times faster than the uniform algorithm, with an error up to 22.5 times smaller than the uniform method's. The uniform method needed 25 spatial subdomains to reach an accuracy of the same order of magnitude as the adaptive algorithms achieved with only 12 spatial subdomains. The averaging algorithm's subdomain decomposition is fast, robust and efficient, and it can be applied to a range of different problems to increase the efficiency of the GWRM.

Acknowledgement

I would like to thank my supervisors Jan Scheffel and Kristoffer Lindvall for their continuous guidance and encouragement. Their knowledge and support, which they generously shared with me, enabled this thesis to reach the quality that I wanted. I would also like to thank everyone I got to know at the Division of Fusion Plasma Physics at KTH for helping me and making me feel like a part of the institution during my time there.

Contents

1 Introduction
  1.1 Background
  1.2 Research Question

2 Method
  2.1 The Generalized Weighted Residual Method
  2.2 Chebyshev Polynomials
  2.3 Boundary Conditions and Subdomains in GWRM
      2.3.1 Boundary and Initial Conditions in GWRM
      2.3.2 Temporal and Spatial Subdomains in GWRM
  2.4 Adaptive Subdomain Decomposition Algorithms
      2.4.1 Resolution Quality
      2.4.2 The Local Method
      2.4.3 The Global Compressive Method
      2.4.4 The Average Method
  2.5 Modelling the Equations With the GWRM
      2.5.1 Burger's Equation
      2.5.2 Fisher-Kolmogorov Equation
      2.5.3 Non-Linear Schrödinger Equation

3 Result
  3.1 One-Dimensional Burger's Equation
  3.2 Fisher-Kolmogorov Equation
  3.3 Non-Linear Schrödinger Equation
  3.4 Efficiency, Accuracy, and Resolution Comparison
      3.4.1 The Global Algorithm Applied to the One-Dimensional Burger's Equation
      3.4.2 The Average Algorithm Applied to the One-Dimensional Burger's Equation
      3.4.3 The Local Algorithm Applied to the One-Dimensional Burger's Equation
      3.4.4 The Uniform Algorithm Applied to the One-Dimensional Burger's Equation
      3.4.5 Comparison of the Algorithms for the One-Dimensional Burger's Equation
      3.4.6 The Global Algorithm Applied to the Fisher-Kolmogorov Equation
      3.4.7 The Average Algorithm Applied to the Fisher-Kolmogorov Equation
      3.4.8 The Local Algorithm Applied to the Fisher-Kolmogorov Equation
      3.4.9 The Uniform Algorithm Applied to the Fisher-Kolmogorov Equation
      3.4.10 Comparison of the Algorithms for the Fisher-Kolmogorov Equation
      3.4.11 The Global Algorithm Applied to the Non-Linear Schrödinger Equation
      3.4.12 The Average Algorithm Applied to the Non-Linear Schrödinger Equation
      3.4.13 The Local Algorithm Applied to the Non-Linear Schrödinger Equation
      3.4.14 The Uniform Algorithm Applied to the Non-Linear Schrödinger Equation
      3.4.15 Comparison of the Algorithms for the Non-Linear Schrödinger Equation

4 Discussion

5 Conclusion

Bibliography

A Reference Run Uniform Algorithm

B Chebyshev Polynomials
  B.1 Multiplication of Chebyshev Polynomials
  B.2 Integration of Chebyshev Polynomials
  B.3 Differentiation of Chebyshev Polynomials

1 Introduction

Time-spectral solvers have been explored in various forms during the last few decades [1].

One such time-spectral solver is the generalized weighted residual method (GWRM). The GWRM has been studied extensively with the temporal and spatial domains divided into uniformly sized subdomains [2][3]. In this thesis we instead implement adaptive spatial subdomains, which have not been studied extensively before, using three different adaptive domain decomposition algorithms, and compare the results with the uniform subdomain approach.

The structure of this thesis is as follows. It starts with a background section which explains why the research conducted in this thesis is of importance and what the goal of the research is. To do this, the background begins with a description of previous research in the field of time-spectral methods. Three different methods for adaptive domain decomposition applied to the GWRM are then presented. After this, a research question is formalised to wrap up the introduction. This is followed by a method section which describes what has been done in this thesis and how it has been accomplished. Next, typical solutions of three different partial differential equations (PDEs) are presented in the results section, followed by a presentation of the performance of four different domain decomposition algorithms applied to those three equations. From this follows a thorough discussion of the results, with the goal of explaining why the results look the way they do, what can be improved upon and what could have been done differently. The discussion chapter gives a foundation for drawing accurate and meaningful conclusions, which are presented in the conclusion chapter. Lastly, some additional results and mathematical derivations are presented in the appendix for extra clarification and support.


1.1 Background

Time-spectral solvers have been explored in various forms during the last few decades, although the general belief has been that solving a problem simultaneously over space-time is an inefficient approach. It has, however, been shown lately that this is not necessarily the case, and there exist several examples of time-spectral solvers outperforming time-stepping solvers [1][4][5]. Among the benefits of using a time-spectral method over time-stepping solutions are that fewer degrees of freedom are needed to obtain a certain accuracy and that they exhibit an exponential rate of convergence for sufficiently smooth solutions [6][7].

The GWRM can be used to analyse sets of partial differential equations. It is a fully spectral, semi-analytical method and has previously shown very promising results in terms of efficiency and accuracy, especially when dividing the temporal and spatial domains into several subdomains [2]. The method has earlier been implemented for many different equations with the spatial and temporal domains divided into subdomains on a uniform grid [3]. Lately it has also been shown that using an adaptive instead of a uniform grid has the potential to increase the efficiency of the solver even further, as long as an efficient grid mapping algorithm is constructed [8]. This has, however, so far only been applied in some special cases. The adaptive subdomain approach is most efficient when the solution has localised steep gradients: with a uniform grid, extremely small subdomains would be needed everywhere just to resolve the gradients, including areas where they are not required. An adaptive subdomain approach, on the other hand, can place small subdomains around the steep gradients of the solution while maintaining larger subdomains in the rest of the spatial domain. In theory this gives more efficient and faster solutions, since fewer spatial subdomains are required than with a uniform grid, and a lower number of modes should suffice for the same accuracy. The efficiency and speed of the process by which the grid adapts to optimise the resolution has, however, not been discussed at length earlier. This is important for future research using the GWRM: if the adaptive grid can be further optimised with a more agile and quick algorithm, it will be easier to apply to new equations and it will solve problems faster, speeding up further research within the field. The adaptive grid in the GWRM should be able to resolve localised peaks while keeping a very low resolution in the rest of the spatial domain, giving a highly efficient way to simulate many different complex systems of PDEs.

In Gillgren's thesis [8], adaptive grids for the GWRM were implemented using a global adaptive grid called the "compressive method", which calculates the resolution of each subdomain and then adjusts all subdomains to assist the subdomain with the worst resolution. This can, however, be problematic if one starts with a uniform grid and has a function with two or more distinct gradients. The global compressive method will then try to compress towards one of the gradients in the domain, and in the next iteration towards the other, with each iteration cancelling out the previous one, which yields a very slow adaption. This can be avoided by instead adjusting each subdomain locally, comparing each subdomain to the subdomains directly to the left and right of it. This does, however, require more comparisons than the global compressive method, since the current subdomain has to be compared with both the previous and the next subdomain, while the global compressive method only compares each subdomain with the one that has the poorest resolution globally. Another option is to decrease the size of the subdomain with the poorest resolution by the same amount as the subdomain with the best resolution is increased, the second poorest by the same amount as the second best, and so on. This method should make the resolution of the spatial subdomains converge towards the average resolution value of the system. All three of these options are interesting to study and have their own strengths and weaknesses. In the method chapter, a more in-depth analysis and explanation of these three algorithms is presented.

1.2 Research Question

As mentioned in the background, previous research suggests that applying an adaptive grid to the GWRM is more efficient than using a uniform grid, but this has not yet been extensively tested. The question we aim to answer in this study is therefore: Do you always gain efficiency when switching from a uniform to an adaptive grid in the generalized weighted residual method, and which of the average, local and global adaptive algorithms is the most efficient?

2 Method

The method section starts with an introduction to the GWRM followed by a brief section about Chebyshev polynomials. Next, subdomains and boundary conditions in the GWRM are described followed by a short description of the three different adaptive grid algorithms studied in this thesis. In the last part of this section the benchmark tests for the different algorithms will be presented.

2.1 The Generalized Weighted Residual Method

In this section a brief description of the GWRM is presented; for a more detailed presentation, see J. Scheffel's publication [3]. The GWRM is a semi-analytical method used to solve partial differential equations of the form

$$\frac{\partial u}{\partial t} = Du + f \qquad (2.1)$$

where u = u(t, x; p) is the solution vector and D is a linear or nonlinear matrix operator that may depend on both physical variables and parameters. f = f(t, x; p) is an explicitly given source term that does not depend on u. Boundary conditions and initial conditions are assumed to be known.

If we integrate equation 2.1 in time we get

$$u(t, x; p) = u(t_0, x; p) + \int_{t_0}^{t} \left[ Du(t', x; p) + f(t', x; p) \right] dt' \qquad (2.2)$$

The solution vector u(t, x; p) can be approximated with first-kind multivariate Chebyshev polynomials. Using one spatial variable x and one physical parameter p, we get

$$u(t, x; p) = {\sum_{k=0}^{K}}'\, {\sum_{l=0}^{L}}'\, {\sum_{m=0}^{M}}'\, \alpha_{klm} T_k(\tau) T_l(\xi) T_m(P) \qquad (2.3)$$

where K is the number of temporal modes, L the number of spatial modes and M the number of parameter modes, using the transformations

$$\tau = \frac{t - A_t}{B_t}, \qquad \xi = \frac{x - A_x}{B_x}, \qquad P = \frac{p - A_p}{B_p} \qquad (2.4)$$

$$A_z = \frac{z_1 + z_0}{2}, \qquad B_z = \frac{z_1 - z_0}{2} \qquad (2.5)$$

with z being any of t, x or p, and z_0 and z_1 representing the interval boundaries. If we then define the residual as

$$R = u(t, x; p) - \left[ u(t_0, x; p) + \int_{t_0}^{t} \{Du + f\}\, dt' \right] \qquad (2.6)$$

the spectral coefficients $\alpha_{klm}$ are determined from the set of algebraic equations generated by requiring that the residual satisfies the Galerkin weighted residual condition over the entire computational domain,

$$\int_{t_0}^{t_1} \int_{x_0}^{x_1} \int_{p_0}^{p_1} R\, T_q(\tau) T_r(\xi) T_s(P)\, w_t w_x w_p \, dt\, dx\, dp = 0 \qquad (2.7)$$

After some algebra, equation 2.7 can be written as the final expression for the GWRM coefficients

$$\alpha_{qrs} = 2\delta_{q0} b_{rs} + A_{qrs} + F_{qrs} \qquad (2.8)$$

For linear cases the resulting equations can be solved using Gaussian elimination; for non-linear cases a root solver such as SIR or Anderson Acceleration has to be used.

2.2 Chebyshev Polynomials

The switch to Chebyshev space is used in the GWRM because of the minimax property of the Chebyshev polynomials: truncating a Chebyshev series at mode n gives a near-optimal polynomial approximation of that degree [9]. This makes it well suited for computationally heavy simulations. The Chebyshev polynomial of the first kind of degree n is denoted by $T_n(x)$ and is defined as

$$T_n(x) = \cos\!\left(n \cos^{-1}(x)\right) \qquad (2.9)$$

which, written out as polynomials, gives $T_0(x) = 1$, $T_1(x) = x$, $T_2(x) = 2x^2 - 1$, $T_3(x) = 4x^3 - 3x$, $T_4(x) = 8x^4 - 8x^2 + 1$, and so on. This can also be expressed as the recurrence relation

$$T_n(x) = 2x T_{n-1}(x) - T_{n-2}(x) \qquad (2.10)$$

To make the Chebyshev polynomials valid on an arbitrary interval [a, b] we can make the variable change

$$s \equiv \frac{x - \tfrac{1}{2}(b + a)}{\tfrac{1}{2}(b - a)} \qquad (2.11)$$

This makes it possible to approximate the function f(x) by a Chebyshev polynomial in s.

We can approximate a function with the discrete Chebyshev approximation by interpolating Chebyshev polynomials through a finite set of points, obtaining an approximation that is exact at those points [9]. The first coefficient in the sum is to be divided by two, which is indicated with a prime after the summation sign. If we have a function f(x) on the interval [−1, 1] we can express the Chebyshev series expansion as

$$f(x) \approx {\sum_{i=0}^{n}}'\, c_i T_i(x) \qquad (2.12)$$

where the coefficient $c_i$ is given by

$$c_i = \frac{2}{n+1} \sum_{k=1}^{n+1} f(x_k) T_i(x_k) \qquad (2.13)$$

with the $x_k$ being the zeros of $T_{n+1}$.

This can be seen from the discrete orthogonality relation

$$\sum_{k=1}^{n+1} T_i(x_k) T_j(x_k) = \begin{cases} 0 & i \neq j \;(\leq n) \\ n+1 & i = j = 0 \\ \tfrac{1}{2}(n+1) & 0 < i = j \leq n \end{cases} \qquad (2.14)$$

If we then multiply equation 2.12 by $\frac{2}{n+1} T_j(x_k)$, evaluate at the points $x_k$ and sum over k, we get from equation 2.14 that

$$\frac{2}{n+1} \sum_{k=1}^{n+1} f(x_k) T_j(x_k) = {\sum_{i=0}^{n}}'\, c_i \left[ \frac{2}{n+1} \sum_{k=1}^{n+1} T_i(x_k) T_j(x_k) \right] = c_j \qquad (2.15)$$
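As an illustration of equations 2.12-2.13, the following Python sketch (not the Maple code used in this thesis) computes the discrete Chebyshev coefficients of a function on [−1, 1], using the zeros of $T_{n+1}$ as interpolation points; the function names and the test function are only examples.

```python
import numpy as np

def chebyshev_coefficients(f, n):
    """Coefficients c_0..c_n of eq. 2.13, with the zeros of T_{n+1} as x_k."""
    k = np.arange(1, n + 2)
    x_k = np.cos((k - 0.5) * np.pi / (n + 1))       # zeros of T_{n+1}
    i = np.arange(n + 1)
    T = np.cos(np.outer(i, np.arccos(x_k)))          # T_i(x_k), eq. 2.9
    return 2.0 / (n + 1) * T @ f(x_k)

def chebyshev_eval(c, x):
    """Evaluate the primed sum of eq. 2.12 (first coefficient halved)."""
    i = np.arange(len(c))
    T = np.cos(np.outer(i, np.arccos(x)))
    return c @ T - 0.5 * c[0]

c = chebyshev_coefficients(np.tanh, 8)               # example: f(x) = tanh(x)
x = np.linspace(-1.0, 1.0, 101)
print(np.max(np.abs(chebyshev_eval(c, x) - np.tanh(x))))   # small approximation error
```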

To be able to utilise Chebyshev polynomials properly, we have to evaluate how to do basic mathematical operations with two or more Chebyshev polynomials.

Using the substitution $x = \cos(\theta)$ together with some basic trigonometric identities we can write the product of two Chebyshev polynomials as

$$T_m(x) T_n(x) = \cos(m\theta)\cos(n\theta) = \frac{1}{2}\left[\cos((m+n)\theta) + \cos(|m-n|\theta)\right] \qquad (2.16)$$

which can also be written as

$$T_m(x) T_n(x) = \frac{1}{2}\left( T_{m+n}(x) + T_{|m-n|}(x) \right) \qquad (2.17)$$

Next we can evaluate the integral of a Chebyshev series,

$$I_{n+1}(x) = \int {\sum_{r=0}^{n}}'\, a_r T_r(x)\, dx = C + \frac{1}{2} a_0 T_1(x) + \frac{1}{4} a_1 T_2(x) + \sum_{r=2}^{n} \frac{a_r}{2}\left[ \frac{T_{r+1}(x)}{r+1} - \frac{T_{r-1}(x)}{r-1} \right] = {\sum_{r=0}^{n+1}}'\, A_r T_r(x) \qquad (2.18)$$

where $A_0$ is determined from the integration constant C, and

$$A_r = \frac{a_{r-1} - a_{r+1}}{2r}, \qquad r > 0 \qquad (2.19)$$

with

$$a_{n+1} = a_{n+2} = 0 \qquad (2.20)$$

To evaluate the integral on [a, b] we apply equation 2.11, where

$$A \equiv \frac{1}{2}(b + a), \qquad B \equiv \frac{1}{2}(b - a), \qquad T_n \equiv T_n\!\left(\frac{x - A}{B}\right) \qquad (2.21)$$

The integral on the arbitrary interval [a, b] then becomes

$$\int T_n\!\left(\frac{x-A}{B}\right) dx = \begin{cases} \dfrac{B}{2}\left[ \dfrac{T_{n+1}\!\left(\frac{x-A}{B}\right)}{n+1} - \dfrac{T_{|n-1|}\!\left(\frac{x-A}{B}\right)}{n-1} \right], & n \neq 1 \\[2ex] \dfrac{B}{4}\, T_2\!\left(\frac{x-A}{B}\right), & n = 1 \end{cases} \qquad (2.22)$$

To evaluate the derivative of a Chebyshev sum we just have to reverse the integration process [9]. Given the Chebyshev sum of degree n + 1

$$I_{n+1}(x) = {\sum_{r=0}^{n+1}}'\, A_r T_r(x) \qquad (2.23)$$

we then have

$$a_r = \sum_{\substack{k=r+1 \\ k-r \text{ odd}}}^{n+1} 2k A_k \qquad (2.24)$$

To make this valid for a general interval [a, b], we proceed as in equations 2.23-2.24 with the substitution $x \to (x - A)/B$, which gives the recurrence

$$a_{r-1} = a_{r+1} + \frac{2 r A_r}{B}, \qquad r = n+1, n, \ldots, 1 \qquad (2.25)$$

We can then express the explicit sum as [9]

$$a_r = \frac{1}{B} \sum_{\substack{k=r+1 \\ k-r \text{ odd}}}^{n+1} 2k A_k \qquad (2.26)$$

For a more in-depth derivation of the different Chebyshev operations presented in this section of the report, see Appendix B or read J.C. Mason and David C. Handscomb’s excellent book on Chebyshev polynomials [9].
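The coefficient relations above translate directly into short routines. The following Python sketch is illustrative only (the thesis implementation is in Maple) and assumes the coefficients are stored with the primed convention used above; the function names are examples.

```python
import numpy as np

def chebyshev_integrate(a, interval):
    """Integral coefficients A_r from series coefficients a_r on [a, b]
    (eqs. 2.19-2.22); A_0, fixed by the integration constant, is set to 0."""
    lo, hi = interval
    B = 0.5 * (hi - lo)
    n = len(a) - 1
    a_ext = np.concatenate([a, [0.0, 0.0]])          # a_{n+1} = a_{n+2} = 0 (eq. 2.20)
    A = np.zeros(n + 2)
    for r in range(1, n + 2):
        A[r] = B * (a_ext[r - 1] - a_ext[r + 1]) / (2 * r)   # eq. 2.19, scaled by B
    return A

def chebyshev_differentiate(A, interval):
    """Derivative coefficients a_r from A_r via the reverse recurrence
    a_{r-1} = a_{r+1} + 2 r A_r / B (eq. 2.25)."""
    lo, hi = interval
    B = 0.5 * (hi - lo)
    m = len(A) - 1                                   # degree of the input series
    a = np.zeros(m + 2)                              # trailing zeros play the role of eq. 2.20
    for r in range(m, 0, -1):
        a[r - 1] = a[r + 1] + 2 * r * A[r] / B
    return a[:m]                                     # degree m - 1

# Round-trip check: differentiating the integral recovers the original coefficients.
coeffs = np.array([1.0, 0.5, -0.25, 0.1])
print(chebyshev_differentiate(chebyshev_integrate(coeffs, (0.0, 2.0)), (0.0, 2.0)))
```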

2.3 Boundary Conditions and Subdomains in GWRM

In this section a description of how boundary and initial conditions are assigned in the GWRM is presented, followed by a short presentation of how temporal and spatial subdomains are implemented in the GWRM.

2.3.1 Boundary and Initial Conditions in GWRM

The number of boundary and initial conditions depends on the number of equations in equation 2.1 and on the order of the spatial derivatives. The initial conditions can be inserted directly into equation 2.8, and the end condition of each interval is the initial condition for the next interval. The boundary conditions are applied by replacing the $\alpha_{klm}$ coefficients in equation 2.3 at the upper end of the spatial mode spectrum [10]. This can be explained by looking at equations 2.23 to 2.26, where it can be seen that when a function represented by a Chebyshev polynomial is differentiated r times, the representation is only valid up to order n + 1 − r. This implies that the coefficients in equation 2.8 do not apply for the highest spatial mode numbers. The derivatives will therefore be unaffected by the boundary conditions, which are only taken into account when the Chebyshev polynomial is not differentiated. For a more thorough explanation of how to apply boundary and initial conditions in the GWRM, see Jan Scheffel's publication [3].

2.3.2 Temporal and Spatial Subdomains in GWRM

The dominant cost in computational time when solving a problem using the GWRM is the application of the root solver to equation 2.8. A simple, straightforward application of the GWRM and the root solver SIR would require around $\Omega = (K + 1)^3 (L + 1)^3 (M + 1)^3$ operations per iteration. This number can be reduced by a factor of 3 if LU decomposition is implemented instead of matrix inversion [1].

To reduce the number of operations even further one can divide the temporal and spatial domains into subdomains, which gives a linear instead of cubic dependence on the number of modes, as long as the number of subdomains is proportional to the number of modes. If the domain is divided into $N_x$ spatial and $N_t$ temporal subdomains, the number of operations is reduced further to

$$\frac{\Omega/3}{(N_t N_x)^2} = N_t N_x \left[ \frac{K+1}{N_t} \right]^3 \left[ \frac{L+1}{N_x} \right]^3 (M+1)^3 / 3 \qquad (2.27)$$

This assumes that the same total number of modes is required in both cases [2]. In practice the number of modes can be reduced further when using subdomains, since the complexity of the function within a subdomain is usually lower than that of the full domain, which reduces the computational cost even more [2].

The temporal subdomains are implemented in a much more straightforward way than the spatial subdomains, since the previous subdomain's end condition can simply be used as the initial condition for the next subdomain. The temporal subdomains are also adaptive in the GWRM used in this thesis. The temporal adaptive method is quite simple.

A resolution $R_{i,j}$ for spatial subdomain i and temporal subdomain j is calculated as

$$R_{i,j} = \frac{|a^{k,l}_{i,j}| + |a^{(k-1),l}_{i,j}|}{|a^{0,l}_{i,j}| + |a^{1,l}_{i,j}|} \qquad (2.28)$$

where $a^{k,l}_{i,j}$ is the Chebyshev coefficient of spatial and temporal subdomain i and j at temporal and spatial mode k and l respectively. If $R_{i,j}$ is larger than a certain maximum error value ε, the temporal subdomain size is divided by two. If the residual is smaller than ε for 10 intervals in a row, the temporal subdomain size is increased by a factor of 1.5, since this indicates that the solution might not require the same resolution in the coming time intervals.

For the spatial subdomains, the problem is that the boundary conditions are usually only known at the exterior, global boundaries, so one cannot simply progress from one subdomain to the next in the same fashion as with the temporal subdomains. Instead, the given global boundary conditions are imposed on the outer edges of the first and last subdomain, while other conditions are specified for all internal spatial boundaries. These internal boundaries are connected through the condition that the functions and their derivatives are continuous across each interior boundary between two spatial subdomains. Because derivative matching is sensitive to small errors, due to the large coefficients appearing in higher order derivatives (see equation 2.26), numerical instability may result from such a procedure.

Therefore a "handshaking procedure" is usually used when the GWRM is applied with subdomains. In this procedure the functions are allowed to overlap into the neighbouring domains so that they are doubly connected, which improves stability compared to derivative matching [2]. It has also been shown that for second order contact between the internal boundaries we must have

$$V \geq \frac{\Gamma_s}{2} \qquad (2.29)$$

where $\Gamma_s$ is the order of the system of PDEs and V is the number of variables in the PDE.

Figure 2.1: Illustration of the domain overlap. The red area is subdomain s − 1, the blue area is subdomain s and the purple area is the overlap between the subdomains.

In figure 2.1 the GWRM "handshaking procedure" between internal subdomains s − 1 and s at the point $x_b$ is illustrated. Here $\Delta x_{s,1}$ is the distance that subdomain s overlaps into the previous neighbouring subdomain and $\Delta x_{s-1,2}$ is the distance that subdomain s − 1 overlaps into the next subdomain. The overlap is computed as a function of the subdomain size according to

$$\Delta x_{s,1} = \Delta x_{s,2} = x_s \frac{\kappa}{N_x} \qquad (2.30)$$

where $\Delta x_{s,1}$ is the overlap into the previous subdomain, $\Delta x_{s,2}$ is the overlap into the next subdomain and κ is a scaling parameter set equal to $3 \times 10^{-3}$ for practical reasons.

2.4 Adaptive Subdomain Decomposition Algorithms

Here follows a brief description of how the resolution of the solution is evaluated, followed by mathematical definitions and descriptions of the different adaptive algorithms. There are three different algorithms: the local, the global and the average algorithm.

The local algorithm accounts only for the resolution of the subdomains directly adjacent to the current subdomain when recalculating the size of a specific spatial subdomain. The average algorithm tries to even out the resolution of all subdomains towards the median spatial resolution of that specific time interval. The global method focuses on resolving one area of the spatial domain extremely well by forcing all subdomains to compress towards the subdomain with the globally poorest resolution.

2.4.1 Resolution Quality

One of the aspects used to evaluate the different subdomain decomposition algorithms is the quality of the resolution. The resolution value is a measure of the quality of the solution: the closer the resolution value in each subdomain is to the best resolution value, the better the solution. The normalised resolution $C_{i,j}$, which is used by all adaptive algorithms in this thesis to calculate their subdomain decomposition, is defined as

$$C_{i,j} = \frac{c_{i,j}}{c_{\max,j}} \qquad (2.31)$$

$$c_{i,j} = \frac{|a^{k,l}_{i,j}| + |a^{k,(l-1)}_{i,j}|}{|a^{k,0}_{i,j}| + |a^{k,1}_{i,j}|} \qquad (2.32)$$

where $c_{\max,j}$ is the maximum resolution at time interval j, $c_{i,j}$ is the resolution of the ith subdomain and $a^{k,l}_{i,j}$ is the Chebyshev coefficient at temporal and spatial mode k and l respectively.

The resolution quality Q at time interval j is then given by

$$Q_j = \frac{\sum_{i=1}^{N_x} C_{i,j}}{N_x} \qquad (2.33)$$

where $Q_j \in (0, 1]$. The optimal value is $Q_j = 1$, since that means that all $c_{i,j}$ are equal to $c_{\max,j}$, but the quality appears to be acceptable as long as $Q_j > 0.1$.
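A small Python sketch of equations 2.31-2.33 is shown below (illustrative only, not the thesis's Maple code); the coefficient array layout and the choice of using the highest temporal mode in equation 2.32 are assumptions.

```python
import numpy as np

def resolution_quality(coeff):
    """Normalised resolutions C_i (eq. 2.31) and quality Q (eq. 2.33) for one
    time interval. coeff[i, k, l] is the Chebyshev coefficient of spatial
    subdomain i at temporal mode k and spatial mode l (assumed layout);
    eq. 2.32 is evaluated at the highest temporal mode, which is an assumption."""
    c = (np.abs(coeff[:, -1, -1]) + np.abs(coeff[:, -1, -2])) / (
        np.abs(coeff[:, -1, 0]) + np.abs(coeff[:, -1, 1]))       # eq. 2.32
    C = c / c.max()                                               # eq. 2.31
    return C, C.mean()                                            # eq. 2.33

# Example with 12 subdomains and 7 temporal and spatial modes of random data:
rng = np.random.default_rng(0)
C, Q = resolution_quality(rng.random((12, 7, 7)))
print(Q)
```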

2.4.2 The Local Method

The local algorithm for subdomain decomposition is a very simple but effective algorithm developed by Lindvall [11]. It compares the normalised resolutions $C_{(t-1),i}$ from the last time interval to the left and right of a grid point $x_{t,i}$. It then decreases or increases the size of the subdomain to the left or right of the grid point by moving the grid point, depending on which one had the lower relative resolution in the last time interval. This can be written as

$$x_{t,i} = \begin{cases} x_{(t-1),i} - \dfrac{C_{(t-1),i} - C_{(t-1),(i+1)}}{C_{(t-1),i} + C_{(t-1),(i+1)}} \dfrac{\Delta x_{(t-1),i}}{v}, & C_{(t-1),i} > C_{(t-1),(i+1)} \\[2ex] x_{(t-1),i} + \dfrac{C_{(t-1),(i+1)} - C_{(t-1),i}}{C_{(t-1),i} + C_{(t-1),(i+1)}} \dfrac{\Delta x_{(t-1),(i+1)}}{v}, & C_{(t-1),i} \leq C_{(t-1),(i+1)} \end{cases} \qquad (2.34)$$

where $x_{t,i}$ is the grid point that marks the end of subdomain i at time interval t, $C_{t,i}$ is the normalised resolution value of subdomain i at time interval t, and v is a velocity parameter, set to v = 6 for practical reasons in this thesis. This method is very robust since it optimises the resolution of each subdomain relative to the resolution of the subdomains next to it. This local approach is however not very efficient in certain cases.

Figure 2.2: f(x) = exp(−x²)

Take for example the function f(x) = exp(−x²), shown in figure 2.2. It is unnecessary to optimise the subdomains where the function's value is close to zero, since they will have a much better resolution than the subdomains around and on the gradient. The local method will nevertheless spend resources trying to resolve subdomains that already have far better resolution than the subdomains close to the gradient, which makes the algorithm inefficient for such a case.
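The following Python sketch illustrates one local-method update of the interior grid points; it is not the thesis's Maple implementation, and the sign convention (the grid point moves into the neighbour with the larger normalised resolution, shrinking it) is an interpretation of equation 2.34 and the description above.

```python
def local_update(x, C, dx, v=6.0):
    """One update of the interior grid points x_1..x_{Nx-1} (eq. 2.34, a sketch).
    x[i] is the right edge of subdomain i, C[i] its normalised resolution and
    dx[i] its width, all from the previous time interval; v = 6 as in the thesis."""
    x_new = list(x)
    for i in range(len(x)):
        shift = abs(C[i] - C[i + 1]) / (C[i] + C[i + 1])
        if C[i] > C[i + 1]:
            x_new[i] = x[i] - shift * dx[i] / v        # shrink the left subdomain
        else:
            x_new[i] = x[i] + shift * dx[i + 1] / v    # shrink the right subdomain
    return x_new

# Example: four subdomains on [0, 1] with the third one poorly resolved
print(local_update([0.25, 0.5, 0.75], [0.2, 0.3, 1.0, 0.4], [0.25] * 4))
```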

2.4.3 The Global Compressive Method

The global compressive method has been applied earlier to the GWRM by Gillgren in his work with Burger’s equation and the GWRM [8], where a more thorough derivation of the algorithm can be found.

First, if the subdomain with the lowest resolution is located at one of the global boundaries of the spatial domain, a new size for spatial subdomain j is calculated from

$$\Delta x_j = (1 + \alpha)^{|s_{\max} - j|}\, \beta \qquad (2.35)$$

where $s_{\max}$ is the index of the subdomain with the lowest resolution and α is a positive parameter which governs the rate of compression. β is a constant that ensures that the new grid stays within the original spatial domain $|x_1 - x_0|$, and it is defined as

$$\beta = \frac{|x_1 - x_0|}{\sum_{j=0}^{N_x - 1} (1 + \alpha)^j} \qquad (2.36)$$

If the lowest resolution is instead located in one of the inner subdomains, the spatial domain has to be divided into two parts as follows:

$$\Delta x_j = \begin{cases} (1 + \alpha)^{|s_{\max} - j|}\, \beta, & j \leq s_{\max} \\ (1 + \alpha)^{|s_{\max} - (j-1)|}\, \gamma, & j > s_{\max} \end{cases} \qquad (2.37)$$

where the coefficients β and γ are

$$\beta = \frac{|x_{\max} - x_0|}{\sum_{j=0}^{s_{\max} - 1} (1 + \alpha)^j}, \qquad \gamma = \frac{|x_1 - x_{\max}|}{\sum_{j=0}^{N_x - s_{\max} + 1} (1 + \alpha)^j} \qquad (2.38)$$

where $x_{\max}$ is the x-coordinate of the centre of the subdomain with the lowest resolution. In essence, this method squeezes all subdomains towards the area with the lowest relative resolution. It should work well when the solution has only one local gradient, or several gradients that do not move, since the method can then be restricted to a certain set of subdomains if one knows roughly where the local peaks will show up and how many there will be.
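A Python sketch of this width calculation is given below (illustrative only; α = 0.1 is an arbitrary example value, the normalisation sums follow equations 2.36 and 2.38 exactly as stated, and the thesis's actual implementation is in Maple).

```python
import numpy as np

def global_compressive_widths(x0, x1, x_max, s_max, Nx, alpha=0.1):
    """Subdomain widths for the global compressive method (eqs. 2.35-2.38, a sketch).
    s_max is the index of the poorest-resolved subdomain and x_max its centre."""
    j = np.arange(Nx)
    if s_max in (0, Nx - 1):
        beta = abs(x1 - x0) / np.sum((1 + alpha) ** np.arange(Nx))              # eq. 2.36
        return (1 + alpha) ** np.abs(s_max - j) * beta                           # eq. 2.35
    beta = abs(x_max - x0) / np.sum((1 + alpha) ** np.arange(s_max))             # eq. 2.38
    gamma = abs(x1 - x_max) / np.sum((1 + alpha) ** np.arange(Nx - s_max + 2))
    left = (1 + alpha) ** np.abs(s_max - j[j <= s_max]) * beta                   # eq. 2.37
    right = (1 + alpha) ** np.abs(s_max - (j[j > s_max] - 1)) * gamma
    return np.concatenate([left, right])

# Example: 12 subdomains on [0, 1], worst resolution in subdomain 8 centred at x = 0.7
print(global_compressive_widths(0.0, 1.0, 0.7, 8, 12))
```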

2.4.4 The Average Method

The averaging method seeks to drive the resolution of each subdomain towards the average resolution value in a specific time interval. This is done by pairing the subdomain with the lowest resolution with the subdomain with the highest resolution, decreasing the size of the subdomain with the lowest resolution by the same amount as the subdomain with the best resolution is increased. The subdomain with the second lowest resolution is then paired with the second highest and resized accordingly, and so on, until all subdomains have been updated or only one subdomain remains, the one with the median resolution value, which is left unchanged. The average algorithm is defined as

$$x_{t,i} = \begin{cases} \dfrac{C_{(t-1),i} - C_{(t-1),(N_x - i)}}{C_{(t-1),i} + C_{(t-1),(N_x - i)}} \dfrac{x_{(t-1),i}}{v}, & i < N_x/2 \\[2ex] \dfrac{C_{(t-1),(N_x - i)} - C_{(t-1),i}}{C_{(t-1),(N_x - i)} + C_{(t-1),i}} \dfrac{x_{(t-1),(N_x - i)}}{v}, & i > N_x/2 \end{cases} \qquad (2.39)$$

where $x_{t,i}$ is the grid point that marks the end of subdomain i at time interval t, v is a velocity parameter set to v = 6 for practical reasons in this thesis, the index i runs from the lowest to the highest resolution, $N_x$ is the number of spatial subdomains and $C_{t,i}$ is the normalised resolution value of subdomain i at time interval t. The advantage of the average algorithm is that it evens out all resolutions globally towards the median value after each iteration, which should make it faster than the local method since it does not spend resources on refining subdomains where the resolution is already good. It focuses on globally optimising the resolution while still being able to resolve functions with more than one local gradient.

2.5 Modelling the Equations With the GWRM

2.5.1 Burger’s Equation

The one-dimensional Burger's equation is a second order PDE with a viscosity term, defined as

$$u_t = \nu u_{xx} - u u_x \qquad (2.40)$$

where the index t denotes ∂/∂t and the index x denotes ∂/∂x. Burger's equation has previously been extensively studied using the GWRM with a uniform grid [3] and an adaptive grid [8]. In this thesis the equation is studied with another set of initial conditions, chosen to form a single distinct shock wave:

$$u(x, 0) = \sin(2\pi x) + 0.5 \sin(\pi x) \qquad (2.41)$$

The advantage of this initial condition is that the domain decomposition has to resolve the shock wave quickly, before the time interval becomes too small for the problem to be solved within a reasonable time. Another challenge for the algorithms is that the shock wave moves in the spatial domain. Using the GWRM with adaptive subdomains on a spatially moving shock has not been studied previously, which makes this a good case for evaluating how efficient the domain decomposition methods are.

2.5.2 Fisher-Kolmogorov Equation

The extended Fisher-Kolmogorov equation is a 4th order PDE given by

$$u_t = -\gamma u_{xxxx} - u_{xx} + u^3 - u \qquad (2.42)$$

where γ = 0.001 in this study. We study the Fisher-Kolmogorov equation with the following initial condition:

$$u(x, 0) = -\sin(\pi x) \qquad (2.43)$$

Looking, for example, at the solution of the Fisher-Kolmogorov equation presented in Bashan et al. [12], one can see that a uniform grid should be a good, if not optimal, domain decomposition for this case, because the solution is at all times just a simple sine wave, which should give roughly the same resolution in all subdomains for a uniform subdomain distribution. The reason for studying the Fisher-Kolmogorov equation here is to check the robustness of the adaptive methods: will they keep the uniform distribution of the subdomains, or will they change the subdomains into a more or less efficient configuration?

2.5.3 Non-Linear Schrödinger Equation

The non-linear Schrödinger equation is a complex PDE defined as

$$i u_t + u_{xx} + q|u|^2 u = 0 \qquad (2.44)$$

The function u(x, t) can be divided into coupled real and imaginary parts as follows [13]:

$$u(x, t) = r(x, t) + i s(x, t) \qquad (2.45)$$

The real part becomes

$$r_t = -s_{xx} - q(r^2 s + s^3) \qquad (2.46)$$

where q is a real parameter set equal to 2 in this thesis. The imaginary part becomes

$$s_t = r_{xx} + q(s^2 r + r^3) \qquad (2.47)$$

with the same q as in equation 2.46. The non-linear Schrödinger equation will be studied with the following initial condition:

$$u(x, 0) = \exp(-x^{1.6}) \qquad (2.48)$$

This initial condition is chosen because it is hard to resolve due to its steep gradient. The uniform and global algorithms are probably not as well suited to this problem as the local and average methods. The global algorithm is poorly suited because the GWRM actually solves for the functions r(x, t) and s(x, t). These functions do not share the single localised peak of |u| and have several other gradients in their solutions. It is only when the two functions are combined that we get the single localised peak; all other gradients, except the initial condition, should cancel out when taking the absolute value of equation 2.45. This means that the GWRM needs a very agile and fast domain decomposition to resolve this equation throughout the time domain within a reasonable time frame.
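The splitting in equations 2.44-2.47 can be checked symbolically. The following Python/SymPy sketch (not part of the thesis) substitutes the stated expressions for $r_t$ and $s_t$ into equation 2.44 and verifies that the result vanishes.

```python
import sympy as sp

x, t, q = sp.symbols('x t q', real=True)
r = sp.Function('r', real=True)(x, t)
s = sp.Function('s', real=True)(x, t)
u = r + sp.I * s

# i*u_t + u_xx + q*|u|^2*u = 0 with |u|^2 = r^2 + s^2 (eq. 2.44)
lhs = sp.I * sp.diff(u, t) + sp.diff(u, x, 2) + q * (r**2 + s**2) * u

# Substitute the stated evolution equations for r and s (eqs. 2.46-2.47)
subs = {
    sp.diff(r, t): -sp.diff(s, x, 2) - q * (r**2 * s + s**3),
    sp.diff(s, t): sp.diff(r, x, 2) + q * (s**2 * r + r**3),
}
print(sp.simplify(sp.expand(lhs.subs(subs))))  # prints 0 if the splitting is consistent
```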

3 Result

In the result section we first present the typical solution of each equation and then move on to present the performance of each algorithm applied to the different equations. For all the results presented in this thesis, the requirement at each temporal interval is that the maximum error ε is below 1 × 10^-5 for all spatial subdomains before the solver moves on to the next time interval.

3.1 One-Dimensional Burger’s Equation

When solving the one-dimensional Burger's equation a viscosity of ν = 0.01 was used, together with 10 temporal and 12 spatial subdomains and 6 temporal and spatial modes.

Figure 3.1: Initial condition used for Burger's equation
Figure 3.2: Solution of Burger's equation

3.2 Fisher-Kolmogorov Equation

Figure 3.3: Initial conditions used for the Fisher-Kolmogorov equation

Figure 3.4: Solution of the Fisher-Kolmogorov equation

The Fisher-Kolmogorov equation was solved with γ = 0.001, 15 temporal and spatial subdomains and 5 temporal and spatial modes.

3.3 Non-Linear Schrödinger Equation

For solving the non-linear Schrödinger equation 10 temporal and 8 spatial subdomains, as well as 6 temporal and spatial modes were used. The solution of the imaginary and real part of the non-linear Schrödinger equation is also presented in figures 3.7 and 3.8.


Figure 3.5: Initial condition used for the non-linear Schrödinger equation

Figure 3.6: Solution of the non-linear Schrödinger equation

Figure 3.7: Imaginary part of the non-linear Schrödinger equation

Figure 3.8: Real part of the non-linear Schrödinger equation

3.4 Efficiency, Accuracy, and Resolution Comparison

In this section we present comparative figures of the resolution quality, the residuals, and the subdomain distributions. Tables of the number of time intervals, the number of times the time interval size has been reduced, the total solution time, and the maximum residual are also presented, in order to evaluate the efficiency and accuracy of the subdomain decomposition algorithms applied to the different equations studied in this thesis. First, the results for the algorithms applied to the one-dimensional Burger's equation are presented.

3.4.1 The Global Algorithm Applied to the One-Dimensional Burger’s Equation

Figure 3.9: Subdomain distribution for the global algorithm applied to the one-dimensional Burger’s equation

Figure 3.10: Resolution quality Q of the global algorithm applied to the one-dimensional Burger’s equation

In figure 3.9 one can see the subdomain distribution for the global algorithm. Each dot represents the edge of one spatial subdomain at a specific time interval. The adaption of the algorithm to track the shock wave can be seen as all subdomain edges compress towards the upper right corner of the figure. In figure 3.10 one can see the resolution quality of the global algorithm at each time interval. Each dot represents the resolution quality at one specific time interval. The resolution seems to increase rapidly towards the end of the temporal domain, which is also where the spatial movement of the shock wave stops.

Figure 3.11: Residual for the global algorithm's solution of the one-dimensional Burger's equation

In figure 3.11 the residual for the global algorithm applied to the one-dimensional Burger’s equation at time t = 1s can be seen. One can see that the residual is much larger in the upper half of the spatial domain, where x > 0.5. This is the same area as the shock front is formed in and moves within throughout all time intervals, as can be seen in figure 3.2.


3.4.2 The Average Algorithm Applied to the One-Dimensional Burger’s Equation

Figure 3.12: Subdomain distribution for the average algorithm applied to the one-dimensional Burger's equation
Figure 3.13: Resolution quality of the average algorithm applied to the one-dimensional Burger's equation

In figure 3.12 the subdomain distribution for the average algorithm is presented in the same way as for the global algorithm in figure 3.9. It can be seen that the algorithm tracks the shock wave very smoothly as there are no sudden changes or oscillations of any spatial subdomain edges. In figure 3.13 one can, as in figure 3.10, see the resolution quality of the average algorithm at each time interval. There are no sharp increases in resolution quality towards the end of the temporal domain for the average algorithm, which indicates that the average algorithm successfully tracks the shock front throughout the time domain.

Figure 3.14: Residual for the average algorithm's solution of the one-dimensional Burger's equation

The residual for the average algorithm applied to the one-dimensional Burger's equation at time t = 1s can be seen in figure 3.14. The residual follows the same behaviour as for the global algorithm in figure 3.11: it is much larger in the upper half than in the lower half of the spatial domain.

3.4.3 The Local Algorithm Applied to the One-Dimensional Burger’s Equation

Figure 3.15: Subdomain distribution for the local algorithm applied to the one-dimensional Burger’s equation

Figure 3.16: Resolution quality of the local algorithm applied to the one-dimensional Burger’s equation

Figures 3.15 and 3.16 show the subdomain distribution and resolution quality of the local algorithm applied to the one-dimensional Burger's equation. One can see a steady increase in the resolution quality up to a specific point in figure 3.16, where it suddenly starts to drop off. In figure 3.15 it can be seen that the time intervals are relatively constant in size until around 0.2 seconds, where the time interval size decreases rapidly. This is around the same time as the shock wave accelerates its movement in the spatial domain, as can be seen in figure 3.2.

Figure 3.17: Residual for the local algorithm's solution of the one-dimensional Burger's equation

The residual for the local algorithm applied to the one-dimensional Burger's equation at time t = 1s can be seen in figure 3.17. The residual follows the same behaviour as for the global and average algorithms in figures 3.11 and 3.14, but it remains large considerably further up in the spatial domain than for the other adaptive algorithms.

3.4.4 The Uniform Algorithm Applied to the One-Dimensional Burger’s Equation

Figure 3.18: Subdomain distribution for the uniform algorithm applied to the one-dimensional Burger's equation
Figure 3.19: Resolution quality of the uniform algorithm applied to the one-dimensional Burger's equation

Figures 3.18 and 3.19 show the subdomain distribution and resolution quality of the uniform algorithm applied to the one-dimensional Burger's equation. One can see that the overall resolution quality is, compared to the adaptive algorithms, almost constant in figure 3.19. This is expected since the uniform algorithm keeps the subdomain distribution constant in the spatial domain, which can be seen in figure 3.18.

Figure 3.20: Residual for the uniform algorithm's solution of the one-dimensional Burger's equation

The residual for the uniform algorithm applied to the one-dimensional Burger's equation at time t = 1s can be seen in figure 3.20. The residual follows the same behaviour as for the adaptive algorithms: it is large in the upper half of the spatial domain, between the area where the shock starts to form and where it stops moving spatially, as can be seen in figure 3.2.

3.4.5 Comparison of the Algorithms for the One-Dimensional Burger’s Equation

Table 3.1: Comparison of the algorithms' performance on the one-dimensional Burger's equation: solution time (real time), number of time intervals used, number of reductions of the time interval size, total accumulated time and highest residual for each algorithm.

                    Local       Average     Global      Uniform
Time [s]            1081        784         1277        290
Time intervals      98          36          433         26
Reductions          8           4           30          3
Accumulated [s]     1.0         1.0         1.0         1.0
Residual            9 × 10^-3   4 × 10^-4   0.11        2 × 10^-2

In table 3.1 a comparison between the different algorithms applied to the one-dimensional Burger's equation is made. The table's first row compares the solution time in real time between the algorithms. The reason for using real time is that the Maple code written to evaluate these algorithms runs some tasks in parallel, which would lead to inaccurate results if the time were measured in CPU time instead of real time. The second row compares the number of time intervals used in the solution by the different algorithms. This value is connected to the number of reductions used by each algorithm, which is presented in the third row of the table: if one algorithm uses more reductions than another, it will also use more time intervals in the solution. The fourth row presents the total accumulated time, which might seem trivial for the one-dimensional Burger's case where all the algorithms converge, but this is not the case for all equations and therefore this number is also included. In the last row of table 3.1 the maximum value of the residual is presented for each algorithm, to give a quantifiable measure of accuracy. It can be seen that the uniform algorithm was the fastest but at the same time had the second highest residual, after the global algorithm, while the average algorithm had the lowest residual, by a factor of 22.5. The average algorithm was the second fastest after the uniform algorithm, being a factor of 2.7 slower. The uniform algorithm's residual is of the order 10^-2, while the average and local algorithms' residuals are of the order 10^-4 and 10^-3 respectively. It would be of interest to evaluate how much time the uniform algorithm would need to reach the same residual as the adaptive algorithms. Another run, presented in Appendix A, evaluates how much time the uniform algorithm needs to approach the residual of the average algorithm. When using 18 temporal subdomains, 25 spatial subdomains and 6 temporal and spatial modes, the maximum residual became 3 × 10^-3, as can be seen in figure A.1. This run took 2402 seconds, which is 3 times longer than the average algorithm, and the residual was a factor of 7.5 higher than the average algorithm's.

The following are the results for the algorithms applied to the Fisher-Kolmogorov equation. A notable result is that the global algorithm had to be aborted since it did not converge; therefore there is limited data for the global algorithm applied to the Fisher-Kolmogorov equation.

3.4.6 The Global Algorithm Applied to the Fisher-Kolmogorov Equation

Figure 3.21: Subdomain distribution for the global algorithm applied to the Fisher-Kolmogorov equation
Figure 3.22: Resolution quality of the global algorithm applied to the Fisher-Kolmogorov equation

The subdomain distribution and the resolution quality for the global algorithm applied to the Fisher-Kolmogorov equation are presented in figures 3.21 and 3.22, in the same way as for the one-dimensional Burger's equation. It can be seen in figure 3.21 that the global compressive algorithm aggressively compresses all subdomains towards the first gradient in the Fisher-Kolmogorov solution. This aggressive compression makes it difficult for the global algorithm to converge on the next time interval with the subdomain distribution seen at the upper end of the time axis of figure 3.21. This results in a very small time interval and a poor resolution quality, which can be observed in figure 3.22.

Figure 3.23: Residual for the global algorithm's solution of the Fisher-Kolmogorov equation

The residual for the global algorithm applied to the Fisher-Kolmogorov equation at time t = 0.2s can be seen in figure 3.23. The residual is extremely high. This is due to the poor subdomain adaption, which made it impossible for the solver to converge without using a time interval too small to give a solution within a reasonable time. The smallest time interval seen before aborting the run was of the order 10^-11, which would mean that the solver would finish well after our lifetime, since each time interval took between 10 and 50 seconds for the solver to calculate.

3.4.7 The Average Algorithm Applied to the Fisher-Kolmogorov Equation

Figure 3.24: Subdomain distribution for the average algorithm applied to the Fisher-Kolmogorov equation
Figure 3.25: Resolution quality of the average algorithm applied to the Fisher-Kolmogorov equation

In figures 3.24 and 3.25 the subdomain distribution and the resolution quality for the average algorithm applied to the Fisher-Kolmogorov equation are presented. It can be seen in figure 3.24 that the subdomains towards the middle increase slightly in size in the first couple of time intervals, which increases the resolution quality of the first time intervals, as can be seen in figure 3.25.

Figure 3.26: Residual for the average algorithm's solution of the Fisher-Kolmogorov equation

The residual for the average algorithm applied to the Fisher-Kolmogorov equation at t = 0.2s is presented in figure 3.26. Comparing figure 3.26 to the solution, figure 3.4, we can see that the residuals maximum value coincides with the top of the gradients in the solution of the Fisher-Kolmogorov equation.

(44)

3.4.8 The Local Algorithm Applied to the Fisher-Kolmogorov Equation

Figure 3.27: Subdomain distribution for the local algorithm applied to the Fisher-Kolmogorov equation
Figure 3.28: Resolution quality of the local algorithm applied to the Fisher-Kolmogorov equation

In figures 3.27 and 3.28 the subdomain distribution and the resolution quality for the local algorithm applied to the Fisher-Kolmogorov equation are presented. It can be seen in figures 3.27 and 3.28 that, similarly to the result for the average algorithm, the subdomains towards the middle increase slightly in size in the first couple of time intervals and that the resolution quality of the first time intervals increases.

Figure 3.29: Residual for the local algorithm's solution of the Fisher-Kolmogorov equation

In figure 3.29 the residual plot for the local algorithm applied to the Fisher-Kolmogorov equation at t = 0.2s is presented. When comparing it with figure 3.4, it can be observed that the peaks in the residual coincide with the maxima of the gradients of the solution. This is the same phenomenon as can be seen for the average algorithm in figure 3.26.

3.4.9 The Uniform Algorithm Applied to the Fisher-Kolmogorov Equation

Figure 3.30: Subdomain distribution for the uniform algorithm applied to the Fisher-Kolmogorov equation
Figure 3.31: Resolution quality of the uniform algorithm applied to the Fisher-Kolmogorov equation

Figures 3.30 and 3.31 show the subdomain distribution and resolution quality of the uniform algorithm applied to the Fisher-Kolmogorov equation. As in the one-dimensional Burger's case, the subdomain distribution is, by definition, constant in figure 3.30 and the resolution quality is almost constant throughout the time intervals in figure 3.31.

Figure 3.32: Residual for the uniform algorithm's solution of the Fisher-Kolmogorov equation

The residual plot for the uniform algorithm applied to the Fisher-Kolmogorov equation at t = 0.2s is presented in figure 3.32. The same phenomenon as seen for the average and local algorithms in figures 3.26 and 3.29 can be seen when comparing it with figure 3.4: the maximum residual values coincide with the maxima of the gradients of the solution.

3.4.10 Comparison of the Algorithms for the Fisher-Kolmogorov Equation

Table 3.2: Comparison of the algorithms' performance on the Fisher-Kolmogorov equation: solution time (real time), number of time intervals used, number of reductions of the time interval size, total accumulated time and highest residual for each algorithm.

                    Local       Average     Uniform
Time [s]            135         105         168
Time intervals      125         86          211
Reductions          11          7           16
Accumulated [s]     0.2         0.2         0.2
Residual            8 × 10^-3   8 × 10^-3   8 × 10^-3

In table 3.2 a comparison between the different algorithms applied to the Fisher-Kolmogorov equation is made. Table 3.2 has the same layout as table 3.1, except that the result for the global algorithm is left out since it did not converge. It can be seen that the average algorithm was the fastest, followed by the local and then the uniform one. All the algorithms that converged had the same residual, and the time difference between them was the smallest out of all the equations.

Next, the results for the non-linear Schrödinger equation are presented. The global algorithm did not converge when applied to the non-linear Schrödinger equation, but its results are still presented since they can help in evaluating the performance of the other algorithms.

3.4.11 The Global Algorithm Applied to the Non-Linear Schrödinger Equation

Figure 3.33: Subdomain distribution for the global algorithm applied to the non-linear Schrödinger equation
Figure 3.34: Resolution quality of the global algorithm applied to the non-linear Schrödinger equation

In figures 3.33 and 3.34 the subdomain distribution and resolution quality of the global algorithm applied to the non-linear Schrödinger equation are presented. The same phenomenon as when applying the global algorithm to the Fisher-Kolmogorov equation in figure 3.21 can be observed in figure 3.33: the subdomains have been compressed towards one specific region with poor resolution quality, and the resolution quality does not increase significantly in figure 3.34.

Figure 3.35: Residual for the global algorithm's solution of the non-linear Schrödinger equation

The residual plot for the global algorithm applied to the non-linear Schrödinger equation at t = 6s is presented in figure 3.35. The maximum residual is in the same region as the largest gradient observed in figure 3.6. The residual is lower towards the edges of the spatial domain, but the subdomains have still been compressed towards the edge of the spatial domain. This is an interesting discrepancy between the resolution calculation and the residual, since both should represent the error in each spatial subdomain.

3.4.12 The Average Algorithm Applied to the Non-Linear Schrödinger Equation

Figure 3.36: Subdomain distribution for the average algorithm applied to the non-linear Schrödinger equation
Figure 3.37: Resolution quality of the average algorithm applied to the non-linear Schrödinger equation

In figures 3.36 and 3.37 one can see the subdomain distribution and resolution quality of the average algorithm applied to the non-linear Schrödinger equation. In figure 3.36 the subdomains start off by moving away from the centre of the spatial domain, where the largest gradient is, and instead focus on resolving the subdomains at the edges of the spatial domain, indicating a higher error at the edges of the domain. It can be seen in figure 3.37 that the overall resolution quality seems to decrease as the solver moves from one time interval to the next.

Figure 3.38: Residual for the average algorithm's solution of the non-linear Schrödinger equation

In figure 3.38 the residual for the average algorithm applied to the non-linear Schrödinger equation at t = 6s can be seen. The same discrepancy as for the global algorithm appears here. The residual is much lower towards the edges of the spatial domain, but the subdomain decomposition algorithm still increases the size of the subdomains towards the centre of the spatial domain and decreases the size of the subdomains closer to the edge, indicating that the resolution quality and residual calculations do not agree on where the error is largest.

3.4.13 The Local Algorithm Applied to the Non-Linear Schrödinger Equation

Figure 3.39: Subdomain distribution for the local algorithm applied to the non-linear Schrödinger equation
Figure 3.40: Resolution quality of the local algorithm applied to the non-linear Schrödinger equation

In figures 3.39 and 3.40 the subdomain distribution and resolution quality are presented for the local algorithm applied to the non-linear Schrödinger equation. In figure 3.39 it can be seen that the local algorithm, just like the average one in figure 3.36, focuses the subdomains towards the edges of the spatial domain, despite the fact that the steepest gradient is located towards the centre of the spatial domain. No significant increase in resolution quality can be observed in figure 3.40 either.

Figure 3.41: Residual for the local algorithm's solution of the non-linear Schrödinger equation

In figure 3.41 the residual for the local algorithm applied to the non-linear Schrödinger equation at t = 6s can be seen. The residual is much lower towards the edges of the spatial domain, but the subdomain decomposition algorithm increases the size of the subdomains towards the centre of the spatial domain and decreases the size of the subdomains closer to the edge. This is the same discrepancy between the resolution quality and residual calculations as has been observed for all the adaptive algorithms applied to the non-linear Schrödinger equation.

3.4.14 The Uniform Algorithm Applied to the Non-Linear Schrödinger Equation

Figure 3.42: Subdomain distribution for the uniform algorithm applied to the non-linear Schrödinger equation
Figure 3.43: Resolution quality of the uniform algorithm applied to the non-linear Schrödinger equation

Figures 3.42 and 3.43 show the subdomain distribution and resolution quality of the uniform algorithm applied to the non-linear Schrödinger equation. In figure 3.42 we can observe that the subdomains are kept uniform, which results in no significant decrease or increase in resolution quality over time, as can be observed in figure 3.43.

Figure 3.44: Residual for the uniform algorithm's solution of the non-linear Schrödinger equation

In figure 3.44 the residual for the uniform algorithm applied to the non-linear Schrödinger equation at t = 6s is presented. There is a large oscillation in the residual value leading up to the maximum residual value in the centre of the spatial domain. This has not been observed in any other solution of the non-linear Schrödinger equation.

3.4.15 Comparison of the Algorithms for the Non-Linear Schrödinger Equation

Table 3.3: Comparison of the algorithms' performance on the non-linear Schrödinger equation: solution time (real time), number of time intervals used, number of reductions of the time interval size, total accumulated time and highest residual for each algorithm.

                    Local       Average     Global      Uniform
Time [s]            228         309         1298        141
Time intervals      216         316         850         135
Reductions          16          23          65          12
Accumulated [s]     6.0         6.0         0.024       6.0
Residual            0.4         0.25        0.7         0.45

In table 3.3 a comparison between the different algorithms applied to the non-linear Schrödinger equation is made. Table 3.3 has the same layout as table 3.1. It can be seen that the uniform algorithm was the fastest, followed by the local and the average algorithms, which were slower by factors of 1.6 and 2.2 respectively. The average algorithm once again had the lowest residual, but this time it was only smaller than the uniform and local algorithms' by factors of 1.8 and 1.6 respectively. The global algorithm did not converge for this case either. All the algorithms that converged had residuals of the same order of magnitude, and the time difference between them was not very large. It has to be noted, however, that a discrepancy was detected between the residual value and the subdomain distribution for all adaptive algorithms, indicating that these results may not be representative of the effectiveness of the adaptive algorithms.

References
