
MAT-VET-F 20008

Degree project (Examensarbete), 15 credits, June 2020

Numerical Analysis of the Two Dimensional Wave Equation

Using Weighted Finite Differences for Homogeneous and Heterogeneous Media

Anton Holmberg
Martin Nilsson Lind
Christian Böhme

Faculty of Science and Technology, UTH unit

Visiting address: Ångströmlaboratoriet, Lägerhyddsvägen 1, Hus 4, Plan 0
Postal address: Box 536, 751 21 Uppsala
Telephone: 018 – 471 30 03
Fax: 018 – 471 30 00
Website: http://www.teknat.uu.se/student

Abstract

Numerical Analysis of the Two Dimensional Wave Equation

Anton Holmberg, Martin Nilsson Lind, Christian Böhme

This thesis discusses properties arising when finite differences are implemented for solving the two dimensional wave equation on media with various properties. Both homogeneous and heterogeneous surfaces are considered. The time derivative of the wave equation is discretised using a weighted central difference scheme, dependent on a variable parameter gamma. Stability and convergence properties are studied for some different values of gamma. The report furthermore features an introduction to solving large sparse linear systems of equations, using so-called multigrid methods.

The linear systems emerge from the finite difference discretisation scheme. A conclusion is drawn stating that values of gamma in the unconditionally stable region provide the best computational efficiency. This holds true as the multigrid-based numerical solver exhibits optimal or near optimal scaling properties.

Subject reader (Ämnesgranskare): Maria Strømme. Supervisor (Handledare): Maya Neytcheva.

Popular Science Summary

Wave motion is a phenomenon that occurs frequently in nature. Being able to describe the behaviour of waves with mathematical precision is therefore important in many respects, ranging from the description of seismic vibrations in an earthquake to the harnessing of waves in a wave power plant. Since the eighteenth century such wave phenomena have been described mathematically by the so-called wave equation, a hyperbolic partial differential equation (PDE). This equation describes how waves propagate and oscillate in time.

One problem with the wave equation is that, like most PDEs, it has analytical solutions only in a very limited number of cases. Idealisations must therefore often be made for an exact solution to be obtained, for example regarding the geometry of the problem. If physical phenomena that are inherently non-linear, such as friction and elasticity, occur in the problem, analytical solutions often become impossible altogether. If the wave equation cannot be solved exactly without simplifications that harm the physical correctness, one must instead rely on numerical analysis. Numerical analysis provides an approximation of the solution, so a trade-off must be made between the error of the solution and the desired problem size. There are several approaches to solving PDEs numerically.

The most classical method is the finite difference method (FDM), which replaces the derivatives of the equation with finite differences. Implementing FDM often results in a linear system of equations, which must be solved in order to obtain the numerical solution. The size of this system varies with the desired accuracy of the chosen finite difference method. If the system is large, the solution method must be chosen with care, since less suitable choices can lead to very long computation times.

This project revolves around implementing and investigating finite differences for the two-dimensional wave equation. A central difference is used for the spatial derivatives of the wave equation, and the less established γ-method is used for the time discretisation. In the γ-method the properties of the discretisation depend on a variable parameter γ. The γ-method is investigated here for several different cases of the wave equation, with respect to stability and rate of convergence. Furthermore, AGMG, the method used here to solve the systems of equations resulting from the finite differences, is studied. The project also includes a basic introduction to the family of numerical solution methods to which AGMG belongs, the so-called multigrid methods.

Contents

1 Introduction
2 Theory
  2.1 The Wave Equation
  2.2 Finite Differences
  2.3 Spatial Discretisation
    2.3.1 Square Domain
  2.4 The γ-method
    2.4.1 Stability of the Time Discretisation
    2.4.2 Error Estimates and Convergence of the Fully Discretised Problem
    2.4.3 Numerical Dispersion
  2.5 Convergence of Eigenvalues
  2.6 Analysis for Variable Wave Propagation Speed
  2.7 Linear Systems of Equations
    2.7.1 The Gradient and Conjugate Gradient Methods
    2.7.2 Preconditioning Techniques
    2.7.3 The Multigrid Method
    2.7.4 The Multigrid Method as a Preconditioner
    2.7.5 Algebraic Multigrid
    2.7.6 Aggregation-Based Algebraic Multigrid
3 Numerical Study
  3.1 Verification of the Discretisation Error
    3.1.1 Constant Coefficients
    3.1.2 Variable Coefficients
    3.1.3 Computational Complexity
    3.1.4 Computation of the Error
  3.2 Implementation
4 Results
  4.1 Convergence of the γ-method
    4.1.1 Constant Wave Speed
    4.1.2 Non-Constant Wave Speed
  4.2 Computational Complexity
  4.3 Scaling of the AGMG-method
5 Discussion
  5.1 Convergence of the γ-method
    5.1.1 Constant Wave Speed
    5.1.2 Non-Constant Wave Speed
  5.2 Computational Complexity
  5.3 Scaling of the AGMG-method
6 Conclusion


1 Introduction

The wave equation is applicable in a wide range of areas in science and engineering. Finding a solution to the equation is thus often of great value.

Analytical solutions are in many cases impossible to obtain, and so numerical methods must be considered. This work focuses on a particular finite difference discretisation scheme that allows for high order accuracy and also for efficient numerical solution of the arising large and sparse linear systems of equations.

This work is limited to solving the wave equation on a two dimensional square domain, where the focus lies on different forms of the wave equation itself. The cases of both homogeneous and heterogeneous domains are considered. The time discretisation is of special interest in this project, since a not so widely used method referred to as the γ-discretisation is employed. The γ-discretisation uses a parameter γ that can be set differently in order to control the accuracy and stability of the discretisation. When choosing γ, various factors are of interest. One is the convergence of the discretisation: how fast will the discretisation error decrease as the grid and time step are increasingly refined? Another such factor is computational efficiency: will the γ-method converge in a reasonable amount of time? Choosing γ > 0 results in an implicit method, meaning a system of equations must be solved.

The purpose of this work is to reach a conclusion regarding why some choices of the parameter γ are more favourable than others, especially in terms of computational efficiency. To reach such a conclusion, a detailed study of how γ affects the stability and convergence of the discretisation and the characteristics of the resulting linear system of equations must be performed. The solver used for the linear systems is AGMG (aggregation-based algebraic multigrid), see [1]. AGMG belongs to a family of numerical methods called multigrid methods.

This project introduces the concept of multigrid at a basic level, as it is the foundation on which AGMG relies.

An overview of other general principles and numerical solution methods, such as the conjugate gradient method and the principle of preconditioning, is also provided, as they are crucial for understanding AGMG. The project further studies the scaling properties of AGMG, as it is known to have optimal or near optimal computational complexity.

2 Theory

2.1 The Wave Equation

The wave equation is a hyperbolic partial differential equation (PDE). With a forcing term it can be written in the following form:

\[ u_{tt} = c^2 \Delta u + f(x, y, z, t). \tag{2.1} \]

Here u(x, y, z, t) is the displacement of the wave, c is the wave propagation speed, Δ is the Laplace operator and f(x, y, z, t) is a forcing term acting on the medium. If the problem is limited to a two dimensional analysis, equation (2.1) can be written

\[ \frac{\partial^2 u}{\partial t^2} = c^2 \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right) + f(x, y, t). \tag{2.2} \]

In (2.1) and (2.2) the medium is homogeneous; thus the wave propagation speed is constant throughout the entire domain. If the medium is non-homogeneous, the wave equation can be generalised to

\[ \frac{\partial^2 u}{\partial t^2} = \nabla \cdot \left( c^2 \nabla u \right) + f(x, y, z, t). \tag{2.3} \]

In two dimensions, with q = c^2(x, y), this becomes

\[ \frac{\partial^2 u}{\partial t^2} = \frac{\partial}{\partial x} \left( q(x, y) \frac{\partial u}{\partial x} \right) + \frac{\partial}{\partial y} \left( q(x, y) \frac{\partial u}{\partial y} \right) + f(x, y, t). \tag{2.4} \]

The wave equation can be solved for different geometries of the domain. The domain is here denoted by Ω and ∂Ω is its boundary. The numerical solution of the wave equation requires initial conditions (IC) for u and its time derivative, as well as boundary conditions (BC). The initial conditions are denoted by u(x, y, 0) = g_0(x, y) and u_t(x, y, 0) = g_1(x, y). The boundary conditions can be of different types. Here we consider boundary conditions of Dirichlet type on the whole boundary ∂Ω, namely u(x, y, t) = h(x, y, t) for (x, y) ∈ ∂Ω.

2.2 Finite Differences

When solving differential equations numerically there are several discretisation methods that can be used. A classical numerical method is the finite difference method (FDM). The idea of FDM is to approximate the derivatives in a differential equation by finite differences. In this way the solution is approximated at discrete points only. Thus, for a given discretisation, no information is provided on the intervals between the points.

2.3 Spatial Discretisation

2.3.1 Square Domain

For the spatial derivatives a central difference approximation is used. For a two dimensional square domain discretised with equidistant mesh-points, the resulting five-point scheme can be written

\[ u_{xx} + u_{yy} \approx \Delta_h u = \frac{1}{h^2} \left[ -4u(x, y, t) + u(x+h, y, t) + u(x-h, y, t) + u(x, y+h, t) + u(x, y-h, t) \right]. \tag{2.5} \]

Figure 1 displays the resulting stencil.

Figure 1: The five-point stencil of the central difference in (2.5).

Figure 2: The spatial discretisation of a square domain with equidistant mesh-points, with interior points labelled U11, U21, U31, U12, U22, U32, U13, U23, U33.

The indices of the mesh-points are defined as in Figure 2. This is a convenient way to index the mesh-points since it matches the indexing of matrix elements. The values of the solution at the mesh-points are then rearranged into a vector so that the finite difference can be expressed as a matrix-vector multiplication. When choosing a columnwise ordering the resulting u-vector becomes

\[ u = \begin{pmatrix} U_{11} & U_{21} & U_{31} & U_{12} & \cdots & U_{33} \end{pmatrix}^T. \tag{2.6} \]

When applying the stencil in Figure 1 to the entire domain, a resulting N × N matrix arises, where N is the number of degrees of freedom in the system. This matrix has the following form:

\[
L =
\begin{pmatrix}
-4 & 1  &        &        & 1      &        &        &        \\
 1 & -4 & 1      &        &        & 1      &        &        \\
   & 1  & -4     & 1      &        &        & 1      &        \\
   &    & \ddots & \ddots & \ddots &        &        & \ddots \\
 1 &    &        & 1      & -4     & 1      &        &        \\
   & 1  &        &        & 1      & -4     & 1      &        \\
   &    & \ddots &        &        & \ddots & \ddots & \ddots
\end{pmatrix} \tag{2.7}
\]

Here each row represents the equation produced by the central difference approximation at one point of the mesh.
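The pentadiagonal structure in (2.7) can be assembled directly from one-dimensional difference matrices using Kronecker products. The following is a minimal MATLAB sketch of that construction (the names n, T and L are ours, the value of n is arbitrary, and the 1/h² scaling of the discrete Laplacian is applied explicitly); the report's own implementation is found in [11].

```matlab
% Assemble the 2D five-point Laplacian on an n-by-n interior grid
% (columnwise ordering, homogeneous Dirichlet boundary).
n = 31;                                 % interior points per direction (assumed value)
h = 1/(n + 1);                          % mesh size on the unit square
e = ones(n, 1);
T = spdiags([e -2*e e], -1:1, n, n);    % 1D second-difference matrix
I = speye(n);
L = (kron(I, T) + kron(T, I)) / h^2;    % discrete Laplacian, N = n^2 unknowns
```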

2.4 The γ-method

For the time derivative in the wave equation we use a discretisation technique called the γ-method. We follow the derivations in [2]. The γ-method is a weighted central difference approximation. To discretise the wave equation in time, the γ-method is applied to equation (2.2) in the following way:

\[ u(x, t+k) - 2u(x, t) + u(x, t-k) = k^2 \left[ \gamma u_{tt}(x, t+k) + (1 - 2\gamma) u_{tt}(x, t) + \gamma u_{tt}(x, t-k) \right], \qquad t = k, 2k, \ldots \tag{2.8} \]

where γ is a variable parameter in the interval [0, 1/2]. Using the discrete Laplace operator described by the matrix in Section 2.3.1, we obtain the fully discretised scheme

\[ \left[ I - \gamma c^2 k^2 \Delta_h \right] \left[ u_h(x, t+k) - 2u_h(x, t) + u_h(x, t-k) \right] = c^2 k^2 \Delta_h u_h(x, t) + k^2 \left[ \gamma f(x, t+k) + (1 - 2\gamma) f(x, t) + \gamma f(x, t-k) \right]. \tag{2.9} \]

Here u_h is the finite difference approximation of u(x, y, t) at a point x in the mesh and t = k, 2k, .... From (2.9) it is clear that choosing γ = 0 makes the method explicit, while for γ > 0 the method is implicit. In the implicit case a linear system of equations must be solved.
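To make the time stepping concrete, a minimal MATLAB sketch of one step of (2.9) is given below. It assumes the matrix L from the previous section, a time step k, and vectors u_old, u, f_old, f_now, f_new holding the solution and forcing at the three time levels; all names are ours and not the report's code.

```matlab
% One step of the gamma-method (2.9): solve for the second time difference d,
% then update the solution.  The matrix A stays fixed over all time steps.
A   = speye(size(L)) - gamma*c^2*k^2*L;
rhs = c^2*k^2*(L*u) + k^2*(gamma*f_new + (1 - 2*gamma)*f_now + gamma*f_old);
d   = A \ rhs;              % replaced by an iterative solver (AGMG) in the report
u_new = 2*u - u_old + d;
```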

2.4.1 Stability of the Time Discretisation

As the solution is advanced step by step in time, the stability of the discretised scheme must be analysed so that initial perturbations are not amplified as t → ∞. The stability analysis is most easily performed for the case with homogeneous Dirichlet boundaries and the forcing term set to zero. It is further assumed that the wave propagation speed is constant, i.e. the displacement of the wave is described by (2.2). Denote the eigenvalues and eigenvectors of −Δ_h by λ_i and v_i(x) respectively, where v_i(x) = 0 on ∂Ω_h. Then the eigenvalue problem of the discretised Laplace operator can be written

\[ -\Delta_h v_i = \lambda_i v_i, \tag{2.10} \]

with v_i(x) = 0 on the boundary. To solve the fully discretised difference scheme (2.9) we now make the following ansatz:

\[ u_h(x, t) = \mu_i^{\,t/k} v_i(x), \qquad t = 0, k, 2k, \ldots \tag{2.11} \]

Here it is assumed that the initial data is an eigenfunction of the discrete Laplace operator, i.e. it can be written as u_0(x) = v_i(x). This is valid since any perturbation of the initial data can be rewritten as a linear combination of the eigenfunctions v_i(x), x ∈ Ω_h. Substituting this into equation (2.9) yields

\[ \left( 1 + \gamma c^2 k^2 \lambda_i \right) \left( \mu_i^2 - 2\mu_i + 1 \right) \mu_i^{\,t/k-1} v_i(x) = -c^2 k^2 \lambda_i \mu_i^{\,t/k} v_i(x), \qquad t = k, 2k, \ldots \tag{2.12} \]

From this we can write

\[ \mu_i^2 - 2\mu_i + 1 = -\mu_i \tau_i, \tag{2.13} \]

where τ_i can be expressed as

\[ \tau_i = \frac{c^2 k^2 \lambda_i}{1 + \gamma c^2 k^2 \lambda_i}. \tag{2.14} \]

Solving this second order equation we get

\[ (\mu_i)_{1,2} = 1 - \frac{1}{2}\tau_i \pm \sqrt{ \left( \frac{1}{2}\tau_i \right)^2 - \tau_i }. \tag{2.15} \]

From equations (2.11) and (2.13) we conclude that the solutions of the ansatz will be bounded if and only if |μ_i| ≤ 1 + ξk, where ξ > 0 is a parameter. This is called the von Neumann stability condition. Its validity is motivated by |μ_i|^{t/k} ≤ e^{ξt} for all t > 0 [2]. Multiplying the two roots of (2.13) yields the value 1. Therefore |μ_i| = 1, i.e. the perturbations are not amplified in time, if and only if τ_i ≤ 4. Using equation (2.14), the following relation is obtained: (1/4) c²k²λ_i ≤ 1 + γc²k²λ_i. From this one can conclude that in order for the perturbations to be bounded, either γ ≥ 1/4 must hold, or, if γ < 1/4,

\[ (ck)^2 < \frac{4}{1 - 4\gamma} \cdot \frac{1}{\max_i \lambda_i} \tag{2.16} \]

must be satisfied. In other words, if γ is chosen greater than or equal to 1/4 the discretised scheme is unconditionally stable; it is A-stable. Otherwise, if γ is less than 1/4, the scheme is only conditionally stable, hence the time step must be chosen sufficiently small. For the unit square problem the eigenvalues are bounded as max_i λ_i ≤ 8/h² [2]. Thus, for γ = 0 the stability condition according to equation (2.16) becomes k ≤ h/(√2 c). For γ = 1/12 the stability condition is k ≤ √3 h/(2c). Why γ = 1/12 is a case of particular interest is described in Section 2.4.2.
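As a quick numerical illustration of these bounds, the largest stable time step can be evaluated directly from (2.16) with max λ ≤ 8/h². The values of c and h below are assumed for the example only.

```matlab
% Largest stable time step on the unit square for two conditionally stable choices.
c = 1;  h = 1/64;                                   % assumed wave speed and mesh size
for gamma = [0, 1/12]
    kmax = sqrt(4/((1 - 4*gamma)*(8/h^2)))/c;       % k <= h/(sqrt(2)*c) resp. sqrt(3)*h/(2*c)
    fprintf('gamma = %6.4f :  k_max = %.4e\n', gamma, kmax);
end
```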

2.4.2 Error Estimates and Convergence of the Fully Discretised Problem

Consider next the behaviour of the time and space discretisation error when using the γ-method in time and central differences in space. To analyse this, one considers the local truncation error. The operator form of (2.9), divided by k², can be expressed as

\[ \mathcal{L}_{h,k} u_h := \frac{1}{k^2} \left[ I - \gamma c^2 k^2 \Delta_h \right] \left[ u_h(x, t+k) - 2u_h(x, t) + u_h(x, t-k) \right] - c^2 \Delta_h u_h(x, t) = \bar{f}(x, t), \tag{2.17} \]

with \bar{f}(x, t) = γ f(x, t+k) + (1 − 2γ) f(x, t) + γ f(x, t−k).

Then the local truncation error reads

\[ \mathcal{L}_{h,k} u - \bar{f}(x, t) = \mathcal{L}_{h,k}(u - u_h). \tag{2.18} \]

Let us now apply Taylor expansion to the central difference approximations, where the solution is assumed to be sufficiently smooth. The truncation error then becomes

\[
\begin{aligned}
\mathcal{L}_{h,k}(u - u_h) ={}& \frac{1}{k^2} \left[ I - \gamma c^2 k^2 \Delta_h \right] \left[ u(x, t+k) - 2u(x, t) + u(x, t-k) \right] \\
& - c^2 \Delta_h u(x, t) - \left[ \gamma f(x, t+k) + (1 - 2\gamma) f(x, t) + \gamma f(x, t-k) \right] \\
={}& \left[ I - \gamma c^2 k^2 \Delta_h \right] \left( u_{tt}(x, t) + \frac{k^2}{12} u_t^{(4)}(x, t) + O(k^4) \right) \\
& - c^2 \Delta_h u(x, t) - f(x, t) - \gamma k^2 f_{tt}(x, t) + O(k^4) \\
={}& u_{tt}(x, t) - c^2 \Delta u(x, t) - f(x, t) + \frac{k^2}{12} u_t^{(4)}(x, t) - \gamma c^2 k^2 (\Delta u)_{tt} - \gamma k^2 f_{tt}(x, t) \\
& - \frac{c^2 h^2}{12} \left( u_x^{(4)} + u_y^{(4)} \right) + O(k^4) + O(h^2 k^2) + O(h^4).
\end{aligned} \tag{2.19}
\]

Using the differential equation u_tt = c²Δu + f, this results in

\[ \mathcal{L}_{h,k}(u - u_h) = \left( \frac{1}{12} - \gamma \right) k^2 u_t^{(4)}(x, t) - \frac{c^2 h^2}{12} \left( u_x^{(4)} + u_y^{(4)} \right) + O(k^4) + O(h^2 k^2) + O(h^4). \tag{2.20} \]

The discretisation error is now defined by e_h := ‖u − u_h‖. Using equation (2.20) we see that the discretisation error depends on the parameter γ in the following way:

(a) ‖u − u_h‖ ≤ C_T (k² + h²), h, k → 0, 0 ≤ t ≤ T,

(b) ‖u − u_h‖ ≤ C_T (k⁴ + h²), h, k → 0, 0 ≤ t ≤ T, if γ = 1/12,

where C_T is some constant, independent of k and h. It is clear that if γ ≠ 1/12 the scheme will have second order convergence in both time and space, similar to the regular central difference. When γ = 1/12, however, the method shows fourth order convergence in time. This allows the time step to be chosen somewhat larger while keeping the same accuracy, as long as the stability criterion is satisfied.

For hyperbolic problems it is necessary to comment on the so-called Courant-Friedrichs-Lewy (CFL) condition, which is important in particular for explicit methods. The CFL condition requires that the numerical domain of dependence must contain the physical domain of dependence for a numerical scheme to be convergent. For explicit schemes, i.e. γ = 0, the necessary CFL condition for the wave equation states that the numerical wave speed h/k of the discrete problem must be at least as large as the physical wave speed c of the continuous problem [2]. For the implicit scheme the CFL condition is automatically satisfied, since the numerical domain of dependence is the whole (x, y)-plane. This is motivated by the fact that the inverse of the operator [I − γc²k²Δ_h] in equation (2.9) is a full matrix, i.e. any element in the solution vector is affected by all elements in the right hand side.

2.4.3 Numerical Dispersion

Part of the error that can be observed when solving the wave equation numerically can be attributed to a phenomenon called numerical dispersion.

Numerical dispersion occurs when the numerical wave propagates with a different speed than the physical one. To describe this, one can introduce the numerical dispersion number d = clω where l is the wave number. An analysis of the numerical dispersion number for the discretised scheme can be performed to see to what degree this affects the error of the solution [2].

2.5 Convergence of Eigenvalues

When solving an eigenvalue problem, it is important to know how the numerical eigenvalues depend on the spatial discretisation parameter h. The eigenvalue problem of the Laplace operator is expressed by

\[ L(u) \equiv \Delta u = \lambda u. \tag{2.21} \]

It is straightforward to see that on the square domain Ω = [0, 1] × [0, 1] with homogeneous Dirichlet BC, u = 0 on ∂Ω, the eigenfunctions of (2.21) are

\[ u = \sin(k\pi x)\sin(l\pi y), \qquad k = 1, 2, \ldots, \; l = 1, 2, \ldots \tag{2.22} \]

For the continuous problem one obtains

\[ \Delta u = -\left[ (k\pi)^2 + (l\pi)^2 \right] \sin(k\pi x)\sin(l\pi y). \tag{2.23} \]

Now we pose the question: after applying the central difference approximation, will the numerical eigenvalues approach the exact ones as h goes to zero? Applying the central difference we obtain

\[
\begin{aligned}
L_h u = \frac{1}{h^2} \big[ &\sin(k\pi(i-1)h)\sin(l\pi jh) + \sin(k\pi(i+1)h)\sin(l\pi jh) \\
 &+ \sin(k\pi ih)\sin(l\pi(j-1)h) + \sin(k\pi ih)\sin(l\pi(j+1)h) \\
 &- 4\sin(k\pi ih)\sin(l\pi jh) \big],
\end{aligned} \tag{2.24}
\]

which can be rewritten in the form

\[ L_h u = \frac{1}{h^2} \sin(k\pi ih)\sin(l\pi jh) \left[ 2\cos(k\pi h) + 2\cos(l\pi h) - 4 \right]. \tag{2.25} \]

With further trigonometric manipulation (2.25) can be expressed as

\[ L_h u = \frac{1}{h^2} \left( -4\sin^2\!\left(\frac{k\pi h}{2}\right) - 4\sin^2\!\left(\frac{l\pi h}{2}\right) \right) \sin(k\pi ih)\sin(l\pi jh), \tag{2.26} \]

and finally

\[ L_h u = -\left( k^2\pi^2 \frac{\sin^2(k\pi h/2)}{(k\pi h/2)^2} + l^2\pi^2 \frac{\sin^2(l\pi h/2)}{(l\pi h/2)^2} \right) \sin(k\pi ih)\sin(l\pi jh). \tag{2.27} \]

Now using the property lim_{x→0} sin(x)/x = 1, we conclude that the numerical eigenvalues approach the analytical ones as h → 0. When applying a Taylor expansion to sin(x)/x, the rate with which this happens can be observed to be O(h²). The numerical eigenvalues thus converge with second order to those of the continuous problem.
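A small numerical check of this convergence is sketched below (our own illustration; the values of k, l and h are chosen arbitrarily). It compares the discrete eigenvalue from (2.26)-(2.27) with the exact one from (2.23).

```matlab
% Discrete vs. exact Laplacian eigenvalues on the unit square.
k = 2;  l = 3;
lam_exact = -(k^2 + l^2)*pi^2;
for h = [1/8, 1/16, 1/32, 1/64]
    lam_h = -(4/h^2)*(sin(k*pi*h/2)^2 + sin(l*pi*h/2)^2);
    fprintf('h = %8.5f   error = %.3e\n', h, abs(lam_h - lam_exact));
end
% The error is reduced by roughly a factor of four each time h is halved, i.e. O(h^2).
```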

2.6 Analysis for Variable Wave Propagation Speed

Now consider the spatial discretisation for the case with non-homogeneous wave speed. As before, equation (2.4) is sampled at each mesh point. We write

\[ \frac{\partial^2}{\partial t^2} u(x_i, y_j, t_n) = \nabla \cdot \left( q(x_i, y_j) \nabla u(x_i, y_j, t_n) \right) + f(x_i, y_j, t_n). \tag{2.28} \]

The term ∇ · (q(x_i, y_j)∇u(x_i, y_j, t_n)) must be discretised in a different manner than the simple Laplace operator in the constant wave speed case. This is done by first discretising the outer derivative. It should be mentioned that using the chain rule is not recommended, since it leads to a more difficult discretisation and, at some point, a loss of physical interpretation [3]. We write

\[ \phi = q(x, y) \nabla u(x, y, t). \tag{2.29} \]

Now a central difference is applied to the divergence of φ at the point (x, y) = (x_i, y_j):

\[ \nabla \cdot \phi \approx \frac{\phi_{i+1/2,\,j} - \phi_{i-1/2,\,j}}{h} + \frac{\phi_{j+1/2,\,i} - \phi_{j-1/2,\,i}}{h}, \tag{2.30} \]

with equidistant mesh points. The outer derivative is then tackled by

\[ \phi_{i+1/2,\,j} \approx q_{i+1/2,\,j} \, \frac{u^n_{i+1,j} - u^n_{i,j}}{h}. \tag{2.31} \]

The same methodology is then applied to the three other points. Finally we can write

\[ \frac{\partial^2}{\partial t^2} u(x_i, y_j, t_n) \approx \frac{1}{h^2} \Big[ q_{i+1/2,\,j} (u^n_{i+1,j} - u^n_{i,j}) - q_{i-1/2,\,j} (u^n_{i,j} - u^n_{i-1,j}) + q_{j+1/2,\,i} (u^n_{j+1,i} - u^n_{j,i}) - q_{j-1/2,\,i} (u^n_{j,i} - u^n_{j-1,i}) \Big]. \tag{2.32} \]

This scheme leads to a matrix of the same form as discussed in Section 2.3.1, but with different elements on the diagonals. The γ-discretisation can then be applied for the time discretisation as discussed in the previous sections. Going back to Section 2.4.1, one can note that the derived stability condition depends on the wave propagation speed, so this condition must now be revised. For example, it was derived that for γ = 1/12 the stability condition is k ≤ √3 h/(2c). It can be concluded that the time step is now limited by the maximal value of the wave speed coefficient, c̄ = √(max_{x,y∈Ω} q(x, y)). For γ = 1/12 the stability condition thus becomes k ≤ √3 h/(2c̄).
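A sketch of how the variable-coefficient stencil (2.32) can be applied to a grid function is given below, with q supplied as a function handle and evaluated halfway between neighbouring mesh points. The function name and layout are our own illustration; the report's assembled-matrix version is found in [11].

```matlab
function V = apply_variable_laplacian(U, q, h, x, y)
% Apply the stencil (2.32) to the interior of the grid function U, i.e.
% approximate div(q*grad u) with q evaluated at the midpoints between nodes.
    [nx, ny] = size(U);
    V = zeros(nx, ny);
    for i = 2:nx-1
        for j = 2:ny-1
            qe = q(x(i)+h/2, y(j));  qw = q(x(i)-h/2, y(j));   % east/west midpoints
            qn = q(x(i), y(j)+h/2);  qs = q(x(i), y(j)-h/2);   % north/south midpoints
            V(i,j) = ( qe*(U(i+1,j) - U(i,j)) - qw*(U(i,j) - U(i-1,j)) ...
                     + qn*(U(i,j+1) - U(i,j)) - qs*(U(i,j) - U(i,j-1)) ) / h^2;
        end
    end
end
```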

2.7 Linear Systems of Equations

When γ ≠ 0 is used in equation (2.9), i.e. in the implicit case, a linear system of equations arises that has to be solved at every time step. As refinement of the discretisation leads to an increasing number of mesh-points, the size of the system grows: the better the approximation of the PDE, the larger the system becomes. Finite difference methods generally produce sparse systems. In our case the matrix is five-diagonal and also symmetric and positive definite. Using the γ-discretisation discussed in Section 2.4, a system must be solved at each time step, so for computational efficiency a suitable solution method must be used. A general linear system of equations in matrix form reads

\[ Ax = b, \tag{2.33} \]

where x is the solution vector.

A plethora of methods have been developed for solving such linear systems of equations. For sparse systems of large dimensions, the so-called direct solution methods are unfavourable due to their relatively large computational and memory demands. Examples of such direct methods are the classical Gaussian elimination techniques, including LU factorisation. For large sparse systems, iterative solvers are preferred.

2.7.1 The Gradient and Conjugate Gradient Methods

Since the matrix A ∈ R^{m×m} in (2.33) is symmetric and positive definite, a whole class of optimisation methods can be used to obtain the solution vector x. Among these is the method of steepest descent. It is based on the observation that the problem of solving Ax = b is equivalent to minimising the function

\[ F(x) = \frac{1}{2} x^T A x - b^T x, \qquad x \in \mathbb{R}^m, \tag{2.34} \]

since a stationary point is the x satisfying

\[ \nabla F(x) = Ax - b = 0. \]

To minimise (2.34), a method composed of iterative steps can be constructed. First a search direction d_k is defined, and then F(x_k + α d_k) is minimised with the solution α_k. The solution of the next iteration can then be defined as

\[ x_{k+1} = x_k + \alpha_k d_k. \tag{2.35} \]

With F defined as in (2.34), the exact α_k can be computed directly as

\[ \alpha_k = - \frac{g_k^T d_k}{d_k^T A d_k}, \tag{2.36} \]

where g_k is the gradient (residual) vector, g_k = A x_k - b = ∇F(x_k). In each iteration the approximation moves towards the minimum in the direction opposite to the gradient, d_k = -g_k; the method is therefore referred to as the gradient method or the method of steepest descent [4].

However, as the method of steepest descent follows the gradient at each iteration, the resulting path shows a "zigzag" behaviour, as illustrated in Figure 3.

Figure 3: The method of steepest descent; the iterates, starting from x_0, zigzag across the contour lines where F is constant.

In practice the method often requires too many iterations to achieve an acceptable tolerance. Used on sparse problems, the convergence rate is often as poor as for Jacobi's method or similar methods [5]. The problem lies in the fact that the current search direction and the next can be almost parallel with respect to the inner product induced by the matrix A, defined as <x, y>_A := x^T A y. This is a so-called energy inner product. The method is optimised when the search directions d_k are orthogonal with respect to this inner product; such vectors are called conjugate. A method can now be formulated where the next search direction is constructed as a linear combination of the current residual and all the previous search directions. Since all the search directions are forced to be conjugate, the method is referred to as the conjugate gradient method (CG). The conjugate gradient method belongs to a group of numerical methods called Krylov subspace methods [6] and can be seen as an accelerated version of the gradient method [5]. Favourably, the conjugate gradient method avoids the "zigzag" behaviour of the gradient method.

An important feature of CG is that in theory it is an exact solver, since it solves the system in a finite number of iterations. However, due to rounding and floating point errors, the search directions will not be exactly orthogonal and a truly exact solution is in practice never achieved. If the system is large, CG is more efficient when used with a stopping criterion on the residual. Algorithms for both the standard and the preconditioned conjugate gradient methods are displayed in Algorithm 1. The preconditioned conjugate gradient method is commonly abbreviated PCG, and the principles of preconditioners are discussed in Section 2.7.2.

Unpreconditioned CG:
    x = x0
    r = A*x - b
    delta0 = (r, r)
    g = -r
    Repeat:
        h = A*g
        tau = delta0/(g, h)
        x = x + tau*g
        r = r + tau*h
        delta1 = (r, r)
        if delta1 <= eps, stop
        beta = delta1/delta0
        g = -r + beta*g
        delta0 = delta1

Preconditioned CG:
    x = x0
    r = A*x - b;  solve C*h = r
    delta0 = (r, h)
    g = -h
    Repeat:
        h = A*g
        tau = delta0/(g, h)
        x = x + tau*g
        r = r + tau*h;  solve C*h = r
        delta1 = (r, h)
        if delta1 <= eps, stop
        beta = delta1/delta0
        g = -h + beta*g
        delta0 = delta1

Algorithm 1: Principle algorithm of the unpreconditioned and preconditioned conjugate gradient methods.
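In MATLAB the conjugate gradient method is available as the built-in function pcg. A minimal usage sketch for the systems of Section 2.4, assuming the matrix A and right hand side rhs from the time-stepping sketch above:

```matlab
% Solve A*d = rhs with the conjugate gradient method.
tol = 1e-8;  maxit = 200;
[d, flag, relres, iter] = pcg(A, rhs, tol, maxit);
```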

2.7.2 Preconditioning Techniques

Iterative methods are very attractive, particularly for sparse linear systems, as they have an optimal computational complexity per iteration: each iteration consists in general of one matrix-vector multiplication, a few vector updates and some scalar products. Thus, for sparse linear systems the computational complexity per iteration is linear in the number of degrees of freedom.

Iterative methods, however, risk being inefficient if the matrix of the system is ill-conditioned. For an ill-conditioned matrix the number of iterations may become large and rounding errors may accumulate, possibly resulting in stagnation or even divergence of the method. How well the matrix is conditioned depends on the distribution of its eigenvalues. This is usually measured by the so-called condition number [7]. For symmetric positive definite matrices, as in the target problem, the condition number is defined simply as the ratio κ(A) = λ_max/λ_min. For the matrices at hand, κ(A) = O(h^{-2}). Thus, when we decrease h to reduce the discretisation error, we increase the condition number of the corresponding matrix. Therefore, to mitigate the high condition number, so-called preconditioning techniques are used. If we choose the preconditioner C = A, then C^{-1}A is the identity matrix and the system (2.33) is solved in one iteration. This, however, is infeasible, since constructing A^{-1} costs more than solving the system with it. Therefore, a huge scientific effort has been put into constructing good preconditioners that satisfy the following conditions:

• C^{-1}A has a smaller condition number than A.

• C is easy and cheap to construct.

• Solving systems with C is much cheaper than with A.

• The construction and solution with C is easily parallelisable.

Clearly, achieving all four goals is not easy, and some of them are somewhat contradictory. Still, for some classes of matrices such preconditioners exist, and they are referred to as being of "optimal order". Optimal order implies two things. First, κ(C^{-1}A) = O(1), meaning it does not depend on the discretisation parameters k and h; the iterative method then converges in a number of iterations independent of the number of degrees of freedom. Second, the cost to apply the preconditioner is linear in the number of degrees of freedom. Preconditioners that possess the above properties include certain multigrid, multilevel and domain decomposition techniques.

In general the preconditioner can be applied to a linear system in several ways. Left preconditioning reads

\[ C^{-1} A x = C^{-1} b, \tag{2.37} \]

where C is the preconditioner. Right preconditioning can generally be expressed as

\[ A C^{-1} C x = b. \tag{2.38} \]

Here one first solves A C^{-1} y = b and then C x = y.

One of the simplest preconditioning methods is the incomplete LU factorisation (ILU). The standard full LU factorisation computes lower and upper triangular matrices such that A = LU. The incomplete LU factorisation instead computes matrices L and U such that A can be written as

\[ A = LU - R. \tag{2.39} \]

Here R is a residual matrix, so LU is only an approximation of the matrix A. ILU can then be used as a preconditioner that improves the efficiency of other iterative solvers, such as the aforementioned conjugate gradient method. The incomplete LU factorisation is relatively easy and inexpensive to compute, but may still be inefficient, as it often requires many iterations. However, there are alternative versions of ILU that can remedy this to some degree [7].
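For the symmetric positive definite matrices considered in this work, the symmetric counterpart of ILU is the incomplete Cholesky factorisation, available in MATLAB as ichol. A minimal sketch of using it to precondition CG (variable names as in the earlier sketches):

```matlab
% Incomplete Cholesky factorisation A ~ Lc*Lc' used as a preconditioner for CG.
Lc = ichol(A);                                       % zero-fill incomplete factor
[d, flag, relres, iter] = pcg(A, rhs, tol, maxit, Lc, Lc');
```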

2.7.3 The Multigrid Method

Multigrid is a family of numerical methods that was first developed for solving the linear systems of equations arising from discretised elliptic PDEs, where the solution is defined on a specific mesh. The multigrid idea is based on two principles: error smoothing and coarse grid correction. The error smoothing is performed using an iterative method such as the Jacobi method or the Gauss-Seidel method. The profile of the error might at some levels be very sharp; the important feature of these iterative methods is that the error is smoothed out at each iteration. Coarse grid correction (CGC) uses the principle that a quantity which is smooth on one grid can, without essential loss of information, be approximated on a less refined grid. In the multigrid algorithm this principle is applied to the error, as it is a so-called correction quantity.

The theory of multigrid is largely based on Fourier analysis, and so the above mentioned principles of error smoothing and CGC can easily be illustrated in such a context. This is done by applying a Fourier expansion to the error:

\[ e_h(x, y) = \sum_{k,l=1}^{n-1} \alpha_{k,l} \sin(k\pi x) \sin(l\pi y), \tag{2.40} \]

since φ^{k,l}_h = sin(kπx) sin(lπy) are the eigenfunctions of the discrete Laplace operator. The fact that the iterative methods have a smoothing effect can be interpreted as follows: the high frequency components of the error (i.e. when k, l are large) become small, whilst the low frequency components (i.e. when k, l are small) remain more or less unaltered. The concept of high and low frequencies of the error is also applicable to the CGC principle. The low frequencies of the fine grid will be visible as high frequencies on the coarse grid. The high frequencies, however, will not be visible on the coarse grid, as they coincide with the low frequencies, in what is referred to as an aliasing phenomenon [8].

So far only one step of coarsening has been considered. This is referred to as "two-grid", but the method can reach higher efficiency when more than two grids are used. The alternation between different meshes allows us to introduce the multigrid algorithm cycles. Figure 4 displays the so-called V- and W-cycles, where the solution is produced using four levels of different grid refinements. The classical implementation of multigrid uses predetermined grid sizes in the operating cycle; this is often referred to as geometric multigrid. At the lowest level, direct methods are often used, since at such coarse grids they do not necessarily decrease the computational efficiency. The direct solver is represented by the red dots in Figure 4.

Figure 4: Four level V- and W-cycles, where level 1 is the finest and level 4 is the coarsest grid.

The multigrid cycle can be summarised heuristically as: Gauss-Seidel or Jacobi (or similar methods) are iterated on different grid levels as they decrease the corresponding high frequency errors. As the process covers all frequencies the overall error will reduce rapidly [8]. The principal Multigrid algorithm is displayed in Algorithm 2.

Procedure MG:  u^(k) <- MG(u^(k), f^(k), k, {nu_j^(k)}, j = 1, ..., k);
    if k = 0 then
        solve A^(0) u^(0) = f^(0) exactly or by smoothing,
    else
        u^(k) <- S_1^(k)(u^(k), f^(k)),        perform s1 pre-smoothing steps,
        r^(k) = A^(k) u^(k) - f^(k),           form the current residual,
        r^(k-1) <- R r^(k),                    restrict the residual to the next coarser grid,
        e^(k-1) <- MG(0, r^(k-1), k-1, {nu_j^(k-1)}, j = 1, ..., k-1),
        e^(k) <- P e^(k-1),                    prolong the error from the next coarser to the current grid,
        u^(k) = u^(k) - e^(k),                 update the solution,
        u^(k) <- S_2^(k)(u^(k), f^(k)),        perform s2 post-smoothing steps.
    endif
end Procedure MG

Algorithm 2: Principle algorithm of multigrid.
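To make the recursion concrete, the following MATLAB sketch shows a single two-grid correction step, i.e. Algorithm 2 with only one coarse level and the coarse problem solved exactly. It is our own illustration (damped Jacobi as smoother, generic restriction and prolongation matrices R and P), not the report's code.

```matlab
function u = two_grid_step(A, b, u, Ac, R, P, s)
% One two-grid cycle: pre-smoothing, coarse-grid correction, post-smoothing.
%   A, b, u : fine-grid matrix, right hand side and current iterate
%   Ac      : coarse-grid matrix;  R, P : restriction/prolongation operators
%   s       : number of damped-Jacobi smoothing steps
    d = full(diag(A));  omega = 2/3;
    for i = 1:s, u = u + omega*(b - A*u)./d; end    % pre-smoothing
    r  = b - A*u;                                   % current residual
    ec = Ac \ (R*r);                                % coarse-grid correction (exact solve)
    u  = u + P*ec;                                  % prolong and correct
    for i = 1:s, u = u + omega*(b - A*u)./d; end    % post-smoothing
end
```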

2.7.4 The Multigrid Method as a Preconditioner

Multigrid can be used as a preconditioner to increase the efficiency of an iterative solver when dealing with large sparse linear systems of equations. Here a multigrid iteration operator in matrix form is used in a similar manner as discussed in Section 2.7.2. Multigrid preconditioning is widely used to accelerate the conjugate gradient method [9].

2.7.5 Algebraic Multigrid

Requiring a sequence of geometric meshes is a disadvantage in various cases where important features of the problem are not resolved on coarse meshes, or simply when such meshes are not available. In such cases the so-called algebraic multigrid methods are preferable in comparison to the previously discussed geometric multigrid. Algebraic multigrid uses the same ingredients as geometric multigrid, but does so without meshes, to achieve the same efficiency.

2.7.6 Aggregation-Based Algebraic Multigrid

AGMG is an implementation of the algebraic multigrid method. In AGMG the algebraic multigrid preconditioner is used to accelerate the conjugate gradient method (PCG) [10]. AGMG does not operate on the regular multigrid V- or W-cycle, but on a so-called K-cycle (also known as the "non-linear algebraic multilevel iteration (AMLI) cycle") and can be seen as a "W-cycle with Krylov acceleration at all intermediate levels" [10].

3 Numerical Study

The method is tested using MATLAB. For the accuracy studies the backslash operator (\) in MATLAB, which is a direct solver, is used to solve the linear systems. In the performance comparison AGMG is used instead. For this project we obtained an academic licence for the sequential version of the AGMG software, which can be called as a MATLAB function. All the tests are carried out on the square domain Ω = [0, 1] × [0, 1]. Since the γ-method uses three points in time, the first step needs to be computed with a different method. This is done using Euler's implicit method in most cases. However, for the eigenvalue problem a more accurate method is required to get the correct order of accuracy for the γ-method; consequently the trapezoidal rule is used. For all of the tests the boundary condition is u = 0 for (x, y) ∈ ∂Ω.

3.1 Verification of the Discretisation Error

3.1.1 Constant Coefficients

To verify the implementation of the method, an analytic solution is needed. One option is to use the regular eigenfunction solutions to the wave equation without a forcing term. The disadvantage of this is mainly that the effect of the forcing term is not tested, but also that it can be difficult to isolate the temporal error from the spatial one, or vice versa. This is due to the conditional stability for some values of γ, which makes it difficult to use a time step that differs enough from the spatial step to see the convergence of the different errors.

The other option is to use a manufactured solution. Some function that fits the boundary conditions is chosen as the solution. It is then substituted into the equation, and a forcing term is picked so that the chosen function solves the equation. If the function used is a polynomial of low enough degree, it can be represented exactly by a finite Taylor series, meaning that a finite difference method can solve the equation exactly. This can be utilised to isolate the temporal or spatial error.

Three different functions are used to test the method. First, a polynomial of sixth order in time and second order in space,

\[
\begin{aligned}
u(x, y, t) &= (x^2 - x)(y^2 - y)(t^6 + 1), \\
f(x, y, t) &= 30 t^4 (x^2 - x)(y^2 - y) - c^2 \left[ 2(y^2 - y)(t^6 + 1) + 2(x^2 - x)(t^6 + 1) \right],
\end{aligned} \tag{3.1}
\]

so that there is only a temporal error. Then, a polynomial of first order in time and fourth order in space,

\[
\begin{aligned}
u(x, y, t) &= (x^4 - x)(y^4 - y)(t + 1), \\
f(x, y, t) &= -c^2 \left[ 12 x^2 (y^4 - y)(t + 1) + 12 y^2 (x^4 - x)(t + 1) \right],
\end{aligned} \tag{3.2}
\]

so that there is only a spatial error. Lastly, the simplest eigenfunction solution,

\[ u(x, y, t) = \sin(\pi x)\sin(\pi y)\cos\!\left( \sqrt{2}\, c\pi t \right). \tag{3.3} \]

For all of the solutions, u(x, y, 0) is used as the initial condition. The time step and mesh size are set equal to get fair comparisons. The wave speed is set to c = 0.4 to achieve numerical stability in all cases.

3.1.2 Variable Coefficients

When considering the case with variable coefficients, it is the spatial discretisation that is being tested. Thus, for convenience, γ = 1/2 is used, as it grants unconditional stability when testing the convergence. Three different functions describing the wave speed are examined. First, a continuous function with a rather gentle gradient is used:

\[ q(x, y) = \arctan x + \arctan y. \tag{3.4} \]

Figure 5: How the wave speed varies over the domain with q(x, y) as in equation (3.4).

To find an exact solution, the same approach as in the constant wave speed case is used. A function that fits the boundary condition is chosen as the solution, and a forcing term is picked so that the function solves the equation. In this case a polynomial solution is picked, of high enough order that the finite differences do not solve it exactly:

\[
\begin{aligned}
u(x, y, t) &= (x^4 - x)(y^4 - y)(t^4 + 1), \\
f(x, y, t) &= u_{tt} - \nabla \cdot (q(x, y) \nabla u).
\end{aligned} \tag{3.5}
\]
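The forcing term in (3.5) is, as noted in the next paragraph, generated symbolically in MATLAB. A minimal sketch of that idea, assuming the Symbolic Math Toolbox is available (variable names are ours, not the report's):

```matlab
% Manufactured forcing term for (3.5): f = u_tt - div(q*grad(u)).
syms x y t
u = (x^4 - x)*(y^4 - y)*(t^4 + 1);
q = atan(x) + atan(y);
f = diff(u, t, 2) - divergence(q*gradient(u, [x y]), [x y]);
f_handle = matlabFunction(simplify(f), 'Vars', {x, y, t});   % for numeric evaluation
```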

The source term f(x, y, t) is computed symbolically using the symbolic toolbox in MATLAB. The initial condition is the exact solution evaluated at t = 0. The second case tested is still continuous, but with a much steeper gradient:

\[ q(x, y) = \arctan\big( w(x - 0.25) \big) - \arctan\big( w(-0.25) \big), \tag{3.6} \]

where w is a parameter that controls how steep the function is. In our tests it is set to w = 50.

Figure 6: How the wave speed varies over the domain with q(x, y) as in equation (3.6).

The exact solution is manufactured in the same way as in the previous case, and is hence the same as in (3.5), but f(x, y, t) is obviously different since q(x, y) is different. The initial condition is the exact solution evaluated at t = 0. The third case is a domain with a discrete jump in the coefficients:

\[ q(x, y) = \begin{cases} 1, & x < 0.5, \\ 3, & x \geq 0.5. \end{cases} \tag{3.7} \]

Figure 7: How the wave speed varies over the domain with q(x, y) as in equation (3.7).

In this case there is no known exact solution; instead a very precise numerical solution is used to compute the error. The initial condition is set as a Gaussian bell function,

\[ u(x, y, 0) = \exp\left( -\frac{(x - 0.8)^2 + (y - 0.5)^2}{0.01} \right), \]

with the source term set to zero.

3.1.3 Computational Complexity

To study the computational complexity of the γ-method with AGMG, the variable coefficient case is studied with q(x, y) as in equation (3.4). A tolerance level is set for the absolute error of the solution in the final time step.

The time step k and the mesh-size h are then adjusted to achieve stability, according to the criteria given in (2.16), and to satisfy the given tolerance.

Since AGMG uses a direct solver for smaller systems, and we wanted to test the scaling of the actual AGMG-method, the tolerance needed to be set quite low. This results in fairly large systems.

The tolerance is first set to 5e-5 and then to 1e-6. The total execution time and the number of iterations required by AGMG at every time step are recorded. For the first tolerance the final time is set to T = 2, and with the finer tolerance the final time is T = 1. In this way the number of iterations in time is kept the same, so that if the number of degrees of freedom in the matrix is doubled, the total execution time should be about doubled too. The tests are done using γ = 0, 1/12, 1/2.

Then, AGMG is tested by itself using the matrix corresponding to the constant coefficient case with γ = 1/2, k = h and c = 1, but with a random right hand side. Solution time and number of iterations are recorded. The test is done on fairly large matrices to see if the method behaves optimally, i.e. that the number of iterations stays the same for the different problem sizes, and that the total execution time increases linearly with the problem size.

3.1.4 Computation of the Error

To compute the error we use the L2 norm of the absolute and relative error in the final time step. The absolute error is defined as

\[ E_{abs} = \sqrt{ h^2 \sum_{i=1}^{N} \sum_{j=1}^{N} \left( u^h_{i,j} - u_{i,j} \right)^2 } \]

and the relative error as

\[ E_{rel} = \sqrt{ \frac{ \sum_{i=1}^{N} \sum_{j=1}^{N} \left( u^h_{i,j} - u_{i,j} \right)^2 }{ \sum_{i=1}^{N} \sum_{j=1}^{N} u_{i,j}^2 } }. \]
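A short sketch of how these errors, and the observed convergence rates reported in Section 4, can be computed (assuming Uh and Uex hold the numerical and exact solutions on the grid at the final time; the names are ours):

```matlab
% L2 errors at the final time step.
E     = Uh - Uex;
E_abs = sqrt(h^2 * sum(E(:).^2));
E_rel = sqrt(sum(E(:).^2) / sum(Uex(:).^2));
% Observed convergence rate from two runs with mesh sizes h and h/2:
%   rate = log2(E_abs_h / E_abs_h2)   (the "Factor" and "Rate" columns in Section 4).
```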

3.2 Implementation

The boundary conditions are implemented in three steps. First the effect on the right hand side from the boundary points is taken into account at the three time levels. The second step is changing the matrix so that the boundary points in the solution are equal to the boundary points in the right hand side. The final step is to force the boundary points in the right hand side to have the value desired in the solution. This is a classical methodology when implementing boundary conditions for finite difference problems. The implementation can deal with homogeneous and non-homogeneous Dirichlet boundary conditions, see ”main wave test.m” and ”dirichlet.m” in [11].
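A generic sketch of the second and third steps described above, i.e. modifying the matrix and the right hand side so that the boundary unknowns keep their prescribed values (our own illustration; the report's actual routines are in [11]):

```matlab
% Impose Dirichlet values g at the boundary unknowns listed in the index vector bnd.
A(bnd, :) = 0;
A(sub2ind(size(A), bnd, bnd)) = 1;    % unit diagonal on the boundary rows
rhs(bnd)  = g(bnd);                   % prescribed boundary values in the right hand side
```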

The variable coefficient matrix is implemented with a continuously defined function for the coefficients. Instead of averaging the values at the mesh-points, the values of the function between the points are used; this is, however, easily changed. The coefficients corresponding to the different derivatives are placed in their respective off-diagonals and in the main diagonal, see "variable coeff matrix.m" in [11].

4 Results

4.1 Convergence of the γ-method

4.1.1 Constant Wave Speed

When looking at the temporal error using the case in (3.1), the following results are obtained.

Table 1: Convergence for the polynomial solution (3.1) with γ = 0

  DOF      k = h       Absolute error   Factor   Rate
  289      0.0625      1.69e-04         -        -
  1089     0.03125     4.09e-05         4.14     2.05
  4225     0.015625    1.00e-05         4.07     2.03
  16641    0.0078125   2.49e-06         4.04     2.01
  66049    0.0039063   6.19e-07         4.02     2.01

Table 2: Convergence for the polynomial solution (3.1) with γ = 1/12

  DOF      k = h       Absolute error   Factor   Rate
  289      0.0625      6.93e-07         -        -
  1089     0.03125     5.02e-08         13.8     3.79
  4225     0.015625    3.37e-09         14.9     3.90
  16641    0.0078125   2.18e-10         15.4     3.95
  66049    0.0039063   1.39e-11         15.7     3.97

Table 3: Convergence for the polynomial solution (3.1) with γ = 1/2

  DOF      k = h       Absolute error   Factor   Rate
  289      0.0625      7.37e-04         -        -
  1089     0.03125     1.90e-04         3.88     1.95
  4225     0.015625    4.84e-05         3.93     1.97
  16641    0.0078125   1.22e-05         3.96     1.99
  66049    0.0039063   3.07e-06         3.98     1.99

For the cases γ = 1/2 and γ = 0 the convergence approaches second order when h decreases, see Tables 1 and 3. For γ = 1/12 fourth order convergence is observed, see Table 2.

Then, looking at the error from the space discretisation using equation (3.2), the following results are obtained.

Table 4: Convergence for the polynomial solution (3.2) with γ = 0

  DOF      k = h       Absolute error   Factor   Rate
  289      0.0625      9.00e-04         -        -
  1089     0.03125     2.33e-04         3.87     1.95
  4225     0.015625    5.91e-05         3.93     1.98
  16641    0.0078125   1.49e-05         3.97     1.99
  66049    0.0039063   3.74e-06         3.98     1.99

Table 5: Convergence for the polynomial solution (3.2) with γ = 1/12

  DOF      k = h       Absolute error   Factor   Rate
  289      0.0625      9.00e-04         -        -
  1089     0.03125     2.33e-04         3.87     1.95
  4225     0.015625    5.91e-05         3.93     1.98
  16641    0.0078125   1.49e-05         3.97     1.99
  66049    0.0039063   3.74e-06         3.98     1.99

Table 6: Convergence for the polynomial solution (3.2) with γ = 1/2

  DOF      k = h       Absolute error   Factor   Rate
  289      0.0625      9.00e-04         -        -
  1089     0.03125     2.33e-04         3.87     1.95
  4225     0.015625    5.91e-05         3.93     1.98
  16641    0.0078125   1.49e-05         3.97     1.99
  66049    0.0039063   3.74e-06         3.98     1.99

Here in Tables 4-6 second order convergence is observed for all the cases.

Then, looking at the error from the eigensolution in equation (3.3), the following results are obtained.

Table 7: Convergence for the eigenfunction solution (3.3) with γ = 0

  DOF      k = h       Absolute error   Factor   Rate
  289      0.0625      7.62e-04         -        -
  1089     0.03125     1.93e-04         3.95     1.98
  4225     0.015625    4.84e-05         3.98     1.99
  16641    0.0078125   1.21e-05         3.99     2
  66049    0.0039063   3.04e-06         4        2

Table 8: Convergence for the eigenfunction solution (3.3) with γ = 1/12

  DOF      k = h       Absolute error   Factor   Rate
  289      0.0625      1.09e-03         -        -
  1089     0.03125     2.80e-04         3.90     1.96
  4225     0.015625    7.08e-05         3.95     1.98
  16641    0.0078125   1.78e-05         3.98     1.99
  66049    0.0039063   4.46e-06         3.99     2

Table 9: Convergence for the eigenfunction solution (3.3) with γ = 1/2

  DOF      k = h       Absolute error   Factor   Rate
  289      0.0625      2.71e-03         -        -
  1089     0.03125     7.14e-04         3.8      1.92
  4225     0.015625    1.82e-04         3.91     1.97
  16641    0.0078125   4.61e-05         3.96     1.99
  66049    0.0039063   1.16e-05         3.98     1.99

Here in Tables 7-9 the convergence approaches second order as h decreases.

4.1.2 Non-Constant Wave Speed

Table 10: Convergence for the wave speed with a gentle gradient, γ = 1/2

  DOF      k = h       Absolute error   Factor   Rate
  289      0.0625      5.28e-03         -        -
  1089     0.03125     1.36e-03         3.88     1.95
  4225     0.015625    3.46e-04         3.94     1.98
  16641    0.0078125   8.72e-05         3.97     1.99
  66049    0.0039063   2.19e-05         3.98     1.99

In Table 10 the result from the solution presented in (3.5) is shown, where the wave speed is the arctan function presented in (3.4). Here the convergence approaches second order as h decreases.

Table 11: Convergence for the wave speed with a steep gradient, γ = 1/2

  DOF      k = h       Absolute error   Factor   Rate
  289      0.0625      2.67e-02         -        -
  1089     0.03125     2.94e-03         9.08     3.18
  4225     0.015625    1.12e-04         26.2     4.71
  16641    0.0078125   3.58e-05         3.13     1.64
  66049    0.0039063   9.03e-06         3.96     1.99
  263169   0.0019531   2.27e-06         3.99     2.00

In Table 11 the result from the solution presented in equation (3.5) is shown, where the wave speed is the steeper, almost vertical, arctan function presented in equation (3.6). Here the convergence approaches second order as h decreases, although at first the convergence is a bit faster.

Table 12: Convergence with a discrete jump in the wave speed, γ = 1/2

  DOF      k = h       Absolute error   Factor   Rate
  289      0.0625      1.07e-01         -        -
  1089     0.03125     1.04e-01         1.03     0.0392
  4225     0.015625    6.79e-02         1.54     0.620
  16641    0.0078125   3.51e-02         1.93     0.951

In Table 12 the result from the case with a discrete jump in the wave speed, see equation (3.7), is shown. Here the convergence approaches first order as h decreases.

4.2 Computational Complexity

Table 13: Solution time with tolerance 5e-5 and final time T = 2

  γ      DOF     k        Absolute error   Time (s)   Iterations in AGMG
  1/2    66049   k = h    1.16e-05         117        6
  1/12   66049   k = h/2  1.21e-05         221        2
  0      66049   k = h/2  1.21e-05         199        -

Table 14: Solution time with tolerance 1e-6 and final time T = 1

  γ      DOF      k        Absolute error   Time (s)   Iterations in AGMG
  1/2    263169   k = h    4.44e-07         496        6
  1/12   263169   k = h/2  7.47e-07         964        4
  0      263169   k = h/2  8.69e-07         832        -

In Tables 13 and 14 the results of the computational complexity test are shown. The parameters were set as described in Section 3.1.3. The total solution time seems to increase linearly with the problem size, and γ = 1/2 is the fastest in both cases.

4.3 Scaling of the AGMG-method

Table 15: Scaling of the AGMG method with tolerance 1e-6

  DOF        Total time (s)   Solution time (s)   Setup time (s)   Iterations
  263169     7.50e-02         6.20e-02            1.30e-02         6
  1050625    4.56e-01         2.06e-01            2.50e-01         6
  4198401    1.09             8.95e-01            1.14             6
  16785409   11.41            6.37                5.04             6

In Table 15 the result from the test of the scaling of AGMG is presented, with parameters given in Section 3.1.3. The number of iterations stays the same independent of the number of degrees of freedom and the total solution time increases about one order of magnitude at every increase in size.

5 Discussion

5.1 Convergence of the γ-method

5.1.1 Constant Wave Speed

For the three different solutions tested in the constant coefficient case, the results are consistent with the theory in Sections 2.4.2 and 2.5. For the eigensolutions and the polynomial with an error bounded by the space discretisation, second order convergence is observed. For the polynomial solution that isolates the error from the time discretisation, (3.1), there is second order convergence for values of γ ≠ 1/12. For γ = 1/12 we observe fourth order convergence. Unfortunately, the fourth order convergence will be difficult to observe in a "real-world" scenario, since the stability criterion forces the size of the discretisation step to be about the same in space and time, see (2.16).

5.1.2 Non-Constant Wave Speed

The case when the medium is non-homogeneous has several interesting physical interpretations. A wave propagation speed c(x, y) described by a continuous function (see Figure 5) can to some degree be interpreted as varying elasticity in the material the wave propagates through. The case where c(x, y) is a discontinuous function (see Figure 7) represents propagation of waves through two different materials. This is of interest, for example, when studying seismic phenomena. When the vibrations caused by an earthquake reverberate through the surroundings, different materials will affect the vibrations in different ways. Such inter-material wave transfer can be modelled by discontinuities in a similar manner as is done in this work.

In the first variable coefficient case, see (3.4), the solution exhibits second order convergence, Table 10. This is expected according to the theory, since the spatial discretisation should be second order even when the wave coefficient is not constant. Furthermore, the temporal discretisation is the same as in the constant coefficient case, so the accuracy in time should be unchanged.

In the second case, see (3.6), where the coefficients are still continuous but have a much steeper gradient, the convergence varies a bit at first but goes to second order when h gets small enough, Table 11. Since the error is quite large at first compared to the previously discussed domain, our belief is that this is due to the source term being very steep, i.e. containing very short wavelengths that cannot be represented on a coarser mesh. Then, when the mesh goes from being too coarse to being sufficiently refined for those wavelengths to be represented, the convergence is a bit faster than expected.

In the case with a discrete jump in the coefficients, see (3.7), the rate of convergence deteriorates to only first order, Table 12. It can be shown that this is expected for a central difference in one dimension, see proposition two in [12], and it is not far-fetched that this extends to the two dimensional case. So, if possible, when computing a real case it might be better to approximate a discrete jump in the coefficients with a very steep continuous function, in order to keep the order of accuracy. As mentioned in Section 3.1.2, the convergence rate for this case is obtained by comparing with a more accurate numerical solution. The validity of this is motivated by the fact that the method has been verified to work for the other cases.

5.2 Computational Complexity

In both of the tests γ = 1/2 is the fastest, see Tables 13 and 14. This is due to the fact that k does not have to be reduced for stability reasons, so only half the number of iterations in time is required. The values of γ that do not give unconditional stability will always require more iterations in time, as long as the maximum value of the coefficients, max(q(x, y)), is larger than some constant, depending on γ, defined by the stability criterion given in equation (2.16). The case γ = 1/12 allows for a higher convergence rate, as derived in Section 2.4.2, giving us the opportunity to choose the time step larger in comparison to the space discretisation and thus optimising the computational efficiency. This advantage, however, never really comes into play, as the size of the time step is still limited by the stability criterion.

The only unconditionally stable case tested was γ = 1/2. This might not be the fastest choice, since the matrix changes with γ, and a lower γ might need one AGMG iteration less, which could make it faster. This could be optimised by further testing.

Since the solution time increases linearly with the number of degrees of freedom in the matrix, the computational complexity is optimal. The number of AGMG iterations is not constant for γ = 1/12; it probably would be if the matrix were larger, but that was not feasible to test due to hardware constraints. However, the time still scaled almost linearly with the problem size. The iterations are constant for γ = 1/2, so there the method is definitely optimal.

References
