Numerical Solution of the Cauchy Problem for the Helmholtz Equation


Fredrik Berntsson, Vladimir A. Kozlov, Lydie Mpinganzima∗†, and Bengt Ove Turesson

Department of Mathematics, Linköping University

April 2, 2014

Abstract

The Cauchy problem for the Helmholtz equation appears in applications related to acoustic or electromagnetic wave phenomena. The problem is ill–posed in the sense that the solution does not depend on the data in a stable way. In this paper we give a detailed study of the problem. Specifically, we investigate how the ill–posedness depends on the shape of the computational domain and also on the wave number. Furthermore, we give an overview of standard techniques for dealing with ill–posed problems and apply them to the problem.

1. Introduction

The Helmholtz equation arises in applications related to acoustic and electromagnetic wave phenomena. In the case of acoustic waves, u describes the pressure and the wave number is given by k = ω/c, where ω > 0 is the frequency and c is the speed of sound. For electromagnetic waves, the wave number k is given in terms of the electric permittivity ε and the magnetic permeability µ by k = ω√(εµ); see [2, 6]. The Cauchy problem for the Helmholtz equation has applications in optoelectronics [11] and in the characterization of sound sources [8, 13].

Let Ω ⊂ R² be a simply connected domain with a piecewise smooth boundary Γ. The boundary is divided into three parts, Γ1, Γ2, and Γ3. The function is considered unknown on Γ1, and both function values and normal derivatives are available on Γ2. Either function values or normal derivatives are available on Γ3, i.e., on the rest of Γ. It is assumed that Γ1 is separated from Γ2. Note that we do not consider the case where Γ = Γ1 ∪ Γ2. The domain is illustrated in Figure 1.

∗ Department of Mathematics, Linköping University, SE-58183 Linköping, Sweden. Emails: fredrik.berntsson@liu.se, vladimir.kozlov@liu.se, lydie.mpinganzima@liu.se, bengt-ove.turesson@liu.se

† Department of Applied Mathematics, University of Rwanda, Huye Campus, Box 56


Figure 1: The domain Ω. The boundary consists of Γ1, where the function is unknown, Γ2, where Cauchy data is available, and Γ3 = Γ′3 ∪ Γ′′3, where Dirichlet data is available. The boundary pieces Γ1 and Γ2 are separated from each other.

The Cauchy problem for the Helmholtz equation can be formulated as follows: Find u ∈ C²(Ω) ∩ C¹(Ω̄) such that

    ∆u + k²u = 0,   in Ω,
    u = g,          on Γ2,
    ν · ∇u = 0,     on Γ2,
    u = 0,          on Γ3,        (1)

where k is the wave number, ν denotes the outward unit normal, and [u, ν · ∇u] = [g, 0] is the Cauchy data, available only on a part of the boundary Γ. We remark that the case of a non–zero normal derivative ν · ∇u = h on Γ2, or a non–zero function value u on Γ3, can be reduced to the above problem by first solving a well–posed boundary value problem and using the linearity of the equation.

The Cauchy problem (1) has previously been studied by many authors; see for instance [1, 11, 15, 16]. The problem is ill–posed in the sense that a small perturbation in the Cauchy data can cause an arbitrarily large change in the solution. This means that in the presence of measurement noise, or even computational errors, special regularization methods have to be used in order to find a stable solution to problem (1).

It is often useful to reformulate the Cauchy problem (1) as a linear operator equation. In order to do this we replace the boundary condition u|Γ2 = g in the above equation by the condition u|Γ1 = f. The resulting mixed boundary value problem is well–posed and can be used to define an operator, K : f ↦ g = u|Γ2. This is useful since, after discretization, the operator K is approximated by a matrix, and standard linear algebra techniques for solving ill–conditioned linear systems, such as the truncated singular value decomposition (see [4]), can be used to solve the Cauchy problem efficiently and in a stable way. A variety of iterative methods for solving ill–conditioned linear systems can also be used.

The aim of this paper is to investigate the performance of standard regularization techniques when applied to problem (1). The structure of the paper is as follows: In Section 2 we reformulate problem (1) as a linear operator equation. We also use the singular value decomposition to define the degree of ill–posedness of the resulting operator equation. In Section 3 we give details of the finite difference implementation for solving a mixed boundary value problem for the Helmholtz equation. We explain how to use the finite difference code to compute a matrix approximation of the linear operator. The degree of ill–posedness of the linear operator depends strongly on the size and shape of the domain Ω; this is studied further in Section 4. In Section 5 we illustrate the use of standard regularization techniques for solving the Cauchy problem for the Helmholtz equation. Both direct and iterative methods are considered. Finally, in Section 6, we discuss our numerical results and draw some conclusions.

2. An ill–posed operator equation

In this section we reformulate the Cauchy problem (1) as a linear operator equation. In order to do this, we replace the boundary condition u|Γ2 = g in (1) by the condition u|Γ1 = f. The resulting mixed boundary value problem is well–posed and can thus be used to define an operator, K : f ↦ g = u|Γ2. This is useful since, after discretization, the operator K is approximated by a matrix, so that standard linear algebra techniques, such as the truncated singular value decomposition, can be used to solve the Cauchy problem efficiently and in a stable way.

2.1. The operator equation

As previously, let Ω ⊂ R² be a domain with a piecewise smooth boundary Γ such that Γ = Γ1 ∪ Γ2 ∪ Γ3. Consider the following mixed boundary value problem for the Helmholtz equation: Find u ∈ C²(Ω) ∩ C¹(Ω̄) such that

    ∆u + k²u = 0,   in Ω,
    u = f,          on Γ1,
    ν · ∇u = 0,     on Γ2,
    u = 0,          on Γ3.        (2)

The above problem is well–posed and defines an operator,

    Kf = g,        (3)

where g = u|Γ2 and u is the unique solution of the problem (2). The operator K is well–defined.

The operator K is compact, provided that appropriate function spaces are introduced, and the solution to the operator equation (3) can be written in terms of the singular system {σn, un, vn}, n = 1, 2, ..., of the operator K. It holds that

    f = Σ_{n=1}^∞ ((un, g)/σn) vn.        (4)

The singular values σn decrease monotonically towards zero. Hence the formula (4) is numerically unstable: small perturbations in the coefficients (un, g) can cause drastic changes in the computed solution f, because they are divided by the small numbers σn. This observation leads us to the following definition [3, Section 2.2]:


Definition 2.1. Let {σn} be the singular values of the compact operator K. The degree of ill–posedness of the equation (3) is defined as follows: If the sequence {σn} decays as O(n^(−α)), for some α > 0, then the equation is mildly ill–posed. If, on the other hand, the sequence behaves as O(e^(−αn)), then the equation is severely ill–posed.

Remark 2.2. We mention here that we have excluded the case where the boundary Γ = Γ1 ∪ Γ2, since we only consider the case when the operator K is compact.

If the problem is consistent, i.e., there is an exact solution f such that the exact data g satisfies g = Kf, then the series (4) is convergent and f = K^(−1)g exists. This is typically not the case for measured data gδ containing noise. Hence regularization methods are used to approximate K^(−1). The following definition is often used:

Definition 2.3. A regularization of K^(−1) is a family of operators Rλ such that

    Rλ gδ → K^(−1)g = f,   as gδ → g,

where λ := λ(δ, gδ, f) is a regularization parameter, ‖g − gδ‖ ≤ δ is the noise level, g is the exact data such that Kf = g, and gδ is the available data.

2.2. A concrete test problem

In order to examine the properties of the operator equation (3) more closely, we specify a concrete test case. Let the domain be Ω = [0, 1] × [0, L] and consider the following well–posed boundary value problem: Find u(x, y) such that

    ∆u + k²u = 0,            0 < x < 1, 0 < y < L,
    u_y(x, 0) = 0,           0 < x < 1,
    u(0, y) = u(1, y) = 0,   0 < y < L,
    u(x, L) = f(x),          0 < x < 1.        (5)

Using this boundary value problem we define the operator

    K1 : f(x) ↦ u(x, 0) = g(x).        (6)

Note that in this case both f(x) and g(x) can be taken from the same function space, e.g., C([0, 1]) or L²([0, 1]), and the operator K1 is self–adjoint. This property is naturally inherited by the matrix approximation, leading to simpler numerical methods.

Example 2.4. Let L = 0.2 in (5) and consider k² = 20.5 in the Helmholtz equation. By discretizing the equation (5) on a grid (xi, yj) of size 400 × 81 we obtain a matrix approximation K1 ∈ R^(400×400). In order to illustrate the properties of (6) in this case, we compute the singular value decomposition (SVD) of K1, given by K1 = UΣV^T. The singular values are displayed in Figure 3. We remark that the singular values decay exponentially, and only level off at the machine precision µ ≈ 1.1 · 10^(−16). By Definition 2.1, the problem is thus severely ill–posed.

Further, we select an exact solution u(x, y) of the Helmholtz equation, from which we evaluate two functions f(x) and g(x) such that (K1 f)(x) = g(x).


Figure 2: The function u(L, x) = f(x) is presented on the left. The corresponding function (K1 f)(x) = u(0, x) = g(x) is shown on the right. Note that the operator K1 is strongly smoothing, and the wiggly features of f(x) do not appear in the solution g(x).

Figure 3: The first 100 singular values σn of the matrix K1 (left). The singular values decay exponentially, and thus the operator equation is severely ill–posed. We also display the first 30 singular values together with the Fourier coefficients (un, gδ) (right). Initially the coefficients (un, gδ) tend to zero faster than the singular values σn.

The function g(x) is very smooth compared to f(x). This smoothing property is typical for ill–posed problems.

Finally, we add normally distributed noise with variance 10^(−5) to obtain simulated measurements gδ. We display in Figure 3 both the singular values σn and the coefficients (un, gδ). We know that if a solution to K1 f = g exists, then the series (4) is convergent. This is often referred to as the Picard criterion [3, 4]. This condition holds for the exact data g(x) but not for the noisy data gδ(x). We see from Figure 3 that initially the coefficients (un, gδ) decay faster than the singular values σn, but the coefficients level out at the noise level δ = 10^(−5). We conclude that we can only hope to accurately reconstruct at most 15–20 singular components of the solution.
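The leveling-off of the coefficients at the noise level is easy to reproduce numerically. The sketch below uses a synthetic matrix with exponentially decaying singular values as a stand-in for K1; the matrix, exact solution, and noise level are illustrative assumptions, not the paper's actual discretization:

```python
import numpy as np

# Synthetic stand-in for K1: orthogonal factors and exponentially decaying
# singular values sigma_n = e^{-0.4 n}, mimicking a severely ill-posed problem.
rng = np.random.default_rng(0)
n = 100
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = np.exp(-0.4 * np.arange(n))
K = U @ np.diag(sigma) @ V.T

x = np.linspace(0, 1, n)
f = np.sin(np.pi * x)                # smooth exact solution
g = K @ f                            # exact data
delta = 1e-5
g_delta = g + delta * rng.standard_normal(n) / np.sqrt(n)  # noise of norm ~ delta

# Picard-plot data: for exact data the coefficients |u_n^T g| decay at least as
# fast as sigma_n; for noisy data they level off near delta, so the tail
# coefficients end up far above the singular values.
coeffs = np.abs(U.T @ g_delta)
print(bool(np.all(coeffs[60:] > sigma[60:])))
```

Plotting `coeffs` against `sigma` on a semilogarithmic scale reproduces the qualitative behaviour of the right panel of Figure 3.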

Remark 2.5. Suppose now that the domain is Ω̃ = [0, L1] × [0, L2]. In this case a change of variables (x, y) ↦ (x/L1, y/L1) transforms the Helmholtz equation into

    u_xx + u_yy + (L1 k)² u = 0,   in Ω = [0, 1] × [0, L2/L1].

It is thus sufficient to consider the case where the interval in the x variable is [0, 1].

3. Numerical implementation

In this section we present details of our numerical implementation. The well–posed boundary value problem (5) has been discretized using finite differences, and the resulting code is used to create a matrix approximation of the linear operator (6).

3.1. The Helmholtz equation in a rectangle

In our implementation of the finite difference method for problem (5) we pick a regular grid:

    {(xi, yj) : xi = i∆x, yj = j∆y, 0 ≤ i ≤ N, 0 ≤ j ≤ M}.

Let u_{i,j} denote the numerical approximation of u(xi, yj). The finite difference discretization [14] of the Helmholtz equation at each interior point (xi, yj) simplifies to

    (1/∆x²)(u_{i−1,j} + u_{i+1,j}) + (−2/∆x² − 2/∆y² + k²) u_{i,j} + (1/∆y²)(u_{i,j−1} + u_{i,j+1}) = 0.

At the boundaries corresponding to x = 0 and x = 1, the value of the function u(x, y) is zero, and the corresponding variables u_{i,j} are explicitly set to zero. At the boundary y = L we have a Dirichlet condition,

    u_{i,M} = f(xi),   i = 0, ..., N,

where u(x, L) = f(x) is the prescribed Dirichlet data at y = L. The Neumann boundary condition u_y(x, 0) = 0 is discretized using a second–order accurate difference formula. This choice means that we approximate the Neumann condition with accuracy O(∆y²), and the resulting finite difference method is also O(∆x² + ∆y²) accurate.

The finite difference formulas above lead to a large and sparse linear system of equations Au = b, where the matrix A is of size MN × MN and b is a vector that contains the boundary data. Since the matrix A has roughly 5 non–zero elements in each row, the total number of non–zero elements is only nnz(A) ≈ 5MN, and the system can be solved efficiently using an iterative method such as GMRES. In our application we want to solve the same well–posed boundary value problem multiple times, with different data f(x). Since the matrix A depends only on the size of the domain, i.e., L, and the parameter k², and not on the actual boundary values f(x), we can save the sparse LU factorization and reuse it. This significantly improves the computational speed for the methods we consider. Moreover, since the matrix A is banded, the number of non–zeros in the factors L and U remains relatively low.
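The assemble-once, factor-once strategy can be sketched as follows with SciPy. The grid sizes are illustrative, and for simplicity the sketch imposes homogeneous Dirichlet conditions on all four sides instead of the mixed conditions of problem (5), so it is a simplified stand-in for the actual solver:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Illustrative grid sizes only (the paper uses a 400 x 81 grid).
N, M = 40, 9
L, k2 = 0.2, 20.5
dx, dy = 1.0 / N, L / M

# 5-point stencil for Delta u + k^2 u at the interior points, assembled as a
# Kronecker sum. Homogeneous Dirichlet conditions on all sides for simplicity.
nx, ny = N - 1, M - 1
Dxx = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(nx, nx)) / dx**2
Dyy = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(ny, ny)) / dy**2
A = sp.kron(sp.identity(ny), Dxx) + sp.kron(Dyy, sp.identity(nx)) \
    + k2 * sp.identity(nx * ny)

# Factor once, then reuse the factorization for many right-hand sides
# (i.e. many different Dirichlet data f at y = L).
lu = splu(A.tocsc())
xs = np.linspace(dx, 1 - dx, nx)
b = np.zeros(nx * ny)
b[-nx:] = -np.sin(np.pi * xs) / dy**2   # boundary data moved to the RHS
u = lu.solve(b)
print(u.shape)
```

Each additional data vector f then costs only one call to `lu.solve`, which is the property exploited when the operator has to be evaluated many times.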

3.2. The matrix approximation

Recall that the linear operator K1, see (6), maps function values u|Γ1 = f onto function values u|Γ2 = g. In the previous section we explained the details of the finite difference implementation that allows us to evaluate the operator K1 by solving a well–posed boundary value problem.

Suppose that n1 is the number of grid points located on the boundary Γ1 and n2 is the number of grid points located on Γ2. Then the functions f and g are represented by vectors F ∈ R^(n1) and G ∈ R^(n2). Since the problem (2), and hence the corresponding operator, is linear, the operator can be approximated by a matrix:

    K̃ F = G,   K̃ ∈ R^(n2×n1).

Further, to obtain the matrix we simply pick the set of standard basis vectors {ei} for R^(n1) and evaluate the operator,

    K ei = ki,   K̃(:, i) = ki,   i = 1, ..., n1.

Thus, by solving the mixed boundary value problem (2) using the finite difference discretization, with discrete data F = ei, we obtain one column of the matrix K̃ approximating the operator K. In the rest of the paper we will use the symbol K to denote either the matrix or the operator.

Note that since we can reuse the sparse LU decomposition when evaluating K1 for the multiple basis vectors {ei}, i = 1, ..., n1, the cost of computing the matrix K̃ is often not unreasonably large. This also means that if an iterative method requires more than n1 iterations, the same amount of computational resources could instead have been spent on building the matrix explicitly.
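The column-by-column construction above can be sketched as follows. The function `solve_forward` is a hypothetical placeholder for the finite difference solver; here it is simply a fixed linear map, so that the sketch is self-contained and runnable:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 8, 6
A = rng.standard_normal((n2, n1))   # stands in for the true linear map f -> g

def solve_forward(f):
    """Hypothetical placeholder for the finite difference solver: given
    discrete data f on Gamma_1, return the trace g on Gamma_2."""
    return A @ f

# Build the matrix approximation column by column: one well-posed solve
# per standard basis vector e_i.
K = np.zeros((n2, n1))
for i in range(n1):
    e = np.zeros(n1)
    e[i] = 1.0
    K[:, i] = solve_forward(e)

print(np.allclose(K, A))   # the columns reproduce the underlying linear map
```

The cost is exactly n1 well-posed solves, which is the break-even point against iterative methods mentioned above.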

4. Numerical study of the Helmholtz equation

In this section we investigate, numerically, the properties of the operator K1 given by (6). We select a discretization of the x–axis of size N = 400. The discretization of the y–axis is selected so that ∆x = ∆y; for example, we choose M = 81 points in the case L = 0.2. Thus the matrix representation


Figure 4: The first 50 singular values of the matrix K1, for k² = 20.5 and different values of L (left). We see that the degree of ill–posedness depends strongly on the distance L between Γ1 and Γ2. Also shown are the first 8 singular values for the case L = 0.3 and different values of k² (right). We see that the first few singular values depend strongly on k², but the rate of decay, and hence the degree of ill–posedness, does not.

of the operator will be a 400 × 400 matrix. In all cases the matrix is computed using the finite difference discretization presented in Section 3. Since the matrix K1 is rather small, we can compute its singular value decomposition (SVD) with dense matrix methods, for example Matlab's svd routine. The parameters that we investigate are L, i.e., the distance between Γ1 and Γ2, and the wave number k².

We first investigate the dependence of the singular values on the parameter L, i.e., the distance between Γ1 and Γ2. We computed the matrix approximation of the operator K1 for equally spaced values of L between 0.1 and 0.5. The results are displayed in Figure 4. We observe that for L = 0.5 only about 20 singular values are above the machine precision, and the problem is very challenging to solve with realistic noise levels. For L = 0.1, the singular values decay at a much slower rate, and about 140 singular values are above the machine precision.

We then investigate the dependence of the singular values on the wave number k². In this experiment we computed the matrix K1 for equally spaced values of k² between 8 and 28. In Figure 4 we see that only the first few singular values depend strongly on the wave number. The rate of decay, and thus the degree of ill–posedness of the problem, does not depend on the wave number at all. Hence the difficulty in solving the Cauchy problem should depend very little on the wave number.

5. An overview of regularization methods

In this section we study methods for solving the ill–conditioned linear system

    K1 F = Gδ,        (7)

where Gδ is the discrete, and noisy, data vector. Since the original operator equation is ill–posed, (7) is a discrete ill–posed problem [4].

Figure 5: The functions F and Gδ used for the test problem (left), and the solution u(x, y) (right).

Methods for solving discrete ill–posed problems are either direct, i.e., they work directly with the matrix K1 and compute the solution using a formula based on a decomposition of the matrix, or iterative, i.e., they compute a sequence Fk that converges to the desired solution. Iterative methods only access the matrix indirectly, through matrix–vector products, or equivalently through repeated evaluations of the operator K1. Thus the matrix is not needed explicitly: a finite difference solver for the mixed boundary value problem (5) can instead be used to evaluate a matrix–vector product K1F.

Iterative methods often require a subroutine for computing matrix–vector products involving both K and its transpose K^T. An example is the Landweber method [7],

    F_{n+1} = F_n + γ K^T (Gδ − K F_n),   n = 0, 1, 2, ...,

which has been applied to the Cauchy problem for the Helmholtz equation [1]. Matrix–vector products involving K^T are computed by solving an adjoint problem to (5), i.e., a mixed boundary value problem involving the same equation but with a different set of boundary conditions; see [5] for details.

5.1. Direct regularization methods

In this section we investigate direct regularization methods for solving the discrete ill–posed system K1 F = Gδ, where Gδ represents the noisy data vector. For direct methods we assume that the matrix K1 is available, and of relatively small dimension, so that it is feasible to compute a singular value decomposition K1 = UΣV^T.

Example 5.1. Let Ω = [0, 1] × [0, 0.2] and let the wave number be k² = 20.5. As previously, we discretize the domain using a grid of size 400 × 81 and compute a matrix K1 of size 400 × 400. Since the matrix is of small size we can compute its SVD K1 = UΣV^T. We also pick an exact solution F and data G, satisfying K1 F = G, and add normally distributed noise of variance 10^(−4) to obtain Gδ.


Figure 6: The TSVD solutions F^W_tsvd (blue solid curve) computed using W = 8 (top, left), W = 13 (top, right), and W = 18 (bottom, left). We also display the exact solution F (black dashed curve). The error ‖F − F^W_tsvd‖2, for W = 1, ..., 20, is also displayed (bottom, right). We see that the error is small for a reasonably large range of values of W.

Since the ill–posedness is associated with the small singular values, an obvious remedy is to truncate the series (4). Thus, for a positive integer W, we define the truncated singular value decomposition (TSVD) solution as

    F^W_tsvd = Σ_{n=1}^W (u_n^T Gδ / σn) vn.        (8)

The corresponding operator R_W is bounded, and we have restored stability of the solution with respect to measurement errors.

The method includes the parameter W, i.e., the number of components to include in the solution. If the noise level ‖G − Gδ‖2 ≤ δ is known, then the Morozov discrepancy principle, see [10], picks the smallest W such that

    ‖K1 F^W_tsvd − Gδ‖2 ≤ δ.        (9)

In order to demonstrate the performance of the method we compute TSVD solutions for a range of W–values. In Figure 6 we display the TSVD solutions F^W_tsvd for W = 8, 13, and 18. We also display the error ‖F − F^W_tsvd‖2 as a function of W. Using an appropriate level of regularization we manage to obtain a fairly good solution. For this example the discrepancy principle would suggest W = 11 as the appropriate regularization parameter to use.


Figure 7: The error ‖F^λ_tik − F‖2, for 10^(−4) ≤ λ ≤ 2 · 10^(−3) (left). Also the solution F^λ_tik, for λ = 3 · 10^(−4) (right). This demonstrates that the Tikhonov method can produce good solutions.

Another direct regularization method is Tikhonov regularization. The solution F^λ_tik is the unique minimizer of the Tikhonov functional,

    ‖K1 F − Gδ‖2² + λ²‖F‖2²,        (10)

where λ > 0 is the regularization parameter. The minimization problem represents a compromise between minimizing the residual ‖K1 F − Gδ‖2 and enforcing stability, i.e., keeping the solution norm ‖F‖2 small. Using the SVD of K1 we can write the Tikhonov solution as

    F^λ_tik = Σ_{i=1}^N (σi u_i^T Gδ / (σi² + λ²)) vi.        (11)

The performance of the Tikhonov method is illustrated in Figure 7. We present the error as a function of the regularization parameter λ, and also the solution for a good choice of the regularization parameter. The discrepancy principle can be used to find an appropriate value for the parameter λ; for this example we obtain λ = 1.3 · 10^(−3).
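The Tikhonov solution (11) can be sketched directly from the SVD, with the coefficients damped by the filter factors σi²/(σi² + λ²). The test system below is synthetic and the value of λ is illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 60
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = np.exp(-0.5 * np.arange(n))
K = U @ np.diag(sigma) @ V.T
F_true = np.cos(np.pi * np.linspace(0, 1, n))
G_delta = K @ F_true + 1e-4 * rng.standard_normal(n) / np.sqrt(n)

def tikhonov(lam):
    """Tikhonov solution (11): coefficients damped by sigma^2/(sigma^2+lam^2)."""
    c = U.T @ G_delta
    return V @ (sigma * c / (sigma**2 + lam**2))

F_naive = tikhonov(0.0)        # lambda = 0: the unregularized, unstable solution
F_reg = tikhonov(1e-3)         # illustrative regularization parameter
err_naive = np.linalg.norm(F_naive - F_true)
err_reg = np.linalg.norm(F_reg - F_true)
print(err_reg < err_naive)
```

Sweeping `lam` over a range and recording `err_reg` reproduces the qualitative shape of the error curve in the left panel of Figure 7.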

5.2. Iterative regularization methods

Iterative regularization methods typically access the matrix indirectly; it is assumed that a subroutine for evaluating matrix–vector products is available. This typically means that in each iteration a well–posed boundary value problem is solved. In our case the finite difference solver implemented for the problem (5) is used to evaluate the effect of applying the operator K1 to a vector F. If the iterative method involves the transpose K1^T, then an adjoint problem can be solved in order to evaluate the effect of applying K1^T to a vector; see, e.g., [5] for details.

Krylov subspace methods [12] are based on seeking an approximate solution in the Krylov subspace,

    K_n = span{r^(0), K r^(0), ..., K^(n−1) r^(0)},

Figure 8: The error ‖F^n_cg − F‖2 for the first 8 CG iterations (left), and the residuals ‖K1 F^n_cg − Gδ‖2 (right). This illustrates the fast convergence of the CG method.

where r^(0) is the residual corresponding to a starting guess x^(0). Such methods can be used as regularization methods, since it has been noted that the Krylov subspace K_n tends to approximate the subspace span(v1, ..., vn) spanned by the first n singular vectors, which correspond to the largest singular values [4, p. 145]. Thus the solutions computed by Krylov subspace methods can be considered approximations of the TSVD solution.

The two standard Krylov subspace methods are the generalized minimal residual method (GMRES), for general non–singular matrices, and the conjugate gradient method (CG), for symmetric and positive definite matrices. Both methods use matrix–vector multiplications to build an orthogonal basis for the Krylov subspace K_n and compute the solution by solving a small least squares problem. Efficient implementations exist.

Example 5.2. In order to demonstrate the regularizing effects of Krylov subspace methods we once again consider the case detailed in Example 5.1. The matrix K1 is of size 400 × 400, symmetric, and positive definite (if we ignore eigenvalues close to the machine precision). Therefore we attempt to solve the linear system K1 F = Gδ using the conjugate gradient method.

The residuals ‖K1 F^n_cg − Gδ‖2 and errors ‖F^n_cg − F‖2, for the first 8 CG iterations using the starting guess F0 = 0, are displayed in Figure 8. We see that the method converges very rapidly. The errors initially decrease, but start to increase after only a few iterations. This is often referred to as semi–convergence [4, p. 135]. It is important to stop the iterations before the error starts to increase, and the number of iterations used acts as a regularization parameter. For this particular experiment the discrepancy principle would have stopped the iterations at n = 6. We also display the solutions obtained after n = 5 and n = 10 iterations in Figure 9. Clearly, by using n = 10 iterations we approximate K1^(−1) too accurately, and the ill–posedness of the problem shows in the computed solution.
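The semi-convergence phenomenon is easy to reproduce. The sketch below runs plain CG on a synthetic symmetric positive definite, severely ill-conditioned system (all sizes and noise levels are illustrative assumptions): the 2-norm error first decreases and later grows again, so the minimum occurs at an intermediate iteration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
# Symmetric positive definite matrix with exponentially decaying eigenvalues.
K = Q @ np.diag(np.exp(-0.7 * np.arange(n))) @ Q.T
F_true = np.sin(np.pi * np.linspace(0, 1, n))
G_delta = K @ F_true + 1e-5 * rng.standard_normal(n) / np.sqrt(n)

# Plain conjugate gradient iteration, recording the error after each step.
F = np.zeros(n)
r = G_delta - K @ F
p = r.copy()
errors = []
for _ in range(30):
    Kp = K @ p
    alpha = (r @ r) / (p @ Kp)
    F = F + alpha * p
    r_new = r - alpha * Kp
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
    errors.append(np.linalg.norm(F - F_true))

best = int(np.argmin(errors))        # iteration with the smallest error
print(min(errors) < errors[-1])      # the error grows again: semi-convergence
```

Stopping the iteration near `best`, for instance via the discrepancy principle, is precisely the regularizing role of the iteration count described above.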

Figure 9: The exact solution F (black, dashed) and the CG solution F^n_cg produced after n iterations (blue, solid). We use both n = 5 (left) and n = 10 (right) iterations. Clearly the case n = 10 gives an unphysical solution.

6. Concluding Remarks

In this paper we have presented a Cauchy problem for the Helmholtz equation. Since the problem is important for applications, we wanted to understand its properties and investigate possible options for computing the solution. Since the problem is severely ill–posed, regularization methods are needed to restore stable dependence on errors in the Cauchy data. By rewriting the problem as an ill–posed operator equation and computing the SVD of the corresponding operator, we found that the ill–posedness of the problem depends strongly on the size and shape of the domain Ω, while the wave number k² does not influence the degree of ill–posedness of the problem.

Also, evaluating the linear operator is equivalent to solving a well–posed mixed boundary value problem for the Helmholtz equation. By implementing a finite difference solver for the well–posed boundary value problem, we can approximate the linear operator by a matrix. Hence we can use standard linear algebra methods for solving discrete ill–conditioned systems. We tested both the TSVD and Tikhonov methods and managed to produce good results.

The drawback of using a method, such as TSVD, that relies on a decomposition of the matrix, is that a large number of well–posed boundary value problems needs to be solved in order to first compute the matrix explicitly; i.e., to obtain a matrix K of size 400 × 400 we need to solve 400 well–posed boundary value problems. Hence we also consider iterative methods that only rely on a subroutine for computing matrix–vector products. Typically, in each iteration one well–posed boundary value problem is solved. Thus, as long as the number of iterations remains low, such methods are vastly more efficient than direct methods. In the paper we apply the CG method and manage to obtain good solutions after only a small number of iterations.

We observe that direct methods such as TSVD or Tikhonov usually produce better solutions, and that it is easier to select a suitable level of regularization. Hence, if an iterative method requires a large number of iterations, it is usually preferable to switch to a direct method.


References

[1] F. Berntsson, V.A. Kozlov, L. Mpinganzima, and B.O. Turesson. An alternating iterative procedure for the Cauchy problem for the Helmholtz equation. Inverse Problems in Science and Engineering, 22(1):45–62, 2014.

[2] D. Colton and R. Kress. Inverse Acoustic and Electromagnetic Scattering Theory. Springer–Verlag, 2nd edition, 1998.

[3] H. Engl, M. Hanke, and A. Neubauer. Regularization of Inverse Problems. Kluwer Academic Publishers, Dordrecht, the Netherlands, 1996.

[4] P. C. Hansen. Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion. Society for Industrial and Applied Mathematics, Philadelphia, 1997.

[5] D.N. Hào and D. Lesnic. The Cauchy problem for Laplace's equation via the conjugate gradient method. IMA Journal of Applied Mathematics, 65:199–217, 2000.

[6] D.S. Jones. Acoustic and Electromagnetic Waves. Clarendon Press, 1986.

[7] L. Landweber. An iteration formula for Fredholm integral equations of the first kind. Amer. J. Math., 73:615–624, 1951.

[8] C. Langrenne and A. Garcia. Data completion method for the characterization of sound sources. J. Acoust. Soc. Am., 130(4):2016–2023, 2011.

[9] D. Maxwell. Kozlov–Maz'ya iteration as a form of Landweber iteration. arXiv:1107.2194v1 [math.AP], 2011.

[10] V. A. Morozov. On the solution of functional equations by the method of regularization. Soviet Math. Dokl., 7:414–417, 1966.

[11] T. Regińska and K. Regiński. Approximate solution of a Cauchy problem for the Helmholtz equation. Inverse Problems, 22(3):975–989, 2006.

[12] Y. Saad. Iterative Methods for Sparse Linear Systems. International Thomson Publishing, 1996.

[13] A. Schuhmacher, J. Hald, H.B. Rasmussen, and P.C. Hansen. Sound source reconstruction using inverse boundary element calculations. J. Acoust. Soc. Am., 113(1):114–127, 2003.

[14] G.D. Smith. Numerical Solution of Partial Differential Equations. Oxford University Press, Oxford, third edition, 1985.

[15] Xiang-Tuan Xiong. A regularization method for a Cauchy problem of the Helmholtz equation. Journal of Computational and Applied Mathematics, 233(8):1723–1732, 2010.

[16] Xiang-Tuan Xiong and Chu-Li Fu. Two approximate methods of a Cauchy problem for the Helmholtz equation. Comput. Appl. Math., 26(2):285–307, 2007.
