Eigenvalue analysis for summation-by-parts finite difference time discretizations

Andrea A. Ruggiu^{a,∗}, Jan Nordström^a

^a Department of Mathematics, Computational Mathematics, Linköping University, SE-581 83 Linköping, Sweden

Abstract

Diagonal norm finite-difference based time integration methods in summation-by-parts form are investigated. The second, fourth and sixth order accurate discretizations are proven to have eigenvalues with strictly positive real parts. This leads to provably invertible fully-discrete approximations of initial boundary value problems.

Our findings also allow us to conclude that the second, fourth and sixth order time discretizations are stiffly accurate, strongly S-stable and dissipatively stable Runge-Kutta methods. The procedure outlined in this article can be extended to even higher order summation-by-parts approximations with repeating stencil.

Keywords: Time integration, Initial value problem, Summation-by-parts operators, Finite difference methods, Eigenvalue problem

1. Introduction

Implicit time-integration methods can be used to reduce severe stability restrictions for stiff spatially discretized well-posed initial boundary value problems. Summation-By-Parts (SBP) operators [1, 2], with Simultaneous-Approximation-Terms (SAT) weakly imposing boundary and initial conditions, allow for energy-stable and high-order accurate approximations [3, 4, 5, 6]. The solution of the resulting fully-discrete problem is unique, provided that the eigenvalues of the time discretization operator have strictly positive real parts [4, 5, 7]. This assumption on the eigenvalues has been proved for pseudo-spectral collocation methods [8] and second order finite-difference discretizations [4]. However, it does not hold for all types of SBP-SAT approximations [7, 9] and it has only been conjectured for higher order finite-difference methods [4, 5].

∗Corresponding author.
Email addresses: andrea.ruggiu@liu.se (Andrea A. Ruggiu), jan.nordstrom@liu.se (Jan Nordström)

The dual-consistent [10, 11] SBP-SAT time discretizations based on finite difference methods are also A-, L- and B-stable [5]. However, the existence of eigenvalues with strictly positive real parts allows for the proof of various additional stability properties [9]. In this article, we prove that second-, fourth- and sixth-order accurate SBP-SAT time approximations based on diagonal-norm finite difference methods have eigenvalues with strictly positive real parts. We also provide a general procedure that can be used to show that this property also holds for higher order approximations with repeating stencil.

The paper is organized as follows. In section 2, the finite-difference based SBP-SAT time discretizations are introduced for initial value problems. The invertibility of the resulting algebraic problem is reinterpreted in terms of the eigenvalues of the time operator. Section 3 deals with the eigenvalue analysis of second order approximations in SBP form. In this case, the eigenvalues can be computed in closed form by means of a necessary and sufficient condition for the existence of the eigenvectors. This condition outlines a general procedure for a proof that the fourth- (section 4) and sixth-order (section 5) accurate time approximations fulfill the eigenvalue assumption. In section 6, we demonstrate numerically that the theoretical results are correct. We also provide final remarks about our findings and relate these to Runge-Kutta methods. The conclusions are given in section 7.

2. Finite difference implicit methods for initial value problems

In this section we briefly introduce finite difference based SBP-SAT approximations for initial value problems [4] and discuss their invertibility [8].

2.1. SBP-SAT time discretizations

Let t = [t_0, t_1, . . . , t_N]^T be a set of N + 1 equidistant nodes t_j = α + j∆t, with j = 0, . . . , N, and ∆t = (β − α)/N. A finite difference based Summation-By-Parts operator discretizing the first derivative on t can be defined as follows [2]:

Definition 2.1. A discrete operator D = P^{-1}Q is a (p, q)-accurate approximation of the first derivative with the Summation-By-Parts (SBP) property if i) its truncation error is O(∆t^p) in the interior and O(∆t^q) at the boundaries, ii) P is a symmetric positive definite matrix and iii) Q + Q^T = B = diag(−1, 0, . . . , 0, 1).

SBP operators based on centered finite-differences and diagonal norms P are available for even orders p = 2q in the interior, while the boundary closure is qth order accurate. We will for stability reasons [12, 13, 14] only consider operators with a diagonal P. Under this assumption, the global truncation error of first derivative SBP-SAT discretizations with diagonal norms and pointwise bounded solution is O(∆t^{q+1}) [15]. However, dual consistent formulations exhibit a superconvergent behavior for the solution at the last time-step, with an accuracy of O(∆t^{2q}) [10, 11]. For this reason we will refer to (2q, q)-accurate approximations as 2qth order discretizations.

The restriction to diagonal norms makes the 2nd and 4th order accurate SBP first derivative operators with minimal bandwidth unique [2]. For higher order approximations, the SBP operators D are usually given in terms of free parameters which can be tuned to fulfill additional constraints, such as for example minimizing the bandwidth [2], reducing the error constant or minimizing the spectral radius [16].
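As a minimal sketch (Python with NumPy assumed), the properties in Definition 2.1 can be verified numerically for the standard second order diagonal-norm operator, whose entries are the well-known (2,1) SBP coefficients:

```python
import numpy as np

def sbp_21(N, dt=1.0):
    """Standard second order diagonal-norm SBP operator on N+1 nodes.
    Returns P and Q such that D = P^{-1}Q approximates d/dt (Def. 2.1)."""
    P = dt * np.diag([0.5] + [1.0] * (N - 1) + [0.5])
    Q = 0.5 * (np.diag(np.ones(N), 1) - np.diag(np.ones(N), -1))
    Q[0, 0], Q[N, N] = -0.5, 0.5
    return P, Q

N, dt = 20, 0.1
P, Q = sbp_21(N, dt)
B = np.zeros((N + 1, N + 1)); B[0, 0], B[N, N] = -1.0, 1.0

# ii) P is symmetric positive definite, iii) Q + Q^T = B
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.allclose(Q + Q.T, B)

# D = P^{-1}Q differentiates constants and linear functions exactly
t = dt * np.arange(N + 1)
D = np.linalg.solve(P, Q)
assert np.allclose(D @ np.ones(N + 1), 0.0)
assert np.allclose(D @ t, np.ones(N + 1))
```

The same checks apply verbatim to the higher order operators once their boundary blocks are filled in.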

2.2. The initial value problem and its discretization

Consider the initial value problem

u_t + λu = 0,  0 < t < T,
u(0) = f,    (1)

where the complex constant λ represents a suitable spatial discretization of an initial-boundary value problem. Assuming that the corresponding semi-discrete approximation is energy-stable, it is known that Re(λ) ≥ 0 [4].

To obtain an estimate of the solution at t = T , we apply the energy-method (multiplying by the complex conjugate of u, integrating in time and using integration by parts) to (1) and get

|u(T)|² + 2Re(λ)‖u‖² = |f|².    (2)

In (2), ‖u‖² = ∫_0^T |u|² dt and the solution at the final time is bounded by the data.
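The identity (2) can be confirmed numerically with the exact solution u(t) = f e^{−λt}. The following sketch (Python with NumPy assumed; the tolerance reflects the trapezoidal quadrature error) checks it for one arbitrary choice of data:

```python
import numpy as np

# Check the energy identity (2) for u_t + lam*u = 0, u(0) = f,
# using u(t) = f*exp(-lam*t) and a fine trapezoidal quadrature.
lam, f, T = 1.0 + 2.0j, 3.0, 1.0
t = np.linspace(0.0, T, 200001)
u = f * np.exp(-lam * t)

h = t[1] - t[0]
w = np.abs(u) ** 2
norm2 = h * (w.sum() - 0.5 * (w[0] + w[-1]))   # ||u||^2 = int_0^T |u|^2 dt
lhs = np.abs(u[-1]) ** 2 + 2 * lam.real * norm2
assert abs(lhs - np.abs(f) ** 2) < 1e-8        # matches |f|^2
```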


A discrete approximation of the initial value problem (1) can be obtained by using the SBP operators in Definition 2.1. These allow for a perfect mimicking of the energy-method used to get the continuous estimate (2). In particular, by considering the grid t = [t_0, . . . , t_N]^T ⊂ [0, T], the SBP discretization of (1) with a weakly imposed initial condition using SAT reads

P^{-1}Qu + λu = σP^{-1}E_0 (u − f e_0),    (3)

where e_0 = [1, 0, . . . , 0]^T. In (3), the vector u = [u_0, . . . , u_N]^T contains the numerical approximations of u at each t_i, i = 0, . . . , N, i.e. u_i ≈ u(t_i). Moreover, σ is a penalty parameter and E_0 = diag(1, 0, . . . , 0). If σ = −1, the approximation is dual consistent [10, 17] and it is also known to be A-, L- and B-stable [5].

By applying the discrete-energy method to (3) (multiplying by u*P, where * denotes the conjugate transpose, and using the SBP property), we find [4]

|u_N|² + 2Re(λ)‖u‖²_P = (1 + 2σ)|u_0|² − σ(ū_0 f + u_0 f̄).    (4)

In (4), the bar indicates complex conjugation. The dual consistent choice σ = −1 leads to

|u_N|² + 2Re(λ)‖u‖²_P = |f|² − |u_0 − f|²,

which mimics the continuous estimate (2). The additional term on the right-hand side adds numerical dissipation, which vanishes as the number of nodes increases.
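A short numerical check (Python with NumPy assumed; second order operator, σ = −1) confirms that the discrete energy identity above holds to roundoff for the computed solution of (3):

```python
import numpy as np

# Solve the SBP-SAT system (3) with sigma = -1 and verify
# |u_N|^2 + 2 Re(lam) ||u||_P^2 = |f|^2 - |u_0 - f|^2.
N, dt = 40, 0.05
lam, f, sigma = 2.0 + 5.0j, 1.5, -1.0

P = dt * np.diag([0.5] + [1.0] * (N - 1) + [0.5])
Q = 0.5 * (np.diag(np.ones(N), 1) - np.diag(np.ones(N), -1))
Q[0, 0], Q[N, N] = -0.5, 0.5
E0 = np.zeros((N + 1, N + 1)); E0[0, 0] = 1.0
e0 = np.zeros(N + 1); e0[0] = 1.0

Pinv = np.linalg.inv(P)
A = Pinv @ Q + lam * np.eye(N + 1) - sigma * Pinv @ E0
u = np.linalg.solve(A, -sigma * f * Pinv @ e0)

lhs = abs(u[-1]) ** 2 + 2 * lam.real * (u.conj() @ P @ u).real
rhs = abs(f) ** 2 - abs(u[0] - f) ** 2
assert abs(lhs - rhs) < 1e-8
```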

The estimate (4) indicates that the discretization (3) is energy-stable for σ ≤ −1/2, which implies that the approximation of the solution at the final time is bounded by data. However, the invertibility of (3) for SBP-SAT discretizations based on centered finite-differences of order higher than two is unclear [4, 6, 8]. To begin the analysis, we recast the problem (3) as

(D + λI)u = F,    (5)

where D = P^{-1}(Q − σE_0) and F = −σf P^{-1}e_0. Under the assumption Re(λ) ≥ 0, we conclude that the following proposition holds.

Proposition 2.2. The discrete problem (5) leads to an invertible system if the operator D has eigenvalues with strictly positive real parts.
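As an illustration of Proposition 2.2, the following sketch (Python with NumPy assumed) assembles D for the second order operator with σ = −1 and checks Re(µ) > 0 numerically; this is the property proved rigorously in section 3:

```python
import numpy as np

# D = P^{-1}(Q - sigma*E0) for the second order operator, sigma = -1.
# All eigenvalues mu should satisfy Re(mu) > 0, so that (D + lam*I)
# in (5) is invertible for every Re(lam) >= 0.
N, sigma = 50, -1.0
P = np.diag([0.5] + [1.0] * (N - 1) + [0.5])   # dt = 1, cf. Remark 2.7
Q = 0.5 * (np.diag(np.ones(N), 1) - np.diag(np.ones(N), -1))
Q[0, 0], Q[N, N] = -0.5, 0.5
E0 = np.zeros((N + 1, N + 1)); E0[0, 0] = 1.0

D = np.linalg.solve(P, Q - sigma * E0)
mu = np.linalg.eigvals(D)
assert mu.real.min() > 0.0
```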


The eigenvalues of the discrete operator D will henceforth be denoted by µ. These eigenvalues were proved to have strictly positive real parts for SBP-SAT approximations based on pseudo-spectral collocation methods [8]. On the other hand, Re(µ) > 0 is not fulfilled for all types of SBP operators [7, 9] and hence the invertibility of (5) is in general not guaranteed.

For dual consistent finite-difference approximations (i.e. for σ = −1), Re(µ) > 0 does not only lead to the uniqueness of the solution of (5), but also to null-space consistency of the first derivative SBP operator P^{-1}Q from which D is built [9, 18]. If this property holds, the problem (5) can be reinterpreted as a stiffly accurate, strongly S-stable and dissipatively stable Runge-Kutta time-integration method. In the following we will prove strict positivity of Re(µ) for discretizations based on SBP operators with a repeating stencil, such as the 2nd-, 4th- and 6th-order accurate finite difference formulations.

2.3. The eigenvalue problem for repeating stencil approximations

We recall the following theorem [8, 19, 20]

Theorem 2.3. Given a matrix A ∈ R^{n×n}, suppose that H = (1/2)(GA + A^T G) and S = (1/2)(GA − A^T G) for some positive definite matrix G and some positive semidefinite matrix H. Then A has eigenvalues with strictly positive real parts if, and only if, no eigenvector of G^{-1}S lies in the null space of H.

By applying Theorem 2.3 to A = D = P^{-1}(Q − σE_0) with G = P, we get

H = (1/2)[(Q − σE_0) + (Q − σE_0)^T] = (1/2)[E_N − (1 + 2σ)E_0],

which is positive semidefinite for σ ≤ −1/2, and

S = (1/2)[(Q − σE_0) − (Q − σE_0)^T] = Q − (1/2)E_N + (1/2)E_0,

which is skew-symmetric.
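Both properties can be checked directly (Python with NumPy assumed; second order operator, σ = −1):

```python
import numpy as np

# Verify that H = (E_N - (1+2*sigma)*E_0)/2 is positive semidefinite and
# S = Q - E_N/2 + E_0/2 is skew-symmetric for the second order operator.
N, sigma = 30, -1.0
Q = 0.5 * (np.diag(np.ones(N), 1) - np.diag(np.ones(N), -1))
Q[0, 0], Q[N, N] = -0.5, 0.5
E0 = np.zeros((N + 1, N + 1)); E0[0, 0] = 1.0
EN = np.zeros((N + 1, N + 1)); EN[N, N] = 1.0

H = 0.5 * (EN - (1.0 + 2.0 * sigma) * E0)   # = (E0 + EN)/2 for sigma = -1
S = Q - 0.5 * EN + 0.5 * E0
assert np.allclose(S, -S.T)                  # skew-symmetric
assert np.linalg.eigvalsh(H).min() >= -1e-14 # positive semidefinite
```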

Note that the matrix P^{-1}S is similar to the skew-symmetric matrix P^{-1/2}SP^{-1/2}, and hence its eigenvalues lie on the imaginary axis. For this reason, the eigenvalue problem associated to P^{-1}S can be rewritten as P^{-1}Sv = iξv, with ξ ∈ R. By limiting the analysis to σ < −1/2, the nullspace of H is given by the set of vectors v = (v_0, v_1, . . . , v_N)^T with v_0 = v_N = 0, since Hv = 0 is fulfilled by all vectors with first and last components equal to zero. Therefore, as a direct consequence of Theorem 2.3 it follows that

Lemma 2.4. Let σ < −1/2. For SBP operators of finite difference type, the matrix D = P^{-1}(Q − σE_0) has eigenvalues with strictly positive real parts if, and only if, P^{-1}S does not have imaginary eigenvalues with eigenvectors where the first and last components are equal to zero.

Remark 2.5. A similar criterion was already introduced in [4]. The condition to check is independent of σ, for σ < −1/2. However, since we are interested in dual-consistent formulations, we will henceforth only consider σ = −1.

Before the eigenvalue analysis, we mention that, due to the structure of Q, the matrix S is also skew-centrosymmetric, i.e. it is such that JS = −SJ, where J denotes the exchange matrix, with ones on the anti-diagonal and zeros elsewhere.

As a result, we can prove that

Lemma 2.6. If (iξ, v) is an eigenpair of P−1S, then (−iξ, J v) is also an eigenpair.

Proof. The relation P^{-1}Sv = iξv can be multiplied from the left by the nonsingular matrix J. Since P^{-1} is centrosymmetric, it commutes with J and we get P^{-1}JSv = iξJv. By the skew-centrosymmetry of S, we get P^{-1}S(Jv) = −iξ(Jv) and the claim follows.

Remark 2.7. In the upcoming sections we will implicitly assume (unless otherwise stated) that ∆t = 1, since P^{-1}Sv = iξv can be rewritten as P̄^{-1}Sv = i(ξ∆t)v, with P̄ = P/∆t independent of ∆t. This implies that the time-step acts only as a rescaling parameter for the eigenvalues of P^{-1}S and does not modify its eigenvectors. In particular, the existence of eigenvectors v with v_0 = v_N = 0 does not depend on ∆t.
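A small experiment (Python with NumPy assumed; second order operator) illustrates Remark 2.7: the spectrum of P^{-1}S is purely imaginary, and halving ∆t doubles the eigenvalues:

```python
import numpy as np

# Spectrum of P^{-1}S for the second order operator at two time-steps.
def PinvS(N, dt):
    P = dt * np.diag([0.5] + [1.0] * (N - 1) + [0.5])
    Q = 0.5 * (np.diag(np.ones(N), 1) - np.diag(np.ones(N), -1))
    Q[0, 0], Q[N, N] = -0.5, 0.5
    S = Q - np.diag([0.0] * N + [0.5]) + np.diag([0.5] + [0.0] * N)
    return np.linalg.solve(P, S)

N = 20
xi1 = np.sort(np.linalg.eigvals(PinvS(N, 1.0)).imag)
xi2 = np.sort(np.linalg.eigvals(PinvS(N, 0.5)).imag)
assert np.abs(np.linalg.eigvals(PinvS(N, 1.0)).real).max() < 1e-10
assert np.allclose(2.0 * xi1, xi2)   # eigenvalues rescale like 1/dt
```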

3. Spectral analysis for the second order approximation

In this section, we study the second order approximation of (1). Although the invertibility of this problem was already proved in [4], the second order case clarifies and identifies a necessary and sufficient condition for the existence of the eigenvectors of P^{-1}S. For second order approximations, this condition enables a complete eigenvalue analysis of the operator. For higher order approximations, it provides a clear path for excluding the existence of eigenvectors with first and last components equal to zero.

3.1. Condition for the existence of the eigenvectors

We start by considering the matrix P^{-1}S for the standard second order centered finite difference approximations on SBP form:

           [   0     1                       ]
           [ −1/2    0    1/2                ]
P^{-1}S =  [        ⋱     ⋱     ⋱          ] .    (6)
           [             −1/2    0     1/2   ]
           [                    −1     0     ]

To find the spectrum of this operator, we solve the internal stencil relation

(1/2)v_{k+1} − (1/2)v_{k−1} = iξv_k,  k = 1, . . . , N − 1.

The ansatz v_k = r^k gives rise to the characteristic equation r² − 2iξr − 1 = 0, which is solved by r_{1,2}(ξ) = iξ ± √(1 − ξ²). As a consequence, assuming that r_1 ≠ r_2, i.e. ξ ≠ ±1, the general expression for the interior components of the eigenvectors is

v_k = c_1(ξ)r_1^k(ξ) + c_2(ξ)r_2^k(ξ),  k = 1, . . . , N − 1,    (7)

where the coefficients c_1(ξ), c_2(ξ) must be determined by additional conditions.

Since v_0 and v_N are not given explicitly, we must derive a set of equations that relate the unknown coefficients c_1(ξ), c_2(ξ) to the boundary closure of P^{-1}S. In particular, by solving explicitly the first two equations of P^{-1}Sv = iξv with v_0 as a free parameter we get v_1 = iξv_0 and v_2 = (1 − 2ξ²)v_0. These two components should match the solution in (7), leading to the homogeneous linear rectangular system

[ r_1    r_2    −iξ      ] [ c_1(ξ) ]   [ 0 ]
[ r_1²   r_2²   2ξ² − 1  ] [ c_2(ξ) ] = [ 0 ] .
                           [ v_0    ]

The 2 × 3 matrix has rank 2, since the determinant of the first 2 × 2 block is equal to r_1r_2(r_1 − r_2) ≠ 0. By the rank-nullity theorem [21], the nullspace of the matrix, which provides the solution to the problem, has dimension one and it can be computed as the set of vectors

[c_1(ξ), c_2(ξ), v_0]^T = v_0 [1/2, 1/2, 1]^T.    (8)

We will henceforth denote the coefficients in (7) that fit with the left boundary equation by the vector c^L(ξ) = v_0 [c_1^L(ξ), c_2^L(ξ)]^T = v_0 [1/2, 1/2]^T.
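The coefficients in (8) can be checked against the explicitly computed components v_1 and v_2 (Python with NumPy assumed; ξ = 0.3 is an arbitrary test value):

```python
import numpy as np

# Check that c1 = c2 = v0/2 from (8) reproduces the explicitly computed
# components v1 = i*xi*v0 and v2 = (1 - 2*xi^2)*v0.
xi, v0 = 0.3, 1.0
r1 = 1j * xi + np.sqrt(1 - xi ** 2)
r2 = 1j * xi - np.sqrt(1 - xi ** 2)
c1 = c2 = v0 / 2

assert np.isclose(c1 * r1 + c2 * r2, 1j * xi * v0)                 # v1
assert np.isclose(c1 * r1 ** 2 + c2 * r2 ** 2, (1 - 2 * xi ** 2) * v0)  # v2
```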

Remark 3.1. Solving explicitly the eigenvector problem also for v_3 would have led to a third equation, c_1(ξ)r_1³(ξ) + c_2(ξ)r_2³(ξ) = iξ(3 − 4ξ²)v_0, and to the square system

[ r_1    r_2    −iξ           ] [ c_1(ξ) ]   [ 0 ]
[ r_1²   r_2²   2ξ² − 1       ] [ c_2(ξ) ] = [ 0 ]
[ r_1³   r_2³   iξ(4ξ² − 3)   ] [ v_0    ]   [ 0 ] .

However, the additional equation is linearly dependent on the other two. It can be obtained by summing the first equation with 2iξ times the second equation. Indeed, both r_1 and r_2 fulfill r³ = r + 2iξr².

The candidate eigenvector with v_k = (v_0/2)[r_1^k(ξ) + r_2^k(ξ)] for k = 1, . . . , N − 1 should also match the equations at the right boundary. To guarantee that this is the case, one could substitute these components in the last two equations of P^{-1}Sv = iξv, solving these separately for v_N and equating the two expressions. The ξ values fulfilling the resulting equation would determine the eigenvalues of P^{-1}S. However, this procedure would lead to a problem as difficult as finding the eigenvalues of the operator. To avoid the direct computation of the eigenvalues iξ, we mimic the approach used for the left boundary nodes by explicitly solving the last two equations of P^{-1}Sv = iξv for v_{N−2} and v_{N−1} with v_N as free parameter. Thus, equating the formal stencil expressions of v_{N−2} and v_{N−1} as functions of c_1(ξ), c_2(ξ) with the explicit solutions in terms of v_N, we get the homogeneous system

[ r_1^{N−2}   r_2^{N−2}   2ξ² − 1 ] [ c_1(ξ) ]   [ 0 ]
[ r_1^{N−1}   r_2^{N−1}   iξ      ] [ c_2(ξ) ] = [ 0 ] .
                                     [ v_N    ]

Again, the nullspace of the matrix has dimension one since the determinant of the first 2 × 2 block is r_1^{N−2}r_2^{N−2}(r_1 − r_2) ≠ 0. The nullspace of this matrix can be computed as

[c_1(ξ), c_2(ξ), v_N]^T = v_N [c_1^R(ξ), c_2^R(ξ), 1]^T    (9)

with

c_1^R(ξ) = − [iξ + r_2(1 − 2ξ²)] / [r_1^{N−2}(r_1 − r_2)],    c_2^R(ξ) = [iξ + r_1(1 − 2ξ²)] / [r_2^{N−2}(r_1 − r_2)].

The two resulting sets of coefficients, c^L(ξ) = [c_1^L(ξ), c_2^L(ξ)]^T and c^R(ξ) = [c_1^R(ξ), c_2^R(ξ)]^T, lead to the same eigenvector v in (7) if, and only if, v_0c^L(ξ) = v_Nc^R(ξ) ≠ 0 for some ξ ∈ R. This implies that the existence of the eigenvectors for some fixed eigenvalue iξ is restricted by the linear dependence of the vectors c^L(ξ) and c^R(ξ), i.e.

det [ c_1^L(ξ)   c_1^R(ξ) ]
    [ c_2^L(ξ)   c_2^R(ξ) ]  =  0    ⇔    1 = − (r_2/r_1)^{N−2} · [iξ + r_2(1 − 2ξ²)] / [iξ + r_1(1 − 2ξ²)].    (10)

Vice versa, the linear independence of the coefficients can be used to exclude the existence of eigenvectors with v_0 = v_N = 0 for higher order approximations, thus proving Re(µ) > 0. However, before we move on to these problems, we complete the eigenvalue analysis of P^{-1}S in (6) by further studying (10).

3.2. Computation of the spectrum

By substituting r_1 = iξ + √(1 − ξ²), r_2 = iξ − √(1 − ξ²) and recalling that r_1r_2 = −1, we can rewrite the right hand-side of (10) as

(r_2/r_1)^{N−2} · [1 − 2ξ² − 2iξ√(1 − ξ²)] / [1 − 2ξ² + 2iξ√(1 − ξ²)] = (r_2/r_1)^{N−2}(r_2/r_1)² = (−r_2²)^N.

The equation (−r_2²)^N = 1 can be solved by writing 1 in complex form as e^{2iπm}, with m ∈ N. In particular, we can write −r_2² = e^{2iπm/N}, or equivalently r_2² = e^{(2m/N + 1)iπ}, for m = 0, . . . , N. Taking the square root of both sides yields

r_2 = e^{(m/N + 1/2)iπ} = cos((m/N + 1/2)π) + i sin((m/N + 1/2)π),  m = 0, . . . , N.

Figure 1: Computed and theoretical eigenvalues of P^{-1}S for the second order accurate finite difference approximation with N = 10 and N = 25.

This expression implies that r_2 must lie on the unit circle. On the other hand, the closed form expression for this complex number, r_2 = iξ − √(1 − ξ²), indicates that r_2 has a modulus equal to one only if −1 ≤ ξ ≤ 1. Under this condition, we find that the imaginary part of r_2 is also equal to ξ, yielding

ξ = sin((m/N + 1/2)π),  m = 0, . . . , N.    (11)

These N + 1 values of ξ uniquely determine the spectrum of P^{-1}S.
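The comparison shown in Figure 1 can be reproduced as follows (Python with NumPy assumed; the matrix is assembled directly from (6)):

```python
import numpy as np

# Eigenvalues of P^{-1}S in (6) versus the closed form (11).
N = 25
M = np.zeros((N + 1, N + 1))
M[0, 1] = 1.0; M[N, N - 1] = -1.0
for k in range(1, N):
    M[k, k - 1], M[k, k + 1] = -0.5, 0.5

computed = np.sort(np.linalg.eigvals(M).imag)
theory = np.sort(np.sin((np.arange(N + 1) / N + 0.5) * np.pi))
assert np.abs(np.linalg.eigvals(M).real).max() < 1e-10  # purely imaginary
assert np.allclose(computed, theory)                    # matches (11)
```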

This conclusion seems to contradict the assumption of single roots, for which ξ ≠ ±1, since these values can be obtained from (11) for m ∈ {0, N}. However, repeating the argument above for the double root case (ξ = ±1), which gives v_k = (c_1(ξ) + c_2(ξ)k)r_1^k for k = 1, . . . , N − 1, leads to

c_1^L(ξ) = 1,  c_2^L(ξ) = 0,  c_1^R(ξ) = e^{−iπN/2},  c_2^R(ξ) = 0.

These coefficients are compatible with the single roots ansatz (since the term kr_1^k vanishes), and hence the double root analysis leads to the same closed form of ξ as in (11).

Remark 3.2. The eigenvectors of P^{-1}S in (6) do not satisfy v_0 = v_N = 0, since substituting v_0 = 0 into (8) and v_N = 0 into (9) gives v = 0. Hence the operator D has eigenvalues µ with positive real parts due to Lemma 2.4 (as previously proved in [4]).

In Figure 1, the spectrum of P^{-1}S is shown for different N. Note that the closed form of the eigenvalues iξ agrees with the eigenvalues computed numerically. Moreover, all the values of ξ are bounded by the double root conditions ξ = ±1.

3.3. Summarizing the proof procedure

The analysis made in this section helps us to design a general procedure in order to prove the positivity of Re(µ) also for higher order discretizations. Rather than computing the eigenpairs of P^{-1}S in closed form, we will focus on the coefficients c_i(ξ) that define the interior components of the eigenvectors. By solving explicitly the first and last equations of P^{-1}Sv = iξv, we can find two sets of coefficients c^L(ξ), c^R(ξ) that fit the left and right boundary equations, respectively. As shown for the second order case, the existence of the eigenvectors depends on the linear dependence of the vectors c^L(ξ), c^R(ξ) for all the eigenvalues iξ.

Similarly, the existence of eigenvectors v with first and last component equal to zero is restricted by the linear dependence of the vectors c^L(ξ), c^R(ξ) that are compatible with the constraints v_0 = v_N = 0. In particular, if these constraints give rise to c^L(ξ), c^R(ξ) that are provably linearly independent for every eigenvalue iξ and dimension N, then Re(µ) > 0 follows due to Theorem 2.3. Of course it is not feasible to check the linear dependence for all the eigenvalues of P^{-1}S, since it would require an a-priori knowledge of the spectrum itself. However, that is not necessary since the Gershgorin theorem yields a bound on viable values of ξ.

Thus, to prove that no eigenvector with v_0 = v_N = 0 exists for a discretization with a repeating stencil, the condition of linear dependence will be analyzed as a function of the ξ parameter, which is bounded in a closed interval I_ξ (given by the Gershgorin theorem). Forcing the linear dependence of the vectors c^L(ξ), c^R(ξ) yields an equation which may be solved for the dimension of the operator, N. If no ξ ∈ I_ξ exists for which N is an integer, then no eigenvector of P^{-1}S with first and last component equal to zero can exist and we can conclude that Re(µ) > 0 due to Theorem 2.3. When the constraint of linear dependence can not be solved analytically for N, the condition will be checked numerically (see section 5).

Below, we summarize the procedure for the 2qth-order discretization.

1. Determine the interval I_ξ in which ξ can be bounded by the Gershgorin theorem.

2. Solve the characteristic equation associated with the internal stencil relation for the roots r_j(ξ), j = 1, . . . , 2q. Identify the condition for multiple roots in terms of ξ.

3. Assume that the characteristic equation has single roots r_j(ξ), j = 1, . . . , 2q. Solve explicitly the first 3q equations of the eigenvalue problem for v_{2q}, . . . , v_{4q−1} by fixing v_0 = 0 and using v_1, . . . , v_{q−1} as free parameters. By equating these components with

v_k = Σ_{j=1}^{2q} c_j(ξ) r_j^k(ξ),  k = 2q, . . . , 4q − 1,

it is possible to write a 2q × (3q − 1) linear homogeneous system, which is solved by q − 1 linearly independent vectors. The solution of this problem leads to the coefficients of the internal stencil relation, which can be written as a linear combination c^L(ξ) = Σ_{k=1}^{q−1} v_k c^{L,k}(ξ).

4. Repeat the third step by solving the last 3q equations of the eigenvalue problem for v_{N−2q}, . . . , v_{N−4q+1} by fixing v_N = 0 and using v_{N−q+1}, . . . , v_{N−1} as free parameters. The coefficients of the internal stencil relation can be found as a linear combination c^R(ξ) = Σ_{k=1}^{q−1} v_{N−k} c^{R,k}(ξ).

5. If an eigenvector with v_0 = v_N = 0 exists, the two vectors c^L(ξ) and c^R(ξ) must be equal for some (v_1, . . . , v_{q−1}, v_{N−q+1}, . . . , v_{N−1}) ∈ R^{2q−2}. If the vectors c^{L,1}(ξ), . . . , c^{L,q−1}(ξ), c^{R,1}(ξ), . . . , c^{R,q−1}(ξ) are linearly independent for every (ξ, N) ∈ I_ξ × N, then D has eigenvalues with strictly positive real parts due to Lemma 2.4.

6. Repeat the third, fourth and fifth step for the double-roots case.

4. The fourth order approximation

The matrix P^{-1}S for the fourth order approximation is given by [2]

           [   0      59/34   −4/17    −3/34                            ]
           [ −1/2     0        1/2      0                               ]
P^{-1}S =  [  4/43   −59/86    0        59/86   −4/43                   ]    (12)
           [  3/98    0       −59/98    0        32/49   −4/49          ]
           [          1/12    −2/3      0        2/3     −1/12          ]
           [                   ⋱        ⋱       ⋱       ⋱       ⋱    ]

The eigenvalues iξ of this matrix satisfy |ξ| ≤ 35/17, due to the Gershgorin theorem. Moreover, the stencil relation for the eigenvalue problem

−(1/12)v_{k+2} + (2/3)v_{k+1} − (2/3)v_{k−1} + (1/12)v_{k−2} = iξv_k,  k = 4, . . . , N − 4,

yields the characteristic equation r⁴ − 8r³ + 12iξr² + 8r − 1 = 0, whose roots r_j(ξ), j = 1, . . . , 4, can be found in closed form.

Since the roots of the characteristic equation are continuous functions of the parameter ξ, it is possible to uniquely identify them by an "initial" condition, i.e. the value that each r_j(ξ) assumes for ξ = 0. With r_1(ξ) we denote the root such that r_1(0) = 4 − √15. The other roots are r_2(0) = −1, r_3(0) = 4 + √15 and r_4(0) = 1 (see Figure 2). The explicit form of the functions r_j(ξ) will not be provided here, since they are neither easy to write down nor particularly useful for the upcoming study. For further details on closed form solutions of quartic equations, see for example [22].

Figure 2: The roots r_j(ξ) of the characteristic equation r⁴ − 8r³ + 12iξr² + 8r − 1 = 0 are shown in the complex plane for ξ ∈ I_ξ = [0, 35/17]. The roots r_j(ξ) are continuous functions of ξ labeled according to the conditions r_1(0) = 4 − √15, r_2(0) = −1, r_3(0) = 4 + √15 and r_4(0) = 1.

We split the analysis in two parts, by considering the single- and double-roots cases separately. In order to increase the readability of the upcoming proofs, we introduce the notations

π_k = ∏_{j=1, j≠k}^{4} (r_k − r_j),    σ_k^{(n)} = Σ_{τ ∈ A_k^{(n)}} ∏_{j∈τ} r_j,    (13)

where A_k^{(n)} is the family of sets of n indices in {1, . . . , 4} not containing k. For example, if k = 3 and n = 2, we get

π_3 = (r_3 − r_1)(r_3 − r_2)(r_3 − r_4),    A_3^{(2)} = {{1, 2}, {1, 4}, {2, 4}},    σ_3^{(2)} = r_1r_2 + r_1r_4 + r_2r_4.

4.1. The single-root case

To start with, we prove

Lemma 4.1. If ξ ≠ ±(1/6)√(9 + 24√6), the eigenvalue problem for (12) is not solved by any eigenpair (iξ, v) with v_0 = v_N = 0.

Proof. If the characteristic equation has single roots (ξ ≠ ±(1/6)√(9 + 24√6)), the general expression of the interior components of the eigenvectors v of P^{-1}S in (12) is

v_k = Σ_{j=1}^{4} c_j(ξ) r_j^k(ξ),  k = 4, . . . , N − 4.    (14)

The structure of P^{-1}S in (12) enables the explicit computation of the first eigenvector components as linear functions of v_0 and v_1, e.g.

v_2 = v_0 + 2iξv_1,
v_3 = (1/3)[−(8 + 34iξ)v_0 + (59 − 16iξ)v_1].

The following eigenvector components v_4, v_5, . . . can be computed sequentially by substitution. In particular, the constraint v_0 = 0 leads to v_j = w_j(ξ)v_1, j = 4, . . . , 7, where

w_4(ξ) = (1/6)(826 − 236iξ + 129ξ²),
w_5(ξ) = (1/3)(3304 − 1711iξ + 320ξ²),
w_6(ξ) = (1/3)(25960 − 18510iξ + 1144ξ² − 774iξ³),
w_7(ξ) = (1/3)(204435 − 186800iξ − 11896ξ² − 10032iξ³).

By equating these expressions of v_j with the stencil form (14), we get the rectangular homogeneous system

[ r_1⁴   r_2⁴   r_3⁴   r_4⁴   −w_4(ξ) ] [ c_1(ξ) ]   [ 0 ]
[ r_1⁵   r_2⁵   r_3⁵   r_4⁵   −w_5(ξ) ] [ c_2(ξ) ]   [ 0 ]
[ r_1⁶   r_2⁶   r_3⁶   r_4⁶   −w_6(ξ) ] [ c_3(ξ) ] = [ 0 ]
[ r_1⁷   r_2⁷   r_3⁷   r_4⁷   −w_7(ξ) ] [ c_4(ξ) ]   [ 0 ] .
                                         [ v_1    ]

The nullspace of the matrix has dimension one, since the first 4 × 4 block has the determinant

∏_{k=1}^{4} r_k⁴ · ∏_{k<j} (r_k − r_j) ≠ 0.

The general solution of the system is

[c_1(ξ), c_2(ξ), c_3(ξ), c_4(ξ), v_1]^T = v_1 [c_1^L(ξ), c_2^L(ξ), c_3^L(ξ), c_4^L(ξ), 1]^T,

where the closed form expression for the coefficients c_j^L(ξ), j = 1, . . . , 4 can be computed by using the command NullSpace in MAPLE. In particular, with the notation in (13) we can write

c_j^L(ξ) = − [w_4(ξ)σ_j^{(3)} − w_5(ξ)σ_j^{(2)} + w_6(ξ)σ_j^{(1)} − w_7(ξ)] / [r_j⁴ π_j],  j = 1, . . . , 4.    (15)

In a similar fashion, the last 6 equations of the eigenvalue problem can be used to find explicitly v_{N−j} = w̄_j(ξ)v_{N−1}, j = 4, . . . , 7. The coefficients of the stencil relation (14) that are compatible with these eigenvector components are given by v_{N−1}c^R(ξ), where

c_j^R(ξ) = − [w̄_7(ξ)σ_j^{(3)} − w̄_6(ξ)σ_j^{(2)} + w̄_5(ξ)σ_j^{(1)} − w̄_4(ξ)] / [r_j^{N−7} π_j],  j = 1, . . . , 4.    (16)

Once again, the bar indicates the complex conjugation of the coefficients of the polynomials w_i(ξ).

An eigenvector v with v_0 = v_N = 0 exists if, and only if, it is possible to find (v_1, v_{N−1}) ∈ R² and ξ ∈ R such that v_1c^L(ξ) = v_{N−1}c^R(ξ). This implies that the existence of v is guaranteed if the vectors c^L(ξ), c^R(ξ) are linearly dependent for some eigenvalue iξ, i.e. if

det [ c_i^L(ξ)   c_i^R(ξ) ]
    [ c_j^L(ξ)   c_j^R(ξ) ]  =  0,  ∀ i, j = 1, . . . , 4, i ≠ j.    (17)

It suffices to consider a couple of indices. The linear dependence constraint (17) for i = 1, j = 3 gives rise to the equation

(r_3(ξ)/r_1(ξ))^{N−11} = g(ξ),    (18)

where

g(ξ) = [(w_4σ_1^{(3)} − w_5σ_1^{(2)} + w_6σ_1^{(1)} − w_7)(w̄_7σ_3^{(3)} − w̄_6σ_3^{(2)} + w̄_5σ_3^{(1)} − w̄_4)] / [(w̄_7σ_1^{(3)} − w̄_6σ_1^{(2)} + w̄_5σ_1^{(1)} − w̄_4)(w_4σ_3^{(3)} − w_5σ_3^{(2)} + w_6σ_3^{(1)} − w_7)].

If (18) holds, then the same relation is fulfilled for the absolute values of the left and right hand-sides. In particular, we find

N = log(|g(ξ)|) / log(|r_3(ξ)/r_1(ξ)|) + 11 =: G(ξ).    (19)

Since for the matrix in (12) the Gershgorin theorem states that |ξ| ≤ 35/17, we can evaluate G(ξ) in this interval and find the values ξ for which this function becomes an integer. As can be seen in Figure 3, the only integer value assumed by G(ξ) is 3, which is not compatible with the dimension of a fourth order accurate SBP operator.

Figure 3: Plot of G(ξ) in (19) for ξ ∈ [0, 35/17]. The only integer value assumed by the function is N = 3, which is not compatible with the dimension of a fourth order accurate SBP operator.


Remark 4.2. Due to Lemma 2.6, it suffices to evaluate the function G(ξ) for ξ ∈ [0, 35/17], rather than for ξ ∈ [−35/17, 35/17]. Indeed, if an eigenvector with the property v_0 = v_N = 0 does exist for a fixed ξ, it exists for −ξ as well.

4.2. The double-roots case

To conclude the proof, we must repeat the analysis for the double-root case, i.e. for ξ = ±(1/6)√(9 + 24√6). We prove

Lemma 4.3. If ξ = ±(1/6)√(9 + 24√6), the eigenvalue problem for (12) is not solved by any eigenpair (iξ, v) with v_0 = v_N = 0.

Proof. Due to Lemma 2.6, it suffices to consider ξ = (1/6)√(9 + 24√6). For this ξ value r_2(ξ) = r_4(ξ) (see Figure 2) and we must consider the double-root ansatz

v_k = c_1r_1^k + (c_2 + c_4k)r_2^k + c_3r_3^k,  k = 4, . . . , N − 4.    (20)
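The double-root value can be confirmed numerically (Python with NumPy assumed; ξ* denotes the value in Lemma 4.3):

```python
import numpy as np

# At xi* = (1/6)*sqrt(9 + 24*sqrt(6)) ~ 1.3722 the characteristic quartic
# r^4 - 8r^3 + 12i*xi*r^2 + 8r - 1 has a double root (r2 = r4), while for
# generic xi in [0, 35/17] all four roots are simple.
def min_root_separation(xi):
    r = np.roots([1.0, -8.0, 12j * xi, 8.0, -1.0])
    return min(abs(r[i] - r[j]) for i in range(4) for j in range(i + 1, 4))

xi_star = np.sqrt(9.0 + 24.0 * np.sqrt(6.0)) / 6.0
assert min_root_separation(xi_star) < 1e-3   # two roots coalesce
assert min_root_separation(0.5) > 1e-1       # generic xi: simple roots
```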

As for the single-root analysis, we can find the coefficients that are compatible with the left boundary closure of P^{-1}Sv = iξv by solving the rectangular homogeneous system

[ r_1⁴   r_2⁴   r_3⁴   4r_2⁴   −w_4 ] [ c_1 ]   [ 0 ]
[ r_1⁵   r_2⁵   r_3⁵   5r_2⁵   −w_5 ] [ c_2 ]   [ 0 ]
[ r_1⁶   r_2⁶   r_3⁶   6r_2⁶   −w_6 ] [ c_3 ] = [ 0 ]
[ r_1⁷   r_2⁷   r_3⁷   7r_2⁷   −w_7 ] [ c_4 ]   [ 0 ] .
                                       [ v_1 ]

Also in this case, the nullspace of the matrix has dimension one, since the determinant of the first 4 × 4 block is

r_1⁴ r_2⁹ r_3⁴ (r_1 − r_2)²(r_1 − r_3)(r_2 − r_3)² ≠ 0.

In particular, the problem is solved by v_1c^L, where the closed form expression for the coefficients c_j^L, j = 1, . . . , 4 can be computed by using the command NullSpace in MAPLE. Similarly, it is possible to find the coefficients that fit the right boundary equations of the eigenvalue problem as v_{N−1}c^R.

A necessary and sufficient condition for the existence of an eigenvector v with v_0 = v_N = 0 is that the vectors c^L, c^R are linearly dependent, i.e. that (17) holds. However, the coefficients

c_1^L = − [w_4r_2²r_3 − w_5(r_2² + 2r_2r_3) + w_6(2r_2 + r_3) − w_7] / [r_1⁴(r_1 − r_2)²(r_1 − r_3)],

c_3^L = − [w_4r_2²r_1 − w_5(r_2² + 2r_2r_1) + w_6(2r_2 + r_1) − w_7] / [r_3⁴(r_3 − r_2)²(r_3 − r_1)],

c_1^R = − [w̄_7r_2²r_3 − w̄_6(r_2² + 2r_2r_3) + w̄_5(2r_2 + r_3) − w̄_4] / [r_1^{N−7}(r_1 − r_2)²(r_1 − r_3)],

c_3^R = − [w̄_7r_2²r_1 − w̄_6(r_2² + 2r_2r_1) + w̄_5(2r_2 + r_1) − w̄_4] / [r_3^{N−7}(r_3 − r_2)²(r_3 − r_1)]

have the same formal expression as for the single-root case, provided that r_2 = r_4. Thus, the linear independence of c^L and c^R can be checked by studying Figure 3 at ξ = (1/6)√(9 + 24√6) ≈ 1.3722. Since G((1/6)√(9 + 24√6)) ∉ N, the claim follows.

Remark 4.4. The linear dependence constraint (17) for i = 1, j = 3 takes into account only the coefficients associated to the non-repeated roots. The purpose of Lemma 4.3 is to show that this condition varies continuously with respect to the values ξ ∈ I_ξ = [0, 35/17], despite that two different ansatz are considered. Note that the determinant in (17) for i, j ∈ {2, 4} can not be a continuous function of ξ ∈ I_ξ, because at ξ = (1/6)√(9 + 24√6) the numerators of c_i^{L,R}(ξ) in (15), (16) do not vanish, while π_2 = π_4 = 0.

For the sixth-order approximations, we will verify and mention the continuity of the linear dependence constraint without stating a dedicated lemma.

Due to Theorem 2.4, Lemma 4.1 and Lemma 4.3 imply

Theorem 4.5. The fourth order diagonal-norm finite difference SBP-SAT operator D has eigenvalues with strictly positive real parts.

5. The sixth order approximations

The argument used for the fourth order approximation can be generalized to sixth order approximations with repeating stencil. Although the idea to prove the invertibility is the same, there are some additional technicalities one must take care of. To start with, the sixth order approximations with minimal stencil bandwidth are not unique and are defined in terms of a free parameter x [2]. In this case, the matrix S is given by

\[ S = \begin{bmatrix}
0 & s_{0,1} & s_{0,2} & s_{0,3} & s_{0,4} & s_{0,5} & & & \\
-s_{0,1} & 0 & s_{1,2} & s_{1,3} & s_{1,4} & s_{1,5} & & & \\
-s_{0,2} & -s_{1,2} & 0 & s_{2,3} & s_{2,4} & s_{2,5} & & & \\
-s_{0,3} & -s_{1,3} & -s_{2,3} & 0 & s_{3,4} & s_{3,5} & a_1 & & \\
-s_{0,4} & -s_{1,4} & -s_{2,4} & -s_{3,4} & 0 & s_{4,5} & a_2 & a_1 & \\
-s_{0,5} & -s_{1,5} & -s_{2,5} & -s_{3,5} & -s_{4,5} & 0 & a_3 & a_2 & a_1 \\
 & & & -a_1 & -a_2 & -a_3 & 0 & a_3 & a_2 & a_1 \\
 & & & & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots
\end{bmatrix} \tag{21} \]

where the si,j’s are linear functions of x and a1 = 1/60, a2 = −3/20, a3 = 3/4.

Hence, for sixth order approximations the condition for the existence of the eigenvectors must be studied as a function of both ξ and x.

Remark 5.1. The coefficients s_{i,j}, as well as the matrix P and all the quantities that are not written explicitly in this section, can be found in Appendix A.

Furthermore, the internal stencil relation for the eigenvalue problem

\[ \frac{1}{60} v_{k+3} - \frac{3}{20} v_{k+2} + \frac{3}{4} v_{k+1} - \frac{3}{4} v_{k-1} + \frac{3}{20} v_{k-2} - \frac{1}{60} v_{k-3} = i\xi v_k, \qquad k = 6, \dots, N-6, \]

leads to the sextic equation

\[ r^6 - 9 r^5 + 45 r^4 - 60 i \xi r^3 - 45 r^2 + 9 r - 1 = 0, \]

which cannot be solved in closed form due to the Abel-Ruffini theorem [23]. This implies that in order to prove the statement we must compute the roots numerically for every ξ ∈ Iξ, where Iξ is the closed interval in which ξ can be bounded by the Gershgorin theorem. On the other hand, the width of Iξ may depend on x, making the proof more challenging. However, three common choices for the free parameter are x1 = 342523/518400 [24], x2 = 89387/129600 [2] and x3 = 0.70127127127... (the value for which the spectral radius of P⁻¹Q is minimized [16], [25]), and for all these three values the Gershgorin theorem leads to Iξ = [0, 4.21].
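The quoted bound can be reproduced from the boundary coefficients listed in Appendix A. The sketch below (ours, not the paper's; it assumes the Gershgorin discs of P⁻¹S are centered at the origin, which holds since S has zero diagonal, and x3 is necessarily truncated to the published digits) evaluates the scaled row sums in exact rational arithmetic:

```python
# Gershgorin radii of P^{-1}S for the sixth order operator, built from the
# boundary coefficients s_ij and norm entries p_ii listed in Appendix A.
from fractions import Fraction as F

def gershgorin_radius(x):
    s = {(0, 1): x - F(953, 16200),       (0, 2): -4*x + F(715489, 259200),
         (0, 3): 6*x - F(62639, 14400),   (0, 4): -4*x + F(147127, 51840),
         (0, 5): x - F(89387, 129600),    (1, 2): 10*x - F(57139, 8640),
         (1, 3): -20*x + F(745733, 51840), (1, 4): 15*x - F(18343, 1728),
         (1, 5): -4*x + F(240569, 86400),  (2, 3): 20*x - F(176839, 12960),
         (2, 4): -20*x + F(242111, 17280), (2, 5): 6*x - F(182261, 43200),
         (3, 4): 10*x - F(165041, 25920),  (3, 5): -4*x + F(710473, 259200),
         (4, 5): x}
    a = [F(1, 60), F(-3, 20), F(3, 4)]    # interior coefficients a1, a2, a3
    p = [F(13649, 43200), F(12013, 8640), F(2711, 4320),
         F(5359, 4320), F(7877, 8640), F(43801, 43200)]
    radii = []
    for i in range(6):
        row = sum(abs(s[min(i, j), max(i, j)]) for j in range(6) if j != i)
        row += sum(abs(a[m]) for m in range(max(0, i - 2)))  # trailing a-entries
        radii.append(row / p[i])
    radii.append(2 * sum(abs(v) for v in a))  # interior rows, where p_ii = 1
    return max(radii)

x1, x2 = F(342523, 518400), F(89387, 129600)
x3 = F(70127127127, 10**11)               # truncation of x3 = 0.70127127127...
assert all(gershgorin_radius(x) <= F(421, 100) for x in (x1, x2, x3))
assert gershgorin_radius(x1) > F(42, 10)  # the bound 4.21 is essentially sharp for x1
```

For x = x1 the largest disc comes from the first row and is just below 4.21, which explains why the common interval Iξ = [0, 4.21] is quoted for all three parameter choices.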

Remark 5.2. The parameter x = x3 = 0.70127127127... minimizes the spectral radius of P⁻¹S for sixth order approximations on SBP form, but its exact value is as yet unpublished [25].

Limiting the analysis to x ∈ {x1, x2, x3}, the roots of the characteristic equation are shown in Figure 4.


Figure 4: The roots r_j(ξ) of the characteristic equation r^6 − 9r^5 + 45r^4 − 60iξr^3 − 45r^2 + 9r − 1 = 0 are shown in the complex plane for ξ ∈ Iξ = [0, 4.21]. The roots are ordered, for any fixed ξ, in ascending order by their modulus. If the moduli of two roots match, the roots are ordered in ascending order by their real part. With these criteria, r_j(ξ) are continuous functions of ξ. For ξ = 0, we get r1(0) ≈ 0.0995003 − 0.1127423i, r2(0) ≈ 0.0995003 + 0.1127423i, r3(0) = −1, r4(0) = 1, r5(0) ≈ 4.400500 − 4.986139i, r6(0) ≈ 4.400500 + 4.986139i.
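At ξ = 0 the sextic is exactly solvable, which gives an independent check of the values quoted in the caption: the polynomial factors as (r − 1)(r + 1)(r⁴ − 9r³ + 46r² − 9r + 1), and the palindromic quartic reduces to two quadratics via s = r + 1/r. A stdlib-only sketch (ours; tolerances chosen by us):

```python
# Closed-form roots of the characteristic equation at xi = 0.
import cmath

# verify the factorization (r^2 - 1)(r^4 - 9r^3 + 46r^2 - 9r + 1)
coeffs = [0] * 7
for i, a in enumerate([1, 0, -1]):              # r^2 - 1
    for j, b in enumerate([1, -9, 46, -9, 1]):  # palindromic quartic
        coeffs[i + j] += a * b
assert coeffs == [1, -9, 45, 0, -45, 9, -1]     # sextic coefficients at xi = 0

roots = [1.0 + 0j, -1.0 + 0j]
for s in ((9 + 1j * cmath.sqrt(95)) / 2, (9 - 1j * cmath.sqrt(95)) / 2):
    d = cmath.sqrt(s * s - 4)                   # from r^2 - s*r + 1 = 0
    roots += [(s + d) / 2, (s - d) / 2]

roots.sort(key=lambda r: (abs(r), r.real, r.imag))
# smallest root matches the quoted r1(0)/r2(0) values (up to conjugation)
assert min(abs(roots[0] - (0.0995003 - 0.1127423j)),
           abs(roots[0] - (0.0995003 + 0.1127423j))) < 1e-5
# largest root matches the quoted r5(0)/r6(0) values (up to conjugation)
assert min(abs(roots[5] - (4.400500 + 4.986139j)),
           abs(roots[5] - (4.400500 - 4.986139j))) < 1e-3
# the smallest and largest roots have reciprocal moduli
assert abs(abs(roots[0]) * abs(roots[5]) - 1.0) < 1e-12
```

Each quadratic r² − sr + 1 = 0 has roots with product one, which is why the small and large root pairs have reciprocal moduli.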

A further additional difficulty compared to the fourth order case is due to the polynomial

\[ \psi_x = -1728 \left( 64900362240000\, x^2 - 99055767153600\, x + 37572249231809 \right), \]

which determines the form of the coefficients c_j(ξ) in the single-root ansatz

\[ v_k = \sum_{j=1}^{6} c_j(\xi)\, r_j^k(\xi), \qquad k = 6, \dots, N-6. \tag{22} \]

In particular, the candidate eigenvector may have a different form depending on whether ψ_x is different from or equal to zero, i.e. whether x = x* = 764319191/1001548800 ± √3467141604054577/1001548800 is fulfilled or not. However, the critical values x* ∉ {x1, x2, x3}, and as a result we will henceforth consider the case ψ_x ≠ 0.
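As a sanity check of the ansatz (22) (ours, not the paper's; for simplicity at ξ = 0, where the sextic happens to factor in closed form), any linear combination of powers of the six characteristic roots satisfies the interior stencil relation:

```python
# The interior relation is satisfied by v_k = sum_j c_j r_j^k for any c_j,
# illustrated at xi = 0 with the exactly known roots and arbitrary coefficients.
import cmath

xi = 0.0
roots = [1.0 + 0j, -1.0 + 0j]
for s in ((9 + 1j * cmath.sqrt(95)) / 2, (9 - 1j * cmath.sqrt(95)) / 2):
    d = cmath.sqrt(s * s - 4)
    roots += [(s + d) / 2, (s - d) / 2]

c = [0.3 - 0.1j, -0.2 + 0.4j, 0.7 + 0j, 0.1 + 0.1j, -0.5 + 0.2j, 0.25 - 0.3j]
v = lambda k: sum(cj * rj**k for cj, rj in zip(c, roots))

for k in range(6, 10):
    lhs = (v(k + 3) / 60 - 3 * v(k + 2) / 20 + 3 * v(k + 1) / 4
           - 3 * v(k - 1) / 4 + 3 * v(k - 2) / 20 - v(k - 3) / 60)
    assert abs(lhs - 1j * xi * v(k)) < 1e-8 * max(1.0, abs(v(k)))
```

This is exactly what makes (22) a valid representation of the eigenvector between the boundary closures: each power sequence r_j^k solves the interior recurrence by construction.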

In order to increase the readability of the upcoming proofs, we generalize the notation (13) to six indices as

\[ \pi_k = \prod_{\substack{j=1 \\ j \neq k}}^{6} (r_k - r_j), \qquad \sigma_k^{(n)} = \sum_{\tau \in A_k^{(n)}} \prod_{j \in \tau} r_j, \]

where A_k^{(n)} is the family of sets of n indices in {1, …, 6} not containing k. For example, if k = 3 and n = 4, we get

\[ \pi_3 = (r_3 - r_1)(r_3 - r_2)(r_3 - r_4)(r_3 - r_5)(r_3 - r_6), \]
\[ A_3^{(4)} = \{ \{1,2,4,5\},\ \{1,2,4,6\},\ \{1,2,5,6\},\ \{1,4,5,6\},\ \{2,4,5,6\} \}, \]
\[ \sigma_3^{(4)} = r_1 r_2 r_4 r_5 + r_1 r_2 r_4 r_6 + r_1 r_2 r_5 r_6 + r_1 r_4 r_5 r_6 + r_2 r_4 r_5 r_6. \]
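The definitions can be exercised with the standard library's combinatorics, reproducing the worked example for k = 3, n = 4 (arbitrary numeric root values, ours):

```python
# sigma_k^(n): elementary symmetric sum over n-subsets of {1,...,6} avoiding k;
# pi_k: product of differences. Checked against the example k = 3, n = 4.
from itertools import combinations
from math import prod

r = {j: val for j, val in zip(range(1, 7), (0.3, 1.2, -0.7, 2.1, 0.5, -1.4))}

k, n = 3, 4
idx = [j for j in range(1, 7) if j != k]
sigma = sum(prod(r[j] for j in tau) for tau in combinations(idx, n))
pi_k = prod(r[k] - r[j] for j in idx)

expected = (r[1]*r[2]*r[4]*r[5] + r[1]*r[2]*r[4]*r[6] + r[1]*r[2]*r[5]*r[6]
            + r[1]*r[4]*r[5]*r[6] + r[2]*r[4]*r[5]*r[6])
assert len(list(combinations(idx, n))) == 5          # |A_3^(4)| = 5
assert abs(sigma - expected) < 1e-12
assert abs(pi_k - (r[3]-r[1])*(r[3]-r[2])*(r[3]-r[4])*(r[3]-r[5])*(r[3]-r[6])) < 1e-12
```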

5.1. The proof for sixth order approximations

Before discussing the invertibility of D for x ∈ {x1, x2, x3}, we first outline

the argument for any x ∈ R. Consider the eigenvalue problem P−1Sv = iξv with S in (21) and v0 = vN = 0. We start by studying the case in which the

characteristic equation has single roots, thus

\[ \xi \neq \xi_\pm^* = \pm \sqrt{ \frac{1}{9} + \frac{3}{2} \sqrt[3]{ \frac{5}{2} + \frac{1}{3} \sqrt{20} } }. \]

By solving sequentially the first 9 equations of the eigenvalue problem with v_1 and v_2 as free parameters, we can obtain the eigenvector components v_3, …, v_11 as linear functions of v_1, v_2. In particular, equating

\[ v_j = w_j(\xi, x)\, v_1 + z_j(\xi, x)\, v_2, \qquad j = 6, \dots, 11, \]

with v_j in (22) yields the 6 × 8 homogeneous linear system

\[ \begin{bmatrix}
r_1^6 & r_2^6 & r_3^6 & r_4^6 & r_5^6 & r_6^6 & -w_6(\xi,x) & -z_6(\xi,x) \\
r_1^7 & r_2^7 & r_3^7 & r_4^7 & r_5^7 & r_6^7 & -w_7(\xi,x) & -z_7(\xi,x) \\
r_1^8 & r_2^8 & r_3^8 & r_4^8 & r_5^8 & r_6^8 & -w_8(\xi,x) & -z_8(\xi,x) \\
r_1^9 & r_2^9 & r_3^9 & r_4^9 & r_5^9 & r_6^9 & -w_9(\xi,x) & -z_9(\xi,x) \\
r_1^{10} & r_2^{10} & r_3^{10} & r_4^{10} & r_5^{10} & r_6^{10} & -w_{10}(\xi,x) & -z_{10}(\xi,x) \\
r_1^{11} & r_2^{11} & r_3^{11} & r_4^{11} & r_5^{11} & r_6^{11} & -w_{11}(\xi,x) & -z_{11}(\xi,x)
\end{bmatrix}
\begin{bmatrix} c_1(\xi) \\ c_2(\xi) \\ c_3(\xi) \\ c_4(\xi) \\ c_5(\xi) \\ c_6(\xi) \\ v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}. \]

The rank of the matrix above is 6 since the first 6 × 6 block has a nonzero determinant in the single-root case. As a consequence, the nullspace of the matrix has dimension 2 and the solution of the system can be expressed as the linear combination

\[ c^L(\xi, x) = v_1\, c^{L,1}(\xi, x) + v_2\, c^{L,2}(\xi, x), \]

where the coefficients c^{L,j}_k can be computed with MAPLE as

\[ c^{L,1}_k(\xi, x) = -\frac{1}{r_k^6 \pi_k} \left[ w_6(\xi,x)\, \sigma_k^{(5)} - w_7(\xi,x)\, \sigma_k^{(4)} + w_8(\xi,x)\, \sigma_k^{(3)} - w_9(\xi,x)\, \sigma_k^{(2)} + w_{10}(\xi,x)\, \sigma_k^{(1)} - w_{11}(\xi,x) \right] = \frac{\kappa^{L,1}_k(\xi,x)}{\psi_x r_k^6 \pi_k}, \]

and

\[ c^{L,2}_k(\xi, x) = -\frac{1}{r_k^6 \pi_k} \left[ z_6(\xi,x)\, \sigma_k^{(5)} - z_7(\xi,x)\, \sigma_k^{(4)} + z_8(\xi,x)\, \sigma_k^{(3)} - z_9(\xi,x)\, \sigma_k^{(2)} + z_{10}(\xi,x)\, \sigma_k^{(1)} - z_{11}(\xi,x) \right] = \frac{\kappa^{L,2}_k(\xi,x)}{\psi_x r_k^6 \pi_k}. \]

Similarly, by solving the last 9 equations of the eigenvalue problem sequentially from the bottom with v_{N−2} and v_{N−1} as free parameters, we find the coefficients

\[ c^R(\xi, x) = v_{N-1}\, c^{R,1}(\xi, x) + v_{N-2}\, c^{R,2}(\xi, x), \]

where

\[ c^{R,1}_k(\xi, x) = -\frac{1}{r_k^{N-11} \pi_k} \left[ w_{11}(\xi,x)\, \sigma_k^{(5)} - w_{10}(\xi,x)\, \sigma_k^{(4)} + w_9(\xi,x)\, \sigma_k^{(3)} - w_8(\xi,x)\, \sigma_k^{(2)} + w_7(\xi,x)\, \sigma_k^{(1)} - w_6(\xi,x) \right] = \frac{\kappa^{R,1}_k(\xi,x)}{\psi_x r_k^{N-11} \pi_k}, \]

and

\[ c^{R,2}_k(\xi, x) = -\frac{1}{r_k^{N-11} \pi_k} \left[ z_{11}(\xi,x)\, \sigma_k^{(5)} - z_{10}(\xi,x)\, \sigma_k^{(4)} + z_9(\xi,x)\, \sigma_k^{(3)} - z_8(\xi,x)\, \sigma_k^{(2)} + z_7(\xi,x)\, \sigma_k^{(1)} - z_6(\xi,x) \right] = \frac{\kappa^{R,2}_k(\xi,x)}{\psi_x r_k^{N-11} \pi_k}. \]

Thus, there exists an eigenvector v with v_0 = v_N = 0 if, and only if, it is possible to find v_1, v_2, v_{N−2}, v_{N−1} such that

\[ v_1\, c^{L,1}(\xi, x) + v_2\, c^{L,2}(\xi, x) = v_{N-1}\, c^{R,1}(\xi, x) + v_{N-2}\, c^{R,2}(\xi, x). \]

In other words, the existence of eigenvectors with first and last component equal to zero is subject to the linear dependence of the vectors c^{L,1}(ξ, x), c^{L,2}(ξ, x), c^{R,1}(ξ, x) and c^{R,2}(ξ, x) for some fixed ξ and x. For example, one can consider the indices {1, 2, 5, 6} and study the determinant D(ξ, x, N) of the matrix

\[ \begin{bmatrix}
c^{L,1}_1(\xi,x) & c^{L,2}_1(\xi,x) & c^{R,1}_1(\xi,x) & c^{R,2}_1(\xi,x) \\
c^{L,1}_2(\xi,x) & c^{L,2}_2(\xi,x) & c^{R,1}_2(\xi,x) & c^{R,2}_2(\xi,x) \\
c^{L,1}_5(\xi,x) & c^{L,2}_5(\xi,x) & c^{R,1}_5(\xi,x) & c^{R,2}_5(\xi,x) \\
c^{L,1}_6(\xi,x) & c^{L,2}_6(\xi,x) & c^{R,1}_6(\xi,x) & c^{R,2}_6(\xi,x)
\end{bmatrix}, \]

which is

\[ \frac{1}{\psi_x^4 \pi_1 \pi_2 \pi_5 \pi_6\, r_1^6 r_2^6 r_5^6 r_6^6} \Big[ \big( \kappa^{R,2}_1 \kappa^{R,1}_2 - \kappa^{R,2}_2 \kappa^{R,1}_1 \big) \big( \kappa^{L,2}_5 \kappa^{L,1}_6 - \kappa^{L,2}_6 \kappa^{L,1}_5 \big) (r_1 r_2)^{N-17} \]
\[ - \big( \kappa^{R,2}_1 \kappa^{R,1}_5 - \kappa^{R,2}_5 \kappa^{R,1}_1 \big) \big( \kappa^{L,2}_2 \kappa^{L,1}_6 - \kappa^{L,2}_6 \kappa^{L,1}_2 \big) (r_1 r_5)^{N-17} + \big( \kappa^{R,2}_1 \kappa^{R,1}_6 - \kappa^{R,2}_6 \kappa^{R,1}_1 \big) \big( \kappa^{L,2}_2 \kappa^{L,1}_5 - \kappa^{L,2}_5 \kappa^{L,1}_2 \big) (r_1 r_6)^{N-17} \]
\[ + \big( \kappa^{R,2}_2 \kappa^{R,1}_5 - \kappa^{R,2}_5 \kappa^{R,1}_2 \big) \big( \kappa^{L,2}_1 \kappa^{L,1}_6 - \kappa^{L,2}_6 \kappa^{L,1}_1 \big) (r_2 r_5)^{N-17} - \big( \kappa^{R,2}_2 \kappa^{R,1}_6 - \kappa^{R,2}_6 \kappa^{R,1}_2 \big) \big( \kappa^{L,2}_1 \kappa^{L,1}_5 - \kappa^{L,2}_5 \kappa^{L,1}_1 \big) (r_2 r_6)^{N-17} \]
\[ + \big( \kappa^{R,2}_5 \kappa^{R,1}_6 - \kappa^{R,2}_6 \kappa^{R,1}_5 \big) \big( \kappa^{L,2}_1 \kappa^{L,1}_2 - \kappa^{L,2}_2 \kappa^{L,1}_1 \big) (r_5 r_6)^{N-17} \Big] \]

or, equivalently,

\[ D(\xi, x, N) = \frac{1}{\psi_x^4 \pi_1 \pi_2 \pi_5 \pi_6\, r_1^6 r_2^6 r_5^6 r_6^6} \Big[ \tau_{1,2}(\xi,x)\, (r_1 r_2)^{N-17} + \tau_{1,5}(\xi,x)\, (r_1 r_5)^{N-17} + \tau_{1,6}(\xi,x)\, (r_1 r_6)^{N-17} + \tau_{2,5}(\xi,x)\, (r_2 r_5)^{N-17} + \tau_{2,6}(\xi,x)\, (r_2 r_6)^{N-17} + \tau_{5,6}(\xi,x)\, (r_5 r_6)^{N-17} \Big]. \tag{23} \]

If D (ξ, x, N ) = 0 does not hold for any (ξ, x, N ) ∈ Iξ × R × N, then the

eigenvalues of D have strictly positive real parts due to Lemma 2.4.

In order to prove the result for the double-roots case, it suffices to show that the coefficients c^{L,1}_k(ξ, x), c^{L,2}_k(ξ, x), c^{R,1}_k(ξ, x) and c^{R,2}_k(ξ, x) are the same as for the single-roots analysis for k ∈ {1, 2, 5, 6}. According to the labeling of the roots in Figure 4, at ξ = ξ_+^* we find r_3(ξ_+^*) = r_4(ξ_+^*). Hence, the double-roots ansatz is

\[ v_k = c_1 r_1^k + c_2 r_2^k + (c_3 + c_4 k)\, r_3^k + c_5 r_5^k + c_6 r_6^k, \qquad k = 6, \dots, N-6, \]

which leads to the following homogeneous system for the left boundary:

\[ \begin{bmatrix}
r_1^6 & r_2^6 & r_3^6 & 6 r_3^6 & r_5^6 & r_6^6 & -w_6 & -z_6 \\
r_1^7 & r_2^7 & r_3^7 & 7 r_3^7 & r_5^7 & r_6^7 & -w_7 & -z_7 \\
r_1^8 & r_2^8 & r_3^8 & 8 r_3^8 & r_5^8 & r_6^8 & -w_8 & -z_8 \\
r_1^9 & r_2^9 & r_3^9 & 9 r_3^9 & r_5^9 & r_6^9 & -w_9 & -z_9 \\
r_1^{10} & r_2^{10} & r_3^{10} & 10 r_3^{10} & r_5^{10} & r_6^{10} & -w_{10} & -z_{10} \\
r_1^{11} & r_2^{11} & r_3^{11} & 11 r_3^{11} & r_5^{11} & r_6^{11} & -w_{11} & -z_{11}
\end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \\ c_6 \\ v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}. \]

One can verify that this problem is solved by the same c^{L,1}_k(ξ, x), c^{L,2}_k(ξ, x) as in the single-roots case for k ∈ {1, 2, 5, 6}, which are the indices of the non-repeated roots. Similarly, the coefficients c^{R,1}_k(ξ, x) and c^{R,2}_k(ξ, x) are also unchanged for these indices. Thus, the solvability of D(ξ, x, N) = 0 determines the existence of the eigenvectors with v_0 = v_N = 0 for the double-roots case as well.

5.2. Invertibility analysis for some parameter values

Since the problem D(ξ, x, N) = 0 is not explicitly solvable for N, it appears prohibitive to show the invertibility of D for all the possible sixth order approximations with minimal bandwidth. In fact, one may even conjecture the existence of a parameter x such that the SBP-SAT operator is not invertible for some N. For this reason, we will limit our analysis to the values x ∈ {x1, x2, x3}, which are, to the best of our knowledge, the only ones used in the literature so far.

Remark 5.3. We only consider sixth order approximations with at least one interior stencil relation, hence N ≥ 12.

We show

Theorem 5.4. The sixth order diagonal-norm finite difference SBP-SAT operator D with x = x1 = 342523/518400 has eigenvalues with strictly positive real parts.

Proof. We start by noticing that the expression of D(ξ, x, N) in (23) depends on the products of the roots r1r2, r1r5, r1r6, r2r5, r2r6, r5r6. The moduli of their reciprocals are shown in Figure 5 as functions of ξ ∈ Iξ = [0, 4.21]. For


Figure 5: The moduli of (r1r2)⁻¹, (r1r5)⁻¹, (r1r6)⁻¹, (r2r5)⁻¹, (r2r6)⁻¹, (r5r6)⁻¹ as functions of ξ ∈ Iξ.

any fixed ξ, the magnitude of 1/(r1r2) is significantly greater than the ones of the other reciprocals. In particular, we can write

\[ 44.228 \lesssim \left| \frac{1}{r_1 r_2} \right| \lesssim 69.898, \qquad 0.952 \lesssim \left| \frac{1}{r_1 r_6} \right| \leq 1, \qquad 1 \leq \left| \frac{1}{r_2 r_5} \right| \lesssim 1.051, \qquad 0.014 \lesssim \left| \frac{1}{r_5 r_6} \right| \lesssim 0.023, \tag{24} \]

while |r1r5| = |r2r6| = 1 for all ξ ∈ Iξ. As a consequence, for N ≥ 17 the first term in (23) is significantly larger than the other contributions to D(ξ, x, N), since

\[ |\tau_{1,2}(\xi, x_1)| \gtrsim 2.507 \cdot 10^{87}, \qquad |\tau_{1,5}(\xi, x_1)| \lesssim 3.050 \cdot 10^{75}, \]
\[ |\tau_{1,6}(\xi, x_1)| \lesssim 1.798 \cdot 10^{75}, \qquad |\tau_{2,5}(\xi, x_1)| \lesssim 3.737 \cdot 10^{75}, \]
\[ |\tau_{2,6}(\xi, x_1)| \lesssim 3.050 \cdot 10^{75}, \qquad |\tau_{5,6}(\xi, x_1)| \lesssim 5.935 \cdot 10^{55}, \]

for ξ ∈ Iξ.


The term with 1/(r1r2)^{N−17} has a reduced magnitude as N gets smaller, but it is still dominant for N ≥ 12. Indeed, due to (24), for N = 12 we can write

\[ |\tau_{1,2}(\xi, x_1)\, (r_1 r_2)^{-5}| \gtrsim 1.502 \cdot 10^{78}, \qquad |\tau_{1,5}(\xi, x_1)\, (r_1 r_5)^{-5}| \lesssim 3.050 \cdot 10^{75}, \]
\[ |\tau_{1,6}(\xi, x_1)\, (r_1 r_6)^{-5}| \lesssim 2.299 \cdot 10^{75}, \qquad |\tau_{2,5}(\xi, x_1)\, (r_2 r_5)^{-5}| \lesssim 3.737 \cdot 10^{75}, \]
\[ |\tau_{2,6}(\xi, x_1)\, (r_2 r_6)^{-5}| \lesssim 3.050 \cdot 10^{75}, \qquad |\tau_{5,6}(\xi, x_1)\, (r_5 r_6)^{-5}| \lesssim 1.104 \cdot 10^{65}, \]

for ξ ∈ Iξ.
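The modulus structure of the root products can be spot-checked numerically. The sketch below (ours; stdlib only, tolerances and grid chosen by us) computes the six roots with a plain Durand-Kerner iteration at a few ξ values away from the double-root point, and checks a slightly loosened version of the first bound in (24) together with the reciprocal-moduli pairing of the smallest and largest roots:

```python
# Durand-Kerner root finder for r^6 - 9r^5 + 45r^4 - 60i*xi*r^3 - 45r^2 + 9r - 1
def sextic_roots(xi, iters=600):
    c = [1, -9, 45, -60j * xi, -45, 9, -1]
    def p(z):
        acc = 0j
        for a in c:
            acc = acc * z + a
        return acc
    z = [(0.4 + 0.9j) ** k for k in range(1, 7)]  # standard starting values
    for _ in range(iters):
        znew = []
        for i, zi in enumerate(z):
            denom = 1 + 0j
            for j, zj in enumerate(z):
                if j != i:
                    denom *= zi - zj
            znew.append(zi - p(zi) / denom)
        z = znew
    return sorted(z, key=abs)

for xi in (0.5, 1.0, 2.0, 3.0, 4.0, 4.21):
    r = sextic_roots(xi)
    assert abs(sum(r) - 9) < 1e-6                  # sum of roots equals 9
    assert 44.0 <= 1 / abs(r[0] * r[1]) <= 70.0    # cf. the first bound in (24)
    assert abs(abs(r[0]) * abs(r[5]) - 1) < 1e-6   # extreme moduli are reciprocal
```

The reciprocal pairing follows because the root set of the sextic is closed under r ↦ 1/conj(r), so the multiset of root moduli is closed under inversion.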

Hence, we have proved that D(ξ, x1, N) ≠ 0 for any ξ ∈ Iξ and N ≥ 12. This implies that it is not possible to have eigenvectors v of P⁻¹S with v_0 = v_N = 0 for x = x1 = 342523/518400, and the claim follows due to Lemma 2.4.

In a similar fashion, we prove

Theorem 5.5. The sixth order diagonal-norm finite difference SBP-SAT operator D with x = x2 = 89387/129600 or x = x3 = 0.70127127127... has eigenvalues with strictly positive real parts.

Proof. See Appendices B and C.

6. Final remarks on the strict positivity of the eigenvalues

The spectrum of D is shown in Figure 6 for σ = −1, N ∈ {50, 200}, ∆t = 1/N and several SBP-SAT finite-difference approximations. Note that for 6th-order discretizations the free parameter x leads to spectra with different features.

If x = x1 = 342523/518400, the eigenvalues are more distant from the imaginary axis than for the other x values. On the other hand, this parameter choice also results in two outliers which significantly increase the stiffness of the time discretization operator for large values of N. Two outliers, close to the origin of the complex plane, can be found for x = x2 = 89387/129600 as well. These eigenvalues have a strictly positive real part which increases as N increases. The last parameter choice, x = x3 = 0.70127127127..., yields the spectrum closest to the imaginary axis, but it does not produce outliers.

As predicted by our analysis, the time discretization operator has eigenvalues with strictly positive real parts. This property allows for provably unique and invertible fully-discrete approximations of initial boundary value problems. Another direct consequence of our findings is that the 2nd-, 4th- and 6th-order accurate first derivative SBP operators D are null-space consistent, i.e. their nullspaces consist only of the vector 1 = [1, …, 1]^T. This allows us to conclude that the dual-consistent SBP-SAT time-integration methods based on these operators are stiffly accurate, strongly S-stable and dissipatively stable Runge-Kutta schemes. (See [9] for further details.)


Figure 6: The spectrum of D for σ = −1, N ∈ {50, 200} and ∆t = 1/N is shown for several SBP-SAT finite-difference based approximations.

7. Conclusions

We have identified a necessary and sufficient condition for the existence of eigenvectors to a matrix with repeating stencil. This criterion has been used to investigate the eigenvalue problems associated with SBP-SAT time discretizations. The condition enabled a detailed eigenvalue analysis for finite-difference based second-order approximations.

We have also studied fourth- and sixth-order approximations of energy-stable initial value problems. Strict positivity of the eigenvalues of these formulations has been shown irrespective of the discretization parameter. In particular, the dual consistent choice leads to stiffly accurate, strongly S-stable and dissipatively stable Runge-Kutta time-integration methods. Our approach can be used to prove this result for even higher order approximations with repeating stencil.

8. Acknowledgments

This work was supported by VINNOVA, the Swedish Governmental Agency for Innovation Systems, under contract number 2013-01209.

A. Coefficients and auxiliary polynomials for the sixth order approximation

The coefficients of S in (21) are [2]

s_{0,1} = x − 953/16200,        s_{0,2} = −4x + 715489/259200,   s_{0,3} = 6x − 62639/14400,
s_{0,4} = −4x + 147127/51840,   s_{0,5} = x − 89387/129600,      s_{1,2} = 10x − 57139/8640,
s_{1,3} = −20x + 745733/51840,  s_{1,4} = 15x − 18343/1728,      s_{1,5} = −4x + 240569/86400,
s_{2,3} = 20x − 176839/12960,   s_{2,4} = −20x + 242111/17280,   s_{2,5} = 6x − 182261/43200,
s_{3,4} = 10x − 165041/25920,   s_{3,5} = −4x + 710473/259200,   s_{4,5} = x.

The centrosymmetric matrix P = ∆t · diag(p_{0,0}, p_{1,1}, p_{2,2}, p_{3,3}, p_{4,4}, p_{5,5}, 1, …) has coefficients

p_{0,0} = 13649/43200,  p_{1,1} = 12013/8640,  p_{2,2} = 2711/4320,
p_{3,3} = 5359/4320,    p_{4,4} = 7877/8640,   p_{5,5} = 43801/43200.


The polynomials w_j(ξ, x) and z_j(ξ, x) with j = 6, …, 11 are given by

w_6(ξ, x) = (4/ψ_x)(85968164649369600000 i x²ξ + 34601523051479040000 x²ξ² − 130925704210304486400 iξx − 34514642471761495429 − 52760948337434131200 xξ² + 49521812834468080661 iξ − 60045408811745280000 x² + 19985164192464163800 ξ² + 91347425343271180800 x),

w_7(ξ, x) = (4/ψ_x)(1065056293548195840000 i x²ξ + 413132758434201600000 x²ξ² − 1614594755100447129600 iξx − 412120665819592531057 − 627263956804040582400 xξ² + 608395474600054186485 iξ − 717030630940999680000 x² + 236709838471838695412 ξ² + 1090612145284761715200 x),

w_8(ξ, x) = (4/ψ_x)(5920623284184023040000 i x²ξ + 2443936759048028160000 x²ξ² − 8951692427837922585600 iξx − 2110499233697581321519 − 3693390408357925420800 xξ² + 3365816289578040654493 iξ − 3671907973017108480000 x² + 1388166602591785744486 ξ² + 5584453258245090854400 x),

w_9(ξ, x) = (24/ψ_x)(346015230514790400000 i x²ξ³ − 527609483374341312000 i xξ³ + 260404207587164160000 i x²ξ + 199851641924641638000 iξ³ − 292272196178165760000 x²ξ² − 356607105447971289600 iξx + 473651105596461100800 xξ² + 122756596536943142108 iξ − 98023449664880640000 x² − 18829201299579240547 ξ² + 148233252115973260800 x − 56360803784096215047),


w_10(ξ, x) = (48/ψ_x)(3622732329487564800000 i x²ξ³ − 5510562459204738816000 i xξ³ − 24289959255094149120000 i x²ξ + 2082881581020080848060 iξ³ − 15675513485529784320000 x²ξ² + 36921230363424734726400 iξx + 23856764225763152937600 xξ² − 13942452595064941719821 iξ + 13100048842656890880000 x² − 9019971825478792684617 ξ² − 19927030424828770771200 x + 7529401904282397341322),

w_11(ξ, x) = (24/ψ_x)(74077864148090880000000 i x²ξ³ − 112381601597419193856000 i xξ³ − 477798340514167848960000 i x²ξ + 42380210597670438999940 iξ³ − 325166633349679595520000 x²ξ² + 724562083605692135692800 iξx + 492999042336751076025600 xξ² − 273104932981806697334542 iξ + 234925048347263631360000 x² − 185799169127337922651951 ξ² − 357315653928573599500800 x + 135026783702317505775087),

z_6(ξ, x) = (1/ψ_x)(43695367720796160000 i x²ξ + 15617202862325760000 x²ξ² − 66830266947715891200 iξx − 23296384421857621513 − 23878172962818662400 xξ² + 25387158792819872968 iξ − 40270212002119680000 x² + 9071486710640233680 ξ² + 61481685481182950400 x),

z_7(ξ, x) = (1/ψ_x)(601561273785384960000 i x²ξ + 186465147442790400000 x²ξ² − 905927045191010611200 iξx − 348335310089525885509 − 282923497528708377600 xξ² + 339545385968289956060 iξ − 627123377100718080000 x² + 106779856825842827824 ξ² + 936638354766497356800 x),


z_8(ξ, x) = (1/ψ_x)(3778404388294164480000 i x²ξ + 1103057113756631040000 x²ξ² − 5627137491353217945600 iξx − 2034047000306985377179 − 1658625567833151820800 xξ² + 2088349011720406798302 iξ − 3733369035290542080000 x² + 621071144802388085816 ξ² + 5518863448241712307200 x),

z_9(ξ, x) = (6/ψ_x)(156172028623257600000 i x²ξ³ − 238781729628186624000 i xξ³ + 738682188325355520000 i x²ξ + 90714867106402336800 iξ³ − 180856612393943040000 x²ξ² − 1010214750110777395200 iξx + 302290549192744012800 xξ² + 345221672882333601665 iξ − 854541521679974400000 x² − 123113796918437809636 ξ² + 1192208719892161996800 x − 416258564887890370659),

z_10(ξ, x) = (12/ψ_x)(1635099866018611200000 i x²ξ³ − 2489135270970381696000 i xξ³ − 13815202137502187520000 i x²ξ + 942116186108024654720 iξ³ − 7899560790553313280000 x²ξ² + 20986177758900952089600 iξx + 12020245428086150457600 xξ² − 7923482606911181366279 iξ + 9999335244583895040000 x² − 4546737733818474369172 ξ² − 15093907545333282163200 x + 5664846618108190866030 i),

z_11(ξ, x) = (6/ψ_x)(33434627437854720000000 i x²ξ³ − 50645512722529990656000 i xξ³ − 314802210504889958400000 i x²ξ + 19086633778180219487120 iξ³ − 170462527753646407680000 x²ξ² + 471705802315568322969600 iξx + 256946608933388322278400 xξ² − 175989809585607907698739 iξ + 213800389798992445440000 x² − 96397406768479044336336 ξ² − 318408630433391268403200 x + 118121838276427015819707).


B. Proof of Theorem 5.5 for x = x2 = 89387/129600

We follow the steps of Theorem 5.4. To start with, we observe that also in this case for N ≥ 17 the first term of D(ξ, x2, N) in (23) is dominant, since

\[ |\tau_{1,2}(\xi, x_2)| \gtrsim 7.576 \cdot 10^{77}, \qquad |\tau_{1,5}(\xi, x_2)| \lesssim 6.459 \cdot 10^{73}, \]
\[ |\tau_{1,6}(\xi, x_2)| \lesssim 4.615 \cdot 10^{73}, \qquad |\tau_{2,5}(\xi, x_2)| \lesssim 7.648 \cdot 10^{73}, \]
\[ |\tau_{2,6}(\xi, x_2)| \lesssim 6.459 \cdot 10^{73}, \qquad |\tau_{5,6}(\xi, x_2)| \lesssim 1.210 \cdot 10^{54}, \]

for ξ ∈ Iξ = [0, 4.21]. This results in D(ξ, x2, N) ≠ 0 for any N ≥ 17, and for these values the statement is proved due to Lemma 2.4.

However, unlike in Theorem 5.4, using (24) it can be verified that the first term of (23) ceases to be dominant for N ≤ 15. For such values of N, cancellation errors may occur in the computation of the determinant, leading to untrustworthy results. On the other hand, for N ∈ {12, 13, 14, 15} the claim can be verified numerically. Figure B.7 shows the spectrum of D


Figure B.7: The spectrum of the sixth order approximation D with x = x2 = 89387/129600, σ = −1, N ∈ {12, 13, 14, 15} and ∆t = 1/N.

for the sixth order approximation with x = x2, ∆t = 1/N and σ = −1. Note that the minimum value of Re(µ) is positive for each N, but extremely close to the imaginary axis. This phenomenon appears to be peculiar to the parameter choice x = x2 when N is small.


C. Proof of Theorem 5.5 for x = x3 = 0.70127127127 . . .

As for Theorem 5.4, for N ≥ 17 the term with 1/(r1r2)^{N−17} in (23) is dominant, since

\[ |\tau_{1,2}(\xi, x_3)| \gtrsim 1.576 \cdot 10^{83}, \qquad |\tau_{1,5}(\xi, x_3)| \lesssim 7.559 \cdot 10^{71}, \]
\[ |\tau_{1,6}(\xi, x_3)| \lesssim 6.121 \cdot 10^{71}, \qquad |\tau_{2,5}(\xi, x_3)| \lesssim 9.728 \cdot 10^{71}, \]
\[ |\tau_{2,6}(\xi, x_3)| \lesssim 7.559 \cdot 10^{71}, \qquad |\tau_{5,6}(\xi, x_3)| \lesssim 2.219 \cdot 10^{52}, \]

for ξ ∈ Iξ = [0, 4.21]. This implies that D(ξ, x3, N) ≠ 0 for any N ≥ 17.

The determinant cannot be zero for the smaller values 12 ≤ N < 17 either. Indeed, the term with 1/(r1r2)^{N−17} dominates even for N = 12, the value for which we find

\[ |\tau_{1,2}(\xi, x_3)\, (r_1 r_2)^{-5}| \gtrsim 9.446 \cdot 10^{73}, \qquad |\tau_{1,5}(\xi, x_3)\, (r_1 r_5)^{-5}| \lesssim 7.559 \cdot 10^{71}, \]
\[ |\tau_{1,6}(\xi, x_3)\, (r_1 r_6)^{-5}| \lesssim 7.828 \cdot 10^{71}, \qquad |\tau_{2,5}(\xi, x_3)\, (r_2 r_5)^{-5}| \lesssim 9.728 \cdot 10^{71}, \]
\[ |\tau_{2,6}(\xi, x_3)\, (r_2 r_6)^{-5}| \lesssim 7.559 \cdot 10^{71}, \qquad |\tau_{5,6}(\xi, x_3)\, (r_5 r_6)^{-5}| \lesssim 4.126 \cdot 10^{61}, \]

for ξ ∈ Iξ.

As a result, the determinant D(ξ, x3, N) ≠ 0 for N ≥ 12, ξ ∈ Iξ. Hence, no eigenvectors of P⁻¹S in (21) have first and last component equal to zero if x = x3 = 0.70127127127.... Consequently, the eigenvalues µ of D have strictly positive real parts due to Lemma 2.4.

References

[1] H. O. Kreiss, G. Scherer, Finite element and finite difference methods for hyperbolic partial differential equations, in: C. de Boor (Ed.), Mathematical Aspects of Finite Elements in Partial Differential Equations, Academic Press, 1974.

[2] B. Strand, Summation by Parts for Finite Difference Approximations for d/dx, Journal of Computational Physics, Vol. 110, pp. 47-67, 1994.


[3] M. H. Carpenter, J. Nordström, D. Gottlieb, A stable and conservative interface treatment of arbitrary spatial accuracy, Journal of Computational Physics, Vol. 148, pp. 341-365, 1999.

[4] J. Nordström, T. Lundquist, Summation-by-parts in time, Journal of Computational Physics, Vol. 251, pp. 487-499, 2013.

[5] T. Lundquist, J. Nordström, The SBP-SAT technique for initial value problems, Journal of Computational Physics, Vol. 270, pp. 86-104, 2014.

[6] P. D. Boom, D. W. Zingg, High-Order Implicit Time-Marching Methods Based on Generalized Summation-By-Parts Operators, SIAM Journal on Scientific Computing, Vol. 37(6), pp. A2682-A2709, 2015.

[7] H. Ranocha, Some Notes on Summation by Parts Time Integration Methods, Results in Applied Mathematics, to appear in 2019.

[8] A. A. Ruggiu, J. Nordström, On pseudo-spectral time discretizations in summation-by-parts form, Journal of Computational Physics, Vol. 360, pp. 192-201, 2018.

[9] V. Linders, J. Nordström, S. H. Frankel, Convergence and stability properties of summation-by-parts in time, Tech. Report LiTH-MAT-R-2019/04-SE, Linköping University, Department of Mathematics, 2019.

[10] J. Berg, J. Nordström, Superconvergent functional output for time-dependent problems using finite differences on summation-by-parts form, Journal of Computational Physics, Vol. 231, pp. 6846-6860, 2012.

[11] J. E. Hicken, D. W. Zingg, Dual consistency and functional accuracy: a finite-difference perspective, Journal of Computational Physics, Vol. 256, pp. 161-182, 2014.

[12] J. Nordström, Conservative Finite Difference Formulations, Variable Coefficients, Energy Estimates and Artificial Dissipation, Journal of Scientific Computing, Vol. 29, pp. 375-404, 2006.

[13] T. C. Fisher, M. H. Carpenter, J. Nordström, N. K. Yamaleev, C. Swanson, Discretely conservative finite-difference formulations for nonlinear conservation laws in split form: Theory and boundary conditions, Journal of Computational Physics, Vol. 234, pp. 353-375, 2013.


[14] M. Svärd, On coordinate transformation for summation-by-parts operators, Journal of Scientific Computing, Vol. 20, Issue 1, 2004.

[15] M. Svärd, J. Nordström, On the order of accuracy for difference approximations of initial-boundary value problems, Journal of Computational Physics, Vol. 218, pp. 333-352, 2006.

[16] M. Svärd, K. Mattsson, J. Nordström, Steady-State Computations Using Summation-by-Parts Operators, Journal of Scientific Computing, Vol. 24, pp. 79-95, 2005.

[17] J. Nordström, F. Ghasemi, On the relation between conservation and dual consistency for summation-by-parts schemes, Journal of Computational Physics, Vol. 344, pp. 437-439, 2017.

[18] M. Svärd, J. Nordström, On the convergence rates of energy-stable finite-difference schemes, Tech. Report LiTH-MAT-R-2017/14-SE, Linköping University, Department of Mathematics, 2017.

[19] D. Carlson, B. N. Datta, C. R. Johnson, A semi-definite Lyapunov theorem and the characterization of tridiagonal D-stable matrices, SIAM Journal on Matrix Analysis and Applications, Vol. 3, pp. 293-304, 1982.

[20] D. Carlson, Controllability, Inertia, Stability for Tridiagonal Matrices, Linear Algebra and its Applications, Vol. 56, pp. 207-220, 1984.

[21] R. A. Horn, C. R. Johnson, Matrix Analysis, Second Edition, Cambridge University Press, 2013.

[22] S. L. Shmakov, A universal method of solving quartic equations, International Journal of Pure and Applied Mathematics, Vol. 71(2), pp. 251-259, 2011.

[23] J. M. Howie, Fields and Galois Theory, Springer Undergraduate Math-ematics Series, 2006.

[24] B. Gustafsson, H. O. Kreiss, J. Oliger, Time Dependent Problems and Difference Methods, John Wiley & Sons, Inc., 1995.
