A Subspace Iteration Algorithm for Fredholm Valued Functions

This is the published version of a paper published in Mathematical problems in engineering (Print).

Citation for the original published paper (version of record):

Engström, C., Grubisic, L. (2015)

A Subspace Iteration Algorithm for Fredholm Valued Functions.

Mathematical problems in engineering (Print), 2015: 459895 http://dx.doi.org/10.1155/2015/459895

Access to the published version may require subscription.

N.B. When citing this work, cite the original published paper.

Permanent link to this version:

http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-111101


Research Article

A Subspace Iteration Algorithm for Fredholm Valued Functions

Christian Engström¹ and Luka Grubišić²

¹Department of Mathematics and Mathematical Statistics, Umeå University, MIT-Huset, 90187 Umeå, Sweden

²Department of Mathematics, University of Zagreb, Bijenička 30, 10000 Zagreb, Croatia

Correspondence should be addressed to Luka Grubišić; luka.grubisic@math.hr. Received 2 July 2015; Revised 2 October 2015; Accepted 5 October 2015. Academic Editor: Ruben Specogna.

Copyright © 2015 C. Engström and L. Grubišić. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We present an algorithm for approximating an eigensubspace of a spectral component of an analytic Fredholm valued function.

Our approach is based on numerical contour integration and the analytic Fredholm theorem. The presented method can be seen as a variant of the FEAST algorithm for infinite dimensional nonlinear eigenvalue problems. Numerical experiments illustrate the performance of the algorithm for polynomial and rational eigenvalue problems.

1. Introduction

In this paper we study analytic operator eigenvalue problems defined on an open connected subset $\Omega \subseteq \mathbb{C}$ in a separable Hilbert space $\mathcal{H}$. Throughout this paper we assume that $S : \Omega \to \mathcal{L}(\mathcal{H})$ is an analytic function with values in the space $\mathcal{L}(\mathcal{H})$ of bounded linear operators. A scalar $\lambda \in \Omega$ is called an eigenvalue of $S$ if $S(\lambda)$ is not injective. Hence, the eigenvalue problem is to find $\lambda \in \Omega$ and $u \in \mathcal{H} \setminus \{0\}$ such that

$$S(\lambda)u = 0. \tag{1}$$

Such problems are used, for example, to study the dispersion and damping properties of waves [1–3]. Given a closed contour $\Gamma$, we would like to approximate all the eigenvalues of $S$ inside $\Gamma$ to a sufficient degree of accuracy. In this paper the numerical method is based on contour integrals of the generalized resolvent $S^{-1}$. State-of-the-art results on contour-integration based methods for solving nonlinear matrix eigenvalue problems are presented in [4–6] and the references therein. Results on contour-integration based solution methods for Fredholm valued eigenvalue problems can be found in, for example, [7, 8].
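As a concrete finite dimensional instance of problem (1), the following sketch (numpy; the matrices are illustrative and not taken from the paper) solves a quadratic matrix polynomial $S(\lambda) = A_0 + \lambda A_1 + \lambda^2 A_2$ by companion linearization:

```python
import numpy as np

# Quadratic eigenvalue problem S(lam) u = (A0 + lam*A1 + lam^2*A2) u = 0.
# Illustrative diagonal data, so the eigenvalues are known in closed form.
A0 = np.diag([2.0, 6.0])
A1 = np.diag([-3.0, -5.0])
A2 = np.eye(2)

# Companion linearization: generalized eigenvalues of the pencil (L0, L1),
# L0 = [[0, I], [-A0, -A1]], L1 = [[I, 0], [0, A2]], acting on [u; lam*u].
n = A0.shape[0]
L0 = np.block([[np.zeros((n, n)), np.eye(n)], [-A0, -A1]])
L1 = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), A2]])
lam = np.sort(np.linalg.eigvals(np.linalg.solve(L1, L0)).real)
print(lam)  # roots of lam^2 - 3 lam + 2 and lam^2 - 5 lam + 6: [1, 2, 2, 3]
```

The linearization only serves as a reference solution here; the methods discussed in the paper instead approximate the eigenvalues inside a contour without linearizing the problem.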

The spectrum $\sigma(S)$ of the operator function $S$ is the set of all $\lambda \in \Omega$ such that $S(\lambda)$ is not invertible in $\mathcal{L}(\mathcal{H})$, and the resolvent set is defined as the complement $\rho(S) = \Omega \setminus \sigma(S)$.

For $F \in \mathcal{L}(\mathcal{H})$ we define the operator norm by $\|F\| = \sqrt{\mathrm{spr}(F^*F)}$, where $\mathrm{spr}(\cdot)$ denotes the spectral radius. We call an operator $F \in \mathcal{L}(\mathcal{H})$ a Fredholm operator if the dimensions of its null space $\mathrm{Ker}(F)$ and of the orthogonal complement of its range, $\mathrm{CoKer}(F) = \mathrm{Ran}(F)^{\perp}$, are finite. By $\Phi(\mathcal{H})$ we denote the set of all Fredholm operators on $\mathcal{H}$, and the number $\mathrm{ind}(F) = \dim \mathrm{Ker}(F) - \dim \mathrm{CoKer}(F)$ is called the index of $F \in \Phi(\mathcal{H})$. In what follows we assume that $S(\lambda) \in \Phi(\mathcal{H})$ for all $\lambda \in \Omega$. If in addition the resolvent set of such an $S$ is nonempty, the analytic Fredholm theorem (e.g., [9, Theorem 1.3.1]) implies that the generalized resolvent $z \mapsto S(z)^{-1}$ is finitely meromorphic. This in turn implies that the spectrum $\sigma(S)$ is countable and that the geometric multiplicity of $\lambda$, that is, $\dim \mathrm{Ker}\,S(\lambda)$, is finite. Moreover, the associated Jordan chains of generalized eigenvectors have finite length bounded by the algebraic multiplicity; see [9].

The results of this paper combine matrix techniques from [6] with the specialization of the results from [7] to Hilbert spaces. In particular, we leverage the technique of block operator matrix representations of Fredholm valued operator functions and prove that the convergence rate of the inexact subspace iteration algorithm depends primarily on the spectral properties of the operator function. We also make the case for a problem-dependent choice of the number of integration nodes, based on the clustering of eigenvalues towards the integration contour (cf. [6], where the use of a 16-node Gauss–Legendre integration formula is recommended). To assess the quality of a computed eigenpair, rank-one perturbations of the operator function are studied. In particular, we construct a perturbation based on the residual functional and estimate the approximation errors by estimating the norm of the residual with an auxiliary subspace technique. Our algorithm consists of an inexact subspace iteration for the zeroth moment of the resolvent, which constructs the approximate eigenspace for the eigenvalues contained inside a contour $\Gamma$. We then use the moment method of [4, 7] to extract information on the eigenvalues from the computed approximate eigenspace. As the convergence criterion we use a hierarchical residual estimate. If the convergence criterion is not satisfied, the procedure is repeated. This structure is mirrored directly in the structure of the paper and is presented as Algorithm 2.

The paper is organized as follows. In Section 2 we establish a criterion, based on the residual norm, for assessing approximations of simple eigenvalues. In Section 3 we present an inexact subspace iteration algorithm based on contour integration and prove that its convergence rate essentially depends on the properties of the operator function. It is shown that the influence of the integration formula diminishes exponentially with the number of integration nodes.

Finally, in Section 4 we present numerical experiments.

2. Notation and Basic Analytic Results

In this section we present the machinery of quasi-matrices from [10–12]. In particular we present basic results on the angles between finite dimensional subspaces of a Hilbert space in terms of quasi-matrix notation. Finally, we will prove an error estimation result for simple eigenvalues of a Fredholm valued function.

A quasi-matrix is a bounded linear operator $V$ from the finite dimensional space $\mathbb{C}^r$ to an (in general) infinite dimensional Hilbert space $\mathcal{H}$. The product $V^*V$ then denotes the Gram matrix

$$(V^*V)_{ij} = (Ve_i, Ve_j), \quad i,j = 1,2,\ldots,r, \tag{2}$$

which depends on the inner product $(\cdot,\cdot)$ of $\mathcal{H}$. Let $P_1$, $P_2$, $Q_1$, and $Q_2$ be orthogonal projections such that $P_1 \oplus P_2 = I$ and $Q_1 \oplus Q_2 = I$, where $I$ is the identity operator on $\mathcal{H}$. Furthermore, let $\mathcal{H}_i := P_i\mathcal{H}$ and $\widehat{\mathcal{H}}_i := Q_i\mathcal{H}$, $i = 1,2$, and identify $\mathcal{H}$ with the two spaces $\mathcal{H}_1 \oplus \mathcal{H}_2$ and $\widehat{\mathcal{H}}_1 \oplus \widehat{\mathcal{H}}_2$ by isomorphism. Let $B \in \mathcal{L}(\mathcal{H})$ and take

$$u = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} \in \mathcal{H}_1 \oplus \mathcal{H}_2 := \mathcal{H}. \tag{3}$$

Then $Q_iBu = B_{i1}u_1 + B_{i2}u_2$, $i = 1,2$, where $B_{ij} : \mathcal{H}_j \to \widehat{\mathcal{H}}_i$ is defined by restricting $B_{ij} := Q_iBP_j$ to the appropriate spaces. Hence, the bounded operator $B$ has a block operator matrix representation $B : \mathcal{H}_1 \oplus \mathcal{H}_2 \to \widehat{\mathcal{H}}_1 \oplus \widehat{\mathcal{H}}_2$ in terms of the $B_{ij}$, and $Bu$ is computed following the rules of matrix algebra:

$$Bu = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} B_{11}u_1 + B_{12}u_2 \\ B_{21}u_1 + B_{22}u_2 \end{bmatrix}. \tag{4}$$

The multiplication of two block operator matrices also follows the rules of matrix algebra. Represent, for example, $A : \mathcal{H} \to \widetilde{\mathcal{H}}$ by $A = [A_1\ A_2]$, where $\mathcal{H} = \mathcal{H}_1 \oplus \mathcal{H}_2$, $A_1 : \mathcal{H}_1 \to \widetilde{\mathcal{H}}$, and $A_2 : \mathcal{H}_2 \to \widetilde{\mathcal{H}}$. Then the operator $AB : \mathcal{H} \to \widetilde{\mathcal{H}}$ has the block representation

$$AB = \begin{bmatrix} A_1B_{11} + A_2B_{21} & A_1B_{12} + A_2B_{22} \end{bmatrix}. \tag{5}$$

Here we have, for the purpose of the example, taken the trivial partition of unity $Q_1 = I$ and $Q_2 = 0$, and we assume that both $P_1 \neq 0$ and $P_2 \neq 0$. This illustrates the flexibility we have in choosing block operator matrix representations for bounded operators. For more details on this construction see the recent book by C. Tretter [13]. To make the paper more readable we will use $I_r$ to denote the identity operator on the finite dimensional space $\mathbb{C}^r$, and $I$ will denote the identity operator on $\mathcal{H}$.

Let $P_1 \oplus P_2 = I$ be given such that $\dim \mathrm{Ran}(P_1) = r$. Let $V$ be a unitary operator such that $V = [V_1\ V_2]$ with $\mathrm{Ran}(P_1) = \mathrm{Ran}(V_1)$ and $\mathrm{Ran}(P_2) = \mathrm{Ran}(V_2)$. Note that in this setting we have $P_1 = V_1V_1^*$. For a quasi-matrix $X : \mathbb{C}^r \to \mathcal{H}$ such that $X^*X = I_r$ we can write

$$X = V_1C + V_2S, \tag{6}$$

where $C = V_1^*X$ and $S = V_2^*X$. With this notation we compute

$$I_r = X^*X = C^*C + S^*S, \tag{7}$$

and so $\|S\| \le 1$ and $\|C\| \le 1$. Furthermore, note that $Q = XX^*$ is the orthogonal projection onto $\mathrm{Ran}(X)$, and so from [14, Theorem 2] it follows that

$$\|P_1 - Q\| = \|V_1V_1^* - XX^*\| = \left\| \begin{bmatrix} I_r - CC^* & -CS^* \\ -SC^* & -SS^* \end{bmatrix} \right\| = \|S\|. \tag{8}$$

The last identity has been established by spectral calculus using the fact that $\dim \mathrm{Ran}(P_1) = r$; see [15]. Let $\|S\| < 1$; then we define the unique number $\theta \in [0, \pi/2)$ such that

$$\sin\theta = \|S\|. \tag{9}$$

We call $\theta$ the maximal canonical angle between the spaces $\mathrm{Ran}(X)$ and $\mathrm{Ran}(P_1) = \mathrm{Ran}(V_1)$. Since $\|S\| < 1$, (7) implies that $C^*C$ is a positive definite matrix, and therefore $C$ must be invertible. By direct computation using spectral calculus, as in [15], we establish that

$$\cos\theta = \|C^{-1}\|^{-1}, \qquad \tan\theta = \|SC^{-1}\|. \tag{10}$$

When we are considering several pairs of quasi-matrices $V_1$ and $X$ we will write $\theta(V_1, X)$ to denote the maximal canonical angle between the subspaces $\mathrm{Ran}(V_1)$ and $\mathrm{Ran}(X)$. This definition can be extended to subspaces of different dimensions using pairs of orthogonal projections and their singular value decompositions [14, 15]. In this case we call the norm of the difference of the two projections the gap distance between the subspaces; see [14, 15].
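In finite dimensions the relations (6)–(10) can be verified directly with the singular value decomposition. A minimal sketch with randomly generated (illustrative) subspaces of $\mathbb{C}^n$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 3

# Orthonormal bases: V = [V1 V2] unitary, X with X*X = I_r.
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
V1, V2 = V[:, :r], V[:, r:]
X, _ = np.linalg.qr(rng.standard_normal((n, r)))

C = V1.conj().T @ X                 # C = V1* X, as in (6)
S = V2.conj().T @ X                 # S = V2* X
sin_theta = np.linalg.norm(S, 2)    # (9): sin(theta) = ||S|| (largest s.v. of S)
cos_theta = 1.0 / np.linalg.norm(np.linalg.inv(C), 2)  # smallest s.v. of C

# (8): the gap ||P1 - Q|| equals ||S|| since both subspaces have dimension r.
P1 = V1 @ V1.conj().T
Q = X @ X.conj().T
print(sin_theta, cos_theta, np.linalg.norm(P1 - Q, 2))
```

SciPy's `scipy.linalg.subspace_angles` computes all canonical angles at once; the explicit computation above simply mirrors the quasi-matrix formulas.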


2.1. Application of Abstract Results to Operators Defined by Sesquilinear Forms. The abstract Fredholm analytic theorem is stated for bounded operators between Hilbert spaces.

Since we are interested in finite element computations, our problems will always be stated in variational form, which entails working with unbounded sectorial operators.

See [16] for definitions and the terminology relating to unbounded sectorial operators and forms. Below we will formulate the Fredholm analytic theorem in this setting.

Let $\mathfrak{t}_0$ be a densely defined closed symmetric form in $\mathcal{H}$ which is semibounded from below with a strictly positive lower bound; see [16, Section VI.2]. We will call such quadratic forms positive definite forms and use $\mathrm{Dom}(\mathfrak{t}_0) \subset \mathcal{H}$ to denote the domain of definition. To simplify the notation we will write $\mathcal{V} := \mathrm{Dom}(\mathfrak{t}_0)$ in the rest of the paper. Additionally, let $\mathfrak{c}_i$, $i = 1,\ldots,d$, be a sequence of sesquilinear (not necessarily symmetric) forms which are relatively compact [16] with respect to $\mathfrak{t}_0$. Moreover, assume that $f_i(\cdot)$, $i = 1,\ldots,d$, is a sequence of scalar analytic functions in $\Omega$. Define the family of sesquilinear forms

$$\mathfrak{t}(z)[\phi,\psi] := \mathfrak{t}_0[\phi,\psi] + \sum_{i=1}^{d} f_i(z)\,\mathfrak{c}_i[\phi,\psi], \quad \phi,\psi \in \mathcal{V},\ z \in \Omega. \tag{11}$$

In a variationally posed eigenvalue problem we seek a scalar $\lambda \in \Omega$ and a vector $\phi \in \mathcal{V} \setminus \{0\}$ such that

$$\mathfrak{t}(\lambda)[\phi,\psi] = 0, \quad \psi \in \mathcal{V}. \tag{12}$$

For this variational formulation we construct a representation by an operator-valued function $T(\cdot)$ and then apply the analytic Fredholm theorem to establish the structure of the spectrum.

Let $T_0$, with $\mathrm{Dom}(T_0^{1/2}) = \mathcal{V}$, be the self-adjoint positive definite operator which represents the form $\mathfrak{t}_0$ in the sense of Kato's second representation theorem; see [16, Theorem VI.2.23]. Let $K_i$ be defined by

$$(K_iu, v) = \mathfrak{c}_i[T_0^{-1/2}u, T_0^{-1/2}v], \quad u,v \in \mathcal{H}. \tag{13}$$

The operators $K_i$, $i = 1,\ldots,d$, are obviously compact, and for $z \in \Omega \setminus \sigma(T)$ the value of the generalized resolvent is the operator

$$T^{-1}(z) = T_0^{-1/2}\Bigl(I + \sum_{j=1}^{d} f_j(z)K_j\Bigr)^{-1} T_0^{-1/2}. \tag{14}$$

Here $T(z)$ is the unbounded sectorial operator with domain $\mathrm{Dom}(T(z))$ such that

$$(T(z)\phi, \psi) = \mathfrak{t}_0[\phi,\psi] + \sum_{i=1}^{d} f_i(z)\,\mathfrak{c}_i[\phi,\psi], \quad \phi \in \mathrm{Dom}(T(z)),\ \psi \in \mathcal{V}. \tag{15}$$

Obviously the operator-valued function

$$S(z) = I + \sum_{j=1}^{d} f_j(z)K_j, \quad z \in \Omega \tag{16}$$

satisfies the requirements of the analytic Fredholm theorem, and $\sigma(S) = \sigma(T)$. Let us note that we will use

$$(T'(z)\phi, \psi) = \sum_{i=1}^{d} f_i'(z)\,\mathfrak{c}_i[\phi,\psi], \quad \phi \in \mathrm{Dom}(T'(z)),\ \psi \in \mathcal{V}, \tag{17}$$

to define the derivative of an operator-valued analytic function. We can now define the notion of a semisimple eigenvalue.

Definition 1. Let $T$ be as in (15) and let $\mu \in \sigma(T)$ be an eigenvalue. The eigenvalue $\mu$ is semisimple if for each $\phi \in \mathrm{Ker}(T(\mu)) \setminus \{0\}$ there is a $\psi \in \mathrm{Ker}(T(\mu)^*)$ such that

$$(T'(\mu)\phi, \psi) \neq 0. \tag{18}$$

If $\dim \mathrm{Ker}(T(\mu)) = 1$, then $\mu$ is called a simple eigenvalue.

Note that in the quasi-matrix notation we will freely write

$$(T'(\mu)\phi, \psi) = \psi^*T'(\mu)\phi. \tag{19}$$

To this end we identify the vectors with mappings $\psi : \mathbb{C} \to \mathcal{H}$.

With these conventions we state, informally, the generalized argument principle proved by Gohberg and Sigal in [17–19]. It states that for a closed contour $\Gamma \subset \rho(T)$ the number

$$M(T,\Gamma) = \frac{1}{2\pi i}\,\mathrm{tr}\left(\int_\Gamma T(z)^{-1}T'(z)\,dz\right) = \frac{1}{2\pi i}\,\mathrm{tr}\left(\int_\Gamma T'(z)\,T(z)^{-1}\,dz\right) \tag{20}$$

satisfies $M(T,\Gamma) \in \mathbb{N}$ and equals the total multiplicity of the eigenvalues enclosed by $\Gamma$. We also have the following consequence of [9, Theorem 1.3.1].
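The argument principle is easy to check numerically in finite dimensions. The sketch below (numpy; the matrix is illustrative) evaluates $\frac{1}{2\pi i}\,\mathrm{tr}\oint_\Gamma T(z)^{-1}T'(z)\,dz$ for the linear function $T(z) = zI - M$ with the trapezoidal rule and recovers the number of eigenvalues inside the unit circle:

```python
import numpy as np

M = np.diag([0.3, 0.5 + 0.2j, 2.0, -3.0])  # two eigenvalues inside |z| = 1
N = 64                                      # number of trapezoidal nodes

# T(z) = z I - M, so T'(z) = I.  Parametrize Gamma as z = e^{it}.
t = 2 * np.pi * np.arange(N) / N
z = np.exp(1j * t)
dz = 1j * z * (2 * np.pi / N)               # z'(t) dt
count = sum(np.trace(np.linalg.inv(zk * np.eye(len(M)) - M)) * dzk
            for zk, dzk in zip(z, dz)) / (2j * np.pi)
print(round(count.real))  # 2 eigenvalues enclosed
```

The quadrature error is exponentially small here, so the computed count rounds exactly to the enclosed multiplicity.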

Proposition 2. Assume we have the variational eigenvalue problem (12) with the operator representation $T$ given by (15). Then the spectrum $\sigma(T)$ consists of a countable collection of eigenvalues with finite multiplicity. Further, let the component of $\sigma(T)$ inside a contour $\Gamma$ consist only of semisimple eigenvalues $\lambda_i$, $i = 1,\ldots,r$, with $n_{\lambda_i} = \dim \mathrm{Ker}(T(\lambda_i))$. Then there are quasi-matrices $X_i, Y_i : \mathbb{C}^{n_{\lambda_i}} \to \mathcal{H}$, $i = 1,\ldots,r$, such that $\mathrm{Ran}(X_i) \subset \mathrm{Ker}(T(\lambda_i))$ and $\mathrm{Ran}(Y_i) \subset \mathrm{Ker}(T(\lambda_i)^*)$, $i = 1,\ldots,r$, an open simply connected neighborhood $\mathcal{U}$ containing $\lambda_i$, $i = 1,\ldots,r$, and $\Gamma$, and an operator-valued function $H$, analytic on $\mathcal{U}$ with values in $\mathcal{L}(\mathcal{H})$, such that

$$T^{-1}(z) = \sum_{i=1}^{r} \frac{1}{z - \lambda_i}\,X_iY_i^* + H(z), \quad z \in \mathcal{U}, \tag{21}$$

and $Y_i^*(T'(\lambda_i)X_i) = I_{n_{\lambda_i}}$, $i = 1,\ldots,r$.

Proof. For $z \in \Omega \setminus \sigma(T)$ write

$$T^{-1}(z) = T_0^{-1/2}\Bigl(I + \sum_{j=1}^{d} f_j(z)K_j\Bigr)^{-1} T_0^{-1/2} \tag{22}$$

and define the Fredholm valued function

$$S(z) = I + \sum_{j=1}^{d} f_j(z)K_j, \quad z \in \Omega. \tag{23}$$

Recall that $\mathrm{Dom}(T_0^{1/2}) = \mathcal{V}$ and that $T_0^{1/2}$ maps $\mathcal{V}$ one to one onto $\mathcal{H}$. Now apply [9, Theorem 1.3.1] to $S$.

2.2. Estimating Eigenvalues inside a Contour. To count the semisimple eigenvalues inside a contour we use the approach of [20]. We limit our considerations to the case of semisimple eigenvalues and rank-one perturbations. First we present results for Fredholm valued operator functions and then formulate the result for operators defined by sesquilinear forms.

Lemma 3. Let $S : \Omega \to \Phi(\mathcal{H})$ be an analytic function and let $E$ be a bounded operator such that $\dim \mathrm{Ran}(E) = 1$. Assume that $\Gamma \subset \rho(S)$ is a simple closed contour such that the component of $\sigma(S)$ inside $\Gamma$ consists solely of a simple eigenvalue $\lambda$. If $S(z) + \tau E$ is invertible for all $z \in \Gamma$ and all $\tau \in [0,1]$, then $S + E$ has a simple eigenvalue $\tilde{\lambda}$ inside $\Gamma$ and

$$|\lambda - \tilde{\lambda}| \le C\,\|E\|, \tag{24}$$

where $C$ essentially depends on $\max_{z\in\Gamma}\|S(z)^{-1}|_{\mathrm{Ran}(E)}\|$, $\max_{z\in\Gamma}\|S(z)^{-*}|_{\mathrm{Ran}(E^*)}\|$, and the length of the integration curve $\Gamma$.

Proof. Recall that $(S(z) + E)' = S'(z)$ and define the function

$$f(\tau) = \frac{1}{2\pi i}\,\mathrm{tr}\left(\int_\Gamma S'(z)\,(S(z) + \tau E)^{-1}\,dz\right). \tag{25}$$

By the generalized argument principle (see [17–19]) the value of $f(\tau)$ equals the total multiplicity of the eigenvalues inside $\Gamma$. In particular, by Proposition 2 there are vectors $x$ and $y$ such that $(S'(\lambda)x, y) = y^*S'(\lambda)x = 1$ and

$$f(0) = \mathrm{tr}\bigl(S'(\lambda)\,xy^*\bigr) = \mathrm{tr}\bigl(y^*S'(\lambda)\,x\bigr) = 1, \tag{26}$$

where we used the circularity of the trace. By the assumptions of the lemma, $S(z)$ is invertible for all $z \in \Gamma$ and $E$ is a rank-one bounded operator. Therefore there are vectors $v$ and $u$ such that $E = uv^*$, and using the Sherman–Morrison formula (see [21–23] and the references therein) we write

$$(S(z) + \tau E)^{-1} = S(z)^{-1} - \frac{\tau}{1 + \tau\,v^*S^{-1}(z)u}\,S(z)^{-1}(uv^*)\,S(z)^{-1}, \tag{27}$$

and so it follows that

$$(S(z) + \tau_1E)^{-1} - (S(z) + \tau_2E)^{-1} = \frac{\tau_2 - \tau_1}{(1 + \tau_1 v^*S^{-1}(z)u)(1 + \tau_2 v^*S^{-1}(z)u)}\,\bigl(S(z)^{-1}u\bigr)\bigl(S(z)^{-*}v\bigr)^*. \tag{28}$$
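The Sherman–Morrison step (27) is easy to sanity check numerically: for a rank-one update $A + \tau uv^*$ of an invertible matrix $A$, the standard formula gives the inverse in closed form. A sketch with illustrative random data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau = 5, 0.7
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well conditioned stand-in for S(z)
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

Ainv = np.linalg.inv(A)
# (A + tau u v*)^{-1} = A^{-1} - tau/(1 + tau v* A^{-1} u) * A^{-1} u v* A^{-1}
denom = 1.0 + tau * (v.T @ Ainv @ u).item()
sm = Ainv - (tau / denom) * (Ainv @ u) @ (v.T @ Ainv)
direct = np.linalg.inv(A + tau * u @ v.T)
print(np.allclose(sm, direct))  # True
```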

In particular, $f$ is a continuous function of $\tau$. Since, by the generalized Rouché theorem, $f$ takes values only in the natural numbers, it must be constant for all $\tau \in [0,1]$. Let us denote the corresponding eigenvalue of $\tilde{S} = S + E$ by $\tilde{\lambda}$. Define the operators

$$A^{(q)} = \int_\Gamma z^qS^{-1}(z)\,dz, \qquad \tilde{A}^{(q)} = \int_\Gamma z^q\tilde{S}^{-1}(z)\,dz, \quad q = 0,1. \tag{29}$$

Proposition 2 implies $A^{(1)} = \lambda\,xy^*$ and $\tilde{A}^{(1)} = \tilde{\lambda}\,\tilde{x}\tilde{y}^*$, where $S(\lambda)x = 0$, $S(\lambda)^*y = 0$, $\tilde{S}(\tilde{\lambda})\tilde{x} = 0$, and $\tilde{S}(\tilde{\lambda})^*\tilde{y} = 0$, and so there exists a vector $\phi$ such that $\phi^*A^{(0)}\phi \neq 0$ and $\phi^*\tilde{A}^{(0)}\phi \neq 0$. We now compute

$$\lambda = \frac{\phi^*\left(\int_\Gamma zS^{-1}(z)\,dz\right)\phi}{\phi^*\left(\int_\Gamma S^{-1}(z)\,dz\right)\phi}, \qquad \tilde{\lambda} = \frac{\phi^*\left(\int_\Gamma z\tilde{S}^{-1}(z)\,dz\right)\phi}{\phi^*\left(\int_\Gamma \tilde{S}^{-1}(z)\,dz\right)\phi}, \tag{30}$$

and the second assertion follows from the computation

$$\begin{aligned}
|\lambda - \tilde{\lambda}| &= \left| \frac{\phi^*\left(\int_\Gamma zS^{-1}(z)\,dz\right)\phi}{\phi^*\left(\int_\Gamma S^{-1}(z)\,dz\right)\phi} - \frac{\phi^*\left(\int_\Gamma z\tilde{S}^{-1}(z)\,dz\right)\phi}{\phi^*\left(\int_\Gamma \tilde{S}^{-1}(z)\,dz\right)\phi} \right| \\
&\le \frac{1}{\left|\phi^*\left(\int_\Gamma S^{-1}(z)\,dz\right)\phi\right|} \left[ \left|\phi^*\left(\int_\Gamma z\bigl(S^{-1}(z) - \tilde{S}^{-1}(z)\bigr)\,dz\right)\phi\right| + |\tilde{\lambda}|\,\left|\phi^*\left(\int_\Gamma \bigl(\tilde{S}^{-1}(z) - S^{-1}(z)\bigr)\,dz\right)\phi\right| \right] \\
&= \frac{1}{\left|\phi^*\left(\int_\Gamma S^{-1}(z)\,dz\right)\phi\right|} \left[ \left|\phi^*\left(\int_\Gamma \frac{z}{1 + v^*S^{-1}(z)u}\,S(z)^{-1}ES(z)^{-1}\,dz\right)\phi\right| \right. \\
&\qquad \left. + |\tilde{\lambda}|\,\left|\phi^*\left(\int_\Gamma \frac{1}{1 + v^*S^{-1}(z)u}\,S(z)^{-1}ES(z)^{-1}\,dz\right)\phi\right| \right] \le C\,\|E\|.
\end{aligned} \tag{31}$$

The claim on the constant $C$ follows from the observation that $\mathrm{Ran}(E) = \mathrm{span}\{u\}$ and $\mathrm{Ran}(E^*) = \mathrm{span}\{v\}$.
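The moment quotients of the form (30) are straightforward to reproduce numerically. The sketch below (numpy, trapezoidal rule; the matrix is illustrative, with $S(z) = zI - M$) recovers the single eigenvalue enclosed by the unit circle from the zeroth and first moments of the resolvent:

```python
import numpy as np

M = np.diag([0.4 + 0.1j, 3.0, -2.0])  # only 0.4 + 0.1j lies inside |z| = 1
n, N = 3, 64
t = 2 * np.pi * np.arange(N) / N
z = np.exp(1j * t)
dz = 1j * z * (2 * np.pi / N)

# Moments \oint S^{-1}(z) dz and \oint z S^{-1}(z) dz for S(z) = z I - M.
# The 2*pi*i normalization cancels in the quotient.
A0 = sum(np.linalg.inv(zk * np.eye(n) - M) * dzk for zk, dzk in zip(z, dz))
A1 = sum(zk * np.linalg.inv(zk * np.eye(n) - M) * dzk for zk, dzk in zip(z, dz))

phi = np.ones(n)
lam = (phi @ A1 @ phi) / (phi @ A0 @ phi)
print(lam)  # approximately 0.4 + 0.1j
```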

With this result we can now formulate the main result of this section. It will be used to assess the quality of a given approximation, regardless of its origin.

Proposition 4. Let $\mathfrak{t}(\cdot)$ denote the family of sesquilinear forms (11) with the operator representation $T(\cdot)$ given by (15). Assume that a contour $\Gamma \subset \rho(T)$ encloses only a simple eigenvalue $\lambda$ and a point $\mu \in \mathbb{C}$, but no other points of $\sigma(T)$. Let $u \in \mathcal{V}$ with $\|u\| = 1$. Then there is $\lambda \in \sigma(T)$ such that

$$|\lambda - \mu| \le C \sup_{\phi\in\mathcal{V}\setminus\{0\}} \frac{|\mathfrak{t}(\mu)[u,\phi]|}{\sqrt{\mathfrak{t}_0[\phi,\phi]}}. \tag{32}$$

The constant $C$ does not depend on $\mu$ and $u$, but only on the contour $\Gamma$ and the restrictions of $\|T^{-1}(\cdot)\|$ and $\|T^{-*}(\cdot)\|$ to $\Gamma$.

Proof. We construct a particular operator $E_{\mu,u}$ which will be used to assess the quality of the pair $\mu, u$. Define the sesquilinear form $\mathfrak{e}_{\mu,u} : \mathcal{V}\times\mathcal{V} \to \mathbb{C}$ by the formula

$$\mathfrak{e}_{\mu,u}[\psi,\phi] = -\mathfrak{t}(\mu)[u,\phi]\,(\psi,u), \quad \psi,\phi\in\mathcal{V}. \tag{33}$$

It is obviously relatively compact with respect to $\mathfrak{t}_0$, and so we can define the Fredholm valued operator function $\Omega \ni z \mapsto \tilde{T}(z)$, where $\tilde{T}(z)$ is the operator defined by the form

$$\tilde{\mathfrak{t}}(\lambda)[\psi,\phi] = \mathfrak{t}(\lambda)[\psi,\phi] + \mathfrak{e}_{\mu,u}[\psi,\phi]. \tag{34}$$

Further,

$$(\tilde{T}(\mu)u,\phi) = \mathfrak{t}(\mu)[u,\phi] + \mathfrak{e}_{\mu,u}[u,\phi] = \mathfrak{t}(\mu)[u,\phi] - \mathfrak{t}(\mu)[u,\phi]\,(u,u) = \mathfrak{t}(\mu)[u,\phi] - \mathfrak{t}(\mu)[u,\phi] = 0, \quad \phi\in\mathcal{V}, \tag{35}$$

and so $\mu \in \sigma(\tilde{T})$. We construct the operator $E_{\mu,u}$ as the operator defined by

$$(E_{\mu,u}\phi,\psi) = \mathfrak{e}_{\mu,u}[T_0^{-1/2}\phi, T_0^{-1/2}\psi], \quad \phi,\psi\in\mathcal{H}. \tag{36}$$

Now recall that (22) and (35) imply

$$\tilde{T}^{-1}(z) = T_0^{-1/2}\Bigl(I + \sum_{j=1}^{d} f_j(z)K_j + E_{\mu,u}\Bigr)^{-1} T_0^{-1/2}. \tag{37}$$

By construction the operator function $\tilde{S} : \Omega \to \mathcal{L}(\mathcal{H})$,

$$\tilde{S}(z) = I + \sum_{j=1}^{d} f_j(z)K_j + E_{\mu,u} = S(z) + E_{\mu,u}, \tag{38}$$

satisfies the assumptions of Lemma 3. Finally, note that

$$\sup_{\phi\in\mathcal{V}\setminus\{0\}} \frac{|\mathfrak{e}_{\mu,u}[u,\phi]|}{\sqrt{\mathfrak{t}_0[\phi,\phi]}} = \sup_{\phi\in\mathcal{V}\setminus\{0\}} \frac{|\mathfrak{t}(\mu)[u,\phi]|}{\sqrt{\mathfrak{t}_0[\phi,\phi]}} =: \|\mathfrak{t}(\mu)[u,\cdot]\|_{\mathfrak{t}_0,-1}, \tag{39}$$

and so from (36) it follows that

$$\|E_{\mu,u}\| = \|\mathfrak{t}(\mu)[u,\cdot]\|_{\mathfrak{t}_0,-1}. \tag{40}$$

Now recall the definition of $C$ from Lemma 3 to conclude the proof.

2.3. A Sketch of a Practical Algorithm for Error Estimation. Proposition 4 will be the basis for practical error estimation. Let $\mathcal{Q}_h \subset \mathcal{V}$, $h > 0$, be a family of finite dimensional spaces such that the orthogonal projections $Q_h$ onto $\mathcal{Q}_h$ converge strongly to $I$ as $h \to \infty$. Let further $\mathcal{Q}_{h_1} \subset \mathcal{Q}_{h_2}$ for $h_1 \le h_2$. Assume that $u \in \mathcal{Q}_{h_1}$ and $\mu \in \mathbb{C}$ are given. For $h_2 > h_1$ define

$$\|\mathfrak{t}(\mu)[u,\cdot]\|_{\mathcal{Q}_{h_2},\mathfrak{t}_0,-1} = \sup_{\phi\in\mathcal{Q}_{h_2}\setminus\{0\}} \frac{|\mathfrak{t}(\mu)[u,\phi]|}{\sqrt{\mathfrak{t}_0[\phi,\phi]}}. \tag{41}$$

Recalling the definition from (39), it obviously holds that

$$\|\mathfrak{t}(\mu)[u,\cdot]\|_{\mathcal{Q}_{h_2},\mathfrak{t}_0,-1} \le \|\mathfrak{t}(\mu)[u,\cdot]\|_{\mathfrak{t}_0,-1}. \tag{42}$$

However, if $\mathcal{Q}_h$, $h > 0$, satisfies the standard saturation property, for example, $\|(I - Q_{h_2})\phi\| \le q\,\|(I - Q_{h_1})\phi\|$ for a fixed $q$, $0 < q < 1$, and some $\phi$, then we can also prove

$$\|\mathfrak{t}(\mu)[u,\cdot]\|_{\mathcal{Q}_{h_2},\mathfrak{t}_0,-1} \le \|\mathfrak{t}(\mu)[u,\cdot]\|_{\mathfrak{t}_0,-1} \le C\,\|\mathfrak{t}(\mu)[u,\cdot]\|_{\mathcal{Q}_{h_2},\mathfrak{t}_0,-1}. \tag{43}$$

The constant $C$ essentially depends on $q$, which in turn depends on $h_2 - h_1$ but not on the magnitude of $h_1$; for more details see [24, 25].
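In a discrete setting the dual norm (41) reduces to a weighted vector norm: with the Gram (stiffness) matrix $T_0$ of $\mathfrak{t}_0$ on the test space and the residual vector $r_i = \mathfrak{t}(\mu)[u,\phi_i]$, one has $\sup_{\phi} |r^*\phi|/\sqrt{\phi^*T_0\phi} = \|L^{-1}r\|_2$ for the Cholesky factor $T_0 = LL^*$. A sketch with illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 6
B = rng.standard_normal((m, m))
T0 = B @ B.T + m * np.eye(m)   # SPD stand-in for the stiffness matrix of t0
r = rng.standard_normal(m)      # residual vector r_i = t(mu)[u, phi_i]

L = np.linalg.cholesky(T0)      # T0 = L @ L.T
dual_norm = np.linalg.norm(np.linalg.solve(L, r))

# The supremum in (41) is attained at phi = T0^{-1} r, giving the same value:
phi_opt = np.linalg.solve(T0, r)
attained = abs(r @ phi_opt) / np.sqrt(phi_opt @ T0 @ phi_opt)
print(dual_norm, attained)
```

Computing the same quantity on a nested, larger test space gives the hierarchical estimate of (42)–(43).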


3. Contour Integration Based Subspace Iteration

In this section we present the inexact subspace iteration algorithm based on contour integration and prove basic convergence results using quasi-matrix notation. We consider the spectral transform functions from [5, 6, 26, 27] in the context of eigenvalues of operator-valued analytic functions. Let $T(\cdot)$ be given and let $\Gamma \subset \Omega$ be a closed curve which encloses either a set of $r$ simple eigenvalues or a single semisimple eigenvalue of multiplicity $r$.

Let us consider the operators

$$A^{(q)} = \frac{1}{2\pi i}\int_\Gamma z^qT(z)^{-1}\,dz, \qquad B^{(q)} = \frac{1}{2\pi i}\int_\Gamma z^qT(z)^{-1}T'(z)\,dz, \quad q = 0,1, \tag{44}$$

and their approximations

$$A_N^{(q)} = \sum_{k=1}^{N}\omega_kz_k^q\,T(z_k)^{-1}, \qquad B_N^{(q)} = \sum_{k=1}^{N}\omega_kz_k^q\,T(z_k)^{-1}T'(z_k), \quad q = 0,1. \tag{45}$$

Here $\omega_k$ and $z_k \in \Gamma$, $k = 1,\ldots,N$, are integration weights and integration nodes. Based on Theorem 4.7 in [4] we establish, for $\omega_k$ and $z_k \in \Gamma$ defined by the $N$-node trapezoidal integration formula for the contour integral (44), the following estimates:

$$\|A^{(q)} - A_N^{(q)}\| \le C_1\,d(T)^{-\kappa}e^{-C_2Nd(T)}, \qquad \|B^{(q)} - B_N^{(q)}\| \le C_1\,d(T)^{-\kappa}e^{-C_2Nd(T)}. \tag{46}$$

Here $\Gamma$ is a simple closed contour in $\Omega$ such that $\sigma(T) \cap \Gamma = \emptyset$, $d(T) = \min_{\lambda\in\sigma(T)}\mathrm{dist}(\lambda,\Gamma)$, and $\kappa$ is the maximum order of the poles of the inverse of $T$. The constants in (46) are in general different and depend on the maximum of the integrand on the contour $\Gamma$. For more details see [4, 7]. Subsequently we conclude that $A_N^{(q)} \to A^{(q)}$ and $B_N^{(q)} \to B^{(q)}$, $q = 0,1$, in the norm resolvent sense.

3.1. Extracting Information on Eigenvalues Enclosed by a Contour. Based on Proposition 2 we see that the operators $A^{(0)}$ and $B^{(0)}$ are finite rank operators such that $\mathrm{Ran}(A^{(0)}) = \mathrm{Ran}(B^{(0)})$ is the space spanned by the linearly independent eigenvectors associated with the semisimple eigenvalues enclosed by $\Gamma$. Rather than providing a technical proof of these claims, which will be the subject of subsequent reports, we present numerical evidence on two judiciously chosen examples. Further, based on [7] we can establish the following technical result for the operators $A^{(1)}$ and $B^{(1)}$.

Proposition 5. Let $T$ be the operator-valued function from (15) and let $\Gamma$ be a contour which encloses solely the $r$, counted according to algebraic multiplicity, semisimple eigenvalues of $T$. Define the operators $A^{(q)}$ and $B^{(q)}$, $q = 0,1$, as in (44) and let $Q : \mathbb{C}^r \to \mathcal{H}$, $Q^*Q = I_r$, be a quasi-matrix such that $Q^*A^{(0)}Q$ and $Q^*B^{(0)}Q$ are invertible. Then the eigenvalues of the matrix pairs $(Q^*A^{(1)}Q, Q^*A^{(0)}Q)$ and $(Q^*B^{(1)}Q, Q^*B^{(0)}Q)$ are precisely the eigenvalues $\lambda_i$, $i = 1,\ldots,r$, counted according to multiplicity. Furthermore, if $a_i, b_i \in \mathbb{C}^r$ satisfy $Q^*A^{(1)}Qa_i = \lambda_iQ^*A^{(0)}Qa_i$ and $Q^*B^{(1)}Qb_i = \lambda_iQ^*B^{(0)}Qb_i$, then $Qa_i$ and $Qb_i$ are eigenvectors of $T$ associated with $\lambda_i$, $i = 1,\ldots,r$.

Proof. We prove the statement for the matrix pair $(Q^*B^{(1)}Q, Q^*B^{(0)}Q)$; the proof for the pair $(Q^*A^{(1)}Q, Q^*A^{(0)}Q)$ is analogous. Based on Proposition 2 we see that there are quasi-matrices $X : \mathbb{C}^r \to \mathcal{H}$ and $Y : \mathbb{C}^r \to \mathcal{H}$ and a neighborhood $\mathcal{U}$ of $\Gamma$ such that

$$T(z)^{-1} = X \begin{bmatrix} (z-\lambda_1)^{-1} & 0 & \cdots & 0 \\ 0 & (z-\lambda_2)^{-1} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & (z-\lambda_r)^{-1} \end{bmatrix} Y^* + H(z), \quad z \in \mathcal{U}. \tag{47}$$

Here $H$ is an analytic operator-valued function. Now we compute

$$Q^*B^{(1)}Q = (Q^*X) \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & \lambda_r \end{bmatrix} (Y^*Q), \qquad Q^*B^{(0)}Q = (Q^*X)(Y^*Q). \tag{48}$$

From the assumptions of the proposition we conclude that $(Q^*X)$ and $(Y^*Q)$ must be invertible, and so the conclusion readily follows.

Before we proceed, note that the norm resolvent convergence of $A_N^{(q)}$ and $B_N^{(q)}$ implies the convergence of the spectra and the associated subspaces to those of $A^{(q)}$ and $B^{(q)}$, $q = 0,1$. In particular, this means that $A_N^{(0)}$ and $B_N^{(0)}$, for $N$ large enough, will have two well separated components in the spectrum, and so we are motivated to use subspace iteration on the operator level to improve the quality of the approximate eigenvalues. Note that for a vector $v$ the vectors $A_N^{(0)}v$ and $B_N^{(0)}v$ can be computed using formula (45) without ever forming a representation of the underlying operator.
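To illustrate how $A_N^{(0)}$ acts as a spectral filter and how the pencil of Proposition 5 extracts the enclosed eigenvalues, the following sketch (numpy; illustrative linear problem $T(z) = zI - M$) runs a few steps of subspace iteration with the quadrature filter, applied column by column without forming $A_N^{(q)}$, and then solves the small pencil $(Q^*A_N^{(1)}Q,\ Q^*A_N^{(0)}Q)$:

```python
import numpy as np

rng = np.random.default_rng(3)
M = np.diag([0.2, 0.6, 3.0, -4.0, 5.0])  # two eigenvalues inside |z| = 1
n, r, N = 5, 2, 32

t = 2 * np.pi * np.arange(N) / N
z = np.exp(1j * t)
w = z / N                                 # trapezoidal weights incl. 1/(2 pi i)

def apply_moment(V, q):
    # A_N^(q) V = sum_k w_k z_k^q T(z_k)^{-1} V, using solves instead of A_N^(q)
    return sum(wk * zk**q * np.linalg.solve(zk * np.eye(n) - M, V)
               for zk, wk in zip(z, w))

# Inexact subspace iteration on the zeroth moment.
Q, _ = np.linalg.qr(rng.standard_normal((n, r)))
for _ in range(3):
    Q, _ = np.linalg.qr(apply_moment(Q, 0))

# Extraction: eigenvalues of the pencil (Q* A_N^(1) Q, Q* A_N^(0) Q).
A0 = Q.conj().T @ apply_moment(Q, 0)
A1 = Q.conj().T @ apply_moment(Q, 1)
lam = np.sort(np.linalg.eigvals(np.linalg.solve(A0, A1)).real)
print(lam)  # approximately [0.2, 0.6]
```

In the operator setting the solves with $z_kI - M$ are replaced by finite element solves with $T(z_k)$; the structure of the iteration is unchanged.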

3.2. Inexact Subspace Iteration Based on Quadrature Formula.

We have the following algorithm for a generic bounded
