
DEGREE PROJECT IN TECHNOLOGY, FIRST CYCLE, 15 CREDITS
STOCKHOLM, SWEDEN 2019

Partial Differential Equations and Spectral Geometry

VALTER FALLENIUS

ALBIN PERSSON


DEGREE PROJECT IN TECHNOLOGY, FIRST CYCLE, 15 CREDITS
STOCKHOLM, SWEDEN 2019

Elliptic Partial Differential Equations and Spectral Geometry

VALTER FALLENIUS

ALBIN PERSSON


Abstract

In this paper we explore some of the theory behind elliptic partial differential equations: the problems in which they arise and methods for solving them.

Specifically, we address the question asked by Mark Kac in 1966: "Can one hear the shape of a drum?". Using the theory from an article by C. Gordon, D. Webb and S. Wolpert [GWW92] we construct two planar domains which are isospectral under the Laplacian, thus answering Kac's question in the negative: no, one cannot hear the shape of a drum. With numerical methods we visualize some eigenmodes of these isospectral domains and compare their eigenvalues. Even though one cannot hear the shape of a drum, the spectrum still carries useful information. With Weyl's Law one can calculate the area, or even the circumference, of the domain, which we discuss in the last section.


Sammanfattning

In this report we explore some of the theory of elliptic partial differential equations: the problems in which they arise and methods for solving them. More specifically, we study the question posed by Mark Kac in 1966: "Can one hear the shape of a drum?". Using methods from an article by C. Gordon, D. Webb and S. Wolpert [GWW92] we construct two planar domains that are isospectral under the Laplacian. This gives us the answer to Kac's question: no, one cannot hear the shape of a drum. With numerical methods we visualize some eigenmodes of these isospectral domains and compare their eigenvalues.

Even though one cannot hear the shape of a drum, the spectrum still gives some useful information. With Weyl's Law one can compute the area, or even the circumference, of the domain, which we discuss in the last section.


Contents

1 Introduction
  1.1 Elliptic Partial Differential Equations
  1.2 Function Spaces
  1.3 Mean Value Theorem
  1.4 The Maximum Principle
  1.5 Boundary Value Problems
  1.6 Eigenvalue Problems
2 You can't hear the shape of a drum
  2.1 Group Theory
  2.2 Generating Isospectral Regions
3 Numerics on Isospectral Domains
  3.1 Discretizing the Wave Equation
  3.2 The Neumann Spectrum
  3.3 Sinusoidal Solutions
  3.4 Analytic Approach to the Cosine Hypothesis
4 Weyl's Law
  4.1 Example in 1D
  4.2 Example in 2D
  4.3 Weyl's Law on Isospectral Domains
  4.4 The Weyl Conjecture


1 Introduction

Differential equations appear in physics and mathematics when there is a relation between a quantity and its derivatives; for example, the Navier–Stokes equations model the flow of a medium. In this paper we explore some theory behind a certain type of differential equation: elliptic partial differential equations. We have a linear partial differential operator L acting on some function u defined on a domain U. We then solve the eigenvalue problem (with boundary conditions)

Lu = λu.

Often one looks at a predefined domain U and finds eigenfunctions u in that domain with corresponding eigenvalues λ, perhaps expanding a particular solution in a series of these eigenfunctions. In spectral geometry, however, one looks at the reverse problem. If you know the eigenspectrum of L, what can you say about the geometry of U? That is, what can you say about the shape of the domain U and its boundary ∂U? Hermann Weyl was a pioneer in the area of spectral geometry and he devoted many papers to the asymptotic behavior of the eigenspectrum [Wey11, Wey12a, Wey12b, Wey13, Wey15]. He showed that, asymptotically, the number of eigenvalues smaller than λ grows like λ^{d/2} times a constant proportional to the volume of the domain U ⊂ R^d:

$$\lim_{\lambda\to\infty} \frac{N(\lambda)}{\lambda^{d/2}} \propto \operatorname{vol}(U).$$

The exact formulation, which we will discuss in Section 4, is known as Weyl's Law. In 1966, 50 years after Weyl's first paper on spectral geometry, Mark Kac published a paper with the still famous title "Can One Hear the Shape of a Drum?" [Kac66]. In other words, can one determine the shape of a domain from the Laplace eigenvalue spectrum? This question had been answered negatively a few years earlier by John Milnor for 16-dimensional flat tori [Mil64].

Milnor gave an example of two non-isometric (that is, not congruent) 16-dimensional flat tori which have the same eigenvalue spectrum under the Laplacian. It was not until 1992 that C. Gordon, D. Webb and S. Wolpert answered Kac's question negatively for 2-dimensional planar regions [GWW92].

In their paper they construct two different planar regions which sound the same, thus proving that in the general case one cannot hear the shape of a drum.

1.1 Elliptic Partial Differential Equations

The general form of a linear partial differential equation (PDE) can be written as

$$Lu = -\sum_{i,j=1}^{n} a_{ij}(x)\,D_{ij}u + \sum_{i=1}^{n} b_i(x)\,D_i u + c(x)\,u = f(x) \qquad (1)$$

where $u$ is a function, $D_i u = \frac{\partial u}{\partial x_i}$, $D_{ij}u = \frac{\partial^2 u}{\partial x_i \partial x_j}$, and $a_{ij}(x)$, $b_i(x)$, $c(x)$ are coefficients defined for all $x \in U$. A partial differential equation is classified as


elliptic when the coefficient matrix $[a_{ij}]$ is positive definite, meaning all its eigenvalues $\lambda_i > 0$. We can assume the matrix $[a_{ij}]$ to be symmetric. For a two-dimensional example, if we define $a_{11} = A$, $a_{22} = C$ and $a_{12} = a_{21} = B$, the PDE takes the form
$$-A\frac{\partial^2 u}{\partial x^2} - 2B\frac{\partial^2 u}{\partial y\,\partial x} - C\frac{\partial^2 u}{\partial y^2} + b_1\frac{\partial u}{\partial x} + b_2\frac{\partial u}{\partial y} + cu - f = 0$$
and the PDE is considered elliptic if
$$B^2 - AC < 0.$$

If, for example, u represents the density of some quantity, then the physical interpretation of the second-order terms is that they describe the change of flow within the medium, the diffusion of u. The first-order terms represent the change of density and the zeroth-order term the density at a given point. Elliptic PDEs show up in a range of different areas, one of the simplest being the Laplace equation given by

$$\Delta u = \sum_{i=1}^{d} \frac{\partial^2 u}{\partial x_i^2} = 0 \qquad (2)$$

which appears in many branches of physics, for example in the diffusion equation $\frac{\partial u}{\partial t} - \Delta u = 0$, which describes the evolution in time of some density, such as heat through a medium.
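As a small illustration of how such elliptic problems are typically attacked numerically, the following sketch (not taken from the thesis; the grid size and boundary data are arbitrary choices) solves the Laplace equation (2) on the unit square with Dirichlet boundary data by Jacobi iteration.

```python
import numpy as np

# A minimal sketch: Laplace equation (2) on the unit square with Dirichlet
# data u = x*y on the boundary, solved by Jacobi iteration on a uniform grid.
n = 50
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x)

u = X * Y                   # boundary values; the exact harmonic solution is x*y
u[1:-1, 1:-1] = 0.0         # start the interior from zero
for _ in range(5000):
    # Jacobi update: each interior value becomes the average of its 4 neighbours
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])

print(np.max(np.abs(u - X * Y)))   # small: the iteration recovers the harmonic solution
```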

1.2 Function Spaces

We will study functions belonging to the spaces $L^2(U)$, $C^n(U)$ and $H_0^1(U)$, which prove useful for the PDEs treated in this paper.

The space $L^2(U)$ is defined as the space of functions $u$ that satisfy
$$\|u\| = \left(\int_U |u|^2\, dx\right)^{1/2} < \infty.$$

The space $C^n(U)$ is the space of functions that are $n$ times continuously differentiable. To define the $H_0^1(U)$-space we make some modifications to the $C^n(U)$-space.

$C^\infty(U)$ is the function space of all infinitely differentiable functions. We define $C_0^\infty(U) := \{u \in C^\infty(U) : \operatorname{supp}(u) \text{ is compact in } U\} \subseteq C^\infty(U)$. When studying an open subset $A \subset \mathbb{R}^d$, its closure with respect to the metric, denoted $\bar{A}$, is defined as $\bar{A} := A \cup B$ where
$$B := \{x \in \mathbb{R}^d : \exists\, x_j \in A \text{ such that } \|x - x_j\| \to 0 \text{ as } j \to \infty\}.$$

Here $x_j$ approaches $x$ in the sense of the metric. We can use the same construction for $C_0^\infty(U)$ and close it using the $H_0^1(U)$ norm defined by
$$\|u\|_{H_0^1(U)} = \sqrt{\langle u, u\rangle_{H_0^1(U)}}$$


where the inner product is defined as
$$\langle u, v\rangle_{H_0^1(U)} = \int_U uv\, dx + \int_U \nabla u \cdot \nabla v\, dx.$$

$H_0^1(U)$ is the space $C_0^\infty(U)$ closed with respect to the $H_0^1(U)$ norm. In the closure $\overline{C_0^\infty(U)}^{\,H_0^1(U)}$ we add all functions $u$ for which there exists a sequence of functions $u_j \in C_0^\infty(U)$ that fulfills $\lim_{j\to\infty}\|u - u_j\|_{H_0^1(U)} = 0$.

1.3 Mean Value Theorem

Take a function $u : U \to \mathbb{R}$, where $U \subset \mathbb{R}^d$, with $u \in C^2(U)$ that satisfies the Laplace equation $\Delta u = 0$. Then the following holds for any $x_0 \in U$:
$$u(x_0) = \frac{1}{|B_r(x_0)|}\int_{B_r(x_0)} u(x)\, dV = \frac{1}{|\partial B_r(x_0)|}\int_{\partial B_r(x_0)} u(x)\, dS,$$
where $B_r(x_0)$ is a ball of radius $r$ contained in $U$ with center at $x_0$.

Proof

Let $\varphi(r) := \frac{1}{|\partial B_r(x_0)|}\int_{\partial B_r(x_0)} u(x)\, dS(x)$ and let $x = x_0 + r\hat{x}$, where $x \in \partial B_r(x_0)$, $\hat{x} \in \partial B_1(0)$ and $dS(x) = r^{d-1}\, dS(\hat{x})$. Then
$$\varphi(r) = \frac{1}{|\partial B_r(x_0)|}\int_{\partial B_1(0)} u(x_0 + r\hat{x})\, r^{d-1}\, dS(\hat{x}) = \frac{1}{|\partial B_1(0)|}\int_{\partial B_1(0)} u(x_0 + r\hat{x})\, dS(\hat{x}),$$
since $|\partial B_r(x_0)| = r^{d-1}|\partial B_1(0)|$. If we now take the derivative with respect to $r$:
$$\varphi'(r) = \frac{1}{|\partial B_1(0)|}\int_{\partial B_1(0)} \hat{x}\cdot\nabla u(x_0 + r\hat{x})\, dS(\hat{x}) = \frac{1}{|\partial B_r(x_0)|}\int_{\partial B_r(x_0)} \nabla u(x)\cdot\frac{x - x_0}{r}\, dS(x)$$
$$= \frac{1}{|\partial B_r(x_0)|}\int_{\partial B_r(x_0)} \frac{\partial u}{\partial \nu}\, dS(x) = \frac{1}{|\partial B_r(x_0)|}\int_{B_r(x_0)} \operatorname{div}(\nabla u(x))\, dV(x) = \{\Delta u = 0\} = 0,$$
where we have used Green's theorem (the divergence theorem). This implies that
$$\varphi(r) = \text{constant} = \lim_{t\to 0} \frac{1}{|\partial B_t(x_0)|}\int_{\partial B_t(x_0)} u(x)\, dS(x) = u(x_0). \qquad \square$$
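As a quick sanity check of the mean value property, the following sketch (an added illustration, not part of the original text; the center, radius and test function are arbitrary choices) averages a harmonic function over a circle and compares the result with its value at the center.

```python
import numpy as np

# Numerical check of the mean value property for the harmonic function
# u(x, y) = x^2 - y^2, averaged over a circle of radius r around (x0, y0).
def u(x, y):
    return x**2 - y**2

x0, y0, r = 0.4, -0.2, 0.3
theta = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
boundary_average = np.mean(u(x0 + r * np.cos(theta), y0 + r * np.sin(theta)))

print(boundary_average, u(x0, y0))   # both approximately 0.12
```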

1.4 The Maximum Principle

The maximum principle gives useful information about the behavior of solutions to elliptic PDEs. Maximum principle methods are based on the fact that if a function attains its maximum value at an interior point $x_0$, then its first derivatives at that point are zero and its second derivatives are $\le 0$ [Eva10, p. 344]. To get a good idea of the maximum principle and its connection to elliptic PDEs we can look at the Hessian $H$ in two dimensions, defined by
$$H = \begin{pmatrix} \dfrac{\partial^2}{\partial x^2} & \dfrac{\partial^2}{\partial x\,\partial y} \\[2mm] \dfrac{\partial^2}{\partial y\,\partial x} & \dfrac{\partial^2}{\partial y^2} \end{pmatrix}$$

and a harmonic function $u$. Suppose there is a maximum at a point $x_m$ at which $H_u(x_m)$ is negative definite, i.e. $\lambda_1 < 0$ and $\lambda_2 < 0$. We then have
$$\operatorname{Tr}(H_u(x_m)) = \frac{\partial^2 u}{\partial x^2}(x_m) + \frac{\partial^2 u}{\partial y^2}(x_m) = \lambda_1 + \lambda_2 < 0.$$
But since $u$ is harmonic, that is $\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$, this leads to a contradiction. Hence there cannot be such a maximum at $x_m$, only a saddle point. In the following theorem the variables and operators are as above: $U$ denotes the domain, $\partial U$ is the boundary of the domain and $\bar{U} = U \cup \partial U$.

The Strong Maximum Principle [Eva10, p. 27]

Suppose $u \in C^2(U) \cap C(\bar{U})$ is harmonic within $U$. Then
$$\max_{\bar{U}} u = \max_{\partial U} u.$$
Furthermore, if $U$ is connected and there exists a point $x_0 \in U$ such that
$$u(x_0) = \max_{\bar{U}} u,$$
then $u$ is constant within $U$.

Proof

Suppose there exists a point $x_0 \in U$ such that $u(x_0) = M$, where $M$ is the maximum value in the domain, that is $M := \max_{\bar{U}} u$. From the mean value theorem we have, for $0 < r < \operatorname{dist}(x_0, \partial U)$, that
$$M = u(x_0) = \frac{1}{|B_r(x_0)|}\int_{B_r(x_0)} u(x)\, dV \le M.$$
Equality holds only when $u = M$ within $B_r(x_0)$, which implies that $u(x) = M$ for all $x \in B_r(x_0)$. Recall that a set $A$ is open if for every $x \in A$ there exists an $\varepsilon > 0$ such that $B_\varepsilon(x) \subset A$. Therefore the set $A = \{x \in U \mid u(x) = M\}$ is open, and it is also relatively closed in $U$ due to continuity. Thus, if $U$ is connected, $A = U$ and the theorem follows. $\square$


Hopf's Lemma [Eva10, p. 347]

Assume $u \in C^2(U) \cap C(\bar{U})$ and
$$c \equiv 0 \text{ in } U.$$
Suppose further that
$$Lu \le 0 \text{ in } U$$
and that there exists a point $x_0 \in \partial U$ such that
$$u(x_0) > u(x) \quad \forall x \in U.$$
Assume finally that $U$ satisfies the interior ball condition at $x_0$; that is, there exists an open ball $B \subset U$ with $x_0 \in \partial B$. Then
$$\frac{\partial u}{\partial \nu}(x_0) > 0,$$
where $\nu$ is the outer unit normal to $B$ at $x_0$. If
$$c \ge 0 \text{ in } U,$$
the same conclusion holds provided that $u(x_0) \ge 0$.

Proof

For this proof we refer to [Eva10, p. 347].

The strong maximum principle for elliptic PDEs [Eva10, p. 349]

Assume $u \in C^2(U) \cap C(\bar{U})$ and
$$c = 0 \text{ in } U.$$
Suppose also that $U$ is connected, open and bounded. If
$$Lu \le 0 \text{ in } U$$
and $u$ attains its maximum over $\bar{U}$ at an interior point, then $u$ is constant within $U$. Similarly, if
$$Lu \ge 0 \text{ in } U$$
and $u$ attains its minimum over $\bar{U}$ at an interior point, then $u$ is constant within $U$.


Proof

Let M := max¯

U

u and C := {x ∈ U|u(x) = M}. Then if u 6= M, set V :=

{x ∈ U |u(x) < M }. Choose a point y ∈ V satisfying dist(y, C) < dist(y, ∂U), let B be the largest ball whose center is y, and whose interior lies in V . Then there exists a point x0 ∈ C, with x0 ∈ ∂B. This implies that V satisfies the interior ball condition at x0which implies that Hopf’s Lemma is satisfied, hence

∂u

∂ν(x0) = Du(x0) · ν > 0. This leads to the contradiction that since u attains its maximum at x0 ∈ U we have that Du(x0) = 0. Therefore the set V is empty and the theorem follows. 

1.5 Boundary Value Problems

A central problem in this paper is the boundary value problem (BVP) for the elliptic operator $L$ as defined in equation (1), of the form
$$\begin{cases} Lu = f & \text{in } U,\\[1mm] au + b\dfrac{\partial u}{\partial \nu} = g & \text{on } \partial U. \end{cases} \qquad (3)$$
Here $g$ and $f$ are functions, $a$ and $b$ are constants and $U$ is a region; $\frac{\partial u}{\partial \nu}$ is the normal derivative. This gives rise to questions regarding existence of solutions.

Are there any solutions? If so, are they unique?

Existence of Solutions

In this paper we will not investigate when solutions exist or not; for this we refer to the book by L. C. Evans, where it is discussed thoroughly [Eva10, p. 315].

Uniqueness

Uniqueness of the solutions can be proved with help from the maximum principle applied to the boundary value problem stated above, as follows. Let $u$ and $v$ be solutions to equation (3) with Dirichlet boundary conditions ($a = 1$, $b = 0$). Then by linearity of $L$ we have
$$\begin{cases} L(u - v) = Lu - Lv = f - f = 0 & \text{in } U,\\ u - v = g - g = 0 & \text{on } \partial U. \end{cases}$$
If we put $w = u - v$, we then have, according to the maximum principle,
$$\max_{\bar{U}} w = \max_{\partial U} w = \min_{\partial U} w = \min_{\bar{U}} w = 0.$$
This implies $w = 0$ and so $u = v$, which proves the uniqueness of solutions to this problem.


Boundary Conditions

When considering the BVP (3), the boundary conditions are of great importance since they determine what the solutions look like. The two most important are fixed boundary conditions, called Dirichlet boundary conditions, and conditions on the flux through the boundary, called Neumann boundary conditions.

Dirichlet boundary conditions prescribe the value of the function on the boundary. They are often written as
$$u = f \text{ on } \partial U,$$
where $f$ is some function on the boundary.

Neumann boundary conditions prescribe the normal derivative of the function on the boundary,
$$\frac{\partial u}{\partial \nu} = g \text{ on } \partial U,$$
where $g$ is some function on the boundary.

Sometimes mathematicians study a linear combination of Dirichlet and Neumann boundary conditions (known as Robin boundary conditions); however, these rarely show up in physics.

1.6 Eigenvalue Problems

Dirichlet eigenvalue problems are boundary value problems of the form
$$\begin{cases} (L + \lambda)u = 0 & \text{in } U,\\ u = 0 & \text{on } \partial U, \end{cases} \qquad (4)$$

where λ is an eigenvalue of the elliptic operator L and u is a corresponding eigenfunction. Eigenvalues can represent different physical quantities such as light or sound frequencies, energy levels and more. In eigenvalue problems you often have the domain and try to find the eigenvalues and eigenfunctions. However, in spectral geometry you try to find the domain by studying the eigenvalues.

This leads to questions regarding the uniqueness of domains found, which we will explore later.
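To make the Dirichlet eigenvalue problem (4) concrete, here is a minimal numerical sketch for the special case L = −Δ on the unit square, discretized with the standard five-point finite-difference stencil; the grid size and the choice of domain are illustrative assumptions, not part of the thesis. The computed values should approach the exact eigenvalues π²(m² + k²) as the grid is refined.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Dirichlet eigenvalues of -Laplace on the unit square [0,1]^2,
# discretized with the standard 5-point finite-difference stencil.
n = 80                      # interior grid points per direction
h = 1.0 / (n + 1)           # grid spacing

# 1D second-difference matrix with Dirichlet boundary conditions
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
T = sp.diags([off, main, off], [-1, 0, 1]) / h**2

# 2D Laplacian via Kronecker sums: -Laplace ~ T (x) I + I (x) T
I = sp.identity(n)
A = sp.kron(T, I) + sp.kron(I, T)

# Smallest eigenvalues; exact values are pi^2 (m^2 + k^2), m, k >= 1
vals = spla.eigsh(A, k=6, sigma=0, which='LM')[0]
exact = sorted(np.pi**2 * (m**2 + k**2) for m in range(1, 4) for k in range(1, 4))[:6]
print(np.round(vals, 3))
print(np.round(exact, 3))
```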

A useful theorem which we need as a basis later in this paper is one regarding the eigenvalues of symmetric elliptic operators. A symmetric elliptic operator $L$ is of the form
$$Lu = \sum_{i,j=1}^{d} a_{ij} u_{ij} \qquad (5)$$
where the coefficients $a_{ij} \in C(\bar{U})$ and the matrix $[a_{ij}]$ is symmetric.

Eigenvalues of symmetric elliptic operators [Eva10, p. 355]

(i) Each eigenvalue of $L$ is real.

(ii) Furthermore, if we repeat each eigenvalue according to its (finite) multiplicity, we have
$$\Sigma = \{\lambda_k\}_{k=1}^{\infty},$$
where $\Sigma$ denotes the spectrum of $L$, and
$$0 < \lambda_1 \le \lambda_2 \le \lambda_3 \le \dots$$
with $\lambda_k \to \infty$ as $k \to \infty$.

Two domains with spectra $\Sigma_1$ and $\Sigma_2$ are said to be isospectral under an elliptic operator L if and only if $\Sigma_1 = \Sigma_2$.

There is an interesting connection between the maximum principle and the first eigenvalue λ1 of (4) that we will state below. To prove this statement we need some information about λ1.

Principal Eigenvalue for Non-Symmetric Elliptic Operators [Eva10, p. 361]

(i) There exists a real eigenvalue $\lambda_1$, called the principal eigenvalue of L, for (4) such that if $\lambda \in \mathbb{C}$ is any other eigenvalue, we have $\operatorname{Re}(\lambda) \ge \lambda_1$.

(ii) There exists a corresponding eigenfunction $u_1$, which is positive within U.

(iii) The eigenvalue $\lambda_1$ is simple; that is, if u is any other solution of (4), then u is a multiple of $u_1$.

We will not prove this theorem; we refer to [Eva10, p. 361].

The Principal Eigenvalue and the Maximum Principle (refined Maximum Principle [BNV94a, p. 55])

The refined maximum principle holds for L $\iff \lambda_1 > 0$, where $\lambda_1$ and $u_1$ solve (4).

Proof

The first implication, that the refined maximum principle holding for L implies $\lambda_1 > 0$, also holds for the strong maximum principle and is simple to prove by contradiction. Suppose $\lambda_1 \le 0$. Then from the principal eigenvalue theorem we know there exists a function $u_1$, positive in U, that satisfies $Lu_1 = \lambda_1 u_1 \le 0$. Since $u_1 = 0$ on $\partial U$ and $u_1 > 0$ in U, we also know that $u_1$ fulfills the equality $\max_{\bar{U}} u_1 = \max_U u_1$. The maximum principle now implies that $u_1$ is constant in U, but since $u_1$ is continuous, zero on $\partial U$ and positive in U, we get a contradiction. Hence the maximum principle holding for L implies $\lambda_1 > 0$. Since the refined maximum principle is a special case of the strong maximum principle, the implication is also true for the refined maximum principle. The other implication is a bit trickier to prove and can be found in the article by H. Berestycki et al. [BNV94b]. Berestycki et al. prove it for the so-called refined maximum principle, but the equivalence does not hold for the strong maximum principle.

2 You can’t hear the shape of a drum

In 1966 the mathematician Mark Kac posed the question "Can one hear the shape of a drum?", that is, whether one can determine the shape of an object by studying its spectrum of eigenvalues (frequencies). We can model a drum as a vibrating membrane governed by the wave equation. For a drum it is reasonable to have Dirichlet boundary conditions, meaning that the wave function is zero along the boundary. One can also imagine Neumann boundary conditions, for which the normal derivative is zero on the boundary. The wave equation describes how the membrane vibrates in the different spatial directions:

$$\frac{\partial^2 u}{\partial t^2} = c^2 \Delta u,$$
where u is the vertical displacement, t is time, c is the propagation speed and Δ is the Laplace operator. For a 2D membrane the wave equation in Cartesian coordinates becomes
$$\frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} - \frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2} = 0.$$

This can be solved through separation of variables. The stationary solution of the wave equation, where we set $t = t_0$, is often of interest since it is enough to yield all the eigenvalues of the membrane. This means we want to separate the time-dependent function from the spatial function, $u(x, y, t) = f(x, y)g(t)$. Substituting this into the wave equation (taking $c = 1$ for simplicity) and dividing by $fg$ gives $\frac{g''(t)}{g(t)} = \frac{\Delta f}{f} = -\lambda$ for some separation constant λ, which gives rise to the following equations:

$$\begin{cases}
\Delta f(x, y) = -\lambda f(x, y) & \text{for } (x, y) \in U,\\
f(x, y) = 0 & \text{for } (x, y) \in \partial U,\\
g''(t) = -\lambda g(t) & \text{for } t > 0,\\
u(x, y, 0) = u_0(x, y) & \text{for } (x, y) \in U,
\end{cases} \qquad (6)$$
where λ denotes the eigenvalues of these equations and U is the domain of f(x, y). The first of these equations is the Helmholtz equation. It turns out that the Helmholtz equation can be solved explicitly only for a very limited number of regions U; examples for which analytic solutions exist are rectangular regions and the circular membrane. Part of why Kac's original problem remained unsolved for almost 30 years is that the Helmholtz equation cannot be solved for a general shape, which makes the problem trickier. It was not until 1992 that Gordon, Webb and Wolpert [GWW92] solved it by combining ideas


from group theory and linear algebra. In their article they give a counterexample of two isospectral, non-isometric regions in the Euclidean plane. This answers Kac's original question in the negative: no, you cannot hear the shape of a drum.

The details of the proof are too advanced for this paper. However, we will try to address the main concepts and ideas behind it.

2.1 Group Theory

To answer Kac's question "Can One Hear the Shape of a Drum?" we generalize the question to "Can one hear the shape of a Riemannian manifold?". By generalizing to Riemannian manifolds, group theory and linear algebra provide tools to tackle this problem, and later we can return to the Euclidean plane.

Manifolds are spaces that resemble Euclidean space locally at every point. An example of a 2-dimensional manifold is a sphere; if you zoom in enough on any point it will resemble the Euclidean plane. On the other hand, if you take a sphere with a one-dimensional string attached to it, essentially a balloon, then no matter how much you zoom in on the point where the string attaches to the sphere it will not resemble a Euclidean space. Two manifolds are said to be isospectral if they have the same eigenvalues under the Laplacian, counting multiplicities.

In their paper, C. Gordon et al. [GWW92] find a simple example of two isospectral, non-isometric regions. They do this using a theorem developed by T. Sunada and extended by P. Bérard which guarantees that two particular manifolds are isospectral under both Dirichlet and Neumann boundary conditions.

They start by creating a finite group G that contains three special elements α, β and γ which serve as a generating set for G. By finding two subgroups Γ1 and Γ2 of G which have the same linear representation but different permutation representations, they generate two Riemannian manifolds by letting Γ1 and Γ2 act by isometries. Bérard's theorem then says the manifolds will be isospectral.

The way Γ1 and Γ2 generate two isospectral planar regions is somewhat technical, and for the purpose of this paper it will suffice to illustrate how this is done. By folding and connecting edges of the flat manifold seen in Figure 1 one can create variously shaped manifolds. By following a recipe given by Brooks and Buser in a private conversation with Gordon et al., one can create two planar isospectral manifolds M1 and M2, as seen in Figure 2, by folding and connecting 7 of the unit manifolds given in Figure 1.B.


Figure 1: (A): Unit manifold M. (B): One folded unit manifold.


Figure 2: Manifolds M1 and M2.

The manifolds M1 and M2 are clearly not isometric, but they are not planar regions. We want to find non-isometric isospectral drums in the Euclidean plane. To use these manifolds to create isospectral drums one can imagine a process of flattening them. Figure 3 shows the flattening of M1 and M2, where the double-lined edges illustrate where the original manifolds connected around to the other side. C. Gordon et al. [GWW92] show in their paper that these flattenings preserve the isospectrality of the manifolds.


Figure 3: Flattened manifolds O1 and O2

By shrinking the edges where the lines are not doubled to points, one arrives at the shape shown in Figure 4. This process is just an illustration of why these figures are similar. One can show that if you shrink both O1 and O2 in this way, the generated planar domains are also isospectral. The real reason why this "shrinking" works is that we are allowed to modify the unit manifold, letting the cross shape become a square by shrinking the free edges. Why this is allowed is not rigorously discussed in the article by Gordon et al. [GWW92]; instead they refer to previous results from Buser and Semler, and independent results from Conway and Doyle. As we will show in the next section, the same domains can be generated by a different approach, and through a transplantation of waveforms we can show that they are still isospectral.


Figure 4: Shrinking O1 step by step.

2.2 Generating Isospectral Regions

By representing M1 and M2 as two Cayley graphs, as shown in Figure 5, one can transfer the manifolds' isospectral properties to another geometrical shape. The set X = {1, 2, 3, 4, 5, 6, 7} labels the different manifold elements of M1 and M2 in Figure 2. The Cayley graphs in Figure 5 encode how the manifold elements (see Figure 1) of M1 and M2 are connected to each other.

Figure 5: The Cayley graphs generating isospectral domains.


To construct two planar regions we start by modeling a triangle with edges α, β and γ. By connecting adjacent vertices in the two Cayley graphs with each respective edge we construct two different planar regions D1 and D2. This method is illustrated in Figure 6.

Figure 6: Method to generate a region from Cayley graphs and a triangle element.

Now considering the full Cayley graphs in Figure 5 we get two non-isometric and isospectral regions in the Euclidean plane:


Figure 7: Two non-isometric and isospectral regions in the Euclidean plane.

The theory of why this transfer works is too advanced for this paper; it is covered in Bérard's paper from 1992 [Bér92]. The method itself, however, is simple in practice, and we illustrate it here.

Start by defining the waveforms A–G on each triangle in D1 with boundary and continuity conditions. We can transplant these waveforms to the region D2 with linear combinations of the waveforms A–G. One such solution is shown in Figure 8.

Figure 8: Wave transplantation from D1 to D2.


We verify this solution by checking all boundary conditions on each element of D2. For example, let us take a closer look at the bottom right triangle in the region D2. The red edge should be zero under Dirichlet conditions. As we can see in D1, the red edge in G is already zero. Furthermore, the regions D and E are connected through the red edge, and by continuity they have the same values on the red edge; thus by superposing D with −E the waveform vanishes at that boundary. In the same way one can verify continuity across all connected triangles. We have shown that we can transplant a waveform of some frequency from D1 to D2, and isospectrality follows. Thus you cannot hear the shape of a drum.

It is mentioned in the paper by Gordon et al. [GWW92] that if you strike the region D1 with unit intensity at some point, you actually have to strike the region D2 in 7 different places with appropriate intensities. However, finding the eigenfunctions of these regions turns out to be a difficult task without the use of numerical methods. In the section below we investigate the isospectral domains using numerical methods.

3 Numerics on Isospectral Domains

Partial differential equations often have simple formulations; take the Helmholtz equation for example: Δu = λu on some domain with boundary conditions. Even though the equation looks simple, it is easy to find a region for which no explicit solution in simple form can be found. Numerical methods provide a way to approximate solutions when calculus fails. Although numerical solutions are not exact, they can be close enough to strengthen or even generate a hypothesis, and more often than not they are easier and quicker to find. With manageable discrete approximations we can generate approximate solutions to different problems. The reason we are using numerical methods on these domains is to visualize the solutions, and as we will see in this section some particular solutions stand out as very simple in their form. We stumbled upon these simple solutions when studying the Neumann eigenvalue problem for D1 and D2.

3.1 Discretizing the Wave Equation

Using COMSOL Multiphysics we investigate the wave equation acting on the domains D1 and D2 shown in Figure 7. We start by defining the regions D1 and D2 in COMSOL, where each leg of a triangle has unit length 1 and the hypotenuse has length √2:


Figure 9: D1 and D2 as defined in COMSOL.

We model the wave equation acting on these regions using the finite element method. By discretizing the regions into a mesh of smaller elements we can convert the wave equation into its discrete counterpart; this is done automatically in COMSOL. We choose sufficiently small elements to minimize the errors:

Figure 10: Meshing of D1 and D2.

The last thing we do before we let COMSOL do its calculations is to define the boundary conditions. According to the theory discussed in the previous section, the regions D1 and D2 should be both Dirichlet and Neumann isospectral. We start by investigating the Dirichlet spectrum and its eigenmodes. With fixed boundaries we arrive at the following eigenmodes:


Figure 11: First 4 eigenmodes of D1 and D2.


Although numerical methods can never prove isospectrality (the spectra are infinite), we can verify that the first eigenfrequencies are at least very similar. With the above mesh we verified the eigenfrequencies to be equal to three significant digits, see Figure 11. Some numerical errors show up in the fourth and fifth digits; when we make the mesh elements smaller, the values coincide to more digits.
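COMSOL handles the finite element discretization internally. To give an idea of how such a computation can be reproduced with standard open tools, the sketch below rasterizes a polygon onto a uniform grid and assembles a five-point finite-difference Laplacian with Dirichlet conditions; this is a simplified stand-in for the FEM model, and the vertex list `polygon` is a hypothetical placeholder rather than the actual coordinates of D1.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from matplotlib.path import Path

def dirichlet_eigenvalues(polygon, h=0.02, k=10):
    """First k Dirichlet eigenvalues of -Laplace on a polygonal domain,
    approximated with a five-point finite-difference stencil."""
    poly = Path(polygon)
    xs = np.arange(min(p[0] for p in polygon), max(p[0] for p in polygon) + h, h)
    ys = np.arange(min(p[1] for p in polygon), max(p[1] for p in polygon) + h, h)
    X, Y = np.meshgrid(xs, ys)
    inside = poly.contains_points(np.column_stack([X.ravel(), Y.ravel()])).reshape(X.shape)

    idx = -np.ones(X.shape, dtype=int)          # grid point -> unknown number
    idx[inside] = np.arange(inside.sum())

    rows, cols, vals = [], [], []
    for i, j in zip(*np.nonzero(inside)):
        rows.append(idx[i, j]); cols.append(idx[i, j]); vals.append(4.0 / h**2)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < X.shape[0] and 0 <= jj < X.shape[1] and inside[ii, jj]:
                rows.append(idx[i, j]); cols.append(idx[ii, jj]); vals.append(-1.0 / h**2)
            # neighbours outside the domain carry u = 0 (Dirichlet) and drop out

    A = sp.csr_matrix((vals, (rows, cols)), shape=(idx.max() + 1, idx.max() + 1))
    return spla.eigsh(A, k=k, sigma=0, which='LM')[0]

# Hypothetical polygon (replace with the actual vertices of D1 or D2)
polygon = [(0, 0), (2, 0), (2, 1), (3, 1), (3, 2), (1, 2), (1, 1), (0, 1)]
print(np.round(dirichlet_eigenvalues(polygon), 3))
```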

3.2 The Neumann Spectrum

The theory from Section 2.1 guarantees that the domains D1 and D2 are isospectral under both Dirichlet and Neumann boundary conditions. In this section we use the same method as described in Section 3.1 to visualize the Neumann spectra of the domains and their respective eigenmodes. As expected, we find that the spectra are indeed very similar; the first six non-zero eigenvalues of the Neumann spectrum are tabulated in Table 1. The Neumann spectrum of a planar domain always has λ1 = 0 with a corresponding constant eigenfunction.

j    λ_j (D1)    λ_j (D2)
2    0.84483     0.84483
3    3.2380      3.2380
4    4.2315      4.2315
5    7.4418      7.4418
6    9.8696      9.8696
7    10.912      10.912

Table 1: The first six non-zero eigenvalues of the Neumann spectrum for D1 and D2.

3.3 Sinusoidal Solutions

The values of Table 1 are generated for both regions, and with sufficiently small meshing the values coincide, as seen in the table. On closer examination of the first 50 eigenmodes, some stand out as very simple in their form. In Figure 12 these simple-looking functions are plotted. Judging from the figure, it seems that two-dimensional periodic functions of the form

$$\psi_{k_j}(x, y) = C \cos\!\left(\sqrt{\frac{\lambda_{k_j}}{2}}\, x\right)\cos\!\left(\sqrt{\frac{\lambda_{k_j}}{2}}\, y\right) \qquad (7)$$
with $C \in \mathbb{R}$, for some integers $k_j$, might solve the Helmholtz equation with Neumann boundary conditions. Here the x- and y-directions are either parallel to the straight triangle sides or to the hypotenuses. Let the spectrum for D1 be $\Sigma_{D_1} = \{\lambda_i\}_{i=1}^{\infty}$. We then have that $\Sigma_{\text{cosine}} = \{\lambda_{k_j}\}_{j=1}^{\infty} \subset \Sigma_{D_1}$.


Figure 12: The j-th eigenvalue is given by lambda(j). The figure shows eigenmodes for λ6, λ10, λ16 and λ30.

To see whether these eigenmodes really are sinusoidal functions as hypothesized in (7), we compare the hypothesis to the numerically generated eigenmodes. First we see that in the x-direction for λ10 one period has length 2, which implies λ10 = 2π² ≈ 19.739, in agreement with the numerical value. Furthermore λ30 = 4λ10, which also makes sense from the picture, since the period has halved in length, giving a factor 4 in the eigenvalue. The eigenvalue for the eigenfunction $\psi_{k_j}$ is
$$\lambda_{k_j} = \begin{cases} 2\pi^2 j^2 & \text{for propagation parallel to the straight triangle sides,}\\ \pi^2 j^2 & \text{for propagation parallel to the hypotenuses.} \end{cases}$$

We can verify these results using MATLAB. We export the eigenmode for λ10 (5270 amplitude data points) from COMSOL into an array Z, along with each respective x and y value, and compare it to a function as in (7). We let C = −max(Z) and $\lambda_{k_j} = 2\pi^2$.

To measure how well the cosine hypothesis coincides with the COMSOL solution we look at the coefficient of determination r², which measures how well a model replicates a set of data. The r-value for two sets of data is given by
$$r = \frac{n\sum_i \psi_{10}(x_i, y_i) Z_i - \left(\sum_i \psi_{10}(x_i, y_i)\right)\left(\sum_i Z_i\right)}{\sqrt{n\sum_i \psi_{10}(x_i, y_i)^2 - \left(\sum_i \psi_{10}(x_i, y_i)\right)^2}\,\sqrt{n\sum_i Z_i^2 - \left(\sum_i Z_i\right)^2}}. \qquad (8)$$
Evaluating the r-value for the COMSOL solution Z and the cosine hypothesis $\psi_{10} = \psi_{10}(x, y)$ at the same x- and y-coordinates we get $1 - r^2 \approx 10^{-13}$. This implies that our hypothesized solution deviates very little from the COMSOL solution Z, up to numerical errors.
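The comparison can be reproduced in a few lines of Python instead of MATLAB; the file name below is a hypothetical placeholder for the exported COMSOL data (columns x, y, Z), and np.corrcoef computes the same Pearson r as in (8).

```python
import numpy as np

# Hypothetical export: columns x, y, Z from COMSOL for the lambda_10 eigenmode
x, y, Z = np.loadtxt("eigenmode_lambda10.csv", delimiter=",", unpack=True)

lam = 2 * np.pi**2                       # hypothesized eigenvalue
C = -np.max(Z)                           # amplitude chosen as in the text
psi = C * np.cos(np.sqrt(lam / 2) * x) * np.cos(np.sqrt(lam / 2) * y)

# Pearson correlation between the cosine model and the COMSOL data, eq. (8)
r = np.corrcoef(psi, Z)[0, 1]
print("1 - r^2 =", 1 - r**2)
```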

Applying the same hypothesis for λ6 and λ33, one can conclude that their respective eigenfunctions are also of the form shown in equation (7), with the x- and y-axes perpendicular to the hypotenuses of the element triangles.

Even though the full Neumann spectrum $\Sigma_{D_1} = \{\lambda_i\}_{i=1}^{\infty}$ of the domains D1 and D2 remains unknown, we can show that the infinite set
$$\Sigma_{\cos} = \{\lambda_{k_j} = 2\pi^2 j^2\}_{j=0}^{\infty}$$
is a subset of $\Sigma_{D_1}$. To do this we need to take an analytic approach.

3.4 Analytic Approach to the Cosine Hypothesis

We have postulated some solutions of the Helmholtz equation for D1 to be of the form
$$\psi_{k_j}(x, y) = C \cos\!\left(\sqrt{\frac{\lambda_{k_j}}{2}}\, x\right)\cos\!\left(\sqrt{\frac{\lambda_{k_j}}{2}}\, y\right),$$
where $k_j$ is a sequence of integers, $j \in \mathbb{N}$, C is some constant and
$$\lambda_{k_j} = \begin{cases} 2\pi^2 j^2 & \text{for propagation perpendicular to the straight triangle sides,}\\ \pi^2 j^2 & \text{for propagation perpendicular to the hypotenuses.} \end{cases}$$

Observe that the x- and y-directions are defined differently for propagation perpendicular to the straight triangle sides and to the hypotenuses, and that for each j there are two eigenvalues and two eigenfunctions. These solutions obviously solve the Helmholtz equation
$$\Delta\psi_{k_j} = -\lambda_{k_j}\psi_{k_j}.$$
What remains to show is that the boundary conditions are fulfilled. Neumann conditions require the normal derivative to be zero along the boundary:
$$\frac{\partial \psi_{k_j}}{\partial \nu} = \nabla\psi_{k_j}\cdot\nu = 0.$$

For vertically oriented boundaries we have $\nu = \hat{e}_x$ (the unit x-vector) and propagation perpendicular to the straight triangle sides. We get
$$\frac{\partial \psi_{k_j}}{\partial \nu} = -C\sqrt{\frac{\lambda_{k_j}}{2}}\,\sin\!\left(\sqrt{\frac{\lambda_{k_j}}{2}}\,x\right)\cos\!\left(\sqrt{\frac{\lambda_{k_j}}{2}}\,y\right).$$


On all vertical boundaries $x = n$, where n is some integer (since each triangle side has length 1), which results in
$$\frac{\partial \psi_{k_j}}{\partial \nu} = -Cj\pi \sin(j\pi n)\cos(j\pi y) = 0.$$

We get the same result for horizontal boundaries, where $\nu = \hat{e}_y$. For the hypotenuse boundaries it gets a little bit trickier. We use a coordinate system with the x-axis parallel to a hypotenuse. We have
$$\nabla\psi_{k_j} = -C\sqrt{\frac{\lambda_{k_j}}{2}}\left[\hat{e}_x \sin\!\left(\sqrt{\frac{\lambda_{k_j}}{2}}\,x\right)\cos\!\left(\sqrt{\frac{\lambda_{k_j}}{2}}\,y\right) + \hat{e}_y \cos\!\left(\sqrt{\frac{\lambda_{k_j}}{2}}\,x\right)\sin\!\left(\sqrt{\frac{\lambda_{k_j}}{2}}\,y\right)\right].$$
We now take a look at the normal derivative when $\nu = \hat{e}_x$:
$$\frac{\partial\psi_{k_j}}{\partial\nu} = \nabla\psi_{k_j}\cdot\nu = -C\sqrt{\frac{\lambda_{k_j}}{2}}\,\sin\!\left(\sqrt{\frac{\lambda_{k_j}}{2}}\,x\right)\cos\!\left(\sqrt{\frac{\lambda_{k_j}}{2}}\,y\right).$$
For $\nu = \hat{e}_x$ we also have that $x = \sqrt{2}\,n$, where n is some integer, and since $\lambda_{k_j} = \pi^2 j^2$ in this orientation we get
$$\frac{\partial\psi_{k_j}}{\partial\nu} = -C\frac{j\pi}{\sqrt{2}}\sin(j\pi n)\cos\!\left(\frac{j\pi}{\sqrt{2}}\,y\right) = 0.$$

In the same way we get $\frac{\partial\psi_{k_j}}{\partial\nu} = 0$ for $\nu = -\hat{e}_x$, $\nu = \hat{e}_y$ and $\nu = -\hat{e}_y$. Thus the Neumann boundary conditions are satisfied for
$$\psi_{k_j}(x, y) = C\cos\!\left(\sqrt{\frac{\lambda_{k_j}}{2}}\,x\right)\cos\!\left(\sqrt{\frac{\lambda_{k_j}}{2}}\,y\right).$$
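For an independent check of this calculation, the identities used above can also be verified symbolically; the snippet below (an added illustration, using the j = 1 mode with λ = 2π²) confirms both the Helmholtz equation and the Neumann condition on a vertical boundary x = n.

```python
import sympy as sp

# Symbolic check of the cosine hypothesis (7) for one mode: j = 1, lambda = 2*pi^2
x, y = sp.symbols('x y', real=True)
n = sp.symbols('n', integer=True)

lam = 2 * sp.pi**2
psi = sp.cos(sp.sqrt(lam / 2) * x) * sp.cos(sp.sqrt(lam / 2) * y)

# Helmholtz equation: Delta(psi) + lambda*psi should simplify to 0
print(sp.simplify(sp.diff(psi, x, 2) + sp.diff(psi, y, 2) + lam * psi))   # 0

# Neumann condition on a vertical boundary x = n (integer, side length 1)
print(sp.simplify(sp.diff(psi, x).subs(x, n)))                            # 0
```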

4 Weyl’s Law

We have now shown that one cannot hear the shape of a drum; that is, the spectrum does not uniquely determine the domain or its boundary geometry. However, there still exist methods that allow us to extract information from the spectrum. As mentioned in the introduction, one of the earliest results in spectral geometry is one such method. It was derived by Hermann Weyl, who showed that there is an asymptotic relation, called Weyl's Law, between the eigenvalues and the size of the domain. Let N(λ) be the number of Dirichlet or Neumann eigenvalues of the Laplacian acting on a domain U ⊂ R^d that are less than or equal to λ. Then Weyl's Law is formulated as follows:

$$\lim_{\lambda\to\infty} \frac{N(\lambda)}{\lambda^{d/2}} = (2\pi)^{-d}\,\omega_d \operatorname{vol}(U),$$
where $\omega_d$ is the volume of the unit ball in $\mathbb{R}^d$.


4.1 Example in 1D

Let $\Sigma = \{\lambda_1, \lambda_2, \dots\}$ be the sequence of eigenvalues for a 1-dimensional string of length $L$. For a string we know the eigenvalues of the Laplacian, $\lambda_j = \frac{j^2\pi^2}{L^2}$, and $\omega_d = \omega_1 = 2$. In reality
$$N(\lambda) = \sum_{j=1}^{\infty}\Theta\!\left(\lambda - \frac{j^2\pi^2}{L^2}\right),$$
a series of Heaviside step functions, but this function is asymptotically similar to $N(\lambda) \sim \frac{\sqrt{\lambda}\,L}{\pi}$. The definition of asymptotically similar functions is $f(x) \sim g(x) \iff \lim_{x\to\infty}\frac{f(x)}{g(x)} = 1$. We can now calculate the length of the string using Weyl's Law. Putting this into Weyl's Law we get

$$\lim_{\lambda\to\infty}\frac{\sqrt{\lambda}\,L/\pi}{\lambda^{1/2}} = \frac{L}{\pi} = (2\pi)^{-1}\omega_1\operatorname{vol}(U) = \frac{\operatorname{vol}(U)}{\pi} \implies \operatorname{vol}(U) = L.$$

This example is somewhat trivial, since for a string the first eigenvalue is already enough to give us the length of the string.
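As a numerical illustration (with an arbitrarily chosen length L, not from the thesis), the counting function for a string can be evaluated directly and plugged into Weyl's Law to recover L:

```python
import numpy as np

# Numerical check of Weyl's Law in 1D: Dirichlet eigenvalues of a string
# of (assumed) length L are lambda_j = (j*pi/L)^2.
L = 2.7
lam_max = 1.0e6
j = np.arange(1, int(np.sqrt(lam_max) * L / np.pi) + 10)
eigenvalues = (j * np.pi / L) ** 2

N = np.sum(eigenvalues <= lam_max)          # counting function N(lambda)
length_estimate = np.pi * N / np.sqrt(lam_max)
print(length_estimate)                      # -> approximately 2.7
```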

4.2 Example in 2D

Consider the Laplace operator acting on a rectangular region $U = [0, a] \times [0, b]$ in $\mathbb{R}^2$ with Dirichlet boundary conditions. We have the BVP
$$\begin{cases} \Delta\psi(x, y) = -\lambda\psi & \text{in } U,\\ \psi(x, y) = 0 & \text{on } \partial U. \end{cases} \qquad (9)$$
The solutions of the BVP (9) take the form
$$\psi_{m,n}(x, y) = \sin\!\left(\frac{m\pi}{a}x\right)\sin\!\left(\frac{n\pi}{b}y\right).$$
This gives us $\lambda_{m,n} = \left(\frac{m\pi}{a}\right)^2 + \left(\frac{n\pi}{b}\right)^2$, which is the equation of an ellipse. We now want to find an expression for $N(\lambda)$. Since $\lambda_{m,n}$ is a discrete multivariate function of m and n there is no explicit expression for $N(\lambda)$. One implicit expression would be $N(\lambda) = \#\{\lambda_{m,n} = (\frac{m\pi}{a})^2 + (\frac{n\pi}{b})^2 : \lambda_{m,n} \le \lambda\}$. However, we can find a function which is asymptotically similar to $N(\lambda)$, which is all we need for Weyl's Law.

We start by plotting a grid with $m$ and $n$ in the $xy$-plane. To approximate the number of eigenvalues less than or equal to λ we make the equation $\lambda_{m,n} = (\frac{m\pi}{a})^2 + (\frac{n\pi}{b})^2$ non-discrete by exchanging $m$ and $n$ with $x$ and $y$. The area of this quarter ellipse ($x > 0$ and $y > 0$) will be $A_1 = \frac{ab\lambda}{4\pi}$, marked with the red line in Figure 13. Each eigenvalue takes up a unit area, which means we can approximate the number of eigenvalues by the area
$$A_1 = \frac{ab\lambda}{4\pi} < N(\lambda).$$

This is a lower bound for $N(\lambda)$, where $N(\lambda)$ is represented by the gray squares in Figure 13. To find an upper bound we translate the ellipse a unit length in the x- and y-directions. An upper bound is then given by
$$A_2 = \frac{ab\lambda}{4\pi} + \frac{a\sqrt{\lambda}}{\pi} + \frac{b\sqrt{\lambda}}{\pi} - 1 > N(\lambda),$$
marked with the yellow lining in Figure 13. Since $\lim_{\lambda\to\infty}\frac{A_1}{A_2} = 1$, we have $A_1 \sim A_2 \sim \frac{ab\lambda}{4\pi}$, which implies that $N(\lambda) \sim \frac{ab\lambda}{4\pi}$.

Figure 13: Counting function convergence.

We can now calculate the area of the rectangle through Weyl's Law:
$$\lim_{\lambda\to\infty}\frac{ab\lambda/(4\pi)}{\lambda^{2/2}} = \frac{ab}{4\pi} = (2\pi)^{-2}\omega_2\operatorname{vol}(U) = \frac{\operatorname{vol}(U)}{4\pi} \implies \operatorname{vol}(U) = ab.$$
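The same counting argument can be checked numerically; in the sketch below the side lengths a and b are arbitrary assumptions, and the estimate 4πN(λ)/λ approaches the true area ab as λ grows (the small deficit comes from the boundary term discussed in Section 4.4 of the outline):

```python
import numpy as np

# Numerical check of Weyl's Law for a rectangle [0,a] x [0,b] with
# Dirichlet eigenvalues lambda_{m,n} = (m*pi/a)^2 + (n*pi/b)^2.
a, b = 1.3, 2.1
lam_max = 2.0e5

m_max = int(np.sqrt(lam_max) * a / np.pi) + 1
n_max = int(np.sqrt(lam_max) * b / np.pi) + 1
m, n = np.meshgrid(np.arange(1, m_max + 1), np.arange(1, n_max + 1))
lam = (m * np.pi / a) ** 2 + (n * np.pi / b) ** 2

N = np.sum(lam <= lam_max)                  # counting function N(lambda)
area_estimate = 4 * np.pi * N / lam_max
print(area_estimate, a * b)                 # -> close to a*b = 2.73
```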

4.3 Weyl’s Law on Isospectral Domains

Let us see how well the numerical results for the eigenspectra from Section 3 approximate the area using Weyl's Law. Using the 50th eigenvalue from COMSOL, λ50 = 140.2, and N(140.2) = 50, we get vol(D1) = 4.775. If we take the 150th eigenvalue, λ150 = 477.91, we get an area vol(D1) = 3.944, which is an even better approximation of the area, which we know to be A = 3.5 considering the triangles in D1. When looking at an infinite subset of the eigenvalue spectra

References
