
Comparing Three Numerical Methods For Solving The 1D Time Dependent Schrödinger Equation

ATABAK JALALI
HUGO ÅKESSON

Degree Project in Technology, First Cycle, 15 Credits
KTH Royal Institute of Technology
Stockholm, Sweden 2021


Sammanfattning

The purpose of this report is to implement and compare three different numerical methods for solving and examining the time-dependent Schrödinger equation (TDSE), both for the analytically solved quantum harmonic oscillator and for the, to our knowledge, analytically unsolved double well potential. The numerical methods are the Crank-Nicolson method and two split operator methods of different orders. For the quantum harmonic oscillator, the exact solution was used to determine the global error of the numerical methods, while other approaches were needed to examine the double well potential (since exact solutions are lacking). For the quantum harmonic oscillator the solutions converged, and depending on the desired accuracy, either the Crank-Nicolson method or the modified split operator method could be the most efficient with respect to the global error relative to the computation time. The double well potential displayed the expected behaviours, such as harmonic oscillatory behaviour under specific conditions, as well as the tunneling effect. For very small time and space steps, the numerical solutions did not deviate much from each other, indicating that they converged toward a correct solution.


Abstract

The purpose of this thesis is to implement three numerical methods for solving and examining the time-dependent Schrödinger equation (TDSE) of the analytically solved quantum harmonic oscillator (QHO) and the, to our knowledge, analytically unsolved double well potential (DW), and to compare the numerical solutions. The methods used were the Crank-Nicolson method and two split operator methods of different orders. For the QHO, the exact solution was used to determine the errors of the methods, while for the DW, the validity of the methods had to be tested in other ways. The solutions converged for the QHO, and depending on the desired accuracy, either the Crank-Nicolson method or the modified split operator method was the most effective choice with regard to the global error and computation time. Applied to the DW potential, the numerical methods succeeded in displaying expected behaviours, such as locally behaving like a QHO under specific conditions, and displaying the quantum tunneling effect. For very small time steps and fine spatial discretizations, the solutions did not deviate much from each other, indicating convergence toward a correct solution.

Contents

1 Introduction
2 Background Material
2.1 Problem
2.2 Time evolution
2.3 The Mehler kernel
2.4 The double well potential
2.5 Numerical methods
2.5.1 Crank-Nicolson
2.5.2 Split operator method
3 Investigation
3.1 Model
3.1.1 The potentials
3.1.2 Discretization
3.2 Numerical implementation
3.2.1 Crank-Nicolson
3.2.2 2nd order split operator
3.2.3 3rd order split operator
3.3 Error analysis of the QHO
3.3.1 Error of the "exact" solution
3.3.2 Error of the numerical solutions
3.4 Analysis of the double well potential
3.5 Results
3.5.1 The quantum harmonic oscillator
3.5.2 The double well potential
3.6 Discussion
3.6.1 Integrating the kernel
3.6.2 Investigating the results from the QHO
3.6.3 Investigating the results from the double well potential
4 Summary and Conclusions

Chapter 1

Introduction

The time dependent Schrödinger equation (TDSE) is a partial differential equation encoding all the information of any non-relativistic quantum-mechanical system, and its form depends on the system in question. However, solving the TDSE analytically can be quite tedious, and is only possible in a limited number of cases [1]. One very common such case is the quantum harmonic oscillator [2]. Because of this, numerical methods have become, and continue to become, increasingly crucial for making advancements within the field [3].

When solving the TDSE numerically, it is common to use some sort of centered finite difference method (FDM) [4] for the spatial derivative, and then an implicit method for the time-stepping. One regularly used numerical method for solving the TDSE [5, 6] is the Crank-Nicolson method [7], which is of second order in both space and time and is numerically stable. Other common methods for the temporal part include the RK4 method (fourth order Runge-Kutta), which, as the name implies, has fourth order accuracy. There are also methods which do not use a finite difference method for the spatial part, such as split operator Fourier methods [7]. These methods instead rely on Fourier transforms, so the spatial accuracy is determined by the algorithms used to perform the transforms.

Due to the many applications of quantum physics and the TDSE to various areas of physics, a significant fraction of the research within the field is dedicated to testing and comparing different numerical methods [8–10], as well as improving on or creating new, more accurate and/or efficient algorithms for solving complex variations of the TDSE [3, 11]. Among others, the Crank-Nicolson method and the split operator method, and different variations of these two, are commonly used for this purpose [12–16].

It is therefore of interest to compare these methods to one another, which leads us to the purpose of this thesis.

The purpose of this thesis is to compare the Crank-Nicolson method and two split operator methods. They are first tested and compared in one analytically solved system, and then in one, to our knowledge, not analytically solved system.


Chapter 2

Background Material

2.1 Problem

This report compares the Crank-Nicolson method with two split operator methods for solving the TDSE. First, they are compared in a 1D quantum harmonic oscillator (QHO) potential, where the analytical solutions are known. This is to verify that the implementation is correct. Then they are compared in the so-called 1D double well potential, for which analytical solutions, in terms of closed-form expressions for the eigenfunctions and eigenvalues, are, to our knowledge, unknown. This is to see if the methods behave well even for potentials that are less trivial than the QHO.

2.2 Time evolution

For a time independent Hamiltonian \hat{H}, the time evolution operator \hat{U}(t) is

\hat{U}(t) = e^{-\frac{it}{\hbar}\hat{H}}, \qquad (2.1)

and acts on an initial state in the following way:

|\Psi(t)\rangle = \hat{U}(t)\,|\Psi(t=0)\rangle. \qquad (2.2)

This follows from the 1D TDSE. From now on, we will indeed restrict ourselves to 1D cases. The "components" of \hat{U}(t),

K(x, x', t) = \langle x|\,\hat{U}(t)\,|x'\rangle, \qquad (2.3)

is called the kernel (or the propagator) of the system in question. Here, \langle x| and |x'\rangle are bra and ket eigenvectors of the position operator \hat{x} with eigenvalues x and x', respectively.

Let us further assume, for simplicity, that \hat{H} has a discrete spectrum, and denote its eigenvectors and eigenvalues by \psi_n (or |n\rangle) and E_n, respectively. Then the identity operator \hat{I} is given by the expression

\hat{I} = \sum_{n=0}^{\infty} |n\rangle\langle n|. \qquad (2.4)

Using this, it can be useful to express \hat{U}(t) in the following way:

\hat{U}(t) = \hat{U}(t)\hat{I} = e^{-\frac{it}{\hbar}\hat{H}} \sum_{n=0}^{\infty} |n\rangle\langle n| = \sum_{n=0}^{\infty} e^{-\frac{it}{\hbar}\hat{H}} |n\rangle\langle n| = \sum_{n=0}^{\infty} e^{-\frac{it}{\hbar}E_n} |n\rangle\langle n|. \qquad (2.5)

Inserting this into equation (2.3) yields

K(x, x', t) = \langle x| \left( \sum_{n=0}^{\infty} e^{-\frac{it}{\hbar}E_n} |n\rangle\langle n| \right) |x'\rangle = \sum_{n=0}^{\infty} e^{-\frac{it}{\hbar}E_n} \langle x|n\rangle\langle n|x'\rangle = \sum_{n=0}^{\infty} e^{-\frac{it}{\hbar}E_n} \psi_n(x)\psi_n(x'). \qquad (2.6)

We will also need to derive an expression for \Psi(x, t) in terms of an initial wave function \Psi(x, 0) and the kernel. By using

\hat{I} = \int_{\mathbb{R}} |x'\rangle\langle x'| \, dx', \qquad (2.7)

we get

\Psi(x, t) = \langle x|\Psi(t)\rangle = \langle x|\,\hat{U}(t)\,|\Psi(0)\rangle = \langle x|\,\hat{U}(t) \int_{\mathbb{R}} |x'\rangle\langle x'|\Psi(0)\rangle \, dx' = \int_{\mathbb{R}} \langle x|\,\hat{U}(t)\,|x'\rangle\, \Psi(x', 0) \, dx' = \int_{\mathbb{R}} K(x, x', t)\, \Psi(x', 0) \, dx', \qquad (2.8)

which is the desired result.

2.3 The Mehler kernel

The Mehler kernel is the kernel of the quantum harmonic oscillator. We can derive it with the formula for the kernel derived in the previous section, because the Hamiltonian of the QHO has a pure point spectrum. It can be expressed in closed form by inserting the eigenfunctions of the QHO into equation (2.6) and then rewriting using Mehler's formula [17]. The eigenfunctions of the Hamiltonian are given by

\psi_n(x) = \frac{1}{\sqrt{2^n n!}} \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\frac{m\omega x^2}{2\hbar}} H_n\!\left(\sqrt{\frac{m\omega}{\hbar}}\, x\right), \qquad (2.9)

and the eigenvalues are

E_n = \frac{(2n+1)\hbar\omega}{2}. \qquad (2.10)

Inserting equations (2.9) and (2.10) into equation (2.6), we acquire the following series for the Mehler kernel K_Mehler:

K_{\mathrm{Mehler}}(x, x', t) = \sum_{n=0}^{\infty} \frac{e^{-i(2n+1)\frac{\omega t}{2}}}{2^n n!} \left(\frac{m\omega}{\pi\hbar}\right)^{1/2} e^{-\frac{m\omega(x^2+x'^2)}{2\hbar}} H_n\!\left(\sqrt{\frac{m\omega}{\hbar}}\,x\right) H_n\!\left(\sqrt{\frac{m\omega}{\hbar}}\,x'\right). \qquad (2.11)

For the sake of somewhat more pleasant expressions, we define

C \equiv \sqrt{\frac{m\omega}{\hbar}}, \qquad (2.12)

so that

K_{\mathrm{Mehler}}(x, x', t) = \frac{C}{\sqrt{\pi}}\, e^{-C^2\frac{x^2+x'^2}{2}}\, e^{-\frac{i\omega t}{2}} \sum_{n=0}^{\infty} \frac{e^{-in\omega t}}{2^n n!}\, H_n(Cx)\, H_n(Cx'). \qquad (2.13)

We now introduce \rho = e^{-i\omega t}, so that (2.13) becomes

K_{\mathrm{Mehler}}(x, x', t) = \frac{C}{\sqrt{\pi}}\, e^{-C^2\frac{x^2+x'^2}{2}}\, e^{-\frac{i\omega t}{2}} \sum_{n=0}^{\infty} \frac{1}{n!}\left(\frac{\rho}{2}\right)^n H_n(Cx)\, H_n(Cx'). \qquad (2.14)

The time is now ripe to introduce Mehler's formula [17], which states that

\sum_{n=0}^{\infty} \frac{1}{n!}\left(\frac{\rho}{2}\right)^n H_n(x) H_n(y) = \frac{1}{\sqrt{1-\rho^2}} \exp\left(-\frac{\rho^2(x^2+y^2) - 2\rho x y}{1-\rho^2}\right). \qquad (2.15)

With equation (2.15), the expression for the kernel can be written as

K_{\mathrm{Mehler}}(x, x', t) = \frac{C}{\sqrt{\pi}}\, e^{-C^2\frac{x^2+x'^2}{2}}\, e^{-\frac{i\omega t}{2}} \frac{1}{\sqrt{1-\rho^2}} \exp\left(-C^2\,\frac{\rho^2(x^2+x'^2) - 2\rho x x'}{1-\rho^2}\right)
= \frac{C e^{-\frac{i\omega t}{2}}}{\sqrt{\pi(1-\rho^2)}} \exp\left(-C^2\left[\frac{x^2+x'^2}{2} + \frac{\rho^2(x^2+x'^2) - 2\rho x x'}{1-\rho^2}\right]\right)
= \frac{C e^{-\frac{i\omega t}{2}}}{\sqrt{\pi(1-\rho^2)}} \exp\left(-C^2\,\frac{(1-\rho^2)(x^2+x'^2) + 2\rho^2(x^2+x'^2) - 4\rho x x'}{2(1-\rho^2)}\right)
= \frac{C e^{-\frac{i\omega t}{2}}}{\sqrt{\pi(1-\rho^2)}} \exp\left(-C^2\,\frac{(1+\rho^2)(x^2+x'^2) - 4\rho x x'}{2(1-\rho^2)}\right). \qquad (2.16)

With some simple algebra, one can show that

1 + \rho^2 = 2\rho\cos(\omega t), \qquad (2.17)
1 - \rho^2 = 2i\rho\sin(\omega t). \qquad (2.18)

Substituting (2.17) and (2.18) into (2.16), substituting \rho^2 back for e^{-2i\omega t} in the denominator of the prefactor, and simplifying a little more yields a final expression for the Mehler kernel:

K_{\mathrm{Mehler}}(x, x', t) = \frac{C e^{-\frac{i\omega t}{2}}}{\sqrt{\pi(1 - e^{-2i\omega t})}} \exp\left(iC^2\,\frac{(x^2+x'^2)\cos(\omega t) - 2 x x'}{2\sin(\omega t)}\right). \qquad (2.19)

It is common to set m = \omega = \hbar = 1, so that C = 1.

Remark 1. K_Mehler only depends on t via the functions e^{-\frac{i\omega t}{2}}, e^{-2i\omega t}, \cos(\omega t) and \sin(\omega t). Since 4\pi/\omega is a period of all four of these functions, 4\pi/\omega must be a period of K_Mehler as well.

Remark 2. The kernel becomes singular when t = n\pi/\omega, where n \in \mathbb{N}. From the definition of the kernel, (2.3), we can deduce that K(x, x', t = 0) = \delta(x - x'). Hence, we know that

\lim_{t \to 0} \frac{C e^{-\frac{i\omega t}{2}}}{\sqrt{\pi(1 - e^{-2i\omega t})}} \exp\left(iC^2\,\frac{(x^2+x'^2)\cos(\omega t) - 2 x x'}{2\sin(\omega t)}\right) = \delta(x - x'). \qquad (2.20)

We now realize that the factor e^{-\frac{i\omega t}{2}} evaluates to 1 at t = 0, and so it should be the case that

\lim_{t \to 0} \frac{C e^{-\frac{i\omega t}{2}}}{\sqrt{\pi(1 - e^{-2i\omega t})}} \exp\left(iC^2\,\frac{(x^2+x'^2)\cos(\omega t) - 2 x x'}{2\sin(\omega t)}\right) \qquad (2.21)

= \lim_{t \to 0} \frac{C}{\sqrt{\pi(1 - e^{-2i\omega t})}} \exp\left(iC^2\,\frac{(x^2+x'^2)\cos(\omega t) - 2 x x'}{2\sin(\omega t)}\right). \qquad (2.22)

Letting t \to \pi/\omega, we see that everything is the same as when t \to 0, with the exception of the cosine term flipping sign in the exponent (which corresponds to x' \to -x') and e^{-\frac{i\omega t}{2}} evaluating to -i. Completing the square in the exponent, we get that

\lim_{t \to \pi/\omega} K_{\mathrm{Mehler}}(x, x', t) = -i\,\delta(x + x'). \qquad (2.23)

With analogous reasoning, we can show that

\lim_{t \to 2\pi/\omega} K_{\mathrm{Mehler}}(x, x', t) = -\delta(x - x'); \qquad (2.24)
\lim_{t \to 3\pi/\omega} K_{\mathrm{Mehler}}(x, x', t) = i\,\delta(x + x'); \qquad (2.25)
\lim_{t \to 4\pi/\omega} K_{\mathrm{Mehler}}(x, x', t) = \delta(x - x'). \qquad (2.26)
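The closed-form kernel (2.19) is straightforward to evaluate numerically. As a hedged sketch (with m = ħ = 1 so that C = √ω; the function and argument names are ours, not taken from the thesis code):

```python
import numpy as np

def mehler_kernel(x, xp, t, omega=1.0):
    """Mehler kernel (2.19) with m = hbar = 1, so that C = sqrt(omega).

    The expression is singular whenever sin(omega * t) == 0 (see remark 2).
    """
    C = np.sqrt(omega)
    pref = C * np.exp(-0.5j * omega * t) / np.sqrt(np.pi * (1.0 - np.exp(-2j * omega * t)))
    phase = C**2 * ((x**2 + xp**2) * np.cos(omega * t) - 2.0 * x * xp) / (2.0 * np.sin(omega * t))
    return pref * np.exp(1j * phase)
```

Propagating the QHO ground state through (2.8) with a simple Riemann sum should then reproduce the state up to the phase factor e^{-iωt/2}, which is essentially the consistency check performed later in section 3.3.1.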

2.4 The double well potential

The (1D) double well potential can refer to different types of potentials with differently shaped wells. In this report, the double well potential referred to is the one given by

V(x) = \frac{a}{x_0^4}\,(x^2 - x_0^2)^2, \qquad (2.27)

where a and x_0 are constants. Note that (0, a) is the coordinate of the local maximum, while (\pm x_0, 0) are the coordinates of the two minima. The potential is shown in figure 2.1, with a = 7 and x_0 = 2.

Figure 2.1: The double well potential for a = 7 and x_0 = 2, plotted on the interval [-5, 5].

If x_0 is large, the minima of the potential should locally approximate a harmonic oscillator potential. This can be shown by Taylor-expanding the potential around x = x_0:

V(x) = V(x_0) + \frac{dV}{dx}\Big|_{x=x_0}(x - x_0) + \frac{d^2V}{dx^2}\Big|_{x=x_0}\frac{(x - x_0)^2}{2} + \ldots. \qquad (2.28)

Since V(x) is a fourth order polynomial, all terms beyond the fourth derivative vanish. Calculating the derivatives and inserting x = x_0 yields

V(x) = \frac{4a}{x_0^2}(x - x_0)^2 + \frac{4a}{x_0^3}(x - x_0)^3 + \frac{a}{x_0^4}(x - x_0)^4. \qquad (2.29)

If we define u = x - x_0 and approximate the second order term with a harmonic oscillator \frac{m\omega^2 u^2}{2}, we find that the frequency of such a harmonic oscillator must be \omega = \frac{1}{x_0}\sqrt{\frac{8a}{m}}. Looking at the error terms, we see that, relative to the second order term, they tend to 0 as x_0 \to \infty. As such, the broader the barrier between the two wells, the more the potential will look like a QHO locally inside them. This can be seen in figure 2.2, where an additional QHO with the corresponding frequency \omega is drawn in the right well.
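As a quick numerical sanity check of this expansion (a sketch with names of our own; the thesis only states the resulting frequency), the curvature of the double well at x = x_0 should equal m\omega^2 with \omega as above:

```python
import numpy as np

def V_dw(x, a, x0):
    """The double well potential (2.27)."""
    return a / x0**4 * (x**2 - x0**2)**2

def omega_well(a, x0, m=1.0):
    """Frequency of the QHO matching the curvature of V_dw at x = x0."""
    return np.sqrt(8 * a / m) / x0
```

A central finite difference of V_dw at x = x_0 reproduces m\omega^2 = 8a/x_0^2, which is the matching condition used for figure 2.2.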

Figure 2.2: The double well potential for a = 20 and x_0 = 15, together with a QHO potential centered around x = x_0. The QHO potential has parameters such that the two potentials' 1st and 2nd derivatives coincide at x = x_0.

2.5 Numerical methods

Numerical methods for solving partial differential equations often discretize space and time. We will introduce some notation for this. Suppose that we seek a solution \Psi(x, t) where x \in [x_0, x_{N_x}] and t \in [0, t_{N_t}], where N_x and N_t denote the number of sub-intervals that these intervals are split into. The numerical method in question will typically give approximations to \Psi(x_n, t_j) =: \Psi_n^j, which we will call u_n^j, where x_n \in [x_0, x_{N_x}] and t_j \in [0, t_{N_t}] are some set of points in space and time, respectively. Also, we define \vec{u}^j := (u_0^j, \ldots, u_{N_x}^j). In this report, we will use equidistantly placed points:

x_n = x_0 + n\Delta x, \quad n = 0, 1, \ldots, N_x;
t_j = j\Delta t, \quad j = 0, 1, \ldots, N_t. \qquad (2.30)

Remark 3. It is not uncommon to use non-homogeneous discretizations! For instance, when there are singularities in the potential, it may be necessary to have finer meshes in the vicinity of these singularities, so that the numerical methods are able to make more accurate calculations, yet not sacrifice too much computational time when solving sufficiently far away from the singularities.

All three methods will generate an approximation of \vec{\Psi}^{j+1} (except for its boundary points) using \vec{\Psi}^j. We will use them to calculate \vec{u}^{j+1} from \vec{u}^j, together with some boundary conditions. So, given an initial condition

\vec{u}^0 := \vec{\Psi}^0, \qquad (2.31)

these methods can be used sequentially to generate \vec{u}^j, for every j = 0, 1, \ldots, N_t. We define the global error as

\|\vec{\Psi}^{N_t} - \vec{u}^{N_t}\|_2, \qquad (2.32)

where \|\cdot\|_2 denotes the \ell^2-norm defined as

\|\vec{y}\|_2 = \sqrt{\sum_i |y_i|^2}. \qquad (2.33)

If the global error is O(\Delta t^n + \Delta x^m), the method is said to be of nth order in time and mth order in space.

2.5.1 Crank-Nicolson

The Crank-Nicolson method, when applied to the Schrödinger equation, performs the following approximation at each time step [7]:

(2 + i\Delta t H/\hbar)\,\vec{u}^{j+1} = (2 - i\Delta t H/\hbar)\,\vec{u}^j. \qquad (2.34)

Here, H denotes the discretized Hamiltonian acquired by replacing the second order space derivative by some finite difference scheme. The order of the scheme used will determine the order of the method in x. Furthermore, Crank-Nicolson is inherently of 2nd order in time, and is unitary as well as unconditionally stable [7]. It is, however, implicit in time, i.e. (2.34) is a system of equations that needs to be solved using some method. H is often tridiagonal or, more generally, banded, and then specialized methods can be used to solve (2.34) with time complexity O(N_x).

2.5.2 Split operator method

Split operator methods revolve around approximating the time evolution operator, for a small time step \Delta t, as

\hat{U}(\Delta t) = e^{-i\Delta t \hat{H}/\hbar} \approx e^{\Delta t \hat{A}_1} e^{\Delta t \hat{A}_2} \cdots e^{\Delta t \hat{A}_n}, \qquad (2.35)

where

-i\Delta t \hat{H}/\hbar = \sum_i \Delta t \hat{A}_i. \qquad (2.36)

Note that (2.35) is exact if all the \hat{A}_i commute with each other, but is otherwise generally just an approximation. The accuracy, or order, of this approximation is determined by the choice of the \hat{A}_i. For the choice

\hat{A}_1 = -\frac{i}{\hbar}\frac{V(\hat{x})}{2}; \quad \hat{A}_2 = -\frac{i}{\hbar}\frac{\hat{p}^2}{2m}; \quad \hat{A}_3 = -\frac{i}{\hbar}\frac{V(\hat{x})}{2}, \qquad (2.37)

the error in the approximation is O(\Delta t^3). However, when used at each of the N_t time steps to approximate \hat{U}(t), the (global) error becomes O(\Delta t^2) [7, 18], which makes this method second order in time.

One might wonder what we have gained by performing such a splitting. The point is that, in order to calculate |\Psi(t)\rangle = e^{-i\Delta t \hat{H}/\hbar}|\Psi(0)\rangle, one essentially has to know the eigenvalues of \hat{H}, and express |\Psi(0)\rangle in terms of \hat{H}'s eigenvectors. Even if, for a given \hat{H}, the eigenvalues and eigenvectors are known, there might be infinitely many. Often, one instead splits e^{-i\Delta t \hat{H}/\hbar} into factors which only contain the kinetic part or the potential part of the Hamiltonian, as in (2.37). Then, the individual exponential operators act in a simple way:

\langle x|\, e^{-\frac{i\Delta t}{\hbar}\frac{V(\hat{x})}{2}} |\Psi(t)\rangle = e^{-\frac{i\Delta t}{\hbar}\frac{V(x)}{2}} \Psi(x, t); \qquad (2.38)

e^{-\frac{i\Delta t}{\hbar}\frac{\hat{p}^2}{2m}} |\Psi(t)\rangle = e^{-\frac{i\Delta t}{\hbar}\frac{\hat{p}^2}{2m}} \int \frac{|p\rangle\langle p|}{2\pi\hbar}\, dp\, |\Psi(t)\rangle = \frac{1}{2\pi\hbar} \int e^{-\frac{i\Delta t}{\hbar}\frac{p^2}{2m}} |p\rangle\langle p|\Psi(t)\rangle\, dp. \qquad (2.39)

Now, consider

\langle p|\Psi(t)\rangle = \langle p| \int_{-\infty}^{\infty} |x\rangle\langle x|\, dx\, |\Psi(t)\rangle = \int_{-\infty}^{\infty} \langle p|x\rangle\, \Psi(x, t)\, dx = \int_{-\infty}^{\infty} e^{-ixp/\hbar}\, \Psi(x, t)\, dx = \tilde{\Psi}(p/2\pi\hbar, t), \qquad (2.40)

where we use the following convention for the Fourier transform:

\tilde{\Psi}(k, t) \equiv \hat{F}[\Psi(x, t)] \equiv \int e^{-i2\pi x k}\, \Psi(x, t)\, dx. \qquad (2.41)

We can now substitute (2.40) into (2.39), as well as project onto \langle x|, to get

\langle x|\, e^{-\frac{i\Delta t}{\hbar}\frac{\hat{p}^2}{2m}} |\Psi(t)\rangle = \frac{1}{2\pi\hbar} \int \underbrace{e^{ixp/\hbar}}_{=\langle x|p\rangle}\, e^{-\frac{i\Delta t}{\hbar}\frac{p^2}{2m}}\, \tilde{\Psi}(p/2\pi\hbar, t)\, dp = \int e^{i2\pi x k}\, e^{-\frac{i\Delta t}{\hbar}\frac{(2\pi\hbar k)^2}{2m}}\, \tilde{\Psi}(k, t)\, dk. \qquad (2.42)

This can be rewritten as

\langle x|\, e^{-\frac{i\Delta t}{\hbar}\frac{\hat{p}^2}{2m}} |\Psi(t)\rangle = \hat{F}^{-1}\!\left[ e^{-\frac{i\Delta t}{\hbar}\frac{(2\pi\hbar k)^2}{2m}}\, \hat{F}[\Psi(x, t)] \right]\!(x). \qquad (2.43)

In order for a computer to be able to (approximately) calculate this, the Fourier transforms can be replaced by fast (discrete) Fourier transforms (FFT).
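As an illustration (our own minimal check, with ħ = m = 1 and the time step written simply as t), applying the FFT-based kinetic factor of (2.43) to a Gaussian reproduces the well-known analytic free-particle spreading:

```python
import numpy as np

# Free evolution of the Gaussian psi0(x) = pi^(-1/4) exp(-x^2/2); analytically,
# |psi(x,t)|^2 = exp(-x^2/(1+t^2)) / sqrt(pi (1+t^2)).
dx = 0.1
x = np.arange(-40, 40, dx)
k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)   # angular wave numbers
t = 1.0
psi0 = np.pi**(-0.25) * np.exp(-x**2 / 2)
psi_t = np.fft.ifft(np.exp(-0.5j * t * k**2) * np.fft.fft(psi0))
```

Because the DFT assumes periodicity, the domain here is chosen wide enough that the wave function is negligible at the boundaries.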

Increased order

The higher-order accuracy split operator method [11] builds on the method in the previous subsection. Assume we have an operator e^{(\hat{A}_1 + \hat{A}_2)\Delta t} and define the operator

S_2(\Delta t) = e^{\frac{\hat{A}_1}{2}\Delta t}\, e^{\hat{A}_2 \Delta t}\, e^{\frac{\hat{A}_1}{2}\Delta t}. \qquad (2.44)

From equation [10] in [11], we have that

e^{(\hat{A}_1 + \hat{A}_2)\Delta t} = S_3(\Delta t) + C\Delta t^4 + O(\Delta t^5), \qquad (2.45)

where C is some combination of commutators between \hat{A}_1 and \hat{A}_2, and

S_3(\Delta t) = S_2(s\Delta t)\, S_2((1-2s)\Delta t)\, S_2(s\Delta t), \qquad (2.46)

where s is a solution to the equation

2s^3 + (1-2s)^3 = 0. \qquad (2.47)

Choosing \hat{A}_1 and \hat{A}_2 as in equation (2.37), this method should, according to [11], have a global error of order 3.

Chapter 3

Investigation

3.1 Model

The numerical methods will be used to solve the 1-dimensional TDSE, given by

\hat{H}\Psi(x, t) = i\hbar\frac{\partial}{\partial t}\Psi(x, t). \qquad (3.1)

Here, \hat{H} is the Hamiltonian operator of the system and can be written as

\hat{H} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x), \qquad (3.2)

where V(x) is the potential. As previously mentioned, we will consider two different potentials.

3.1.1 The potentials

The quantum harmonic potential is given by

V_{\mathrm{HO}}(x) = \frac{m\omega^2 x^2}{2}, \qquad (3.3)

where m is the mass of the considered particle, and the frequency \omega describes the strength of the QHO.

The double well potential is given by

V(x) = \frac{a}{x_0^4}(x^2 - x_0^2)^2, \qquad (3.4)

and is explained in more detail in section 2.4.

3.1.2 Discretization

For more details and an introduction to the notation used, see the beginning of section 2.5. The intervals [-10, 10] and [-15, 15] were used for simulations in the QHO and the double well potential, respectively. Also, since both potentials are quite large outside of these intervals, the following boundary conditions were used:

u_0^j = u_{N_x}^j = 0, \quad j = 0, 1, \ldots, N_t. \qquad (3.5)

The grid sizes of these intervals varied between different simulations for various reasons, but mostly for computational time purposes.


3.2 Numerical implementation

3.2.1 Crank-Nicolson

A simple second order finite difference scheme was used to approximate the second derivative in the Schrödinger equation,

\frac{\partial^2}{\partial x^2}\Psi_n^j \approx \frac{\Psi_{n-1}^j - 2\Psi_n^j + \Psi_{n+1}^j}{(\Delta x)^2}, \qquad (3.6)

which resulted in the ((N_x+1) \times (N_x+1))-matrix

H = -\frac{\hbar^2}{2m(\Delta x)^2} \begin{pmatrix} -2 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ & & & 1 & -2 \end{pmatrix} + \begin{pmatrix} V(x_0) & & & & \\ & V(x_1) & & & \\ & & \ddots & & \\ & & & V(x_{N_x-1}) & \\ & & & & V(x_{N_x}) \end{pmatrix}. \qquad (3.7)

Through this scheme, the method should become a 2nd order method in space. Inserting into equation (2.34) and calculating the right-hand side with simple matrix-vector multiplication, an equation of the form

Ax = b \qquad (3.8)

was acquired. This equation was then solved by scipy.linalg's built-in solve_banded function, which utilizes the fact that A is tridiagonal.
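A minimal sketch of one Crank-Nicolson step along these lines (assuming ħ = m = 1 and homogeneous Dirichlet boundaries; the variable and function names are ours, not taken from the thesis code):

```python
import numpy as np
from scipy.linalg import solve_banded

def crank_nicolson_step(u, V, dx, dt):
    """One step of (2.34) with the tridiagonal H of (3.7), hbar = m = 1."""
    N = len(u)
    c = 1.0 / (2 * dx**2)                 # hbar^2 / (2 m dx^2)
    diag = 2 * c + V                      # main diagonal of H
    off = -c * np.ones(N - 1)             # sub/superdiagonals of H
    # Right-hand side (2 - i dt H) u, using the tridiagonal structure of H.
    Hu = diag * u
    Hu[:-1] = Hu[:-1] + off * u[1:]
    Hu[1:] = Hu[1:] + off * u[:-1]
    b = 2 * u - 1j * dt * Hu
    # A = 2 I + i dt H in the banded storage expected by solve_banded.
    ab = np.zeros((3, N), dtype=complex)
    ab[0, 1:] = 1j * dt * off             # superdiagonal
    ab[1, :] = 2 + 1j * dt * diag         # main diagonal
    ab[2, :-1] = 1j * dt * off            # subdiagonal
    return solve_banded((1, 1), ab, b)
```

Since the Cayley form (2 - i∆tH)/(2 + i∆tH) is unitary for Hermitian H, the ℓ²-norm of the solution is preserved up to round-off, which is a useful check on the implementation.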

3.2.2 2nd order split operator

The split operator method (for a time-independent Hamiltonian) iterates forward in time via

\Psi^{j+1} = e^{-\frac{i\Delta t}{\hbar}\hat{H}}\,\Psi^j. \qquad (3.9)

Splitting up the exponent according to equation (2.37), equation (3.9) becomes

\Psi^{j+1} \approx e^{-\frac{i\Delta t}{2\hbar}V(x)}\, e^{-\frac{i\Delta t}{2m\hbar}\hat{p}^2}\, e^{-\frac{i\Delta t}{2\hbar}V(x)}\,\Psi^j. \qquad (3.10)

We can now evaluate at x = x_n (project onto \langle x_n|), so that we can use (2.43), which gives a concrete expression for how the middle factor acts on a wave vector, using Fourier transforms, to get

\Psi_n^{j+1} \approx e^{-\frac{i\Delta t}{2\hbar}V(x_n)}\, \hat{F}^{-1}\!\left[ e^{-\frac{i\Delta t}{\hbar}\frac{(2\pi\hbar k)^2}{2m}}\, \hat{F}\!\left[ e^{-\frac{i\Delta t}{2\hbar}V(x)}\,\Psi(x, t_j) \right] \right]\!(x_n). \qquad (3.11)

If we now substitute the continuous Fourier transform for the discrete Fourier transform, denoted by F, we get the 2nd order split operator approximation u_n^j of \Psi_n^j:

\vec{u}^{j+1} = e^{-\frac{i\Delta t}{2\hbar}V}\, F^{-1}\!\left[ e^{-\frac{i\Delta t}{\hbar}\frac{(2\pi\hbar K)^2}{2m}}\, F\!\left[ e^{-\frac{i\Delta t}{2\hbar}V}\,\vec{u}^j \right] \right]. \qquad (3.12)

Here, K is a diagonal matrix with the sample frequencies for the discrete Fourier transform as its diagonal entries. Furthermore, V is the matrix

V = \begin{pmatrix} V(x_0) & & & & \\ & V(x_1) & & & \\ & & \ddots & & \\ & & & V(x_{N_x-1}) & \\ & & & & V(x_{N_x}) \end{pmatrix}. \qquad (3.13)

When implementing (3.12) in Python, the methods numpy.fft.fft and numpy.fft.ifft, from the NumPy package, were used for the discrete Fourier transform and its inverse, respectively.
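One step of (3.12) can be sketched as follows (ħ = m = 1 assumed, names our own; np.fft.fftfreq supplies the sample frequencies that make up K):

```python
import numpy as np

def split_step2(u, V, dx, dt):
    """One 2nd order split operator step (3.12), with hbar = m = 1."""
    k = 2 * np.pi * np.fft.fftfreq(len(u), d=dx)   # angular sample frequencies
    half_V = np.exp(-0.5j * dt * V)                # e^{-i dt V / 2}
    kinetic = np.exp(-0.5j * dt * k**2)            # e^{-i dt p^2 / (2m)}
    return half_V * np.fft.ifft(kinetic * np.fft.fft(half_V * u))
```

All three factors have unit modulus in their respective representations, so this step, like Crank-Nicolson, preserves the ℓ²-norm exactly (up to round-off).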

3.2.3 3rd order split operator

The operators \hat{A}_1 and \hat{A}_2 in equation (2.44) were defined as in equation (2.37). Then, the same splitting of the time evolution operator as described in equation (2.45) was used, but with S_2(\Delta t) approximated by \tilde{S}_2(\Delta t), which denotes the same evolution as used in (3.12), that is:

\tilde{S}_2(\Delta t)[\vec{u}^j] := e^{-\frac{i\Delta t}{2\hbar}V}\, F^{-1}\!\left[ e^{-\frac{i\Delta t}{\hbar}\frac{(2\pi\hbar K)^2}{2m}}\, F\!\left[ e^{-\frac{i\Delta t}{2\hbar}V}\,\vec{u}^j \right] \right]. \qquad (3.14)

Using equation (2.46), the third order method is then given by

\vec{u}^{j+1} = \left( \tilde{S}_2(s\Delta t) \circ \tilde{S}_2((1-2s)\Delta t) \circ \tilde{S}_2(s\Delta t) \right)\vec{u}^j. \qquad (3.15)

Note that there is function composition between the \tilde{S}_2 factors, meaning it is not simple matrix-vector multiplication. Now, it is necessary to choose one of the three solutions to (2.47), to use in (2.46). Any solution should work, according to [11], so the real solution was chosen:

s = 1.35120719196. \qquad (3.16)
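The resulting composition (3.15) is then just three applications of the second order step with scaled time steps (a sketch under the same assumptions as before; the second order step is repeated here so the fragment is self-contained):

```python
import numpy as np

def split_step2(u, V, dx, dt):
    """2nd order split operator step (3.12), hbar = m = 1."""
    k = 2 * np.pi * np.fft.fftfreq(len(u), d=dx)
    half_V = np.exp(-0.5j * dt * V)
    return half_V * np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(half_V * u))

def split_step3(u, V, dx, dt):
    """3rd order composition (3.15) with the real solution s of (2.47)."""
    s = 1.35120719196
    u = split_step2(u, V, dx, s * dt)
    u = split_step2(u, V, dx, (1 - 2 * s) * dt)
    return split_step2(u, V, dx, s * dt)
```

Note that the middle sub-step uses a negative time step, (1 - 2s)∆t < 0; this is intrinsic to the composition and not a bug.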

3.3 Error analysis of the QHO

For all simulations regarding the quantum harmonic oscillator, \omega = \hbar = m = 1. Since the QHO is an analytically solvable system, it should be possible to calculate the error of the numerical solutions. However, since the analytical solution cannot, in general, be expressed without an integral or series, it too needs to be approximated in some way.

3.3.1 Error of the "exact" solution

The analytic solution can be expressed in various ways, among others as in equation (2.8); that is, as an integral of the kernel times the initial wave function. Since a closed form expression for the kernel was derived in (2.19), the only thing that needed to be approximated was the integral. For this, a simple midpoint integration rule, i.e. a Riemann sum, was used. We will denote the integration step size by \Delta x_{int}. The convergence of this integration was checked before moving on to the numerical methods, so as to make sure that the exact solution was reliable. One of the methods for checking the convergence of the integral was as follows:

1. Choose a large \Delta x_{int} and a point in time far away from the singularities discussed in remark 2, propagate using the integrator, and cache the resulting wave-vector.

2. Choose a smaller ∆xint and propagate the same initial wave to the same point in time, this time with the smaller ∆xint.

3. Calculate the `2-norm of the difference between the latest propagated wave-vector and the previous one.

4. Cache the latest wave-vector as the previous one.

5. Repeat steps 2-4 and see if the norm converges to zero.

The time t = 0.5π was chosen to propagate to. However, a time close to a singularity was also chosen and tested, namely t = 0.98π. Another method for checking the convergence was to plot the `2-norm of the “exact” solution versus time, for some fixed ∆xint, to see if and where it deviates from 1.
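The steps above can be sketched as follows (a hedged reimplementation with m = ω = ħ = 1; the grid choices and names here are ours, not the thesis's):

```python
import numpy as np

def propagate_kernel(psi0, x, t, dx_int):
    """Midpoint-rule evaluation of (2.8) with the Mehler kernel (2.19)."""
    xi = np.arange(-10, 10, dx_int) + dx_int / 2        # midpoints
    pref = np.exp(-0.5j * t) / np.sqrt(np.pi * (1 - np.exp(-2j * t)))
    phase = ((x[:, None]**2 + xi[None, :]**2) * np.cos(t)
             - 2 * x[:, None] * xi[None, :]) / (2 * np.sin(t))
    return (pref * np.exp(1j * phase) * psi0(xi)[None, :]).sum(axis=1) * dx_int

x = np.linspace(-10, 10, 101)
psi0 = lambda s: np.pi**(-0.25) * np.exp(-s**2 / 2)
t = 0.5 * np.pi
prev = propagate_kernel(psi0, x, t, 2.0)    # step 1: large dx_int
errors = []
for dx_int in [1.0, 0.5, 0.25, 0.125]:      # steps 2-5: halve dx_int
    cur = propagate_kernel(psi0, x, t, dx_int)
    errors.append(np.linalg.norm(cur - prev))   # l2-difference, step 3
    prev = cur
```

The successive differences drop to essentially machine precision once \Delta x_{int} is small enough, mirroring the behaviour reported in table 3.1.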

3.3.2 Error of the numerical solutions

Having tested the convergence of the integration method, it was regarded as an exact solution, in the following sense. Let \vec{u}^j denote the aforementioned numerical integration approximation of \vec{\Psi}^j, and \vec{U}^j the approximation of \vec{\Psi}^j by some other method (Crank-Nicolson, split operator, etc.). The global error of this method is then, as defined in (2.32),

\|\vec{U}^j - \vec{u}^j\|_2, \qquad (3.17)

where \|\cdot\|_2 denotes the \ell^2-norm, defined in (2.33). Often, though, N_x was lower for \vec{U}^j than for \vec{u}^j. This is because N_x does not affect the accuracy of \vec{u}^j, just its number of grid points, which in turn increases the computation time; N_x does, however, matter for the accuracy of the other numerical methods \vec{U}^j, in general. It is therefore beneficial to construct two overlapping grids, and use the finer grid for \vec{u}^j and the coarser for \vec{U}^j. The longer vector \vec{u}^j then needs to be trimmed to contain values only at the coarser grid, before calculating the error in (3.17).
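The trimming can be done by simple slicing, assuming the fine grid refines the coarse one by an integer factor r and shares its endpoints (an illustrative sketch; r and the stand-in solution here are hypothetical):

```python
import numpy as np

r = 4                                        # assumed refinement factor
x_coarse = np.linspace(-10, 10, 101)         # grid of the method under test
x_fine = np.linspace(-10, 10, 100 * r + 1)   # grid of the "exact" solution
u_fine = np.exp(-x_fine**2 / 2)              # stand-in for the integrated solution
u_trimmed = u_fine[::r]                      # keep every r-th point
err = np.linalg.norm(u_trimmed - np.exp(-x_coarse**2 / 2))
```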

3.4 Analysis of the double well potential

The double well potential was examined in three different ways. These examinations had in common that the initial wave function was the ground state wave function of a QHO, and was centered around x = x0 without any additional momentum.

The first examination revolved around the numerical solutions behaving like that of a QHO centered around x = x0, the right minimum of the DW potential, when x0 is large, based on the reasoning in section 2.4. This was tested in the following way.

1. Choose the parameters x0 and a for the double-well potential, with x0 large so that the wells are far apart.


2. Now place a QHO in the right well of the DW potential with ω according to section 2.4, so that it approximates the DW potential around x = x0.

3. Choose the initial wave function to be the ground state wave function for the aforementioned QHO.

4. Evolve this wave function in the DW potential with the three numerical methods, and evolve it in the QHO with a sufficiently accurate method of choice (in our case the modified split operator method).

5. Compare the numerical solutions from the DW potential to the solution from the QHO at each time step using the `2-norm.

The second examination was to evolve one and the same initial state using all three numerical methods, and measure how much they differed from each other at some point in time, while simultaneously varying Nx and Nt between simulations to see how these differences changed.

The final way of examining the numerical methods for the DW potential relied on the quantum tunneling effect. From section 2.4, it is expected to observe the quantum tunneling effect for small values of x_0 and a. Hence, by fixing x_0 and decreasing a from a large value, we should see more and more tunneling. The "amount" of tunneling at a specific point in time was measured by calculating the norm of the numerical solution over the left half of the computational interval (i.e. over [x_0, 0], where x_0 here denotes the left grid endpoint). This amount was plotted against time for some different values of a, for each of the three numerical methods.
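The tunneling measure can be sketched as follows (names are ours; the thesis does not give its implementation):

```python
import numpy as np

def left_norm(u, x, dx):
    """l2-norm of the solution restricted to the left half of the grid."""
    return np.sqrt(np.sum(np.abs(u[x < 0])**2) * dx)
```

For a state fully localized in the right well this measure is near zero, and it grows as probability density tunnels through the barrier.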

3.5 Results

From here on, "Split operator (2)" represents the ordinary split operator method, while "Split operator (3)" represents the modified split operator method, which should be of order 3.

3.5.1 The quantum harmonic oscillator

The results from section 3.3.1 are presented in tables 3.1 and 3.2:

Previous ∆x_int | Current ∆x_int | Error
2.0             | 1.0            | 1.90045
1.0             | 0.5            | 1.48780
0.5             | 0.25           | 0.05579
0.25            | 0.125          | 8.16362 · 10^-16
0.125           | 0.0625         | 8.30857 · 10^-16
0.0625          | 0.03125        | 8.79220 · 10^-16

Table 3.1: This table shows the difference between Ψ(t) approximated by integrating the kernel using the "Current ∆x_int", and Ψ(t) approximated by integrating the kernel with the "Previous ∆x_int". When both ∆x_int get small enough, this difference, or error, drops dramatically. A more detailed description is found in section 3.3.1. The simulations were run with ∆x = 0.2 and t = 0.5π.

Previous ∆x_int | Current ∆x_int | Error
2.0             | 1.0            | 3.70620
1.0             | 0.5            | 2.65783
0.5             | 0.25           | 1.69163
0.25            | 0.125          | 1.49500
0.125           | 0.0625         | 0.90726
0.0625          | 0.03125        | 5.06626 · 10^-15

Table 3.2: This table shows the difference between Ψ(t) approximated by integrating the kernel using the "Current ∆x_int", and Ψ(t) approximated by integrating the kernel with the "Previous ∆x_int". When both ∆x_int get small enough, this difference, or error, drops dramatically. A more detailed description is found in section 3.3.1. The simulations were run with ∆x = 0.2 and t = 0.98π.

When simulating the "exact" solution for one period (4π) and plotting the difference between the norm and 1 at each point in time, figures 3.1 and 3.2 were obtained. The sharp peaks seen in figure 3.1 occur when the kernel becomes singular (explained in remark 2). Plots 3.3 and 3.4 were acquired in a similar way, but plotted against sin t, since sin t → 0 seems to be the reason for the divergence of the numerical integration.


Figure 3.1: A plot of the error of the exact solution, \|\Psi\|_2 - 1, at every point in time. The simulation was over t = 4π seconds, with ∆t = π/50 and ∆x_int = 0.1. The sharp peaks seen are when the kernel becomes singular and the midpoint rule integration diverges (see remark 2).

Figure 3.2: The same as figure 3.1, but zoomed in on a sub-interval of [0, π]. This is far away from the points in time where the kernel becomes singular.


Figure 3.3: A plot of the error \|\Psi\|_2 - 1 against sin(t) for times in the interval [0, π/2]. The simulation used ∆t = π/400 and ∆x_int = 0.1. The sharp peak seen close to t = 0 is where the kernel becomes singular and the midpoint rule used for integrating diverges.

Figure 3.4: The same as figure 3.3, but zoomed in beyond sin(t) = 0.2.


Now, the focus will shift to the numerical methods. Figure 3.5 displays the global error of each method versus time step, for a fixed space step. A more detailed explanation of exactly how the error was determined is found in section 3.3.2. Figures 3.6, 3.7 and 3.8 are zoomed-in versions of figure 3.5.

Figure 3.5: A plot of the global error in Ψ versus different values of the time step ∆t, for a simulation time of 2 seconds. The plot is in logarithmic scale on both axes, with base 2. Here, ∆x = 0.02 and ∆x_int = 0.04.

Figure 3.6: The same plot as figure 3.5, but zoomed in on the Crank-Nicolson graph. The slope of the graph is about 2 units vertically per unit horizontally.


Figure 3.7: The same plot as figure 3.5, but zoomed in on the Split operator (2) graph. The slope of the graph is about 2 units vertically per unit horizontally.

Figure 3.8: The same plot as figure 3.5, but zoomed in on the Split operator (3) graph. The slope of the graph is about 4 units vertically per unit horizontally.

Figure 3.9 also displays the global error of Ψ, however this time for different space steps ∆x. The time step was appropriately chosen and kept fixed. Figure 3.10 zooms in specifically on the Crank-Nicolson graph.


Figure 3.9: A plot of the global error in Ψ versus different values of the space step ∆x, for a simulation time of 2 seconds. The plot is in logarithmic scale on both axes, with base 2. Here, ∆t = 4.5π/(40 · 2^7) and ∆x_int = 0.04.

Figure 3.10: The same plot as figure 3.9, but zoomed in on the Crank-Nicolson graph. The slope of the graph is about 2 units vertically per unit horizontally.

Finally, two plots for the QHO were generated in which ∆t was varied and, for each time step, the global error of each method was compared to the computation time. This is shown in figures 3.11 and 3.12. For each graph, moving point-wise from left to right corresponds to ∆t decreasing by a factor of 2.


Figure 3.11: The error of each method vs the computational time when keeping

∆x = 0.02 constant and varying ∆t.

Figure 3.12: The error of each method vs the computational time when keeping

∆x = 0.01 constant and varying ∆t.

3.5.2 The double well potential

In the language of section 3.4, figure 3.13 is the result of the first examination. Here, a = 10 and x0 = 10.


Figure 3.13: The norm of the difference ΨDW(t) − ΨQHO(t) over time. ΨDW(t) is evolved in the double well potential and ΨQHO(t) is evolved in a QHO centered around x0 with ω according to section 2.4, and the two have the same initial state. This initial state is the ground state of the same QHO.

Since the potentials approximate each other around x0, we expect the plotted difference to be small.

The second examination yielded figures 3.14 and 3.15. Here, x0 = a = 1. The initial state was chosen in the same way as for figure 3.13, but with different parameters now that x0 and a are different. In figure 3.14, the numerical methods are compared to one another after having simulated for t = 2π seconds, for different time steps ∆t and a fixed space step. Figure 3.15 repeats the comparison, but with a fixed time step and a varying space step instead.


Figure 3.14: The norm of the difference Ψi(t) − Ψj(t), where i, j are different numerical methods, versus ∆t. Every numerical method is compared to all other methods. Here, ∆x = 0.015.


Figure 3.15: The norm of the difference Ψi(t) − Ψj(t), where i, j are different numerical methods, versus ∆x. Every numerical method is compared to all other methods. Here, ∆t is of the order 2^−8. Note that the green plot almost covers the orange one.


Finally, figures 3.16, 3.17 and 3.18 display how ‖Ψ(t)‖₂ changes over time when the norm is taken only over the left half of the real axis, for different values of a and with x0 = 1.

Figure 3.16: This figure shows ‖Ψ(t)‖₂ over time for the solution to the Crank-Nicolson method, but where the norm is taken only over the left half of the real axis, and a is varied between 1 and 11. The simulation ran for t = 4π seconds, with x0 = 1, ∆t = π/500, and ∆x = 0.03.

Figure 3.17: This figure shows ‖Ψ(t)‖₂ over time for the solution to the regular split operator method, but where the norm is taken only over the left half of the real axis, and a is varied between 1 and 11. The simulation ran for t = 4π seconds, with x0 = 1, ∆t = π/500, and ∆x = 0.03.


Figure 3.18: This figure shows ‖Ψ(t)‖₂ over time for the solution to the modified split operator method, but where the norm is taken only over the left half of the real axis, and a is varied between 1 and 11. The simulation ran for t = 4π seconds, with x0 = 1, ∆t = π/500, and ∆x = 0.03.

3.6 Discussion

3.6.1 Integrating the kernel

In figure 3.1, we see that the solution diverges for some time points. This is not very surprising in light of Remark 2, where it is explained that the kernel becomes a delta function at those very time points. However, in figures 3.2 and 3.4, we see that the error becomes extremely small, even for a modest ∆xint = 0.1, at time points not close to multiples of π. According to numpy.finfo(float), the machine epsilon for floats is 2.2 · 10^−16 in our case. It is then clear that the errors in figure 3.4, which are of order 10^−16, are truly negligible.
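The quoted machine epsilon can be reproduced directly; `numpy.finfo` reports the properties of the floating-point type.

```python
import numpy as np

# Machine epsilon for double-precision floats (exactly 2**-52).
eps = np.finfo(float).eps
print(eps)  # 2.220446049250313e-16
```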

However, the mentioned error is just the error in the norm of the wave function; it does not guarantee a correct shape of the wave function. Therefore, in tables 3.1 and 3.2, we measured how much ~Uj (the approximation of Ψ(tj) obtained by integrating the kernel) changed when decreasing ∆xint. There, we see that the difference decreases very slowly until it drops straight down to the order of 10^−16, i.e. the same order of magnitude as the machine epsilon. This, too, does not guarantee the correctness of ~Uj, but it strongly hints at it.

For our purposes, this integration method (the midpoint rule) was satisfactory: it sufficed to use ~Uj at time points tj not close to multiples of π. However, it is an extremely basic method. We tried some other general-purpose integration methods from various Python libraries, as well as a method that used the FFT instead of direct integration, but none seemed to perform much better. It is worth noting that, after some algebra, the integral (2.8) of


the product of the kernel and the initial wave function can be written in the form

c(t) ∫_ℝ f(x0) e^{i g(x0, x)/sin(ωt)} dx0    (3.18)

which, whenever 1/sin(ωt) is large, perfectly matches the prerequisites for using a Filon- or Levin-type integration method [19]; which of the two depends on the initial wave function. The PhD thesis [19] even claims that the accuracy of such methods increases with increased oscillation, i.e. when 1/sin(ωt) is large, which is also when the current integration method converges poorly.
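A few-line sketch of the midpoint rule discussed above; the integrand below is an arbitrary smooth stand-in for the kernel-times-initial-state product, not the actual integrand (2.8).

```python
import numpy as np

def midpoint_integrate(f, a, b, dx):
    """Midpoint rule: sample f at cell centres and weight by the cell width."""
    n = int(round((b - a) / dx))
    x = a + (np.arange(n) + 0.5) * dx  # midpoints of n cells of width dx
    return np.sum(f(x)) * dx

# Illustrative integrand: a Gaussian over a truncated real line.
approx = midpoint_integrate(lambda x: np.exp(-x**2), -10.0, 10.0, 0.1)
print(approx)  # close to sqrt(pi)
```

For smooth, rapidly decaying integrands like this one the rule converges quickly; the difficulty in the kernel integral comes from the oscillatory factor, not from the rule itself.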

3.6.2 Investigating the results from the QHO

Figures 3.5 to 3.8

From figures 3.6, 3.7 and 3.8, the order in time of each method can be determined from the slope of its graph. For the Crank-Nicolson method, as well as the regular split operator method, the slopes are around 2, which is in line with the theory stating that they are second-order methods in time. However, for the modified split operator method, the slope is closer to 4 than to 3. This is somewhat unexpected, since [11] claims that it is of order 3 in time. This could, however, be a coincidence stemming from the QHO being a particularly benign system for this solver in terms of accuracy. It would be interesting to see which order we would observe in some other analytically solvable system, for instance the two-particle rational Calogero-Moser potential [20]. That potential also contains a singularity, which would give rise to new obstacles.
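The slopes read off figures 3.6 to 3.8 are observed convergence orders. The same estimate can be computed directly from (∆t, error) pairs; the data below is synthetic, mimicking a second-order method, and not taken from the thesis runs.

```python
import numpy as np

def observed_order(dts, errors):
    """Least-squares slope of log2(error) versus log2(dt).
    For a method of order p, error ~ C * dt**p, so the slope is ~ p."""
    return np.polyfit(np.log2(dts), np.log2(errors), 1)[0]

# Synthetic errors for a second-order method: error = 3 * dt**2.
dts = np.array([2.0 ** -k for k in range(4, 10)])
errors = 3.0 * dts ** 2
print(observed_order(dts, errors))  # ≈ 2.0
```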

Figures 3.9 and 3.10

Moving on to the spatial error, figure 3.9 shows some interesting graphs. Both split operator methods decrease quite drastically in the beginning, then immediately flatten out and become nearly constant for all smaller values of ∆x. This could be due to the built-in FFT routines used, which have very small errors of their own. Thus, it might be that the spatial error becomes so small that the error in time dominates. Comparing with figure 3.5, since ∆t was chosen to be of order 2^−9, the error should be around 2^−16 for the split operator method and around 2^−33 for the modified split operator method. Now,

the ∆x used in figure 3.5 is not the same as the smallest ∆x in figure 3.9; rather, it is of order 2^−6, which is just one more unit to the left of the left-most point in figure 3.9. Assuming the graphs for the split operator methods remain constant, the errors seem to be as proposed.
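For reference on where the FFT enters, one step of a Strang-split split operator scheme can be sketched as below. This is a generic sketch assuming ħ = m = 1 and periodic boundary conditions, not the exact thesis implementation; the grid, potential, and step size in the example are illustrative.

```python
import numpy as np

def split_step(psi, V, x, dt):
    """One Strang split step for i dpsi/dt = -(1/2) psi_xx + V psi:
    half a step in V, a full kinetic step in Fourier space, half a step in V."""
    k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])  # wave numbers
    psi = np.exp(-0.5j * dt * V) * psi                        # half potential step
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))  # kinetic step
    return np.exp(-0.5j * dt * V) * psi                       # half potential step

# Each factor is unitary, so the norm is preserved to machine precision.
x = np.linspace(-10.0, 10.0, 256, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize
psi = split_step(psi, 0.5 * x**2, x, 0.01)    # one step in a harmonic well
```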

As expected, in figure 3.10 it is evident that the slope of the Crank-Nicolson graph is around 2, which confirms that the spatial accuracy is of second order.

Figures 3.11 and 3.12

Finally, we take a look at figures 3.11 and 3.12. Figure 3.12 is the same as figure 3.11, but with a finer ∆x. These describe how the error compares to the computation time, where both properties depend on ∆t. Before comparing the two figures, we note their common features. The split operator methods remain almost unchanged between the figures, except translated to the right, which is expected due to


the results presented in figure 3.9. There is a point of intersection between the two split operator methods, where they, for different values of ∆t, have the same computation time and error. It is also interesting to note how the Crank-Nicolson graph seems to converge toward some value, which is probably the error in space. This is because ∆t is very small for the right-most points of the green graph (∼ 10^−4).

The most noticeable difference between figures 3.11 and 3.12 is that the green graph has been shifted down and to the right. It converges toward a smaller error than before, which is expected since ∆x is smaller in figure 3.12, and we know from figure 3.9 that this should happen. The Crank-Nicolson method remains below the two split operator methods up until its intersection with them, implying that it is the better choice when ∆t is large enough that it stays to the left of and below the two split operator graphs.

However, when ∆t becomes too small, the modified split operator method takes over and becomes the most effective choice. This remains the case in figure 3.12. In fact, it seems as though the regular split operator method is the worst in terms of global error versus computation time! Perhaps this changes when ∆x becomes sufficiently small, or when it is too large; this could be investigated in future studies.
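A sketch of how such error-versus-computation-time points can be collected. The toy solver below (explicit Euler for y′ = iy) is only a stand-in for the actual TDSE solvers, chosen because its exact solution is known.

```python
import time
import numpy as np

def profile_method(solve, dts):
    """For each time step, record (wall-clock time, global error).
    `solve` takes dt and returns (numerical state, exact state)."""
    points = []
    for dt in dts:
        t0 = time.perf_counter()
        psi_num, psi_exact = solve(dt)
        elapsed = time.perf_counter() - t0
        points.append((elapsed, float(np.linalg.norm(psi_num - psi_exact))))
    return points

def toy_solver(dt):
    # Explicit Euler for y' = i*y on [0, 1]; the exact solution is exp(i).
    n = int(round(1.0 / dt))
    y = np.array([1.0 + 0.0j])
    for _ in range(n):
        y = y + 1j * dt * y
    return y, np.array([np.exp(1j)])

points = profile_method(toy_solver, [2.0 ** -k for k in range(6, 12)])
```

Moving point-wise along the resulting curve then corresponds to halving ∆t, exactly as in figures 3.11 and 3.12.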

3.6.3 Investigating the results from the double well potential

Figure 3.13 displays the results from using the procedure explained in section 3.4. The solution to the QHO was simulated with the modified split operator method, which by the reasoning in the previous subsections is a good choice when accuracy is important.

We can see that the norm of this difference is somewhat small and seems to be oscillating, albeit slightly increasing. This tells us that the numerical solutions to the double well potential remain relatively fixed under the specified conditions. If the wave did not stay fixed in the minimum, the norm of the difference would not oscillate like this; rather, it would increase quite significantly. This confirms our claim in section 2.4, i.e. that the solution should behave like a QHO when x0 is large, because the barrier between the two minima is so broad that the wave cannot tunnel through.

Moving on to small values of x0, tunneling should become a possibility. However, the height of the barrier still plays a role, as can be seen in figures 3.16, 3.17 and 3.18. We can see that for small values of a, the norm of the wave function on the interval [−15, 0] increases with time, slightly oscillating. It is not entirely obvious whether this is tunneling or accumulated numerical error; especially for large values of a, the norm is so small, yet oscillating, that it is difficult to tell. However, the overall behaviour seems correct: tunneling increases when a decreases. Furthermore, looking at all three figures as a whole, we see that they are remarkably similar, especially the two split operator methods.
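The quantity plotted in figures 3.16 to 3.18 can be computed as a discrete restriction of the squared norm to x ≤ 0. The wave function and grid below are illustrative stand-ins (a Gaussian centred at x0 = 1), not the simulated states.

```python
import numpy as np

def left_half_norm_sq(psi, x):
    """Discrete probability mass on the left half-axis (x <= 0)."""
    dx = x[1] - x[0]
    mask = x <= 0.0
    return float(np.sum(np.abs(psi[mask])**2) * dx)

x = np.linspace(-15.0, 15.0, 1001)
psi = np.pi ** -0.25 * np.exp(-0.5 * (x - 1.0) ** 2)  # Gaussian centred at x0 = 1
print(left_half_norm_sq(psi, x))  # small: most of the packet sits at x > 0
```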

Let us examine figures 3.14 and 3.15. When the difference between the solutions of two very different numerical methods is small, it indicates that they are close to the exact solution. More precisely, the difference of two solutions acts as a lower bound for the sum of their errors:

‖Ψi(t) − Ψj(t)‖₂ = ‖(Ψi(t) − Ψ(t)) − (Ψj(t) − Ψ(t))‖₂ ≤ ‖Ψi(t) − Ψ(t)‖₂ + ‖Ψj(t) − Ψ(t)‖₂

Here, Ψi(t) and Ψj(t) represent different numerical solutions, and Ψ(t) represents the exact one. In light of this, the two aforementioned plots show a lower error bound that is


not that low for the graphs where Crank-Nicolson is involved. It does, however, become smaller with smaller ∆x or ∆t. One might be tempted to interpret the low lower error bounds for the split operator methods as promising, but since these methods are similar in nature, it is plausible that their difference is small even if both were far from the exact solution.
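The inequality above is just the triangle inequality and can be sanity-checked numerically; the vectors below are random stand-ins for the two numerical solutions and the exact one.

```python
import numpy as np

rng = np.random.default_rng(0)
psi_exact = rng.standard_normal(64) + 1j * rng.standard_normal(64)
psi_i = psi_exact + 0.01 * rng.standard_normal(64)  # stand-in for "method i"
psi_j = psi_exact + 0.02 * rng.standard_normal(64)  # stand-in for "method j"

diff = np.linalg.norm(psi_i - psi_j)
err_sum = np.linalg.norm(psi_i - psi_exact) + np.linalg.norm(psi_j - psi_exact)
assert diff <= err_sum  # the measured difference lower-bounds the sum of errors
```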

In the second figure, 3.15, the blue graph shows an increasing difference with decreasing space step size. This might look strange, and there might be something wrong with the implementation. However, it might also be connected to the behaviour of the split operator methods displayed in figure 3.9, where ∆x, too, is varied. It is perhaps worth noting that the two figures agree at the common point ∆x ≈ 2^−6 and ∆t ≈ 2^−8, which they of course should.

Combined, these results do not disprove the correctness of the numerical methods when used to simulate the double well potential. The two more qualitative tests, i.e. tunneling and local behaviour similar to that of a QHO, gave largely expected results. For the other test, the reasoning around figures 3.14 and 3.15 implies that very small ∆x and ∆t are needed for reasonably small lower error bounds. One reason for this might be the fine interference pattern that sometimes appears when simulating tunneling in the double well potential.

Also, it is important to point out that the stability and convergence of the numerical solutions can depend on the initial wave function used, as well as on the parameters (x0 and a). All of the tests on the double well potential were made with a Gaussian wave packet as the initial wave function; more complicated initial waves could give rise to very different behaviours. It is therefore important to test the numerical methods with different parameters and different initial wave functions, which was done to some extent in this project. However, it was decided to use a Gaussian wave packet due to its simplicity. Hence, it is worth acknowledging that these results may only be valid for the specific, easy-to-work-with initial wave function that was chosen.


Chapter 4

Summary and Conclusions

This report has implemented three different numerical methods for solving the TDSE: the Crank-Nicolson method, the split operator method, and a modified split operator method of higher-order accuracy. These methods were tested on the quantum harmonic oscillator, where the exact solution is known, as well as on the double well potential, for which analytical expressions for the solutions are, to our knowledge, currently unknown.

When comparing the numerical solutions to the exact solution of the QHO, the Crank-Nicolson method was found to be the most efficient simulation method for time steps larger than some critical ∆t. For time steps smaller than this critical value, the modified split operator method was the most efficient. This critical point is given by the intersection between the green and orange graphs in figures 3.11 and 3.12.

In future studies, a better numerical integrator for the kernel would be beneficial, preferably one that can handle singularities in the integrand. Integrators of Levin or Filon type [19] could be good candidates. Furthermore, it is worth pointing out that these results need not hold for potentials other than the QHO. In the future, it would therefore be wise to test more than one analytically solvable system, for instance the two-particle rational Calogero-Moser system [20], which contains singularities.

For the double well potential, the numerical methods appear to work well when approximating the minimum of the potential locally with a QHO. All three methods also seem to display quantum tunneling in the cases where we expected it. Measuring the difference between the solutions produced by different methods yielded lower error bounds that were not very small, even though ∆x and ∆t were small. This suggests that the double well potential is quite demanding to simulate accurately with these methods. Furthermore, all of the tests used a Gaussian wave packet as the initial wave function, so the results do not necessarily hold for more complicated initial wave functions.


Acknowledgements

We would like to wholeheartedly thank our academic supervisor, Edwin Langmann. With his vast knowledge within the field, as well as his passion for it, his presence throughout this project has been nothing but reassuring. Edwin has a genuine willingness to help, and often engaged in and got involved with trying to solve the problems with which we were faced. He always encouraged us to do more and to explore further, which is not to be taken for granted. These are invaluable experiences for any student, and as such, we are very fortunate to have had Edwin as our supervisor.

References
