DEGREE PROJECT IN TECHNOLOGY, FIRST CYCLE, 15 CREDITS
STOCKHOLM, SWEDEN 2020
Computation of Stationary States for Rotating Bose-Einstein
Condensates using Spectral Methods
ADAM ERLANDSSON
PAUL HEDVALL
Abstract
The Bose-Einstein condensate is a phase of matter that arises when cooling gases of bosons to extremely low temperatures. When studying these condensates one may use the Gross-Pitaevskii equation, which is a non-linear variant of the Schrödinger equation. An interesting phenomenon that arises when rotating a Bose-Einstein condensate is the appearance of vortices.
We implement a semi-implicit Euler scheme using spectral methods proposed in [1] to numerically calculate the ground state of a rotating Bose-Einstein condensate. We start by implementing a simpler iterative fixed-point method to solve the Euler scheme, but show that this method fails to converge for large rotations. Because of this we implement multiple Krylov subspace solvers that do converge for large rotations, and show that the Preconditioned Conjugate Gradient method has better performance than the BiConjugate Gradient Stabilized method. After the implementation we briefly look at the performance of the method and improve it with simple tricks that do not compromise the accuracy or robustness and which reduce the computation time slightly.
Lastly we look at the formation of vortices in 2-dimensional and 3-dimensional Bose-Einstein condensates. We show that the number of vortices increases exponentially with increasing angular velocity in 2D until the condensate breaks apart, but in 3D we ultimately find that the required computation time and RAM storage are too large to analyze the vortices in a similar way on our personal computers.
Sammanfattning
Bose-Einstein condensates are a phase of matter that arises when cooling bosons to extremely low temperatures. To study these condensates one can use the Gross-Pitaevskii equation, which is a non-linear variant of the Schrödinger equation. An interesting phenomenon that arises when a Bose-Einstein condensate is rotated is the appearance of vortices.
We implement a semi-implicit Euler scheme using spectral methods proposed in [1] to numerically compute the ground state of a rotating Bose-Einstein condensate. We start by implementing a simpler iterative fixed-point method to solve this system, but show that this method fails to converge for large rotations. Because of this we implement several Krylov subspace methods that do converge for large rotations, and show that the Preconditioned Conjugate Gradient method performs better than the BiConjugate Gradient Stabilized method.
After this implementation we briefly examine the performance of the method and improve it with a few simple tricks that do not affect the accuracy or robustness and which reduce the computation time somewhat.
Finally we study the formation of vortices in 2-dimensional and 3-dimensional Bose-Einstein condensates. We show that the number of vortices grows exponentially with increasing angular velocity in 2D until the condensate breaks apart, but in 3D we ultimately find that the computation time and RAM usage are too large to carry out a similar analysis of the vortices on our own computers.
Acknowledgements
We would like to thank our advisors Patrick Henning and Johan Wärnegård for their help and guidance during this project.
Contents
1 Background
1.1 The Bose-Einstein condensate
1.2 The Gross-Pitaevskii equation
1.3 Nondimensionalization
2 Methods
2.1 Gradient Descent
2.2 Semi-Implicit Euler
2.3 Spatial discretization
2.4 A fixed-point approach
2.5 The Conjugate Gradient method
2.6 Approximate Solutions
2.6.1 The Thomas-Fermi Approximation
2.6.2 Additional tricks
3 Results
3.1 Results in two dimensions
3.2 Results in three dimensions
4 Discussion
4.1 Vortices
4.2 Future studies
1 Background
1.1 The Bose-Einstein condensate
From quantum mechanics we know that any particle falls into one of two classes:
bosons and fermions. A particle is a boson if it has integer spin, and a fermion if it has half-integer spin [2]. Some examples of bosons are photons, with spin 1, and the Higgs boson, with spin 0. An example of a composite particle that is a boson is the helium (⁴He) atom [3]. Some examples of fermions are electrons, protons and neutrons, which all have spin 1/2 [4].
Because of the Pauli exclusion principle, no two fermions can occupy the same quantum state. For bosons, however, this is not true. In particular, when cooling a low-density gas of bosons to temperatures near absolute zero, a large number of bosons will occupy the lowest quantum state at the same time. This causes the bosons to behave like a superfluid, which is a fluid with zero viscosity. This new state of matter is called the Bose-Einstein condensate, named after Satyendra Nath Bose and Albert Einstein, who both were important in theoretically conjecturing the existence of this condensate during the 1920s [5].
Since then a lot of research has been done on the subject. The first experimental observation of a Bose-Einstein condensate was accomplished in 1995 [6], for which some of the contributors were later awarded the Nobel prize in physics.
One interesting phenomenon that arises in rotating Bose-Einstein condensates is the appearance of vortices, or density singularities, in which the wave function vanishes. These have been observed experimentally, as can be seen in for example [7].
Since physically creating and observing a condensate can be challenging, one may instead study Bose-Einstein condensates numerically. Our goal in this paper is to implement a reliable method for calculating the ground state of a Bose-Einstein condensate numerically, and to utilize this method to study the condensate in general and the vortices in particular.
1.2 The Gross-Pitaevskii equation
The wave function for a Bose-Einstein condensate can be calculated as the solution ψ(x, t) ∈ C to the Gross-Pitaevskii equation, which is a non-linear variant of the Schrödinger equation. The time-dependent Gross-Pitaevskii equation can be written as

i\hbar \frac{\partial \psi(\mathbf{x}, t)}{\partial t} = H\psi = \left( -\frac{\hbar^2}{2m} \nabla^2 + V(\mathbf{x}) + N U_0 |\psi(\mathbf{x}, t)|^2 - \Omega L_z \right) \psi(\mathbf{x}, t),   (1.1)
with this particular form being used in [8]. In this equation we have a few different parameters. H is the Hamiltonian operator, while ħ is the reduced Planck constant. As for the particles themselves, m is the particle mass, N is the number of particles, and U_0 is a parameter describing the interaction between the particles in the condensate, defined as

U_0 = \frac{4\pi \hbar^2 a_s}{m},   (1.2)
where a_s is the s-wave scattering length. A positive value of a_s means that the particles repel each other, while a negative one means that they attract.
The ∇ operator is defined as ∇ = (∂_x, ∂_y, ∂_z), with ∇² = ∆ = ∂_x² + ∂_y² + ∂_z² being the Laplacian operator. In this form of the GPE we have implicitly assumed that all rotation is about the z-axis, with the angular momentum operator L_z defined as L_z = −iħ(x∂_y − y∂_x).
The two parameters V and Ω describe the external potential applied to the condensate and the angular velocity enforced on the condensate, respectively. Some reasonable potentials are the zero potential V(x) = 0, the quadratic potential V(x) = α|x|² or the quartic potential V(x) = α|x|⁴, for some parameter α.
According to Max Born's rule, the probability density of finding a particle at position x at time t is

|\psi(\mathbf{x}, t)|^2 = \rho(\mathbf{x}, t).   (1.3)

In (1.1) this means that the number of particles is constant, which gives us the normalization condition

\|\psi\|^2 := \int_{\mathbb{R}^d} |\psi(\mathbf{x}, t)|^2 \, d\mathbf{x} = N.   (1.4)
Given a wave function ψ, the energy per particle is defined as

E(\psi) = \int_{\mathbb{R}^d} \frac{\hbar^2}{2m} |\nabla\psi(\mathbf{x}, t)|^2 + V(\mathbf{x})|\psi(\mathbf{x}, t)|^2 + \frac{N U_0}{2} |\psi(\mathbf{x}, t)|^4 - \Omega \psi^*(\mathbf{x}, t) L_z \psi(\mathbf{x}, t) \, d\mathbf{x},   (1.5)

where R^d can be either R² or R³ depending on what dimension we are working in. The asterisk (as in ψ*) denotes the complex conjugate.
What makes the GPE interesting is the nonlinear term N U 0 |ψ| 2 ψ arising from the interaction between the particles in the condensate. The non-linearity in the GPE together with the enforced rotation is what gives rise to the vortices in the wave function of the Bose-Einstein condensate.
1.3 Nondimensionalization
Working with the GPE (1.1) directly is possible, but in this paper we prefer modeling the behavior of the condensate without caring about the units. We therefore introduce the dimensionless variables proposed in [8],

t \mapsto \frac{t}{\omega_m}, \quad \mathbf{x} \mapsto a_0 \mathbf{x}, \quad \psi \mapsto \frac{\psi}{a_0^{3/2}},   (1.6)

\Omega \mapsto \omega_m \Omega, \quad E \mapsto \hbar \omega_m E_{\beta,\Omega},   (1.7)

where ω_m = min(ω_x, ω_y, ω_z), with ω_x, ω_y, ω_z being the trapping frequencies of the external potential in the x-, y- and z-directions respectively, defined through the quadratic potential V(x, y, z) = (m/2)(ω_x² x² + ω_y² y² + ω_z² z²). Here we have defined a_0 = \sqrt{\hbar/(m\omega_m)}, and by letting

\beta = \frac{U_0 N}{a_0^3 \hbar \omega_m} = \frac{4\pi a_s N}{a_0}   (1.8)
we get the dimensionless GPE

i \frac{\partial \psi(\mathbf{x}, t)}{\partial t} = -\frac{1}{2} \Delta \psi(\mathbf{x}, t) + V(\mathbf{x}) \psi(\mathbf{x}, t) + \beta |\psi(\mathbf{x}, t)|^2 \psi(\mathbf{x}, t) - \Omega L_z \psi(\mathbf{x}, t).   (1.9)

Since we are only interested in finding time-independent ground states, one may use the ansatz

\psi(\mathbf{x}, t) = e^{-i\mu t} \phi(\mathbf{x}),   (1.10)

where μ is the chemical potential [1]. Inserting this ansatz into the GPE gives

\mu \phi(\mathbf{x}) = -\frac{1}{2} \Delta \phi(\mathbf{x}) + V(\mathbf{x}) \phi(\mathbf{x}) + \beta |\phi(\mathbf{x})|^2 \phi(\mathbf{x}) - \Omega L_z \phi(\mathbf{x}),   (1.11)

which is the time-independent dimensionless GPE for a conserved number of particles.
By nondimensionalizing we transform the normalization condition into

\|\phi\|^2 = \int_{\mathbb{R}^d} |\phi(\mathbf{x})|^2 \, d\mathbf{x} = 1,   (1.12)

and the nondimensional energy of the time-independent GPE can be calculated as

E_{\beta,\Omega}(\phi) = \int_{\mathbb{R}^d} \frac{1}{2} |\nabla\phi(\mathbf{x})|^2 + V(\mathbf{x})|\phi(\mathbf{x})|^2 + \frac{\beta}{2} |\phi(\mathbf{x})|^4 - \Omega \phi^*(\mathbf{x}) L_z \phi(\mathbf{x}) \, d\mathbf{x}.   (1.13)
2 Methods
2.1 Gradient Descent
Our main problem in this paper is implementing a method that numerically calculates the ground state of a Bose-Einstein condensate. The ground state is the wave function φ for which the energy functional E β,Ω (φ) in (1.13) is minimal under the constraint (1.12). More formally we can write our problem as
\text{minimize } E_{\beta,\Omega}(\phi) \quad \text{subject to} \quad \int_{\mathbb{R}^d} |\phi(\mathbf{x})|^2 \, d\mathbf{x} = 1.   (2.1)
One way to solve this problem is using the conjugate gradient method, which gave very good results in [9]; however, this method is rather complicated. A much simpler way to solve this problem is the gradient descent method.
This method works by taking a starting guess and then moving in the direction in which the energy decreases the most until the energy is minimized. We do this by taking the negative gradient of E_{β,Ω}(φ) with respect to φ. Since E_{β,Ω} is a functional depending on the function φ, we first have to define what this gradient actually means. We skip the derivation, only presenting the definitions and what the gradient in our context turns out to be.
Taking the functional derivative of (1.13) will be done with the Gâteaux derivative, which is defined as

d_h f = \lim_{\varepsilon \to 0} \frac{f(x + \varepsilon h) - f(x)}{\varepsilon}.   (2.2)
With this definition, the Gâteaux derivative of (1.13) at u in the direction h can be calculated as

E'(u)h = \int_D \Re\left( \langle \nabla u, \nabla h \rangle + \langle 2Vu + 2\beta|u|^2 u - 2i\Omega L_z u, h \rangle \right),   (2.3)

where ⟨·, ·⟩ is some inner product. The inner product in the Hilbert space H⁰(D, C) = L²(D, C) is the most common choice and is given by

\langle u(\mathbf{x}), v(\mathbf{x}) \rangle_{L^2} = \int_D u(\mathbf{x}) v^*(\mathbf{x}).   (2.4)

A direct computation with this inner product gives us the gradient

\nabla_{\phi, L^2} E_{\beta,\Omega} = 2H\phi = -\Delta\phi + 2V\phi + 2\beta|\phi|^2\phi - 2\Omega L_z \phi,   (2.5)

where ∇E denotes the Riesz representative of the gradient [10]. However, other gradients might be more effective. For example, the gradient computed with the inner product in H¹(D, C) mentioned in [11] gives

\nabla_{\phi, H^1} E_{\beta,\Omega} = (1 - \Delta)^{-1}(-\Delta\phi + 2V\phi + 2\beta|\phi|^2\phi - 2\Omega L_z \phi).   (2.6)

In fact, any inner product could be used. In [12] a gradient that changes with the flow was presented, which gave good results. For our implementation we will use the ∇_{φ,L²} gradient, since it is the simplest to implement. These kinds of gradients are in general called Sobolev gradients, and more information about them can be found in [11].
With the gradient defined, let us formalize the gradient descent method in the following way. Let t_0, t_1, ... be uniform "time"-steps during the descent, with ∆t := t_{k+1} − t_k being the distance between these steps, and let φ_k be the value of φ at step k. Note that these steps are not actual steps in time, but a construction utilized in the gradient descent method. The method is constructed in such a way that φ_k goes to the ground state as t → ∞.
Figure 1: The Gradient Descent Method.
In figure 1 we can see how the gradient descent method converges to the minimal energy as lim_{k→∞} φ_k. Note that taking too large steps with this kind of method can undo parts of previous steps or even increase the energy, as opposed to, for example, the conjugate gradient method, which finds more optimal step sizes. However, it is possible to optimize the step sizes further without over-complicating the method too much. For example, one could try a few different step sizes at each step, calculate the energy for each, and then use the step which decreases the energy the most.
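The step-size trial just described can be sketched as follows. This is a minimal NumPy illustration (the thesis implementation itself is in MATLAB), run on a hypothetical toy energy E(φ) = ⟨φ, Aφ⟩ over normalized vectors, whose minimum over the unit sphere is the smallest eigenvalue of A; the matrix, candidate step sizes and iteration count are made up for the example.

```python
import numpy as np

def energy(phi, A):
    """Toy 'energy' E(phi) = <phi, A phi> for a normalized vector phi."""
    return np.real(np.vdot(phi, A @ phi))

def descend_step(phi, A, step_sizes):
    """One descent step that tries several candidate step sizes and keeps
    the (renormalized) candidate giving the lowest energy."""
    grad = 2.0 * (A @ phi)                    # gradient of <phi, A phi>
    best = None
    for dt in step_sizes:
        cand = phi - dt * grad
        cand = cand / np.linalg.norm(cand)    # project back onto the unit sphere
        if best is None or energy(cand, A) < energy(best, A):
            best = cand
    return best

rng = np.random.default_rng(0)
A = np.diag([1.0, 2.0, 5.0])
phi = rng.standard_normal(3)
phi = phi / np.linalg.norm(phi)
for _ in range(500):
    phi = descend_step(phi, A, [0.01, 0.03, 0.05])
# energy(phi, A) should now be close to the smallest eigenvalue of A
```

The same greedy trial carries over unchanged to the discretized GPE energy, at the cost of a few extra energy evaluations per step.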
2.2 Semi-Implicit Euler
To calculate the value of φ_k at step k during the gradient descent, we will in this paper use a semi-implicit Euler scheme. We get this by taking the value of φ from the previous iteration for the nonlinear terms in the next, which gives us a linear system of equations. At each time step we project the solution onto the unit sphere S to enforce the normalization constraint from (1.12).
The semi-implicit backward Euler scheme is given by

\frac{\tilde{\phi}^{n+1} - \phi^n}{\Delta t} = -\nabla_{\phi, L^2} E_{\beta,\Omega}(\tilde{\phi}^{n+1}),   (2.7)

which we can write as

\left( \frac{1}{\Delta t} - \Delta + 2V + 2\beta|\phi^n|^2 - 2\Omega L_z \right) \tilde{\phi}^{n+1} = \frac{\phi^n}{\Delta t}.   (2.8)

Here φ̃^{n+1} is the state of φ at step n+1 before normalization, so after each time step we project the solution onto the unit sphere according to

\phi^{n+1} = \frac{\tilde{\phi}^{n+1}}{\|\tilde{\phi}^{n+1}\|},   (2.9)

enforcing the normalization defined in (1.12).
Figure 2: Normalization after each time step.

The normalization is visualized in figure 2.
Figure 3: Projection onto the tangent space of the mass constraint.
Another, more effective, approach to enforce (1.12) is to project the gradient onto the tangent space of the constraint, which is done in [10]. The projection approach gives an error term proportional to O(∆t²) [9], which could be removed by normalizing at each step. This scheme would make the method harder to implement with the semi-implicit Euler scheme and would probably not give us much of an advantage, since we can just increase the step size to compensate. This kind of scheme can be seen in figure 3.
In any case, to actually use this Euler scheme we have to start with some approximation φ⁰. Particular starting approximations are discussed in section 2.6.
Since we want to find the converged state lim_{k→∞} φ_k, but taking an infinite number of steps numerically is not possible, we have to choose some criterion that determines whether or not we are close enough to the solution. We do this by implementing the stopping criterion ‖φ^{k+1} − φ^k‖ ≤ ε∆t, for some small ε.
One way to speed up this algorithm is by using an adaptive time-step ∆t. For example, in the beginning of the descent, taking large time-steps is fine and faster than taking lots of small time-steps. As the solution starts to converge, one may however need to take smaller steps to actually get a good approximation of the solution.
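The adaptive-step idea, combined with a stopping criterion of the kind above, can be sketched on a toy problem. The thesis implementation is in MATLAB; this NumPy sketch minimizes the hypothetical energy ⟨φ, Aφ⟩ over normalized vectors (whose minimum is the smallest eigenvalue of A), and the matrix, step-size factors and tolerances are made up for the example.

```python
import numpy as np

def adaptive_descent(A, phi0, dt=0.05, eps=1e-10, max_steps=10000):
    """Normalized gradient descent with a crude adaptive "time"-step:
    halve dt when a step raises the energy, grow it slowly otherwise.
    Stops when ||phi_{k+1} - phi_k|| <= eps * dt."""
    energy = lambda v: np.real(np.vdot(v, A @ v))
    phi = phi0 / np.linalg.norm(phi0)
    for _ in range(max_steps):
        new = phi - dt * 2.0 * (A @ phi)           # explicit gradient step
        new = new / np.linalg.norm(new)            # renormalize, cf. (2.9)
        if energy(new) > energy(phi):              # step too large: reject it
            dt *= 0.5
            continue
        if np.linalg.norm(new - phi) <= eps * dt:  # stopping criterion
            return new, dt
        phi, dt = new, dt * 1.05                   # accept, cautiously grow dt
    return phi, dt

A = np.diag([1.0, 2.0, 5.0])
phi, dt = adaptive_descent(A, np.array([1.0, 1.0, 1.0]))
```

The reject-and-halve rule plays the role of taking large steps early and smaller steps near convergence.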
2.3 Spatial discretization
The semi-implicit scheme introduced in the last chapter is a way of approximating the movement in "time". To actually use this scheme, we also have to discretize the physical area in which we want to calculate the ground state. To start off we introduce the domain

O := (-a_x, a_x) \times (-a_y, a_y) \times (-a_z, a_z),   (2.10)

where a_x, a_y, a_z describe the lengths of the sides of the area wherein we want to calculate the ground state. We are here implicitly assuming that all of the ground state resides in this area, with |φ|² = 0 on and outside of the boundary. This approximation is fine to make since our trapping potentials V will force almost all of the condensate to reside in some finite area, after which we can choose a_x, a_y, a_z to include this area. We then discretize this domain uniformly, representing it with J × K × L points, with J, K, L points in the x-, y- and z-direction respectively. We therefore have the discrete grid

O_{J,K,L} = \{(x_j, y_k, z_l) \mid 0 \le j \le J-1,\; 0 \le k \le K-1,\; 0 \le l \le L-1\},   (2.11)

where x_j = −a_x + 2a_x j/J, y_k = −a_y + 2a_y k/K, z_l = −a_z + 2a_z l/L. We also define h_x := x_{j+1} − x_j, h_y := y_{k+1} − y_k, h_z := z_{l+1} − z_l.
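In code, this grid is a one-liner per direction. The following NumPy fragment (the thesis uses MATLAB; the domain size and point counts here are hypothetical) builds the 2D version of the grid above; note that the right endpoint +a_x is excluded, consistent with the periodic boundary conditions used later.

```python
import numpy as np

# Hypothetical parameters: a 2D (J x K) grid on (-a_x, a_x) x (-a_y, a_y),
# matching x_j = -a_x + 2 a_x j / J from the discretization above.
a_x, a_y = 8.0, 8.0
J, K = 64, 64

x = -a_x + 2 * a_x * np.arange(J) / J     # x_0, ..., x_{J-1}; +a_x is excluded
y = -a_y + 2 * a_y * np.arange(K) / K
h_x, h_y = x[1] - x[0], y[1] - y[0]       # uniform spacings 2 a_x / J, 2 a_y / K

# 'ij' indexing so that X[j, k] = x_j and Y[j, k] = y_k
X, Y = np.meshgrid(x, y, indexing="ij")
```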
Figure 4: Illustration of the spatial scheme in 2D.
Figure 5: Illustration of the spatial scheme in 3D.
The spatial discretization scheme is illustrated in figure 4 and figure 5. In this discretization, the integrals for normalizing the wave function and calculating the energy are approximated as sums. For example, we have that

\|\phi\|^2 \approx h \sum_{j=0}^{J-1} \sum_{k=0}^{K-1} \sum_{l=0}^{L-1} |\phi(x_j, y_k, z_l)|^2,   (2.12)

with h := h_x h_y h_z, so when normalizing we instead use this approximation of the norm. The energy can be expressed similarly as

E_{\beta,\Omega}(\phi) \approx h \sum_{j,k,l} \frac{1}{2}|\nabla\phi(\mathbf{x})|^2 + V(\mathbf{x})|\phi(\mathbf{x})|^2 + \frac{\beta}{2}|\phi(\mathbf{x})|^4 - \Omega\, \Re\left( \phi^*(\mathbf{x}) L_z \phi(\mathbf{x}) \right).   (2.13)
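As a sanity check of the discrete norm (2.12), the following NumPy fragment (the thesis uses MATLAB; the grid parameters are made up) normalizes a Gaussian trial state on a 2D version of the grid. The unnormalized discrete mass also reproduces the continuum integral ∫∫ e^{−(x²+y²)} dx dy = π to high accuracy.

```python
import numpy as np

a, J = 8.0, 128                             # hypothetical 2D grid parameters
x = -a + 2 * a * np.arange(J) / J
h = (2 * a / J) ** 2                        # h = h_x * h_y on a square grid
X, Y = np.meshgrid(x, x, indexing="ij")

phi = np.exp(-(X**2 + Y**2) / 2)            # unnormalized Gaussian trial state
mass = h * np.sum(np.abs(phi) ** 2)         # discrete version of (2.12); ~ pi here
phi = phi / np.sqrt(mass)                   # now h * sum |phi|^2 = 1
```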
As for applying operators, this discretization can be used directly with the identity operator, the potential V, as well as the non-linear term β|φ|², since all of these operators only depend on the discrete point in question and can therefore be applied pointwise.
Let us clarify what we mean by this. The most intuitive and easiest way to regard φ with this discretization is as a 3-dimensional matrix, where the element at location (j, k, l) in the matrix is the corresponding φ-value at point (x j , y k , z l ). Since we are not regarding φ as a vector, we will never actually perform an explicit matrix multiplication, but rather we want to consider all operators as functions that we can apply pointwise. For example, applying the V operator to a state φ, would be done by letting
φ(x j , y k , z l ) 7→ V (x j , y k , z l )φ(x j , y k , z l ),
for each j, k, l, and similar for the identity operator and the non-linear term. We see here that the value for a point φ(x j , y k , z l ) under application of the operator V only depends on the point in question, which means that we do not have to reshape φ into a vector and perform any explicit matrix multiplications.
However, both the ∆ term and the ΩL_z term require us to approximate spatial derivatives. Here the value has to depend on other points, since any derivative does, and we cannot apply these terms pointwise as above.
To accurately approximate the partial derivatives we use a method proposed in [1], where we use the fact that differentiation in Fourier space is just a simple multiplication; that is,

\mathcal{F}\left[ \frac{d^n f}{dx^n} \right](\omega) = (i\omega)^n \mathcal{F}[f](\omega),   (2.14)

where F is the continuous Fourier transform operator.
Figure 6: Illustrating the scheme of approximating a function as a sum of sinu- soidal functions.
The idea behind the method is illustrated in figure 6, where we approximate a function with a sum of periodic sinusoidal functions which are easy to differentiate.
An important requirement when using spectral methods is that we have a periodic boundary condition.
Figure 7: Effect of using spectral methods with a periodic versus a non-periodic boundary condition. The left figure shows a 5 Hz signal that satisfies a periodic boundary condition, while the right shows a 6.5 Hz signal that does not.
In figure 7 we can see a simple example in which having a periodic boundary condition is necessary in order for the spectral method to be accurate. Since we are assuming that the condensate resides in our finite area O with |φ| 2 = 0 on the boundary and outside of O, we can enforce periodic boundary conditions in our particular case.
When applying discrete Fourier transforms in multiple dimensions, we denote the corresponding frequency vectors for x_j, y_k and z_l as

\mathcal{F}_x(\{x_j\})_p = \mu_p, \quad \mathcal{F}_y(\{y_k\})_q = \lambda_q, \quad \mathcal{F}_z(\{z_l\})_r = \xi_r,   (2.15)

where

\mu_p = \frac{\pi p}{a_x}, \quad \lambda_q = \frac{\pi q}{a_y}, \quad \xi_r = \frac{\pi r}{a_z},   (2.16)

and where F_x is the discrete Fourier transform in the x-direction, and similarly for y and z. A partial derivative in the x-direction can then be approximated in Fourier space by taking the discrete Fourier transform of φ in the x-direction and multiplying by iμ_p, which gives us

\mathcal{F}_x[\partial_x \phi](\mu_p, y_k, z_l) \approx i\mu_p \sum_{j=0}^{J-1} \phi(x_j, y_k, z_l)\, e^{-i\mu_p(x_j + a_x)}.   (2.17)
Taking the inverse discrete Fourier transform gives us an approximation of the partial derivative in the spatial domain,

\partial_x \phi(x_j, y_k, z_l) \approx \frac{1}{J} \sum_{p=-J/2}^{J/2-1} \left[ i\mu_p \sum_{j'=0}^{J-1} \phi(x_{j'}, y_k, z_l)\, e^{-i\mu_p(x_{j'} + a_x)} \right] e^{i\mu_p(x_j + a_x)},   (2.18)
and the same method can be applied in the y- and z-directions. The second-order partial differential operators in the Laplacian can thus be expressed as

\partial_x^2 \phi(x_j, y_k, z_l) \approx \frac{1}{J} \sum_{p=-J/2}^{J/2-1} \left[ -\mu_p^2 \sum_{j'=0}^{J-1} \phi(x_{j'}, y_k, z_l)\, e^{-i\mu_p(x_{j'} + a_x)} \right] e^{i\mu_p(x_j + a_x)},   (2.19)

\partial_y^2 \phi(x_j, y_k, z_l) \approx \frac{1}{K} \sum_{q=-K/2}^{K/2-1} \left[ -\lambda_q^2 \sum_{k'=0}^{K-1} \phi(x_j, y_{k'}, z_l)\, e^{-i\lambda_q(y_{k'} + a_y)} \right] e^{i\lambda_q(y_k + a_y)},   (2.20)

\partial_z^2 \phi(x_j, y_k, z_l) \approx \frac{1}{L} \sum_{r=-L/2}^{L/2-1} \left[ -\xi_r^2 \sum_{l'=0}^{L-1} \phi(x_j, y_k, z_{l'})\, e^{-i\xi_r(z_{l'} + a_z)} \right] e^{i\xi_r(z_l + a_z)}.   (2.21)
The Laplacian operator can then be expressed as the sum of equations (2.19), (2.20) and (2.21) as

\Delta\phi(x_j, y_k, z_l) = (\partial_x^2 + \partial_y^2 + \partial_z^2)\phi(x_j, y_k, z_l).   (2.22)

The angular momentum operator is defined as

L_z \phi = -i(x\partial_y - y\partial_x)\phi,   (2.23)

which we can evaluate in Fourier space similarly to the Laplacian as
i x_j \partial_y \phi(x_j, y_k, z_l) \approx \frac{1}{K} \sum_{q=-K/2}^{K/2-1} \left[ -x_j \lambda_q \sum_{k'=0}^{K-1} \phi(x_j, y_{k'}, z_l)\, e^{-i\lambda_q(y_{k'} + a_y)} \right] e^{i\lambda_q(y_k + a_y)},   (2.24)

-i y_k \partial_x \phi(x_j, y_k, z_l) \approx \frac{1}{J} \sum_{p=-J/2}^{J/2-1} \left[ y_k \mu_p \sum_{j'=0}^{J-1} \phi(x_{j'}, y_k, z_l)\, e^{-i\mu_p(x_{j'} + a_x)} \right] e^{i\mu_p(x_j + a_x)}.   (2.25)

We see that we have the exact same situation as before, where we are able to apply the ∆ operator and the L_z operator pointwise. The main difference is that we can only do this pointwise in Fourier space, requiring us to first apply a Fourier transform, then apply the operator pointwise in Fourier space, before applying the inverse Fourier transform to get back.
In general, one may jump back and forth between physical space and Fourier space by applying Fourier transforms, applying each operator in the space where it is easiest to apply. An important thing to consider is therefore grouping operators in an effective way, so as not to apply unnecessary Fourier transforms and increase computation time. Evaluating both the Laplacian and the angular momentum at the same time is therefore preferred if possible.
Later in the paper, when the method is implemented in MATLAB, we use the more efficient Fast Fourier Transform (FFT) instead of the plain discrete Fourier transform, which reduces the number of operations from O(N²) to O(N log N) and can also be computed in parallel on the computer's GPU. Note that when implementing the methods in MATLAB, calculations are done with vectors and not 3-dimensional matrices, since this is more efficient. We still apply operations pointwise without explicit matrix multiplications. When evaluating terms which need Fourier transforms, the vectors are reshaped back into 3-dimensional matrices to more easily use the fft function. All implementation is done with MATLAB (version 9.7.0.1190202 (R2019b)).
A major reason for implementing this spectral method instead of, for example, a finite difference method is the difference in convergence rate. For finite difference methods, as N increases we typically have convergence of order O(N^{−m}) for some constant m that depends on the order of approximation. With spectral methods the convergence is at least O(N^{−m}) for every m according to [13], provided that the solution is infinitely differentiable.
2.4 A fixed-point approach
The system (2.8) can be written as

A\tilde{\phi}^{n+1} = b, \qquad \phi^{n+1} = \tilde{\phi}^{n+1} / \|\tilde{\phi}^{n+1}\|,   (2.26)

where the operator A is defined by

A = \frac{I}{\Delta t} - \Delta + 2V + 2\beta|\phi^n|^2 - 2\Omega L_z,

and where b = φ^n/∆t. To solve the system (2.26) one would ideally find the inverse A^{−1} of this operator to easily and explicitly calculate φ̃^{n+1}, but inverting A would be very time-consuming.
Another simple iterative method, introduced in [14] and [1], is instead to split A into two operators, A = A_ω − A_ω^{TF}, defined by

A_\omega = \frac{I}{\Delta t} - \Delta + \omega I, \qquad A_\omega^{TF} = 2\Omega L_z - 2V - 2\beta|\phi^n|^2 + \omega I,   (2.27)

where ω is some stabilization parameter and I is the identity operator. Notice that applying A_ω in Fourier space is relatively easy, as shown in the last chapter. Because of this, its inverse A_ω^{−1} can be applied easily in a similar fashion. Therefore we can rearrange the system (2.26) as

(A_\omega - A_\omega^{TF}) \tilde{\phi}^{n+1} = b,   (2.28)

A_\omega \tilde{\phi}^{n+1} = A_\omega^{TF} \tilde{\phi}^{n+1} + b,   (2.29)

\tilde{\phi}^{n+1} = A_\omega^{-1} \left( A_\omega^{TF} \tilde{\phi}^{n+1} + b \right).   (2.30)
This final equation suggests finding φ̃^{n+1} using fixed-point iteration, which can be done by calculating a series of approximations {φ̃^{(k)}}_{k=0,1,...} until convergence, through the algorithm

\tilde{\phi}^{(0)} = \phi^n,   (2.31)

\tilde{\phi}^{(k+1)} = A_\omega^{-1} \left( A_\omega^{TF} \tilde{\phi}^{(k)} + b \right), \quad k = 0, 1, \ldots   (2.32)

Every step in time therefore consists of applying (2.32) a number of times until φ̃^{(k)} is a good enough approximation, and then letting φ^{n+1} = φ̃^{(k)}/‖φ̃^{(k)}‖.
Convergence is decided by some stopping criterion

\max_{(x_j, y_k, z_l)} \left| \tilde{\phi}^{(k+1)}(x_j, y_k, z_l) - \tilde{\phi}^{(k)}(x_j, y_k, z_l) \right| \le \varepsilon.   (2.33)

From linear algebra we know that for this fixed-point iteration to converge, we require the spectral radius

\rho\left( A_\omega^{-1} A_\omega^{TF} \right) := \max\left\{ |\lambda| : \lambda \text{ is an eigenvalue of } A_\omega^{-1} A_\omega^{TF} \right\}   (2.34)

to be smaller than 1. As shown in [1], one can choose ω to get a small enough spectral radius when the rotation Ω is small. In particular, the optimal value of ω is proved in [14] to be

\omega^* := \frac{1}{2} \left( \min(V + \beta|\phi^n|^2) + \max(V + \beta|\phi^n|^2) \right).   (2.35)
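The structure of the iteration (2.31)-(2.33) can be sketched on a small hypothetical linear system in which A_ω is trivially invertible (here a diagonal matrix, standing in for the Fourier-diagonal operator I/∆t − ∆ + ωI) and the remainder is small enough that the spectral radius condition holds. The thesis works in MATLAB; this is a NumPy toy with made-up matrices.

```python
import numpy as np

# A_omega plays the role of (I/dt - Laplacian + w I): easy to invert (diagonal),
# A_TF is a "small" remainder so that rho(A_omega^{-1} A_TF) < 1.
rng = np.random.default_rng(1)
n = 50
A_omega = np.diag(np.linspace(3.0, 10.0, n))
A_TF = 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

phi = b.copy()                                            # initial guess, cf. (2.31)
for _ in range(500):
    phi_new = np.linalg.solve(A_omega, A_TF @ phi + b)    # iteration (2.32)
    if np.max(np.abs(phi_new - phi)) <= 1e-12:            # criterion (2.33)
        phi = phi_new
        break
    phi = phi_new
```

At convergence, phi solves (A_omega − A_TF) phi = b without ever inverting the full operator.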
However, for large Ω the spectral radius gets too large and the method fails to converge. In figure 8 we can see how the number of iterations grows as the angular velocity increases. For values above Ω = 0.9 the method fails to converge.
Figure 8: The number of fixed-point iterations needed for the first step in imaginary time. In this plot we had ∆t = 0.01, ε = 10⁻¹² and β = 2000.
2.5 The Conjugate Gradient method
Since we want to be able to model a Bose-Einstein condensate even during large rotations, we want to use a method other than the fixed-point method. A better method is, for example, the Conjugate Gradient (CG) method. This algorithm is too involved to derive in this paper, and is thus not as intuitive as the fixed-point iteration, but a derivation of CG and the underlying methods can be found in [15]. The CG algorithm for solving a system Ax = b is described in Algorithm 1, where x_0 is the initial guess.
Algorithm 1: The Conjugate Gradient method
  r_0 := b − A x_0
  p_0 := r_0
  for j = 0, 1, ... until r_j is small enough do
      α_j := ⟨r_j, r_j⟩ / ⟨A p_j, p_j⟩
      x_{j+1} := x_j + α_j p_j
      r_{j+1} := r_j − α_j A p_j
      β_j := ⟨r_{j+1}, r_{j+1}⟩ / ⟨r_j, r_j⟩
      p_{j+1} := r_{j+1} + β_j p_j
  end for
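Algorithm 1 translates line by line into code. This is a plain NumPy rendition (the thesis uses MATLAB's built-in solver rather than a hand-written one), verified here on a small made-up symmetric positive definite system.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=1000):
    """Plain CG for a symmetric positive definite matrix A, following Algorithm 1."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rr = np.dot(r, r)
    for _ in range(max_iter):
        if np.sqrt(rr) < tol:                 # "until r_j is small enough"
            break
        Ap = A @ p
        alpha = rr / np.dot(Ap, p)
        x = x + alpha * p
        r = r - alpha * Ap
        rr_new = np.dot(r, r)
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

rng = np.random.default_rng(2)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30.0 * np.eye(30)               # symmetric positive definite test matrix
b = rng.standard_normal(30)
x = conjugate_gradient(A, b, np.zeros(30))
```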
To make this method more stable, one may precondition the system before applying the CG method, giving us the Preconditioned Conjugate Gradient (PCG) method. The particular preconditioner we use is called the Thomas-Fermi preconditioner and only uses the potential terms. This preconditioner is described in [1] and is defined as

P_n^{TF} = \left( \frac{I}{\Delta t} + 2V + 2\beta|\phi^n|^2 \right)^{-1},   (2.36)

and it is easy to apply since it does not require the application of any Fourier transforms. Another common preconditioner, mentioned in [9], is the kinetic energy preconditioner, which only uses the kinetic term. This preconditioner is defined as

P_\Delta = (\alpha_\Delta - \Delta)^{-1},   (2.37)

and for α_∆ = 1 it is equivalent to using the Sobolev gradient defined in (2.6). Since the kinetic energy preconditioner requires more Fourier transforms to apply, we will use the Thomas-Fermi preconditioner because of its simplicity. By defining
A_{\Delta,\Omega} = -\Delta - 2\Omega L_z   (2.38)

we can transform the system (2.26) as follows:

A\tilde{\phi}^{n+1} = b,   (2.39)

\left( \frac{I}{\Delta t} - \Delta + 2V + 2\beta|\phi^n|^2 - 2\Omega L_z \right) \tilde{\phi}^{n+1} = b,   (2.40)

\left( \left[ \frac{I}{\Delta t} + 2V + 2\beta|\phi^n|^2 \right] + \left[ -\Delta - 2\Omega L_z \right] \right) \tilde{\phi}^{n+1} = b,   (2.41)

\left( [P_n^{TF}]^{-1} + A_{\Delta,\Omega} \right) \tilde{\phi}^{n+1} = b,   (2.42)

\left( I + P_n^{TF} A_{\Delta,\Omega} \right) \tilde{\phi}^{n+1} = P_n^{TF} b,   (2.43)

\breve{A} \tilde{\phi}^{n+1} = \breve{b},   (2.44)

where we define Ă = I + P_n^{TF} A_{∆,Ω} and b̆ = P_n^{TF} b. Each time-step in the PCG method therefore consists of applying the CG algorithm to the system Ă φ̃^{n+1} = b̆ until convergence, and then normalizing the solution before moving on to the next time-step. As with the fixed-point iteration, convergence is decided by some stopping criterion

\max_{(x_j, y_k, z_l)} \left| \breve{b}(x_j, y_k, z_l) - \breve{A}\tilde{\phi}^{(k)}(x_j, y_k, z_l) \right| \le \varepsilon.   (2.45)
When implementing the algorithm in MATLAB we use the built-in pcg function. There are more iterative methods that one can use instead of PCG, for example the BiConjugate Gradient Stabilized method (BiCGStab) or the Generalized Minimal Residual method (GMRES). One might want to precondition the system differently for these, but both of these methods are used in [1]. In figure 9 we see a comparison between PCG and BiCGStab, motivating why PCG should be a good choice since it is faster in some cases.
Figure 9: Computation time to converge to the ground state for different angular velocities and different solvers, for β = 2000 and a 2⁶ × 2⁶ grid. We also have ∆t = 0.1 and ε = 10⁻⁵.
Note that the computation time in figure 9 increases drastically after Ω = 0.7. This could be partly because of our starting guess and partly because of the preconditioner we use: both the Thomas-Fermi approximation and the Thomas-Fermi preconditioner should be effective when the non-linearity is small, which might not be the case for larger rotations.
It is also clear that all of these methods are more efficient than calculating an inverse explicitly. In figure 10 we create a random symmetric positive definite matrix A and a random vector b in MATLAB, and plot the computation time for solving the equation Ax = b with the built-in MATLAB function inv (calculating an explicit inverse) versus the iterative solver pcg. We can see that inv takes a long time to compute as N grows, while the computation time of the iterative method does not increase that much in comparison.
Figure 10: Computation time to solve a system of equations for different matrix sizes, for the built-in MATLAB function inv versus the iterative solver pcg with relative tolerance 10⁻¹⁰, on an Intel Core i5-1035G4.
2.6 Approximate Solutions
2.6.1 The Thomas-Fermi Approximation
In order for our implementation to converge to a ground state we need a good enough starting guess. In our implementation we will mostly use the Thomas-Fermi approximation. The Thomas-Fermi approximation can be obtained by neglecting the kinetic energy and rotation in (1.11), which gives
\[
\mu\phi(\mathbf{x}) = \left( V(\mathbf{x}) + \beta|\phi(\mathbf{x})|^2 \right)\phi(\mathbf{x}), \tag{2.46}
\]
after which we can explicitly solve for $|\phi|^2$ and get that
\[
|\phi(\mathbf{x})|^2 = \frac{\mu - V(\mathbf{x})}{\beta}, \tag{2.47}
\]
which leads us to define the Thomas-Fermi approximation
\[
\phi_{TF}(\mathbf{x}) =
\begin{cases}
\sqrt{\left(\mu - V(\mathbf{x})\right)/\beta}, & \mu \geq V(\mathbf{x}), \\
0, & \mu < V(\mathbf{x}).
\end{cases} \tag{2.48}
\]
Here µ depends on β and on which potential V is used, but it can be calculated explicitly for chosen parameters by enforcing that
\[
\int |\phi_{TF}|^2 = 1. \tag{2.49}
\]
In the usual harmonic case we use
\[
\mu = \mu_\beta^{TF} = \frac{1}{2}\left(4\beta\gamma_x\gamma_y\gamma_z/\pi\right)^{1/2} \tag{2.50}
\]
with $\gamma_x = \gamma_y = \gamma_z = 1$.
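As a sketch (in Python rather than MATLAB), the Thomas-Fermi starting guess (2.46) to (2.50) can be evaluated on a 2D grid as follows. The grid parameters mirror those used later in the thesis (a_x = a_y = 16, J = 2^8); the numerical check of the normalization (2.49) should then come out close to 1:

```python
import numpy as np

beta = 2000.0
a = 16.0                       # domain [-a/2, a/2)^2
J = 2**8
x = np.linspace(-a / 2, a / 2, J, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# Harmonic potential and the chemical potential mu from (2.50),
# with gamma_x = gamma_y = gamma_z = 1.
V = (X**2 + Y**2) / 2
mu = np.sqrt(4 * beta / np.pi) / 2

# Thomas-Fermi approximation (2.48): zero wherever mu < V.
phi_tf = np.sqrt(np.maximum(mu - V, 0.0) / beta)

# Numerical check of the normalization (2.49).
dx = x[1] - x[0]
norm = np.sum(phi_tf**2) * dx * dx
```

Note that the support of φ_TF is the disk V(x, y) ≤ µ, which for these parameters has radius √(2µ) ≈ 7.1 and therefore fits inside the computational domain.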
2.6.2 Additional tricks
To quicken computations one may employ a few different tricks. If we want to find the ground state for a particular parameter setup, we can start off with the Thomas-Fermi approximation on a coarser grid than the one we want. After this converges, we interpolate the solution to the finer grid and use this interpolation as the starting guess there. The interpolation can be done in a few different ways; we utilize the interp2 MATLAB function with the 'spline' option, which performs the interpolation with a piecewise cubic polynomial.
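The coarse-to-fine step can be sketched in Python with scipy.interpolate.RectBivariateSpline, which plays the role of MATLAB's interp2 with the 'spline' option. The coarse-grid state below is a hypothetical Gaussian stand-in; in practice it would be the converged ground state on the coarse grid:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Coarse grid (2^6 points per dimension) and a stand-in "converged" state.
a = 16.0
xc = np.linspace(-a / 2, a / 2, 64)
Xc, Yc = np.meshgrid(xc, xc, indexing="ij")
phi_c = np.exp(-(Xc**2 + Yc**2) / 2)   # hypothetical placeholder state

# Cubic spline interpolation to the fine grid (2^7 points per dimension),
# analogous to interp2(..., 'spline'); this becomes the starting guess.
xf = np.linspace(-a / 2, a / 2, 128)
spline = RectBivariateSpline(xc, xc, phi_c)
phi_f = spline(xf, xf)

# Re-normalize so that sum |phi|^2 dx dy = 1 before iterating further.
dx = xf[1] - xf[0]
phi_f /= np.sqrt(np.sum(np.abs(phi_f)**2) * dx * dx)
```

Re-normalizing after interpolation matters because the spline does not preserve the discrete L2 norm exactly, and the scheme assumes a normalized starting guess.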
Figure 11: Top left is the converged ground state for β = 2000 and Ω = 0.4. Top right is the converged ground state interpolated to a higher grid resolution using spline. Bottom left is the converged ground state interpolated to a higher grid resolution and then converged again. Bottom right shows the computation time for different angular velocities Ω and different grid sizes (2^6 × 2^6 and 2^7 × 2^7) on an Intel Core i5-1035G4. We also had ε = 10^{-5} and ∆t = 1.
In figure 11 we can see that the ground state interpolated from the coarser grid (top left plot) to the finer grid (top right plot) is very close to the converged ground state of the finer grid (bottom left plot). We can also see that converging on the 2^6 × 2^6 grid takes significantly less time than on the 2^7 × 2^7 grid.
When investigating how the wave function depends on a particular parameter, say Ω or β, we often want to see how the ground state changes as the parameter varies. If we for example change the parameter Ω from 0 to 1 in small steps of 0.01, we can use the converged state for the previous Ω value as the starting guess when calculating the next one. For example, we can use the converged state for Ω = 0.34 as our starting guess for computing the converged state for Ω = 0.35.
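This warm-start (parameter-continuation) strategy can be sketched as follows. Here compute_ground_state is a hypothetical placeholder for the full semi-implicit Euler/PCG iteration described above, and the initial state stands in for the Thomas-Fermi guess at Ω = 0:

```python
import numpy as np

def compute_ground_state(omega, phi0):
    # Placeholder: a real implementation would iterate the semi-implicit
    # Euler scheme with PCG until the stopping criterion (2.45) is met.
    return phi0  # here it simply returns the starting guess unchanged

phi = np.ones((64, 64)) / 64.0        # stand-in starting guess for Omega = 0
states = {}
for omega in np.arange(0.0, 1.0, 0.01):
    # Warm start: the previous converged state seeds the next Omega value.
    phi = compute_ground_state(omega, phi)
    states[round(omega, 2)] = phi
```

The payoff is that for a small step in Ω the previous ground state is usually much closer to the new one than the Thomas-Fermi approximation is, so far fewer iterations are needed per parameter value.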
Using an interpolation in this way and utilizing previously converged states should reduce the total time needed to find the ground state. It might however cause unexpected negative effects. Something to remember is that MATLAB functions like fft and pcg run on the computer's CPU and can therefore take a long time to compute for large matrices. By using functionality that enables parallelization, for example the MATLAB function gpuArray that moves data onto the GPU, one can reduce the total runtime significantly.
Since the solution in the 3-dimensional case is stored as a 3-dimensional array, it is not completely trivial how to show the result. A helpful function from the MATLAB File Exchange is Viewer3d by Dirk-Jan Kroon, which can be accessed at [16]. Viewer3d is a program originally built for magnetic resonance imaging and computed tomography, but since it can show the density of a given 3D array it is usable when plotting |φ|² in 3D.
3 Results
3.1 Results in two dimensions
In figure 12 and figure 13 we plot the converged ground state for a rotating Bose-Einstein condensate in 2D. The plots show |φ(x, y)|² with a_x = a_y = 16, J = K = 2^8, and with β and Ω as described in the particular plots. The potential used is V(x, y) = (x² + y²)/2. The convergence has been accomplished with PCG, and the starting guess was the Thomas-Fermi approximation.
Figure 12: Converged ground state with harmonic potential for different angular velocities.
Figure 13: Converged ground state with harmonic potential for different angular velocities for ε = 10^{-6} and ∆t = 0.01.
In figure 14 we found the ground state for β = 2000 and Ω ∈ [0, 0.9] with steps in Ω of 0.002. The potential is harmonic, and the starting guess was the Thomas-Fermi approximation for Ω = 0 and then the last converged state for Ω > 0; for example, the converged state for Ω = 0.344 was used as the starting guess for Ω = 0.346. By doing this we can see how the number of vortices depends on the angular velocity Ω. Note the discrepancy at Ω = 0.4 between figure 12, where vortices exist, and figure 14, where there are none. The starting guess in the first case was the Thomas-Fermi approximation, while the starting guess in the second case was the converged state for Ω = 0.398.
Figure 14: The number of vortices as a function of Ω with β = 2000 and a harmonic potential. Here ∆t = 0.01 and ε = 10^{-4} were used.
In figure 15 we have done a quadratic fit and an exponential fit on the values of Ω for which there exist vortices. Note that the Ω values before the jump at about Ω = 0.46 were thus not included in these fits. The reason we do a quadratic fit is that the number of vortices should be symmetric when mirrored across the y-axis, since a negative Ω only means rotating in the other direction, which should not change the behavior of the vortices. A quadratic fit that models the vortices well should therefore have its minimum close to Ω = 0.
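The two fits can be sketched in Python as follows. The (Ω, vortex-count) data below is synthetic stand-in data chosen only to make the sketch self-contained (the thesis uses the counts from figure 14), and numpy.polyfit and scipy.optimize.curve_fit stand in for whatever fitting tool was actually used:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for the vortex counts above the jump at Omega ~ 0.46.
omega = np.linspace(0.46, 0.9, 23)
counts = (100 * omega**2 - 15).round()

# Quadratic fit: symmetry in Omega suggests the minimum should sit
# close to Omega = 0. polyfit returns [a, b, c] for a*x^2 + b*x + c.
quad = np.polyfit(omega, counts, 2)

# Exponential fit n(Omega) = a * exp(b * Omega).
def expmodel(x, a, b):
    return a * np.exp(b * x)

(a, b), _ = curve_fit(expmodel, omega, counts, p0=(1.0, 5.0), maxfev=10000)
```

A quick sanity check on such fits is to compute the vertex of the quadratic, -quad[1] / (2 * quad[0]), and see how far it lies from Ω = 0.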
Figure 15: Interpolation of angular velocity vs vortices, showing the data together with an exponential fit (left) and a 2nd-order polynomial fit (right).