Computation of Stationary States for Rotating Bose-Einstein Condensates using Spectral Methods


DEGREE PROJECT IN TECHNOLOGY, FIRST CYCLE, 15 CREDITS
STOCKHOLM, SWEDEN 2020

Computation of Stationary States for Rotating Bose-Einstein Condensates using Spectral Methods

ADAM ERLANDSSON

PAUL HEDVALL


Abstract

The Bose-Einstein condensate is a phase of matter that arises when cooling gases of bosons to extremely low temperatures. When studying these condensates one may use the Gross-Pitaevskii equation, which is a non-linear variant of the Schrödinger equation. An interesting phenomenon that arises when rotating a Bose-Einstein condensate is the appearance of vortices.

We implement a semi-implicit Euler scheme using spectral methods proposed in [1] to numerically calculate the ground state of a rotating Bose-Einstein condensate. We start by implementing a simpler iterative fixed-point method to solve the Euler scheme, but show that this method fails to converge for large rotations. Because of this we implement multiple Krylov subspace solvers that do converge for large rotations, and show that the Preconditioned Conjugate Gradient method has better performance than the BiConjugate Gradient Stabilized method. After the implementation we briefly look at the performance of the method and improve it with simple tricks that do not compromise the accuracy or robustness and which reduce the computation time slightly.

Lastly we look at the formation of vortices in 2-dimensional and 3-dimensional Bose-Einstein condensates. We show that the number of vortices increases exponentially with increasing angular velocity in 2D until the condensate breaks apart, but in 3D we ultimately find that the required computation time and RAM storage are too large to be able to analyze the vortices in a similar way on our personal computers.

Summary (Sammanfattning)

The Bose-Einstein condensate is a phase that arises when cooling bosons to extremely low temperatures. To study these condensates one can use the Gross-Pitaevskii equation, which is a non-linear variant of the Schrödinger equation. An interesting phenomenon that arises when a Bose-Einstein condensate is rotated is the appearance of vortices.

We implement a semi-implicit Euler scheme using spectral methods proposed in [1] to numerically compute the ground state of a rotating Bose-Einstein condensate. We begin by implementing a simpler iterative fixed-point method to solve this system, but show that this method does not converge for large rotations. Because of this we implement several Krylov subspace methods that do converge for large rotations, and show that the Preconditioned Conjugate Gradient method has better performance than the BiConjugate Gradient Stabilized method.

After this implementation we briefly examine the performance of the method and improve it with some simple tricks that do not affect the accuracy or robustness and that reduce the computation time somewhat.

Finally we study the formation of vortices in 2-dimensional and 3-dimensional Bose-Einstein condensates. We show that the number of vortices grows exponentially with increasing angular velocity in 2D until the condensate breaks apart, but in 3D we ultimately find that the computation time and RAM usage are too large to carry out a similar analysis of the vortices' behavior on our own computers.


Acknowledgements

We would like to thank our advisors Patrick Henning and Johan Wärnegård for their help and guidance during this project.

Contents

1 Background
1.1 The Bose-Einstein condensate
1.2 The Gross-Pitaevskii equation
1.3 Nondimensionalization
2 Methods
2.1 Gradient Descent
2.2 Semi-Implicit Euler
2.3 Spatial discretization
2.4 A fixed-point approach
2.5 The Conjugate Gradient method
2.6 Approximate Solutions
2.6.1 The Thomas-Fermi Approximation
2.6.2 Additional tricks
3 Results
3.1 Results in two dimensions
3.2 Results in three dimensions
4 Discussion
4.1 Vortices
4.2 Future studies


1 Background

1.1 The Bose-Einstein condensate

From quantum mechanics we know that any particle falls into one of two classes: bosons and fermions. A particle is a boson if it has integer spin and a fermion if it has half-integer spin [2]. Some examples of bosons are photons with spin 1 and the Higgs boson with spin 0. An example of a composite particle that is a boson is the helium-4 (⁴He) atom [3]. Some examples of fermions are electrons, protons and neutrons, which all have spin 1/2 [4].

Because of the Pauli exclusion principle, no two fermions can occupy the same quantum state. For bosons, however, this is not true. In particular, when cooling a low-density gas of bosons to temperatures near absolute zero, a large fraction of the bosons will occupy the lowest quantum state at the same time. This causes the bosons to behave like a superfluid, which is a fluid with zero viscosity. This new state of matter is called the Bose-Einstein condensate, named after Satyendra Nath Bose and Albert Einstein, who both were important in theoretically conjecturing the existence of this condensate during the 1920s [5].

Since then much research has been done on the subject. The first experimental observation of a Bose-Einstein condensate was accomplished in 1995 [6], for which some of the contributors were later awarded the Nobel prize in physics. One interesting phenomenon that arises in rotating Bose-Einstein condensates is the appearance of vortices, or density singularities, at which the wave function vanishes. These have been observed experimentally, as shown for example in [7].

Since physically creating and observing a condensate can be challenging, one may instead study Bose-Einstein condensates numerically. Our goal in this paper is to implement a reliable method for numerically calculating the ground state of a Bose-Einstein condensate, and to utilize this method to study the condensate in general and the vortices in particular.

1.2 The Gross-Pitaevskii equation

The wave function for a Bose-Einstein condensate can be calculated as the solution ψ(x, t) ∈ C to the Gross-Pitaevskii equation (GPE), which is a non-linear variant of the Schrödinger equation. The time-dependent Gross-Pitaevskii equation can be written as

    i\hbar \frac{\partial \psi(\mathbf{x},t)}{\partial t} = H\psi = \left( -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{x}) + N U_0 |\psi(\mathbf{x},t)|^2 - \Omega L_z \right) \psi(\mathbf{x},t),    (1.1)

with this particular form being used in [8]. In this equation we have a few different parameters. H is the Hamiltonian operator, while ℏ is the reduced Planck constant. As for the particles themselves, m is the particle mass, N is the number of particles, and U_0 is a parameter describing the interaction between the particles in the condensate, defined as

    U_0 = \frac{4\pi \hbar^2 a_s}{m},    (1.2)

where a_s is the s-wave scattering length. A positive a_s value means that the particles repel each other, while a negative one means the particles attract.

The ∇ operator is defined as ∇ = (∂_x, ∂_y, ∂_z), with ∇² = Δ = ∂_x² + ∂_y² + ∂_z² being the Laplacian operator. In this form of the GPE we have implicitly assumed that all rotation is about the z-axis, with the angular momentum operator L_z defined as L_z = −iℏ(x∂_y − y∂_x).

The two parameters V and Ω describe the external potential applied to the condensate and the angular velocity enforced on the condensate, respectively. Some reasonable potentials to use are the zero potential V(x) = 0, the quadratic potential V(x) = α|x|², or the quartic potential V(x) = α|x|⁴ for some parameter α.

According to Max Born's rule, the probability density of finding a particle at a position x and a time t is

    |\psi(\mathbf{x},t)|^2 = \rho(\mathbf{x},t).    (1.3)

In (1.1) the number of particles is constant, which gives us the normalization condition

    \|\psi\|^2 := \int_{\mathbb{R}^d} |\psi(\mathbf{x},t)|^2 \, d\mathbf{x} = N.    (1.4)

Given a wave function ψ, the energy per particle is defined as

    E(\psi) = \int_{\mathbb{R}^d} \left( \frac{\hbar^2}{2m} |\nabla\psi(\mathbf{x},t)|^2 + V(\mathbf{x})|\psi(\mathbf{x},t)|^2 + \frac{N U_0}{2} |\psi(\mathbf{x},t)|^4 - \Omega \, \psi^*(\mathbf{x},t) L_z \psi(\mathbf{x},t) \right) d\mathbf{x},    (1.5)

where R^d can be either R² or R³ depending on what dimension we are working in. The asterisk (as in ψ*) denotes the complex conjugate.

What makes the GPE interesting is the nonlinear term NU_0|ψ|²ψ arising from the interaction between the particles in the condensate. The non-linearity in the GPE, together with the enforced rotation, is what gives rise to the vortices in the wave function of the Bose-Einstein condensate.

1.3 Nondimensionalization

Working with the GPE (1.1) directly is possible, but in this paper we prefer modeling the behavior of the condensate without caring about the units. We therefore introduce the dimensionless variables proposed in [8]:

    t \mapsto \frac{t}{\omega_m}, \quad \mathbf{x} \mapsto a_0 \mathbf{x}, \quad \psi \mapsto \frac{\psi}{a_0^{3/2}},    (1.6)

    \Omega \mapsto \omega_m \Omega, \quad E \mapsto \hbar \omega_m E_{\beta,\Omega},    (1.7)


where ω_m = min(ω_x, ω_y, ω_z), with ω_x, ω_y, ω_z being the trapping frequencies of the external potential in the x-, y-, z-directions respectively, defined through the quadratic potential V(x, y, z) = (m/2)(ω_x² x² + ω_y² y² + ω_z² z²). Here we have defined a_0 = \sqrt{\hbar / (m \omega_m)}, and by letting

    \beta = \frac{U_0 N}{a_0^3 \hbar \omega_m} = \frac{4\pi a_s N}{a_0}    (1.8)

we get the dimensionless GPE

    i \frac{\partial \psi(\mathbf{x},t)}{\partial t} = -\frac{1}{2}\Delta\psi(\mathbf{x},t) + V(\mathbf{x})\psi(\mathbf{x},t) + \beta|\psi(\mathbf{x},t)|^2\psi(\mathbf{x},t) - \Omega L_z \psi(\mathbf{x},t).    (1.9)

Since we are only interested in finding time-independent ground states, one may use the ansatz

    \psi(\mathbf{x},t) = e^{-i\mu t}\phi(\mathbf{x}),    (1.10)

where μ is the chemical potential [1]. Inserting this ansatz into the GPE gives

    \mu\phi(\mathbf{x}) = -\frac{1}{2}\Delta\phi(\mathbf{x}) + V(\mathbf{x})\phi(\mathbf{x}) + \beta|\phi(\mathbf{x})|^2\phi(\mathbf{x}) - \Omega L_z \phi(\mathbf{x}),    (1.11)

which is the time-independent dimensionless GPE for a conserved number of particles.

By nondimensionalizing we transform the normalization condition into

    \|\phi\|^2 = \int_{\mathbb{R}^d} |\phi(\mathbf{x})|^2 \, d\mathbf{x} = 1,    (1.12)

and the nondimensional energy of the time-independent GPE can be calculated as

    E_{\beta,\Omega}(\phi) = \int_{\mathbb{R}^d} \left( \frac{1}{2}|\nabla\phi(\mathbf{x})|^2 + V(\mathbf{x})|\phi(\mathbf{x})|^2 + \frac{\beta}{2}|\phi(\mathbf{x})|^4 - \Omega\,\phi^*(\mathbf{x}) L_z \phi(\mathbf{x}) \right) d\mathbf{x}.    (1.13)

2 Methods

2.1 Gradient Descent

Our main problem in this paper is implementing a method that numerically calculates the ground state of a Bose-Einstein condensate. The ground state is the wave function φ for which the energy functional E_{β,Ω}(φ) in (1.13) is minimal under the constraint (1.12). More formally we can write our problem as

    \text{minimize } E_{\beta,\Omega}(\phi) \quad \text{subject to } \int_{\mathbb{R}^d} |\phi(\mathbf{x})|^2 \, d\mathbf{x} = 1.    (2.1)


One way to solve this problem is using the conjugate gradient method, which produced very good results in [9]; however, that method is rather complicated. A much simpler way to solve this problem is the gradient descent method. This method works by taking a starting guess and then repeatedly moving in the direction in which the energy decreases the most until the energy is minimized. We will do this by taking the negative gradient of E_{β,Ω}(φ) with respect to φ. Since E_{β,Ω} is a functional depending on the function φ, we will have to define what this gradient actually means. We will skip the derivation, only presenting the definitions and what the gradient in our context turns out to be.

Taking the functional derivative of (1.13) will be done with the Gâteaux derivative, which is defined as

    d_h f = \lim_{\varepsilon \to 0} \frac{f(x + \varepsilon h) - f(x)}{\varepsilon}.    (2.2)

With this definition, the Gâteaux derivative of (1.13) at u in the direction h can be calculated as

    E'(u)h = \int_D \Re\big( \langle \nabla u, \nabla h \rangle + \langle 2Vu + 2\beta|u|^2 u - 2i\Omega L_z u, \, h \rangle \big),    (2.3)

where ⟨·, ·⟩ is some inner product. The inner product in the Hilbert space H⁰(D, C) = L²(D, C) is the most common choice and is given by

    \langle u, v \rangle_{L^2} = \int_D u(\mathbf{x}) v^*(\mathbf{x}) \, d\mathbf{x}.    (2.4)

A direct computation with this inner product gives us the gradient

    \nabla_{\phi, L^2} E_{\beta,\Omega} = 2H\phi = -\Delta\phi + 2V\phi + 2\beta|\phi|^2\phi - 2\Omega L_z \phi,    (2.5)

where ∇E denotes the Riesz representative of the gradient [10]. However, other gradients might be more effective. For example, the gradient computed with the inner product in H¹(D, C) mentioned in [11] gives

    \nabla_{\phi, H^1} E_{\beta,\Omega} = (1 - \Delta)^{-1}\left( -\Delta\phi + 2V\phi + 2\beta|\phi|^2\phi - 2\Omega L_z \phi \right).    (2.6)

In fact, any inner product could be used. In [12] a gradient that changes with the flow was presented, which produced good results. For our implementation we will use the ∇_{φ,L²} gradient, since it is the simplest to implement. These kinds of gradients are in general called Sobolev gradients, and more information about them can be found in [11].

With the gradient defined, let us formalize the method of gradient descent in the following way. Let t_0, t_1, ... be uniform "time" steps during the descent, with Δt := t_{k+1} − t_k being the distance between these steps, and let φ_k be the value of φ at step k. Note that these steps are not actual steps in time, but a construction utilized in the gradient descent method. The gradient descent method is constructed in such a way that φ_k goes to the ground state as t → ∞.


Figure 1: The gradient descent method: successive iterates φ_0, φ_1, φ_2, φ_3 produced by steps of −Δt ∇_{φ,L²} E_{β,Ω} approach min E_{β,Ω}(φ).

In figure 1 we can see how the gradient descent method converges to the minimal energy as lim_{k→∞} φ_k. Note that taking too large steps with this kind of method can undo parts of previous steps or even increase the energy, as opposed to, for example, the conjugate gradient method, which finds more optimal step sizes. However, it is possible to optimize the step sizes further without over-complicating the method too much. For example, one could try a few different step sizes at each step, calculate the energy for each, and then use the step size that decreases the energy the most.
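The step-size trial just described can be sketched as follows. This is an illustrative Python/NumPy sketch (the thesis implementation itself is in MATLAB), using a simple quadratic stand-in energy rather than E_{β,Ω}; the matrix A and the candidate step sizes are arbitrary choices for the illustration.

```python
import numpy as np

def energy(phi, A):
    # toy quadratic stand-in for E_beta_Omega, minimal at phi = 0
    return phi @ (A @ phi)

def gradient(phi, A):
    # gradient of the quadratic energy
    return 2 * A @ phi

def descend(phi, A, dts=(0.2, 0.05, 0.01), steps=300):
    # at each step, try each candidate step size and keep the one
    # whose result has the lowest energy
    for _ in range(steps):
        g = gradient(phi, A)
        phi = min((phi - dt * g for dt in dts), key=lambda c: energy(c, A))
    return phi

A = np.diag([1.0, 2.0, 4.0])          # positive definite
phi_min = descend(np.array([1.0, 1.0, 1.0]), A)
print(energy(phi_min, A))             # close to 0
```

Since every candidate is evaluated through the energy itself, a step size that would overshoot and increase the energy is automatically rejected in favor of a smaller one.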

2.2 Semi-Implicit Euler

To calculate the value of φ_k at step k during the gradient descent we will in this paper use a semi-implicit Euler scheme. We obtain it by inserting the value of φ from the previous iteration into the nonlinear terms of the next, which gives us a linear system of equations. After each time step we project the solution onto the unit sphere S to enforce the normalization constraint from (1.12).

The semi-implicit backward Euler scheme is given by

    \frac{\tilde{\phi}^{n+1} - \phi^n}{\Delta t} = -\nabla_{\phi, L^2} E_{\beta,\Omega}(\tilde{\phi}^{n+1}),    (2.7)

which we can write as

    \left( \frac{1}{\Delta t} - \Delta + 2V + 2\beta|\phi^n|^2 - 2\Omega L_z \right) \tilde{\phi}^{n+1} = \frac{\phi^n}{\Delta t}.    (2.8)

Here φ̃^{n+1} is the state of φ at step n + 1 before normalization, so after each time step we project the solution onto the unit sphere according to

    \phi^{n+1} = \frac{\tilde{\phi}^{n+1}}{\|\tilde{\phi}^{n+1}\|},    (2.9)

enforcing the normalization defined in (1.12).
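One full step of the scheme, i.e. (2.8) followed by the projection (2.9), can be sketched as below. This is an illustrative Python/NumPy sketch (the thesis code is MATLAB), restricted to 1D with Ω = 0, and for brevity it uses a dense second-order finite-difference Laplacian rather than the spectral one introduced later in section 2.3; the grid size, Δt, β and the potential are arbitrary illustration choices.

```python
import numpy as np

J, a_x = 64, 8.0
x = np.linspace(-a_x, a_x, J, endpoint=False)
h = x[1] - x[0]
dt, beta = 0.01, 100.0
V = 0.5 * x**2                        # quadratic trapping potential

# dense second-order finite-difference Laplacian with periodic boundary
lap = (np.diag(-2.0 * np.ones(J)) + np.diag(np.ones(J - 1), 1)
       + np.diag(np.ones(J - 1), -1))
lap[0, -1] = lap[-1, 0] = 1.0
lap /= h**2

phi = np.exp(-x**2 / 2)
phi /= np.sqrt(h * np.sum(np.abs(phi)**2))     # enforce (1.12) discretely

# assemble the operator of (2.8) with Omega = 0 and solve for phi_tilde
A_op = np.eye(J) / dt - lap + np.diag(2 * V + 2 * beta * np.abs(phi)**2)
phi_tilde = np.linalg.solve(A_op, phi / dt)

# project back onto the unit sphere, (2.9)
phi_new = phi_tilde / np.sqrt(h * np.sum(np.abs(phi_tilde)**2))
print(h * np.sum(np.abs(phi_new)**2))          # 1.0 up to rounding
```

Repeating this step until φ stops changing is exactly the descent loop; the nonlinear term is frozen at φ^n, which is what makes each step a linear solve.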

Figure 2: Normalization after each time step: each step −Δt ∇_{φ,L²} E_{β,Ω} takes φ^n off the unit sphere to φ̃^{n+1}, which is then projected back to φ^{n+1}.

The normalization can be visualized in figure 2.

Figure 3: Projection onto the tangent space of the mass constraint.


Another, more effective approach to enforce (1.12) is to project the gradient onto the tangent space of the constraint, which is done in [10]. The projection approach gives an error term proportional to O(Δt²) [9], which could be removed by normalizing at each step. This scheme would make the method harder to implement with the semi-implicit Euler scheme and would probably not give us much of an advantage, since we can just increase the step size to compensate. This kind of scheme can be seen in figure 3.

In any case, to actually use this Euler scheme we have to start with some approximation φ_0. Particular starting approximations are discussed in section 2.6.

Since we want to find the converged state lim_{k→∞} φ_k, but taking an infinite number of steps numerically is not possible, we have to choose some criterion that determines whether or not we are close enough to the solution. We do this by implementing the stopping criterion |φ^{k+1} − φ^k| ≤ εΔt, for some small ε.

One way to speed up this algorithm is to use an adaptive time step Δt. For example, in the beginning of the descent taking large time steps is fine and will be faster than taking lots of small time steps. As the solution starts to converge, one may however need to take smaller steps to actually get a good approximation of the solution.

2.3 Spatial discretization

The semi-implicit scheme introduced in the last chapter is a way of approximating the movement in "time". To actually use this scheme, we also have to discretize the physical area in which we want to calculate the ground state. To start off we introduce the domain

    O := (-a_x, a_x) \times (-a_y, a_y) \times (-a_z, a_z),    (2.10)

where a_x, a_y, a_z describe the side lengths of the area in which we want to calculate the ground state. We implicitly assume that the whole ground state resides in this area, with |φ|² = 0 on and outside of the boundary. This approximation is fine to make since our trapping potentials V force almost all of the condensate to reside in some finite area, after which we can choose a_x, a_y, a_z to include this area. We then discretize this domain uniformly, representing it with J × K × L points, with J, K, L points in the x-, y-, z-directions respectively. We therefore have the discrete grid

    O_{J,K,L} = \{ (x_j, y_k, z_l) \mid 0 \le j \le J-1, \; 0 \le k \le K-1, \; 0 \le l \le L-1 \},    (2.11)

where x_j = −a_x + 2a_x j/J, y_k = −a_y + 2a_y k/K, z_l = −a_z + 2a_z l/L. We also define h_x := x_{j+1} − x_j, h_y := y_{k+1} − y_k, h_z := z_{l+1} − z_l.
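The grid (2.11) can be constructed directly. The following Python/NumPy sketch (the thesis uses MATLAB) builds the grid points and mesh spacings for arbitrarily chosen a_x, a_y, a_z and J, K, L:

```python
import numpy as np

a_x, a_y, a_z = 8.0, 8.0, 4.0     # arbitrary half-side lengths
J, K, L = 16, 16, 8               # arbitrary grid resolutions

x = -a_x + 2 * a_x * np.arange(J) / J
y = -a_y + 2 * a_y * np.arange(K) / K
z = -a_z + 2 * a_z * np.arange(L) / L
hx, hy, hz = x[1] - x[0], y[1] - y[0], z[1] - z[0]

# X[j, k, l] = x_j etc., so phi can be stored as a J x K x L array over O_{J,K,L}
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
print(X.shape, hx, x[0], x[-1])   # (16, 16, 8) 1.0 -8.0 7.0
```

Note that the last grid point is a_x − h_x, not a_x; the missing endpoint is what makes the grid compatible with periodic boundary conditions later on.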

Figure 4: Illustration of the spatial scheme in 2D.

Figure 5: Illustration of the spatial scheme in 3D.

The spatial discretization scheme is illustrated in figure 4 and figure 5. In this discretization, the integrals for normalizing the wave function and calculating the energy are approximated as sums. For example, we have that

    \|\phi\|^2 \approx h \sum_{j=0}^{J-1} \sum_{k=0}^{K-1} \sum_{l=0}^{L-1} |\phi(x_j, y_k, z_l)|^2,    (2.12)

with h := h_x h_y h_z, so when normalizing we instead use this approximation of the norm.

The energy can be expressed similarly as

    E_{\beta,\Omega}(\phi) \approx h \sum_{j,k,l} \left( \frac{1}{2}|\nabla\phi(\mathbf{x})|^2 + V(\mathbf{x})|\phi(\mathbf{x})|^2 + \frac{\beta}{2}|\phi(\mathbf{x})|^4 - \Omega\,\Re\big(\phi^*(\mathbf{x}) L_z \phi(\mathbf{x})\big) \right).    (2.13)

As for applying operators, this discretization can be used directly with the identity operator, the potential V, and the non-linear term β|φ|², since all these operators only depend on the discrete point in question and can therefore be applied pointwise.

Let us clarify what we mean by this. The most intuitive and easiest way to regard φ with this discretization is as a 3-dimensional matrix, where the element at location (j, k, l) in the matrix is the corresponding φ-value at the point (x_j, y_k, z_l). Since we are not regarding φ as a vector, we never actually perform an explicit matrix multiplication; rather, we consider all operators as functions that we can apply pointwise. For example, applying the V operator to a state φ is done by letting

    \phi(x_j, y_k, z_l) \mapsto V(x_j, y_k, z_l)\,\phi(x_j, y_k, z_l)

for each j, k, l, and similarly for the identity operator and the non-linear term. We see that the value at a point φ(x_j, y_k, z_l) under application of the operator V depends only on the point in question, which means that we do not have to reshape φ into a vector or perform any explicit matrix multiplications.
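In array form, pointwise application amounts to elementwise operations. A Python/NumPy sketch (the thesis uses MATLAB), with an arbitrarily chosen quadratic potential and β:

```python
import numpy as np

J = K = L = 8
x = np.linspace(-4.0, 4.0, J, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
V = X**2 + Y**2 + Z**2                   # quadratic potential with alpha = 1
beta = 10.0

phi = np.exp(-(X**2 + Y**2 + Z**2) / 2).astype(complex)

# pointwise application: no reshaping, no explicit matrix multiplication
Vphi = V * phi                           # (V phi)(x_j, y_k, z_l)
NLphi = beta * np.abs(phi)**2 * phi      # nonlinear term, also pointwise
print(Vphi.shape)                        # (8, 8, 8)
```

Both operators cost O(JKL) rather than the O((JKL)²) of a dense matrix-vector product, which is why avoiding explicit matrices matters.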

However, both the Δ-term and the ΩL_z-term require us to approximate spatial derivatives. Here the value at a point has to depend on other points, since any derivative does, and we cannot apply these terms pointwise as above.

To accurately approximate the partial derivatives we use a method proposed in [1], exploiting the fact that differentiation in Fourier space is just a simple multiplication; that is,

    \mathcal{F}\left[ \frac{d^n f}{dx^n} \right](\omega) = (i\omega)^n \, \mathcal{F}[f](\omega),    (2.14)

where F is the continuous Fourier transform operator.

Figure 6: Illustrating the scheme of approximating a function as a sum of sinusoidal functions.

The idea behind the method is illustrated in figure 6: a function is approximated by a sum of periodic sinusoidal functions, which are easy to differentiate.

An important requirement when using spectral methods is that the function satisfies a periodic boundary condition.


Figure 7: Effect of using spectral methods with a periodic boundary condition vs. a non-periodic one. The left figure shows a 5 Hz signal that satisfies a periodic boundary condition, while the right shows a 6.5 Hz signal that does not.

In figure 7 we can see a simple example showing that a periodic boundary condition is necessary in order for the spectral method to be accurate. Since we are assuming that the condensate resides in our finite area O with |φ|² = 0 on the boundary and outside of O, we can enforce periodic boundary conditions in our particular case.

When doing discrete Fourier transforms in multiple dimensions we denote the corresponding frequency vectors for x_j, y_k and z_l as

    \mathcal{F}_x(\{x_j\})_p = \mu_p, \quad \mathcal{F}_y(\{y_k\})_q = \lambda_q, \quad \mathcal{F}_z(\{z_l\})_r = \xi_r,    (2.15)

where

    \mu_p = \frac{\pi p}{a_x}, \quad \lambda_q = \frac{\pi q}{a_y}, \quad \xi_r = \frac{\pi r}{a_z},    (2.16)

and where F_x is the discrete Fourier transform in the x-direction, and similarly for y and z. A partial derivative in the x-direction can then be approximated in Fourier space by taking the discrete Fourier transform of φ in the x-direction and multiplying by iμ_p, which gives us

    \mathcal{F}_x[\partial_x \phi](\mu_p, y_k, z_l) \approx i\mu_p \sum_{j=0}^{J-1} \phi(x_j, y_k, z_l)\, e^{-i\mu_p (x_j + a_x)}.    (2.17)

Taking the inverse discrete Fourier transform gives us an approximation of the partial derivative in physical space,

    \partial_x \phi(x_j, y_k, z_l) \approx \frac{1}{J} \sum_{p=-J/2}^{J/2-1} \left[ i\mu_p \sum_{j'=0}^{J-1} \phi(x_{j'}, y_k, z_l)\, e^{-i\mu_p (x_{j'} + a_x)} \right] e^{i\mu_p (x_j + a_x)},    (2.18)

and it is easy to see that the same method can be applied in the y- and z-directions. The second-order partial differential operators in the Laplacian can thus be expressed as

    \partial_x^2 \phi(x_j, y_k, z_l) \approx \frac{1}{J} \sum_{p=-J/2}^{J/2-1} \left[ -\mu_p^2 \sum_{j'=0}^{J-1} \phi(x_{j'}, y_k, z_l)\, e^{-i\mu_p (x_{j'} + a_x)} \right] e^{i\mu_p (x_j + a_x)},    (2.19)

    \partial_y^2 \phi(x_j, y_k, z_l) \approx \frac{1}{K} \sum_{q=-K/2}^{K/2-1} \left[ -\lambda_q^2 \sum_{k'=0}^{K-1} \phi(x_j, y_{k'}, z_l)\, e^{-i\lambda_q (y_{k'} + a_y)} \right] e^{i\lambda_q (y_k + a_y)},    (2.20)

    \partial_z^2 \phi(x_j, y_k, z_l) \approx \frac{1}{L} \sum_{r=-L/2}^{L/2-1} \left[ -\xi_r^2 \sum_{l'=0}^{L-1} \phi(x_j, y_k, z_{l'})\, e^{-i\xi_r (z_{l'} + a_z)} \right] e^{i\xi_r (z_l + a_z)}.    (2.21)

The Laplacian operator can then be expressed as the sum of (2.19), (2.20) and (2.21):

    \Delta\phi(x_j, y_k, z_l) = (\partial_x^2 + \partial_y^2 + \partial_z^2)\,\phi(x_j, y_k, z_l).    (2.22)

The angular momentum operator is defined as

    L_z \phi = -i(x\partial_y - y\partial_x)\phi,    (2.23)

which we can evaluate in Fourier space similarly to the Laplacian as

ix jy φ(x j , y k , z l ) ≈ 1 K

K/2−1

X

q=−K/2

"

−x j λ q

K−1

X

k=0

φ(x j , y k , z l )e −λ

q

i(y

k

+a

y

)

#

e λ

q

i(y

k

+a

y

) (2.24)

−iy kx φ(x j , y k , z l ) ≈ 1 J

J/2−1

X

p=−J/2

y k µ p

J −1

X

j=0

φ(x j , y k , z l )e −µ

p

i(x

j

+a

x

)

 e µ

p

i(x

j

+a

x

) . (2.25) We see that we have the exact situation as before, where we are able to apply the ∆ operator and the L z operator pointwise. The main difference is that we can only do this pointwise in Fourier space, requiring us to first apply a Fourier transform, then applying the operator pointwise in Fourier space, before applying the inverse Fourier transform to get back.

In general, one may jump back and forth between physical space and Fourier space by applying Fourier transforms, applying each operator in the space where it is easiest to apply. An important thing to consider is therefore grouping operators in an effective way, so as not to apply unnecessary Fourier transforms and increase computation time. Evaluating both the Laplacian and the angular momentum at the same time is therefore preferred if possible.

Later in the paper, when the method is implemented in MATLAB, we use the more efficient Fast Fourier Transform (FFT) instead of the plain discrete Fourier transform, which reduces the number of operations from O(N²) to O(N log N) and can also be computed in parallel on the computer's GPU. Note that when implementing the methods in MATLAB, calculations are done with vectors and not 3-dimensional matrices since this is more efficient; we still apply operations pointwise without explicit matrix multiplications. When evaluating terms which need Fourier transforms, the vectors are reshaped back into 3-dimensional matrices to more easily use the fft function. All implementation is done with MATLAB (version 9.7.0.1190202 (R2019b)).

A major reason for implementing this spectral method instead of a finite difference method or similar is the difference in convergence rate. For finite difference methods, as N increases we typically have convergence of order O(N^{-m}) for some constant m that depends on the order of approximation. With spectral methods the convergence is at least O(N^{-m}) for every m according to [13], provided that the solution is infinitely differentiable.
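This difference in convergence rate can be observed directly. The following Python/NumPy sketch (illustrative, not thesis code) differentiates the smooth periodic function exp(sin x) with a second-order finite difference and with the spectral multiplication rule: the finite-difference error falls like N^{-2}, while the spectral error collapses to machine precision almost immediately.

```python
import numpy as np

def errors(N):
    # max derivative error on a smooth periodic function for grid size N
    a = np.pi
    x = -a + 2 * a * np.arange(N) / N
    h = x[1] - x[0]
    f = np.exp(np.sin(x))
    dfdx = np.cos(x) * np.exp(np.sin(x))
    # second-order central difference with periodic wraparound
    fd = (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)
    # spectral derivative via the FFT
    omega = 2 * np.pi * np.fft.fftfreq(N, d=h)
    sp = np.fft.ifft(1j * omega * np.fft.fft(f)).real
    return np.max(np.abs(fd - dfdx)), np.max(np.abs(sp - dfdx))

for N in (16, 32, 64):
    e_fd, e_sp = errors(N)
    print(N, e_fd, e_sp)   # FD error shrinks like N^-2; spectral error collapses
```

Doubling N roughly quarters the finite-difference error, whereas the spectral error is already at rounding level for modest N.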

2.4 A fixed-point approach

The system (2.8) can be written as

    A\tilde{\phi}^{n+1} = b, \qquad \phi^{n+1} = \tilde{\phi}^{n+1} / \|\tilde{\phi}^{n+1}\|,    (2.26)

where the operator A is defined by

    A = \frac{I}{\Delta t} - \Delta + 2V + 2\beta|\phi^n|^2 - 2\Omega L_z,

and where b = φ^n/Δt. To solve the system (2.26) one would ideally find the inverse A^{-1} of this operator to easily and explicitly calculate φ̃^{n+1}, but inverting A would be very time-consuming.

Another simple iterative method, introduced in [14] and [1], is instead to partition A into two operators, A = A_ω − A_ω^{TF}, defined by

    A_\omega = \frac{I}{\Delta t} - \Delta + \omega I, \qquad A_\omega^{TF} = 2\Omega L_z - 2V - 2\beta|\phi^n|^2 + \omega I,    (2.27)

where ω is some stabilization parameter and I is the identity operator.

Notice that applying A_ω in Fourier space is relatively easy, as shown in the last chapter. Because of this, its inverse A_ω^{-1} can be applied easily in a similar fashion. Therefore we can rearrange the system (2.26) as

    (A_\omega - A_\omega^{TF})\,\tilde{\phi}^{n+1} = b    (2.28)

    A_\omega \tilde{\phi}^{n+1} = A_\omega^{TF} \tilde{\phi}^{n+1} + b    (2.29)

    \tilde{\phi}^{n+1} = A_\omega^{-1}\left( A_\omega^{TF} \tilde{\phi}^{n+1} + b \right).    (2.30)

This final equation suggests finding φ̃^{n+1} using fixed-point iteration, which can be done by calculating a series of approximations {φ̃^{(k)}}_{k=0,1,...} until convergence through the algorithm

    \tilde{\phi}^{(0)} = \phi^n,    (2.31)

    \tilde{\phi}^{(k+1)} = A_\omega^{-1}\left( A_\omega^{TF} \tilde{\phi}^{(k)} + b \right), \quad k = 0, 1, \ldots    (2.32)

Every step in time therefore consists of applying (2.32) a number of times until φ̃^{(k)} is a good enough approximation, and then letting φ^{n+1} = φ̃^{(k)} / ‖φ̃^{(k)}‖.

Convergence is decided by some stopping criterion

    \max_{(x_j, y_k, z_l)} \left| \tilde{\phi}^{(k+1)}(x_j, y_k, z_l) - \tilde{\phi}^{(k)}(x_j, y_k, z_l) \right| \le \varepsilon.    (2.33)

≤ ε. (2.33) From linear algebra we get that for this fixed point iteration to converge we require the spectral radius

ρ A −1 ω A T F ω  := max |λ| : λ is an eigenvalue of A −1 ω A T F ω

(2.34) to be smaller than 1. As shown in [1], one can choose ω to get a small enough spectral radius when the rotation Ω is small. In particular, the ideal value for ω is proved in [14] to be

ω := 1

2 min(V + β|φ n | 2 ) + max(V + β|φ n | 2 ) . (2.35)
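The iteration (2.32) and the spectral-radius condition (2.34) can be demonstrated on a small generic linear system. This Python/NumPy sketch is illustrative only: the matrices here are arbitrary stand-ins, not the GPE operators A_ω and A_ω^TF. It splits A = A_ω − A_ω^TF, checks ρ(A_ω^{-1} A_ω^TF) < 1, and iterates to the solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A_w = np.diag(np.arange(2.0, 2.0 + n))        # stand-in for A_omega: easy to invert
A_tf = 0.05 * rng.standard_normal((n, n))     # stand-in for A_omega^TF
A = A_w - A_tf                                # the splitting A = A_omega - A_omega^TF
b = rng.standard_normal(n)

# iteration matrix A_omega^{-1} A_omega^TF; convergence requires rho < 1, cf. (2.34)
M = np.linalg.solve(A_w, A_tf)
rho = np.max(np.abs(np.linalg.eigvals(M)))
assert rho < 1

x = np.zeros(n)
for _ in range(200):
    # one sweep of (2.32): x <- A_omega^{-1} (A_omega^TF x + b)
    x = np.linalg.solve(A_w, A_tf @ x + b)
print(np.max(np.abs(A @ x - b)))              # residual, essentially zero
```

The error contracts by roughly a factor ρ per sweep, which is exactly why a large rotation term, which inflates ρ above 1 in the real problem, makes this method fail.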

However, for large Ω the spectral radius becomes too large and the method fails to converge. In figure 8 we can see how the number of iterations grows as the angular velocity increases. For values above Ω = 0.9 the method fails to converge.

Figure 8: The number of fixed-point iterations needed for the first step in imaginary time, as the angular velocity increases. In this plot we had Δt = 0.01, ε = 10^{-12} and β = 2000.

2.5 The Conjugate Gradient method

Since we want to be able to model a Bose-Einstein condensate even at large rotations, we need a method other than the fixed-point method. A better alternative is, for example, the Conjugate Gradient (CG) method. This algorithm is too involved to derive in this paper and is thus not as intuitive as the fixed-point iteration, but a derivation of CG and the underlying methods can be found in [15]. The CG algorithm for solving a system Ax = b is described in Algorithm 1, where x_0 is the initial guess.

Algorithm 1: The Conjugate Gradient method

r_0 := b − A x_0
p_0 := r_0
for j = 0, 1, ... until r_j is small enough do
    α_j := ⟨r_j, r_j⟩ / ⟨A p_j, p_j⟩
    x_{j+1} := x_j + α_j p_j
    r_{j+1} := r_j − α_j A p_j
    β_j := ⟨r_{j+1}, r_{j+1}⟩ / ⟨r_j, r_j⟩
    p_{j+1} := r_{j+1} + β_j p_j
end for
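Algorithm 1 translates almost line-for-line into code. Below is a Python/NumPy sketch (the thesis itself calls MATLAB's built-in pcg rather than hand-coding CG), tested on a small real symmetric positive definite system, whereas the thesis problem is complex-valued and would use the Hermitian inner product.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=500):
    # direct transcription of Algorithm 1
    x = x0.copy()
    r = b - A @ x                          # r_0 := b - A x_0
    p = r.copy()                           # p_0 := r_0
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:        # "until r_j is small enough"
            break
        Ap = A @ p
        alpha = (r @ r) / (Ap @ p)         # alpha_j
        x = x + alpha * p                  # x_{j+1}
        r_new = r - alpha * Ap             # r_{j+1}
        beta = (r_new @ r_new) / (r @ r)   # beta_j
        p = r_new + beta * p               # p_{j+1}
        r = r_new
    return x

rng = np.random.default_rng(1)
Q = rng.standard_normal((30, 30))
A = Q @ Q.T + 30 * np.eye(30)     # symmetric positive definite test matrix
b = rng.standard_normal(30)
x = conjugate_gradient(A, b, np.zeros(30))
print(np.max(np.abs(A @ x - b)))  # residual, essentially zero
```

Note that A only ever appears through the product A @ p, so in the actual solver A can be the matrix-free FFT-based operator of section 2.3 instead of an explicit matrix.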

To make this method more stable one may precondition the system before applying the CG method, giving us the Preconditioned Conjugate Gradient (PCG) method. The particular preconditioner we use is called the Thomas-Fermi preconditioner and only uses the potential terms; it is described in [1]. The Thomas-Fermi preconditioner is defined as

    P_n^{TF} = \left( \frac{I}{\Delta t} + 2V + 2\beta|\phi^n|^2 \right)^{-1}    (2.36)

and is easy to apply, since it does not require the application of any Fourier transforms. Another common preconditioner, mentioned in [9], is the kinetic energy preconditioner, which only uses the kinetic term.

This preconditioner is defined as

    P_\Delta = (\alpha_\Delta - \Delta)^{-1},    (2.37)

and for α_Δ = 1 it is equivalent to using the Sobolev gradient defined in (2.6). Since the kinetic energy preconditioner requires more Fourier transforms to apply, we will use the Thomas-Fermi preconditioner because of its simplicity. By defining

    A_{\Delta,\Omega} = -\Delta - 2\Omega L_z    (2.38)

we can transform the system (2.26) as follows:

    A\phi^{n+1} = b    (2.39)

    \left( \frac{I}{\Delta t} - \Delta + 2V + 2\beta|\phi^n|^2 - 2\Omega L_z \right) \phi^{n+1} = b    (2.40)

    \left( \left[ \frac{I}{\Delta t} + 2V + 2\beta|\phi^n|^2 \right] + \left[ -\Delta - 2\Omega L_z \right] \right) \phi^{n+1} = b    (2.41)

    \left( [P_n^{TF}]^{-1} + A_{\Delta,\Omega} \right) \phi^{n+1} = b    (2.42)

    \left( I + P_n^{TF} A_{\Delta,\Omega} \right) \phi^{n+1} = P_n^{TF} b    (2.43)

    \breve{A}\phi^{n+1} = \breve{b},    (2.44)

where we define Ă = I + P_n^{TF} A_{Δ,Ω} and b̆ = P_n^{TF} b. Each time step in the PCG method therefore consists of applying the CG algorithm to the system Ă φ^{n+1} = b̆ until convergence, and then normalizing the solution before moving on to the next time step. As with the fixed-point iteration, convergence is decided by some stopping criterion

    \max_{(x_j, y_k, z_l)} \left| \breve{b}(x_j, y_k, z_l) - \breve{A}\tilde{\phi}^{(k)}(x_j, y_k, z_l) \right| \le \varepsilon.    (2.45)

When implementing the algorithm in MATLAB we use the built-in pcg function. There are more iterative methods one can use instead of PCG, for example the BiConjugate Gradient Stabilized method (BiCGStab) or the Generalized Minimal Residual method (GMRES). In any case, one might want to precondition the system differently for those, but both of these methods are used in [1]. In figure 9 we see a comparison between PCG and BiCGStab, motivating why PCG should be a good choice since it is faster in some cases.

Figure 9: Computation time to converge to the ground state for different angular velocities and different solvers (PCG vs. BiCGStab), for β = 2000 and a 2⁶ × 2⁶ grid. We also have Δt = 0.1 and ε = 10^{-5}.

Note that the computation time in figure 9 increases drastically after Ω = 0.7. This could be partly because of our starting guess and partly because of the preconditioner we use: both the Thomas-Fermi approximation and the Thomas-Fermi preconditioner should be effective when the non-linearity is small, which might not be the case for larger rotations.
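The time-step solve described above can be sketched in a simplified setting. The following Python stand-in solves one semi-implicit Euler step in 1D without rotation (Ω = 0), with SciPy's BiCGStab in place of MATLAB's pcg; the grid size, β, and Δt are illustrative choices, and the right-hand side b = φⁿ/Δt follows the scheme (2.40):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab

J, L = 128, 16.0
h = 2 * L / J
x = -L + h * np.arange(J)
k = 2 * np.pi * np.fft.fftfreq(J, d=h)             # spectral wavenumbers

beta, dt = 500.0, 0.1
V = 0.5 * x**2                                     # harmonic potential
phi_n = np.exp(-(x**2) / 2)
phi_n /= np.sqrt(np.sum(np.abs(phi_n) ** 2) * h)   # normalize the state

# Diagonal Thomas-Fermi preconditioner P_TF^n = (1/dt + 2V + 2*beta*|phi^n|^2)^(-1)
P_tf = 1.0 / (1.0 / dt + 2 * V + 2 * beta * np.abs(phi_n) ** 2)

def matvec(phi):
    """Apply A_breve = I + P_TF*(-Laplacian); -Laplacian is k^2 in Fourier space."""
    return phi + P_tf * np.fft.ifft(k**2 * np.fft.fft(phi))

A_breve = LinearOperator((J, J), matvec=matvec, dtype=complex)
b_breve = P_tf * (phi_n / dt)                      # preconditioned right-hand side

phi_next, info = bicgstab(A_breve, b_breve.astype(complex))   # info == 0 on success
phi_next /= np.sqrt(np.sum(np.abs(phi_next) ** 2) * h)        # renormalize after solve
```

The matrix-free `LinearOperator` mirrors how the spectral scheme never forms $\breve{A}$ explicitly: each matrix-vector product is one forward and one inverse FFT plus pointwise operations.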

It is also clear that all of these methods are more efficient than calculating an inverse explicitly. For figure 10 we created a random symmetric positive definite matrix A and a random vector b in MATLAB and plotted the computation time for solving the equation Ax = b with the built-in MATLAB function inv (calculating an explicit inverse) versus the iterative solver PCG. We can see that MATLAB's inv takes a long time to compute as N grows, while the iterative method does not increase that much in comparison.


[Figure 10 plot: computation time (seconds) vs. matrix size N for MATLAB inv and PCG with residual tolerance 10⁻¹⁰.]

Figure 10: Computation time to solve a system of equations for different matrix sizes with the built-in MATLAB function inv vs the iterative solver PCG, on an Intel Core i5-1035G4.

2.6 Approximate Solutions

2.6.1 The Thomas-Fermi Approximation

In order for our implementation to converge to a ground state we need a good enough starting guess. In our implementation we will mostly use the Thomas- Fermi approximation. The Thomas-Fermi approximation can be obtained by neglecting the kinetic energy and rotation in (1.11) which gives

$$\mu\phi(\mathbf{x}) = \left(V(\mathbf{x}) + \beta|\phi(\mathbf{x})|^2\right)\phi(\mathbf{x}), \tag{2.46}$$

which we can solve explicitly for $|\phi|^2$:

$$|\phi(\mathbf{x})|^2 = \frac{\mu - V(\mathbf{x})}{\beta}. \tag{2.47}$$

This lets us define the Thomas-Fermi approximation

$$\phi_{TF}(\mathbf{x}) = \begin{cases} \sqrt{(\mu - V(\mathbf{x}))/\beta}, & \mu \ge V(\mathbf{x}), \\ 0, & \mu < V(\mathbf{x}). \end{cases} \tag{2.48}$$

Here $\mu$ depends on $\beta$ and on which potential $V$ is used, but can be calculated explicitly for chosen parameters by enforcing that

$$\int |\phi_{TF}|^2 = 1. \tag{2.49}$$

In the usual harmonic case we use

$$\mu = \mu_{TF}^{\beta} = \frac{1}{2}\left(4\beta\gamma_x\gamma_y\gamma_z/\pi\right)^{1/2} \tag{2.50}$$

with $\gamma_x = \gamma_y = \gamma_z = 1$.
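Constructing this starting guess is a few lines of array code. The sketch below builds the Thomas-Fermi approximation (2.48) on a 2D grid with the harmonic potential V = (x² + y²)/2 and μ from (2.50) with γ = 1; the grid parameters are illustrative:

```python
import numpy as np

J, a = 256, 16.0
h = 2 * a / J
x = -a + h * np.arange(J)
X, Y = np.meshgrid(x, x, indexing="ij")

beta = 2000.0
V = 0.5 * (X**2 + Y**2)
mu = 0.5 * np.sqrt(4 * beta / np.pi)               # (2.50) with all gammas = 1

phi_tf = np.sqrt(np.maximum(mu - V, 0.0) / beta)   # zero where mu < V, as in (2.48)
phi_tf /= np.sqrt(np.sum(phi_tf**2) * h**2)        # enforce the normalization (2.49)
```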

2.6.2 Additional tricks

To quicken computations one may employ a few different tricks. If we want to find the ground state for a particular parameter setup, we can start with the Thomas-Fermi approximation on a coarser grid than the one we want. After this converges, we interpolate the solution to the finer grid and use the interpolation as the starting guess there. The interpolation can be done in a few different ways; we use the MATLAB function interp2 with the 'spline' option, so the interpolation is done with piecewise cubic polynomials.
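A Python stand-in for the coarse-to-fine step, with SciPy's bicubic spline in place of interp2(..., 'spline'); the Gaussian is a placeholder for a converged coarse-grid state:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

a = 16.0
Jc, Jf = 64, 128                          # coarse (2^6) and fine (2^7) grids
xc = np.linspace(-a, a, Jc)
xf = np.linspace(-a, a, Jf)

Xc, Yc = np.meshgrid(xc, xc, indexing="ij")
phi_coarse = np.exp(-(Xc**2 + Yc**2) / 4)          # placeholder converged state

spline = RectBivariateSpline(xc, xc, phi_coarse)   # bicubic spline by default
phi_interp = spline(xf, xf)                        # evaluate on the fine tensor grid

hf = xf[1] - xf[0]
phi_fine = phi_interp / np.sqrt(np.sum(phi_interp**2) * hf**2)  # renormalized guess
```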

[Figure 11, bottom-right plot: time to converge (seconds) vs. Ω for β = 2000, comparing a 2⁶ × 2⁶ and a 2⁷ × 2⁷ grid.]

Figure 11: Top left is the converged ground state for β = 2000 and Ω = 0.4. Top right is the converged ground state interpolated to a higher grid definition using splines. Bottom left is the converged ground state interpolated to a higher grid definition and then converged again. Bottom right shows the computation time for different angular velocities and different grid sizes on an Intel Core i5-1035G4. We also had that ε = 10⁻⁵ and Δt = 1.

(23)

In figure 11 we can see that the ground state interpolated from the coarser grid (top left plot) to the finer grid (top right plot) is very close to the converged ground state of the finer grid (bottom left plot). We can also see that converging on the 2⁶ × 2⁶ grid takes significantly less time than on the 2⁷ × 2⁷ grid.

When investigating how the wave function depends on a particular param- eter, say Ω or β, we often want to see how the ground state changes when the parameter varies. If we for example change the parameter Ω from 0 to 1 with small steps of 0.01, we can use the converged state for the last Ω value when cal- culating the next one. For example, we can use the converged state for Ω = 0.34 as our starting guess for computing the converged state for Ω = 0.35.
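The warm-started parameter sweep can be captured in a small skeleton. Here `toy_converge` is a hypothetical stand-in for the actual ground-state solver, used only to show the mechanics of reusing the previous converged state:

```python
import numpy as np

def continuation_sweep(omegas, phi0, converge):
    """Sweep over parameter values, warm-starting each solve from the last."""
    phi = phi0
    states = []
    for omega in omegas:
        phi = converge(omega, phi)    # warm start from the previous solution
        states.append(phi)
    return states

x = np.linspace(-1.0, 1.0, 101)

def toy_converge(omega, phi, steps=200):
    """Toy solver: damped fixed-point iteration toward an Omega-dependent profile."""
    target = np.exp(-(x**2) / (0.5 + omega))
    for _ in range(steps):
        phi = 0.7 * phi + 0.3 * target
    return phi

omegas = [0.0, 0.02, 0.04, 0.06, 0.08]
states = continuation_sweep(omegas, np.ones_like(x), toy_converge)
```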

Using an interpolation in this way and utilizing previously converged states should reduce the total time needed to find the ground state, although it might cause unexpected negative effects. Something to remember is that functions in MATLAB like FFT and PCG run on the computer's CPU and can therefore take a long time to compute for large matrices. By using functions that utilize parallelization, for example the MATLAB function gpuArray that directly uses the GPU, one can reduce the total runtime significantly.

Since the solution in the 3-dimensional case is a 3-dimensional array, it is not completely trivial to show the result. A helpful function from the MATLAB File Exchange was Viewer3d by Dirk-Jan Kroon, which can be accessed at [16]. Viewer3d is a program originally built for magnetic resonance imaging and computed tomography, but since it can show the density of a given 3D matrix it is usable when plotting |φ|² in 3D.

3 Results

3.1 Results in two dimensions

In figure 12 and figure 13 we plot the converged ground state for a rotating Bose-Einstein condensate in 2D. The plots show |φ(x, y)|² with a_x = a_y = 16, J = K = 2⁸, and with β and Ω as described in the particular plots. The potential used is V(x, y) = (x² + y²)/2. The convergence has been accomplished with PCG, and the starting guess was the Thomas-Fermi approximation.


Figure 12: Converged ground state with harmonic potential for different angular velocities.

Figure 13: Converged ground state with harmonic potential for different angular velocities, for ε = 10⁻⁶ and Δt = 0.01.

In figure 14 we found the ground state for β = 2000 and Ω ∈ [0, 0.9] with steps in Ω of 0.002. The potential is harmonic, and the starting guess was the Thomas-Fermi approximation for Ω = 0 and then the previously converged state for Ω > 0. For example, the converged state for Ω = 0.344 was used as the starting guess for Ω = 0.346. By doing this we can see how the number of vortices depends on the angular velocity Ω. Note the discrepancy at Ω = 0.4: in figure 12 there exist vortices, while in figure 14 there are none. The starting guess in the first case was the Thomas-Fermi approximation, while the starting guess in the second case was the converged state for Ω = 0.398.


[Figure 14 plot: number of vortices vs. Ω ∈ [0, 0.9] for β = 2000.]

Figure 14: The number of vortices as a function of Ω with β = 2000 and a harmonic potential. Here Δt = 0.01 and ε = 10⁻⁴ were used.

In figure 15 we have done a quadratic fit and an exponential fit on the values of Ω for which there exist vortices; the Ω values before the jump at about Ω ≈ 0.46 were thus not included. The reason we do a quadratic fit is that the number of vortices should be symmetric when mirrored across the y-axis: a negative Ω only means rotating in the other direction, which should not change the behavior of the vortices. A quadratic fit that models the vortices well should therefore have its minimum close to Ω = 0.


[Figure 15 plots: number of vortices vs. Ω, with an exponential fit (top) and a second-order polynomial fit (bottom) to the data.]

Figure 15: An exponential fit and a polynomial fit modeling how the number of vortices depends on Ω.

The exponential fit in figure 15 has the form $a e^{bx}$. The coefficients were calculated to be:

a = 0.8179 with 95% confidence interval (0.7912, 0.8446),
b = 5.035 with 95% confidence interval (4.995, 5.075).

The polynomial fit in figure 15 has the form $p_1 x^2 + p_2 x + p_3$. The coefficients were calculated to be:

p₁ = 403.4 with 95% confidence interval (392.3, 414.6),
p₂ = −410.9 with 95% confidence interval (−426.2, −395.6),
p₃ = 117.2 with 95% confidence interval (112, 122.3).

The fitting is done with the built-in MATLAB function fit, which additionally gives us the following parameters that can be used to analyze the fits: sse is the Sum of Squares Due to Error, rsquare is the R²-value, dfe is the degrees of freedom of the system, adjrsquare is the adjusted R²-value accounting for degrees of freedom, and rmse is the Root Mean Square Error. Definitions and general information for these parameters can be found in [17].

Interpolation Type     sse         rsquare   dfe   adjrsquare   rmse
Exponential            749.4371    0.9949    428   0.9949       1.3233
2nd Order Polynomial   1.1169e+03  0.9925    427   0.9924       1.6173

Figure 16: Parameters for the exponential and polynomial fits.
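This kind of fit is easy to reproduce outside MATLAB. The following sketch uses SciPy's curve_fit in place of fit, applied to synthetic vortex counts generated from the reported model a·e^{bΩ} with a = 0.8179 and b = 5.035 plus noise (the data here are simulated, not the thesis's data):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
omega = np.linspace(0.46, 0.9, 200)
counts = 0.8179 * np.exp(5.035 * omega) + rng.normal(0.0, 1.0, omega.size)

def exp_model(om, a, b):
    """Exponential model a * exp(b * Omega) for the vortex count."""
    return a * np.exp(b * om)

(a_fit, b_fit), cov = curve_fit(exp_model, omega, counts, p0=(1.0, 5.0))
stderr = np.sqrt(np.diag(cov))   # roughly +/- 2*stderr gives a 95% interval
```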


3.2 Results in three dimensions

In figures 17 through 26 we plot the converged ground state for a rotating Bose-Einstein condensate in 3D with Δt = 1 and ε = 10⁻⁴. The plots show |φ(x, y, z)|² with a_x = a_y = a_z = 16, J = K = L = 2⁸, and with β and Ω as described in the particular plots. The potential used is V(x, y, z) = (x² + y² + z²)/2. The convergence has been accomplished with PCG.

Figure 17: Left figure front view. Middle figure slice in z-plane. Right figure side view. Converged ground state for β = 2000, Ω = 0.0.

Figure 18: Left figure front view. Middle figure slice in z-plane. Right figure side view. Converged ground state for β = 2000, Ω = 0.4.


Figure 19: Left figure front view. Middle figure slice in z-plane. Right figure side view. Converged ground state for β = 2000, Ω = 0.5.

Figure 20: Left figure front view. Middle figure slice in z-plane. Right figure side view. Converged ground state for β = 2000, Ω = 0.6.

Figure 21: Left figure front view. Middle figure slice in z-plane. Right figure side view. Converged ground state for β = 2000, Ω = 0.7.


Figure 22: Left figure front view. Middle figure slice in z-plane. Right figure side view. Converged ground state for β = 2000, Ω = 0.8.

Figure 23: Left figure front view. Middle figure slice in z-plane. Right figure side view. Converged ground state for β = 2000, Ω = 0.9.

Figure 24: Left figure front view. Middle figure slice in z-plane. Right figure side view. Converged ground state for β = 2000, Ω = 0.95.


Figure 25: Left figure front view. Middle figure slice in z-plane. Right figure side view. Converged ground state for β = 2000, Ω = 0.95.

Figure 26: Left figure front view. Middle figure slice in z-plane. Right figure side view. Converged ground state for β = 2000, Ω = 0.98.

4 Discussion

4.1 Vortices

The results in the last chapter give us some interesting information regarding how the vortices vary in the condensate. In particular, we can see that in both the 2D and the 3D case, for Ω ∈ [0, 1), the number of vortices starts at 0 when Ω = 0 and increases as Ω increases. The graph in figure 14 and the fits in figure 15 in particular give us some more information about this behavior. We see that the exponential fit seems to model the behavior as Ω → 0 better than the quadratic fit. We also see that the exponential fit has better fitting parameters: both the R²-value and the adjusted R²-value are higher, which means that the exponential fit accounts for a greater proportion of the variance in the data. We also see that both the SSE and the RMSE are lower for the exponential fit, which means this fit has a smaller random error component and is therefore more useful for predicting the data than the quadratic fit [17].

However, it might be unreasonable for the number of vortices to increase indefinitely, as suggested by the exponential fit. A Gaussian curve might be more correct, increasing until a certain Ω, after which it slowly falls to zero again. Another reasonable interpretation is that for a high enough angular velocity, the Bose-Einstein condensate will break apart. In our testing, we repeatedly saw that modelling a condensate at Ω = 1 or slightly higher was difficult, which might be because Ω = 1 coincides with the trapping frequency ω_m due to the normalization. This suggests that the condensate breaks apart at Ω = 1, resulting in no vortices for Ω ≥ 1. Either way, having more data points for larger Ω would be of interest in determining the behavior of the vortices better, as would doing the same fit for different values of β.

There are in any case some error sources to consider. For example, the gap where the number of vortices jumps from 0 to 12 at Ω ≈ 0.46 seems unphysical. We also note that we get a different converged state in figure 12 for Ω = 0.4 than in figure 14. The cause of this might be that the use of close solutions as starting approximations (for example using the converged state for Ω = 0.345 as the starting approximation for Ω = 0.346) causes the simulation to converge to an excited state instead of the ground state. An excited state might not have any vortices even if the ground state does. It might however simply be that the tolerance ε used during the calculations is too high, causing the algorithm to stop before φ has come close to the ground state.

4.2 Future studies

There are a lot of interesting directions one may take when modelling Bose-Einstein condensates in the future. One major issue that arose during the simulations done in this article is the amount of computation time needed for convergence, and therefore the time needed to get results to analyze. One way to address this would be to make the methods used more efficient or to use better methods; another is to run the simulations on more powerful computers (or supercomputers). In that way, one could study Bose-Einstein condensates for different parameters more effectively. In particular, modelling the dependence on the parameter β and on different trapping potentials V was not done in this article, partly because of the time-consuming simulations. In this report we only computed the ground state with one component, but as shown in [1] it is also possible to compute multi-component Bose-Einstein condensates with coupled nonlinearities, which could show interference and other interesting results.

As always when producing numerical results in this fashion, it would be of interest to compare them to the experimental behavior of a Bose-Einstein con- densate. Any similarities and differences between the experimental observations and the numerical results can be studied to gain further understanding of the subject.

When considering different numerical methods altogether, there are a lot of interesting things to study. In [18] there are some suggestions for utilizing different norms when defining the gradient used during the gradient descent. Another natural change would be utilizing another scheme than the semi-implicit Euler used in this report; it might for example be interesting to see in which way an implicit or explicit Runge-Kutta scheme could be used.

As for the spatial discretization, and in particular the approximation of spatial derivatives, the spectral methods used in this paper are both efficient and robust, as shown in [1]. There are however many different methods one could use instead, for example a finite difference scheme as used in [14], or a finite element scheme as used in [10]. In these papers rotation is not yet implemented, which would be a reasonable direction to work in.

After implementing the gradient descent and the spatial discretization, one needs some algorithm to solve the resulting equation, since doing explicit inversions is very inefficient. In this paper we mostly used fixed-point iteration and the Conjugate Gradient method for producing results, but there are lots of other iterative methods. Both the BiConjugate Gradient Stabilized method (BiCGStab) and the Generalized Minimal Residual (GMRES) method were mentioned earlier and are relatively quick to implement with the spectral methods in this paper, as seen in [1]. These methods are some of the many Krylov subspace methods, many of which are derived in [15], and many of which should be usable in these types of problems. Finally, using a different preconditioner, such as the Laplace (∆) preconditioner introduced in [1], can be of interest.

References

1. Antoine, X. & Duboscq, R. Robust and Efficient Preconditioned Krylov Spectral Solvers for Computing the Ground States of Fast Rotating and Strongly Interacting Bose-Einstein Condensates. Journal of Computational Physics 258, 509–523 (2014).

2. Griffiths, D. J. Introduction to Quantum Mechanics, 2nd ed. (Prentice Hall, 2004).

3. Wikipedia contributors. Boson — Wikipedia, The Free Encyclopedia. 2020. https://en.wikipedia.org/w/index.php?title=Boson&oldid=954663207 [accessed 11-May-2020].

4. Wikipedia contributors. Fermion — Wikipedia, The Free Encyclopedia. 2020. https://en.wikipedia.org/w/index.php?title=Fermion&oldid=947488390 [accessed 11-May-2020].

5. Wikipedia contributors. Bose–Einstein condensate — Wikipedia, The Free Encyclopedia. 2020. https://en.wikipedia.org/w/index.php?title=Bose%E2%80%93Einstein_condensate&oldid=953270353 [accessed 5-May-2020].

6. Anderson, M. H., Ensher, J. R., Matthews, M. R., Wieman, C. E. & Cornell, E. A. Observation of Bose-Einstein Condensation in a Dilute Atomic Vapor. Science 269, 198–201 (1995). https://science.sciencemag.org/content/269/5221/198.

7. Abo-Shaeer, J., Raman, C., Vogels, J. & Ketterle, W. Observation of Vortex Lattices in Bose-Einstein Condensates. Science 292, 476–479 (2001).

8. Bao, W., Wang, H. & Markowich, P. Ground, Symmetric and Central Vortex States in Rotating Bose-Einstein Condensates. Communications in Mathematical Sciences 3 (2005).

9. Antoine, X., Levitt, A. & Tang, Q. Efficient spectral computation of the stationary states of rotating Bose–Einstein condensates by preconditioned nonlinear conjugate gradient methods. Journal of Computational Physics 343, 92–109 (2017). http://dx.doi.org/10.1016/j.jcp.2017.04.040.

10. Henning, P. & Peterseim, D. Sobolev gradient flow for the Gross-Pitaevskii eigenvalue problem: global convergence and computational efficiency. arXiv: Numerical Analysis (2018).

11. Neuberger, J. W. Sobolev Gradients and Differential Equations (Springer, 2010).

12. Danaila, I. & Kazemi, P. A New Sobolev Gradient Method for Direct Minimization of the Gross–Pitaevskii Energy with Rotation. SIAM Journal on Scientific Computing 32 (2009).

13. Trefethen, L. N. Spectral Methods in MATLAB (Society for Industrial and Applied Mathematics, 2000). https://epubs.siam.org/doi/abs/10.1137/1.9780898719598.

14. Bao, W., Chern, I.-L. & Lim, F. Y. Efficient and spectrally accurate numerical methods for computing ground and first excited states in Bose–Einstein condensates. Journal of Computational Physics 219, 836–854 (2006). http://www.sciencedirect.com/science/article/pii/S002199910600218X.

15. Saad, Y. Iterative Methods for Sparse Linear Systems, 2nd ed. (Society for Industrial and Applied Mathematics, 2003). https://epubs.siam.org/doi/abs/10.1137/1.9780898718003.

16. Kroon, D.-J. Viewer3d. MATLAB Central File Exchange. 2020. https://www.mathworks.com/matlabcentral/fileexchange/21993-viewer3d [retrieved 7-May-2020].

17. Evaluating Goodness of Fit - MATLAB & Simulink. http://www.mathworks.com/help/curvefit/evaluating-goodness-of-fit.html#bq_5kwr-3 [accessed 13-May-2020].

18. Danaila, I. & Kazemi, P. A New Sobolev Gradient Method for Direct Minimization of the Gross–Pitaevskii Energy with Rotation. SIAM Journal on Scientific Computing 32, 2447–2467 (2010).
