### Master Thesis

### Melting of Striped Phases on the 2D

### Square Lattice

### David Aceituno

Statistical Physics, Department of Theoretical Physics, School of Engineering Sciences

Royal Institute of Technology, SE-106 91 Stockholm, Sweden

Stockholm, Sweden 2016

Typeset in LaTeX

Akademisk avhandling för avläggande av teknologie masterexamen inom ämnesområdet teoretisk fysik.

Scientific thesis for the degree of Master of Engineering in the subject area of Theoretical Physics.

TRITA-FYS-2016:16 ISSN 0280-316X

ISRN KTH/FYS--16:16--SE

© David Aceituno, May 2016

### Abstract

The interplay of forces among particles is central to the phenomenon of spontaneous pattern formation. Particles may then self-organize into stripe-like structures, as seen for instance in ferromagnetic films, monolayer adsorbates and vortex arrays in superconductors. In this thesis project spontaneous pattern formation is studied in 2D square lattice models using Monte Carlo simulations. Particular attention is given to the nematic-isotropic phase transition where, at a critical temperature, a striped phase melts into a disordered one. Recently Jin et al. showed that the nematic-isotropic phase transition of the J1-J2 Ising spin model has Ashkin-Teller-like criticality, meaning that critical exponents vary with the ratio of couplings J2/J1: beginning with first-order phase transitions at J2/J1 = 0.5, there is a switch to weakly universal continuous transitions at J2/J1 ≈ 0.67, where four-state Potts critical exponents start changing towards those of the 2D Ising model in the decoupling limit J2/J1 → ∞ [1]. Here these results are reproduced and corroborated. To expand on these results, twelve Lattice Gas models are studied with different combinations of short-ranged two- and three-body forces, resulting in as many phase diagrams. In these, two nematic phases are found and their phase transition is studied in light of the J1-J2 model. The Stripe phase, with vertical (or horizontal) chains of particles, is found to exhibit largely the same critical behavior as the stripes of the J1-J2 model, with a few notable exceptions. Conversely, the so-called Stair phase, with diagonal chains, is found to be of first order.

Key words: J1-J2 Ising spin model, Lattice gas, stripe, super-antiferromagnetic, phase transition, weak universality, Monte Carlo, Metropolis, Wang-Landau.

### Sammanfattning

Samspelet av krafter mellan partiklar har stor betydelse för spontan mönsterbildning. Partiklar kan då på ett självorganiserat sätt forma randiga strukturer, som ses till exempel på ferromagnetiska tunnfilmer, monolager av adsorbat och på virvelstrukturer i supraledare. I det här arbetet studeras spontan mönsterbildning på kvadratiska 2D-gittermodeller med hjälp av Monte Carlo-simuleringar. Särskild uppmärksamhet ägnas åt den randiga-till-isotropa fasövergången, där en randig fas smälter till en oordnad vid en kritisk temperatur. Nyligen har Jin m.fl. visat hur den randiga-till-isotropa fasövergången i J1-J2-modellen har Ashkin-Teller-liknande kritiskt beteende [1] med kritiska exponenter som varierar med kvoten J2/J1; med början på första ordningens fasövergångar vid J2/J1 = 1/2 ändras det till svagt universella kontinuerliga övergångar vid J2/J1 ≈ 2/3, där fyrtillstånds-Pottsmodellens kritiska exponenter börjar röra sig mot 2D-Isingmodellens i gränsen J2/J1 → ∞. Här upprepas och bekräftas dessa resultat. För att utöka resultaten studeras tolv gittergasmodeller med olika kombinationer av två- och trekroppskrafter på korta avstånd, vilket resulterar i lika många fasdiagram. I dessa hittas två randiga faser, och deras fasövergång studeras i ljuset av J1-J2-modellen. Randfasen, med vertikala (eller horisontella) kedjor av partiklar, uppvisar samma kritiska beteende som J1-J2-modellen, med några få men anmärkningsvärda undantag. Å andra sidan uppvisar Trappfasen, som har diagonala ränder, alltid första ordningens övergångar.

Nyckelord: J1-J2 Ising-modell, gittergas, randighet, superantiferromagnetisk, fasövergång, svag universalitet, Monte Carlo, Metropolis, Wang-Landau.

## Preface

This thesis is the result of my Master's degree project at the Department of Theoretical Physics, carried out from the summer of 2015 to the spring of 2016 under the supervision of Egor Babaev. The work concerns Monte Carlo simulations of various 2D Lattice Gas models and the characterization of their ground states and phase transitions.

The interest in this topic was sparked by the observation that three-body interactions give striped phases in various models. It became my project to find out the conditions for this to occur in square 2D lattice models, and to check whether there could be interesting physics at the phase transition. Answering these questions has taught me valuable lessons about the process of research. Being particularly interested in condensed matter physics and simulation, I have found the time spent working on this project to be nothing but a pleasurable challenge.

### Acknowledgments

I would like to thank my supervisor Egor Babaev for giving me the opportunity to work on this project, which has become my greatest field of interest. I'm very thankful to Karl Sellin, Daniel Weston and Julien Garaud in Egor's research group for their guidance and great after-lunch discussions on thermodynamics. Oskar Palm, with whom I've shared an office during this project, has my appreciation for the great advice and for patiently answering all my frequent questions on statistical mechanics. I am also thankful to Mikael Twengström for discussing and helping me with everything related to MPI and computing. I have received selfless help, insightful commentary and great company from fellow Master students Erik Holmgren, Fredrik Hansen, Jens Wirén and Luca Fresta. Erik and Fredrik deserve additional gratitude for thoroughly proof-reading this thesis. Johan Carlström and Hannes Meier managed to give me very useful advice in the short time we've shared, for which I'm also thankful. Many thanks to everybody on the corridor of the Department of Theoretical Physics for welcoming me these months and making it a very memorable experience.

Finally, I'd like to thank my family for their encouragement during all my years of education, and my fiancée Boel Morenius for her enormous support at all times, in practice and in spirit.

## Contents

Abstract
Sammanfattning
Preface
Contents
Acronyms

1 Introduction
1.1 Why Study Stripes?
1.2 An Overview of Phase Transitions
1.2.1 Order and Disorder
1.2.2 Types of Phase Transition
1.3 The Nematic to Isotropic Phase Transition
1.3.1 Origin in the Continuous Case
1.3.2 Discrete Case
1.4 This Thesis
1.4.1 Objective
1.4.2 Scope
1.4.3 Outline

2 Models
2.1 The J1-J2 Ising Spin Model
2.2 The Lattice Gas Model
2.2.1 The Lattice Gas Hamiltonian
2.3 Lattice Topology and Symmetries
2.3.1 Boundary Condition
2.3.2 Symmetries
2.3.3 Spontaneously Broken Symmetries
2.4 Order Parameters
2.4.1 Occupancy Order Parameter
2.4.2 Néel Order Parameter
2.4.3 Stripe Order Parameter
2.4.4 Net Order Parameter
2.5 Frustration due to Lattice Size

3 Simulation Methods
3.1 The Metropolis Algorithm
3.1.1 Markov Chain Monte Carlo
3.1.2 The Metropolis Algorithm Step by Step
3.1.3 Parallel Tempering
3.2 The Wang-Landau Algorithm
3.2.1 The WL Algorithm Step by Step
3.2.2 Convergence Criterion
3.2.3 The 1/t Algorithm
3.2.4 Two-Dimensional WL: Random Walk over Energy and Order Parameter Space
3.2.5 Parallelization of the WL Algorithm
3.2.6 Error Estimation

4 Analysis of Phase Transitions
4.1 Distinction of Phase Transitions
4.1.1 Finite-Size Effects
4.2 Canonical Distribution
4.3 Bulk Free Energy Barrier
4.4 Free Energy and Canonical Distribution as Functions of the Order Parameter
4.5 Finite-Size Scaling
4.5.1 Scaling of First-Order Phase Transitions
4.5.2 Scaling of Continuous Phase Transitions
4.5.3 The Binder Energy Cumulant V(L)
4.6 Universality Hypothesis for Continuous Transitions
4.6.1 Weak Universality

5 Results
5.1 The J1-J2 Ising Spin Model
5.2 Two-Body Interactions
5.2.1 Chemical Potential and Nearest Neighbors
5.2.2 Nearest and Next Nearest Neighbors
5.2.3 Nearest and Third Nearest Neighbors
5.2.4 Next Nearest and Third Nearest Neighbors
5.3 Three-Body Interactions
5.3.1 Corner and Straight Triplets
5.3.2 Chemical Potential and Triplets
5.4 Two- and Three-Body Interactions
5.4.2 Next- and Third-Nearest Neighbors vs. Corner Triplets
5.4.3 Nearest Neighbors and Straight Triplets
5.4.4 Next Nearest Neighbors and Straight Triplets
5.5 Table of Results

6 Conclusion
6.1 Summary
6.1.1 Ground State Phase Diagrams
6.1.2 The Discrete Nematic to Isotropic Phase Transition
6.1.3 Universality Class
6.1.4 Observation: Scaling of the Binder Energy Cumulant
6.2 Discussion
6.2.1 Evidence
6.2.2 Sources of Error
6.2.3 Suggestions for Further Inquiry

## Acronyms

DOS Density of States

MCMC Markov Chain Monte Carlo

MCS Monte Carlo Sweep

PBC Periodic Boundary Condition

PT Parallel Tempering

WL Wang-Landau

### Chapter 1

## Introduction

### 1.1

### Why Study Stripes?

A wide variety of natural processes give rise to self-organized patterns such as stripes [2]. In the context of condensed matter physics, the phenomenon is remarkable for two reasons. The first is that it is spontaneous, meaning that anisotropic order is induced by varying parameters such as the temperature or density, i.e., without resorting to explicit anisotropy. The second is that it is a collectively emergent behavior, meaning that microscopic constituent parts, such as particles or spins, conspire to exhibit long-range order on the mesoscopic scale, even though the particles themselves interact isotropically.

Stripes¹ have been observed experimentally in many two-dimensional condensed-matter systems, including ferromagnetic thin films [3], Langmuir monolayers [4], liquid crystal films [5], and vortex arrays in Type-1.5 superconductors [6–9].

The origin of stripes may differ substantially. Several theoretical models take a "bottom-up" approach to understanding these observations, where stripes emerging from simple rules mimic those found experimentally. The following three examples show this relation.

Example 1: Soft-Shoulder Potential

The radially symmetric soft-shoulder potential describes a hard core and a surrounding soft repulsive shell. The potential is defined for an interparticle separation $r_{ij} = |\vec{r}_i - \vec{r}_j|$ as

$$\varphi(r_{ij}) = \begin{cases} \infty & \text{if } r_{ij} < \sigma \\ \varepsilon & \text{if } \sigma < r_{ij} < \sigma_s \\ 0 & \text{if } r_{ij} > \sigma_s, \end{cases} \qquad (1.1)$$

¹ Also known as nematic states.


where ε is the shoulder height and σ and σ_s are the core and shoulder radii, respectively. In two-dimensional continuous systems a variety of striped patterns appear when varying the ratio σ_s/σ [10, 11]. This model may reasonably describe microscopic features of some colloidal systems in which dense-core particles have been dispersed in a solution together with polymers. The polymers stick to each particle to form a brush-like soft corona that effectively acts as a repulsive shell [10, 12].
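As an illustration, the piecewise potential of Eq. (1.1) can be written as a short function. This is a minimal sketch in Python; the parameter values are placeholders for illustration, not values taken from the cited studies.

```python
def soft_shoulder(r, sigma=1.0, sigma_s=2.5, eps=1.0):
    """Soft-shoulder pair potential of Eq. (1.1): an impenetrable core of
    radius sigma and a flat repulsive shoulder of height eps out to sigma_s."""
    if r < sigma:
        return float("inf")   # hard core: overlap is forbidden
    elif r < sigma_s:
        return eps            # soft shoulder: constant repulsion
    else:
        return 0.0            # no interaction beyond the shoulder
```

Varying the ratio sigma_s/sigma is what sweeps such a system through the different striped regimes reported in [10, 11].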

Example 2: Competing Short- and Long-Range Forces

Stripe formation is often attributed to the frustration of competing attractive and repulsive forces at different length scales. The microscopic origin of such frustration varies, and consequently there are models that implement this mechanism differently:

• Stripe domains discovered in thin ferromagnetic films of Fe/Cu(100) [13] are generally thought to occur because nearest neighbor exchange interactions, which promote spin alignment, compete with long-range dipolar anti-alignment. The simple model Hamiltonian

$$H = -J \sum_{\langle i,j \rangle} s_i s_j + A \sum_{(i,j)} \frac{s_i s_j}{r_{ij}^3}, \qquad (1.2)$$

largely captures this schematic and has been found to support striped phases in numerical analyses of two-dimensional lattices [14–17] and in the isomorphic lattice gas model [18].

• Stripes of vortices discovered in Type 1.5 superconductors based on MgB₂ [6, 7, 9] and Sr₂RuO₄ [8, 19, 20] are thought to be a consequence of having two superconducting components. Essentially, single-component superconductors are characterized by their coherence length ξ and magnetic penetration depth λ; depending on the ratio λ/ξ, vortex cores attract or repel, in what are known as Type I or Type II superconductors, respectively. Having two (or more) superconducting components a and b invites a situation where ξ_a < λ < ξ_b, with both attractive and repulsive forces between vortices at two different length scales [21]. This state has been dubbed Type 1.5 superconductivity [22–25]. Vortices will then repel due to short-range current-current interactions and attract at long range due to overlap of the "outer" vortex cores. Striped phases have been found in numerical studies of models that encapsulate such interactions [6, 26, 27].

A method for predicting stripe formation in spherical and Ising spin models with isotropic pair-potentials has been suggested by Edlund and Jacobi [28].
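The competition in Eq. (1.2) can be made concrete by a brute-force energy evaluation on a small lattice. This is a minimal sketch assuming an open boundary and a plain pairwise sum; the studies cited above use periodic boundaries and a careful treatment of the long-range dipolar sum.

```python
import itertools

import numpy as np

def dipolar_ising_energy(s, J=1.0, A=0.5):
    """Energy of Eq. (1.2): ferromagnetic nearest-neighbor exchange (-J)
    competing with long-range dipolar anti-alignment (+A / r^3).
    s is a 2D array of +-1 spins; boundaries are open for simplicity."""
    L = s.shape[0]
    sites = [(i, j) for i in range(L) for j in range(L)]
    E = 0.0
    for a, b in itertools.combinations(sites, 2):  # each pair counted once
        r2 = (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
        prod = s[a] * s[b]
        if r2 == 1:                  # nearest neighbors: exchange term
            E -= J * prod
        E += A * prod / r2 ** 1.5    # dipolar term acts between all pairs
    return E
```

For A = 0 this reduces to the plain Ising energy, while increasing A penalizes large aligned domains, which is the mechanism behind the stripes.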


Example 3: Non-Pairwise Potentials

Striped morphologies are predicted in some systems with non-pairwise potentials, meaning that three or more particles interact with forces that are not simply a superposition of two-body forces such as Coulomb forces. The existence of such potentials has been demonstrated in multi-component Type 1.5 superconductors, and corresponding numerical studies have shown that these support striped phases [21, 29–31].

### Stripes in This Context

This thesis focuses on stripes in discrete two-dimensional square lattice models. For almost a century such discrete “toy models” have been used to mimic real processes in matter. In particular the 2D Ising spin model and Lattice gas models have been used as simplified models of ferromagnetic thin films and monolayer adsorption on a substrate surface, respectively [32]. Monte Carlo simulations have proved useful in this endeavor, and in this project too Monte Carlo simulations facilitate the tasks of finding pattern formations and studying their phase transitions.

Ultimately, the purpose of studying simplified models is to provide an experimentally falsifiable prediction about the behavior of a corresponding real system. This permits a deeper understanding of the rules that govern real-world phenomena. Often a deeper understanding reveals interesting commonality between seemingly unrelated phenomena, e.g. the concept of universality classes². Even more intriguing are exceptions to established ideas, and in this sense the melting of striped phases on lattices has been shown to depart from the conventional theory of phase transitions under conditions that are still being studied.


### 1.2

### An Overview of Phase Transitions

As the temperature of matter is increased it acquires symmetries by melting. Matter is then said to undergo a phase transition, often at some critical temperature that separates two phase regimes. A thorough exposition of phase transitions and symmetries in condensed matter systems can be found in the book by Chaikin and Lubensky [33]. What follows is a brief summary of these concepts.

### 1.2.1

### Order and Disorder

The spatial arrangement of particles can vary greatly depending on external conditions, such as temperature or magnetic field, and internal ones, such as the potential energy between particles, which for a model is described by its Hamiltonian. For the models studied here the only variable external condition shall be the temperature.

At a sufficiently high temperature the potential energy between particles is negligible compared to thermal energy fluctuations. Then the probability of finding a particle at a certain point is independent of position and whether any other particles are nearby. A system exhibiting such spatially uncorrelated and isotropic structure has many symmetries, e.g. translational and rotational invariance, and is said to be in a disordered phase. In Lattice Gas models this means that sites on the lattice are independent and as likely to be filled as empty.

At low temperature the preferred arrangement is one that minimizes the energy defined by the Hamiltonian, the so-called ground state configuration. There are usually just a few ground state configurations³, which all exhibit spatial correlation that was not present at higher temperature. The system is then said to be in an ordered phase. This is also suggestive of a broken symmetry, i.e. a symmetry that was lost during the transition from higher temperature. For instance, in real systems this corresponds to liquids with continuous translational and rotational symmetry condensing to a periodic crystal at low temperature, invariant only with respect to a discrete set of translations. In Lattice Gas models this corresponds to sites becoming highly correlated, often showing obvious patterns such as stripes. The topic of symmetry continues in Chapter 2.3.

To tell low- and high-temperature phases apart one can use an order parameter. This is a thermodynamic function, related to some observable of the system, that takes a different value in each phase. For a ferromagnetic model the order parameter is simply the magnetization, but for other models it can be non-trivial to define an appropriate order parameter that truly captures the nuances of the phase transition. The guiding rule is that the order parameter should not be invariant under the broken symmetry of the phase it describes⁴.

³ In the absence of phenomena like frustration, lattice defects or boundary effects.


### 1.2.2

### Types of Phase Transition

At the critical temperature that separates the ordered and disordered regimes there is a phase transition, the study of which is a large subject of statistical mechanics and field theory. One property that distinguishes types of temperature-driven phase transitions is whether there is any latent heat, i.e. whether energy is absorbed (or released) at constant temperature during the phase transition.

Under the modern classification, a phase transition is said to be either a first-order phase transition if it has latent heat or a continuous phase transition if it does not. This classification is used in this report.

There are other features that characterize phase transitions, such as phase coexistence in first-order phase transitions and a diverging correlation length in continuous transitions. Phase coexistence is perhaps most familiar in water at the boiling point, where liquid and vapor coexist, while a diverging correlation length may be familiar from the ferromagnetic transition, where the size of magnetic domains diverges. In particular, the property of phase coexistence is exploited in this project to tell phase transitions apart.

It should be mentioned that there are systems with other types of phase transitions than the ones considered here, such as quantum phase transitions and topological phase transitions. These have a different microscopic origin, and the temperature can in general be replaced by some other parameter. For instance, in percolating systems one has the connection probability instead of the temperature.

The foundation of a phenomenological theory of phase transitions was introduced by Landau in 1937 [34]. There the concept of spontaneous symmetry breaking was equated with long-range order in many-particle systems, and a series expansion of the thermodynamic potential in terms of the order parameter was used as a phenomenological description of phase transitions. This is now known as Landau theory.
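Such an expansion can be sketched for a scalar order parameter m. This is a textbook illustration of the idea, not the specific potential analyzed later in this thesis:

```latex
% Landau expansion of the free energy in the order parameter m,
% keeping only the terms allowed by the symmetry m -> -m:
F(m, T) \approx F_0(T) + a(T)\, m^2 + b\, m^4, \qquad b > 0.
% With a(T) = a_0 (T - T_c) the minimum sits at m = 0 for T > T_c and at
% m = \pm\sqrt{-a(T)/(2b)} for T < T_c, describing a continuous transition.
```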

Ehrenfest Classification

Historically, a phase transition was said to be of first or second order depending on whether it had a discontinuity in a first- or second-order derivative of the free energy [35]. A discontinuous jump in entropy or volume was therefore characteristic of a first-order phase transition, and the size of the jump was associated with the amount of latent heat. An example is the solid-liquid transition of water. Meanwhile, a second-order phase transition had discontinuities in the specific heat, susceptibility or compressibility instead. Notable examples are the superconducting and superfluid transitions.

The Ehrenfest classification was found to be rather simplistic. For instance, it does not account for the divergences present in the specific heat at the ferromagnetic phase transition. Even so, because there is some overlap in definition, the term "second-order" is still frequently used in this field.


### 1.3

### The Nematic to Isotropic Phase Transition

### 1.3.1

### Origin in the Continuous Case

The word nematic comes from the Greek nema, meaning "thread". The topic of nematic phases is mostly found within the context of continuous media, for instance in liquid crystals or polymers [36], where rod-shaped molecules may enter an anisotropic "uniaxial" phase in which they all align. The preferred orientation of molecules at any point r⃗ is described by a unit vector on a d-sphere, the director n⃗(r⃗). The vectors n⃗ and −n⃗ are physically equivalent. In a nematic phase the director n⃗(r⃗) is constant everywhere, meaning that the whole system is symmetric under rotations by π.

It has long been known that the nematic-isotropic phase transition is of first order in the case of continuous media. L.D. Landau is often cited as the first to argue this on geometrical grounds in 1937 [34, 37]. In 1969, based on the Landau theory of phase transitions, P.G. de Gennes introduced a phenomenological description of the nematic-isotropic transition in liquid crystals [38], now known as Landau-de Gennes theory. In a classical paper from 1975, Brazovskii elaborated by showing that the nematic-isotropic transition is also "weakly" first order, meaning for instance that the latent heat is small. The theory has found applications in displays and televisions using liquid crystals, and later in the description of blue phases in liquid crystals [39]. See [40] for a comprehensive review.

More recently, an extension of the work of Brazovskii has shown that the nematic-isotropic transition in two dimensions falls into the well-known Kosterlitz-Thouless universality class [41].

### 1.3.2

### Discrete Case

It is not obvious what similarities one should find when comparing the nematic-isotropic phase transition of discrete lattice systems to that of continuous systems, since different symmetries are involved.

With the exception of dimer models, the discrete models usually consider point-like particles (or spins) at each lattice site, represented by a discrete set of values (zero and one, say), as opposed to rod-shaped molecules with real-valued positions and directions. Therefore, a nematic phase on a discrete system does not have particle alignment in the sense described above. Rather, the nematic phase has different spatial correlations in particle positions in each direction, creating "thread"-like structures with orientational order. Thus in the discrete case a nematic phase is symmetric under rotations by π just as in the continuous case. On the other hand, the symmetry of the isotropic phase is radically different in the discrete case compared to the continuous case.

In the scope of this thesis there will be two nematic phases: the Stripe⁵ phase, with vertical or horizontal chains of particles at unit separation, and the Stair phase, which is similar but oriented diagonally with respect to the lattice. Nematic phases on the lattice have been studied previously with short-range [32, 42, 43] and long-range interactions [14, 16, 44]. In particular, the so-called J1-J2 model of Ising spins has been shown to have a Stripe phase with both (weak) first-order and continuous phase transitions [1, 45–48]. See more in the discussion of the J1-J2 model in Section 2.1.

### 1.4

### This Thesis

### 1.4.1

### Objective

From its conception, the aim of this thesis project has been to study nematic phases on lattice models with three-body interactions. Over time the work has been divided into the following objectives:

• Find the ground states of 2D Lattice models with short-ranged two- and three-body interactions.

• Find the domains of striped phases (and possibly others).

• Determine the type of thermal phase transitions of nematic phases.

• Through finite-size scaling analysis, deduce the universality class of those phase transitions.

### 1.4.2

### Scope

The classical models studied here include the chemical potential as well as two- and three-body potential terms in the Hamiltonian. The models are all short-ranged, meaning that all potential energy contributions come from particles in the immediate vicinity of each other. This allows a meaningful study to be done on small systems, on the order of 500–5000 spins (or particles), with local update algorithms that run reasonably fast on a 48-core computing cluster. The particles are confined to a two-dimensional square lattice with periodic boundary conditions. Finally, only a very small subset of ratios of coupling constants receives a finite-size scaling analysis, because of time constraints.

### 1.4.3

### Outline

This thesis is divided into six chapters, which can be summarized as follows:

• Chapter 1 (Introduction) Sets this project into context and introduces basic concepts necessary to understand this work in the broadest sense.

• Chapter 2 (Models) Explains the particle models that will be studied by defining their Hamiltonians and the properties of the lattice they reside in.

• Chapter 3 (Simulation Methods) Goes on to describe the two types of Monte Carlo simulation that underpin the computer experiments of this project.

• Chapter 4 (Analysis of Phase Transitions) Explains how to draw conclusions about the phase transitions from simulation data.

• Chapter 5 (Results) Presents the experimental results in the shape of phase diagrams, tables of critical exponents, graphs, and finite-size scaling analysis for a number of choices of the coupling constants.

• Chapter 6 (Conclusion) Further discusses the findings, attempts to reach conclusions based on the results, and relates them to a more general context while considering sources of error.

### Chapter 2

## Models

This chapter begins by presenting and motivating the inclusion of the J1-J2 model, which is a simple extension of the Ising model, the "fruit fly" of Statistical Mechanics. Section 2.2 presents the Lattice Gas model, describes the various two- and three-body terms that have been added to its Hamiltonian, and outlines their physical interpretation. Section 2.3 discusses the choice of lattice, its properties and how it affects the models. Section 2.4 introduces the order parameters that are used to measure and classify the different ground state phases of these models. Section 2.5 considers frustration of periodic ground states and further motivates the choice of lattice size used throughout the project.

### 2.1

### The J1-J2 Ising Spin Model

The Ising model is widely used as a simple model of spin systems such as the ferromagnet. Each site on a lattice is occupied by a particle with spin s, which can be either s = +1 or s = −1, representing spin "up" or "down". The energy of a spin configuration on a lattice is given by the nearest neighbor Ising Hamiltonian

$$H = -J_1 \sum_{\langle i,j \rangle} s_i s_j - h \sum_i s_i, \qquad (2.1)$$

where ⟨i, j⟩ means that the sum is carried over nearest neighbor spins only, h is the applied field and J1 is the interaction energy (or coupling constant) for nearest neighbor spins. There are no nematic phases present with the Hamiltonian above. A Stripe¹ phase is obtained by introducing competing interactions to the Ising model with the following extension (no magnetic field is considered, i.e. h = 0):

$$H = -J_1 \sum_{\langle i,j \rangle} s_i s_j - J_2 \sum_{\langle\langle i,j \rangle\rangle} s_i s_j, \qquad (2.2)$$

where ⟨i, j⟩ and ⟨⟨i, j⟩⟩ denote nearest neighbors and next-nearest neighbors, respectively.

¹ Also called the super-antiferromagnetic (SAF), collinear or simply nematic phase.


The J1-J2 Ising model has received attention in recent years because it has been difficult to determine the order and universality class² of the phase transitions in the Stripe phase domain. The Stripe phase domain was found early to satisfy the ratio g = −J2/|J1| > 1/2, with J2 < 0 [42, 43, 49]. The order of the phase transitions proved more difficult, and there has been controversy over the value of the ratio g* that separates the Stripe domain into first-order and continuous regions on the phase diagram, whether such a ratio exists [45, 47], and whether the continuous region exhibits non-universal behavior [46]. Large-scale simulations brought clarity to the matter as recently as 2012, when Jin et al. [48] showed that there is a region of weak first-order phase transitions when 1/2 < g < g*, where g* ≈ 2/3. This result was corroborated by Kalz and Honecker [50]. They went on to show that at g* the model has the universality of the four-state Potts model, and that it has Ashkin-Teller criticality in the continuous region g* ≤ g < ∞, meaning that the critical exponents vary continuously between those of the four-state Potts model at g* and the Ising model at g → ∞ [1, 50].

The J1-J2 model has been included in this project as a reference point from which to understand the physics of nematic melting in the Lattice gas models of the next section. Since those stripes break the same symmetries³ as the J1-J2 model, it is reasonable to expect similar critical behavior.
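The ground-state competition behind the condition g > 1/2 can be checked directly from Eq. (2.2). The following is a small sketch assuming periodic boundaries, not the simulation code used in this project:

```python
import numpy as np

def j1j2_energy(s, J1, J2):
    """Energy of Eq. (2.2) on a periodic square lattice of +-1 spins.
    np.roll implements the periodic boundary; each bond is counted once."""
    nn = s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1))
    nnn = s * (np.roll(np.roll(s, 1, axis=0), 1, axis=1)
               + np.roll(np.roll(s, 1, axis=0), -1, axis=1))
    return -J1 * nn.sum() - J2 * nnn.sum()

# Ferromagnetic vs. column-striped configurations on an 8x8 lattice:
ferro = np.ones((8, 8), dtype=int)
stripe = np.ones((8, 8), dtype=int)
stripe[:, 1::2] = -1  # alternating columns of +1 and -1

# For g = -J2/|J1| > 1/2 (here g = 0.6) the Stripe state has lower energy:
assert j1j2_energy(stripe, 1.0, -0.6) < j1j2_energy(ferro, 1.0, -0.6)
# while for g < 1/2 (here g = 0.3) the ferromagnet wins:
assert j1j2_energy(ferro, 1.0, -0.3) < j1j2_energy(stripe, 1.0, -0.3)
```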

### 2.2

### The Lattice Gas Model

The Ising model can be reinterpreted as a model for a fluid, called the Lattice gas, by using the change of variables

$$p = \frac{s + 1}{2}, \qquad (2.3)$$

where p denotes site occupancy, i.e. p = 1 represents a "particle" and p = 0 represents a "hole" at a site on the lattice. The reason for using this formulation here is to study three-body interactions in the framework of "attraction" and "repulsion".

In the context of the Lattice gas one must reinterpret the applied magnetic field h as the chemical potential µ. Thermodynamic quantities must also be reinterpreted: the magnetization m and magnetic susceptibility χ become the density ρ and compressibility κ. The mapping from the nearest neighbor Ising model to the Lattice gas can be done precisely, such that the two models exhibit the same broken symmetries and thermodynamic properties [51]. This mapping is used together with the exact solution of the 2D Ising model [52] to check the accuracy of the simulations described in Chapter 3.

² See Chapter 4.6.
³ See Chapter 2.3.


### 2.2.1

### The Lattice Gas Hamiltonian

With the change of variables in Eq. 2.3 one can now build a Hamiltonian for the Lattice gas term by term. To the left of every Hamiltonian term will be an example of a particle configuration it applies to. A negative sign convention is used before all coupling constants except the chemical potential µ, meaning that if ε > 0 such interactions are promoted, while ε < 0 means they are suppressed.

Chemical Potential

The chemical potential introduces an energy for each particle in the system, that is independent of their relative positions. When µ> 0 the total energy increases for every particle added, and vice-versa. The Hamiltonian is written as

H_µ = µ ∑_i p_i. (2.4)

Note that the sign convention is positive before µ, as opposed to the negative sign before the rest of the Hamiltonian terms in this chapter.

Nearest Neighbor Interaction

A general Hamiltonian for pairwise interaction among particles is

H_2 = ∑_(i,j) φ_ij p_i p_j, (2.5)

where φ_ij = φ(r_ij) defines the potential energy between particles p_i and p_j at the distance r_ij = |r_i − r_j|, and the sum goes over all particle pairs at sites (i, j) on the lattice. The potential energy when restricted to nearest neighbor particles is defined as

φ(r_ij) = ⎧ ∞        if r_ij < 1
          ⎨ −ε_2nn   if r_ij = 1        (2.6)
          ⎩ 0        otherwise,

where the subscript "nn" denotes nearest neighbors. The first line means that each site can only be occupied by one particle at a time. The second line defines the energy ε_2nn as the depth of the potential well between a pair of particles; a negative sign means that particles attract, while a positive sign makes them repel. The third line simply means that pairs that are not nearest neighbors do not contribute to the total energy. Thus, for nearest neighbor interactions the Hamiltonian is simply

H_2nn = −ε_2nn ∑_⟨i,j⟩ p_i p_j, (2.7)

where the notation ⟨i, j⟩ means that the sum goes over sites i and j that are nearest neighbors only. Including the chemical potential gives the mapping to the Ising model mentioned earlier. Specifically, choosing µ = 8 and ε_2nn = 4 maps to the ferromagnetic Ising model with zero magnetic field (J = 1, h = 0).
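As a sanity check, the mapping can be verified numerically: with µ = 8 and ε_2nn = 4 the Lattice gas energy of any configuration differs from the Ising energy of the corresponding spin configuration only by the constant 2N (the bond count on the periodic square lattice). A minimal sketch in Python, assuming numpy; the function names are illustrative, not taken from the thesis code:

```python
import numpy as np

def lattice_gas_energy(p, eps=4.0, mu=8.0):
    """H = mu * sum_i p_i - eps * sum_<i,j> p_i p_j with PBC (Eqs. 2.4, 2.7)."""
    # np.roll counts each nearest-neighbor bond exactly once per axis
    bonds = p * np.roll(p, 1, axis=0) + p * np.roll(p, 1, axis=1)
    return mu * p.sum() - eps * bonds.sum()

def ising_energy(s, J=1.0):
    """H = -J * sum_<i,j> s_i s_j with PBC, zero field."""
    bonds = s * np.roll(s, 1, axis=0) + s * np.roll(s, 1, axis=1)
    return -J * bonds.sum()

rng = np.random.default_rng(0)
p = rng.integers(0, 2, size=(16, 16))
s = 2 * p - 1                      # inverse of the change of variable, Eq. 2.3
shift = lattice_gas_energy(p) - ising_energy(s)   # equals 2N for any p
```

The constant shift leaves all thermodynamical averages unchanged, which is why the two models share the same transition.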


Next Nearest Neighbor Interaction

The principle of nearest neighbor interactions can be applied to next nearest neighbors, meaning the neighbor placed on the nearest site in the diagonal direction. Such a potential is described by

φ(r_ij) = ⎧ ∞        if r_ij < 1
          ⎨ −ε_2nnn  if r_ij = √2       (2.8)
          ⎩ 0        otherwise,

with the Hamiltonian written in a similar notation as before

H_2nnn = −ε_2nnn ∑_⟨⟨i,j⟩⟩ p_i p_j, (2.9)

where ⟨⟨i, j⟩⟩ denotes particles i and j that are next nearest neighbors.

Third Nearest Neighbor Interaction

Following the same prescription, the third nearest neighbor Hamiltonian applies to particles separated by two unit steps on the lattice. Thus

φ(r_ij) = ⎧ ∞        if r_ij < 1
          ⎨ −ε_2tnn  if r_ij = 2        (2.10)
          ⎩ 0        otherwise,

with the Hamiltonian

H_2tnn = −ε_2tnn ∑_⟨⟨⟨i,j⟩⟩⟩ p_i p_j, (2.11)

where⟨⟨⟨i, j⟩⟩⟩ denotes particles i and j that are third nearest neighbors. Three-body interactions

The last type considered here are non-pairwise interactions, i.e., interactions with potential energy defined for the coordinates of three or more particles. For triplets of particles that are nearest neighbors on a square lattice there are two possible types of configuration. The three-body Hamiltonian is therefore separated into two terms that apply to each type:

H3⌜= −3⌜ ∑ ⟨i,j,k⟩ pipjpk (2.12) H3−= −3− ∑ ⟨i,j,k⟩ pipjpk, (2.13)

where the subscript “⌜” denotes nearest neighbor triplets on a corner configu-ration and “−” denotes triplets on a line, as illustrated.


### 2.3

### Lattice Topology and Symmetries

### 2.3.1

### Boundary Condition

All models of this project are studied on a two-dimensional square lattice, meaning that particles have four equidistant nearest neighbors on a plane. Throughout this project a Periodic Boundary Condition (PBC) is imposed on the lattice. This simply means that opposite edges of the lattice are connected, effectively confining the lattice to the topology of a torus.

The reason for choosing PBC is to mimic an infinite lattice as much as possible thus reducing unwanted boundary effects. Since all lattice sites get to have four nearest neighbors, any point on the lattice is an interior point.

Despite PBC, not having a truly infinite lattice can be problematic, because no structure can become larger than the lattice itself. This noticeably affects the properties of continuous phase transitions with diverging correlation length. For instance, even if the specific heat or susceptibility were to diverge at T_c in the limit L → ∞, they do not diverge during a finite-size simulation of the same model. Moreover, there can be a significant shift in T_c that requires a careful extrapolation to accurately pinpoint it in the thermodynamic limit (L → ∞). Such extrapolations, known as finite-size scaling, require PBC and will be explained in Chapter 4.5.

### 2.3.2

### Symmetries

A Hamiltonian model confined to a 2D square lattice of linear size L is invariant under a set of symmetry transformations G. Those symmetry transformations available due to the lattice itself are:

• Unit step translations T̂_x and T̂_y along ⃗e_x and ⃗e_y (group Z_L × Z_L)

• Rotations R̂(90°) (group Z_4)

• Reflections about the x-axis, the y-axis, the x+y-line and the x−y-line (all group Z_2)

In some cases there can be a spin-rotation symmetry (group Z_2), meaning that flipping all spins at once leaves the Hamiltonian unchanged, as in the zero-field Ising model. In Lattice gas models this corresponds to particle-hole exchange symmetry. In addition, all Hamiltonians in this project are invariant under time translation and time reversal. Note that all spatial symmetry transformations considered here are discrete.

### 2.3.3

### Spontaneously Broken Symmetries

In general, the high-temperature disordered phase is invariant under the same group of transformations G as its Hamiltonian. Then, the only thermodynamical averages and correlations that are nonzero are those unaffected by operations in G. This phase is characterized by lattice sites being largely independent of their position and of their neighboring sites. Such is the paramagnetic phase of the Ising model with magnetization ⟨m⟩ = 0.

In contrast, the low-temperature ordered phase is distinguished by the appearance of thermodynamic averages and correlations which are not invariant under the group G. Such is the magnetic phase of the Ising model with ⟨m⟩ ≠ 0, where flipping all spins simultaneously changes the sign of ⟨m⟩ but does not affect the Hamiltonian. Since the ordered phase is invariant only under a subset of G, one says that the system has broken the symmetry of the Hamiltonian. The term spontaneously broken symmetry is used when this process is driven by entropy, as opposed to the case where symmetries are explicitly broken by a property of the Hamiltonian itself.

Put simply, a spontaneously broken symmetry occurs when the set of configurations available to a system becomes restricted to ones that can be described by a smaller symmetry group than that of the Hamiltonian.

There can be more than one type of broken symmetry at once. For instance, a model with a "Stripe" phase breaks a Z_L translational symmetry (orthogonal to the stripes) and a Z_4 90° rotational symmetry down to Z_{L/2} and Z_2 respectively. The broken symmetry groups are then still written together as Z_L × Z_4.

To take a more specific example, the Hamiltonian H = H_2nn + H_µ with ε_2nn = µ = −1 is invariant with respect to the cyclic symmetry group Z_L × Z_L of the integers (addition modulo L), because it is invariant under any translation n T̂_i where n ∈ {1, 2, ..., L} and i ∈ {x, y}. Yet, at low temperature this model has a "checkerboard" phase, where any unit step translation toggles between the two distinct checkerboard states, as seen by the Néel order parameter of the next section. Thus, at the phase transition the original Z_L × Z_L symmetry is spontaneously broken down to the smaller Z_{L/2} × Z_{L/2}.


### 2.4

### Order Parameters

Order parameters are functions that take different values above and below the critical temperature. As a guiding principle, an order parameter has to be sensitive to the broken symmetries of the phase it describes. It is far from obvious whether a chosen order parameter is the true order parameter for a phase, so one must be careful not to draw overly far-reaching conclusions based on it. Nevertheless, in this project four different order parameters are used to discern phases. All order parameters shall be denoted by m with a subscript indicating the type.

### 2.4.1

### Occupancy Order Parameter

The occupancy is measured in the same way as the magnetization in Ising spin models. The subscript "M" means "Mono-domain" (or Magnetization). Using the change of variables in Eq. 2.3 to replace spins yields

m_M = (1/N) ∑_{i=1}^{N} (2p_i − 1). (2.14)

It goes from −1 when the lattice is completely empty to +1 when it is filled. This order parameter can be used to measure states where µ = 2ε_2nn applies, corresponding to the Ising model in zero magnetic field, according to the mapping mentioned in 2.2. In those configurations, toggling the value of all sites at once transforms one ground state into the other without changing the energy level.

Of course, this order parameter is simply a linear transformation of the density ρ = N^{−1} ∑_{i=1}^{N} p_i and can be interpreted as such. It is useful in cases where the number of possible ground state configurations is very large and the other order parameters aren't suitable. Then the density may converge to a mean value that is good enough to distinguish structures quantitatively. This is the case for instance in the semi-disordered states that will be seen in Chapter 5.

Figure 2.1: The two ground states (empty and filled) of the "Ising ferromagnet" phase on the Lattice gas break the Z_2 particle-hole symmetry. White squares are particles (p = 1) and black squares are holes (p = 0).

### 2.4.2

### Néel Order Parameter

A "Checkerboard" phase can be quantified by the Néel order parameter. It is commonly found in the context of anti-ferromagnetic Ising models, where it measures the staggered magnetization. The subscript "CB" means "Checkerboard". Here the order parameter is adapted for the Lattice gas as earlier by using the change of variable in Eq. 2.3:

m_CB = (1/N) ∑_{i=1}^{N} (2p_i − 1)(−1)^{x_i + y_i}. (2.15)

In contrast to Eq. 2.14 there is a factor (−1)^{x_i + y_i} that changes sign depending on whether the coordinate sum x_i + y_i of site i is even or odd. Thus, every site needs to have the opposite value to its neighbors for this order parameter to depart from m_CB ≈ 0. The checkerboard ground state is doubly degenerate, with the two states connected by a unit step in the x or y direction giving m_CB = ±1.
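Eq. 2.15 translates directly into a few lines of code. A minimal sketch in Python, assuming numpy; the variable names are illustrative:

```python
import numpy as np

def m_checkerboard(p):
    """Neel order parameter for the Lattice gas, Eq. 2.15."""
    x, y = np.indices(p.shape)
    return float(((2 * p - 1) * (-1.0) ** (x + y)).sum()) / p.size

x, y = np.indices((8, 8))
cb = (x + y) % 2               # one of the two checkerboard ground states
```

Applied to the two checkerboard states this gives ±1, and 0 on a filled (or empty) lattice.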

Figure 2.2: The checkerboard phase breaks the translational symmetry Z_L × Z_L down to Z_{L/2} × Z_{L/2}, and the particle-hole Z_2 symmetry.

### 2.4.3

### Stripe Order Parameter

This parameter is also called the orientational order parameter, and it quantifies the orientation of interfaces on the lattice. An interface is a pair of nearest neighbor sites on the lattice with different values. The order parameter is found in the papers [14, 15, 17, 18, 44] in various forms. Here it is defined as⁴

m_∥ = (n_v − n_h)/N, (2.16)

where n_v and n_h denote the number of vertical and horizontal interfaces and N is the number of lattice sites. It ranges from −1 for horizontally striped lattice configurations to +1 for vertically striped configurations, and it is zero in phases with 90° rotational symmetry. The stripe phase breaks translational (orthogonal to the stripes) and rotational symmetries of the lattice.

The values of n_v and n_h are computed using the values of the nearest neighbors of each particle p_i, denoted with arrows⁵ in the following expressions:

n_v = ∑_{i=1}^{N} p_i(1 − p_←) + p_i(1 − p_→) (2.17)
n_h = ∑_{i=1}^{N} p_i(1 − p_↑) + p_i(1 − p_↓). (2.18)
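The interface counts above can be evaluated on a whole lattice at once. A minimal sketch in Python, assuming numpy, with axis 0 taken as y (rows) and axis 1 as x (columns); the names are illustrative:

```python
import numpy as np

def m_stripe(p):
    """Orientational order parameter, Eqs. 2.16-2.18, with PBC via np.roll."""
    left, right = np.roll(p, 1, axis=1), np.roll(p, -1, axis=1)
    up, down = np.roll(p, 1, axis=0), np.roll(p, -1, axis=0)
    n_v = (p * (1 - left) + p * (1 - right)).sum()   # vertical interfaces
    n_h = (p * (1 - up) + p * (1 - down)).sum()      # horizontal interfaces
    return float(n_v - n_h) / p.size

vertical = np.indices((8, 8))[1] % 2   # alternating columns of particles
```

Vertical stripes give +1, horizontal stripes −1, and the checkerboard (which has 90° rotational symmetry) gives 0.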

Figure 2.3: The four configurations of the striped phase break the Z_L × Z_4 translational and rotational symmetries down to Z_{L/2} × Z_2.

⁴ Others have opted for n_v + n_h in the denominator, but Eq. 2.16 has worked more reliably in the Wang-Landau simulations of Chapter 3.2.
⁵ E.g. if p


Diagonal Stripes

A small modification gives an order parameter for diagonal stripes instead. Using arrows to denote directions again, the equations are similar:

m_⤢ = (n_⤢ − n_⤡)/N, (2.19)
n_⤢ = ∑_{i=1}^{N} p_i(1 − p_↗) + p_i(1 − p_↙) (2.20)
n_⤡ = ∑_{i=1}^{N} p_i(1 − p_↖) + p_i(1 − p_↘). (2.21)

Unit-step translations in the x or y directions, as well as 90° rotations, give up to 6 ground states in total for the densest diagonal stripe type seen in Figure 2.4. This order parameter is also useful for measuring the "Stair" phase seen in Figure 2.5. If any of these ground states is rotated by 90° the order parameter changes its sign. Both ground states break translational and rotational symmetry.

Figure 2.4: The diagonal stripe phase breaks the Z_L × Z_4 translational and rotational symmetries down to Z_{L/3} × Z_2.

Figure 2.5: Toggling all sites of the diagonal phase gives a stair phase that breaks the same symmetry.


### 2.4.4

### Net Order Parameter

The "Net" phase has vertical and horizontal stripes at the same time, and there are 3 lattice translations that yield new (different) nets: T̂_x, T̂_y and T̂_x + T̂_y. Consequently, the phase has a degeneracy of 4.

No order parameter for nets was found in the literature, so one had to be designed for this project. Its construction combines the ideas of the Néel and Stripe order parameters.

The first step is to count how many particles belong to each of the four possible nets. Each net can be composed by tiling the figure to the left of each n, and the count works by adding +1 if the particle could belong to such a net, and −1 otherwise:

n_1 = ∑_{i=1}^{N} (2p_i − 1)(−1)^{x_i y_i} (2.22)
n_2 = ∑_{i=1}^{N} (2p_i − 1)(−1)^{(x_i+1) y_i} (2.23)
n_3 = ∑_{i=1}^{N} (2p_i − 1)(−1)^{x_i (y_i+1)} (2.24)
n_4 = ∑_{i=1}^{N} (2p_i − 1)(−1)^{(x_i+1)(y_i+1)} (2.25)

The second step is to study the quadruplet ⃗n = [n_1, n_2, n_3, n_4]. If indeed the lattice is in a net phase, only one n will fully agree with the count, giving for instance ⃗n = [N, 0, 0, 0] for the first net, and so on. The last step is to map the quadruplet to a final order parameter. The following definition has worked reliably in simulation⁶:

m_⊞ = (1/N)(n_1 − n_2 + n_3 − n_4). (2.26)

It gives m_⊞ = +1 for two of the nets and m_⊞ = −1 for the other two. If one would like to distinguish the 4 nets, one could make the replacement (n_1 − a n_2 + a n_3 − n_4), which gives m_⊞ = ±1 and m_⊞ = ±a for the four net configurations.
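The two steps can be sketched compactly. A minimal Python version, assuming numpy; the construction of a test net configuration (particles wherever x or y is even) is an illustrative choice:

```python
import numpy as np

def m_net(p):
    """Net order parameter, Eqs. 2.22-2.26."""
    x, y = np.indices(p.shape)
    q = 2 * p - 1
    # n_1..n_4 from the four staggered sign patterns
    n = [(q * (-1.0) ** ((x + a) * (y + b))).sum()
         for a, b in ((0, 0), (1, 0), (0, 1), (1, 1))]
    return (n[0] - n[1] + n[2] - n[3]) / p.size

x, y = np.indices((8, 8))
net = ((x % 2 == 0) | (y % 2 == 0)).astype(int)   # one of the four net states
```

Translating the net by one unit step moves the count from n_1 to another n_k and flips the sign of m_⊞, as described above.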

Figure 2.6: The four configurations of the net phase break the Z_L × Z_L translational symmetries down to Z_{L/2} × Z_{L/2}.


### 2.5

### Frustration due to Lattice Size

Not all lattice sizes can support fully striped ground states when using periodic boundary conditions. For instance, fitting the striped ground states in Figure 2.3 neatly requires a lattice with even linear size L = √N, where N is the number of lattice sites. If L is even, stripes of particles and holes can pair up across the periodic boundary. If L is odd there will be frustration on the lattice, because the pairing must break somewhere, leading to an increased degeneracy. An example can be seen in Figure 2.7. Similarly, the "stair" ground states become frustrated if L is not a multiple of 3, as seen in Figure 2.8.

To avoid frustration altogether, L has to be a multiple of the period of the stripe pattern. The period of the net and stripe phases is 2, while the period of the stair phase is 3. Thus for this project the lattice size is chosen to meet these requirements with L = 18 or L = 24, and so on, i.e. sizes divisible by both 2 and 3.

Figure 2.7: Some of the frustrated stripe ground states with L= 15.

### Chapter 3

## Simulation Methods

The Hamiltonian models of the previous chapter map particle configurations (microstates) on a lattice to energy levels. The next step is to run simulations to study these models in detail and explore properties such as ground states, thermodynamic quantities, phase transitions and critical temperatures. For this purpose, two types of Monte Carlo algorithms were implemented: the Metropolis algorithm and the Wang-Landau algorithm.

This chapter will serve as a reminder of these techniques and introduce the conventional language and notation. First, the Metropolis algorithm will be outlined along with aspects specific to the implementation used here. Then the same is done for the Wang-Landau algorithm. In practice, however, the Wang-Landau algorithm can be run prior to the Metropolis algorithm in order to inform the choice of temperatures in the parallel computation of the latter, as will be explained in Section 3.1.3.

### 3.1

### The Metropolis Algorithm

Since this type of simulation is common in the field, an overview of the most essential concepts will suffice here. A more detailed description can be found in any textbook on Monte Carlo simulations, notably the excellent book by Newman and Barkema [53], whose notation will be used throughout this chapter.

### 3.1.1

### Markov Chain Monte Carlo

The point of a Metropolis Monte Carlo simulation is to compute some observable quantity ⟨Q⟩ of a thermal system, such as the internal energy U. Ideally, ⟨Q⟩ is computed at some temperature T using Boltzmann weights as

⟨Q⟩ = ∑_µ Q_µ e^{−βE_µ} / ∑_µ e^{−βE_µ}, (3.1)


where the denominator is the partition function Z, µ labels the microstates¹ and β = 1/(k_B T) is the inverse temperature. Even for small systems there can be an enormous number of microstates, making it infeasible to compute the sum. Since only a small subset of microstates contributes significantly to the sum, one could in principle estimate ⟨Q⟩ by picking only among microstates in that subset and ignoring the rest. This is exactly what the Metropolis algorithm accomplishes. It is a Markov Chain Monte Carlo (MCMC) algorithm, meaning that it uses a Markov process to generate a new microstate ν when fed an old one µ, picking among the microstates with probability distribution p_µ = Z^{−1} e^{−βE_µ}. One can repeat the process by feeding the new microstate ν as input each time, thus generating a Markov chain of microstates that are "likely" at the current temperature.

Once M microstates have been generated, an estimate Q_M of the true average ⟨Q⟩ is simply computed as

Q_M = (1/M) ∑_{i=1}^{M} Q_{µ_i}, (3.2)

where the Boltzmann weights have canceled due to the choice of probability distribution.

### 3.1.2

### The Metropolis Algorithm Step by Step

Before anything else, the lattice site coordinates are defined, and each site on the lattice is assigned an integer:

Site integer ⎧ s = +1 or −1 for magnetic systems (spin up/down)
             ⎩ p = 1 or 0 for lattice gas systems (particle/hole). (3.3)

The sites may be populated at random (infinite temperature) or in some ordered fashion. This project only deals with square lattices, and the size of the lattice is then given by its linear size L, such that a two-dimensional lattice, for instance, contains L × L = N sites in total.

Then the algorithm starts by repeating the following Monte Carlo Step:

1. From state µ, pick a random site on the lattice,

2. propose a new state ν by flipping the integer at that site,

3. compute the change in energy dE = E_ν − E_µ,

4. decide to accept or reject the proposed state:

¹ This chapter temporarily suspends the use of µ as the chemical potential and uses it to denote microstates.


(a) if dE < 0, accept ν immediately.

(b) if dE > 0, generate a random real number r ∈ [0, 1]. Accept ν if r < e^{−β dE}, else reject.

5. If the trial state was accepted, update the energy by setting E_µ ← E_ν.² If it was rejected, restore the state µ by flipping the site integer back, leaving E_µ unchanged.

Repeating steps [1-5] N times is referred to as a Monte Carlo Sweep (MCS) and it is common to use “MCS” as a measure of simulation time.
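Steps [1-5] can be sketched for the nearest-neighbor Lattice gas of Chapter 2. A minimal Python version, assuming numpy; the couplings ε_2nn = 4 and µ = 8 and the helper names are illustrative, not taken from the thesis implementation:

```python
import numpy as np

def local_energy(p, i, j, eps=4.0, mu=8.0):
    """Energy terms of the nearest-neighbor Lattice gas that involve site (i, j)."""
    L = p.shape[0]
    nn = p[(i - 1) % L, j] + p[(i + 1) % L, j] + p[i, (j - 1) % L] + p[i, (j + 1) % L]
    return mu * p[i, j] - eps * p[i, j] * nn

def metropolis_sweep(p, beta, rng, eps=4.0, mu=8.0):
    """One Monte Carlo Sweep: N repetitions of steps 1-5."""
    L = p.shape[0]
    for _ in range(p.size):
        i, j = rng.integers(0, L, size=2)       # step 1: random site
        dE = -local_energy(p, i, j, eps, mu)
        p[i, j] ^= 1                             # step 2: propose a flip
        dE += local_energy(p, i, j, eps, mu)     # step 3: dE = E_nu - E_mu
        if dE > 0 and rng.random() >= np.exp(-beta * dE):
            p[i, j] ^= 1                         # step 4b: reject, flip back
    return p
```

Only the terms touching the flipped site enter dE, which is what makes a single step O(1) rather than O(N).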

Equilibration

The lattice is usually not initialized in one of the "likely" microstates, i.e. one in the subset of microstates that is likely at that particular temperature. The system needs to equilibrate for some time, roughly 1000 MCS, before it starts generating such states. Once equilibrium is achieved, the sampling of the system can begin.

It is not always obvious whether the system has reached equilibrium; it may just be lingering in a local minimum of the energy landscape, for instance. Whatever the reason, it is wise to make overly long trial runs of the simulation to get acquainted with the speed of equilibration of the system being studied. Under most circumstances 20000 MCS is enough to equilibrate the systems with short-range potentials considered here.

Sampling

Data collection begins after equilibration. In this project the sampling is performed once every MCS. It is generally enough to sample somewhere between 10 and 100 thousand MCS, depending on the autocorrelation time of the system, which can grow to hundreds of MCS at low temperatures or near phase transitions.

Error Estimation

Sampling as often as every MCS isn't necessarily the best idea, since the sampled states can be highly correlated; in fact, the autocorrelation time τ of the systems considered here is on the order of 5-50 MCS, which ruins the error estimates of the simulation if not corrected for. Still, it is most practical to do this type of correction during post-processing, rather than sampling, say, once every τ MCS, which would require estimating τ before sampling. For a large number of samples M taken once every MCS, the effective standard deviation σ_eff of ⟨Q⟩ is computed as

² The left arrow notation "←" signifies assignment, as used in the context of algorithms in


σ2_{eff} =2τ

MVar(Q), (3.4)

where M is the number of samples and the variance is computed directly from the M sampled values of Q. The Jackknife resampling method is used to estimate the standard deviation of functions of Q, such as the specific heat. The details can be found in most relevant textbooks on simulations, like [53].
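Eq. 3.4 needs an estimate of τ, which the thesis does not spell out; one common choice is the integrated autocorrelation time summed from the sample autocorrelation function. A rough sketch in Python, assuming numpy (the positive-lag truncation is a simple heuristic, not the thesis's estimator):

```python
import numpy as np

def integrated_autocorr_time(q, window=100):
    """Crude integrated autocorrelation time: 1/2 plus the sum of the
    positive sample-ACF values up to `window` lags."""
    q = np.asarray(q, dtype=float) - np.mean(q)
    var = np.mean(q * q)
    acf = np.array([np.mean(q[:len(q) - t] * q[t:])
                    for t in range(1, window)]) / var
    return 0.5 + np.sum(acf[acf > 0])

def effective_error(q):
    """Eq. 3.4: sigma_eff = sqrt(2 tau / M * Var(Q))."""
    return np.sqrt(2 * integrated_autocorr_time(q) / len(q) * np.var(q))
```

For uncorrelated data τ ≈ 1/2 and Eq. 3.4 reduces to the usual standard error; for correlated data the error bar grows accordingly.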

### 3.1.3

### Parallel Tempering

The thermodynamic averages have to be estimated at several temperatures, espe-cially at temperatures near the critical temperature Tc of a phase transition. This

requires several Metropolis simulations, one for each temperature. To hasten the computations one can run these simulations concurrently on a parallel computer. The simplest approach would be to simply do independent simulations on a prede-fined set of temperatures, but a lot of performance can be gained by using a simu-lated annealing method called Parallel Tempering (PT). Then, instead of running independent simulations, one allows pairs of simulations to exchange their current microstates with each other with some probability that depends on their energy and temperature difference. This effectively allows slowly evolving (low-temperature) simulations to tunnel out from local energy minimas, that would otherwise take a long time to escape from by random fluctuations alone. In practice it is more efficient to exchange temperatures instead of the large arrays of coordinates for the microstates. The implementation used in this project uses MPI to send tempera-tures between threads on a distributed memory architecture.

The PT Algorithm Step by Step

1. Let (µ_{T_1}, µ_{T_2}, ..., µ_{T_i}, µ_{T_{i+1}}, ..., µ_{T_max}) be a collection of concurrent simulations (replicas) running at temperatures T_1 < T_2 < ... < T_i < T_{i+1} < ... < T_max.

2. Every t MCS, simulations with temperature T_i where i is even propose a temperature exchange with their neighbor at T_{i+1}. t should be on the order of 1 or 2 autocorrelation times τ.

3. Accept with probability P(T_i ↔ T_{i+1}) = min(1, exp[(β_i − β_{i+1})(E_i − E_{i+1})]).

4. Repeat from step 2, each time alternating i between even and odd.
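One exchange round of the steps above can be sketched as follows. A minimal Python version, assuming numpy; for brevity the lists are indexed by temperature rung and only the replica energies are swapped, standing in for exchanging the full microstates (or, equivalently, the temperatures):

```python
import numpy as np

def pt_swap_step(betas, energies, parity, rng):
    """Propose exchanges between neighboring replicas (i, i+1) for i of the
    given parity, using the acceptance probability of step 3."""
    for i in range(parity, len(betas) - 1, 2):
        arg = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
        if rng.random() < np.exp(min(arg, 0.0)):   # min(1, exp(arg))
            energies[i], energies[i + 1] = energies[i + 1], energies[i]
    return energies
```

When the colder replica (larger β) holds the higher energy, the exponent is positive and the swap is always accepted, which is exactly the tunneling mechanism described above.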

Temperature Ladder for PT - A Simple Workaround

The effectiveness of the PT algorithm described in 3.1.3 relies on a properly constructed sequence of temperatures. It is not obvious how to place the temperatures such that the exchange probability, or acceptance ratio, is uniform across all replicas, especially near a phase transition. On the other hand, it is clear that the energy histograms of concurrent simulations should have some overlap, at least pairwise, to have a non-zero acceptance ratio.

Several solutions to this problem have been suggested. Examples are adaptive temperature control schemes that modify temperatures iteratively to reach a target acceptance ratio, by either repositioning the temperatures [54] or inserting new ones [55]. An acceptance ratio of 20% was found to be sufficient. A similar conclusion was reached by maximizing the mean square displacement of temperatures as they perform a 1D random walk over the temperature ladder [56]. A different approach by Katzgraber et al. [57] uses an adaptive feedback-optimized algorithm to minimize round-trip times between T_1 and T_max. This approach addresses bottlenecks in acceptance ratios that are present near T_c.

During the development of this project these ideas were implemented with varying degrees of success. In the end a simple workaround worked more reliably: placing the temperatures according to the slope of the entropy, which can either be obtained roughly by integrating the specific heat of a short PT run with linearly spaced temperatures, or more accurately from a Wang-Landau simulation run beforehand. The procedure is:

1. Let S_1 < S_2 < ... < S_max be equally spaced levels of entropy S corresponding to a temperature range of interest, such that S_{i+1} − S_i = constant.

2. Define the rungs T_i of the temperature ladder as T_i = T(S_i).

Using this method the overlap between energy histograms is kept while avoiding gaps throughout the energy range, as long as there is a sufficient number of temperatures, eliminating bottlenecks in the exchange between parallel simulations. Most importantly, the chosen temperatures are placed more densely at the "interesting" regions near phase transitions. Figure 3.1 illustrates the procedure for a 16 × 16 ferromagnetic Ising model, i.e., µ = 8 and ε_2nn = 4 (or J_1 = 1, h = 0).

Figure 3.1: (a) The temperature ladder built using the entropy S(T). The density of T-values is highest where S′(T) is highest. (b) The histograms of energy for 12 concurrent Metropolis simulations using the temperature ladder from (a).
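The two-step procedure amounts to inverting a monotonic S(T) curve at equally spaced entropy levels. A minimal sketch in Python, assuming numpy and a tabulated S(T), e.g. from a prior Wang-Landau run:

```python
import numpy as np

def entropy_ladder(T, S, n_rungs):
    """Place temperature rungs T_i = T(S_i) at equally spaced entropy
    levels, inverting the monotonically increasing S(T) curve by linear
    interpolation."""
    levels = np.linspace(S[0], S[-1], n_rungs)
    return np.interp(levels, S, T)
```

Where S(T) rises steeply (near a transition) equal entropy steps translate into closely spaced temperatures, which is exactly the desired clustering of rungs around T_c.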


### 3.2

### The Wang-Landau Algorithm

In 2001 David P. Landau and Fugao Wang presented the Wang-Landau (WL) algorithm, a Monte Carlo approach to estimate g(E), the Density of States (DOS) of classical systems [58]. Multiplying g(E) by the Boltzmann weights e^{−βE} gives access to the partition function Z, from which quantities like the specific heat, entropy and free energy can be extracted at any temperature. Unlike the Metropolis algorithm, the WL algorithm is not temperature dependent, and it is insensitive to barriers in rough energy landscapes. The algorithm has the added benefit of producing the entire energy spectrum, as well as examples of microstates at any energy, including the ground state.

### 3.2.1

### The WL Algorithm Step by Step

The lattice is first constructed as in the Metropolis algorithm. Then the algorithm estimates g(E) by doing several random walks in energy space. g(E) is unknown in the beginning and set to 1 as a first guess. A random walk continues until it produces a flat histogram of visited energies, which terminates an iteration of the algorithm. As the random walk visits energies, g(E) is multiplied by a modification factor f and the entries of a histogram are increased by unity, i.e. g(E) ← g(E) × f and H(E) ← H(E) + 1. Initially, f is set to e ≈ 2.71828 and is decreased each iteration by letting f ← √f. The entire simulation finishes once f is small enough, in this case when log f_min ≈ 10⁻⁷.

In practice it is wise to work with the logarithms of these numbers to avoid overflow issues; the entries in the DOS can quickly grow larger than double precision numbers. The algorithm reads:

1. At the start of the simulation, set log f = 1, H(E) = 0 and log g(E) = 0 ∀E.

2. Starting from a state with energy E_µ, obtain a new state by flipping a random site on the lattice and compute the new energy E_ν.

3. Accept the new state with probability

P(E_ν | E_µ) = min(1, exp[log g(E_µ) − log g(E_ν)]).

4. If accepted, set E_µ ← E_ν, then increase H(E_µ) ← H(E_µ) + 1 and log g(E_µ) ← log g(E_µ) + log f. If rejected, do the same updates at the unchanged E_µ and flip back the integer from step 2.

5. Repeat steps [2-4] until a convergence criterion is met. If so, reset H(E) = 0 ∀E and decrease the factor log f ← ½ log f. If log f > 10⁻⁷, start a new iteration from step 2; if not, end the simulation.

As was the case in the Metropolis algorithm, steps [2-4] are called a Monte Carlo Sweep when performed N times. A random walk means performing steps [2-5] exactly once.
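As an illustration, the update rules above can be run on a toy system whose exact DOS is known: N independent two-state sites with energy equal to the number of occupied sites, so that g(E) = C(N, E). A simplified Python sketch, assuming numpy; for brevity the flat-histogram check of step 5 is replaced by a fixed number of steps per iteration:

```python
import numpy as np

def wang_landau_binomial(N=12, log_f_min=1e-4, steps_per_iter=20000, seed=0):
    """WL estimate of log g(E) for N independent two-state sites, where the
    exact answer is log of the binomial coefficients C(N, E)."""
    rng = np.random.default_rng(seed)
    log_g = np.zeros(N + 1)
    p = rng.integers(0, 2, size=N)
    E = int(p.sum())
    log_f = 1.0
    while log_f > log_f_min:
        for _ in range(steps_per_iter):
            i = rng.integers(N)
            E_new = E + (1 - 2 * p[i])            # a flip changes E by +-1
            # step 3: accept with min(1, g(E)/g(E_new))
            if rng.random() < np.exp(log_g[E] - log_g[E_new]):
                p[i] ^= 1
                E = E_new
            log_g[E] += log_f                      # step 4: update current level
        log_f *= 0.5                               # step 5: f <- sqrt(f)
    return log_g - log_g[0]                        # anchor so that g(0) = 1
```

Because the histogram check is dropped, this only demonstrates the random-walk and update rules, not the convergence criteria discussed below.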


Once the simulation ends, the DOS is used to define the unnormalized canonical distribution P(β, E), with which one computes the partition function Z by summing over the energy. The thermodynamic functions follow using textbook formulas:

Z(β) = ∑_E P(β, E) = ∑_E g(E) e^{−βE}. (3.5)

Internal energy: U(β) = ⟨E⟩(β) = (1/Z) ∑_E E g(E) e^{−βE} (3.6)

Specific heat: c(β) = (β²/N) Var E = (β²/N)(⟨E²⟩ − ⟨E⟩²) (3.7)

Entropy: S(β) = log Z + βU (3.8)

Free energy: N f(β) = −(1/β) log Z = U − S/β. (3.9)

Figure 3.2: Some of the thermodynamical quantities available after PT and WL simulations: (a) internal energy, (b) order parameter, (c) specific heat, (d) entropy, (e) free energy, (f) logarithm of the density of states. The model is a 24 × 24 "Ising ferromagnet" on the Lattice gas with µ = 8 and ε_2nn = 4, simulated for 10⁴ MCS in PT and down to log f = 10⁻⁸ in WL. T is in units of ε_2nn. The exact curves are computed from the Onsager solution. Notice also that the PT temperatures are placed densely near T_c due to the temperature ladder.
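Evaluated numerically from a Wang-Landau estimate of log g(E), the sums in Eqs. 3.5-3.9 easily overflow double precision; a log-sum-exp shift keeps them stable. A sketch in Python, assuming numpy; note that S and f inherit the additive constant of an unnormalized log g unless it is anchored, e.g. to a known ground-state degeneracy:

```python
import numpy as np

def thermo_from_dos(E, log_g, beta, N):
    """Eqs. 3.5-3.9 from log g(E) at inverse temperature beta."""
    E = np.asarray(E, dtype=float)
    w = log_g - beta * E               # log of the summand in Eq. 3.5
    shift = w.max()
    P = np.exp(w - shift)              # shifted, overflow-safe weights
    logZ = shift + np.log(P.sum())     # Eq. 3.5 in log form
    U = (E * P).sum() / P.sum()        # Eq. 3.6
    var_E = (E**2 * P).sum() / P.sum() - U**2
    c = beta**2 / N * var_E            # Eq. 3.7
    S = logZ + beta * U                # Eq. 3.8
    f = (U - S / beta) / N             # Eq. 3.9
    return U, c, S, f
```

On a two-level system with g = [1, 1] this reproduces the standard textbook results, which makes it a convenient unit test before feeding in a simulated DOS.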


### 3.2.2

### Convergence Criterion

The convergence criterion used in this project is not the one described in [58]. As mentioned earlier, reaching a flat histogram was originally used by Wang and Landau as a convergence criterion for each iteration. One would then check, once every 10⁴ MCS or so, that all histogram values H(E) were no less than 95% of the average ⟨H(E)⟩. It quickly became clear that this was a poor measure of convergence, because the height of H(E) grows linearly with time, reaching relative "flatness" regardless of any actual convergence of the WL random walk.

After many proposed improvements to the algorithm, in 2006 Lee, Okabe and Landau [59] suggested a new convergence criterion by studying the saturation of the DOS-error during a random walk. The approach was described in some more detail by Lee, Okabe and Koh in 2013 [60], which guided the implementation used here. The convergence criterion then follows these steps:

1. Denote the histogram for the k'th random walk as H_k(E).³ Avoid linear growth by subtracting the minimum value of H_k(E), i.e. define the reduced histogram

H̃_k(E) = H_k(E) − min H_k(E).

2. Denote the area under H̃_k(E) by

ΔH_k = ∑_E w(E) H̃_k(E),

where w(E) is the energy bin width of the histogram. The area ΔH_k is conjectured to measure the error in the DOS.

3. Monitor the area ΔH_k(t) during a random walk. It first increases rapidly and then equilibrates around some mean value, meaning that the error has saturated and the current random walk has converged.

For this project it worked well enough to establish convergence by monitoring the slope α of the latest interval of ∆Hk(t) during a simulation. The latest interval was simply chosen as the last 25 % of the total time⁴. Initially α ≫ 0. When the error saturates and convergence is reached, α dips below 0 because of fluctuations in ∆Hk. Thus a useful convergence criterion is simply α < 0.
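This slope test can be sketched as follows (a schematic, not the actual thesis implementation; `dH` is assumed to hold the recorded time series of ∆Hk, and α is estimated by a least-squares fit over its last quarter):

```python
import numpy as np

def has_converged(dH):
    """Return True once the slope alpha of the last 25 % of the
    Delta-H_k time series is negative, i.e. the DOS error has saturated."""
    tail = np.asarray(dH[-max(2, len(dH) // 4):], dtype=float)
    t = np.arange(len(tail))
    alpha = np.polyfit(t, tail, 1)[0]       # slope of a degree-1 fit
    return alpha < 0.0
```

While ∆Hk is still growing the fitted slope stays positive and the walk continues; only once fluctuations dominate does the slope dip below zero.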

In Figure 3.3 one can see the algorithm converging for a collection or random walks during a WL simulation. Usually the first couple of walks saturate within 104 MCS for small 8× 8 systems, while later walks with smaller modification factor f can take more than 106 MCS.

³Do not confuse with the Hamiltonian!

[Figure: ∆Hk versus MCS (10³ to 10⁸) for modification factors f = 2.7182818285, 1.0039138893, 1.0000305180 and 1.0000000298.]

Figure 3.3: Example of the saturation of the DOS error by studying the convergence of the areas ∆Hk during random walks k = 1, 9, 16 and 26. Beginning with slope α ≫ 0, each iteration terminates once the slope of the last 25 % becomes negative.

### 3.2.3 The 1/t Algorithm

In the WL algorithm it is required that the modification factor f decreases monotonically with time. Decreasing f by its square root at each step, fi+1 = √fi as in the original WL algorithm, may become inadequate at the later stages of the simulation. It turns out that after a certain number of random walks the error in the simulation saturates, meaning that no accuracy is gained by running it for a longer time. To address this issue, Belardinelli et al. introduced the "1/t" algorithm [61–64]. They suggest that, after a certain number of random walks, the modification factor is redefined as f = 1/t instead, where t is the simulation time in units of MCS⁵. At each random walk i of the simulation, fi is modified as follows:

1. If fi+1 = √fi > 1/t, set fi+1 = √fi, exactly as in step 5 of Section 3.2.1.

2. If fi+1 = √fi ≤ 1/t, set fi+1 = f(t) = 1/t and update it once every MCS. The histogram H and the convergence criterion of Section 3.2.2 are not used for the remainder of the simulation.

Usually the crossover to the 1/t algorithm takes place after 1 × 10⁶ to 5 × 10⁶ MCS, when the modification factor is f ≲ 10⁻⁶. The 1/t algorithm is then active until reaching 10⁷ MCS, i.e. f = 10⁻⁷, after which the simulation ends.

### 3.2.4 Two-Dimensional WL: Random Walk over Energy and Order Parameter Space

An extension to the WL algorithm is used to obtain the DOS in terms of both energy E and order parameter m, g(E, m). Although the practical implementation

⁵Another option is f = NE/j, where NE is the number of energy levels and j the total number

Figure 3.4: The main results of a 2D WL simulation. These belong to a 24 × 24 "Ising ferromagnet" on the Lattice gas (µ = 22nn). (a) Logarithmic density of states log g(E, m). (b) Probability of states P(β, m)/Z. As the temperature is lowered below Tc ≈ 2.3 the order parameter goes from 0 to ±1, and one can see that the probability separates into the two ground states.

is challenging, the physical concepts remain largely the same as in 1D WL simulations over energy space only. This extension permits a finer study of the phase transition because it gives access to the order parameter m(β) and to functions dependent on it, such as the free energy f(β, m) and the canonical distribution P(β, m), which will be discussed in Section 4.4.

In contrast to Eq. 3.5, the partition function is this time obtained by summing over both E and m:

Z(β) = ∑_{E,m} g(E, m) e^{−βE}. (3.10)

The details of the implementation are discussed in [58, 65, 66] and consist of keeping a two-dimensional histogram while performing the random walk in both energy and order parameter space. The convergence criterion is modified accordingly, reinterpreting ∆H as a volume instead of an area. To illustrate, Figure 3.4 shows an example of the DOS surface obtained from a 2D WL simulation, and the probability of states as a function of temperature and order parameter. Extracting thermodynamic quantities from the DOS is done as in the 1D case after obtaining the partition function.
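As an illustration of that last point, an average such as ⟨|m|⟩(β) follows directly from a tabulated log g(E, m). The sketch below is schematic (array names are assumptions) and works in log space to avoid overflow when forming g(E, m)e^{−βE}:

```python
import numpy as np

def thermal_average_abs_m(logg, E, m, beta):
    """<|m|>(beta) from a 2D density of states.

    logg : natural log of g(E, m), shape (nE, nM)
    E    : energy of each row, shape (nE,)
    m    : order parameter of each column, shape (nM,)
    """
    logw = logg - beta * E[:, None]   # log of g(E, m) e^{-beta E}
    logw -= logw.max()                # log-sum-exp stabilization
    w = np.exp(logw)                  # unnormalized Boltzmann weights
    Z = w.sum()                       # partition function (up to a constant)
    return float((np.abs(m)[None, :] * w).sum() / Z)
```

The stabilizing constant cancels between numerator and denominator, so only ratios of weights, which are well behaved, ever appear.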

### 3.2.5 Parallelization of the WL Algorithm

The speed of convergence of 2D WL random walks is much slower than that of their 1D counterparts, since the number of entries in the histograms grows roughly as L⁴ [46]. There are several parallelization schemes of varying complexity to alleviate this issue. The idea is to have multiple random walkers that contribute independently