
On the Possibility of Testing the Weak Equivalence Principle Using Cosmological Data Sets


DEGREE PROJECT IN ENGINEERING PHYSICS, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2019

On the Possibility of Testing the Weak Equivalence Principle Using Cosmological Data Sets

DEXTER BERGSDAL

KTH ROYAL INSTITUTE OF TECHNOLOGY SCHOOL OF ENGINEERING SCIENCES


Master of Science Thesis

On the Possibility of Testing the Weak Equivalence Principle Using Cosmological Data Sets

Dexter Bergsdal

Particle and Astroparticle Physics, Department of Physics, School of Engineering Sciences

KTH Royal Institute of Technology, SE-106 91 Stockholm, Sweden
Stockholm, Sweden 2019


Typeset in LaTeX

Akademisk avhandling för avläggande av teknologie masterexamen i teknisk fysik.

Scientific thesis for the degree of Master of Science in Engineering Physics.

TRITA–SCI-GRU 2019:379

© Dexter Bergsdal, November 2019

Printed in Sweden by Universitetsservice US AB, Stockholm November 2019


Abstract

The Equivalence Principle (EP) is the most fundamental concept of Einstein's General Relativistic theory of gravity (GR), which currently serves as the leading theory in modern cosmology governing the dynamical evolution of the universe. Following the observations of the seemingly accelerating expansion of the universe, there is a consensus for models assuming the existence of a theoretical "dark energy" component. There is an unsettling rift between the prediction of this component from quantum field theoretical arguments and the value inferred from observations of the expansion rate, causing tension regarding the validity of GR as an accurate theory of cosmological modelling. Since the EP is an integral part of GR, there is precedent for providing more thorough tests of its implications. Currently, the conviction in the EP is mostly based on rigorous tests performed within the confinements of our galactic vicinity. As such, it is an interesting proposition to investigate the EP on a grander scale, where the effects of cosmology can be considered, to possibly further our understanding of these issues.

In this thesis we investigate the possibility of testing the EP using spectral lag data of Gamma-Ray Bursts (GRBs) combined with Shapiro time delay data inferred from the large-scale matter distribution of the universe. We motivate a model for the cosmological Shapiro delay that is described by the gravitational potential fluctuations of the large-scale structure. We show that a decisive test requires a detailed description of these fluctuations on the full line-of-sight (LoS) between the source and the observer. Although our data in this work lacks the quality to put new constraints on EP violation, our test is promising for future generation sky surveys.

Key words: Cosmology: large-scale structure – Gravitation: Equivalence Principle test


Sammanfattning

Ekvivalensprincipen (EP) är det mest fundamentala konceptet inom Einsteins allmänrelativistiska gravitationsteori (GR) som för tillfället används som den ledande teorin inom modern kosmologi för att beskriva den dynamiska utvecklingen av universum. Efter observationerna av den till synes accelererande expansionen av universum råder konsensus för modeller som antar existensen av en teoretisk "mörk energi"-komponent. Det finns en obekväm klyfta mellan förutsägelsen av denna komponent från kvantfältsteoretiska argument och värdet härlett från expansionstakten som skapar spänning gällande giltigheten av GR som en precis teori av kosmologisk modellering. Då EP är en väsentlig del av GR, existerar ett behov av att bidra med fler genomgående tester av dess implikationer. För tillfället är övertygelsen för EP främst baserad på rigorösa tester utförda inom vår galax närhet. Det är därför intressant att utforska EP på större skalor, där kosmologiska effekter kan beaktas, för att utöka vår förståelse av dessa problem.

I denna tes undersöker vi möjligheten att testa EP genom att använda data av spektral fördröjning för gammablixtar kombinerat med data av Shapiro-tidsfördröjning härlett från universums storskaliga materiefördelning. Vi motiverar en modell för den kosmologiska Shapiro-tidsfördröjningen som beskrivs av de gravitationella potentialfluktuationerna av den storskaliga strukturen. Vi visar att ett övertygande test kräver en detaljerad beskrivning av dessa fluktuationer över hela propagationen mellan gammablixtkällan och observatören. Även om vår data i detta arbete saknar den kvalitet som behövs för att sätta nya begränsningar på EP-överträdelse är vårt test lovande för framtida generationers kosmologiska data.

Nyckelord: Kosmologi: storskalig struktur – Gravitation: test av Ekvivalensprincipen


Preface

This thesis is the result of nine months of work from February 2019 to November 2019 at the division of Cosmology, Particle Astrophysics and Strings at the Physics Department of Stockholm University. The work has been carried out under the supervision of Jens Jasche.

Outline

In Chapter 1 we introduce the context in which a cosmological Equivalence Principle (EP) test is relevant. In Chapter 2 we provide a background on the basic physics used in this work by introducing concepts from cosmological and gravitational physics. In Chapter 3 we detail the theory for the type of test we use in this thesis. In Chapter 4 we apply the knowledge of the previous chapters to test the EP. Lastly, in Chapter 5 we summarise our work and provide conclusions for future research.

Acknowledgements

My eternal gratitude goes out to my supervisor Jens Jasche, who has guided me through this work and very generously offered me his time to advise on all aspects of being an aspiring researcher. Additionally, I want to thank Adam Johansson Andrews for assisting me and making my time spent in the research group a very pleasant one. I would also like to thank my KTH supervisor Mattias Blennow for taking the time to read several drafts and providing valuable suggestions for improvements. Most importantly, I want to thank my family: my mother Gülsen, my father Torbjörn, and my brother Deniz, for their endless support during my five years in the KTH Engineering Physics programme.


Contents

Abstract . . . iii
Sammanfattning . . . iv
Preface . . . v
Contents . . . vii

1 Introduction . . . 1

2 Background . . . 3
2.1 The expanding universe . . . 3
2.2 The large-scale structure of the universe . . . 4
2.2.1 Growth of large-scale structure . . . 5
2.3 Correlation functions and the Power spectrum . . . 6
2.4 The Equivalence Principle . . . 7
2.5 Post-Newtonian formalism . . . 7

3 Cosmological Shapiro Delay . . . 9
3.1 The Shapiro time delay effect . . . 10
3.1.1 Shapiro delay in cosmology . . . 11
3.2 Testing the WEP with cosmological Shapiro delay . . . 12
3.2.1 Quantifying WEP violation using Δγ . . . 12
3.2.2 Outline and approach . . . 13
3.3 Behaviour of cosmological Shapiro delay . . . 15
3.3.1 Gaussian Random Fields . . . 15
3.3.2 Dark matter simulation . . . 20

4 Testing the Weak Equivalence Principle . . . 23
4.1 SDSS/BORG data . . . 23
4.2 Analysis – Data & Method . . . 27
4.2.1 Model . . . 28
4.2.2 Mock data . . . 30
4.3 Results . . . 33
4.4 Discussion . . . 36

5 Summary and conclusions . . . 39

Bibliography . . . 40


Chapter 1

Introduction

Arguably the most fundamental question in all of science regards the creation of our universe. The study of physics, and in particular cosmology, attempts to explain how our universe developed from a hot and dense state after the Big Bang to the universe that we observe today. Many of the developments made in cosmology have been a direct result of the introduction of Einstein's theory of General Relativity (GR) some century ago [1]. Since then, tremendous developments in cosmological theory coupled with groundbreaking observations have rapidly turned the field from largely philosophical into a flourishing science. Although the successes are plentiful, some aspects of our understanding have stagnated in the last couple of decades.

The consensus standard model of cosmology, the ΛCDM model, is currently the most successful model, able to account for a wide set of cosmological precision observational tests of, e.g., the Cosmic Microwave Background (CMB) and the large-scale distribution of galaxies [2]. The model is associated with a parameterisation that assumes that roughly 95% of the contents of the universe is of unknown origin. Roughly 70% is attributed to the driving force of the observed accelerating expansion of the universe [3, 4], called dark energy. In this model, the leading theory for dark energy is the cosmological constant, Λ. Despite the fact that the cosmological constant holds a prominent role in cosmology, on a fundamental level it is poorly understood. One of the bigger issues with the ΛCDM model is that quantum field theory predicts a "cosmological constant" that should be 120 orders of magnitude larger than what is currently observed. This is referred to as the cosmological constant problem [5].

The foundation of the ΛCDM model is that the governing physics of gravitation in the universe can be described by GR. Naturally, issues such as the cosmological constant problem justify a concern of whether the reliance on GR might be detrimental to understanding the true source of dark energy. There have been attempts to explain the observation of the accelerating expansion through modifications of GR and alternative metric theories of gravity, without a successful breakthrough [6]. The fact that GR is a classical theory and thus incompatible with quantum mechanics [7] might suggest that many of the existing problems in cosmology today could only be solved by a theory which unifies the classical and quantum realms through a theory of quantum gravity.

Although GR has passed a heap of observational tests, it is of the highest relevance to continue testing the theory in light of the discussed inconsistencies. Among the more important tests is that of the Equivalence Principle (EP). The EP is the statement that non-gravitational fields are gravitationally indistinguishable in the universe. This property permits the formulation of gravity as a geometric theory mathematically described by a curved spacetime metric. The EP is the most fundamental assumption of the class of metric gravity theories, and a violation of this principle would prove to be a groundbreaking result for the interpretation of gravity in our universe.

Tests of the EP are usually investigated through a substatement called the Weak EP (WEP). The WEP has been tested extensively, starting with Galileo Galilei in the 17th century and continuing to more modern tests [8–11], without any indications of a violation. With the rapidly increasing availability of cosmological data sets, testing the WEP in a cosmological setting has become an interesting prospect. A recent surge of papers ([12–32], among others) have attempted such WEP tests with the allure of significantly improving constraints on a violation set by the leading non-cosmological experiments.

In this thesis we investigate the possibility of testing the WEP using cosmological data sets of large-scale structure and cosmic transients, such as Gamma-Ray Bursts (GRBs).

The thesis is structured as follows: In Chapter 2 we provide some background on cosmology, large-scale structure, and the Equivalence Principle. In Chapter 3 we investigate how to test the EP utilising the Shapiro time delay effect in a cosmological context. In Chapter 4 we implement a WEP test comparing spectral lag data of GRBs to Shapiro delay data inferred from the large-scale matter distribution. In Chapter 5 we provide a summary of the work and our conclusions.


Chapter 2

Background

2.1 The expanding universe

The ΛCDM model is currently the leading model explaining observations of our universe and is completely dependent on General Relativity (GR) describing the dynamics of gravity. GR is a geometric theory of gravity where the dynamics can be represented through the geometry of a curved four-dimensional pseudo-Riemannian manifold of space and time. Gravity in GR is an interplay between the curvature of spacetime and the distribution of matter, summarised in the Einstein Field Equations (EFE)

G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu},   (2.1)

where G_μν is the Einstein tensor, g_μν is the metric tensor, Λ is the cosmological constant, G is the gravitational constant and T_μν is the energy-momentum tensor.

The application of GR to cosmology involves some estimation of these components. To achieve this we make use of the cosmological principle, which states that on large scales (> 100 Mpc) the distribution of matter appears homogeneous and isotropic [33]. If we adopt this notion to be an approximate description of the universe as a whole, i.e., a background solution, this translates into an approximate spacetime metric

ds^2 = -c^2 dt^2 + a(t)^2 d\Omega^2,   (2.2)

called the Friedmann-Lemaître-Robertson-Walker (FLRW) metric. Here a(t) is the scale factor describing the homogeneous and isotropic expansion of space, while

d\Omega^2 = dr^2 + r^2 \left( d\theta^2 + \sin^2(\theta) \, d\phi^2 \right)   (2.3)

is the spatial differential for a flat universe obeying the cosmological principle. The FLRW metric is an exact solution to the EFEs, which allows for the derivation of the Friedmann equations

H(t)^2 = \frac{8\pi G \rho_c}{3},   (2.4)

\frac{\ddot{a}(t)}{a(t)} = -\frac{4\pi G}{3} \left( \rho_c + \frac{3p}{c^2} \right),   (2.5)

which are dynamical equations describing the evolution of the background spacetime. Here H(t) ≡ ȧ(t)/a(t) is the Hubble parameter, ρ_c is the critical density and p is the pressure.
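For orientation, the background expansion enters the later line-of-sight calculations mainly through the distance-redshift relation. The following is a minimal sketch, assuming the flat ΛCDM parameters quoted in Chapter 3 (Ω_m = 0.2889, Ω_Λ = 0.7111, H_0 = 67.74 km s⁻¹ Mpc⁻¹); the function names are illustrative and not part of any code used in the thesis.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light [km/s]

def hubble(z, H0=67.74, omega_m=0.2889, omega_l=0.7111):
    """Late-time, flat-LCDM solution of the Friedmann equation (2.4): H(z) in km/s/Mpc."""
    return H0 * np.sqrt(omega_m * (1.0 + z) ** 3 + omega_l)

def comoving_distance(z, **cosmo):
    """Comoving distance to redshift z in Mpc, used to place sources along a line of sight."""
    integrand = lambda zp: C_KM_S / hubble(zp, **cosmo)
    return quad(integrand, 0.0, z)[0]
```

As a rough usage check, comoving_distance(0.15) evaluates to roughly 640 Mpc, consistent with the z = 0.15 shell used in Sec. 3.3.1 fitting inside a 1 Gpc h⁻¹ box.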

2.2 The large-scale structure of the universe

Although the FLRW metric is often used to model the expansion of the universe, a more detailed metric is required to trace the structures in the matter distribution that affect the universe on more local scales. In GR terms, these perturbations are generally weak and can therefore be approximated by linear metric perturbations in terms of their Newtonian analogue, the Newtonian gravitational potential fluctuations, Φ. The linearly perturbed FLRW metric thus becomes

ds^2 = -\left(1 + \frac{2\Phi}{c^2}\right) c^2 dt^2 + a(t)^2 \left(1 - \frac{2\Phi}{c^2}\right) d\Omega^2,   (2.6)

where the weak field limit, Φ ≪ c², applies [34, 35]. In the weak field limit, we can also relate the gravitational potential fluctuations to the distribution of matter through the Newtonian analogue of the EFEs, the Poisson equation

\nabla^2 \Phi(r, t) = 4\pi G \rho_0(t) \, \delta(r, t),   (2.7)

where δ(r, t) are the density fluctuations in proper coordinates and ρ_0(t) is the mean density of matter in the universe. The density field fluctuations are also referred to as the density contrast

\delta(r, t) = \frac{\rho(r, t) - \rho_0(t)}{\rho_0(t)},   (2.8)

where ρ(r, t) is the matter density field.

The Poisson equation expressed in this form can be simplified in several steps. Firstly, we can decouple the effects of the expansion from our coordinate system by expressing the equation in terms of comoving coordinates, x, relating to the proper coordinates by

r = a(t) \, x.   (2.9)

The Poisson equation expressed in comoving coordinates thus becomes

\nabla^2 \Phi(x, t) = 4\pi G \rho_0(t) \, \delta(x, t) \, a(t)^2.   (2.10)

In an expanding universe, the constant matter content implies that the mean density decreases with time. The evolution of the mean density is thus proportional to ρ_0(t) ∝ a(t)⁻³ and can be expressed w.r.t. a reference time t_0 as

\rho_0(t) = \frac{\rho_0(t_0)}{a(t)^3},   (2.11)

where a(t_0) = 1.

2.2.1 Growth of large-scale structure

Observations of the Cosmic Microwave Background (CMB) show that the density field was almost homogeneous in the early universe. The measured anisotropies in the CMB temperature distribution further show that fluctuations in the density field were small (δ ≪ 1) and fit a Gaussian distribution [36]. It is conjectured that these primordial density fluctuations arose from quantum fluctuations in the earliest stages after the Big Bang which "inflated" into the macroscopic regime [37]. Due to the nature of gravitational instability, fluctuations grow. Overdense regions attract matter from underdense ones over time, allowing the formation of the structures that we see in the universe today.

For small perturbations the dynamics of gravitational instability is accurately described by linear theory. As perturbations grow larger and start to deviate from Gaussianity, it is necessary to introduce non-linear theory to describe the formation of the more complex structures that we see today (sheets, filaments, galaxy clusters etc.) [37]. While applying non-linear theory is tedious, there are some good alternative approximations that have proven successful in various ways (e.g., the Zel'dovich approximation [38]).

In the case of observing the growth of structure on larger scales in the late-time universe, perturbations can once again be considered small according to the weak field limit in Eq. (2.6). The linear theory approximation is thus that perturbations grow self-similarly as a function of time,

\delta(x, t) = D(t) \, \delta_0(x, t_0),   (2.12)

where D(t) is the linear growth factor. This is also in accordance with the cosmological principle. The evolution of the linear growth factor is described in [34, 39]. We can insert this expression into the Poisson equation, Eq. (2.10), to have it expressed using the field at a specific time t_0,

\nabla^2 \Phi(x, t) = 4\pi G \rho_0(t) D(t) \, \delta_0(x, t_0) \, a(t)^2.   (2.13)

Now using Eq. (2.11), we get

\nabla^2 \Phi(x, t) = 4\pi G \frac{\rho_0(t_0)}{a(t)^3} D(t) \, \delta_0(x, t_0) \, a(t)^2 = \nabla^2 \Phi_0(x, t_0) \frac{D(t)}{a(t)},   (2.14)

and thus

\Phi(x, t) = \Phi_0(x, t_0) \frac{D(t)}{a(t)}.   (2.15)

The linear growth factor can equally be expressed as a function of the redshift z through the relation

1 + z = \frac{1}{a(t)},   (2.16)

relating the cosmic time to the redshift. This formula assumes that z = 0 and a(t_0) = 1 represent the present universe.
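The linear growth factor referenced above can be evaluated numerically. A minimal sketch for a flat ΛCDM background is given below, using the standard growing-mode integral D(a) ∝ H(a) ∫₀ᵃ da′/[a′H(a′)]³ (see the references cited in this subsection); the normalisation D(z = 0) = 1 and the parameter values are assumptions matching the cosmology quoted in Chapter 3.

```python
import numpy as np
from scipy.integrate import quad

def growth_factor(z, H0=67.74, omega_m=0.2889, omega_l=0.7111):
    """Linear growth factor D(z) for flat LCDM, normalised so that D(0) = 1."""
    def H(a):
        return H0 * np.sqrt(omega_m / a**3 + omega_l)
    def unnormalised(a):
        integral, _ = quad(lambda ap: 1.0 / (ap * H(ap)) ** 3, 1e-8, a)
        return H(a) * integral
    a = 1.0 / (1.0 + z)
    return unnormalised(a) / unnormalised(1.0)
```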

Furthermore, the Poisson equation for Φ_0 can more conveniently be expressed in terms of the cosmological parameters Ω_m(t_0) and H_0(t_0) by utilising the Friedmann equation, Eq. (2.4). Thus we obtain the Poisson equation

\nabla^2 \Phi_0(x, t_0) = \frac{3}{2} \Omega_m H_0^2 \, \delta(x, t_0),   (2.17)

where

\Omega_m = \frac{\rho_m}{\rho_c}   (2.18)

is the fraction of matter energy density in the universe and ρ_c is the critical density. For the sake of computation, it is often easier to use the Poisson equation in Fourier space,

\Phi_0(k) = -\frac{3}{2} \Omega_m H_0^2 \frac{\delta(k)}{k^2}.   (2.19)
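In practice, Eq. (2.19) is applied mode by mode on a periodic grid. The sketch below is a minimal FFT-based implementation; the grid conventions, unit choices and function name are assumptions for illustration rather than the thesis' actual pipeline.

```python
import numpy as np

def gravitational_potential(delta, box_size, omega_m=0.2889, H0=67.74):
    """Solve Eq. (2.19) on a periodic grid: Phi_0(k) = -(3/2) Omega_m H0^2 delta(k) / k^2.

    delta    : 3D density contrast delta_0(x) at z = 0 (dimensionless)
    box_size : comoving side length of the box [Mpc]
    H0       : Hubble constant [km/s/Mpc], so Phi_0 is returned in (km/s)^2
    """
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta)
    # comoving wave numbers [1/Mpc] for the full axes and the real-FFT (last) axis
    k_full = np.fft.fftfreq(n, d=box_size / n) * 2.0 * np.pi
    k_half = np.fft.rfftfreq(n, d=box_size / n) * 2.0 * np.pi
    KX, KY, KZ = np.meshgrid(k_full, k_full, k_half, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    k2[0, 0, 0] = 1.0                     # avoid division by zero for the k = 0 mode
    phi_k = -1.5 * omega_m * H0**2 * delta_k / k2
    phi_k[0, 0, 0] = 0.0                  # the mean of the fluctuation field vanishes
    return np.fft.irfftn(phi_k, s=delta.shape)
```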

2.3 Correlation functions and the Power spectrum

The most common way to describe the statistics of large-scale structure is in the language of correlation functions. The full statistics can be completely described by the infinite set of n-point correlation functions [33]

\xi_2(r) = \langle \delta(x) \, \delta(x + r) \rangle,   (2.20)
...
\xi_n(x_1, x_2, \ldots, x_n) = \langle \delta(x_1) \, \delta(x_2) \cdots \delta(x_n) \rangle.   (2.21)

On large scales the two-point function alone is a very powerful approximation [37]. The two-point function describes the excess probability of finding a pair of galaxies a distance r apart, in comparison to a random distribution,

dP = \rho_0^2 \left( 1 + \xi_2 \right) dV_1 \, dV_2.   (2.22)

We define the matter power spectrum

P(k) \equiv \left\langle |\delta(k)|^2 \right\rangle,   (2.23)

as the Fourier transform of the two-point correlation function, determined by

\xi_2(r) = \int \frac{d^3 k}{(2\pi)^3} P(k) \, e^{i k \cdot r}.   (2.24)
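Conversely, given a gridded density contrast one can estimate P(k) from Eq. (2.23) by averaging |δ(k)|² in spherical shells of k. The following is a rough, hedged estimator; the binning and normalisation conventions are assumptions, and a production analysis would treat shot noise and aliasing more carefully.

```python
import numpy as np

def measure_power_spectrum(delta, box_size, n_bins=32):
    """Shell-averaged estimate of the matter power spectrum P(k) from a periodic grid."""
    n = delta.shape[0]
    cell = box_size / n
    delta_k = np.fft.rfftn(delta) * cell**3              # approximate continuum Fourier transform
    k_full = np.fft.fftfreq(n, d=cell) * 2.0 * np.pi
    k_half = np.fft.rfftfreq(n, d=cell) * 2.0 * np.pi
    KX, KY, KZ = np.meshgrid(k_full, k_full, k_half, indexing="ij")
    k = np.sqrt(KX**2 + KY**2 + KZ**2).ravel()

    power = (np.abs(delta_k) ** 2).ravel() / box_size**3  # P(k) estimator, units of volume
    bins = np.linspace(k[k > 0].min(), k.max(), n_bins + 1)
    counts, _ = np.histogram(k, bins=bins)
    sums, _ = np.histogram(k, bins=bins, weights=power)
    pk = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
    return 0.5 * (bins[1:] + bins[:-1]), pk
```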

2.4 The Equivalence Principle

The Equivalence Principle (EP) is one of the most fundamental principles in physics and is responsible for allowing Einstein to formulate his theory of General Relativity (GR). It is usually divided into two sub-statements called the Strong and the Weak EP (SEP & WEP). The SEP is a more general statement of the principle that applies to all laws of physics, while the WEP is a specific statement regarding the equality of inertial and gravitational mass. Thus, proving that the WEP is violated immediately implies SEP violation, but not necessarily the other way around.

As stated, the WEP is the statement that inertial mass equals gravitational mass. The implication of this is that in a gravitational field test bodies are imparted with equal acceleration regardless of their masses. This notion applies to all non-gravitational fields in the spacetime and is sometimes referred to as universal coupling [40]. While this formulation holds equally well in Newtonian theory, Einstein used its implications to change the viewpoint of how gravity is perceived into the concept of gravity as a geometric theory of curved spacetime. This concept works well in part because, according to the WEP, any test body can be described as being contained within mathematically identical spacetimes, i.e., one shared spacetime.

Crucially, a theory of gravity where the WEP does not hold cannot be given a general geometric description, since the geometry would directly depend on the test body in question.

The exact path of a test body in the curved spacetime is determined by the geodesic equation which is the equation of motion in the language of GR. A freely falling body in a gravitational field is said to follow its geodesic, i.e., the path of least action in its spacetime geometry [41]. Thus, according to the WEP, any test bodies propagating in free fall between the same spacetime points follow the same geodesic.

2.5 Post-Newtonian formalism

In testing the WEP we want to explore possibilities for deviations from the existing theory. In the weak field regime, where we have approximated large-scale structures using the Newtonian gravitational potential, we are essentially expressing the theory using the language of Newtonian gravity. This is referred to as the post-Newtonian formalism. Instead of formulating a new theory including a violation of the WEP, we can explore this using the post-Newtonian formalism. In particular, we adopt the parameterised post-Newtonian (PPN) formalism, where a metric theory is defined by a set of parameters relating to the physics [40, 42].

In the case of the WEP we focus on a particular parameter, γ. The γ-parameter measures the amount of space curvature produced per unit rest mass w.r.t. GR (thus γ = 1 in GR). By relaxing the constraints on γ in GR we can explore violations of the WEP through the curvature of space.


Chapter 3

Cosmological Shapiro Delay

The Weak Equivalence Principle (WEP) can be tested in a multitude of ways. The most common astrophysical approaches have historically involved either the deflection of light or time delay measurements [40]. So far, constraints on WEP violation have been obtained reliably from tests on non-cosmological scales, e.g., by time delay measurements of photons and neutrinos from Supernova 1987A (SN 1987A) [10, 11]. Statements regarding the WEP in our galactic vicinity, however, cannot conclusively be considered to hold universally. There is still uncertainty over whether a practically immeasurable deviation within these scales might magnify in a cosmological setting to the extent that basing a universal theory of gravity on the Equivalence Principle becomes invalid. The goal of this thesis is to investigate and explore the possibility of testing the WEP using cosmological probes to shed light on the validity of the WEP, a pillar of metric theories of gravity.

Our research revolves around a class of recently proposed tests focusing on multi-messenger time delay data from distant transient sources to probe the WEP ([12–32], among others). The basic idea is that a violation of the WEP implies an energy dependence of particle geodesics, which in principle could manifest through a difference in arrival times of distinguishable propagating particles. This theoretical time delay is modelled using the Shapiro time delay effect, a prediction of general relativity first proposed by Irwin Shapiro in 1964 and subsequently confirmed by radar timing experiments [43].

The method is interesting in a cosmological setting, as observed sources at large distances provide a large baseline over which the WEP can be tested, with the potential of greatly improving on previous constraints.

The difficulty in extending WEP tests utilising the Shapiro delay effect from galactic scales to cosmology is that determining the Shapiro delay requires knowledge about the gravitational potential along the line-of-sight (LoS) towards the relevant source. In principle this requires a detailed map of the entire large-scale structure in the universe. Since the gravitational potential field on these scales is unknown, many authors have in various ways assumed alternative models for the gravitational potential in order to obtain reasonable LoS Shapiro delays. However, we will show later that improper modelling of cosmic gravitational potentials can lead to erroneous conclusions drawn from observations.

In this thesis we investigate the use of the Shapiro time delay effect as a cosmological probe of the WEP in reference to several authors on the subject. We provide a discussion of the cosmological Shapiro delay effect using statistics of simulations, implement an improved estimate of the gravitational potential based on Sloan Digital Sky Survey (SDSS) galaxy data [44] and the BORG (Bayesian Origin Reconstruction from Galaxies) algorithm [45], and apply this to test the WEP in a more robust manner.

The chapter is structured as follows: In the first section we introduce the Shapiro time delay effect and motivate a model for its application in a cosmological setting. In the second section we discuss how we can use the cosmological Shapiro delay to test the WEP via the PPN formalism and compare this to the methods of other authors. In the third section we discuss the statistical behaviour of the cosmological Shapiro delay and its estimates using simulations of large-scale structure.

In this thesis we use a cosmology of the 2018 Planck Collaboration with Ω_m = 0.2889, Ω_b = 0.048597, Ω_Λ = 0.7111 and H_0 = 67.74 km s⁻¹ Mpc⁻¹ (h = 0.6774) [46].

3.1 The Shapiro time delay effect

The Shapiro time delay effect is inherent to the theory of general relativity. Along with the three tests proposed by Einstein (the perihelion precession of Mercury's orbit, the gravitational redshift of light, and the deflection of light by the Sun [1]), it is considered to be one of the classical tests of the theory [43].

The effect applies to particles propagating through a gravitational potential field which, relative to an outside observer, yields a time delay proportional to the strength of the experienced gravitational potential. In the example of a photon propagating across the solar system, equidistant photon paths (according to the observer) have varying propagation times depending on the path's proximity to the Sun. The effect at play here is twofold:

• Firstly, there is a contribution from the gravitational time dilation effect. In the presence of a potential, clocks in local observer frames along a photon path in a strong potential appear to run slower according to an outside observer. Einstein showed that this is a direct consequence of the equivalence principle [47].

• Secondly, there is also a contribution from the curvature of spacetime. The enhanced spacetime curvature around a strong potential effectively lengthens the path [47].

In GR both of these effects, coincidentally, contribute equally to the total Shapiro time delay.

There is also a contribution to the time delay from the physical deflection of the light path which can be considered, but this contribution has been shown to be negligible in comparison [47].

3.1.1 Shapiro delay in cosmology

The intimate connection between the Shapiro delay and the gravitational potential implies that in a cosmological context, the Shapiro delay depends on the large-scale distribution of matter. To describe the propagation of a particle in such a universe we use the spacetime of the linearly perturbed FLRW metric defined in Eq. (2.6),

ds^2 = -\left(1 + \frac{2\Phi}{c^2}\right) c^2 dt^2 + a(t)^2 \left(1 - \frac{2\Phi}{c^2}\right) d\Omega^2,   (3.1)

which accounts for the inhomogeneous matter distribution through the gravitational potential fluctuations Φ. For massless particles at a fixed LoS the line element is given by

ds^2 = -\left(1 + \frac{2\Phi}{c^2}\right) c^2 dt^2 + a(t)^2 \left(1 - \frac{2\Phi}{c^2}\right) dr^2 = 0.   (3.2)

From here we integrate along a specific LoS to get the propagation time of a photon travelling from a source r_e to an observer r_0. Assuming the weak field limit (Φ ≪ c²) we obtain by algebraic transformation

dt = \frac{a}{c} \sqrt{\frac{1 - \frac{2\Phi}{c^2}}{1 + \frac{2\Phi}{c^2}}} \, dr \approx \frac{a}{c} \left(1 - \frac{2\Phi}{c^2}\right) dr = dt_0 + dt_{gra}.

The propagation time is split into a time due to the background and a contribution from the inhomogeneities, i.e., a gravitational time delay. By integrating along the LoS we obtain the expression for the gravitational time delay (hereafter Shapiro delay)

\Delta t_{gra} = \frac{2}{c^3} \int \Phi(r, z) \, a(z) \, dr = \frac{2}{c^3} \int \Phi_0(r) \frac{D(z)}{a(z)} \, a(z) \, dr   (3.3)
             = \frac{2}{c^3} \int_{r_e}^{r_0} \Phi_0(r) D(z) \, dr,   (3.4)

where the comoving radius r and the redshift z interchangeably refer to distance measures in this cosmology. Here we applied Eq. (2.15) to get the dependence on the more convenient form of gravitational potential fluctuations at z = 0 using the linear growth factor D(z) (see Sec. 2.2.1).

Note that since Φ_0 can fluctuate around zero, the Shapiro delay can assume both positive and negative values. This result is interesting, particularly since it differs from the perspective of the Shapiro delay in the solar system context. There, the reference is a remote point from the potential where the total delay is zero. In cosmology, however, such a reference is nonsensical. The natural reference of zero delay is instead the cosmic mean, and as such the Shapiro delay is considered as a comoving effect.

The result of this difference in perspective is that in the context of a Keplerian potential, like the Sun, the Shapiro delay is an accumulative effect, while in a cosmological setting it fluctuates w.r.t. the comoving spacetime. A further discussion on this topic is provided by Minazzoli et al. [48], who also point out that a Keplerian potential in general is not limited to generating accumulative Shapiro delays. This is because the Shapiro delay is a gauge dependent quantity, which requires motivating a coordinate choice that matches the observer and the observations. The gauge choice in this thesis, referred to as the Newtonian gauge [35], is motivated by the assumption that Eq. (2.6) provides a good approximation of cosmological gravitational structures.

Throughout this thesis we continuously revisit the consequences that this mischaracterisation of the cosmological Shapiro delay has in reference to several authors' work [12–32].

3.2 Testing the WEP with cosmological Shapiro delay

3.2.1 Quantifying WEP violation using Δγ

In the scenario where photons propagate through the large-scale universe from a distant transient source, they experience Shapiro delay. It is interesting to investigate what effects relaxing the conditions of the WEP have on the Shapiro delay and how this can lead to ways of providing constraints on the violation of the WEP.

We know that a consequence of the WEP is that the geodesic of a particle propagating through the universe is independent of the particle's internal composition. This implies that, for instance, two massless particles with different energies propagate identically through the universe (barring external effects). If we instead allow the geodesic to be energy dependent, this would no longer necessarily be the case.

One way we can make a geodesic change with energy is to invoke this on the spacetime curvature. Without needing to create a new theory with this attribute, we can look at deviations from GR using the PPN formalism discussed in Sec. 2.5. With the definitions of the PPN formalism, the curvature of space for a metric theory is described by the parameter γ. For the sake of simplicity we allow γ(E) to be some function of the energy capable of breaking the WEP.

Naturally, the equation for the cosmological Shapiro delay, Eq. (3.4), does not discriminate between the energies of propagating photons and as such cannot accommodate any breaking of the WEP. By introducing γ(E) we can modify the effective gravitational potential a particle of energy E experiences along its propagation. This is what allows the change in the geodesic of a particle along the LoS, and thus γ(E) serves as a gateway to quantifying the breaking of the WEP.

Assuming that the breaking is decoupled from the contribution of the background (cf. [13]), we can introduce the γ-parameter as an effect on only the fluctuations of the potential by simply multiplying it with one of the potential terms in Eq. (3.2),

ds^2 = -\left(1 + \frac{2\gamma(E)\Phi}{c^2}\right) c^2 dt^2 + a(t)^2 \left(1 - \frac{2\Phi}{c^2}\right) dr^2 = 0.   (3.5)

We proceed as before,

dt = \frac{a}{c} \sqrt{\frac{1 - \frac{2\Phi}{c^2}}{1 + \frac{2\gamma\Phi}{c^2}}} \, dr \approx \frac{a}{c} \left(1 - \frac{(1 + \gamma)\Phi}{c^2}\right) dr = dt_0 + dt_{gra},

and end up with the modified expression for the cosmological Shapiro delay,

\Delta t_{gra}(\gamma) = \frac{1 + \gamma(E)}{c^3} \int_{r_e}^{r_0} \Phi_0(r) D(z) \, dr.   (3.6)

Assuming that we have knowledge of Φ_0(r) along the LoS of integration, we can determine the full travelling time for a particle with energy E in an inhomogeneous cosmological potential. If we have two particles with energies E_1 and E_2 travelling on the same LoS, the difference in propagation time is determined from the difference in gravitational time delay,

\Delta t^{1,2}_{gra} = \frac{\gamma(E_1) - \gamma(E_2)}{c^3} \int \Phi_0(r) D(z) \, dr = \Delta\gamma_{1,2} \, I_{LoS},   (3.7)

which, in theory, is non-zero if in fact the WEP is broken, i.e., Δγ_{1,2} ≠ 0. Thus we can use Δγ to measure the degree of WEP violation.

3.2.2 Outline and approach

The LoS gravitational potential fluctuations are related to the density field fluctuations by the Poisson equation (Eq. (2.19)), which in Fourier space yields

\Phi_0(k) = -\frac{3}{2} \Omega_m H_0^2 \frac{\delta(k)}{|k|^2},   (3.8)

where δ(k) are the Fourier transformed density field fluctuations. Thus, if we have an extensive large-scale density fluctuation field, we can calculate the Shapiro delay for any LoS fully within the field. We can then calculate the expected time delay of arrival for photons simultaneously emitted from a source, depending on the value of γ(E). If Δγ is large enough in this context, the delay should in theory be observable.

The kind of test we seek to perform relies on the observable delay known as the spectral lag. The spectral lag is a time delay calculated from the cross-correlation between two light curves in different energy bands, usually via the observation of photons from astrophysical events producing either Gamma-Ray Bursts (GRBs) or Fast Radio Bursts (FRBs) [49]. The general idea of this class of WEP tests is thus to compare the Shapiro delay with the spectral lag to gather information on the value of Δγ. This is further detailed in Sec. 4.2 when we set up the model for the test.
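For concreteness, the spectral lag is typically estimated from the cross-correlation function (CCF) of two light curves. The toy snippet below illustrates the idea only; real GRB analyses fit the CCF peak and propagate uncertainties, and lag sign conventions vary between authors (here a positive value means the first light curve peaks later than the second).

```python
import numpy as np

def spectral_lag(lc_a, lc_b, dt):
    """Toy CCF lag estimate between two light curves sampled with bin width dt [s].

    Returns the lag maximising the discrete cross-correlation; positive output
    means the features of lc_a occur later than those of lc_b.
    """
    a = lc_a - lc_a.mean()
    b = lc_b - lc_b.mean()
    ccf = np.correlate(a, b, mode="full")
    lags = (np.arange(ccf.size) - (b.size - 1)) * dt
    return lags[np.argmax(ccf)]
```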

The backbone of this approach is the assumption that we can somehow obtain the density field fluctuations for the LoS in question. This requires knowledge of the large-scale structure that is unattainable with current sky surveys. Instead, many authors have looked for ways to obtain estimates of the Shapiro delay by assuming simplified gravitational potential models ([12–32] include various approaches).

The approach here is usually to establish some mass and location defining a Keplerian potential representing some nearby dominant gravitational source, such as the Milky Way or the Laniakea supercluster. Similarly to the solar system context, these models adhere to a gravitational system that strictly produces positive Shapiro delays. The implication drawn from this model assumption is that the description of the most nearby part of the LoS can be used as a strict lower limit to the full LoS Shapiro delay. This conversely implies that Δγ can be bounded from above (through inverse proportionality). The majority of work on this subject has thus been focused on putting constraints on Δγ in terms of upper limits for various sources. We claim that this approach to WEP constraints is fundamentally flawed when considering particles propagating through a non-static infinite spacetime. As we have argued, in the context of cosmology the Shapiro delay along a LoS takes on a different role: it is not an accumulative quantity, rather it is fluctuating. This invalidates arguments of upper limit constraints to any reasonable degree. This flaw is also discussed extensively in [48], where it is shown that these models lead to an unphysical divergence of the Shapiro delay when applied to cosmology.

In light of this realisation we are curious to find alternative methods of constraining the WEP using our arguably more appropriate Shapiro delay description. We take inspiration from a possibly more sophisticated test via the work of Yu et al. [12]. The idea is to employ a collection of sources, in this case GRB sources, and look for a correlation between the spectral lags and the LoS Shapiro delays. The difference, however, is that they use a Keplerian potential model of the Laniakea supercluster, while we provide the first test, that we are aware of, utilising a gravitational potential model based on real large-scale structure (see Sec. 4.1).

Regardless of the model, the range of our field is limited when it comes to covering most sources, which still leaves us with the same problem of not knowing the full LoS Shapiro delay. Thus, before we go through with using any model that does not calculate the full LoS Shapiro delay, we need to investigate how we can utilise limited LoS information and still perform a useful WEP test. Understanding the statistical behaviour of the effect in a cosmological setting is an appropriate starting point and is addressed in the following section.


3.3 Behaviour of cosmological Shapiro delay

In the previous section we established that to guarantee a reliable WEP test we require knowledge of the full LoS Shapiro delay between a source and the observer. Since obtaining LoS Shapiro delays for sources at cosmological distances is a challenging task, we have to rely on information that only covers the parts of the LoS nearest the observer. To determine the impact that partial knowledge of the full gravitational potential along a LoS has on testing the WEP, we look further into modelling the unobserved parts of the LoS. This is achieved by investigating the statistical properties and associated uncertainties of such incomplete potential data with simulations. The most relevant properties, such as the characteristics of large-scale structure, can be investigated with simulations, which enable us to study the density field beyond the limits of the real data range and calculate Shapiro delays across vast distances. This will give us insight into the associated uncertainties for an eventual WEP test.

3.3.1 Gaussian Random Fields

To investigate the impact of the cosmic large-scale structure on WEP tests we start by simulating 3D Gaussian Random Fields (GRFs) emulating the cosmic matter distribution. The foundation of a GRF is that the phases of its Fourier modes are uniformly distributed, i.e., uncorrelated and random [37]. The result of having statistically independent modes is that their sum, according to the central limit theorem, approaches a normal distribution for a large number of modes. Thus for any sufficiently large subset of the field volume, the density contrast obeys a Gaussian distribution. Although this differs from the shape of the real universe density contrast, and introduces unphysical features such as negative density regions, the large-scale statistics are still well behaved (see Fig. 3.1). The advantage of GRFs is that they can be generated exactly using just the 2-point correlation function (Eq. (2.20)).

Thus we can generate a Gaussian universe using a power spectrum matching the real universe.

Specifically, we obtain GRFs by drawing a white noise density field from a Gaussian and convolving it with the square root of a 2-point correlation function. The convolution then imprints this correlation function, which modifies the initial white noise density field into a correlated field [37]. The convolution theorem then states that the Fourier transform of the new convolved field is the product of the Fourier transforms of the individual components [50],

\mathcal{F}(f * g) = \mathcal{F}(f) \cdot \mathcal{F}(g).   (3.9)

In this form we can express the Fourier transformed components separately and multiply them together afterwards. The Fourier transformed white noise Gaussian field is simply a collection of Fourier modes with uniformly distributed random phases, while the Fourier transformed 2-point correlation function is by definition the matter power spectrum, P(k) [51]. Using the matter power spectrum, we straightforwardly compute this product on a discrete Fourier space grid and, with an inverse Fourier transformation, obtain a real space Gaussian random density field on a 3D grid.
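A minimal sketch of this recipe is given below: a white-noise field is drawn in real space and coloured in Fourier space by √P(k), which imprints the desired two-point statistics. The discrete normalisation and the function signature are assumptions for illustration.

```python
import numpy as np

def gaussian_random_field(n_grid, box_size, power_spectrum, seed=0):
    """Generate a 3D Gaussian random density contrast with a given power spectrum.

    power_spectrum : callable P(k), with k in the inverse of box_size's length unit
    """
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((n_grid,) * 3)
    white_k = np.fft.rfftn(white)

    k_full = np.fft.fftfreq(n_grid, d=box_size / n_grid) * 2.0 * np.pi
    k_half = np.fft.rfftfreq(n_grid, d=box_size / n_grid) * 2.0 * np.pi
    KX, KY, KZ = np.meshgrid(k_full, k_full, k_half, indexing="ij")
    k = np.sqrt(KX**2 + KY**2 + KZ**2)

    amp = np.zeros_like(k)
    nonzero = k > 0
    # discrete colouring amplitude: sqrt(P(k)) times the grid-to-volume normalisation
    amp[nonzero] = np.sqrt(power_spectrum(k[nonzero]) * n_grid**3 / box_size**3)
    return np.fft.irfftn(white_k * amp, s=white.shape)
```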

Setup

The framework is set up so that we specify a cubic volume with some arbitrary side length and a coordinate system with the origin (the observer) in the center of the box. The volume is then discretised with arbitrary side resolution, 1/N_grid, into a cubic grid of N_grid³ field points representing the entire volume. This real space grid can likewise be realised in Fourier space, our preferred space for field-related calculations, by applying discrete Fourier transformations. GRFs are then generated with the matter power spectrum given in [51] (see also Fig. 3.4) onto this Fourier grid and inverse transformed into a real space density field.

Next, we set up the framework for calculating LoS Shapiro delays. Within the cubic volume containing the density field we specify a spherical volume using a radius w.r.t. the observer at the origin. We will often refer to this radius using the redshift z. From Eq. (3.8) we generate the gravitational potential fluctuation field. We can then integrate along any LoS (with some angular resolution) out to the spherical shell using the result from Eq. (3.7),

\Delta t_{gra}(r_s, \theta, \phi) = \frac{1}{c^3} \int_{r_s}^{0} \Phi_0(r, \theta, \phi) \, D(r) \, dr,   (3.10)

where r_s is the radius of the spherical shell. For convenience, we will refer to this integral as the Shapiro delay even though it produces exactly half of the true Shapiro delay in GR (representing the spacetime curvature contribution, see Eq. (3.4)). Since a LoS generally does not pass through the grid points in which the field is defined, we use tri-linear interpolation to define the field along any given LoS. We can thus generate full maps/distributions of Shapiro delays for a given spherical shell radius and a given Gaussian random density field.
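A hedged sketch of the corresponding LoS integration is shown below, assuming an observer at the box centre, the healpy package for the HEALPix directions used later in this section, and a simple fixed radial step with tri-linear interpolation via scipy. The names, units and the placeholder growth-factor argument are illustrative assumptions.

```python
import numpy as np
import healpy as hp
from scipy.ndimage import map_coordinates

C_KM_S = 299792.458          # speed of light [km/s]
MPC_IN_KM = 3.0857e19        # one megaparsec in km

def shapiro_delay_map(phi0, box_size, r_shell, nside=64, n_steps=256, growth=lambda r: 1.0):
    """Evaluate Eq. (3.10) for every HEALPix pixel: half the GR Shapiro delay, in seconds.

    phi0     : 3D grid of potential fluctuations Phi_0 [(km/s)^2], observer at the box centre
    box_size : comoving side length of the grid [Mpc]
    r_shell  : comoving radius of the source shell [Mpc]
    growth   : linear growth factor D as a function of comoving radius (placeholder)
    """
    n_grid = phi0.shape[0]
    npix = hp.nside2npix(nside)
    unit_vecs = np.array(hp.pix2vec(nside, np.arange(npix)))     # shape (3, npix)
    radii = np.linspace(0.0, r_shell, n_steps)
    dr = radii[1] - radii[0]

    integral = np.zeros(npix)
    for rad in radii:
        # comoving positions (Mpc, observer at centre) -> fractional grid indices
        idx = (unit_vecs * rad / box_size + 0.5) * n_grid
        phi_los = map_coordinates(phi0, idx, order=1, mode="wrap")  # tri-linear interpolation
        integral += phi_los * growth(rad) * dr
    # Eq. (3.10) runs from the shell back to the observer, hence the sign flip here
    return -integral * MPC_IN_KM / C_KM_S**3
```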

Using this framework of generating GRFs to calculate Shapiro delays, we are interested in investigating two things in particular:

1. What is the distribution of Δt_gra across all lines-of-sight at a fixed distance? This will provide us with information on how the matter distribution affects the LoS Shapiro delay value in terms of the order of magnitude and the variance.

2. How does this change when the distance is increased?

Angular statistics

Since the LoS Shapiro delay is calculated using the gravitational potential fluctua- tions, which by definition average out to zero over a substantial volume, we might


[Panels: (a) Gaussian, (b) MDR1.]

Figure 3.1: Distribution of LoS Shapiro delays at z = 0.15 for (a) GRFs and (b) the MDR1 simulation, as indicated by the coloured lines. The solid black line shows the ensemble mean distribution as averaged over 10 realisations respectively. The off-centred shifts of the distributions indicate that the local structures of observers have a significant influence on the Shapiro delay. The comparison is made to show that GRFs and N-body simulations produce similar statistics in terms of the LoS Shapiro delay (see also Fig. 3.4).

expect the Shapiro delay to fulfill the same requirement. We calculate the distribution of Δt_gra for a fixed radius and repeat the process with different GRFs to create an ensemble of distributions.

We set up the grid using a box size of 1 Gpc h⁻¹ with the grid resolution N_grid = 128. The spherical shell of Shapiro delays is set at a radius z(r_s) = 0.15 with an angular resolution defined by the HEALPix [52] mesh N_side = 2048. In Fig. 3.1a we plot the distributions of Δt_gra for 10 unique GRF realisations. The distributions indicate that a GRF realisation can produce a spectrum of LoS Shapiro delays that rather significantly deviates from the expected zero average. While the statistics of the GRFs are by construction identical, the variability lies in the local structures unique to the particular location of the observer in the field. This can be explained by imagining that an observer is located in a small overdense region of space. A majority of LoS integrals will contribute with a positive Shapiro delay in this proximity until far enough out to where the ensemble statistics takes over. This effect will appear as some shift in the Shapiro delay histogram depending on the nature of this particular nearby region of space. It is straightforward to show that this is indeed a local statistical feature by taking an average over all the histograms and noting that the average time delay fits ⟨Δt_gra⟩ = 0 well and is consistent with the ⟨Φ_0⟩ = 0 that we would expect (see Fig. 3.1a).

Radial statistics

To gauge the statistical impact of increasing distance we look at the change in variance of the LoS Shapiro delay as a function of distance. In Fig. 3.3 we have


[Panels: (a) Matter power spectrum, (b) Gravitational power spectrum.]

Figure 3.2: Approximate power spectra of the real universe for (a) the density contrast (see [51]) and (b) the gravitational potential fluctuations. The purple dashed line indicates a reference scale of 1 Gpc h⁻¹ (z ≈ 0.2), i.e., the box size of Fig. 3.4. The increase in power to the left of the purple line in (b) is responsible for the behaviour of the variance in Fig. 3.3.

the LoS Shapiro delay variance of a Gaussian realisation for successively increasing redshifts out to z = 6. We note that the variance appears to increase monotonically with distance, although it flattens at higher redshifts. If we compare this behaviour to what we would expect for the LoS if it were associated with the matter density fluctuations, the cosmological principle tells us that the LoS eventually averages out due to the homogeneity and isotropy of the field. We can show that this discrepancy in LoS behaviour between the density fluctuations and the associated gravitational potential fluctuations is consistent with the theory.

Since the LoS Shapiro delay is described by the gravitational potential fluctuations, its statistics at different scales are naturally determined by the gravitational power spectrum. This power spectrum is defined by the Fourier transform of the 2-point correlation function of the gravitational potential fluctuations,

\xi_\Phi(r) = \langle \Phi(x) \, \Phi(x + r) \rangle,   (3.11)

similarly to Eq. (2.20). By the definition of the power spectrum from Eq. (2.23) we have

P_\Phi(k) \equiv \left\langle |\Phi(k)|^2 \right\rangle.   (3.12)

Now by using the Fourier space Poisson equation, Eq. (3.8), where

\Phi_k \propto \frac{\delta_k}{k^2},   (3.13)

we can insert this into Eq. (3.12) and obtain the power spectrum relation

P_\Phi(k) \propto \frac{P(k)}{k^4}.   (3.14)


Figure 3.3: Variance of the distribution of full-sky LoS Shapiro delays for a GRF at z = [0.5, 0.8, 1, 2, 3, 4, 5, 6]. The increase in the variance shows that the gravitational potential fluctuations of the large-scale structure are essential to calculating cosmological Shapiro delays. It also indicates that LoS Shapiro delays depend significantly on structures beyond just the local ones and are thus unpredictable without full LoS field information.

We plot both the matter and gravitational power spectra in Fig. 3.2. Because of the factor k⁻⁴, the power of the gravitational spectrum is significantly magnified for small values of k, i.e., large distances (see Fig. 3.2b). As such, there exist long-range correlations that are much more prevalent for the gravitational potential field and prevent the LoS from averaging out as in the case of the matter density field. There is an element of suppression to the growth of the LoS that is naturally explained by the growth history of structure, here described by the linear growth factor D(r), which essentially vanishes at the primordial level according to Eq. (3.10) (see also the discussion in Sec. 2.2.1).

These results are also in agreement with the work of Nusser [13], which takes the approach of analysing angular power spectra. Nusser shows that the contribution to the Shapiro delay from the gravitational potential fluctuations at these scales is on the order of 80-100 times as large as the contribution from a Keplerian potential model of the Milky Way. The effect of large-scale fluctuations is therefore not to be dismissed in this context.

Although our results for the cosmological Shapiro delay statistics appear to be reasonable, we check the consistency of our results using a dark matter N-body simulation that to a higher degree resembles the statistical structure of the real universe.

3.3.2 Dark matter simulation

To show that GRFs provide a good proxy for the real universe, we use a density field from a cosmological N-body simulation, MDR1, by the MultiDark project [53, 54]. The simulation includes 2048³ particles in a 1 Gpc h⁻¹ box and has a density contrast and a matter power spectrum close to those of the real universe (see Fig. 3.4). To generate several realisations of Shapiro delay distributions from this density field, we position the observer at different locations in the box. In practice this is achieved by choosing random points in the box and shifting the entire field so that each point is moved to the observer position at the center, while applying periodic boundary conditions at each edge. Although these field realisations are essentially the same, a similar pattern to that of the GRFs emerges even in this scenario, as illustrated in Fig. 3.1b.
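The re-centering of the observer in the periodic MDR1 box amounts to a cyclic shift of the grid; a minimal sketch, with an assumed indexing convention, follows.

```python
import numpy as np

def recenter_observer(field, new_origin):
    """Cyclically shift a periodic density cube so that grid index `new_origin`
    becomes the box centre, i.e., the observer position, as described in Sec. 3.3.2."""
    n = np.array(field.shape)
    shift = n // 2 - np.array(new_origin)
    return np.roll(field, shift=tuple(shift), axis=(0, 1, 2))
```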

In conclusion, we have shown that the Shapiro delay arising from propagation on cosmological scales is significantly affected by large-scale structure morphology, characterised by the gravitational potential fluctuations. Our main goal was to study whether structures near the observer could be used to either approximate or accurately infer the LoS Shapiro delay at greater distances. The growth of the variance with distance for the distribution of Shapiro delays suggests that lines-of-sight statistically do not saturate until very high redshifts. This means that at any point beyond the nearby regions of our data, the LoS Shapiro delay can be significantly affected by subsequent structures. As a consequence, describing a LoS Shapiro delay using partial LoS knowledge is unlikely to provide an accurate representation.


Figure 3.4: Comparison of matter power spectra for MDR1 and Gaussian density fields as indicated by the figure legend. Particularly at small scales (large k) there exists a stronger correlation in the MDR1 field, which is due to highly clustered regions absent from GRFs. The purple dashed line indicates the box size reference scale of 1 Gpc h⁻¹ (z ≈ 0.2).

Although a particular LoS might not be well represented by partial LoS information, there may exist a correlation, captured by the nearby regions of the LoS, which emerges when looking at an ensemble. Thus it is worth investigating whether a WEP test utilising multiple sources has the ability to expose such a correlation. For this we will employ our framework of generating GRFs to construct a toy model that mimics the WEP test we seek to perform using mock data (see Sec. 4.2).


Chapter 4

Testing the Weak Equivalence Principle

4.1 SDSS/BORG data

Constructing estimates of the Shapiro delay requires knowledge of the 3D large-scale matter density field. Modern methods can infer these density fields from sky surveys, mapping the spatial distribution of millions of galaxies out to great cosmological distances (z < 0.8). In this thesis we use data from the Baryon Oscillation Spectroscopic Survey (BOSS) [55], which is part of the third generation of the Sloan Digital Sky Survey (SDSS-III) [44].

Specifically, we use density field reconstructions from the BORG (Bayesian Origin Reconstruction from Galaxies) algorithm [45] applied to SDSS-III DR12 data [56].

The BORG algorithm is a powerful Bayesian inference framework that applies non-linear structure growth dynamics in the form of Lagrangian Perturbation Theory (LPT) to construct density fields from primordial initial conditions (z ≈ 1000). The algorithm is able to consider galaxy data to jointly infer initial and final conditions of density fields through a Hamiltonian Monte Carlo (HMC) method. It also provides a systematic-free approach to correct for various contaminations, selection effects, biases etc. in the galaxy data [57]. The density field reconstruction is stochastically generated through sampling algorithms. Thus the BORG algorithm provides a way to complete galaxy data into physically meaningful samples of the real density field. Lavaux et al. [57] showed that these reconstructions agree independently with observations by studying the cross-correlations with lensing convergence maps of the CMB from the Planck mission [58] (see Fig. 4.1).

Our data consists of 74 of these density field reconstruction samples at a redshift z ≈ 0.8, with a sky coverage in the regions of the Southern and Northern Galactic caps, roughly covering a quarter of the full sky area. In Fig. 4.2 we plot the distribution of Shapiro delays across the 74 samples for one randomly chosen LoS


Figure 4.1: Figure 9 from the work of Lavaux et al. [57], showing the correlation between lensing convergence maps of Planck CMB data [58] and the SDSS/BORG data. κ is the value of the LoS lensing convergence. This plot indicates that the SDSS/BORG data is a representation of real large-scale structure. The solid black line highlights a perfect correlation.


Figure 4.2: Distribution of Shapiro delays for a randomly chosen LoS over the 74 SDSS/BORG samples. This is a representative plot of the general noise level of the data. We see that the noise level of LoS Shapiro delays from these density field reconstructions is on the order of the value itself.

to illustrate that even though the data correlates with real observations of the CMB, there is significant noise involved.

In Fig. 4.3 we plot the time delay histogram of the data over the average of the samples to get a general idea of what the Shapiro delay data looks like at a redshift z = 0.8, in reference to Fig. 3.1. The shift towards positive Shapiro delays is, according to the previous discussion, a clear indicator that local structures are, on average, overdense. Thus the data correctly indicates our proximity to cluster/supercluster structures (e.g., the Great Attractor of the Laniakea supercluster [59]). We also note that a significant number of lines-of-sight exhibit negative, albeit small, Shapiro delays, which further shows why assumptions of accumulative Shapiro delays can lead to misinterpretation in a cosmological setting.


Figure 4.3: Distribution of LoS Shapiro delays for the SDSS/BORG data at z = 0.8, in reference to Fig. 4.4. Following the discussion surrounding Fig. 3.1, our data exhibits a significant shift favouring positive delays. This means that our local structures are, on average, overdense, indicating our position in cluster/supercluster regions.


Figure 4.4: Mollweide map of LoS Shapiro delays in galactic coordinates, calculated from the SDSS/BORG data at z = 0.8 and masked with the north and south galactic caps of the SDSS sky coverage. The grid lines indicate a 30° separation.

4.2 Analysis – Data & Method

In this section we want to use our framework for calculating LoS Shapiro delays of large-scale gravitational potential fluctuations to implement a cosmological test of the WEP.

As previously described, we implement a test based on the work of Yu et al. [12]. The idea is that if the WEP is broken, two photons with different energies emitted from the same event contribute to the spectral lag an amount proportional to the LoS Shapiro delay (see Eqs. (3.7) & (3.10)). Depending on the severity of this violation, it could be possible to observe this feature by looking at the correlation between the spectral lag and the LoS Shapiro delay over multiple sources, which is determined by the constant of proportionality Δγ(E). The goal of the test is thus to perform a statistical analysis and estimate Δγ given the available GRB spectral lags and SDSS/BORG density fields.

Based on the discussion in the previous sections, there is a concern as to whether the Shapiro delay data is accurate enough to provide a robust WEP test. As such, we provide tests using mock data whose purpose is to mimic the real data using the GRF realisations detailed in Sec. 3.3. The results of the mock tests will thus complement the real test results in terms of determining their robustness.


The analysis itself is carried out with a Markov chain Monte Carlo (MCMC) algorithm called emcee [60]. The algorithm takes a likelihood function with a set of free parameters and produces their marginalised posterior distributions, including the most important parameter in our case, γ. The results will largely depend on the likelihood function and its parameters, which are determined by the implemented physical model.

As such, we will put some effort into motivating a model that is both physically realistic and flexible towards unknown time delay physics.

4.2.1 Model

The basic problem that we wish to solve is simply the linear relation

$$\Delta t_{ij} = \gamma_{ij}\, \Delta t_{\mathrm{gra}}, \qquad (4.1)$$

relating the LoS Shapiro delay, Δt_gra, to the spectral lag, Δt_ij. However, implementing this as the complete physical model naively attributes all of the observed spectral lag to a breaking of the WEP and does not leave room for the possibility of other effects (systematic and stochastic) contributing to the spectral lag.

As a result, this model is inflexible and could lead to a misinterpretation of results.

The problem with formulating an accurate physical model is that the cause of spectral lag, in general, is largely unknown [61]. The best we can do to create a fair and robust model is to motivate one that allows for the necessary flexibility.

Spectral lag data

The spectral lag data we use in this thesis is a sample of the BATSE detections of GRBs catalogued by Hakkila et al. [62], containing information about the source redshift and up to six spectral lag values taken from the pairwise combinations of four energy bins (including measurement errors). The bins, or channels, are sensitive to energies in the ranges Ch1: 25-60 keV, Ch2: 60-110 keV, Ch3: 110-325 keV and Ch4: >325 keV. We make use of a set of sources compiled by [12] appropriate for spectral lag analysis. Due to the SDSS sky coverage of our density field data, only 143 of the total 668 GRBs are usable for our analysis. In Fig. 4.5 we show the redshift distribution of these 143 sources and in Fig. 4.6 their sky distribution in galactic coordinates.
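For concreteness, the six lags per burst correspond to the pairwise channel combinations; a minimal sketch (variable names are ours, channel boundaries as quoted above):

\begin{verbatim}
from itertools import combinations

# BATSE energy channels in keV; Ch4 is open-ended above 325 keV.
channels = {1: (25, 60), 2: (60, 110), 3: (110, 325), 4: (325, None)}

# The six (i, j) pairs for which a spectral lag Delta t_ij can be measured.
channel_pairs = list(combinations(channels, 2))
print(channel_pairs)  # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
\end{verbatim}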

The problem with the GRB spectral lag is that the physical mechanism(s) causing it is (are) unknown. The search for the nature of spectral lag is very much an ongoing research topic where most of the focus lies on investigating possible effects of the source (see [61, 63-65]). It has for example been shown that the effects of spectral lag can be recreated from simple source models utilising rapid bulk acceleration of relativistic jet shells [65]. Thus, we assume that a significant amount of the spectral lag originates intrinsically from the source region.



Figure 4.5: Redshift distribution of 143 GRBs from the BATSE catalog [62].

To account for this, we introduce an intrinsic time delay parameter, µ_ij, to the model, which for simplicity is assumed to be equal across all sources,

$$\Delta t_{ij} = \gamma_{ij}\, \Delta t_{\mathrm{gra}} + \mu_{ij}. \qquad (4.2)$$

The intrinsic time delay is also assumed to be a function of the energy.

Before we attempt to derive a likelihood function from this relation, we need to establish the errors of this model. The errors given for each data point in the BATSE GRB data are simply independent measurement errors and do not account for any error resulting from the model itself. We are thus inclined to introduce an additional error to the model, accounting for its ignorance of the "true" physics of the data. We refer to this third model parameter as the extra variance, σ_extra.

At this point we have a very basic model that captures some of the essential physics of the problem. Most importantly, the model remains flexible enough to detect a correlation between the Shapiro delay and the spectral lag.

The logarithmic Gaussian likelihood function for this type of model utilising extra variance is given by [66] and specifically in our case assumes the form

$$\ln \mathcal{L}_{ij} = -\frac{1}{2} \sum_{l} \left[ \ln\!\left(\sigma_{l,ij}^{2} + \sigma_{\mathrm{extra}}^{2}\right) + \frac{\left(\Delta t_{ij}^{(l)} - \left(\gamma_{ij}\, \Delta t_{\mathrm{gra}}^{(l)} + \mu_{ij}\right)\right)^{2}}{\sigma_{l,ij}^{2} + \sigma_{\mathrm{extra}}^{2}} \right], \qquad (4.3)$$



Figure 4.6: Angular position of the 143 GRB sources from the BATSE catalog [62] in galactic coordinates, masked with the SDSS/BORG sky coverage (cf. Fig. 4.4).

where l is the source index and ij labels the energy channel pair. In this notation, Δt_gra^(l) are the LoS Shapiro delays for just one sample of the SDSS/BORG data; we discuss how to incorporate the full information of the 74 maps into the analysis in the results section.
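As a concrete illustration, Eq. (4.3) translates almost directly into code; the following Python sketch is ours (function and argument names are illustrative), with the lags, measurement errors and Shapiro delays passed in as NumPy arrays over the sources l, for one channel pair and one SDSS/BORG sample.

\begin{verbatim}
import numpy as np

def ln_likelihood(gamma, mu, sigma_extra, lag, lag_err, shapiro):
    """Gaussian log-likelihood of Eq. (4.3) for one channel pair (i, j).

    lag      : measured spectral lags Delta t_ij^(l)              [s]
    lag_err  : measurement errors sigma_{l,ij}                    [s]
    shapiro  : LoS Shapiro delays Delta t_gra^(l) for one sample  [s]
    """
    var = lag_err**2 + sigma_extra**2        # total variance per source
    resid = lag - (gamma * shapiro + mu)     # residuals w.r.t. the linear model
    return -0.5 * np.sum(np.log(var) + resid**2 / var)
\end{verbatim}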

Before we get to the results of the test, we want to make sure that the model works as intended. We can test the model by using mock data generated from a predetermined γ. We can thus manufacture a correlation between the mock Shapiro delay and the mock spectral lag and let the analysis framework we have built suggest a solution for comparison. For a robust model we would then expect the true value to be found within the uncertainties of the analysis results.
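To make this step concrete, a minimal sketch of how the three free parameters (γ, µ, σ_extra) might be sampled with emcee is given below, reusing the ln_likelihood sketch above; the prior, walker initialisation and the data arrays (lag, lag_err, shapiro) are schematic assumptions rather than the exact setup used in this work.

\begin{verbatim}
import numpy as np
import emcee

def ln_posterior(theta, lag, lag_err, shapiro):
    gamma, mu, sigma_extra = theta
    if sigma_extra <= 0.0:                  # flat prior with sigma_extra > 0
        return -np.inf
    return ln_likelihood(gamma, mu, sigma_extra, lag, lag_err, shapiro)

ndim, nwalkers = 3, 32
# Schematic initial walker positions (gamma, mu [s], sigma_extra [s]).
p0 = np.column_stack([
    1e-14 * np.random.randn(nwalkers),
    1e-2  * np.random.randn(nwalkers),
    np.abs(1e-2 * np.random.randn(nwalkers)) + 1e-3,
])

sampler = emcee.EnsembleSampler(nwalkers, ndim, ln_posterior,
                                args=(lag, lag_err, shapiro))
sampler.run_mcmc(p0, 5000)
flat = sampler.get_chain(discard=1000, flat=True)   # marginalised posteriors
\end{verbatim}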

4.2.2 Mock data

The idea behind using mock data is primarily to ensure the robustness of our analysis framework. The mock data is a manufactured data set based on predetermined values for the free parameters of the system. Thus, if we run the mock data through the analysis framework, we can directly compare the results to the known true values. For the sake of eventually applying our knowledge of the mock tests to the real data, we want the mock data to mimic its most important features.

For the first test we assume that the Shapiro delay is completely known, without uncertainty. This effectively isolates the outcome of the test onto the features of


the spectral lag and the statistical model we have assumed. The main features we need to mimic are the number of sources (≈ 150), the variance of the spectral lag data, and the individual measurement errors. This recreates the uncertainties of the real data (due to the spectral lag) in the results. The actual values of the mocked Shapiro delays and the mocked spectral lags are irrelevant for these purposes, other than that the Shapiro delays should be generated in accordance with proper large-scale statistics. For this we employ our established framework of GRFs to get realistic mock Shapiro delays.

We use a randomly generated Gaussian realisation out to redshift z = 0.8 (identical to the one used in Fig. 3.3), randomly generate the angular positions of 150 sources and extract their respective Shapiro delays. Then we choose designated values of γ_0 and µ_0 which serve as the "true" parameters of the system. The mock spectral lag data is then generated by applying the linear relation Eq. (4.2) to each source, together with a random Gaussian error matching the variance exhibited by the real data, thus creating the same spread for the mock data. Lastly, we randomly assign a measurement error to each data point based on the distribution of the real spectral lag measurement errors.
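A minimal sketch of this mock-generation procedure (our own; the chosen numbers, scatter and placeholder distributions are purely illustrative, and in the actual test the mock Shapiro delays come from a GRF realisation rather than the placeholder draw used here):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
n_src = 150

gamma_0, mu_0 = 1e-14, 0.05     # "true" parameters of the mock system [-, s]

# Placeholder for the LoS Shapiro delays extracted at 150 random sky
# positions from a GRF realisation out to z = 0.8.
shapiro_mock = rng.normal(1e12, 5e11, n_src)            # [s]

# Spectral lags from the linear model, Eq. (4.2), plus Gaussian scatter
# matched to the spread of the real lag data (value illustrative).
lag_scatter = 0.1                                        # [s]
lag_mock = gamma_0 * shapiro_mock + mu_0 + rng.normal(0.0, lag_scatter, n_src)

# Per-source measurement errors mimicking the real error distribution.
lag_err_mock = np.abs(rng.normal(0.02, 0.01, n_src))     # [s]
\end{verbatim}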

A very crude order-of-magnitude estimate of a reasonable γ_0, given the real data, can be extracted from the data sets' respective means, yielding

$$\gamma_{0,12} \approx \frac{\langle \Delta t_{12} \rangle}{\langle \Delta t_{\mathrm{gra}} \rangle} \approx \frac{10^{-2}}{10^{12}} = 10^{-14}. \qquad (4.4)$$

Thus somewhere around this order of magnitude is where we want to look for a possible violation. In Fig. 4.7 we show the results of the analysis predicting the value of γ_0 for three different cases: γ_0 = 0, γ_0 = 10^-14 and γ_0 = 10^-13. We note that the analysis manages to recover the correct value, within 1σ, in all three cases. Most notably, however, the uncertainties of the distributions limit our ability to make a conclusive statement regarding WEP violation if γ is too small. This is evident when comparing Fig. 4.7b and Fig. 4.7c. Without additional information, it is difficult to determine, from just the distributions, that one of these mock data sets indicates a WEP violation and the other does not. It is only on scales of the order of γ = 10^-13 or above that we could possibly be confident in detecting such a violation, if there were to be one, according to Fig. 4.7a.

The preceding argumentation assumes that we know the LoS Shapiro delays of sources at z = 0.8 exactly. Of course, if we account for the fact that the Shapiro delay data is noisy (see Fig. 4.2) and the fact that we only have partial LoS knowledge for most sources, we would expect it to be even harder to draw any significant conclusions from such a test. We illustrate this issue by making another mock test where LoS information is compromised through the equation

$$\Delta t_{\mathrm{mock}} = \alpha\, \Delta t_{\mathrm{mock,true}} + (1 - \alpha)\, \Delta t_{\mathrm{rand}}. \qquad (4.5)$$
Here α ∈ [0, 1] is a parametrisation of the LoS Shapiro delay knowledge, where α = 1 corresponds to true knowledge (recreating the previous test) while α = 0 represents no LoS knowledge, i.e., a completely random Shapiro delay.
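A one-line sketch of how the LoS information is degraded according to Eq. (4.5); shapiro_true would be the mock delays above and shapiro_rand an independent draw with the same statistics (names illustrative):

\begin{verbatim}
def degrade_los_knowledge(shapiro_true, shapiro_rand, alpha):
    """Mix true and random LoS Shapiro delays as in Eq. (4.5);
    alpha = 1 keeps full LoS knowledge, alpha = 0 keeps none of it."""
    return alpha * shapiro_true + (1.0 - alpha) * shapiro_rand
\end{verbatim}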
