
Supernova Cosmology in an Inhomogeneous Universe

Rahul Gupta

Master’s Thesis

45 ECTS Credits


© Rahul Gupta, Stockholm 2010

Supernova Cosmology in an Inhomogeneous Universe Master’s Thesis (45 ECTS Credits)

Department of Physics, Stockholm University
URN: urn:nbn:se:su:diva-42162

Published on DiVA - Digitala Vetenskapliga Arkivet

http://su.diva-portal.org/smash/record.jsf?searchId=1&pid=diva2:346329

Kindly send your comments and suggestions to:

rahul@fysik.su.se


Abstract

The propagation of light beams originating from synthetic ‘Type Ia’ supernovae, through an inhomogeneous universe with simplified dynamics, is simulated using a Monte-Carlo Ray-Tracing method.

The accumulated statistical (redshift-magnitude) distribution for these synthetic supernovae observations, which is illustrated in the form of a Hubble diagram, produces a luminosity profile similar to the form predicted for a Dark-Energy dominated universe. Further, the amount of mimicked Dark-Energy is found to increase along with the variance in the matter distribution in the universe, converging at a value of ΩX ≈ 0.7.

It can thus be postulated that, at least under the assumption of simplified dynamics, it is possible to replicate the observed supernovae data in a universe with an inhomogeneous matter distribution. This also implies that, in an inhomogeneous universe, it is demonstrably not possible to make a direct correspondence between the observed luminosity and redshift and, respectively, the distance of a cosmological source and the expansion rate of the universe at a particular epoch. Such a correspondence feigns an apparent variation in the dynamics, which creates the illusion of Dark-Energy.


Acknowledgments

I wish to thank my advisor Edvard Mörtsell for affording me an opportunity to work on a fascinating topic and for his guidance during the course of this project. I am grateful to many colleagues at Stockholm University for their assistance during this period. I would like to thank Ariel Goobar for advice with the SNOC package, Patrick Scott for his advice on Fortran and numerical algorithms, Mikael Kardell for help and advice on many issues, in particular with Lyx and R, Maria Hermanns for assistance with Mathematica, Christian Walck for advice and discussions on the Random Number Generator Library, Jonathan Pipping for assistance with OpenMP, Amanullah Rahman for setting up the SNALYS package and associated scripts, Jakob Jonsson & Chris Lidman for discussions, Sten Hellman for help with DiVA publishing, and Iouri Belokopytov, Torbjörn Moa & Sergio Gelato, who, in a break from IT-support tradition, have been rather efficient in resolving issues. I also wish to thank Mansi Birla for her advice on the interpretation of statistics and Mats Greenhow for the ray-tracing graphic. Lastly, I wish to express my gratitude to Jana Hilding and Magnus Kullberg for extending financial support during this period, which has allowed me to focus on the project.


Contents

Acknowledgements
1 Introduction
2 Background
  2.1 General Theory of Relativity
  2.2 Cosmological Models
  2.3 Effect of Inhomogeneities
3 The Project
  3.1 Previous Work
  3.2 Cosmological Model
  3.3 Scheme
  3.4 Method
  3.5 Implementation
4 Results
  4.1 Setup
  4.2 Sample Beam Evolution
  4.3 Lensing
  4.4 Hubble Statistics
  4.5 Discussion and Conclusions
Appendices
A Test Run Results
  A.1 Binary Hubble Universe
  A.2 Binary Density Universe
  A.3 Varying Density Universe
  A.4 Varying Hubble Universe
B Supplementary Notes
  B.1 Lensing Method
Bibliography


1. Introduction

The study of the luminosity-redshift relationship of Type Ia supernovae, which began towards the end of the last century (Supernova Cosmology Project [18, 31], High-Z Supernovae Search [34]), has provided observational evidence that the Einstein-de Sitter cosmological model (which assumes that the universe is spatially flat, matter-dominated, homogeneous and isotropic) does not describe the recent expansion history of the universe. Specifically, supernovae at high redshifts (z ≳ 0.5) are observed to have a luminosity that is lower than the theoretical prediction of the Einstein-de Sitter cosmological model.

Assuming that the nature of supernova observations is well understood, cosmological models need to be modified based on one or more of the following three possibilities:

1. The matter-energy content and/or distribution of the universe is not understood.

2. The geometry of the universe is not understood.

3. General Theory of Relativity is an incomplete description of large scale dynamics of the universe.

The most favoured explanation offered for the dimming of supernovae, amongst other observations, is that the expansion of the universe has accelerated in the recent past, which is attributed to a modification of the matter content in the universe or, equivalently, a modification of gravity through a cosmological constant term. This leads to the ΛCDM (Λ: Cosmological Constant, CDM: Cold Dark Matter) model, which extends the Einstein-de Sitter model by introducing two new matter components into the universe, Dark-Energy and Dark-Matter. The “cosmological principle”, that is, the assumptions of homogeneity and isotropy of the universe, is still preserved. While this model agrees well with observations, with some notable exceptions [30], it remains a phenomenological fit rather than a well-founded theory [1]. Fundamental problems that arise in the ΛCDM framework are:

• The nature of as much as 95% of the matter-energy content of the universe, which is assumed to be “dark”, that is, undetectable by its emitted electromagnetic radiation, is not understood.

- DARK-MATTER (∼ 23%) provides the bulk of the binding energy for large scale structures in the universe. There are several independent lines of evidence [7] for the existence of Dark-Matter.

- DARK-ENERGY (∼ 72%) is a fluid with negative pressure, responsible for the acceleration of the universe leading to the dimming of supernovae¹. The nature of Dark-Energy is poorly understood, with no direct evidence suggesting its existence. Possible suggestions attribute Dark-Energy to the vacuum energy density of a scalar field, the zero-point energy of quantum mechanical vacuum fluctuations, or a constant term in the Lagrangian density of the theory [10]. However, a naive quantum mechanical calculation yields a value that is 10^60-10^120 times larger than the observed value, a theoretical catastrophe.

• The Coincidence Problem: Such a model does not explain why the acceleration started in the recent past, around a redshift of unity, or at half the age of the universe. It seems a coincidence that we live in an epoch where the vacuum energy density has recently surpassed that of ordinary matter. Such a situation requires that the initial values of the vacuum and matter content of the universe were set very carefully.

Given this lack of theoretical understanding about the parameters of the ΛCDM model, its success does not rule out the possibility that quite a different model can also be a good fit to the data.

¹Vacuum Energy may also be looked upon as a cosmological constant, in which case it might be looked upon as a modification to gravity rather than to the matter content of the universe.


While cosmological observations (discussed in section 2.2.5) confirm that the universe is flat on the largest scales, structure formation at late times leads to deviations from homogeneity and isotropy; in such a case, the cosmological principle is fulfilled only statistically at some large scale. It has been suggested in the literature (see Ellis [14], for example) that the effect of structure formation might make it possible to account for observations without the need to invoke an accelerated expansion, and consequently no Dark-Energy. Indeed, the deviation from homogeneity and isotropy affects the dynamics of the universe and, consequently, the observations made in it. A realistic model of the universe that takes these effects into account has the potential to provide a much needed physical explanation for cosmological observations.

Many strategies have been suggested to investigate these effects; these are briefly mentioned in section 2.3.

This project represents a new strategy to study the effects of deviations from homogeneity and isotropy on cosmological observations. A Monte-Carlo Ray-Tracing method is employed to numerically simulate beam propagation in a universe that is only statistically homogeneous and isotropic, in order to derive observations of synthetic ‘Type Ia’ supernovae. The emphasis of the study is to examine the effects of modified dynamics, in particular the inhomogeneous expansion of different regions in the universe, on the evolution of a light beam traversing through it. The influence of static effects such as gravitational lensing is also included; such effects have been studied extensively (Kantowski et al. [23]; Frieman [16]; Wambsganss et al. [39]; Holz & Wald [22]; Valageas et al. [37]; Barber et al. [4]; Premadi et al. [32, 36]).

The static and dynamic effects together result in a statistical distribution of synthetic supernovae, the form of which can be used to ascertain the effect of inhomogeneities. Of particular interest is to ascertain whether the pattern of the observed distribution for ‘Type Ia’ supernovae can be replicated.


2. Background

This section is a brief review of cosmology. It begins with a summary of General Relativity, the framework within which the universe is understood to operate. This is followed by a discussion of cosmological models. Finally, the role of inhomogeneities is examined.

2.1 General Theory of Relativity

General Theory of Relativity is a relativistic theory of gravitation. It unifies Special Relativity and Newton’s law of universal gravitation into a single framework in which gravity is attributed to the underlying structure of space-time. It was published by Einstein in 1915 and has since formed the basis of our understanding of all phenomena on cosmological scales.

2.1.1 Principles

General Relativity represents a remarkable achievement in science. It is a theory developed by incorporating a number of theoretical and philosophical principles, based on considerations of elegance and symmetry, with little or no experimental motivation. The underlying principles of General Relativity are briefly discussed.

Principle of Relativity

The principle of relativity states that the laws of nature must be the same under some coordinate transformation. Special relativity applied this principle to all inertial frames. General Relativity extends the principle of relativity to all reference frames (connected through a metric, wherein infinitesimal space-time separations remain invariant), while naturally incorporating gravitation into the scheme.

Equivalence Principle

The foundation of General Relativity is the Equivalence Principle. The Principle of Equivalence of Gravitation and Inertia is the observation that, due to the equality of gravitational and inertial mass, freely falling observers do not feel the effects of gravitation [40]. This allows the laws of physics, stated in the form of covariant equations, that is, independent of the choice of coordinates, to be extended to all frames of reference, including the ones with gravity, that is, accelerated frames. In particular, it is possible to transform gravity away and attribute it to the resulting geometry of space-time.

Geometric Theory

A geometric theory is one where the results of physical measurements can be directly attributed to the underlying geometry of the manifold (a topological space that is locally Euclidean, or more informally, flat). In general relativity, the manifold of interest is space-time, and gravity is attributed to the geometrical structure of space-time.

Field theory

General Relativity belongs to a class of theories known as field theories. The interaction between the source and a test particle is described in two steps:


1. In the first step, the source creates a field. This is described by a field equation, which determines the field function for a given source distribution.

2. In the second step, the field acts on the test particle (an ideal particle that does not itself affect the field). This interaction is described by the equation of motion, which determines the evolution of the test particle.

2.1.2 Equations of General Relativity

As explained above, the General Relativity description of a phenomenon is governed by two sets of equations, the field equation and the equation of motion, which are briefly discussed and motivated. In order to do this, however, the quantities which enter the equations must first be introduced.

Energy-Momentum

In General Relativity, energy and momentum together are the source of gravity and can be treated as a single 4-momentum vector. The mass-energy and linear momentum in any direction are locally conserved; that is, the total amount of these quantities inside any region can only change by the amount that passes in or out of the region through its boundary. This is expressed as the continuity equation, or as the conservation of the respective 4-currents.

Energy-Momentum Tensor

The mass-energy and linear momentum of matter and radiation are coupled together into a single vector called four-momentum,

$$p^{\mu} = \begin{pmatrix} E/c \\ p^{i} \end{pmatrix} = \begin{pmatrix} c^{-1} \times \text{Energy} \\ \text{Momentum} \end{pmatrix}.$$

The energy and momentum density and their flux are then given by the Energy-Momentum Tensor, or Stress-Energy Tensor,

$$T^{\mu\nu} = \begin{pmatrix} J^{\mu}_{(0)} \\ J^{\mu}_{(i)} \end{pmatrix} = \begin{pmatrix} \text{Energy Density} & c^{-1} \times \text{Energy Current Density} \\ c \times \text{Momentum Density} & \text{Momentum Current Density} \end{pmatrix}. \tag{2.1a}$$

Space-Time Curvature

Space-Time is the ‘field’ in General Relativity. It has the mathematical properties of a smooth connected Lorentzian manifold, which can be geometrically described in terms of the Metric Tensor. The Metric Tensor is a generalization of the concept of the dot product from Euclidean geometry, providing a mechanism for extracting the magnitude of scaled projections (effectively ‘lengths’ and ‘angles’) from pairs of tangent vectors [2].

Unlike Special Relativity, the metric in General Relativity is coordinate dependent, which means that it varies with the space-time coordinates. In other words, the ‘lengths’ of tangent vectors and the ‘angles’ between them can differ at different space-time coordinates, and the metric tensor stores exactly how this variation occurs. However, General Relativity restricts space-time to be a metric space, that is, coordinate spaces connected through the metric tensor, or a space where infinitesimal space-time intervals are invariant; consequently, the ‘lengths’ and ‘angles’ vary smoothly with the coordinates.

Since space-time forms a manifold, it is possible to find a coordinate transformation such that it is locally flat. That is, we can make a coordinate transformation such that the metric corresponds to a flat space and the first derivative of the metric vanishes. A consequence of this result is that the curvature, given by the Riemann Tensor, is a function of the second derivative of the metric.


Riemann Tensor

The Riemann Tensor is defined in terms of the Christoffel Symbols and their derivatives,

$$R^{\mu}{}_{\nu\alpha\beta} = \partial_{\alpha}\Gamma^{\mu}_{\nu\beta} - \partial_{\beta}\Gamma^{\mu}_{\nu\alpha} + \Gamma^{\mu}_{\lambda\alpha}\Gamma^{\lambda}_{\nu\beta} - \Gamma^{\mu}_{\lambda\beta}\Gamma^{\lambda}_{\nu\alpha}. \tag{2.2}$$

The Christoffel Symbols are defined in terms of the metric as

$$\Gamma^{\lambda}_{\mu\nu} = \frac{1}{2}\, g^{\lambda\rho}\left(\partial_{\nu}g_{\mu\rho} + \partial_{\mu}g_{\nu\rho} - \partial_{\rho}g_{\mu\nu}\right). \tag{2.2a}$$

The Einstein equation

The field equation of General Relativity is the Einstein Equation. It is a dynamical equation that relates the curvature¹ of space-time to the energy and momentum of matter and radiation,

$$G_{\mu\nu} = \kappa\, T_{\mu\nu}, \tag{2.3}$$

where $\kappa = -8\pi G_N/c^4$ is the constant of proportionality, with $G_N$ Newton’s constant of gravitation and $c$ the speed of light in vacuum.

Details of the Einstein Tensor

The Einstein Tensor is a function of the curvature and is expressed in terms of the Ricci Tensor $R_{\mu\nu}$ and the Ricci Scalar $R$,

$$G_{\mu\nu} = R_{\mu\nu} - \frac{1}{2}\, R\, g_{\mu\nu}, \tag{2.4}$$

where the Ricci Tensor $R_{\mu\nu}$ and the Ricci Scalar $R$ are contractions of the Riemann tensor $R_{\alpha\mu\beta\nu}$ with the metric $g_{\mu\nu}$,

$$R_{\mu\nu} = g^{\alpha\beta} R_{\alpha\mu\beta\nu} \quad \text{and} \quad R = g^{\mu\nu} R_{\mu\nu}. \tag{2.5}$$

While mathematically complex, the Einstein Equation is a very elegant statement. The Einstein tensor $G_{\mu\nu}$ is the most general function² of the curvature that is covariantly constant (it vanishes upon further differentiation or, very simply, is conserved). It has the same properties as the Energy-Momentum Tensor $T_{\mu\nu}$, which is conserved by definition. Thus, the connection between curvature and matter must be established by equating the two quantities.

The Geodesic Equation

Geodesics are defined to be curves whose tangent vectors remain parallel if they are transported along them. Locally, this makes them the shortest paths between points on the space-time manifold (the equivalent of a straight line in Euclidean space). This means that, in curved space-time, a geodesic is the path followed by a body not under the influence of any external force (gravity is no longer treated as a force). For those still inclined to think in terms of gravity, a geodesic is the “free-fall” path of the body in space-time. The Geodesic Equation (2.6) is the equation of motion in General Relativity. It determines the geodesic, and hence the path of a body, once the curvature of space-time is known.

$$\frac{d^{2}x^{\mu}}{d\lambda^{2}} + \Gamma^{\mu}_{\rho\sigma}\, \frac{dx^{\rho}}{d\lambda}\,\frac{dx^{\sigma}}{d\lambda} = 0. \tag{2.6}$$

¹It is important to note that the Einstein Equation does not determine the curvature completely. The Riemann Tensor (which has 20 independent components) can be decomposed in terms of the Ricci Tensor and the Weyl Tensor (each of which has 10 independent components). The Ricci Tensor, determined by the Einstein Equation, tracks the volume evolution of an infinitesimal region of space-time, which might be looked upon as the curvature at a point due to the mass-energy distribution in it. The Weyl Tensor tracks changes in the shape of an infinitesimal region of space-time due to gravitational influences external to the space-time point.


Here $x^{\mu}$ are the four-coordinates, $\Gamma^{\mu}_{\rho\sigma}$ is the affine or Christoffel connection and $\lambda$ is the affine parameter.
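As a consistency check (a standard textbook result, quoted here for orientation rather than taken from the thesis), in the weak-field, slow-motion limit the only relevant connection component is $\Gamma^{i}_{00} \simeq \partial_{i}\phi/c^{2}$, and the geodesic equation reduces to Newton's second law in a gravitational potential:

$$\Gamma^{i}_{00} \simeq \frac{1}{c^{2}}\,\partial_{i}\phi \quad\Longrightarrow\quad \frac{d^{2}x^{i}}{dt^{2}} \simeq -\,\partial_{i}\phi,$$

which recovers Newtonian gravity with $\phi$ the Newtonian potential.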

2.1.3 Interpretation

The two equations of General Relativity can be explained in a very simple way, following [3] and Baez (private communication):

Assume a small ball of freely falling test particles which are initially at rest with respect to each other. The rate at which the ball begins to shrink (by following the geodesics in curved space-time) is proportional to the sum of the energy enclosed in the infinitesimal ball (the source of space-time curvature, or gravity) and the pressure acting on it along the three coordinate directions.

If the particles are treated as markers for space-time points, then it can be concluded that it is space-time that is actually being warped (given by the Einstein Equation) and the test particles are merely following this curvature (given by the Geodesic Equation).

2.1.4 The Geodesic Deviation Equation

It is useful at this point to introduce the concept of geodesic deviation, which is an essential component of the Ray-Tracing method used to simulate supernovae observations. The geodesic deviation equation measures the change in separation of neighbouring geodesics. Since space-time is curved, two geodesics starting out parallel and infinitesimally separated will deviate. This implies that two initially co-moving particles in “free-fall” will accelerate relative to one another in a manner determined by the curvature of space-time. The Geodesic Deviation Equation (2.7) then determines the rate of relative acceleration of two particles moving forward on neighbouring geodesics.

$$\frac{d^{2}\eta^{\alpha}}{d\lambda^{2}} = -R^{\alpha}{}_{\beta\gamma\delta}\, k^{\beta} k^{\gamma} \eta^{\delta}. \tag{2.7}$$

Here, $\eta^{\alpha}$ is the deviation between two geodesics corresponding to the same affine parameter $\lambda$, $k^{\alpha}$ is the tangent to the two (infinitesimally separated) geodesics under consideration, and $R^{\alpha}{}_{\beta\gamma\delta}$ is the Riemann tensor defined above.

As mentioned earlier, the Riemann Tensor can be decomposed in terms of the Ricci Tensor and the Weyl Tensor. In the geodesic deviation equation, the Ricci tensor component determines the rate of change of convergence and the Weyl tensor component gives the rate of change of shear.
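A common way to make this split explicit (a standard form quoted here only for orientation; the precise formulation used later follows Holz & Wald [22]) is through the optical scalars of a beam of null rays. For an affinely parametrized, twist-free beam with expansion θ and shear σ,

$$\frac{d\theta}{d\lambda} = -\frac{1}{2}\theta^{2} - |\sigma|^{2} - R_{\mu\nu}k^{\mu}k^{\nu},$$

so the Ricci term (local matter) focuses the beam, while the shear is driven by the Weyl curvature generated by the surrounding matter distribution.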


2.2 Cosmological Models

Cosmology has the modest aim of trying to understand the entire universe. Since such a task is extremely difficult at best, the structure and dynamics of the universe are systematically analyzed at a number of levels of abstraction in order to simplify the task at hand. A number of simplifying assumptions and/or approximations are made, which lead to cosmological models. The cosmological model at the highest level of abstraction is now derived.

2.2.1 Basic Assumptions

To arrive at a zeroth approximation for a model of the universe, the following two assumptions need to be made:

• The Cosmological Principle: the matter content of the universe is homogeneous (constant density) and isotropic (same in all directions) for all observers. Modern cosmological observations (Sky Surveys, Cosmic Microwave Background Radiation etc.) have shown this to be true at the largest scales.

It immediately imposes the condition that space-time is conformally flat (that is, angles are preserved), which in turn implies that space has constant curvature. It further allows us to define a notion of cosmological time, since all co-moving observers (that is, observers at rest with respect to the local matter distribution) can synchronize their clocks with the homogeneous matter density [29]. All kinematical quantities must be locally isotropic [15]. Using the notions of conformal flatness, cosmological time and isotropy, it is possible to show that the space-time of the universe is described only by the Friedmann-Lemaître-Robertson-Walker (FLRW) metric. The invariant interval under this metric takes the form

$$ds^{2} = -c^{2}\, dt^{2} + a^{2}(t)\, d\Sigma^{2}, \tag{2.8}$$

where $\Sigma$ ranges over a 3-dimensional space of uniform curvature and $a(t)$ is the scale parameter, which characterizes the proper distance between two co-moving observers as a function of the cosmological time [40].

In rectangular coordinates $(X, Y, Z)$, the spatial metric has the form

$$d\Sigma^{2} = dX^{2} + dY^{2} + dZ^{2}, \tag{2.8a}$$

whereas, in reduced-circumference polar coordinates $(r, \theta, \phi)$, it has the form

$$d\Sigma^{2} = \frac{dr^{2}}{1 - kr^{2}} + r^{2}\left(d\theta^{2} + \sin^{2}\theta\, d\phi^{2}\right), \tag{2.8b}$$

where $k$ is the Gaussian curvature.

• Weyl’s Postulate: matter and radiation in the universe behave like an ideal fluid. The energy-momentum tensor takes the form of equation (2.9) in its rest frame and is specified by just two parameters, the mass-energy density ρ and the pressure p.
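For reference, the standard ideal-fluid energy-momentum tensor, which is the form Weyl's postulate assumes, reads

$$T^{\mu\nu} = \left(\rho + \frac{p}{c^{2}}\right) u^{\mu} u^{\nu} + p\, g^{\mu\nu},$$

which in the fluid rest frame reduces to $T^{\mu\nu} = \mathrm{diag}(\rho c^{2},\, p,\, p,\, p)$.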


2.2.2 The Friedmann Equations

For a co-moving observer in space-time described by the FLRW metric, who is in the rest frame of the ideal fluid, the Einstein Equation reduces to the two Friedmann Equations,

$$\left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G_{N}}{3}\,\rho - \frac{kc^{2}}{a^{2}}, \tag{2.10a}$$

$$\frac{\ddot{a}}{a} = -\frac{4\pi G_{N}}{3}\left(\rho + \frac{3p}{c^{2}}\right). \tag{2.10b}$$

The two Friedmann Equations inform us that, under the above assumptions, the universe is expanding or contracting depending on whether the scale parameter a increases or decreases (that is, whether ȧ is positive or negative), and that the rate is determined by the mass-energy density of the universe and its curvature.

The second Friedmann Equation, also known as the Acceleration Equation, further informs us that the expansion of the universe is decelerated by a greater matter-energy density or if matter exerts a greater pressure. It also rules out the possibility of a static universe in the presence of only ordinary matter, since then ä ≠ 0.

Equation of State

The Friedmann Equations do not include information about the nature of matter and its interactions, which is necessary to relate energy density and pressure. It is not surprising then that there are only two Friedmann Equations with three unknown variables: the scale factor a(t), the energy density ρ(t) and the pressure p(t). To solve for these as functions of cosmic time, a third equation is needed that informs us of the nature of particle interactions.

One such possibility is the Equation of State. While this can in general be very complicated, in cosmology we deal with dilute gases, which allows us to make the approximation that the relationship is linear, that is,

$$p = \alpha\rho, \tag{2.11}$$

where $\alpha$ is the state parameter. In particular, it takes the value 1/3 for radiation, 0 for non-relativistic matter (assumed pressureless) and −1 for Vacuum Energy (negative pressure).

The conservation of the energy-momentum tensor yields a third equation, the Continuity Equation. The continuity equation is not independent of the Friedmann Equations, because the Einstein Equation (from which the Friedmann Equations are derived) implicitly includes energy-momentum conservation. It has the form:

$$\dot{\rho} = -3\,\frac{\dot{a}}{a}\,(\rho + p). \tag{2.10c}$$
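Combining the continuity equation with the linear equation of state gives the familiar scaling of each component with the scale factor (a standard result, stated here only to connect the two equations):

$$\dot{\rho} = -3\,\frac{\dot{a}}{a}\,(1 + \alpha)\,\rho \quad\Longrightarrow\quad \rho \propto a^{-3(1+\alpha)},$$

so radiation (α = 1/3) dilutes as a⁻⁴, non-relativistic matter (α = 0) as a⁻³, and vacuum energy (α = −1) remains constant.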

2.2.3 The Cosmological Constant

Einstein’s interest in finding a static solution for the universe led him to modify his equations by introducing a free parameter called the cosmological constant,

$$G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G_{N}}{c^{4}}\, T_{\mu\nu}, \tag{2.11}$$

where $G_{\mu\nu} + \Lambda g_{\mu\nu}$ is the most general, local, coordinate-invariant, divergenceless, symmetric, two-index tensor that can be constructed from the metric and its first and second derivatives. The Friedmann Equations with the cosmological constant take the form:

$$\left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G_{N}}{3}\,\rho - \frac{kc^{2}}{a^{2}} + \frac{\Lambda}{3}, \tag{2.12a}$$

$$\frac{\ddot{a}}{a} = -\frac{4\pi G_{N}}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda}{3}. \tag{2.12b}$$


These equations admit a static solution (known as ‘Einstein’s Static Universe’) with positive spatial curvature and all parameters ρ(t), p(t) and Λ non-negative. However, Hubble’s discovery of an expanding universe eliminated the need for the cosmological constant, with Einstein regretting it as the "biggest blunder" of his life (a comment attributed to Gamow). However, the theory does not forbid it, and since it has not been ruled out by observation, the cosmological constant has not been abandoned. Recent observations have, in fact, served to revive interest in the cosmological constant.

2.2.4 The Friedmann-Lemaître-Robertson-Walker Model

Once the interaction of the matter components is specified, the Friedmann Equations determine the dynamics of the universe on the largest scales. Any relativistic cosmological model so obtained, that is, by assuming a homogeneous, isotropic universe composed of matter that behaves like an ideal fluid, is sometimes referred to as the Friedmann-Lemaître-Robertson-Walker Model.

A priori, even this zeroth approximation can provide a wealth of information about the universe, such as its global content and structure, and can predict the evolution of the universe, its origins and possible fates under different circumstances. Moreover, it allows one to successfully predict and explain a number of observable phenomena, as was first done by Lemaître, when he predicted that the universe is expanding and thus conjectured the Big-Bang and the Hubble Law [26].

As a zeroth order approximation for the evolution of the universe, the FLRW model is simple to calculate, yet it predicts many of the essential features of the universe. For this reason, the standard models of cosmology are based on the FLRW models. Such a model can subsequently be extended to include structure. We shall come across one such example when explaining the Stochastic Universe Method.

2.2.5 Cosmological Observations

A standard model of cosmology must be able to explain a number of cosmological observations. Some such observations, relevant to the current investigation, are discussed below:

‘Type Ia’ Supernovae

‘Type Ia’ supernovae are stellar explosions that are believed to occur due to the accumulation of matter on the surface of white dwarf stars. If the temperature and density of the star rise sufficiently as the star approaches the Chandrasekhar limit (to within 1%), a thermonuclear reaction is triggered due to carbon detonation. This manifests as a supernova explosion which, at an absolute magnitude of about −19.3, is extremely bright and can outshine entire galaxies [27].

This luminosity, which is generated by the radioactive decay of nickel-56 through cobalt-56 to iron-56, follows a characteristic light curve (the evolution of luminosity as a function of time) after the explosion. Moreover, upon calibration, the peak luminosity of the light curve is believed to be consistent across events with small intrinsic dispersion, since the progenitors all have about the same mass at the time of explosion. This allows them to be used as secondary “standard candles” to measure the distance to their host galaxies.

While the flux measured from a supernova determines the absolute distance its light has traveled, the redshift of its spectral lines acts as a measure of the expansion of the universe. These measures can be compared to determine the rate of expansion of the universe. Observations of over 500 ‘Type Ia’ supernovae have been made with sufficient accuracy for cosmological fitting, over a period spanning more than a decade. The accumulated luminosity-redshift data has been the prime driver that has required the reconsideration of the cosmological model, as discussed in the introduction.
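To illustrate the kind of comparison this data drives (an illustrative sketch only, with assumed fiducial parameters H₀ = 70 km/s/Mpc, Ω_M = 0.3, Ω_Λ = 0.7, and illustrative function names; this is not the fitting machinery used in the thesis), the distance modulus predicted by a flat FLRW model can be computed by numerically integrating the comoving distance:

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458          # speed of light [km/s]
H0 = 70.0                    # assumed Hubble constant [km/s/Mpc]

def hubble_E(z, omega_m, omega_lambda):
    """Dimensionless H(z)/H0 for a flat FLRW model with matter and a constant Lambda."""
    return np.sqrt(omega_m * (1.0 + z) ** 3 + omega_lambda)

def distance_modulus(z, omega_m, omega_lambda):
    """Distance modulus mu = 5 log10(d_L / 10 pc) for a flat universe."""
    comoving, _ = quad(lambda zp: 1.0 / hubble_E(zp, omega_m, omega_lambda), 0.0, z)
    d_lum_mpc = (1.0 + z) * (C_KM_S / H0) * comoving   # luminosity distance [Mpc]
    return 5.0 * np.log10(d_lum_mpc) + 25.0            # +25 converts Mpc to 10 pc

for z in (0.1, 0.5, 1.0):
    mu_eds  = distance_modulus(z, 1.0, 0.0)   # Einstein-de Sitter (flat, matter only)
    mu_lcdm = distance_modulus(z, 0.3, 0.7)   # fiducial flat Lambda-CDM
    print(f"z={z:.1f}  mu_EdS={mu_eds:.2f}  mu_LCDM={mu_lcdm:.2f}  diff={mu_lcdm - mu_eds:.2f}")
```

The positive difference at z ≳ 0.5 is the few-tenths-of-a-magnitude dimming, relative to Einstein-de Sitter, referred to in the introduction.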

Cosmic Microwave Background Radiation

Cosmic Microwave Background Radiation is a faint electromagnetic radiation, the afterglow of the Big-Bang, which peaks in the microwave region of the spectrum at 160.2 GHz (1.9 mm). It arose due to the decoupling of matter and radiation about 400,000 years after the Big-Bang. This phenomenon occurred due to the adiabatic cooling of the universe as a result of the expansion, which in turn led to the recombination of protons and electrons into neutral atoms; further, as the energy of the photons fell, they stopped interacting with neutral matter and have been propagating essentially freely through the universe ever since.

Through the Cosmic Microwave Background it is possible to observe the free electrons that readily scattered the background radiation at the "surface of last scattering". As a result, the properties of the radiation contain information about the physical conditions in the early universe. Further, this radiation is affected by the evolution of the universe, which is described (at the zeroth approximation at least) by the cosmological parameters in the Friedmann Equations. As a result, these cosmological parameters can be constrained using the statistical properties of Cosmic Microwave Background observations.

The Cosmic Microwave Background observations indicate that the early universe was homogeneous and isotropic on the largest scales. Anisotropies are found at the level of 10⁻⁵, corresponding to cosmological perturbations and the resulting sound waves (see BAO below) that arose soon after the Big-Bang.

These anisotropies can be quantified in terms of the power spectrum (a plot of the amount of fluctuation in temperature against the angular/linear size). The sizes of the perturbations are determined by the eigenmodes of the sound waves, which depend on the size of the universe at the “surface of last scattering”. A comparison of the apparent size of the fluctuations to the known actual size, together with a knowledge of the Hubble constant, determines the geometry of the universe. Measurements made by the Wilkinson Microwave Anisotropy Probe (WMAP) satellite indicate that the universe is almost flat [25].

Baryon Acoustic Oscillations

The cosmological perturbations in the relativistic plasma of the early universe produced excitations in the form of sound waves. The recombination to a neutral gas (described above) led to an abrupt decrease in the sound speed and effectively ended the wave propagation. In the time between the formation of the perturbations and the epoch of recombination, modes of different wavelengths completed different numbers of oscillation periods, which translated the characteristic time into a characteristic length scale and produced a harmonic series of maxima and minima in the anisotropy power spectrum. Due to the large number of baryons in the universe, the acoustic oscillations in the plasma were imprinted onto the late-time power spectrum of the baryonic matter.

BAO matter clustering provides a "standard ruler" for the length scale in cosmology. The length of this standard ruler can be measured by looking at the large scale structure of matter using astronomical surveys. The Sloan Digital Sky Survey examined the clustering of galaxies by calculating a two-point correlation function using a sample of 46,748 galaxies that covered 3816 square degrees out to a redshift of z = 0.47 [13]. The correlation function is a function of the comoving galaxy separation distance and describes the probability that one galaxy will be found within a given distance bin of another. The BAO signal shows up as a bump in the correlation function at a comoving separation equal to the sound horizon. The evolution of the universe determines the ratio of this length scale at any given epoch to that of the sound horizon.

Sachs-Wolfe Effect

The gravitational redshifting of the cosmic microwave background radiation is the Sachs-Wolfe effect. It occurs in two varieties: the non-integrated Sachs-Wolfe effect, which is caused by gravitational redshift occurring at the surface of last scattering, and the Integrated Sachs-Wolfe effect, which is caused by gravitational redshift as the photons propagate outward from the surface of last scattering. The Integrated Sachs-Wolfe effect is further divided into an early-time and a late-time effect, which occur during the radiation- and matter-dominated eras respectively. The latter is the subject of focus here.

The Late-Time Integrated Sachs-Wolfe effect causes the photons to gain or lose energy as they encounter gravitational potentials, such as clusters and voids, while traveling from the surface of last scattering to the observer. The photons gain energy when they descend into a potential well, and lose it when they climb out. These changes cancel out unless the potential wells evolve in time. The effect due to first order density perturbations vanishes in a flat universe containing only matter.

The Late-Time Integrated Sachs-Wolfe effect will, in this scenario, manifest as a non-zero cross-correlation between the galaxy density and the temperature of the Cosmic Microwave Background. Such a correlation has indeed been identified, for example, by Granett et al. using the Sloan Digital Sky Survey data [19].

2.2.6 The Standard Model of Cosmology

The Einstein - de Sitter Universe

In 1932, Einstein and de Sitter jointly proposed a flat (k = 0), non-relativistic matter dominated (p = α = 0) FLRW model for the universe with no cosmological constant (Λ = 0), which now carries their names. As the simplest possible model, it is usually employed as a foundation for cosmological investigations and for a long period was considered the best candidate for a standard model of cosmology, especially on philosophical grounds.

The Einstein - de Sitter model, however, is unable to account for all cosmological observations. It can be demonstrated that, in such a model, the universe will undergo decelerated expansion, since gravity is attractive. ‘Type Ia’ supernova observations provided the first concrete piece of evidence against this model, since the luminosity at high redshift (z ≳ 0.5) is dimmer than expected for a universe in which the expansion is decelerating. The length scales indicated by the BAO power spectrum are also larger than expected in the case of a matter dominated universe. Finally, the CMBR power spectrum and the Late-Time Integrated Sachs-Wolfe effect show that the Einstein - de Sitter model cannot account for the observed geometry, that is, a flat universe, with the known matter content (including Dark-Matter).

The ΛCDM Universe

One possible solution to accommodate the above-mentioned cosmological observations is to postulate that the universe is still well described by an FLRW model, but undergoes an accelerated expansion. This is done by introducing an energy with a repulsive force in the form of Dark-Energy. This is also equivalent to modifying gravity by assigning the cosmological constant a small positive value.

Cosmological observations also lead us to postulate Dark-Matter, which is undetectable by its emitted radiation, but whose presence can be inferred from its gravitational effects on visible matter. A thorough discussion may be found in Bertone [7].

Including the Cosmological Constant and Dark-Matter in the Einstein - de Sitter model produces the ΛCDM model. It is widely accepted as the “standard model of cosmology”. It is a model with the simplest assumptions that is physical and consistent with observations. However, as discussed in the introduction, it is plagued by a number of theoretical problems.

2.3 Effect of Inhomogeneities

The assumption of a homogeneous and isotropic universe in cosmological models need not be valid at late times due to the formation of structure. On the scales of supernova observations and sky surveys, this should indeed be the case. A cosmological model represents an abstraction of the universe, mapping its structure and dynamics to the fewest possible degrees of freedom, such that the essential features are retained. Since structure formation is non-linear, it is plausible that an FLRW model may not represent the appropriate abstraction at late epochs and is unable to accurately account for cosmological observations.


Inhomogeneities can arise on local or large scales and need to be treated differently. There are three main physical effects that could be missed by assuming the cosmological principle [8]:

1. The “overall” dynamics of an inhomogeneous universe could be significantly different.

2. The propagation of light would be affected by the inhomogeneities in the universe. Since all conclusions about the dynamics are based on observations of light, it is possible to observe an ‘apparent’ variation, even if the dynamics were approximately the same.

3. The presence of only one observer may bias results, as the observer's position becomes significant.

As reviewed by Celerier [11], the dynamics of a universe with small scale inhomogeneities has been studied using a number of different strategies, which fall into two classes. The first involves adapting Einstein’s equations such that they are valid for quantities averaged over some large scale. The second is to treat inhomogeneities using perturbation theory and subsequently average over the results; the effects on light propagation in this scenario have primarily been studied through exact toy models and simulations. Large scale inhomogeneities are usually treated using exact models.


3. The Project

The goal of this project is to study, by means of numerical simulations, the evolution of light beams propagating through an inhomogeneous universe and how this affects the observation of ‘Type Ia’ supernovae. Of particular interest is to examine whether the trend in supernovae observations which suggests Dark-Energy can be replicated in these simulations.

In principle, the following phenomena affect the light beam as it traverses through the universe:

1. The change in the beam luminosity and frequency as it traverses through different regions of the universe because of their differing mass densities.

2. The dispersion in observed source luminosity due to gravitational lensing effects.

3. The effect of dust and other exotics leading to extinction of source luminosity.

Numerical simulations based on Monte-Carlo Ray-Tracing are employed to examine the effect of these phenomena on observed light beams originating from such cosmological sources. The universe is divided into a number of cells, inside which the matter distribution is idealized. The assumption of an idealized matter distribution lends itself to simplified numerics. A light beam traveling between a source and an observer passes through these cells, which influence the beam luminosity and frequency. Statistics of the magnitude-redshift distribution can be built up by simulating a large number of such synthetic supernovae, which are used to obtain a Hubble diagram. These results can be compared with observations and used to constrain cosmological parameters.

This method was first suggested and employed to study gravitational lensing effects on light originating from cosmological sources (and is referred to as the Stochastic Universe Method) by Holz and Wald [22]. This is discussed in section 3.1.1. The work of Holz and Wald was extended by Bergström et al. [5] to include luminosity extinction effects and also to generalize the matter distribution in the cells over the Stochastic Universe Method. This led to the SuperNova Observation Cosmology (SNOC) simulation package by Goobar et al. [17]. However, both these studies assumed the matter density to be homogeneous across cells and hence a uniform evolution of the universe. In particular, the first of the above-mentioned phenomena becomes irrelevant in a homogeneous universe.

However, if one considers a more realistic matter distribution, which is inhomogeneous across cells, it can readily be observed that the cells will have different evolution profiles, leading to non-trivial luminosity and frequency evolution. The first objective of the project is to study the evolution of light beams originating from synthetic ‘Type Ia’ supernovae that arises under such circumstances. The emphasis is on the first of the above-mentioned three phenomena, which arises due to the inhomogeneous mass distribution in the universe. Further, the effect of lensing is incorporated into the beam evolution. Provisions are made to examine the effect of dust and exotics; however, these are not studied in this project. The second objective is to accumulate statistics for the redshift-magnitude distribution of a large number of such synthetic supernovae in order to constrain cosmology and identify trends mimicking Dark-Energy.

The Monte-Carlo Ray-Tracing method used to simulate light beam evolution through the universe is now examined in detail. The underlying assumptions of the method are discussed in section 3.2. A general description of the Ray-Tracing scheme, in section 3.3, is followed by the details of the method employed in section 3.4. These studies are carried out using the ‘Inhomogeneous Universe Supernova Cosmology Observation Calculator’ (iSNOC), a FORTRAN program which was specifically developed during this project. The detailed implementation of the Monte-Carlo Ray-Tracing method in iSNOC is finally discussed in section 3.5.


3.1 Previous Work

3.1.1 Stochastic Universe Method

The Stochastic Universe Method is a Monte-Carlo Ray-Tracing method for simulating the effects of gravitational lensing in an inhomogeneous universe on light beams originating from distant cosmological sources, and the consequent changes in the observed images vis-à-vis a homogeneous universe. It aims to simulate the statistical distribution of images due to cumulative lensing effects produced by an inhomogeneous matter distribution, as opposed to using models of individual lens systems. As mentioned earlier, the method was first proposed and employed by Holz and Wald [22].

In this method, the universe is considered to be homogeneous on extremely large scales, corresponding to the Hubble radius. The universe is divided into a number of cells at a much smaller scale, of the order of ~1 Mpc. Each cell is assumed to have an average matter density defined by the underlying FLRW model of the universe. However, within each cell matter may be distributed inhomogeneously according to one of several distribution profiles. The authors compare this to a “Swiss Cheese model” of the universe, except that the cheese has been completely eliminated and the matter distribution inside the cells need not be spherically symmetric.

A beam of light from a cosmological source to the observer is now allowed to traverse these cells. The geodesic deviation equation is used to track the relationship between infinitesimally separated null geodesics, that is, the paths taken by a light beam (comprising infinitesimally separated light rays). This is used to extract statistical distributions of the luminosity, shear and rotation of the observed images, under the assumption that light originates from sufficiently small sources that are “standard candles”. This method is similar to the Ray-Shooting method, where the masses are projected onto lens planes and the lens equation is used to follow light beams. The Ray-Shooting method is arguably more complicated and suffers from artifacts that arise due to the matter projections.

The results obtained by the Stochastic Universe Method for an inhomogeneous matter distribution are compared to those for a homogeneous universe, thereby determining the difference in the luminosity magnitude of a source at a particular redshift in the two cases.

The method employed in this project shares many of the assumptions and attributes of the Stochastic Universe Method, as it follows the same Monte-Carlo scheme and also incorporates the lensing of light beams. These are discussed in greater detail in the subsequent sections within the framework of the project.

3.1.2 SuperNova Observation Calculator

The Stochastic Universe Method was generalized and extended by Bergström et al. [5]. The most significant of these generalizations was the extension of the matter content of the model, which is dominated by non-relativistic matter, to include a number of perfect fluids with non-vanishing pressure. Other generalizations included the use of a more realistic inter-cell mass distribution profile, based on the Schechter function (with the cell sizes varying with the mass so as to maintain a constant average density), and the use of a more realistic mass distribution profile within the cells, in this case the Navarro-Frenk-White distribution based on N-body simulations.
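For reference (the profile itself is standard; the particular parameter values used in the simulations are not assumed here), the Navarro-Frenk-White density profile has the form

$$\rho_{\mathrm{NFW}}(r) = \frac{\rho_{s}}{\left(r/r_{s}\right)\left(1 + r/r_{s}\right)^{2}},$$

where $r_{s}$ is a characteristic scale radius and $\rho_{s}$ a characteristic density; the profile falls off as $r^{-1}$ in the inner region and as $r^{-3}$ far outside $r_{s}$.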

This led to the ‘SuperNova Observation Calculator’ (SNOC) package by Goobar et al. [17], which aims to produce realistic synthetic samples of supernova observations. Apart from including the generalizations suggested by Bergström et al. [5], it allows one to examine the effect of dust along the line of sight and exotic effects like hypothetical photon-axion oscillations.

SNOC implements a generalized version of the Stochastic Universe Method. For this reason, SNOC is used as the platform on which the simulation software used in this project is developed. The exact modalities are discussed ahead.


3.2 Cosmological Model

The discussion begins with the cosmological model employed to describe the universe and the reasons for this choice. First, a global cosmological model is used to describe the universe on the largest scales. Next, a local model is used to justify the division of the universe into cells and the independent examination of the beam evolution through these cells.

3.2.1 Global Model

Since the above discussed methods essentially have the same structure, the global model developed in SNOC [5] is used, which in turn is developed along the lines of the one used in the Stochastic Universe Method [22]. The model is now examined in detail.

It is assumed that the universe is comprised of perfect fluids, and that the departures from homogeneity and isotropy are small on the largest distance scales in the universe. Under these circumstances, it may be shown that the variation in the gravitational field on these scales is small and that space-time in the universe can be approximated by the ‘Newtonianly Perturbed FLRW metric’,

$$ds^{2} = -(1 + 2\phi)\, dt^{2} + (1 - 2\phi)\, a^{2}(t)\, d\Sigma^{2}, \tag{3.1}$$

where φ is a first order scalar perturbation, in the form of a Newtonian gravitational potential, on the FLRW metric specified in equation (2.8), which over large distance scales (of the order of the Hubble radius $R_{H} = H^{-1}$) is subject to the condition

$$|\phi| \ll 1, \tag{3.2}$$

and the requirement that the spatial average of φ vanishes (a spatially constant part of φ may be absorbed into t and a).

Next, it is assumed that the inhomogeneities are essentially static. Hence, the time derivatives are much smaller than the spatial derivatives,

$$\left(\frac{\partial\phi}{\partial t}\right)^{2} \ll a^{-2}\,(d\Sigma^{2})^{ab}\, D_{a}\phi\, D_{b}\phi. \tag{3.3}$$

The right hand side of the expression is simply $(\nabla\phi)^{2}$, with $D_{a}$ the covariant derivative (derivatives unaffected by coordinate transformations), where the inverse of the spatial metric enables the dot product $D\phi \cdot D\phi$. Thus, the expression reads

$$\left|\frac{\partial\phi}{\partial t}\right| \ll |\nabla\phi|,$$

which simply states that the variation of the gravitational field in space is more significant than that over time. In other words, the inhomogeneities are essentially static and persist.

Moreover, the second order derivatives are assumed to dominate over the first order ones,

$$\left[(d\Sigma^{2})^{ab}\, D_{a}\phi\, D_{b}\phi\right]^{2} \ll (d\Sigma^{2})^{ab}\,(d\Sigma^{2})^{cd}\, D_{a}D_{b}\phi\, D_{c}D_{d}\phi, \tag{3.4}$$

or equivalently,

$$(\nabla\phi)^{2} \ll \left|\nabla^{2}\phi\right|.$$


Now, substituting the Newtonianly Perturbed FLRW metric (3.1) and the ideal fluid energy-momentum tensor (2.9) into the Einstein Equation (2.3) and making the approximations (3.2)-(3.4), a more general form of the Friedmann Equations is obtained,

$$3\left(\frac{\dot{a}}{a}\right)^{2} = 8\pi G_{N} \sum_{i}\rho_{i} - \frac{3kc^{2}}{a^{2}} - 2a^{-2}\,(d\Sigma^{2})^{ab}\, D_{a}D_{b}\phi, \tag{3.5}$$

$$3\,\frac{\ddot{a}}{a} = -4\pi G_{N} \sum_{i}\left(\rho_{i} + \frac{3p_{i}}{c^{2}}\right) + a^{-2}\,(d\Sigma^{2})^{ab}\, D_{a}D_{b}\phi, \tag{3.6}$$

where the summation is over the i-th fluid element. As in the FLRW case, the Hubble parameter is completely determined by the energy densities and the curvature, whereas the deceleration depends on the pressure exerted by the fluid elements. The Friedmann Equations for the FLRW metric are recovered upon the substitution of the densities and pressures with their spatial averages.

Comparing the Friedmann Equations for the Newtonianly Perturbed FLRW metric and the FLRW metric, it can be shown that the perturbation satisfies the Poisson Equation,

$$a^{-2}\,(d\Sigma^{2})^{ab}\, D_{a}D_{b}\phi = 4\pi\, \delta\rho, \quad\text{where}\quad \delta\rho \equiv (\rho - \bar{\rho}). \tag{3.7}$$

Recall that the Poisson Equation is of the form

$$\nabla^{2}\Phi = 4\pi\rho.$$

It is important to note at this juncture that it is possible to have large variations in δρ locally without violating the assumptions made; this is, in fact, essential to explain the structures that exist in the universe.

The Poisson Equation implies that the gravitational potential φ, and hence the metric, is uniquely determined by the matter distribution. Since (3.7) is a non-local equation, the local curvature, and thus the gravitational lensing effects on the light beam, can in principle depend on the matter distribution in arbitrarily distant parts of the universe. However, it can be demonstrated, as below, that under the above stated cosmological assumptions only the matter distribution within the Hubble radius $R_{H}$ is relevant for the determination of the curvature.

First, it is assumed that in the underlying FLRW model the distance scale set by the spatial curvature is at least as large as the Hubble radius. Consequently, the space-time curvature of the FLRW model is of the order of $1/R_{H}^{2}$. Values much smaller than this can be neglected as they are inconsequential.

It was already demonstrated in section 2.1.2 that the curvature depends on the second derivative of the metric. With the Newtonianly Perturbed FLRW metric it is obvious that the additional contribution to the curvature, over the underlying FLRW metric, is due to the perturbation in the form of the Newtonian potential φ. It can be shown that, under the approximations (3.2)-(3.4), the additional contribution to the curvature is given in terms of the second spatial derivatives $D_{a}D_{b}\phi$.

Inhomogeneous differential equations, like the Poisson Equation, can be solved using the Green's Function Method. This is now employed to demonstrate that the contribution to the curvature given by $D_{a}D_{b}\phi$ depends only on the matter distribution in a region of size $R_{H}$ around that point. Consider a sphere S of radius $R_{H}$ and volume V around the point at which φ is to be calculated. For the Poisson Equation (3.7), the Green's Function is defined by

$$a^{-2}\, h^{ab}\, D_{a}D_{b}\, G(x, x') = -4\pi\, \delta(x, x'). \tag{3.8}$$

Green's Identity with Dirichlet boundary conditions now gives

$$\phi(x) = -\int_{V} G(x, x')\, \delta\rho(x')\, dV' \;-\; \frac{1}{4\pi}\oint_{S} \phi(x')\, \hat{r}'^{a} D'_{a} G_{D}(x, x')\, dS'. \tag{3.9}$$


Taking the second derivative,

$$D_{a}D_{b}\phi(x) = -\int_{V} D_{a}D_{b}G(x, x')\, \delta\rho(x')\, dV' \;-\; \frac{1}{4\pi}\oint_{S} \phi(x')\, \hat{r}'^{c} D'_{c}\, D_{a}D_{b} G_{D}(x, x')\, dS'. \tag{3.10}$$

The surface term is of the order of $|\phi|/R_{H}^{2}$ and can be neglected, since φ is a perturbation as given in equation (3.2) and the space-time curvature of the underlying FLRW model is of the order of $1/R_{H}^{2}$, as asserted above. Therefore, in this model of the universe, it follows that the curvature depends only on the scale $R_{H}$ and the matter distribution within this scale.
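For orientation (a standard result, assuming the familiar flat-space case with a = 1 and the surface term neglected as argued above), the Green's function with the normalization of (3.8) is just the Newtonian kernel, so the solution (3.9) reduces to the usual potential integral:

$$G(x, x') = \frac{1}{|x - x'|}, \qquad \phi(x) \simeq -\int_{V} \frac{\delta\rho(x')}{|x - x'|}\, dV'.$$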

3.2.2 Local Model

For the local model, it is further assumed that there is a scale $\mathcal{R} \ll R_{H}$ such that no strong correlations of matter occur on these scales (at least over the time intervals it takes for light to travel across them). In these circumstances, the curvature and the expansion rate may be approximately determined by the matter distribution in $\mathcal{R}$. This claim is now justified in both contexts. Unlike the Stochastic Universe Model, the scale is not considered to be co-moving; this is essential in order to study together all the phenomena affecting the evolution of the light beam.

It has already been explained that the curvature of space-time is given by the Riemann Tensor, which has two components, the Ricci Tensor and the Weyl Tensor. The Ricci Tensor at a point, say x, governs the local curvature through the expansion rate, according to the Einstein Equation; it is determined only by the local matter distribution. To calculate the Weyl Tensor at x, the volume on the scale $R_{H}$ is divided into spheres of radius $\mathcal{R}$. Excluding the sphere of radius $\mathcal{R}$ at the center, that is, around the point x, and assuming a flat geometry, each region makes a contribution of the order $m/D^{3}$ to the Weyl Tensor at x, where m is the mass enclosed in the region and D is its distance from x. Further, it is assumed that there is no correlation between the regions. A random walk estimate gives the contribution to the Weyl Tensor, as determined from the trace-free part of equation (3.10), from all these spheres to be of the order of $m/\mathcal{R}^{3} \sim \bar{\rho}$. These estimates do not change significantly in the case of a curved FLRW metric or a cosmological constant, since the curvature will only vary significantly in these cases over distance scales of $R_{H}$ ($\gg \mathcal{R}$ by assumption). A randomly fluctuating Weyl Tensor of this magnitude will add some noise to the curvature, with negligible effect on the beam distortion and area over long distances.

Thus, it is possible to estimate the curvature, without significant errors, by considering the matter distribution only within a region of size $\mathcal{R}$. The expansion rate is completely governed by the matter distribution in $\mathcal{R}$.

Jargon

Variation in the matter distribution occurs at the global or the local scale, that is, across different cells or within each cell, respectively. For the sake of clarity, the matter distribution is described as:

Across cells: Homogeneous or Inhomogeneous
Inside each cell: Uniform or Non-Uniform


3.3 Scheme

The Ray-Tracing scheme that is employed in this project is now described in its most general form. The basic scheme is examined first, followed by the application of the Monte-Carlo method to the Ray-Tracing scheme.

3.3.1 Ray-Tracing Scheme

For the Ray-Tracing scheme, consider an event O, wherein an observer receives a beam of photons from a cosmological source at some redshift z, in the direction $k^{i}$ in the sky. A light beam received by the observer from a “sufficiently small” source can be treated as comprising infinitesimally separated light rays. These light rays have traversed null geodesics, infinitesimally separated from $k^{i}$, along the past light cone of O.

The light beam is traced back from the event O, while deconstructing the effects of the various phenomena that influence the beam. As the light beam is traced back, the constituent rays deviate as they follow infinitesimally separated geodesics. This deviation determines the relationship between the observed image and the actual structure of the source. The beam frequency also changes due to the local evolution of the universe, which relates the source and image frequencies. The presence of dust and other exotic effects further influences the beam intensity, which affects how the observed luminosity relates to the intrinsic brightness of the source.

Since it is, at best, extremely difficult to calculate these effects analytically, the Ray-Tracing scheme is carried out iteratively. The region between the source and the observer is divided into a number of cells, which are considered to be independent, at least for the period when the beam traverses through them.

The beam deviation in each cell is obtained by integrating the geodesic deviation equation through that cell. The evolution of the cell size determines the change in the beam frequency. Each cell may also have a dust profile that affects the beam intensity. The cumulative effect of these phenomena over many iterations determines the inferred structure of the source.
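For reference, the geodesic deviation equation that is integrated through each cell can be written, in one common sign convention (conventions differ between texts), as

\[
\frac{D^{2}\xi^{\alpha}}{d\lambda^{2}} = -R^{\alpha}{}_{\beta\gamma\delta}\, k^{\beta}\, \xi^{\gamma}\, k^{\delta},
\]

where k^β is the tangent vector of the central ray, ξ^γ the connecting vector to a neighboring ray, λ an affine parameter and R^α_{βγδ} the Riemann tensor.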

Figure 3.1: An artist's illustration of the Ray-Tracing Scheme. The figure (bottom left) shows a light beam originating at a ‘Type Ia’ supernova (bottom right). An expanded section of the beam evolution is shown in A (top). Due to the inhomogeneous expansion of the universe, the equal-time area cross-sections (pink disks) are unequally spaced. Furthermore, the light beam is bent as a result of lensing due to the surrounding matter distribution.


3.3.2 Iterative Monte-Carlo Method

In the Ray-Tracing scheme, a light beam is propagated backwards from the observer to the source at some pre-determined redshift. The Iterative Monte-Carlo Method, as applied to the Ray-Tracing scheme, begins by setting the time and the redshift to zero, corresponding to the epoch today. The beam area is set to zero, corresponding to an exactly focused beam. After many iterations, the beam will have acquired a finite area and distortion at the source.

The first step in each iteration of this method is to generate a new cell and to determine the Monte-Carlo parameters of that cell. These parameters include the mass of the cell, drawn from the specified distribution; the relative orientation, in the form of impact angles that determine the point of impact of the beam on the cell; and parameters corresponding to the various dust and exotic effects that are considered. Together with the constraints imposed by the choice of model, these parameters determine the evolution of the cell.

This shall be clarified subsequently in section 3.5.

The generated cell is now allowed to evolve backwards in time until it reaches the epoch at which the beam enters it, that is, the epoch at which the beam exited the previous cell (remember, the evolution is backwards in time). This determines the size of the cell at that epoch.

Next, the Monte-Carlo generated impact angles are used to determine the position of the current cell relative to the previous cell, or effectively the point of beam impact on the current cell. The azimuthal angle also determines the path length, and hence the time of beam traversal through the cell, since the beam follows a light-like geodesic.
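In an equivalent parameterization by the impact parameter b of the beam with respect to the cell center (rather than the impact angles used here), the path length through a spherical cell of radius r is

\[
\ell = 2\sqrt{r^{2} - b^{2}},
\]

and the traversal time follows from the light-like propagation, up to small corrections from the expansion of the cell during the crossing.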

The mass in each cell and the mass distribution (lens) profile together determine the Newtonian potential and hence the corresponding curvature. The geodesic equation is then used to determine the effect of lensing on the beam from the Newtonian potential and the propagation time. The exact method by which these lensing effects are calculated is described in Holz and Wald [22]. Further, any beam extinction due to dust and exotic effects can be included; the effects considered in iSNOC are described in Goobar et al. [17].
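As a point of reference for the magnitude of these lensing effects (the actual calculation follows Holz and Wald [22] and is not reproduced here), the bending angle of a ray passing a point mass m at impact parameter b is

\[
\hat{\alpha} = \frac{4 G m}{c^{2} b},
\]

which, in geometrized units with the cell mass expressed as a length, is simply 4m/b.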

The cell is then allowed to evolve up to the time when the ray exits the cell, which was calculated above. This determines the scale factor of the cell at the time the beam exits. With the initial and final scale factors known, the exit beam frequency relative to the entry frequency can be evaluated.
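Schematically, if a_i denotes the scale factor of cell i, and t_i,in and t_i,out are the (later) entry and (earlier) exit times of the backward-traced beam in that cell, the per-cell frequency ratios multiply up to the total redshift,

\[
1 + z = \frac{\nu_{\mathrm{source}}}{\nu_{\mathrm{obs}}} = \prod_i \frac{a_i(t_{i,\mathrm{in}})}{a_i(t_{i,\mathrm{out}})},
\]

which reduces to the familiar 1 + z = a(t_obs)/a(t_emit) when all cells follow the same, consistently normalized, background expansion.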

This iterative process is repeated until the desired redshift, at which the source is deemed to reside, is reached.
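The structure of one such backward trace can be summarized in the following sketch. It is deliberately minimal: the cell evolution is replaced by a toy Einstein-de Sitter expansion, the lensing and dust steps are only indicated by comments, and all names and numerical choices are illustrative assumptions rather than the actual iSNOC implementation.

import math
import random

# Units: c = 1, lengths and times in Mpc.
H0 = 70.0 / 299792.458          # 70 km/s/Mpc expressed in Mpc^-1
R_CELL = 1.0                    # physical cell radius in Mpc (toy value)

def trace_beam(z_source, rng=random):
    """Skeleton of one backward Monte-Carlo ray trace (illustrative only)."""
    z, chi = 0.0, 0.0           # accumulated redshift and comoving path length
    # the beam area would be initialized to zero (exactly focused) here
    while z < z_source:
        # 1. draw the Monte-Carlo parameters of a new cell (mass, dust, ... omitted)
        b = R_CELL * math.sqrt(rng.random())        # impact parameter
        path = 2.0 * math.sqrt(R_CELL**2 - b**2)    # chord length through the cell
        # 2. evolve the cell backwards during the traversal time (= path, since c = 1);
        #    here every cell simply follows a toy Einstein-de Sitter expansion
        a_in = 1.0 / (1.0 + z)
        a_out = a_in * math.exp(-H0 * a_in**-1.5 * path)
        # 3. lensing: integrate the geodesic deviation equation through the cell
        #    using its Newtonian potential (omitted in this sketch)
        # 4. update the frequency/redshift and the comoving path length
        z = 1.0 / a_out - 1.0
        chi += path / a_in
    return z, (1.0 + z) * chi       # redshift and luminosity distance (no lensing)

print(trace_beam(1.0))

With identical, homogeneously expanding cells this skeleton simply reproduces the smooth FLRW distance-redshift relation; the behavior of interest in this thesis enters once the cell masses, and hence their expansion rates and lensing potentials, are allowed to fluctuate.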


3.4 Method

The Monte-Carlo Ray-Tracing method used to study the evolution of a light beam through an inhomogeneous universe is now described. Unlike in the previous work, the cells are allowed to have a variation in the matter density, which lends itself to a more realistic modeling of the beam evolution through the universe. As mentioned in section 2.3, the presence of anisotropy and inhomogeneities affects both the dynamics of the universe and the observations made in such a universe; as a result, the simulations proceed in two steps.

Both steps must, however, be preceded by the initialization of a model of the universe, that is, by specifying the parameters of the underlying FLRW universe and the mass distribution profile.

The first step in the Monte-Carlo Ray-Tracing method is to simulate the dynamics of the universe. To this end, a simplified model is employed, in which each cell is assumed to evolve independently; consequently, structure formation in the universe is not accounted for. This can be remedied by substituting a more realistic model, or observational information, as it becomes available.
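One simple way to realize such independent evolution, offered here only as an illustration of the idea (the prescription actually used is described later, cf. section 3.5), is to evolve each cell i as its own homogeneous patch,

\[
\left(\frac{\dot{a}_i}{a_i}\right)^{2} = \frac{8\pi G}{3}\,\rho_i - \frac{k_i}{a_i^{2}},
\]

with the density ρ_i set by the cell's Monte-Carlo mass and the curvature term k_i fixed by matching the cell to the global model at some initial epoch.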

In the second step, synthetic observations are obtained by tracing simulated light rays through such a universe. The dynamics of the cells themselves affect the luminosity and redshift evolution of the beam.

The inhomogeneities also affect the luminosity of the light beam through gravitational lensing. The simulation of the lensing effects is based on the implementation within the SNOC package; hence, an in-depth discussion is postponed to Appendix B. Finally, the beam luminosity might also be affected by the presence of dust and other exotic effects. While the simulation of these effects is available, they were not studied here. Therefore, no discussion of these effects is undertaken; suitable references are provided instead.

3.4.1 Model Specification

Mass Distribution Function

Figure 3.2: Assumed probability distribution of masses in a cell of radius 1 Mpc. [The plot shows the probability density versus cell mass in units of Mpc, with the mean mass Mavg ≈ 2.725×10⁻⁸ Mpc ≈ 6.125×10¹¹ M☉ indicated; the mass axis spans roughly 2×10⁻¹⁰ to 2×10⁻⁶ Mpc.]

The Ray-Tracing scheme requires that the universe be parameterized according to the models specified in Section 3.2. The Global model is established by specifying the FLRW parameters:

• A Global Hubble Constant (H0).

• Rescaled Density Parameters (Ω0i). The first of these is always the Rescaled Mass Density Ω0m, and the second is an optional Dark-Energy Density Ω0X.

• Equation of State Parameter (α0i) corresponding to each Rescaled Density Parameter. The equation of state parameter corresponding to non-relativistic matter is α0m = 0 and that corresponding to Dark-Energy is α0X = −1 (see the scaling relation below).
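Assuming that α0i plays the role of the usual equation-of-state parameter wi = pi/ρi (an interpretation consistent with the values quoted above, though not spelled out here), the density of each component scales with the scale factor as

\[
\rho_i(a) = \rho_{i,0}\, a^{-3(1+\alpha_{0i})},
\]

so that α0m = 0 reproduces the a⁻³ dilution of non-relativistic matter, while α0X = −1 gives a constant Dark-Energy density.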

Further, to establish the local model, three more parameters need to be specified:

• The radius of the Cells (r0 ∼ R) in which the matter distribution is uncorrelated and idealized.

• The variance of the matter distribution (σ8) in patches of size 8 h0⁻¹ Mpc at z = 0.

h0 = H0/(100 km/s/Mpc) is the Dimensionless Hubble Constant.

• A Mass Band Factor f (see below).

These parameters are used to determine the mean (mavg) and standard deviation (σ) of the mass distribution in each cell, which form the input to the random number generator. A Log-Normal distribution for the mass in each cell is chosen as it is a simple single-ended distribution, though it is possible in principle to replace it with any other single-ended distribution. For computational reasons, the distribution is restricted to a band (mavg/f, mavg × f) with f ≥ 1, though it is reasonable to expect such
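A minimal sketch of how such a banded Log-Normal draw could be implemented is given below; the conversion from (mavg, σ) to the parameters of the underlying normal distribution is the standard one, while the function names and the rejection step are illustrative assumptions rather than the actual iSNOC implementation.

import math
import random

def draw_cell_mass(m_avg, sigma, f, rng=random):
    """Draw one cell mass from a Log-Normal distribution with mean m_avg and
    standard deviation sigma, restricted to the band (m_avg/f, m_avg*f).
    Illustrative sketch only."""
    # parameters of the underlying normal distribution
    mu = math.log(m_avg**2 / math.sqrt(m_avg**2 + sigma**2))
    s = math.sqrt(math.log(1.0 + (sigma / m_avg)**2))
    while True:                       # rejection sampling inside the band
        m = rng.lognormvariate(mu, s)
        if m_avg / f <= m <= m_avg * f:
            return m

Note that the band restriction shifts the effective mean and variance of the draws slightly away from (mavg, σ); the shift becomes appreciable when f approaches 1.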
