Gaussian Bridges: Modeling and Inference

UPPSALA DISSERTATIONS IN MATHEMATICS 86

Gaussian Bridges - Modeling and Inference

Maik Görgens

Department of Mathematics, Uppsala University
UPPSALA 2014


Dissertation presented at Uppsala University to be publicly examined in Häggsalen, Ångströmlaboratoriet, Lägerhyddsvägen 1, Uppsala, Friday, 7 November 2014 at 13:15 for the degree of Doctor of Philosophy. The examination will be conducted in English.

Faculty examiner: Professor Mikhail Lifshits (St. Petersburg State University and Linköping University).

Abstract

Görgens, M. 2014. Gaussian Bridges - Modeling and Inference. Uppsala Dissertations in Mathematics 86. 32 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-506-2420-5.

This thesis consists of a summary and five papers, dealing with the modeling of Gaussian bridges and membranes and inference for the α-Brownian bridge.

In Paper I we study continuous Gaussian processes conditioned that certain functionals of their sample paths vanish. We deduce anticipative and non-anticipative representations for them. Generalizations to Gaussian random variables with values in separable Banach spaces are discussed. In Paper II we present a unified approach to the construction of generalized Gaussian random fields. Then we show how to extract different Gaussian processes, such as fractional Brownian motion, Gaussian bridges and their generalizations, and Gaussian membranes from them.

In Paper III we study a simple decision problem on the scaling parameter in α-Brownian bridges. We generalize the Karhunen-Loève theorem and obtain the distribution of the involved likelihood ratio based on Karhunen-Loève expansions and Smirnov's formula. The presented approach is applied to a simple decision problem for Ornstein-Uhlenbeck processes as well. In Paper IV we calculate the bias of the maximum likelihood estimator for the scaling parameter and propose a bias-corrected estimator. We compare it with the maximum likelihood estimator and two alternative Bayesian estimators in a simulation study. In Paper V we solve an optimal stopping problem for the α-Brownian bridge. In particular, the limiting behavior as α tends to zero is discussed.

Maik Görgens, Department of Mathematics, Analysis and Probability Theory, Box 480, Uppsala University, SE-75106 Uppsala, Sweden.

© Maik Görgens 2014
ISSN 1401-2049
ISBN 978-91-506-2420-5

urn:nbn:se:uu:diva-232544 (http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-232544)


List of papers

This thesis is based on the following papers, which are referred to in the text by their Roman numerals.

I M. Görgens. Conditioning of Gaussian processes and a zero area Brownian bridge. Manuscript.

II M. Görgens and I. Kaj. Gaussian processes, bridges and membranes extracted from selfsimilar random fields. Manuscript.

III M. Görgens. Inference for α-Brownian bridge based on Karhunen-Loève expansions. Submitted for publication.

IV M. Görgens and M. Thulin. Bias-correction of the maximum likelihood estimator for the α-Brownian bridge. Statistics and Probability Letters, 93, 78–86, 2014.

V M. Görgens. Optimal stopping of an α-Brownian bridge. Submitted for publication.


Contents

1 Introduction
1.1 Gaussian processes
1.1.1 The Brownian bridge
1.1.2 Representation of Gaussian processes
1.1.3 Series expansions of Gaussian processes
1.2 Models for Gaussian bridges and membranes
1.2.1 Generalized Gaussian bridges
1.2.2 Gaussian selfsimilar random fields
1.3 Inference for α-Brownian bridges
1.3.1 Estimation
1.3.2 Hypothesis testing
1.3.3 Optimal stopping
2 Summary of Papers
2.1 Paper I
2.2 Paper II
2.3 Paper III
2.4 Paper IV
2.5 Paper V
3 Summary in Swedish
Acknowledgements


1. Introduction

This is a thesis in the mathematical field of stochastics. This field is often divided into the areas of probability theory, theoretical statistics, and stochastic processes. The boundaries between these areas are not sharp, and they overlap considerably. Paper I and Paper II contribute to the theory of stochastic modeling of Gaussian bridges and membranes and belong to the intersection of probability theory and stochastic processes, whereas in Papers III–V we study inference for a continuous time stochastic process; those papers thus belong to the intersection of theoretical statistics and stochastic processes. Moreover, throughout the thesis we make use of tools from functional analysis.

The term Gaussian bridges is to be understood in a broad sense. While originally introduced to describe Gaussian processes which attain a certain value at a specific time almost surely¹, it was later (with the prefix “generalized”) used to denote Gaussian processes conditioned on the event that one or more functionals of the sample paths vanish². Paper I contributes to the theory of such generalized Gaussian bridges. In Paper II we present a general method to construct selfsimilar Gaussian random fields and study how to extract Gaussian processes, bridges, and membranes from them. In Papers III–V we consider another generalization of Gaussian bridges – the α-Brownian bridges – and study problems of inference for the scaling parameter α and optimal stopping of such bridges. In all five papers of this thesis the Brownian bridge occurs at least as a special case³.

In this first chapter we give a short introduction to Gaussian processes in general and to the topics studied in this thesis in particular. In Chapter 2 we summarize the included papers and in Chapter 3 we give an outline of the thesis in Swedish.

1.1 Gaussian processes

Among all probability distributions the normal distribution is of particular importance since, by the central limit theorem, sums of independent and identically distributed (i.i.d.) random variables with finite variance behave roughly

¹ See for example [24].

² We refer to [1] and in particular to [43].

³ A plot of the Brownian bridge is given on the cover page.

like normal random variables. The central limit theorem has a functional analogue as well: random walks $S = (S_n)_{n\in\mathbb{N}}$ of the form $S_n = \sum_{k=1}^{n} X_k$, where the $X_k$'s are i.i.d. random variables with finite variance, behave (suitably scaled) roughly like Brownian motion. Brownian motion is, up to scaling, the unique continuous stochastic process on the real line with independent, stationary, and symmetric increments. It serves as a building block for many other Gaussian and non-Gaussian processes.

Gaussian processes are not only of particular importance, but also very accessible for investigation, since their finite dimensional distributions are determined solely by their mean and covariance functions and, moreover, jointly Gaussian random variables are independent whenever they are orthogonal in the Hilbert space spanned by them.

This importance and tractability make the class of Gaussian processes an object of intensive study. Here we just mention the monographs [12], [27], [29], and [33].

1.1.1 The Brownian bridge

If we consider standard Brownian motion $W = (W_s)_{s\in\mathbb{R}}$ (i.e., Brownian motion scaled to fulfill $\mathbb{E}W_0 = 0$ and $\mathbb{E}W_1^2 = 1$) and tie it down to 0 at time 1, we obtain (restricted to the interval $[0,1]$) the Brownian bridge. As mentioned before, this process appears in all papers included in this thesis.

The Brownian bridge $B = (B_s)_{s\in[0,1]}$ is a continuous centered Gaussian process uniquely defined by its covariance function $\mathbb{E}B_sB_t = s(1-t)$ for $0 \le s \le t \le 1$. It is of particular importance in asymptotic statistics (cf. [23]):

Given $n$ i.i.d. random variables with a continuous distribution function $F$, consider their empirical distribution function $F_n$. By the law of large numbers, $F_n(s) \to F(s)$ almost surely as $n \to \infty$. In 1933, Glivenko [25] and Cantelli [15] showed that this convergence is uniform on the real line. Now, by the central limit theorem,
$$\sqrt{n}\,\big(F_n(s) - F(s)\big) \xrightarrow{d} B(F(s)), \quad \text{as } n \to \infty. \tag{1.1}$$
Kolmogorov showed in [31] that⁴, as $n \to \infty$,
$$\sqrt{n}\,\sup_{s\in\mathbb{R}} |F_n(s) - F(s)| \xrightarrow{d} \sup_{s\in\mathbb{R}} |B(F(s))| = \sup_{0\le s\le 1} |B(s)|. \tag{1.2}$$
Moreover, he proved that the law of the left hand side in (1.2) is independent of $F$ and studied the distribution of the right hand side in (1.2) (now known as the Kolmogorov distribution). These results, together with the work of Smirnov in [41], form the theoretical foundation of the Kolmogorov-Smirnov goodness-of-fit tests.

⁴ All three papers [15], [25], and [31] were published in 1933 (with almost the same title) in the same issue of the Italian journal Giornale dell'Istituto Italiano degli Attuari.
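The supremum statistic in (1.2) is exactly what standard Kolmogorov-Smirnov routines compute. As a small illustration (not part of the thesis; it assumes SciPy's `kstest` is available), one can test a sample against a hypothesized continuous distribution function $F$:

```python
# Illustration only: Kolmogorov-Smirnov goodness-of-fit test, cf. (1.2).
# The statistic is sup_s |F_n(s) - F(s)|; scaled by sqrt(n), its null
# distribution is approximately the Kolmogorov distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=200)                      # sample with continuous d.f. F

# H0: the sample is drawn from the standard normal distribution.
statistic, p_value = stats.kstest(x, "norm")
print(f"sup|F_n - F| = {statistic:.4f}, "
      f"sqrt(n)*sup = {np.sqrt(x.size) * statistic:.3f}, p = {p_value:.3f}")
```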


1.1.2 Representation of Gaussian processes

After the aforementioned work of Kolmogorov and others, the Brownian bridge was studied in more detail. In doing so, it was fruitful to work with different representations of it. Introducing the Brownian bridge B as a Brownian motion conditioned to end at 0 at time 1 leads immediately to

$$B_s = W_s - s\,W_1, \quad 0 \le s \le 1. \tag{1.3}$$
This representation is anticipative in the sense that, in order to compute $B_s$, we use $W_1$ – a random variable “not available” at time $s < 1$.

An alternative representation of the Brownian bridge is given by the stochastic differential equation
$$dB_s = dW_s - \frac{B_s}{1-s}\,ds, \quad B_0 = 0, \quad 0 \le s < 1. \tag{1.4}$$
The solution of (1.4) is
$$B_s = \int_0^s \frac{1-s}{1-x}\,dW_x, \quad 0 \le s < 1, \tag{1.5}$$
and one has $\lim_{s\uparrow 1} B_s = 0$. Clearly, the stochastic processes defined by (1.3) and (1.5) are different (for example they induce different filtrations). However, they induce the same probability law on $C([0,1])$ – the Banach space of continuous functions on $[0,1]$ equipped with the supremum norm.
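As a concrete illustration (our own sketch, not taken from the thesis), both representations can be simulated from the same Brownian increments on a grid; the stochastic integral in (1.5) is approximated by a left-point sum:

```python
# Illustration only: the anticipative representation (1.3) and the
# non-anticipative representation (1.5) of the Brownian bridge.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
t = np.linspace(0.0, 1.0, n + 1)
dW = rng.normal(scale=np.sqrt(1.0 / n), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))

# Anticipative representation (1.3): B_s = W_s - s W_1 (uses the endpoint W_1).
B_anticipative = W - t * W[-1]

# Non-anticipative representation (1.5): B_s = int_0^s (1-s)/(1-x) dW_x,
# approximated by a left-point sum; the prefactor (1 - t) forces B_1 = 0.
partial = np.concatenate(([0.0], np.cumsum(dW / (1.0 - t[:-1]))))
B_sde = (1.0 - t) * partial

# The two paths differ pathwise (different filtrations) but share the same law.
print(B_anticipative[n // 2], B_sde[n // 2])
```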

Another way to represent Gaussian processes is by means of series expansions, which we discuss in the following section.

1.1.3 Series expansions of Gaussian processes

Gaussian processes $X = (X_s)_{s\in[0,T]}$ with continuous sample paths may be represented as a random series of the form
$$X_s \stackrel{d}{=} \sum_{n=1}^{\infty} \xi_n f_n(s), \quad 0 \le s \le T, \tag{1.6}$$
where $(\xi_n)_{n=1}^{\infty}$ is a sequence of i.i.d. standard normal random variables. While there exist many such series representations, we present two of them in more detail.

Operator generated processes

Given a separable Hilbert space $H$ and a linear and bounded operator $u : H \to C([0,T])$ we can define a Gaussian process $X = (X_s)_{s\in[0,T]}$ via
$$X_s = \sum_{n=1}^{\infty} \xi_n (u e_n)(s), \tag{1.7}$$

where $(e_n)_{n=1}^{\infty}$ is an orthonormal basis in $H$. For each fixed $s$, the series in (1.7) converges almost surely. However, the null-set on which the right hand side does not converge depends in general on $s$, and thus the convergence in (1.7) is in general not uniform.

Now assume that $X = (X_s)_{s\in[0,T]}$ is a Gaussian process with almost surely continuous paths. Then (see Theorem 3.5.1 in [12]) there exist a separable Hilbert space $H$ and an operator $u : H \to C([0,T])$ such that $X$ can be written as in (1.7) almost surely. In particular, in this case the convergence is uniform in $s \in [0,T]$. The operator $u$ is called the generating operator (or the associated operator) of $X$ (note that the different choices of $u$ and $H$ are equivalent only up to isomorphisms).

The generating operator $u : H \to C([0,T])$ encapsulates all information on the distribution of a Gaussian process $X$. In particular, changing the orthonormal basis in (1.7) does not change the distribution of $X$.

In order to give an example, we state that the Brownian bridge $B$ on $[0,1]$ is generated by the operator $u : L_0^2([0,1]) \to C([0,1])$, where $L_0^2([0,1])$ is the orthogonal complement of the function $f(x) \equiv 1$ in $L^2([0,1])$ and
$$(ue)(s) = \int_0^s e(x)\,dx, \quad 0 \le s \le 1, \quad e \in L_0^2([0,1]).$$
In particular, the orthonormal basis $\{\sqrt{2}\cos(n\pi x) : n \ge 1\}$ in $L_0^2([0,1])$ yields the representation
$$B_s = \sqrt{2}\,\sum_{n=1}^{\infty} \xi_n\, \frac{\sin(n\pi s)}{n\pi}. \tag{1.8}$$

Karhunen-Loève expansions

Another important series representation of a continuous Gaussian process $X = (X_s)_{s\in[0,T]}$ is given by its Karhunen-Loève expansion. Let $R$ be the covariance function of $X$, $R(s,t) = \mathbb{E}X_sX_t$, and let $\mu$ be a finite measure on $[0,T]$. Let $L^2([0,T],\mu)$ be the space of square integrable measurable functions on $[0,T]$ with respect to the measure $\mu$, and define the covariance operator of $X$, $A_R : L^2([0,T],\mu) \to L^2([0,T],\mu)$, by
$$(A_R e)(t) = \int_0^T R(t,s)\,e(s)\,\mu(ds), \quad e \in L^2([0,T],\mu).$$

Then $A_R$ is a linear, bounded, compact, self-adjoint, and non-negative definite operator. Hence, its eigenvalues $(\lambda_n)_{n=1}^{\infty}$ are real and non-negative. Now, an application of a generalized version of the Karhunen-Loève Theorem (see Theorem 34.5.B in [36] for the classical Karhunen-Loève Theorem and Theorem 2 of Paper III for its extension) yields the following series expansion of $X$:
$$X_s = \sum_{n=1}^{\infty} Z_n e_n(s) \quad \text{with} \quad Z_n = \int_0^T X_s\, e_n(s)\,\mu(ds), \tag{1.9}$$

where $(e_n)_{n=1}^{\infty}$ is the sequence of corresponding orthonormalized continuous eigenfunctions of the eigenvalues $(\lambda_n)_{n=1}^{\infty}$, and $(Z_n)_{n=1}^{\infty}$ is a sequence of independent normal random variables with mean 0 and variance $\lambda_n$. The convergence in (1.9) is almost sure and uniform in $s$ for all $s$ in the support of the measure $\mu$. Setting $f_n = \sqrt{\lambda_n}\, e_n$ we obtain a series representation of $X$ of the form (1.6).

The advantage of the Karhunen-Loève expansion is that it gives an orthogonal decomposition of $X$, since the eigenfunctions $(e_n)_{n=1}^{\infty}$ are orthogonal in $L^2([0,T],\mu)$. Moreover, if we sort the eigenvalues in descending order, then truncations based on the Karhunen-Loève expansion minimize the total mean square error, i.e., for all $n \in \mathbb{N}$ the expected $L^2([0,T],\mu)$-norm of the tail sum $\sum_{k=n+1}^{\infty} Z_k e_k$ is minimal among all series expansions of the form (1.6).

Calculating the Karhunen-Loève expansion of the Brownian bridge with respect to the Lebesgue measure on $[0,1]$ yields the eigenvalues $\lambda_n = 1/(n^2\pi^2)$ and the normalized eigenfunctions $e_n(s) = \sqrt{2}\sin(n\pi s)$, $n \ge 1$, which eventually leads to the same representation of the Brownian bridge as in (1.8).
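A minimal simulation sketch (ours, not from the thesis) of the Brownian bridge via the truncated Karhunen-Loève series with these eigenvalues and eigenfunctions:

```python
# Illustration only: the Brownian bridge via a truncated Karhunen-Loeve series,
#   B_s ~ sum_{n=1}^N sqrt(lambda_n) xi_n e_n(s),
# with lambda_n = 1/(n^2 pi^2) and e_n(s) = sqrt(2) sin(n pi s), cf. (1.8).
import numpy as np

rng = np.random.default_rng(2)
N = 200                                            # truncation level
s = np.linspace(0.0, 1.0, 501)
n = np.arange(1, N + 1)

xi = rng.standard_normal(N)                        # i.i.d. standard normal coefficients
lam = 1.0 / (n ** 2 * np.pi ** 2)                  # eigenvalues
e = np.sqrt(2.0) * np.sin(np.pi * np.outer(s, n))  # eigenfunctions on the grid

B = e @ (np.sqrt(lam) * xi)                        # one approximate sample path

# The truncation captures most of the total variance: sum_n lambda_n = 1/6,
# which equals int_0^1 Var(B_s) ds = int_0^1 s(1-s) ds.
print("variance captured:", lam.sum(), "out of", 1.0 / 6.0)
```

Since the eigenvalues are sorted in descending order, this truncation has the smallest expected $L^2$ error among all expansions of the form (1.6) with the same number of terms.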

1.2 Models for Gaussian bridges and membranes

In Paper I and Paper II we study the modeling of Gaussian bridges and membranes. In Paper I, generalized Gaussian bridges are obtained by conditioning Gaussian processes on the event that certain functionals of their sample paths vanish. In Paper II, Gaussian bridges and their higher dimensional analogue, Gaussian membranes, are extracted from certain selfsimilar Gaussian random fields.

1.2.1 Generalized Gaussian bridges

In (1.1) we have seen that
$$\sqrt{n}\,\big(F_n(s) - s\big) \xrightarrow{d} B(s), \quad \text{as } n \to \infty, \tag{1.10}$$
where $B = (B_s)_{s\in[0,1]}$ is the Brownian bridge on $[0,1]$ and $F_n$ is the empirical distribution function of the first $n$ elements in the sequence $U_1, U_2, \ldots$ of independent and uniformly distributed random variables on $[0,1]$. In fact, by Donsker's Theorem, the probability measure induced by the left hand side of (1.10) converges weakly to the probability measure induced by the Brownian bridge on the Skorokhod space $D([0,1])$. Now, for $n \in \mathbb{N}$, let $U_1^n, U_2^n, \ldots, U_n^n$ be independent and uniformly distributed random variables on $[0,1]$, conditioned that $\sum_{i=1}^n U_i^n = n/2$, and let $G_n$ be the empirical distribution function of the random variables $U_1^n, \ldots, U_n^n$. From the conditioning of the $U_i^n$'s it follows that
$$\int_0^1 \big(G_n(s) - s\big)\,ds = 0.$$

Figure 1.1. A realization of a zero area Brownian bridge (figure taken from Paper I).

We may thus expect that $\sqrt{n}\,(G_n(s) - s)$ converges, at least in the sense of finite dimensional distributions, to the Brownian bridge conditioned on the event that its integral over $[0,1]$ vanishes. We call this process the zero area Brownian bridge. A typical sample path is given in Figure 1.1 (taken from Paper I).

The zero area Brownian bridge is one example of a generalized Gaussian bridge (or, as we call it in Paper I, a conditioned Gaussian process): given a continuous Gaussian process $X = (X_s)_{s\in[0,T]}$, $T > 0$, and a finite subset $A \subset C([0,T])^*$ of the dual space of $C([0,T])$, let $\mathbb{P}_X^{(A)}$ be the conditioned measure
$$\mathbb{P}_X^{(A)}(\,\cdot\,) = \mathbb{P}_X\Big(\,\cdot\;\Big|\; \bigcap_{a\in A} a^{-1}(0)\Big),$$
where $\mathbb{P}_X$ is the induced measure of $X$ on $C([0,T])$. Every continuous Gaussian process $X^{(A)}$ whose induced measure $\mathbb{P}_{X^{(A)}}$ on $C([0,T])$ coincides with $\mathbb{P}_X^{(A)}$ is called a generalized Gaussian bridge (or conditioned Gaussian process of $X$ with respect to $A$).

The Brownian bridge on $[0,1]$ is standard Brownian motion on $[0,1]$ conditioned by $A = \{\delta_1\}$, and the zero area Brownian bridge appears by conditioning the Brownian bridge on $[0,1]$ further by $A = \{a\}$, where $a \in C([0,1])^*$ is Lebesgue measure on $[0,1]$, or alternatively, by conditioning standard Brownian motion on $[0,1]$ by $A = \{\delta_1, a\}$.

The random variables $X_s$, $0 \le s \le T$, and $a(X)$, $a \in A$, are centered Gaussian random variables. Hence, conditioning becomes orthogonal projection in the Gaussian Hilbert space spanned by the random variables $X_s$, $0 \le s \le T$. In particular, anticipative representations of generalized Gaussian bridges are obtained easily. For example, the anticipative representation of the Brownian bridge $B$ as a conditioned Brownian motion $W$ is, as in (1.3), $B_s = W_s - sW_1$ for $0 \le s \le 1$.


Finding non-anticipative representations for the conditioned process $X^{(A)}$, i.e., representations where the filtrations induced by the processes $X$ and $X^{(A)}$ coincide (such as, for example, the representations (1.4) and (1.5) for the Brownian bridge), is more involved. In general, additional assumptions on $X$ and $A$ are required. Generalized Gaussian bridges and special cases of them were studied before, for example in [1], [8], [9], [19], [24], and [43]⁵. In particular, in the recent work [43], non-anticipative representations for generalized bridges of a wide class of Gaussian processes were found.

Generalized bridges as described in this section were used in connection with insider trading, where the additional information of an insider is modeled by functionals of the price process of some financial derivative. In [8] and [43] the additional expected utility for the insider is calculated for different models. Recently, in [17], generalized Gaussian bridges arising by conditioning Gaussian processes on the event that their first coordinates in the Karhunen-Loève expansion vanish were considered in the context of partial functional quantization.

In Paper I we present another approach to the study of generalized Gaussian bridges and show how this approach extends to the conditioning of Gaussian random variables with values in arbitrary separable Banach spaces.

1.2.2 Gaussian selfsimilar random fields

A stochastic process $X = (X_s)_{s\in\mathbb{R}}$ on the real line is called selfsimilar with Hurst index $H$ if, for all $c > 0$, the processes $(X_{cs})_{s\in\mathbb{R}}$ and $(c^H X_s)_{s\in\mathbb{R}}$ coincide in distribution. It is said to have stationary increments if the distribution of $X_t - X_s$ depends solely on the length $t - s$ for all $s, t \in \mathbb{R}$. The only continuous Gaussian selfsimilar process with stationary increments is, up to constants, the fractional Brownian motion $B^H = (B^H_s)_{s\in\mathbb{R}}$, which has covariance function
$$\mathbb{E}\, B^H_s B^H_t = \tfrac{1}{2}\left(|s|^{2H} + |t|^{2H} - |s-t|^{2H}\right), \quad s, t \in \mathbb{R}.$$
Moreover, the self-similarity index $H$ needs to be restricted to $0 < H < 1$. In the particular case $H = 1/2$ we obtain standard Brownian motion. Fractional Brownian motion is widely used in applications, for example in statistical physics, telecommunications, financial mathematics, and many more.
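The covariance function above can be used directly for exact simulation of fractional Brownian motion on a grid; a small sketch (ours, using an $O(n^3)$ Cholesky factorization, not a construction used in the papers):

```python
# Illustration only: exact simulation of fractional Brownian motion on a grid
# via the Cholesky factor of its covariance matrix
#   E[B^H_s B^H_t] = (|s|^{2H} + |t|^{2H} - |s-t|^{2H}) / 2.
import numpy as np

rng = np.random.default_rng(4)
H = 0.7                                           # Hurst index, 0 < H < 1
n = 500
t = np.linspace(0.0, 1.0, n + 1)[1:]              # exclude t = 0, where B^H_0 = 0

s_grid, u_grid = np.meshgrid(t, t, indexing="ij")
cov = 0.5 * (s_grid ** (2 * H) + u_grid ** (2 * H) - np.abs(s_grid - u_grid) ** (2 * H))

L = np.linalg.cholesky(cov)                       # positive definite on the grid
path = np.concatenate(([0.0], L @ rng.standard_normal(n)))   # one sample path on [0, 1]

print("simulated fBm with H =", H, "and", path.size, "grid points")
```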

The generalization of Gaussian processes, Gaussian random fields, are usually defined as Gaussian probability measures on the space of distributions $\mathcal{S}'$, the dual space of the Schwartz functions $\mathcal{S}$ on $\mathbb{R}^d$. A Gaussian random field is called selfsimilar with index $H$ if, for all $c > 0$, the random fields $(X(\varphi_c))_{\varphi\in\mathcal{S}}$ and $(c^H X(\varphi))_{\varphi\in\mathcal{S}}$ coincide in distribution, where the dilation $\varphi_c$ of $\varphi$ is defined by $\varphi_c(x) = c^{-d}\varphi(c^{-1}x)$, $x \in \mathbb{R}^d$. For $r \in \mathbb{N}$, it is said to have stationary increments of order $r$ if its restriction to $\mathcal{S}_r$ is invariant under translations, where $\mathcal{S}_r \subset \mathcal{S}$ is defined as
$$\mathcal{S}_r = \Big\{\varphi \in \mathcal{S} : \int_{\mathbb{R}^d} x^j \varphi(x)\,dx = 0 \ \text{for all } |j| < r\Big\} \subset \mathcal{S},$$
where $j = (j_1,\ldots,j_d)$ is a multi-index, $|j| = \sum_{k=1}^d j_k$, and $x^j = \prod_{k=1}^d x_k^{j_k}$ for $x = (x_1,\ldots,x_d) \in \mathbb{R}^d$.

⁵ In [1] and [43] under the name generalized Gaussian bridges.

In [20], Dobrushin gave a complete characterization of the covariance functionals of stationary selfsimilar Gaussian random fields. In particular, he has shown that, for $H < r$, the covariance of all $H$-selfsimilar Gaussian random fields with stationary increments of order $r$ equals
$$\mathbb{E}\, X(\varphi)X(\psi) = \int_{S^{d-1}} \int_0^{\infty} \hat{\varphi}(rx)\,\overline{\hat{\psi}(rx)}\; r^{-2H-1}\,dr\,\sigma(dx),$$
where $\hat{\varphi}$ and $\hat{\psi}$ denote the Fourier transforms of $\varphi \in \mathcal{S}$ and $\psi \in \mathcal{S}$, and $\sigma$ is a finite, positive, and reflection invariant measure on the unit sphere $S^{d-1}$ of $\mathbb{R}^d$. For example, choosing $\sigma = \varpi$, where $\varpi$ is the uniform measure on $S^{d-1}$, and $H = -d/2$ yields, up to constants, Gaussian white noise $M$ on $\mathcal{S}$ with covariance functional
$$\mathbb{E}\, M(\varphi)M(\psi) = \int_{\mathbb{R}^d} \varphi(x)\psi(x)\,dx, \quad \varphi, \psi \in \mathcal{S}.$$

Selfsimilar and fractional random fields have been studied from different perspectives. The monograph [16] gives a good account of the theory. For example, it was shown that Gaussian selfsimilar random fields appear as scaling limits of certain Poisson random ball models (confer for example [10] and the references mentioned therein). Moreover, Gaussian random fields were extended from $\mathcal{S}$ to a suitable subset of the space of signed measures with finite total variation, and it was described how to extract fractional Brownian motion $B^H$ from certain Gaussian random fields $X$ via $B^H_s = X(\mu_s)$ for suitable choices of measures $(\mu_s)_{s\in\mathbb{R}^d}$.

1.3 Inference for α-Brownian bridges

In Papers III–V we study problems of inference for the α-Brownian bridge.

Inference for continuous time stochastic processes has been studied for a long time. One of the first systematic treatments of such problems was given in [26]. However, the approach described in [26] includes a reduction of the continuous time sample paths to a collection of countably many random variables. How this collection is chosen is a non-trivial problem which, however, is very relevant for the power of the deduced estimates and tests. Further work (for example [11] and [14]) studied parameter estimation for continuous time stochastic processes without this reduction, but under the assumption

of stationarity or ergodicity (or both). Statistical inference for stochastic processes based on continuous observations has been studied extensively ever since. Here we just mention the sources [34] and [35].

We next motivate the introduction of the α-Brownian bridges by an example: assume that Sweden decides to join the European monetary union (EMU).

In order to do this, at some date before the planned entrance, the exchange rate at which Swedish crowns will be exchanged into Euro needs to be fixed at some level K. Then, in the time between the announcement of this rate and the date of the entrance to the EMU, currency dealers will tend to change Euro into Swedish crowns if the current rate is below K, and to change Swedish crowns into Euro if the current rate is above K. Moreover, this effect will be the stronger the closer the date of entrance is. Considering the exchange rate as a function of time, we thus obtain a mapping which, at the day of entrance to the EMU, attains the fixed value K. α-Brownian bridges have been used as a building block in [42] and [44] to model such behavior.

Given a standard Brownian motion $W = (W_s)_{s\in[0,1]}$ and a real number $\alpha > 0$, consider the stochastic differential equation
$$dX_s^{(\alpha)} = dW_s - \alpha\,\frac{X_s^{(\alpha)}}{1-s}\,ds, \quad X_0^{(\alpha)} = 0, \quad 0 \le s < 1. \tag{1.11}$$
The unique strong solution of (1.11) is
$$X_s^{(\alpha)} = \int_0^s \left(\frac{1-s}{1-x}\right)^{\alpha} dW_x, \quad 0 \le s < 1, \tag{1.12}$$
and is called the α-Brownian bridge. It fulfills $\lim_{s\uparrow 1} X_s^{(\alpha)} = 0$ almost surely, and the parameter $\alpha$ determines how the process returns to 0 at time 1. Hence, $X^{(\alpha)}$ has a continuous extension to $[0,1]$ and therefore the term "bridge" is justified. The (usual) Brownian bridge is covered as the special case $\alpha = 1$.
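A rough Euler-Maruyama sketch (our own illustration; the step size and the handling of the endpoint are ad hoc choices) of the SDE (1.11) for a few values of $\alpha$:

```python
# Illustration only: Euler-Maruyama discretization of the alpha-Brownian
# bridge SDE (1.11), dX_s = dW_s - alpha * X_s/(1-s) ds, X_0 = 0. The drift
# blows up at s = 1, so the last step is handled separately: for alpha > 0 we
# use lim_{s->1} X_s = 0, for alpha <= 0 we simply take a driftless step.
import numpy as np

def alpha_brownian_bridge(alpha, n=2000, rng=None):
    rng = rng or np.random.default_rng()
    dt = 1.0 / n
    t = np.linspace(0.0, 1.0, n + 1)
    dW = rng.normal(scale=np.sqrt(dt), size=n)
    X = np.zeros(n + 1)
    for i in range(n - 1):
        X[i + 1] = X[i] + dW[i] - alpha * X[i] / (1.0 - t[i]) * dt
    X[n] = 0.0 if alpha > 0 else X[n - 1] + dW[n - 1]
    return t, X

rng = np.random.default_rng(5)
for alpha in (0.0, 0.5, 1.0, 2.0):
    _, X = alpha_brownian_bridge(alpha, rng=rng)
    print(f"alpha = {alpha}: max |X_s| = {np.max(np.abs(X)):.3f}, X_1 = {X[-1]:.3f}")
```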

In (1.11) and (1.12) we may allow non-positive values for $\alpha$ as well. Then Brownian motion is included as the special case $\alpha = 0$. However, for $\alpha \le 0$, we no longer have $\lim_{s\uparrow 1} X_s^{(\alpha)} = 0$. In order to visualize the effect of the parameter $\alpha$ graphically, we give a plot of the "expected future"
$$\mathbb{E}\left[X_t^{(\alpha)} \,\middle|\, (X_x^{(\alpha)})_{x\in[0,s]}\right], \quad 0 \le s \le t \le 1,$$
for different values of $\alpha$ in Figure 1.2 (taken from Paper III).

α-Brownian bridges were introduced in [13] for the modeling of riskless profit given some futures contracts in the absence of transaction costs. This extended the earlier work [2], where the arbitrage opportunity is derived from a model including the Brownian bridge. Later they were used in economic ([42] and [44]) and biological ([28]) contexts. In particular, in [44] a situation very similar to our motivating example was studied: the conversion of the exchange rate between the Greek Drachma and the Euro to a fixed exchange rate on January 1st, 2001.

Figure 1.2. The influence of α on the "expected future" for different values of α (figure taken from Paper III).

The first more theoretical investigation of α-Brownian bridges was given in [37]. In this reference it was, among other results, shown that the α-Brownian bridge is not a bridge of a Gaussian Markov process in the sense of Section 1.2.1 unless $\alpha = 1$. In [5] sample path properties were studied and in [3] the Karhunen-Loève expansion of $X^{(\alpha)}$ (under the Lebesgue measure) was computed. Some further references are given in subsequent sections. Here, we only remark that generalizations have been discussed in the literature, where the constant $\alpha$ was replaced by a mapping $s \mapsto \alpha(s)$ [4], and where the Brownian motion in (1.11) was replaced by fractional Brownian motion [22].

Returning to the scenario of Sweden joining the European monetary union, assume that at some time close to the entrance to the EMU, a currency dealer holds Swedish crowns and has to decide whether to sell them or not. Suppose that she works under the assumption that the exchange rate follows an α-Brownian bridge with an unknown parameter α. Then she will have to estimate α based on the past exchange rates (we study hypothesis testing and estimation for α-Brownian bridges in Paper III and Paper IV) and, once she has found a good estimate, she will have to find the best selling strategy given the now fully defined model (we consider this problem in Paper V).

1.3.1 Estimation

An application of Girsanov's Theorem yields the log-likelihood function of $\alpha$, given a sample path of $X^{(\alpha)}$ until time $T$,
$$\ln L\big(\alpha\,;\,(X_s^{(\alpha)})_{s\in[0,T]}\big) = -\alpha \int_0^T \frac{X_s^{(\alpha)}}{1-s}\,dX_s^{(\alpha)} - \frac{\alpha^2}{2} \int_0^T \frac{(X_s^{(\alpha)})^2}{(1-s)^2}\,ds. \tag{1.13}$$

It follows that the maximum likelihood estimator for $\alpha$ equals
$$\hat{\alpha}_{\mathrm{MLE}}(T) = -\,\frac{\displaystyle\int_0^T \frac{X_s^{(\alpha)}}{1-s}\,dX_s^{(\alpha)}}{\displaystyle\int_0^T \frac{(X_s^{(\alpha)})^2}{(1-s)^2}\,ds}. \tag{1.14}$$
In [7] it was shown that $\hat{\alpha}_{\mathrm{MLE}}$ is a strongly consistent estimator for $\alpha$, that is, $\lim_{T\uparrow 1} \hat{\alpha}_{\mathrm{MLE}}(T) = \alpha$ almost surely. Moreover, in [6] (for the first case) and [7] (for the second and third cases) it was shown that, as $T \uparrow 1$,
$$\sqrt{I_\alpha(T)}\,\big(\hat{\alpha}_{\mathrm{MLE}}(T) - \alpha\big) \xrightarrow{d} \begin{cases} \zeta, & \text{for } \alpha < 1/2,\\[4pt] \dfrac{W_1^2 - 1}{2\sqrt{2}\int_0^1 W_s^2\,ds}, & \text{for } \alpha = 1/2,\\[4pt] \xi, & \text{for } \alpha > 1/2, \end{cases}$$

where $\zeta$ denotes a standard Cauchy-distributed random variable, $\xi$ a standard normal random variable, and $I_\alpha(T)$ is the Fisher information. Moreover, in [45] it was proven that, under the assumption $\alpha > 1/2$, the maximum likelihood estimator $\hat{\alpha}_{\mathrm{MLE}}$ satisfies the large deviation principle with speed $|\ln(1-T)|$ and good rate function
$$J(x) = \begin{cases} \dfrac{(\alpha-x)^2}{2(2x-1)}, & \text{if } x \ge (1+\alpha)/3,\\[6pt] \dfrac{2\alpha - 4x + 1}{2}, & \text{if } x < (1+\alpha)/3. \end{cases}$$

All the aforementioned results are only of an asymptotic nature in the sense that they describe the behavior of $\hat{\alpha}_{\mathrm{MLE}}(T)$ as $T \uparrow 1$. The aim of Paper III and Paper IV is to give precise results for all values of $T$ smaller than 1. In Paper IV we show that $\hat{\alpha}_{\mathrm{MLE}}(T)$ is a heavily biased estimator for $\alpha$ unless $T$ is very close to 1, and we propose a bias-corrected estimator for $\alpha$.
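For a discretely observed path, the integrals in (1.14) can be approximated by left-point sums; the following sketch (ours, combining a crude Euler scheme for (1.11) with the plug-in estimator) illustrates how $\hat{\alpha}_{\mathrm{MLE}}(T)$ can be computed in practice:

```python
# Illustration only: the MLE (1.14) from a discretized path. The Ito integral
# int X/(1-s) dX is approximated by a left-point sum, the Riemann integral by
# a rectangle rule; the path comes from a crude Euler scheme for (1.11).
import numpy as np

rng = np.random.default_rng(6)
alpha_true, n, T = 2.0, 20000, 0.99
dt = 1.0 / n
t = np.linspace(0.0, 1.0, n + 1)
X = np.zeros(n + 1)
for i in range(n - 1):
    X[i + 1] = X[i] + rng.normal(scale=np.sqrt(dt)) - alpha_true * X[i] / (1.0 - t[i]) * dt

use = t[:-1] <= T                                   # observe the path only up to time T
g = X[:-1] / (1.0 - t[:-1])
num = -np.sum(g[use] * np.diff(X)[use])             # - int_0^T X/(1-s) dX
den = np.sum(g[use] ** 2) * dt                      # int_0^T X^2/(1-s)^2 ds
print(f"alpha_MLE({T}) = {num / den:.3f} (true alpha = {alpha_true})")
```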

1.3.2 Hypothesis testing

Hypothesis testing for the α-Brownian bridge was studied in [46]. More precisely, the simple statistical decision problem
$$H_0 : \alpha = \alpha_0 \quad \text{vs.} \quad H_1 : \alpha = \alpha_1, \tag{1.15}$$
where $\alpha_0, \alpha_1 \ge 1/2$, was considered. The decision should be based on an observed trajectory until time $T < 1$ and should be such that the probability of making an error of the second kind is minimized, while at the same time bounding the probability of making an error of the first kind from above by some value $p < 1$ (usually $p = 0.1$, $p = 0.05$, or $p = 0.01$).

The Neyman–Pearson Lemma provides us with the (in the just described sense) optimal test: we have to reject the null hypothesis whenever
$$\varphi_{\alpha_0,\alpha_1}(T) := \frac{L\big(\alpha_1\,;\,(X_s^{(\alpha)})_{s\in[0,T]}\big)}{L\big(\alpha_0\,;\,(X_s^{(\alpha)})_{s\in[0,T]}\big)} > c_{\alpha_0,\alpha_1,T}(q). \tag{1.16}$$

Here, the constant $c_{\alpha_0,\alpha_1,T}(q)$ is to be chosen such that
$$\mathbb{P}^{(\alpha_0)}\big(\varphi_{\alpha_0,\alpha_1}(T) > c_{\alpha_0,\alpha_1,T}(q)\big) = q,$$
that is, in order to find the best decision in (1.15) we need to know the distribution of the likelihood ratio $\varphi_{\alpha_0,\alpha_1}(T)$ under the null hypothesis. In [46], approximations for this distribution were given for $T$ close to 1 by means of large deviations.

In Paper III we consider the same problem (1.15) but with the less restrictive assumption $\alpha_0 + \alpha_1 \ge 1$. Applying Smirnov's formula to the Karhunen-Loève expansion of $X^{(\alpha)}$ under a certain measure $\mu$ allows us to determine the distribution of $\varphi_{\alpha_0,\alpha_1}(T)$ under the null hypothesis exactly for all $T < 1$ (see Section 2.3).
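In the absence of an exact formula, the critical value $c_{\alpha_0,\alpha_1,T}(q)$ can also be approximated by Monte Carlo under the null hypothesis; a small sketch (our own illustration, with the integrals in (1.13) replaced by left-point sums) looks as follows:

```python
# Illustration only: Monte Carlo calibration of the critical value in (1.16).
# Under H0 (alpha = alpha0) we simulate paths of X^(alpha0) up to time T with a
# crude Euler scheme for (1.11), evaluate the log-likelihood ratio
# ln phi = ln L(alpha1; .) - ln L(alpha0; .) via (1.13) with left-point sums,
# and take an empirical quantile as the rejection threshold.
import numpy as np

rng = np.random.default_rng(8)
alpha0, alpha1, T, q = 1.0, 2.0, 0.9, 0.05
n, n_mc = 2000, 2000
dt = T / n
t = np.linspace(0.0, T, n + 1)

X = np.zeros((n_mc, n + 1))
dW = rng.normal(scale=np.sqrt(dt), size=(n_mc, n))
for i in range(n):
    X[:, i + 1] = X[:, i] + dW[:, i] - alpha0 * X[:, i] / (1.0 - t[i]) * dt

g = X[:, :-1] / (1.0 - t[:-1])                  # X_s / (1 - s) on the grid
dX = np.diff(X, axis=1)

def log_lik(alpha):
    # discretized version of (1.13), one value per simulated path
    return -alpha * np.sum(g * dX, axis=1) - 0.5 * alpha ** 2 * np.sum(g ** 2, axis=1) * dt

log_ratio = log_lik(alpha1) - log_lik(alpha0)
print("reject H0 when ln phi >", np.quantile(log_ratio, 1.0 - q))
```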

1.3.3 Optimal stopping

Optimal stopping has its roots in sequential analysis, where the size $n$ of the data $(X_1, X_2, \ldots, X_n)$ on which decisions and estimates are based is not predefined but depends on some stopping rule $\eta$. This rule is chosen such that the cost of collecting the data is as small as possible while still providing the required level of significance for the inference.

This idea is embedded into a continuous setting in the following way: given a stochastic process $Y = (Y_s)_{s\in[0,T]}$ and a progressively measurable function $G : [0,T] \times \mathbb{R}^{[0,T]} \to \mathbb{R}$ we consider
$$V = \sup_{0\le\tau\le T} \mathbb{E}\, G(\tau, Y),$$
where the supremum is taken over all stopping times, and where the term progressively measurable means that the value of $G(t,Y)$ is based solely on $t$ and $(Y_s)_{s\in[0,t]}$. However, often the simpler problem
$$V = \sup_{0\le\tau\le T} \mathbb{E}\, Y_\tau \tag{1.17}$$
is studied, i.e., the value of $G$ at time $t$ is just $Y_t$. A solution of the optimal stopping problem (1.17) consists of the value $V$ and a stopping time $\tau$ for which the supremum is attained (if such a stopping time exists).

If $Y$ is a (possibly time-inhomogeneous) Markov process, one usually considers the augmented problem
$$V(x,t) = \sup_{t\le\tau\le T} \mathbb{E}_{x,t}\, Y_\tau, \tag{1.18}$$
where $\mathbb{E}_{x,t}$ denotes expectation under the assumption that $Y_t = x$. Then the solution to (1.17) follows via $V = V(Y_0, 0)$. If we assume that $Y$ is a continuous process, it appears natural to continue the observation as long as $Y_t < V(Y_t, t)$

and to stop immediately as soon as $Y_t = V(Y_t, t)$. Hence, we expect that the stopping time
$$\tau = \inf\{t \ge 0 : Y_t = V(Y_t, t)\}$$
is optimal in (1.17). In fact, this is true under some regularity conditions on $Y$ (see Theorem 2.4 in [39]). Finding the value function $V(x,t)$ for time-inhomogeneous Markov processes (such as α-Brownian bridges) is non-trivial and different approaches are discussed in the literature.

In [21] the problem
$$V = \sup_{0\le\tau\le 1} \mathbb{E}\, X_\tau^{(1)},$$
i.e., the optimal stopping problem for the Brownian bridge, was solved. We extend these results in Paper V by replacing the Brownian bridge $X^{(1)}$ with the α-Brownian bridge $X^{(\alpha)}$ for arbitrary $\alpha \ge 0$.
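For the Brownian bridge case $\alpha = 1$, the value function in (1.18) can also be approximated numerically by backward induction on a space-time grid; the following rough sketch (our own illustration, not the free-boundary approach of [21] or Paper V) uses the Gaussian transition kernel of the bridge:

```python
# Illustration only: backward induction for V(x, t) = sup_{t<=tau<=1} E_{x,t} X_tau
# when X is the Brownian bridge (alpha = 1). Given X_t = x, the increment over a
# step dt is Gaussian with mean x(1-t-dt)/(1-t) and variance dt(1-t-dt)/(1-t).
# The spatial grid is truncated to [-2, 2]; queries outside it are clamped.
import numpy as np

n_t, n_x, n_q = 200, 401, 15
t = np.linspace(0.0, 1.0, n_t + 1)
x = np.linspace(-2.0, 2.0, n_x)
nodes, weights = np.polynomial.hermite_e.hermegauss(n_q)   # quadrature for N(0,1)
weights = weights / weights.sum()

V = x.copy()                                      # at time 1 one must stop and collect x
for k in range(n_t - 1, -1, -1):                  # backward in time
    dt = t[k + 1] - t[k]
    shrink = (1.0 - t[k + 1]) / (1.0 - t[k])      # the bridge is pulled towards 0
    mean, std = x * shrink, np.sqrt(dt * shrink)
    cont = np.zeros(n_x)                          # expected value of continuing
    for z, w in zip(nodes, weights):
        cont += w * np.interp(mean + std * z, x, V)
    V = np.maximum(x, cont)                       # stop (collect x) or continue

print("approximate value V(0, 0) =", V[n_x // 2])
```

The induced rule stops as soon as the current value reaches the region where the stopping and continuation values coincide, in line with the description above.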


2. Summary of Papers

In this chapter we give a short summary of each paper included in the thesis.

2.1 Paper I

The first paper deals with conditioned Gaussian processes as introduced under the name generalized Gaussian bridges in Section 1.2.1. Let $X = (X_s)_{s\in[0,T]}$ be a continuous Gaussian process and let $A \subset C([0,T])^*$ be finite (we call the elements of $A$ conditions). Denote by $X^{(A)}$ the conditioned Gaussian process of $X$ with respect to the set of conditions $A$, and let $\mathbb{P}_X$ and $\mathbb{P}_{X^{(A)}}$ be the induced measures of $X$ and $X^{(A)}$ on $C([0,T])$.

Let $u : H \to C([0,T])$ be a generating operator of $X$ as introduced in Section 1.1.3. We show that the conditioned process $X^{(A)}$ admits a series expansion of the form
$$X_s^{(A)} = \sum_{n=1}^{\infty} \xi_n (u f_n)(s), \tag{2.1}$$
where $(\xi_n)_{n=1}^{\infty}$ is a sequence of i.i.d. standard normal random variables and $(f_n)_{n=1}^{\infty}$ is an orthonormal basis in the Hilbert space
$$H^{(A)} = \{h \in H : a(uh) = 0 \ \text{for all } a \in A\} \subset H.$$

From the series expansion (2.1) we deduce an anticipative representation for the conditioned process $X^{(A)}$, i.e., we express $X^{(A)}$ in terms of $X$, where, in general, the complete realization of $X(\omega)$ is required in order to compute $X_s^{(A)}(\omega)$, $0 \le s \le T$. Moreover, the series expansion (2.1), together with an application of the Cameron-Martin Theorem, leads to a simple criterion for determining the equivalence of the measures $\mathbb{P}_X$ and $\mathbb{P}_{X^{(A)}}$.

Next, we study non-anticipative representations of $X^{(A)}$. We show that, whenever $X$ is a solution of a stochastic differential equation of the form
$$dX_s = \alpha\,dW_s + \beta(s, X)\,ds, \quad X_0 = 0, \quad 0 \le s < T,$$
where $(W_s)_{s\in[0,T]}$ is standard Brownian motion and $\beta$ a progressively measurable functional, then $X^{(A)}$ solves a stochastic differential equation
$$dX_s^{(A)} = \alpha\,dW_s + \delta(s, X^{(A)})\,ds, \quad X_0^{(A)} = 0, \quad 0 \le s < T,$$

for some progressively measurable functional $\delta$. Moreover, if $X$ is a Markov process we determine $\delta$ explicitly.

After giving examples (e.g. the zero area Brownian bridge mentioned in Section 1.2.1), we finally study extensions to arbitrary separable Banach spaces and consider conditioning of Gaussian processes on $[0,\infty)$ and conditioning of Gaussian random measures.

2.2 Paper II

In Paper II we present a unified framework for the construction of selfsimilar generalized Gaussian random fields on $\mathbb{R}^d$. These fields are driven by Gaussian random balls white noise $M_\beta$, defined as a Gaussian random measure on $\mathbb{R}^d \times \mathbb{R}_+$ with control measure $\nu_\beta(dz) = \nu_\beta(dx, du) = dx\,u^{-\beta-1}du$ for some $\beta > 0$.

Given a point $z = (x,u) \in \mathbb{R}^d \times \mathbb{R}_+$ and a function $h : \mathbb{R}^d \to \mathbb{R}$, we define the shift and scale map $\tau_z h : \mathbb{R}^d \to \mathbb{R}$ by $\tau_z h(y) = h((y-x)/u)$. For a signed measure $\mu$ on $\mathbb{R}^d$ and an $m > 0$, let $(-\Delta)^{-m/2}\mu$ be the absolutely continuous measure with density
$$\big((-\Delta)^{-m/2}\mu\big)(x) = \int_{\mathbb{R}^d} |x-y|^{-(d-m)}\,\mu(dy), \quad x \in \mathbb{R}^d.$$

Denote the evaluation of a function $g : \mathbb{R}^d \to \mathbb{R}$ with respect to a signed measure $\eta$ on $\mathbb{R}^d$ by
$$\langle \eta, g \rangle = \int_{\mathbb{R}^d} g(y)\,\eta(dy)$$
and consider the Gaussian random field
$$X(\mu) = \int_{\mathbb{R}^d\times\mathbb{R}_+} \big\langle (-\Delta)^{-m/2}\mu,\, \tau_z h \big\rangle\, M_\beta(dz). \tag{2.2}$$
The notions of stationarity and self-similarity carry over in a natural way from generalized Gaussian random fields defined on $\mathcal{S}$ as in Section 1.2.2 to generalized Gaussian random fields defined on spaces of measures. We analyze for which choices of the parameters $m$ and $\beta$, of the measure $\mu$, and of the shot noise function $h$ the random field (2.2) is well defined. Moreover, we study its self-similarity properties in Theorem 2 of Paper II. Modifications of (2.2), where the driving Gaussian random measure is replaced by Gaussian white noise on $\mathbb{R}^d$, are studied in Theorem 3 of Paper II.

We then show how to extract Gaussian processes and Gaussian bridges from these generalized Gaussian random fields. For example, we discuss the extraction of fractional Brownian motion in different representations, the extraction of generalized Gaussian bridges in the sense of Section 1.2.1, and the extraction of Gaussian bridges and membranes on bounded domains $D \subset \mathbb{R}^d$, which are Gaussian random fields $X = (X_s)_{s\in\bar{D}}$ such that $X_s \to 0$ as $s \to s_0 \in \partial D$.

In a final section we study a second approach to the construction of Gaussian membranes, through a modification of the control measure of the driving Gaussian random measures. This yields random fields which are not selfsimilar in a global sense but in a local sense, as shown in Theorem 4 of Paper II.

2.3 Paper III

The statistical decision problem (1.15) is considered in Paper III, i.e., given an observed trajectory of an α-Brownian bridge $X^{(\alpha)}$ with unknown scaling parameter $\alpha$ until time $T < 1$, we want to test
$$H_0 : \alpha = \alpha_0 \quad \text{vs.} \quad H_1 : \alpha = \alpha_1.$$
We assume that $\alpha_0 + \alpha_1 \ge 1$. As pointed out in Section 1.3.2, in order to find the optimal test it is crucial to know the distribution of the likelihood ratio $\varphi_{\alpha_0,\alpha_1}(T)$ (as defined in (1.16)) under the null hypothesis.

We show that $\varphi_{\alpha_0,\alpha_1}(T)$ can be recast as (Proposition 1 of Paper III)
$$\varphi_{\alpha_0,\alpha_1}(T) = \exp\!\Big((\alpha_0 - \alpha_1)\big(\psi_{\alpha_0,\alpha_1}(T) + \ln(1-T)\big)/2\Big), \tag{2.3}$$
where
$$\psi_{\alpha_0,\alpha_1}(T) = \frac{(X_T^{(\alpha)})^2}{1-T} + (\alpha_0 + \alpha_1 - 1)\int_0^T \frac{(X_s^{(\alpha)})^2}{(1-s)^2}\,ds. \tag{2.4}$$
We then generalize the Karhunen-Loève Theorem (Theorem 2 of Paper III) and calculate the Karhunen-Loève expansion of $X^{(\alpha_0)}$ under the positive measure
$$\mu = \mu_{\alpha_0,\alpha_1,T}(ds) = \frac{\delta_T(ds)}{1-T} + (\alpha_0 + \alpha_1 - 1)\,\frac{\mathbb{I}(s \le T)\,ds}{(1-s)^2}$$
(Theorem 3 of Paper III), where $\delta_T$ denotes the point measure at $T$ and $\mathbb{I}$ the indicator function. This yields
$$X_s^{(\alpha_0)} = \sum_{n=1}^{\infty} Z_n e_n(s), \tag{2.5}$$

where the convergence in (2.5) is almost sure and uniform in $s \in [0,T]$ (see also Section 1.1.3). Here, $(Z_n)_{n=1}^{\infty}$ is a sequence of centered normal random variables with variances $\lambda_n$, with $(\lambda_n)_{n=1}^{\infty}$ being the decreasing sequence of eigenvalues in the Karhunen-Loève expansion of $X^{(\alpha_0)}$, and $(e_n)_{n=1}^{\infty}$ is the sequence of corresponding normalized continuous eigenfunctions. In particular, the sequence $(e_n)_{n=1}^{\infty}$ forms an orthonormal system in the Hilbert space $L^2([0,1],\mu)$. From this fact, (2.4), and (2.5) it follows that, under the null hypothesis,

$$\psi_{\alpha_0,\alpha_1}(T) = \big\|X^{(\alpha_0)}\big\|_{\mu}^2 = \sum_{n=1}^{\infty} Z_n^2 \stackrel{d}{=} \sum_{n=1}^{\infty} \lambda_n \xi_n^2, \tag{2.6}$$

where $(\xi_n)_{n=1}^{\infty}$ is a sequence of i.i.d. standard normal random variables. Random sums of the form (2.6) were studied by Smirnov [40] and Martynov [38], and concise formulas for their distributions were given. Based on the distribution of $\psi_{\alpha_0,\alpha_1}(T)$, the distribution of $\varphi_{\alpha_0,\alpha_1}(T)$ is obtained via (2.3) (Theorem 1 of Paper III).

In a final section we apply the presented method to hypothesis testing for Ornstein-Uhlenbeck processes (Theorem 4 and Theorem 5 of Paper III).

2.4 Paper IV

In Paper IV we study the bias of the maximum likelihood estimator for $\alpha$, given an observation of a sample path of the α-Brownian bridge $X^{(\alpha)}$ until time $T < 1$. From (2.3) and (2.4) it can be deduced that

$$\hat{\alpha}_{\mathrm{MLE}}(T) = -\frac{(X_T^{(\alpha)})^2}{2(1-T)\,I_T^{(\alpha)}} + \frac{1}{2} - \frac{\ln(1-T)}{2\,I_T^{(\alpha)}}, \quad \text{where } I_T^{(\alpha)} = \int_0^T \frac{(X_s^{(\alpha)})^2}{(1-s)^2}\,ds.$$

The moment generating function of a random variable contains information not only about positive but also about negative moments of that random variable (provided that they exist). In particular, in [18], formulas for the expected value of quotients of random variables based on their joint moment generating function are derived. The joint Laplace transform of $(X_T^{(\alpha)})^2$ and $I_T^{(\alpha)}$ was computed in [7]. Based on the mentioned results from [7] and [18], we compute the expected value $\mathbb{E}_\alpha[\hat{\alpha}_{\mathrm{MLE}}(T)]$ of $\hat{\alpha}_{\mathrm{MLE}}(T)$ (Proposition 1 of Paper IV), and show that $\alpha \mapsto \mathbb{E}_\alpha[\hat{\alpha}_{\mathrm{MLE}}(T)]$ is, as a mapping from $\mathbb{R}$ into $\mathbb{R}$, surjective (Proposition 2 and Proposition 3 of Paper IV).

Finally, we propose a bias-corrected maximum likelihood estimator for α and compare its bias and mean squared error with those of the maximum likelihood estimator and two further Bayesian estimators in a simulation study.

2.5 Paper V

In Section 1.3 we presented a currency dealer who wants to change Swedish crowns into Euro under the assumption that the exchange rate follows an α-Brownian bridge $X^{(\alpha)}$ with unknown parameter $\alpha$. After estimating $\alpha$, she will have to solve the optimal stopping problem

$$V(\alpha) = \sup_{0\le\tau\le 1} \mathbb{E}\, X_\tau^{(\alpha)}, \tag{2.7}$$
where the supremum is taken over all stopping times $\tau$ with $0 \le \tau \le 1$ almost surely. In Paper V we solve this problem by following the classical steps in

optimal stopping theory described for example in [39]. First, we augment problem (2.7) as in (1.18) and consider
$$V(x,t,\alpha) = \sup_{t\le\tau\le 1} \mathbb{E}_{x,t}\, X_\tau^{(\alpha)}. \tag{2.8}$$
Then we formulate a two-dimensional free boundary problem for the value function $V(\cdot,\cdot,\alpha)$, the solution of which is computed in Theorem 1 of Paper V and is a candidate solution for problem (2.8). The final step would be to verify that the found candidate is actually the correct solution of (2.8); however, this can be done in exactly the same way as in [21] (i.e., by an application of the Itô formula together with the optional sampling theorem).

Of particular interest is the limiting behavior of $V(\alpha)$ as $\alpha \downarrow 0$. Since $X^{(0)}$ is a Brownian motion (and thus a martingale) we have $V(0) = 0$. On the other hand, if we consider the stopping time
$$\tau = \begin{cases} 1/2, & \text{if } X_{1/2}^{(\alpha)} > 0,\\ 1, & \text{otherwise}, \end{cases}$$
then there exists a constant $c > 0$ such that
$$\mathbb{E}\, X_\tau^{(\alpha)} = \mathbb{E}\left[X_{1/2}^{(\alpha)}\, \mathbb{I}\big(X_{1/2}^{(\alpha)} > 0\big)\right] > c$$
for all $\alpha < 1$. Hence, we cannot expect $V(\cdot)$ to be continuous at 0. The details of the limiting behavior are given in Theorem 2 of Paper V.


3. Summary in Swedish

This thesis studies the modeling of Gaussian stochastic processes, in particular Gaussian bridges and membranes, and inference for the α-Brownian bridge. The work belongs to the area of mathematical statistics, but we also make use of tools from functional analysis throughout.

An important special case in all five papers is the Brownian bridge, which arises by conditioning Brownian motion to attain the value 0 at time 1. The process is particularly important in asymptotic statistics, since empirical distribution functions of independent random variables asymptotically behave like the Brownian bridge. In particular, a detailed analysis of the supremum of the Brownian bridge underlies the Kolmogorov-Smirnov test, which is a central method in theoretical statistics. In all papers we study different generalizations of the Brownian bridge.

In Paper I we treat continuous Gaussian processes whose realizations are controlled by a given conditioning functional A. We give series expansions for the unconditioned process X and the conditioned process X^(A). By applying the Cameron-Martin theorem, we use these expansions to prove an equivalence criterion for the distributions induced by X and X^(A). Under certain conditions on the process X and the functional A we derive explicit canonical representations for X^(A). Finally, we discuss generalizations to Gaussian random variables taking values in separable Banach spaces.

In Paper II we present a unified framework for constructing selfsimilar Gaussian random fields. These fields are indexed by Schwartz functions or by a broader class of signed measures on R^d, and can be parametrized by a so-called Hurst index. We show how an extraction method can be used to construct Gaussian processes on R^d by considering these fields for a family of measures indexed by R^d. In particular, we show how to obtain different representations of fractional Brownian motion, generalized Gaussian bridges, and Gaussian membranes. The latter are Gaussian processes defined on a bounded domain which attain the value zero on the boundary. We also study local self-similarity for such membranes.

The α-Brownian bridge is a generalization of the Brownian bridge which uses a scaling parameter α ≥ 0 that determines the rate of convergence towards 0 at time 1. Inference for the scaling parameter α based on a realization up to time T < 1 has been studied in the literature before, but only for values of T close to 1.

In Paper III we consider the statistical decision problem H_0: α = α_0 vs. H_1: α = α_1 for α-Brownian bridges. We show that the relevant likelihood ratio can be written as a squared L²-norm of the process under a certain measure μ. We generalize the Karhunen-Loève theorem and compute the Karhunen-Loève expansion of the α-Brownian bridge under the measure μ. Based on this expansion, we obtain the distribution of the likelihood ratio by an application of Smirnov's formula. This leads to optimal tests for all 0 < T ≤ 1. We also discuss generalizations of this approach to Ornstein-Uhlenbeck processes.

In Paper IV the bias of the maximum likelihood estimator for α is computed. It turns out that the estimator has a considerable bias when T is not close to 1. We therefore propose a bias-corrected maximum likelihood estimator and compare it with the uncorrected one and with two alternative Bayesian estimators in a simulation study.

In the concluding Paper V we consider optimal stopping problems for the α-Brownian bridge. We follow the classical approach of optimal stopping theory by studying a corresponding PDE with a free boundary. Solving the free boundary problem yields a candidate solution for the optimal stopping problem, which is then shown to be the sought solution. We also study how the discontinuity of the α-Brownian bridge at time 1, as α tends to 0, affects the solution of the optimal stopping problem.


Acknowledgements

I would like to express my deepest gratitude to my supervisor Ingemar Kaj for his support and encouragement throughout my graduate studies. It was only due to his patient guidance that my chaotic first drafts finally turned into (hopefully) coherent papers.

I am also indebted to my second supervisor Svante Janson, who, with his tremendous knowledge, always got me back on track whenever I got stuck (in particular with Paper I).

I would like to thank Allan Gut for always caring about me like a mentor, for reading most of my manuscripts, and for pointing my attention to refer- ence [46] which eventually led to Paper III.

Katja, thank you for your friendship and for sharing your lunch breaks with me. I wish you all the best in the final year of your graduate studies and beyond.

Måns, you have been a great friend, office mate, and co-author. Thank you very much for your companionship. I wish you only the best as well.

I would like to thank the past and present members of the MatStat group for creating a great working atmosphere. I enjoyed uncounted cakes with Fredrik, Ioannis, Jesper, Saeid, Silvelyn, and all the others.

I think the decision about which subject one specializes in is to a large extent influenced by the teachers one has. I am very grateful to Werner Linde from the University of Jena for awakening my interest in stochastic processes.

Finishing this thesis is only the preliminary end of a long journey. I would like to thank my parents and my sister for their love and support over all the years.

Finally, this thesis is dedicated to Bine and Milo. Being with you is my greatest joy and with you I can be myself completely. I am glad to have you by my side for everything to come.

References
