
Optimal Stopping with Discrete Costly Observations

U.U.D.M. Project Report 2018:33
Degree project in mathematics, 30 credits
Supervisor: Erik Ekström
Examiner: Kaj Nyström
July 2018
Department of Mathematics, Uppsala University

Contents

1 Introduction
1.1 An Overview of Optimal Stopping Theory
1.2 Discrete Time: Markovian Approach

2 Stopping Problem with Discrete Costly Observations
2.1 Formulation of the Problem
2.2 A Fixed Point Approach
2.3 How to Find the Fixed Point
2.3.1 Existence of the Fixed Point
2.3.2 An Example
2.4 When the Number of Observations Is Restricted
2.5 Stopping Between Observation Times Is Allowed

3 The Quickest Detection Problem
3.1 The Classical Quickest Detection Problem
3.2 Quickest Detection with Discrete Costly Observations
3.2.1 Formulation of the Problem
3.2.2 Properties of Π_t
3.2.3 A Fixed Point Approach

Abstract

We study an optimal stopping problem in which the underlying process can only be observed at discrete random time points, each observation carrying a constant cost. One must decide how to distribute the future observation times and when to stop the process. To solve this problem, we define an associated operator and prove that its unique fixed point characterizes the value function. We then provide an optimal strategy, expressed in terms of the fixed point, for choosing the future observation times and the stopping time. We use an iterative procedure to reach the fixed point, and provide a specific example of this procedure which has an exponential convergence rate.

We conclude that when at most a fixed finite number of observations may be made, the value functions are characterized by the elements of the sequence constructed by the iterative procedure. We further prove that when one is allowed to stop the process at any time, the problem reduces to the previous one.


Acknowledgements

I would like to give my deepest thanks to my supervisor Professor Erik Ekström for introducing me to this exciting topic, and for his invaluable guidance, stimulating discussions, constant help and encouragement whenever I needed it.


Chapter 1

Introduction

In general optimal stopping problems one chooses a stopping time in order to maximize the expected gain or minimize the expected loss. In this study we consider the case where information can only be obtained by making discrete, costly observations, and we must decide two things: when to observe and when to stop. The question then arises: how do we maximize our expected gain in this setting?

Several related problems have been discussed in the literature. In their book [1], Peskir and Shiryaev discuss a similar case where the observation times are deterministic and one pays for information in the future; in our case we pay for information today, observe immediately, and the observation times are random. In their article [3], Bayraktar and Kravitz discuss a quickest detection problem under discrete costly observations, which can be regarded as an instance of our problem. In their article [2], Dyrssen and Ekström solve the sequential testing problem under discrete costly observations; they show that the value function equals the unique fixed point of an associated operator, which can be constructed by an iterative procedure. We follow their approach when solving our problem but prove the uniqueness of the fixed point in a different way.


with our conclusions.

In Section 2.4 we follow the approach in [2] and conclude that, in the sequence constructed by iterating from the lower bound, every element can be interpreted as the value function when at most finitely many observations can be made. It is natural to ask what happens if we are allowed to stop the process at any time while still observing it discretely; in Section 2.5 we show that this reduces to the problem solved in Section 2.2, using the strong Markov property and the construction of two optimal stopping times. In Chapter 3 we discuss the quickest detection problem under discrete costly observations and the properties of its underlying process, and show that it can also be solved by our approach.

1.1 An Overview of Optimal Stopping Theory

Suppose that we have a filtered probability space (Ω, F, (F_t)_{t≥0}, P) and a stochastic process G = (G_t)_{t≥0} defined on it, where G_t is interpreted as the gain if observation is stopped at time t. We interpret F_t as the information available up to time t, and we require G_t to be F_t-measurable. We define the value function as

V = sup_{τ∈M} E[G_τ],

where M is the collection of stopping times τ with respect to (F_t)_{t≥0} satisfying P(τ(ω) < ∞) = 1; since G is adapted to (F_t), the stopped gain G_τ is well defined for each τ ∈ M.

Here we also assume that G_∞ = 0 and that [1]

E[sup_{t≥0} |G_t|] < ∞.

The optimal stopping problem involves two goals:

(i) find the value function V;

(ii) find an associated optimal stopping time τ at which the underlying process should be stopped.

For general optimal stopping problems there are two main approaches: the martingale approach and the Markovian approach. When choosing which method to use, one should consider the probabilistic structure of the stochastic process underlying the problem. The two structures can be thought of as "unconditional" and "conditional", respectively [1].

When the probabilistic structure of a process is determined by its unconditional finite-dimensional distributions, we refer to the martingale approach, since the techniques used to solve such problems are typically based on results from the theory of martingales. In the martingale approach one can use backward induction for discrete-time finite-horizon problems, and the essential supremum for discrete- and continuous-time problems with finite or infinite horizon. For details, see Chapter I in [1].

When the probabilistic structure of a process is determined by its conditional distributions, we refer to the Markovian approach, as one can then use the powerful tools from the theory of Markov processes. In the Markovian approach we assume that G_t(ω) has a Markovian representation, meaning that there exists a Markov process X_t such that G_t(ω) = G(t, X_t(ω)).

One chooses between the two methods depending on the features of the process underlying the problem. A Markov process can be viewed as a special case of a process determined by its unconditional distributions; conversely, if the state space is taken large enough, any process admits a Markovian representation for some measurable function G. For details on the two methods, see [1]. In this project we use the Markovian approach in discrete time, which we briefly introduce in Section 1.2.

1.2 Discrete Time: Markovian Approach

In this section we recall the standard Markovian approach in discrete time. Consider a time-homogeneous Markov chain X = (X_n)_{n≥0} defined on a filtered probability space (Ω, F, (F_n)_{n≥0}, P), taking values in the measurable space (R^d, B(R^d)). Under P_x the chain starts from x ∈ R^d, and x ↦ E_x[Z] is measurable for each integrable random variable Z.

Assume that we have a measurable function G : R^d → R which satisfies

E_x[sup_n |G(X_n)|] < ∞.

Consider first the finite horizon case, with value function

V^N(x) = sup_{0≤τ≤N} E_x[G(X_τ)],

where τ is a stopping time with respect to the natural filtration F_n^X = σ(X_k : 0 ≤ k ≤ n).

Introduce a sequence of random variables (S_n^N)_{0≤n≤N} defined recursively by

S_N^N = G(X_N),
S_n^N = max(G(X_n), E[S_{n+1}^N | F_n]), for n = N−1, ..., 0.

The key identity for this problem is

S_n^N = V^{N−n}(X_n).

To prove this identity we first prove by induction that

S_n^N = E_x[G(X_{τ_n^N}) | F_n],

where τ_n^N = inf{n ≤ k ≤ N : S_k^N = G(X_k)}. Then, by the Markov property,

S_n^N = E_x[G(X_{τ_n^N}) | F_n]
      = E_x[G(X_{n+τ_0^{N−n}∘θ_n}) | F_n]
      = E_x[G(X_{τ_0^{N−n}})∘θ_n | F_n]
      = E_{X_n}[G(X_{τ_0^{N−n}})]
      = V^{N−n}(X_n).

Now let

C_n = {x : V^{N−n}(x) > G(x)},
D_n = {x : V^{N−n}(x) = G(x)},

and define

τ_D = inf{0 ≤ n ≤ N : X_n ∈ D_n},

together with the operator T,

T F(x) = E_x[F(X_1)],

where F is measurable and F(X_1) is integrable with respect to P.

Then we have the following conclusions [1]:

(i) The value function V^n satisfies V^n(x) = max(G(x), T V^{n−1}(x)) for n = 1, ..., N, where V^0 = G.

(ii) The stopping time τ_D is optimal.

(iii) If τ∗ is an optimal stopping time, then τ_D ≤ τ∗ P_x-a.s. for all x.

(iv) The sequence (V^{N−n}(X_n))_{0≤n≤N} is the smallest supermartingale which dominates (G(X_n))_{0≤n≤N} under P_x for x fixed.

(v) The stopped sequence (V^{N−(n∧τ_D)}(X_{n∧τ_D}))_{0≤n≤N} is a martingale under P_x for every x.

The proof of (i) follows from the Markov property:

V^{N−n}(X_n) = max(G(X_n), E_x[V^{N−n−1}(X_{n+1}) | F_n])
            = max(G(X_n), E_x[V^{N−n−1}(X_1)∘θ_n | F_n])
            = max(G(X_n), E_{X_n}[V^{N−n−1}(X_1)])
            = max(G(X_n), T V^{N−n−1}(X_n)).

To prove (ii), one first shows that S_n^N ≥ E_x[G(X_τ) | F_n] for all stopping times n ≤ τ ≤ N. Taking expectations and the supremum over τ gives E_x[S_n^N] ≥ V_n^N, while on the other hand E_x[S_n^N] = E_x[G(X_{τ_D})] ≤ V_n^N, which together imply the optimality claimed in (ii).

To prove (iii), one shows that if τ∗ is an optimal stopping time then S_{τ∗}^N = G(X_{τ∗}) P_x-a.s. Assuming the contrary, the optional sampling theorem yields the strict inequality E_x[G(X_{τ∗})] < V_n^N, contradicting the optimality of τ∗.

In the proof of (iv), the supermartingale property follows directly from S_n^N = max(G(X_n), E[S_{n+1}^N | F_n]). If (Ṽ^{N−n}(X_n))_{0≤n≤N} is another supermartingale dominating (G(X_n))_{0≤n≤N}, one can prove by induction that Ṽ^{N−n}(X_n) ≥ V^{N−n}(X_n) almost surely. To prove (v), fix k and use an indicator argument. For the details of these proofs, see [1].

If we introduce an operator Q defined by

Q F(x) = max(G(x), T F(x)),

then we can write the value function V^n(x) as

V^n(x) = Q^n G(x).

This recursive relation provides a constructive method for finding V^N(x).
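To make the recursion concrete, here is a minimal Python sketch of the backward induction V^n = Q^n G for a Markov chain on a finite state space; the random-walk transition matrix and the gain function below are illustrative assumptions, not taken from the text.

    import numpy as np

    def backward_induction(G, P, N):
        """Return [V^0, ..., V^N] where V^n = max(G, T V^{n-1}), (T F)(x) = E_x[F(X_1)].

        G : gains g(x), one entry per state
        P : transition matrix, P[i, j] = P(X_1 = j | X_0 = i)
        """
        V = [G.copy()]
        for _ in range(N):
            V.append(np.maximum(G, P @ V[-1]))  # one application of the operator Q
        return V

    # Toy example (hypothetical): a symmetric random walk on {0, ..., 10},
    # reflected at the boundary, with gain G(x) = max(5 - x, 0).
    n = 11
    P = np.zeros((n, n))
    for i in range(n):
        P[i, max(i - 1, 0)] += 0.5
        P[i, min(i + 1, n - 1)] += 0.5
    G = np.maximum(5 - np.arange(n), 0).astype(float)

    V = backward_induction(G, P, N=20)
    print(V[-1])        # value function with horizon N = 20
    print(V[-1] > G)    # True marks the continuation set C_0

The stopping region at each step is read off as {x : V^{N−n}(x) = G(x)}, matching the sets D_n above.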

Now consider the infinite horizon case, with value function

V(x) = sup_τ E_x[G(X_τ)].

Similarly, define the continuation set and the stopping set

C = {x : V(x) > G(x)},
D = {x : V(x) = G(x)}.

Then we have the following conclusions [1]:

(i) The value function V satisfies V(x) = max(G(x), T V(x)), where the operator T is defined by T F(x) = E_x[F(X_1)].

(ii) Assume that P_x(τ_D < ∞) = 1, where τ_D = inf{t ≥ 0 : X_t ∈ D}; then the stopping time τ_D is optimal.

(iii) The value function V is the smallest superharmonic function (Dynkin's characterization: T V ≤ V) which dominates the gain function G.

(iv) The stopped sequence (V(X_{n∧τ_D}))_{n≥0} is a martingale under P_x for every x.

(v) We have a constructive method for finding the value function:

V(x) = lim_{n→∞} Q^n G(x).

The key identity here is S_n = V(X_n), which can be proved by letting N → ∞; the rest of the argument is analogous to the finite-horizon case.

Chapter 2

Stopping Problem with Discrete Costly Observations

2.1 Formulation of the Problem

Let us assume that we want to solve an optimal stopping problem in which the underlying stochastic process (X_t)_{t≥0} is a (strong) Markov process. We can choose to stop this process at some time point and then receive a payoff which depends on the value of the process at the time we stop. However, instead of observing the underlying process continuously, we can only observe it at discrete time points, and we can only stop the process at those observation times. We assume that we can choose the sequence of time points at which we observe, but the observations are not free: every observation comes with a cost, in our case a constant. Therefore we cannot simply make infinitely many observations to obtain full information; we should make as few observations as necessary to decide when to stop the process. Is there an optimal strategy, based on the current value of the underlying process, for choosing the sequence of observation times together with the time at which we stop the process and exercise the option, so as to maximize the expected discounted payoff at time 0? Our goal is to find such a strategy and the corresponding value function.

Let us first define the observation times. Take an infinite sequence of random times τ_1 ≤ τ_2 ≤ τ_3 ≤ ... such that each τ_i is σ{X_{τ_1}, ..., X_{τ_{i−1}}, τ_1, ..., τ_{i−1}}-measurable. In particular, τ_1 is deterministic, as it is measurable with respect to the trivial initial information.

Write τ̂ = {τ_k}_{k≥1} for the sequence, and define the information associated with the infinite sequence τ̂ up to time t as

G_t^τ̂ = σ{X_{τ_1}, ..., X_{τ_i}, τ_1, ..., τ_i; i = max{j : τ_j ≤ t}}.

From this definition, the continuous-time filtration (G_t^τ̂)_{t≥0} satisfies, for all j ≥ 1, that if τ_j ≤ t < τ_{j+1} then G_t^τ̂ = G_{τ_j}^τ̂. In other words, the information up to time t comes from all the observations made before t.

Definition 2.1.1. Define

T^τ̂ = {G^τ̂-stopping times},
S^τ̂ = {τ ∈ T^τ̂ : P(∃i : τ(ω) = τ_i(ω)) = 1}.

Remark 2.1.1. The set T^τ̂ is the collection of all G^τ̂-stopping times, and S^τ̂ is the collection of G^τ̂-stopping times taking values in the sequence τ̂. Obviously S^τ̂ ⊂ T^τ̂.

Let us denote the payoff function by g. If we choose to stop the process at time t, the payoff received at the moment we stop is g(X_t). Let g be a real-valued non-negative function bounded by M < ∞: g : R_+ → R_+, g ≤ M. Let V be our value function, which maximizes the expected discounted payoff at time 0:

Definition 2.1.2. Value Function V

V : [0, +∞) → [0, +∞),
V(x) = sup_{τ̂} sup_{τ∈S^τ̂} E_x[e^{−rτ} g(X_τ) − Σ_{i=1}^∞ c e^{−rτ_i} 1_{τ_i≤τ}],

where r ≥ 0 is the discount rate and c > 0 is the cost of each observation.

Remark 2.1.2. We do not require the stopping times to be finite; in other words, τ may take the value ∞ with positive probability. Suppose we have made n observations before τ and the optimal strategy is to never stop the process, i.e. τ = τ_{n+1} = ∞. Then no further observation is made at ∞. Therefore, if we assume that one must first pay the cost and then stop the process, it is sufficient to consider only finite stopping times when evaluating the value function.

Remark 2.1.3. From Definition 2.1.2 we see, by taking τ = 0, that V is bounded below by g(x); it is also bounded above by M.

2.2 A Fixed Point Approach

In our formulation of the problem we only allow stopping at the observation times: we can either exercise the option immediately after making an observation, or continue for a deterministic time and make another observation. Suppose there is an optimal strategy attaining the value function. Then, standing at each observation time of this optimal sequence, the next step should again be optimal. Note that the optimal strategy need not be unique, but such a sequence provides one. It is therefore natural to define an operator associated with this problem, so as to characterize our choices at each step. We now specify these choices, define the operator, and solve the problem step by step.

After making an observation, we have two choices:

(i) stop observing and stop the process immediately;

(ii) make another observation and face the same choices after making the next observation.

We now characterize these choices. First define a family F of functions:

Definition 2.2.1. Set F

F := {f Borel-measurable : [0, +∞) → [0, +∞), g ≤ f ≤ M}.

Now introduce an operator J to characterize the choices described above:

Definition 2.2.2. Operator J

Let J be an operator acting on F, defined by

(J f)(x) = max(g(x), sup_{t≥0} E_x[e^{−rt}(f(X_t) − c)]).

Let us assume that V̂ is a fixed point of the operator J (we prove existence in Section 2.3.1), i.e. V̂ = J V̂. We can thus write

V̂ : [0, +∞) → [0, +∞),
V̂(x) = max(g(x), sup_{t≥0} E_x[e^{−rt}(V̂(X_t) − c)]).

We will now prove that V̂ = V, where V is defined as in Definition 2.1.2, and then provide an optimal strategy. It follows, in particular, that the fixed point, if it exists, is unique.

Define a discrete-time filtration F^τ̂ indexed by k by setting F_k^τ̂ = G_{τ_k}^τ̂.

Lemma 2.2.1. Let τ̂ be any sequence of observation times. Then

Y_k := e^{−rτ_k} V̂(X_{τ_k}) − c Σ_{1≤i≤k} e^{−rτ_i}

is a supermartingale with respect to F_k^τ̂.

Proof. We have V̂ = J V̂, so for every x and every t ≥ 0,

V̂(x) ≥ E_x[e^{−rt}(V̂(X_t) − c)].   (2.1)

For k = 1, 2, ...,

E_x[Y_{k+1} | F_k^τ̂] = E_x[e^{−rτ_{k+1}} V̂(X_{τ_{k+1}}) − c Σ_{1≤i≤k+1} e^{−rτ_i} | G_{τ_k}^τ̂]
= E_x[e^{−r(τ_k+(τ_{k+1}−τ_k))}(V̂(X_{τ_{k+1}}) − c) | F_k^τ̂] − c Σ_{1≤i≤k} e^{−rτ_i}
= e^{−rτ_k} E_x[e^{−r(τ_{k+1}−τ_k)}(V̂(X_{τ_{k+1}}) − c) | F_k^τ̂] − c Σ_{1≤i≤k} e^{−rτ_i}   (since τ_k ∈ F_k^τ̂)
= e^{−rτ_k} E_{X_{τ_k}}[e^{−rτ_1'}(V̂(X_{τ_1'}) − c)] − c Σ_{1≤i≤k} e^{−rτ_i}   (by the strong Markov property, with τ_1' = τ_{k+1} − τ_k the next observation delay)
≤ e^{−rτ_k} V̂(X_{τ_k}) − c Σ_{1≤i≤k} e^{−rτ_i}   (by (2.1))
= Y_k.

Therefore {Y_k}_{k≥0} is a supermartingale with respect to F_k^τ̂.

To prove the following lemma, first define the continuation set C and stopping set D:

Definition 2.2.3. Continuation and Stopping Set

C := {x : V̂(x) > g(x)},
D := {x : x ∉ C}.

Then define a specific sequence of observation times τ̂∗ = {τ_1∗, τ_2∗, ...}, which we will prove to be an optimal strategy:

Definition 2.2.4. Observation Times

Define the observation times by the following recursive construction:

t∗(x) = inf{t : V̂(x) = E_x[e^{−rt}(V̂(X_t) − c)]} if x ∈ C, and t∗(x) = ∞ if x ∈ D;

τ_0∗ := 0,
τ_{k+1}∗ := τ_k∗ + t∗(X_{τ_k∗}), k ≥ 0.

Let m ∈ {0, 1, ...} be the index of the last finite observation time, i.e. m = max{k : τ_k∗ < ∞}, so that τ_m∗ < ∞ and τ_{m+1}∗ = ∞.

Lemma 2.2.2. Let τ̂∗ and m be as in Definition 2.2.4. Then

S_k := e^{−r(τ_k∗∧τ_m∗)} V̂(X_{τ∗_{k∧m}}) − c Σ_{1≤i≤k∧m} e^{−rτ_i∗}

is a martingale with respect to F_k^τ̂∗.

Proof. For k = 0:

E_x[S_1] = E_x[e^{−r(τ_1∗∧τ_m∗)} V̂(X_{τ∗_{1∧m}}) − c Σ_{1≤i≤1∧m} e^{−rτ_i∗}]
= 1_{m≥1} E_x[e^{−rτ_1∗}(V̂(X_{τ_1∗}) − c)] + 1_{m=0} V̂(x)
= 1_{m≥1} V̂(x) + 1_{m=0} V̂(x)
= V̂(x).   (2.2)

For k = 1, 2, ...:

E_x[S_{k+1} | F_k^τ̂∗]
= 1_{m>k} E_x[e^{−rτ∗_{k+1}} V̂(X_{τ∗_{k+1}}) − c Σ_{1≤i≤k+1} e^{−rτ_i∗} | F_k^τ̂∗] + 1_{m≤k}(e^{−rτ_m∗} V̂(X_{τ_m∗}) − c Σ_{1≤i≤m} e^{−rτ_i∗})
= 1_{m>k}(e^{−rτ_k∗} E_x[e^{−r(τ∗_{k+1}−τ_k∗)}(V̂(X_{τ∗_{k+1}}) − c) | F_k^τ̂∗] − c Σ_{1≤i≤k} e^{−rτ_i∗}) + 1_{m≤k}(...)   (since τ_k∗ ∈ F_k^τ̂∗)
= 1_{m>k}(e^{−rτ_k∗} E_{X_{τ_k∗}}[e^{−rτ_1∗}(V̂(X_{τ_1∗}) − c)] − c Σ_{1≤i≤k} e^{−rτ_i∗}) + 1_{m≤k}(...)   (by the strong Markov property)
= 1_{m>k}(e^{−rτ_k∗} V̂(X_{τ_k∗}) − c Σ_{1≤i≤k} e^{−rτ_i∗}) + 1_{m≤k}(e^{−rτ_m∗} V̂(X_{τ_m∗}) − c Σ_{1≤i≤m} e^{−rτ_i∗})   (by the computation in (2.2) applied at X_{τ_k∗})
= e^{−r(τ_k∗∧τ_m∗)} V̂(X_{τ∗_{k∧m}}) − c Σ_{1≤i≤k∧m} e^{−rτ_i∗}
= S_k.

Therefore (S_k)_{k≥0} is a martingale with respect to F_k^τ̂∗.

Theorem 2.2.3. Optimal Strategy

Assume that V̂ is a fixed point of J. Then V = V̂. Moreover, τ̂ = {τ_1∗, τ_2∗, ...}, τ = τ_m∗ provides an optimal strategy.

Proof. First we prove that V̂ ≥ V. Since g ≤ V̂, for any sequence τ̂ and any stopping time τ ∈ S^τ̂,

E_x[e^{−rτ} g(X_τ) − c Σ_{τ_i≤τ} e^{−rτ_i}] ≤ E_x[e^{−rτ} V̂(X_τ) − c Σ_{τ_i≤τ} e^{−rτ_i}] ≤ V̂(x),

by the Optional Sampling Theorem and the supermartingale property proved in Lemma 2.2.1. Taking the supremum over τ ∈ S^τ̂ and then over τ̂ gives V(x) ≤ V̂(x).

Next we prove that V̂ ≤ V. By the martingale property of Lemma 2.2.2 and the Optional Sampling Theorem,

V̂(x) = E_x[e^{−rτ_m∗} V̂(X_{τ_m∗}) − c Σ_{1≤i≤m} e^{−rτ_i∗}]
= E_x[e^{−rτ_m∗} g(X_{τ_m∗}) − c Σ_{1≤i≤m} e^{−rτ_i∗}]
≤ sup_{τ̂} sup_{τ∈S^τ̂} E_x[e^{−rτ} g(X_τ) − c Σ_{0<τ_i≤τ} e^{−rτ_i}] ≤ V(x),

where the second equality holds because X_{τ_m∗} ∈ D, so that V̂(X_{τ_m∗}) = g(X_{τ_m∗}). Hence the fixed point V̂ equals the value function V, and the strategy τ̂ = {τ_1∗, τ_2∗, ...}, τ = τ_m∗ is optimal.

Remark 2.2.1. In the proof of Theorem 2.2.3 we take suprema only over finite stopping times when applying the Optional Sampling Theorem; by Remark 2.1.2 it is sufficient to consider all finite stopping times.

Corollary 2.2.3.1. Uniqueness of the Fixed Point

The fixed point V̂ is unique.

Proof. Suppose V̂_1 and V̂_2 are two fixed points of J. Then V̂_1 = V and V̂_2 = V by Theorem 2.2.3, and hence V̂_1 = V̂_2.

2.3 How to Find the Fixed Point

2.3.1 Existence of the Fixed Point

We proved in the previous section that if a fixed point of J exists, it is unique. In this section we prove the existence of such a fixed point, suggest a way to find it, and quantify how fast this procedure converges.

Let f and F be as in Definition 2.2.1. We now show that iteration from the lower bound reaches the fixed point.

Lemma 2.3.1. The operator J is monotone: if f_1, f_2 ∈ F with f_1 ≤ f_2, then J f_1 ≤ J f_2.

Proof.

J f_1(x) = max(g(x), sup_{t≥0} E_x[e^{−rt}(f_1(X_t) − c)])
        ≤ max(g(x), sup_{t≥0} E_x[e^{−rt}(f_2(X_t) − c)]) = J f_2(x),

by monotonicity of the expectation operator.

Now we define a sequence of functions recursively by

f_0 = g,
f_{n+1} = J f_n, n ≥ 0.

Lemma 2.3.2. {f_n}_{n≥0} is an increasing sequence.

Proof. Clearly f_1 = J f_0 ≥ g = f_0. Now assume f_k ≥ f_{k−1} for some k; then f_{k+1} = J f_k ≥ J f_{k−1} = f_k by Lemma 2.3.1. The statement follows by induction.

Lemma 2.3.3. For all f ∈ F, J f ∈ F. Moreover, f_∞ := lim_{n→∞} f_n exists and f_∞ ∈ F.

Proof. By the definition of J we have (J f)(x) ≥ g(x) directly, and the upper bound (J f)(x) ≤ M is clear since f ≤ M and c > 0. The sequence {f_n}_{n≥0} is increasing and bounded, and thus has a finite pointwise limit; for the lower and upper bounds we have

f_∞ ≥ f_0 = g,
f_∞ = lim_{n→∞} f_n = sup_n f_n ≤ M,

and thus f_∞ ∈ F.

Theorem 2.3.4. Existence of the Fixed Point

The function f_∞ ∈ F is a fixed point of J.

Proof. For every n we have f_∞ ≥ f_{n+1} ≥ f_n, so by Lemma 2.3.1

J f_∞ ≥ J f_{n+1} ≥ J f_n = f_{n+1}.

Letting n → ∞ on both sides gives J f_∞ ≥ f_∞.

For the other direction, fix x and let

t_∞ = inf{t : E_x[e^{−rt}(f_∞(X_t) − c)] attains its maximum}.

Then

f_{n+1}(x) = J f_n(x) = max(g(x), sup_{t≥0} E_x[e^{−rt}(f_n(X_t) − c)]) ≥ max(g(x), E_x[e^{−rt_∞}(f_n(X_{t_∞}) − c)]).

Letting n → ∞ on both sides and using dominated convergence,

f_∞(x) ≥ max(g(x), lim_{n→∞} E_x[e^{−rt_∞}(f_n(X_{t_∞}) − c)]) = max(g(x), E_x[e^{−rt_∞}(f_∞(X_{t_∞}) − c)]) = J f_∞(x).

So f_∞ ≥ J f_∞, and thus f_∞ is a fixed point of J.

Corollary 2.3.4.1. f_∞ is the unique fixed point of J, and f_∞ = V.

Proof. This follows directly from Theorem 2.3.4 together with Theorem 2.2.3 and Corollary 2.2.3.1.

Remark 2.3.1. We can also start the iteration from the upper bound, i.e.

f_0 = M,
f_{n+1} = J f_n, n ≥ 0.

Then f_∞ := lim_{n→∞} f_n is again the fixed point of J; the proof is similar. Moreover, by the monotonicity of J, starting the iteration from any function f ∈ F we eventually reach the fixed point.

2.3.2 An Example

In this section we take the perpetual American put option as an example. Assume that the underlying asset (X_t)_{t≥0} follows a geometric Brownian motion solving

dX_t = µ X_t dt + σ X_t dW_t,  X_0 = x,   (2.3)

where W_t is a standard Brownian motion, µ > 0, σ > 0, and |µ − σ²/2| < ∞. Our goal is to find an optimal strategy which maximizes the expected discounted payoff at time 0. Define a collection of functions Q by

Q := {q Borel-measurable : [0, +∞) → [0, +∞), (K − x)^+ ≤ q(x) ≤ K},

where the upper bound K is the strike price. Note that here we require K > c > 0, since otherwise the problem becomes trivial. Looking at the associated optimal stopping problem, the value function has the expression

V(x) = sup_{τ̂} sup_{τ∈S^τ̂} E_x[e^{−rτ}(K − X_τ)^+ − Σ_{i=1}^∞ c e^{−rτ_i} 1_{τ_i≤τ}].

Write V_Ame = sup_τ E_x[e^{−rτ}(K − X_τ)^+] for the value of a perpetual American put option under the physical measure with continuous information; then V is bounded above by V_Ame. Note that to find the fixed point we can start the iteration from either the lower or the upper bound. Starting from the lower bound, define

q_0 = (K − x)^+,
q_n = J q_{n−1}, n ≥ 1.

Property of the Value Function

Proposition 2.3.1. For all q ∈ Q, J q ∈ Q; q_∞ := lim_{n→∞} q_n exists and q_∞ ∈ Q; and q_∞ is the unique fixed point of J in Q.

The proof follows directly from Corollary 2.3.4.1.

Lemma 2.3.5. The operator preserves monotonicity: q_n is decreasing in x for all n ≥ 0, and q_∞ is decreasing in x.

Proof. Assume 0 < x_1 < x_2; then q_0(x_1) ≥ q_0(x_2). By the Markov property, with probability 1 the sample path starting from x_2 dominates the sample path starting from x_1 (if the two paths ever meet, they coincide afterwards). So for 0 < x_1 < x_2,

E_{x_1}[e^{−rt}(q_0(X_t) − c)] ≥ E_{x_2}[e^{−rt}(q_0(X_t) − c)],

and thus

q_1(x_1) = J q_0(x_1) = max((K − x_1)^+, sup_{t≥0} E_{x_1}[e^{−rt}(q_0(X_t) − c)])
        ≥ max((K − x_2)^+, sup_{t≥0} E_{x_2}[e^{−rt}(q_0(X_t) − c)]) = J q_0(x_2) = q_1(x_2).

Similarly, q_n(x_1) ≥ q_n(x_2) by induction. Taking limits on both sides, the fixed point is decreasing in x. Therefore the operator J preserves monotonicity.

Lemma 2.3.6. The operator preserves convexity: q_n is convex in x for all n ≥ 0, and q_∞ is convex.

Proof. For n = 0, the map x ↦ sup_{t≥0} E_x[e^{−rt}((K − X_t)^+ − c)] is convex, since the value of a European option is convex in x (its second derivative with respect to x is positive); (K − x)^+ is obviously convex, and the maximum of convex functions is convex, so q_1 is convex. For n ≥ 1, x ↦ E_x[e^{−rt}(q_n(X_t) − c)] is convex since taking the expectation preserves convexity. Therefore q_n is convex in x for all n. Taking limits, the pointwise limit of convex functions is convex, so q_∞ is convex.

Lemma 2.3.7. q_n is uniformly Lipschitz(1) on R_+ for all n ≥ 0.

Proof. It is obvious that q_0 is Lipschitz(1). By Lemma 2.3.6, the convexity of q_n implies that q_n is locally Lipschitz for all n. Assume 0 < a < x < y < b < c; then by the convexity of q_n,

(q_n(a) − q_n(0))/(a − 0) ≤ (q_n(y) − q_n(x))/(y − x) ≤ (q_n(c) − q_n(b))/(c − b),

so

|(q_n(y) − q_n(x))/(y − x)| ≤ max(|(q_n(a) − q_n(0))/(a − 0)|, |(q_n(c) − q_n(b))/(c − b)|).

Since q_n is decreasing in x, the maximum equals |(q_n(a) − q_n(0))/(a − 0)| for a < K, and

|(q_n(y) − q_n(x))/(y − x)| ≤ |(q_n(a) − q_n(0))/(a − 0)| ≤ |((K − a) − K)/(a − 0)| = 1,

using that q_n is bounded between (K − x)^+ and K and that q_n(0) = K for all n. Therefore the Lipschitz constant is 1 globally.

Proposition 2.3.2. q_∞ converges to V_Ame as the cost c decreases to 0, and q_∞ converges to (K − x)^+ as the cost c increases to the strike K.

Proof. First we claim that when c = 0 we have V = V_Ame, as we are then exposed to the continuous filtration for free. Define the sequence {C_n}_{n≥1} with C_n = K/n, so C_n → 0 as n → ∞. Let

V_n(x) = max((K − x)^+, sup_t E_x[e^{−rt}((K − X_t)^+ − C_n)]).

Take ε > 0 and choose n_0 ∈ N such that K/n_0 = ε; then for all n > n_0,

|V_n(x) − V_Ame(x)| ≤ |e^{−rt∗} C_n| < K/n_0 = ε,

where t∗ denotes the optimal horizon in the supremum.

For the second claim, note that as n → ∞, K − C_n → K. Let

V_n(x) = max((K − x)^+, sup_t E_x[e^{−rt}((K − X_t)^+ − (K − C_n))]).

Take ε > 0; for all n > n_0 we have

|V_n(x) − (K − x)^+| = |max((K − x)^+, sup_t E_x[e^{−rt}((K − X_t)^+ − K + K/n)]) − (K − x)^+| ≤ |e^{−rt∗} K/n| < K/n_0 = ε,

so the proof is complete.

We can see that the fixed point q_∞, as well as all the intermediate iterates, are decreasing, convex, bounded below by (K − x)^+ and above by the value of the perpetual American put option, by definition. In addition, the sequence q_n is increasing.

A Numerical Example

As the numerical example below shows, these properties are indeed what we observe when starting the iteration from the lower bound.

[Figure 2.1: Finding the fixed point by iteration from the lower bound; the plot shows the fixed point and g(x) = (K − x)^+ against the current price x of the underlying asset, with the regions marked stopping–continuation–stopping. Parameters: K = 150, x = 100, µ = 0.15, r = 0.1, σ = 0.1, c = 0.01, T = 10, number of iterations N = 40, number of spatial steps Nx = 100.]

Property of the Optimal Strategy

We can see from Figure 2.1 that when the current value x of the underlying process is small, we are in the stopping region and make no observations. As x exceeds a certain threshold, we are in the continuation region and therefore make an observation. However, as x grows larger still, we are in the stopping region again, as the gap between the fixed point and the lower bound can no longer compensate for the cost of an observation. Therefore, for the particular choice of parameters used in Figure 2.1, the structure of the optimal strategy with respect to the starting point x is: stop, continue, stop.

Does the structure of the optimal strategy always behave like this, or is this particular to this specific choice of parameters? We will prove below that the structure depends on the drift of the underlying process and on the constants K and c. We will also prove that the optimal time until the next observation is always strictly larger than some ε > 0, and we will then establish the rate of convergence of the iteration.

At each step of the iteration we can compute the next iterate deterministically. We have

q_n(x) = J q_{n−1}(x) = max((K − x)^+, sup_{t≥0} E_x[e^{−rt}(q_{n−1}(X_t) − c)]) = max((K − x)^+, sup_{T≥0}(u(T, x) − e^{−rT} c)),

where u(t, x) solves the following PDE, by the Feynman–Kac formula:

u_t(t, x) − µx u_x(t, x) − (1/2)σ²x² u_xx(t, x) + r u(t, x) = 0,
u(0, x) = q_{n−1}(x).

At every step, starting from the known function q_{n−1}, we take the largest u(T, x) over all possible time horizons T, and further take the maximum with (K − x)^+ to obtain the next q_n. Note that there exists a unique non-negative solution at each iteration, and that this is also the method we use to perform our numerical examples.

Now let us prove that there is always a lower bound for the continuation set, for all iterations.

Lemma 2.3.8. For all n ≥ 1 there exists a_n with 0 < a_n ≤ K such that x ≥ a_n for all x in the continuation set C.

Proof. For x < K, x is strictly larger than some a_n > b_0, where b_0 is the point at which the value function of the perpetual American option under continuous observation (under the physical measure) starts to differ from K − x. This follows from the fact that q_n is bounded above by V_Ame by definition: wherever V_Ame(x) = K − x, the squeeze (K − x)^+ ≤ q_n(x) ≤ V_Ame(x) forces q_n(x) = g(x), i.e. x ∈ D. The claim then follows from the convexity and monotonicity properties and the fact that q_n is bounded below by (K − x)^+.

Lemma 2.3.9. The upper bound of the continuation set depends on the drift of the underlying process X_t:

Case 1: µ − σ²/2 > 0. For all n ≥ 1 there exists b_n ≥ K such that x ≤ b_n for all x ∈ C; that is, C = {x : a_n ≤ x ≤ b_n}.

Case 2: µ − σ²/2 < 0. No such b_n exists, and C = {x : x ≥ a_n}.

Case 3: µ − σ²/2 = 0. When K > 2c the situation is as in Case 2; when K ≤ 2c it is as in Case 1.

Proof. The behaviour of the iterates depends on the asymptotic behaviour of the underlying process. We first claim that, by the law of the iterated logarithm, for the process solving (2.3):

1. when µ − σ²/2 > 0, X_t → ∞ as t → ∞ with probability 1;

2. when µ − σ²/2 < 0, X_t → 0 as t → ∞ with probability 1;

3. when µ − σ²/2 = 0, X_t has no limit as t → ∞, with probability 1.

For any x_0 > 0,

lim_{t→∞} P(X_t > x_0) = lim_{t→∞} P(x e^{(µ−σ²/2)t+σW_t} > x_0)
= lim_{t→∞} P((µ − σ²/2)t + σW_t > ln(x_0/x))
= lim_{t→∞} P(W_t/√t > (ln(x_0/x) − (µ − σ²/2)t)/(σ√t))
= lim_{t→∞} Φ(((µ − σ²/2)t − ln(x_0/x))/(σ√t))
= 1 if µ − σ²/2 > 0;  0 if µ − σ²/2 < 0;  1/2 if µ − σ²/2 = 0,

where Φ is the CDF of the standard normal distribution.

For x > K, observing is only worthwhile if E_x[e^{−rT}(q_n(X_T) − c)] > 0 for some T, which (up to discounting) means E_x[q_n(X_T)] > c for some T. By the monotonicity of q_n proved above, if there exists some b_n such that E_{b_n}[q_n(X_T)] = c, then for x > b_n the continuation value is non-positive and q_n(x) = (K − x)^+ = 0. In other words, we want to know whether there exists some b_n such that, starting above b_n, the expected payoff minus cost is negative at every future time.

When µ − σ²/2 > 0, the term (µ − σ²/2)t dominates σW_t and X_t → ∞ with probability 1, so q_n(X_t) − c ≤ V_Ame(X_t) − c → −c. We can write

lim_{t→∞} E_x[e^{−rt}(q_n(X_t) − c)] ≤ lim_{t→∞} E_x[e^{−rt}(V_Ame(X_t) − c)],

since V_Ame(x) − c is a supermartingale. Taking x sufficiently large that V_Ame(x) < c, we can write

lim_{t→∞} E_x[e^{−rt}(q_n(X_t) − c)] < 0,

which by the monotonicity property means that such a b_n exists. Therefore the structure of the optimal strategy is D → C → D.

When µ − σ²/2 < 0, X_t → 0 with probability 1, and lim_t q_n(X_t) − c ≥ lim_t g(X_t) − c = lim_{t→∞}(K − X_t)^+ − c = K − c > 0. By dominated convergence, t ↦ E_x[e^{−rt}(q_n(X_t) − c)] converges to 0 but stays positive, never reaching 0. Therefore there is no stopping region above the continuation region: we always observe once x exceeds some point, and the structure of the optimal strategy is D → C.

When µ − σ²/2 = 0, the process X_t has no limit with probability 1: for large t, loosely speaking, it is arbitrarily large with probability 1/2 and arbitrarily close to 0 with probability 1/2. The iterates then satisfy the bounds

e^{−rt}((K − X_t)^+ − c) ≤ e^{−rt}(q_n(X_t) − c) ≤ e^{−rt}(V_Ame(X_t) − c)
⟹ E[e^{−rt}((K − X_t)^+ − c)] ≤ E[e^{−rt}(q_n(X_t) − c)] ≤ E[e^{−rt}(V_Ame(X_t) − c)]
⟹ E[e^{−rt}(q_n(X_t) − c)] ≈ e^{−rt}(K/2 − c) for large t.

We see that when K > 2c the quantity e^{−rt}(K/2 − c) is positive, so the situation is the same as in Case 2. When K < 2c, as in Case 1, there exists t such that E[e^{−rt}(q_n(X_t) − c)] is negative. When K = 2c, the continuation and stopping regions coincide after the first time q_n(x) − c = 0.

Now suppose that we are in the continuation set; how does the time until the next observation change with x? Define

t∗(x, q_n) = inf{t : q_n(x) = E_x[e^{−rt}(q_{n−1}(X_t) − c)]},
t∗(x) = inf_n t∗(x, q_n).

As we know from Lemma 2.3.7, all q_n are uniformly Lipschitz(1), which means the initial condition cannot grow faster than linearly, so the speed at which u grows in t cannot be too fast either: for small T − t, we know that in this backward heat equation |u(t, x) − u(T, x)| is bounded by C̃√(T − t), for some constant C̃. However, for a fixed x in the continuation region, u at the optimal observation time t∗(x, q_n) has to exceed the discounted one-step cost c e^{−rt∗(x,q_n)}. We thus obtain a lower bound on the optimal observation time for all x and all q_n. We can make this result precise as follows:

Lemma 2.3.10. For all x ∈ C, the optimal observation time t∗(x, q_n) satisfies

t∗(x, q_n) ≥ (c/(C̃x))²

for some constant C̃.

Proof. Assume x ∈ C. Fix a time horizon T and let u_T(0, x) = E_x[e^{−rT} q_{n−1}(X_T)]. We have

|u_T(0, x) − u_0(0, x)| = |E_x[e^{−rT} q_{n−1}(X_T)] − q_{n−1}(x)|
≤ e^{−rT} E_x[|q_{n−1}(X_T) − q_{n−1}(x)|] + q_{n−1}(x)|1 − e^{−rT}|,

where the second term satisfies q_{n−1}(x)|1 − e^{−rT}| ≤ C̃_1 T for some C̃_1 and small T. For the first term, by the Lipschitz property of q_{n−1},

E_x[|q_{n−1}(X_T) − q_{n−1}(x)|] ≤ E_x[|X_T − x|] = x E[|e^{(µ−σ²/2)T+σW_T} − 1|].

By the Cauchy–Schwarz inequality,

E[|e^{(µ−σ²/2)T+σW_T} − 1|] ≤ (E[(e^{(µ−σ²/2)T+σW_T} − 1)²])^{1/2} = (e^{(2µ+σ²)T} − 2e^{µT} + 1)^{1/2} ≤ C̃_2 √T

for small T, for some C̃_2. Let t∗ = t∗(x, q_n) be the smallest T at which u_T(0, x) − e^{−rT} c attains its maximum. Since q_n ≥ q_{n−1} on C, we have u_{t∗}(0, x) − c e^{−rt∗} = q_n(x) ≥ q_{n−1}(x) = u_0(0, x), so

c e^{−rt∗(x,q_n)} ≤ |u_{t∗(x,q_n)}(0, x) − u_0(0, x)| ≤ x(C̃_2 √(t∗(x,q_n)) + C̃_1 t∗(x,q_n)) ≤ x C̃ √(t∗(x,q_n))

for small t∗(x, q_n) and some constant C̃, which gives t∗(x, q_n) ≥ (c/(C̃x))².

Lemma 2.3.11. On the set C, t∗(x) > ε for some ε > 0.

Proof. We know from Lemma 2.3.9 that if the continuation region has an upper bound b_n, the first optimal observation time satisfies t∗(x, q_n) ≥ (c/(C̃ b_n))².

When the continuation region has no upper bound, suppose the process starts from a large x ≫ K and define the function f : t ↦ E_x[e^{−rt}(q_n(X_t) − c)]. At t = 0 we have f(0) = q_n(x) − c < 0 for x large. As t increases, by the almost sure continuity of the sample paths, f(t) increases until it reaches 0 at some t_0(x), and the first optimal observation time satisfies t∗(x) > t_0(x). Defining ε(x) = inf_n t∗(x, q_n), we obtain the lower bound for all t∗(x, q_n), and it is clearly positive.

Lemma 2.3.12. J is a contraction mapping on ({q_n}_{n≥0}, ‖·‖_∞).

Proof. Let q_i, q_j ∈ {q_n}_{n≥0}. We claim that

‖J q_i − J q_j‖_∞ ≤ β ‖q_i − q_j‖_∞

for some β ∈ [0, 1). Fix x. Then

|J q_i(x) − J q_j(x)|
= |max((K − x)^+, sup_t E_x[e^{−rt}(q_i(X_t) − c)]) − max((K − x)^+, sup_t E_x[e^{−rt}(q_j(X_t) − c)])|
≤ |sup_t E_x[e^{−rt}(q_i(X_t) − c)] − sup_t E_x[e^{−rt}(q_j(X_t) − c)]|
= |E_x[e^{−rt_i∗}(q_i(X_{t_i∗}) − c)] − E_x[e^{−rt_j∗}(q_j(X_{t_j∗}) − c)]|,

where t_i∗ = t∗(x, q_i) is the first time E_x[e^{−rt}(q_i(X_t) − c)] attains its maximum, and t_j∗ is defined similarly. Without loss of generality, assume t_i∗ ≤ t_j∗; then

|E_x[e^{−rt_i∗}(q_i(X_{t_i∗}) − c)] − E_x[e^{−rt_j∗}(q_j(X_{t_j∗}) − c)]|
≤ |E_x[e^{−rt_i∗}(q_i(X_{t_i∗}) − q_j(X_{t_i∗}))]|
≤ e^{−rt_i∗} ‖q_i − q_j‖_∞ ≤ e^{−rε} ‖q_i − q_j‖_∞,

where ε > 0 is the lower bound for all t∗(x) from Lemma 2.3.11.

Remark 2.3.2. By the Banach fixed point theorem, since J is a contraction mapping on ({q_n}_{n≥0}, ‖·‖_∞) with modulus β = e^{−rε} < 1, the sequence q_n is Cauchy and its limit lies in the closure of {q_n}_{n≥0}. The fixed point exists and is unique, which agrees with our earlier claim. Furthermore, we can start the iteration from an arbitrary element q ∈ Q to obtain the fixed point.

Remark 2.3.3. Alternatively, we can apply Blackwell's sufficient conditions for a contraction: J satisfies the monotonicity and discounting conditions.

Blackwell's sufficient conditions: let X ⊂ R^l and let B(X) be the space of bounded functions f : X → R. An operator T is a contraction with modulus β if:

1. [Monotonicity] f, g ∈ B(X) with f ≤ g for all x ∈ X implies T f ≤ T g for all x ∈ X.

2. [Discounting] There exists β ∈ (0, 1) such that T(f + a)(x) ≤ T f(x) + βa for all f ∈ B(X), a ≥ 0, x ∈ X.

Since monotonicity of the operator was proved in Lemma 2.3.1, it suffices to check the discounting condition here. Let a ≥ 0:

J(q_n + a)(x) − J(q_n)(x)
= max((K − x)^+, sup_t E_x[e^{−rt}(q_n(X_t) + a − c)]) − max((K − x)^+, sup_t E_x[e^{−rt}(q_n(X_t) − c)])
≤ sup_t E_x[e^{−rt}(q_n(X_t) + a − c)] − sup_t E_x[e^{−rt}(q_n(X_t) − c)]
= e^{−rt∗(x)} a ≤ e^{−rε} a.

Theorem 2.3.13. Rate of Convergence

The sequence q_n converges uniformly to q_∞, and the rate of convergence is exponential.

Proof. First we show that q_1 − q_0 is bounded by some constant β_0:

q_1 − q_0 = max((K − x)^+, sup_t E_x[e^{−rt}(q_0(X_t) − c)]) − q_0(x)
= max((K − x)^+, sup_t E_x[e^{−rt}((K − X_t)^+ − c)]) − (K − x)^+
≤ e^{−rt∗}(K − c) =: β_0,

since (K − x)^+ ≥ 0 and (K − X_t)^+ − c ≤ K − c, where t∗ is the first time E_x[e^{−rt}((K − X_t)^+ − c)] attains its maximum. We now claim that

q_n ≤ q_∞ ≤ q_n + (β_0/(1 − β)) β^n,

where β is the modulus of the contraction mapping J. Since q_n is an increasing sequence, the left inequality holds. For the right one,

q_∞ − q_n = q_∞ − J q_{n−1} = J q_∞ − J q_{n−1} ≤ β(q_∞ − q_{n−1}) = β(q_∞ − q_n + q_n − q_{n−1}),

so

q_∞ − q_n ≤ (β/(1 − β))(q_n − q_{n−1}) ≤ (β^n/(1 − β))(q_1 − q_0) ≤ (β_0/(1 − β)) β^n.

Therefore the rate of convergence is exponential.
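As a quick numerical illustration of the bound q_∞ − q_n ≤ (β_0/(1 − β)) β^n (all parameter values below are hypothetical, not taken from the thesis):

    import numpy as np

    # Iterations needed so that (beta0 / (1 - beta)) * beta**n <= tol,
    # with beta = exp(-r * eps); r, eps, beta0 and tol are hypothetical.
    r, eps, beta0, tol = 0.1, 0.5, 1.0, 1e-6
    beta = np.exp(-r * eps)                                  # ~0.951
    n = int(np.ceil(np.log(tol * (1 - beta) / beta0) / np.log(beta)))
    print(n)                                                 # 337 iterations suffice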

A Numerical Example

[Figure 2.2: The optimal time t∗ until the next observation on the set C, plotted against the starting point x for costs c = 0.02, 0.04, 0.06, 0.08, 0.1, 0.3, 0.5, 0.8, with reference line t∗ = 0.1.]

We can see from the figure that:

(i) on the set C, the optimal time for the next observation is bounded below by some positive number;

(ii) as the cost c grows larger, the optimal time also grows larger;

(iii) as the cost c grows larger, the continuation region shrinks.

[Figure 2.3: Rate of convergence; log(q_∞ − q_n) plotted against the iteration number n.]

We can see that ln(q_∞ − q_n) appears to be linear in n, which means the sequence converges at an exponential rate, in agreement with Theorem 2.3.13.

2.4 When the Number of Observations Is Restricted

It is natural to ask what the optimal strategy would be if the number of observations we can make were restricted. We now show that the fixed point iteration still yields the optimal strategy in this case, with the number of iterations matching the maximal number of observations.

Let us define the value function V_n when the underlying process can be observed at most n times:

Definition 2.4.1.

V_0(x) = g(x),
V_n(x) = sup_{τ̂} sup_{τ∈S^τ̂, τ≤τ_n} E_x[e^{−rτ} g(X_τ) − Σ_{i=1}^∞ c e^{−rτ_i} 1_{τ_i≤τ}], n ≥ 1.

Theorem 2.4.1. V_n = f_n.

Proof. We argue by induction over n; the base case V_0 = g = f_0 holds by definition. Given a pair (τ̂, τ), let τ_k' = τ_{k+1} − τ_1 and τ' = τ − τ_1. Then

V_n(x) = sup_{(τ̂,τ), τ≤τ_n} E_x[e^{−rτ} g(X_τ) − Σ_{i=1}^∞ c e^{−rτ_i} 1_{τ_i≤τ}]
      = sup_{(τ̂,τ)} E_x[e^{−r(τ∧τ_n)} g(X_{τ∧τ_n}) − Σ_{i=1}^∞ c e^{−rτ_i} 1_{τ_i≤τ∧τ_n}].

We have

E_x[e^{−r(τ∧τ_n)} g(X_{τ∧τ_n}) − Σ_i c e^{−rτ_i} 1_{τ_i≤τ∧τ_n}]
= 1_{τ<τ_1} g(x) + E_x[1_{τ≥τ_1} e^{−rτ_1}(E_{X_{τ_1}}[e^{−r(τ'∧τ'_{n−1})} g(X_{τ'∧τ'_{n−1}}) − Σ_i c e^{−rτ_i'} 1_{τ_i'≤τ'∧τ'_{n−1}}] − c)]
≤ 1_{τ<τ_1} g(x) + E_x[1_{τ≥τ_1} e^{−rτ_1}(V_{n−1}(X_{τ_1}) − c)]
= 1_{τ<τ_1} g(x) + E_x[1_{τ≥τ_1} e^{−rτ_1}(f_{n−1}(X_{τ_1}) − c)]   (by the induction hypothesis)
≤ max(g(x), sup_{t>0} E_x[e^{−rt}(f_{n−1}(X_t) − c)])
= J f_{n−1}(x) = f_n(x),

using the strong Markov property at τ_1. Taking the supremum over (τ̂, τ) on the left-hand side gives V_n ≤ f_n.

For the other direction, fix x and let t_n = t∗(x, f_{n−1}) be the optimal horizon at step n; on the set C we have J f_{n−1}(x) = E_x[e^{−rt_n}(f_{n−1}(X_{t_n}) − c)]. Given ε > 0, let τ be ε-optimal in V_{n−1}(X_{t_n}) with τ ≤ τ_{n−1}, i.e.

V_{n−1}(X_{t_n}) ≤ E_{X_{t_n}}[e^{−rτ} g(X_τ) − Σ_i c e^{−rτ_i} 1_{τ_i≤τ}] + ε.

Let τ' = t_n + τ and τ'_{k+1} = t_n + τ_k. Then

J f_{n−1}(x) − ε = E_x[e^{−rt_n}(f_{n−1}(X_{t_n}) − c)] − ε
= E_x[e^{−rt_n}(V_{n−1}(X_{t_n}) − c)] − ε
≤ E_x[e^{−rt_n}(E_{X_{t_n}}[e^{−rτ} g(X_τ) − Σ_i c e^{−rτ_i} 1_{τ_i≤τ}] − c)]
= E_x[e^{−rτ'} g(X_{τ'}) − Σ_i c e^{−rτ_i'} 1_{τ_i'≤τ'}]
≤ V_n(x).

Since ε > 0 was arbitrary, f_n ≤ V_n on C; on the stopping set both sides equal g(x), and the theorem follows.

So when the maximal number of observations is fixed at n, we can still use the fixed point iteration to find the value function: it equals the nth iterate f_n. Similarly, after performing n iterations we already have a corresponding optimal strategy, as the short sketch below illustrates.
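In terms of the Monte Carlo sketch of Section 2.3.1 (reusing its hypothetical names apply_J, xs, g_vals, r, c, t_grid and sample_X), restricting to at most n observations simply means truncating the iteration:

    # Value function with at most 5 observations: V_5 = f_5 by Theorem 2.4.1.
    f = g_vals.copy()
    for _ in range(5):
        f = apply_J(f, xs, g_vals, r, c, t_grid, sample_X)
    V5 = f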

2.5 Stopping Between Observation Times Is Allowed

In practice we often face problems where one observes the underlying process discretely but may stop it continuously. The question is whether the optimal strategy can be found in a similar way. We now prove that the case where stopping between observation times is allowed can always be reduced to the case where it is not. We first characterize this situation and motivate our strategy.

After making an observation, we examine the information gathered so far and decide whether to continue observing the underlying process. We have three choices:

(i) stop observing and stop the process immediately;

(ii) stop observing and stop the process at a deterministic future time;

(iii) make another observation and face the same choices after making the next observation.

Let us define the value function W associated with the case where stopping is allowed at any time:

W : R_+ → R_+,
W(x) = sup_{τ̂} sup_{τ∈T^τ̂} E_x[e^{−rτ} g(X_τ) − Σ_{i=1}^∞ c e^{−rτ_i} 1_{τ_i≤τ}].

Let the set F be defined as in Definition 2.2.1. We now introduce a new function h:

Definition 2.5.1. Function h

h : R_+ → R_+,
h(x) = sup_{t≥0} E_x[e^{−rt} g(X_t)].

Let us now introduce an operator H which acts on F:

Definition 2.5.2. Operator H

Let H be an operator acting on F such that for all f ∈ F,

(H f)(x) = max(h(x), sup_{t>0} E_x[e^{−rt}(f(X_t) − c)]).

Let us assume that Ŵ is a fixed point of the operator H (existence follows as in Section 2.3.1), i.e. Ŵ = H Ŵ. We can thus write

Ŵ(x) = max(h(x), sup_{t>0} E_x[e^{−rt}(Ŵ(X_t) − c)]).

Now we will prove that Ŵ = W, and again we will provide an optimal strategy. To reduce to the previous case, we first prove the following lemma:

Lemma 2.5.1. For a bounded function g : R_+ → R_+, define the function h by h(x) = sup_{t≥0} E_x[e^{−rt} g(X_t)]. Let τ̂ = {τ_1, τ_2, ...} be given. Then

sup_{τ∈T^τ̂} E_x[e^{−rτ} g(X_τ) − Σ_{i=1}^∞ c e^{−rτ_i} 1_{τ_i≤τ}] = sup_{τ'∈S^τ̂} E_x[e^{−rτ'} h(X_{τ'}) − Σ_{i=1}^∞ c e^{−rτ_i} 1_{τ_i≤τ'}],

where T^τ̂ and S^τ̂ are defined as in Definition 2.1.1.

Proof. Let τ_opt ∈ T^τ̂ attain the supremum on the left-hand side, and let τ' = τ_k, where τ_k ≤ τ_opt < τ_{k+1}, be the last observation time before τ_opt; define t' := inf{t : E_{X_{τ'}}[e^{−rt} g(X_t)] attains its maximum}. Since G_t^τ̂ = G_{τ_k}^τ̂ for τ_k ≤ t < τ_{k+1}, conditionally on G_{τ_k}^τ̂ the delay τ_opt − τ_k is deterministic, and no observation cost accrues on (τ_k, τ_opt], so {i : τ_i ≤ τ_opt} = {i : τ_i ≤ τ_k}. By the strong Markov property,

E_x[e^{−rτ'} h(X_{τ'}) − Σ_i c e^{−rτ_i} 1_{τ_i≤τ'}]
= E_x[e^{−rτ_k} sup_{t≥0} E_{X_{τ_k}}[e^{−rt} g(X_t)] − Σ_i c e^{−rτ_i} 1_{τ_i≤τ_k}]
= E_x[e^{−rτ_k} E_{X_{τ_k}}[e^{−rt'} g(X_{t'})] − Σ_i c e^{−rτ_i} 1_{τ_i≤τ_k}]
= E_x[E_x[e^{−r(t'+τ_k)} g(X_{t'+τ_k}) | G_{τ_k}^τ̂] − Σ_i c e^{−rτ_i} 1_{τ_i≤τ_k}]
= E_x[e^{−r(t'+τ_k)} g(X_{t'+τ_k}) − Σ_i c e^{−rτ_i} 1_{τ_i≤τ_k}]
≥ E_x[e^{−rτ_opt} g(X_{τ_opt}) − Σ_i c e^{−rτ_i} 1_{τ_i≤τ_opt}]
= sup_{τ∈T^τ̂} E_x[e^{−rτ} g(X_τ) − Σ_i c e^{−rτ_i} 1_{τ_i≤τ}],

since t' is the optimal deterministic continuation from X_{τ_k}. Hence the right-hand side dominates the left-hand side.

For the other direction, let τ'_opt ∈ S^τ̂ attain the supremum on the right-hand side, define t' := inf{t : E_{X_{τ'_opt}}[e^{−rt} g(X_t)] attains its maximum}, and set τ = τ'_opt + t' ∈ T^τ̂, so that after the last observation the process is stopped at a deterministic delay and no further observation cost is paid. Then

E_x[e^{−rτ} g(X_τ) − Σ_i c e^{−rτ_i} 1_{τ_i≤τ}]
= E_x[e^{−rτ'_opt} E_{X_{τ'_opt}}[e^{−rt'} g(X_{t'})] − Σ_i c e^{−rτ_i} 1_{τ_i≤τ'_opt}]
= E_x[e^{−rτ'_opt} h(X_{τ'_opt}) − Σ_i c e^{−rτ_i} 1_{τ_i≤τ'_opt}]
= sup_{τ'∈S^τ̂} E_x[e^{−rτ'} h(X_{τ'}) − Σ_i c e^{−rτ_i} 1_{τ_i≤τ'}],

so the left-hand side dominates the right-hand side. We conclude that LHS = RHS, with the optimal τ and τ' related as constructed in the proof. Note that the optimal τ' must be the last observation time before the optimal τ.

This is the key property for proving that the 3-choice problem reduces to the 2-choice problem. Taking the fixed point Ŵ of H as in Definition 2.5.2, and using the supermartingale and martingale properties proved in Lemma 2.2.1 and Lemma 2.2.2 respectively (with h in place of g), we can prove Ŵ = W and give an optimal strategy.

Let us again define the continuation set C' and stopping set D':

Definition 2.5.3. Continuation and Stopping Set

C' := {x : Ŵ(x) > sup_{t≥0} E_x[e^{−rt} g(X_t)]} = {x : Ŵ(x) > h(x)},
D' := {x : x ∉ C'}.

Then define a sequence of observation times τ̂∗' = {τ_1∗', τ_2∗', ...} and a stopping time τ∗':

Definition 2.5.4. Observation Times and Stopping Time

As in Definition 2.2.4, define t∗'(x) = inf{t : Ŵ(x) = E_x[e^{−rt}(Ŵ(X_t) − c)]} for x ∈ C' and t∗'(x) = ∞ for x ∈ D', and set

τ_0∗' := 0,
τ_{k+1}∗' := τ_k∗' + t∗'(X_{τ_k∗'}), k ≥ 0.

Let m ∈ {0, 1, ...} be the index of the last finite observation time, i.e. m = max{k : τ_k∗' < ∞}, so that τ_m∗' < ∞ and τ_{m+1}∗' = ∞. Then

τ∗' := τ_m∗' + inf{t : Ŵ(X_{τ_m∗'}) = E_{X_{τ_m∗'}}[e^{−rt} g(X_t)]}.

Theorem 2.5.2. Allowing Stopping at Any Time

The fixed point Ŵ equals W. Moreover, {τ̂∗', τ∗'} provides an optimal strategy.

Proof. The proof that Ŵ ≥ W is the same as in Theorem 2.2.3. The other direction follows directly from Lemma 2.5.1. Since the observation sequence in Definition 2.5.4 is defined as in Definition 2.2.4, and the stopping time is defined as in Lemma 2.5.1, the claim that {τ̂∗', τ∗'} provides an optimal strategy follows from Theorem 2.2.3 and Lemma 2.5.1.

Remark 2.5.1. From the definitions we immediately see that J f ≤ H f for all f ∈ F, which means that, based on the same information, the payoff is higher when more freedom is given in the choice of stopping time.

Moreover, by simply substituting h for g in the operator J we obtain H, and we can then prove that Ŵ = W by the same reasoning as in Section 2.2. The difference is that the function space is now bounded below by h, which is obvious from the definition of H.
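A sketch of this substitution, again reusing the hypothetical names from the Monte Carlo sketch of Section 2.3.1: first approximate h(x) = sup_{t≥0} E_x[e^{−rt} g(X_t)] on the grid (with the supremum over deterministic times taken over the same finite horizon grid, and t = 0 covered by the final maximum with g), then iterate J with g replaced by h:

    import numpy as np

    # h(x) = sup_{t>=0} E_x[e^{-rt} g(X_t)], approximated on the finite t_grid.
    h_vals = np.array([
        max(np.exp(-r * t) * np.interp(sample_X(x, t, 2000), xs, g_vals).mean()
            for t in t_grid)
        for x in xs
    ])
    h_vals = np.maximum(h_vals, g_vals)   # include t = 0

    w = h_vals.copy()                     # H is "J with g replaced by h"
    for _ in range(40):
        w = apply_J(w, xs, h_vals, r, c, t_grid, sample_X)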

Chapter 3

The Quickest Detection Problem

In this chapter we consider the quickest detection problem with discrete costly observations. We first discuss the classical quickest detection problem, then formulate it in our setting. We will see that it can be regarded as a specific instance of the previous problem, which we can still solve by the fixed point approach, finding an optimal strategy accordingly.

3.1 The Classical Quickest Detection Problem

In the classical quickest detection problem [1], one observes the trajectory of a Brownian motion (X_t)_{t≥0} whose drift changes from 0 to µ at some random time θ, so that X solves

dX_t = µ 1_{θ≤t} dt + σ dW_t,   (3.1)

where µ > 0, σ > 0, and (W_t)_{t≥0} is a standard Brownian motion. The random time θ is independent of W with a known distribution; usually it is assumed that θ takes the value 0 with known probability π and, given θ > 0, follows an exponential distribution with known parameter λ > 0, i.e.

P(θ = 0) = π,  P(θ > t) = (1 − π) e^{−λt}.   (3.2)

The random time θ is called the "disorder time". Our goal in the detection problem is to detect as quickly as possible that the change has happened, while trying not to declare the change before it actually appears. In other words:

(i) after time θ, the delay in declaring the disorder should be as short as possible;

(ii) a false alarm (declaring the disorder before it actually appears) should happen as rarely as possible.

By continuously observing the process X_t, our information is the natural filtration generated by X: F_t^X = σ{X_s : 0 ≤ s ≤ t}, and the goal is to identify an F^X-stopping time τ serving as our declaration that the drift has changed. Let us introduce the posterior probability process Π_t:

Π_t = P(θ ≤ t | F_t^X),  Π_0 = π.

Naturally, we ask this stopping time τ to be close to θ in some sense. We can, for instance, form the value function for the quickest detection problem in the Bayesian setting as

V(π) = inf_τ (P(τ < θ) + c E_π[(τ − θ)^+]),

where the term P(τ < θ) is the probability of a "false alarm", E_π[(τ − θ)^+] is the delay penalty, and c > 0 is a known constant representing the relative weight between those two terms in our valuation, or the running cost of delay.

We can rewrite the value function in terms of Π_t:

V(π) = inf_τ (P(τ < θ) + c E_π[(τ − θ)^+])
= inf_τ (E_π[1 − Π_τ] + c E_π[(τ − θ)^+])
= inf_τ (E_π[1 − Π_τ] + c E_π[1_{τ>θ} ∫_θ^τ dt])
= inf_τ (E_π[1 − Π_τ] + c E_π[∫_0^∞ 1_{τ>t} 1_{θ≤t} dt])
= inf_τ (E_π[1 − Π_τ] + c E_π[∫_0^τ Π_t dt])
= inf_τ E_π[1 − Π_τ + c ∫_0^τ Π_t dt].

It is verified in [1] that the posterior probability process Π_t is a diffusion process.

3.2 Quickest Detection with Discrete Costly Observations

In this section we formulate the quickest detection problem with discrete costly observations. Assume that the underlying process (X_t)_{t≥0} is a Brownian motion whose drift changes from 0 to µ at a random time θ, where the dynamics of X and the distribution of θ are as described in Section 3.1. Our goal is to provide the best estimate of the unknown disorder time θ. Suppose that P(θ = 0) = π is known at time 0, and that the starting point of X is known, X_0 = x.

Similarly, instead of observing the process X_t continuously, we can now only observe it at discrete random time points. Based on the information we have, we look for an optimal time to stop observing the process and declare the disorder, i.e. the best estimate of θ. Note that the information we have depends on the sequence of observation times we choose.

3.2.1 Formulation of the Problem

Let us assume that the underlying process X_t follows (3.1) and that θ satisfies (3.2).

Define a sequence of observation times τ̂ = {τ_k}_{k≥1} in the same way as in Chapter 2, so that each element of the sequence is measurable with respect to the filtration generated by all previous observations. Define the filtration associated with the specific sequence τ̂ up to time t as

G_t^τ̂ = σ{X_{τ_1}, X_{τ_2}, ..., X_{τ_i}; τ_1, τ_2, ..., τ_i; τ_i ≤ t}.

Define the sets of stopping times S^τ̂ and T^τ̂ as in Definition 2.1.1.

Let us introduce the posterior probability process Π_t under discrete observations:

Π_t = P(θ ≤ t | G_t^τ̂),  Π_0 = π.

Assume that each observation carries a fixed cost d > 0. We want to choose a sequence of observation times, as well as a stopping time, so that when we stop, the probability of a "false alarm" is as small as possible, the delay in making a late declaration is as short as possible, and as few observations as possible are made, since the cost increases with the number of observations. We can thus write the value function U, which minimizes the sum of the three factors described, as a function of the known parameter π:

Definition 3.2.1. Value Function U

U : [0, 1] → [0, 1],
U(π) = inf_{τ̂} inf_{τ∈S^τ̂} E_π[1 − Π_τ + c ∫_0^τ Π_t dt + d Σ_{i=1}^∞ 1_{τ_i≤τ}],

where c represents the relative weight between a false alarm and a delay (or the running cost of delay), and d > 0 is the observation cost.

Remark 3.2.1. Again we do not require the stopping time τ to be finite; in other words, τ = ∞ can be an optimal stopping time. However, one will never make an observation at infinity. Therefore, in the detection problem we can consider the cases where the optimal stopping time is finite and infinite separately; in the following sections it is sufficient to consider finite stopping times.

Remark 3.2.2. From Definition 3.2.1 we see that U is bounded below by 0 and above by 1 − π.

Remark 3.2.3. From Definition 3.2.1 we see that the filtration is generated by the discrete observations of X_t, while the value function is written in terms of Π_t, which means that the information set in the value function comes from another process, and the observed process is not itself Markovian in the relevant sense. In the later sections we prove that this specific problem can nevertheless be solved by the fixed point approach, since the process Π_t has the Markov property and is measurable with respect to the observation filtration in the sense made precise in Lemma 3.2.2.

Remark 3.2.4. From Definition 3.2.1 we see that we formulate the problem in the discrete-observation setting; however, as we will see later, the process Π_t is piecewise continuous between observations. This keeps consistency with the main problem. Later we will see that this formulation indeed works, and we give a general requirement on the continuous part of the underlying process.

3.2.2 Properties of Π_t

We now study the properties of the posterior probability process Π_t. First define the conditional likelihood ratio process that θ has already appeared. For any sequence τ̂:

Definition 3.2.2. Conditional Likelihood Ratio Process

Φ_t^τ̂ = P(θ ≤ t | G_t^τ̂) / P(θ > t | G_t^τ̂),  Φ_0^τ̂ = π/(1 − π).

By Appendix A in [3], we can derive a recursive formula for the posterior process Φ^τ̂ at any time t:

Φ_t^τ̂ = j(Δτ_n, Φ_{τ_{n−1}}^τ̂, ΔX_{τ_n}/√Δτ_n)          if t = τ_n,
Φ_t^τ̂ = e^{λ(t−τ_{n−1})}(Φ_{τ_{n−1}}^τ̂ + 1) − 1          if τ_{n−1} ≤ t < τ_n,   (3.3)

where Δτ_n = τ_n − τ_{n−1}, ΔX_{τ_n} = X_{τ_n} − X_{τ_{n−1}}, and

j(Δt, φ, z) = exp{µz√Δt + (λ − µ²/2)Δt} φ + ∫_0^{Δt} λ exp{(λ + µz/√Δt)u − µ²u²/(2Δt)} du.

Here we follow the derivation in [3] to obtain (3.3). Since we define this recursive relation for any sequence τ̂, it is easiest to assume that the sequence is deterministic; taking conditional expectations iteratively, the result for observations at stopping times follows directly.

Suppose X̃ is a Brownian motion on some probability space (Ω̃, F̃, P̃) whose drift changes from 0 to µ at time θ̃, where P̃(θ̃ = 0) = π and P̃(θ̃ ∈ dt | θ̃ > 0) = λe^{−λt} dt. Let 0 = t_0 < t_1 < ... be a deterministic infinite sequence of observation times, and let

L_t(u, x_0, x_1, ...) := Π_{l≥1, t_l≤t} (2π(t_l − t_{l−1}))^{−1/2} exp{−(x_l − x_{l−1} − µ(t_l − t_{l−1}∨u)^+)² / (2(t_l − t_{l−1}))}.

Then

P̃(X̃_{t_l} ∈ dx_l for all l ≥ 1, t_l ≤ t | θ̃ = u) = L_t(u, x_0, x_1, ...) Π_{l≥1, t_l≤t} dx_l;

that is, L_t(u, ·) is the conditional likelihood of the observations X̃_{t_0}, X̃_{t_1}, ... given θ̃ = u.

The process X̃ can be constructed by a change of measure from another process X which is a standard Brownian motion on another probability space (Ω, F, P), i.e. whose drift never changes. Assume there is another random variable θ on the same probability space with P(θ = 0) = π and P(θ ∈ dt | θ > 0) = λe^{−λt} dt. Then we can write

P(X_{t_l} ∈ dx_l for all l ≥ 1, t_l ≤ t) = L_t(∞, x_0, x_1, ...) Π_{l≥1, t_l≤t} dx_l = Π_{l≥1, t_l≤t} (2π(t_l − t_{l−1}))^{−1/2} exp{−(x_l − x_{l−1})² / (2(t_l − t_{l−1}))} dx_l,

since the drift stays 0.

Let F = (F_t)_{t≥0} be the discrete-time filtration generated by the observations of X at the infinite sequence 0 = t_0 < t_1 < ..., and let G = (G_t)_{t≥0}, where G_t = F_t ∨ σ(θ).

Define P̃ on G_∞ by

dP̃/dP = Z_t(θ) := L_t(θ)/L_t(∞) = exp{ Σ_{l=1}^∞ 1_{t_l≤t} ( (X_{t_l} − X_{t_{l−1}}) µ (t_l − θ∨t_{l−1})^+ / (t_l − t_{l−1}) − µ²((t_l − θ∨t_{l−1})^+)² / (2(t_l − t_{l−1})) ) }.

So we can see that, under P̃, conditionally on θ, the increments X_{t_l} − X_{t_{l−1}} for l ≥ 1 are independent and normally distributed with mean µ(t_l − θ∨t_{l−1})^+ and variance t_l − t_{l−1}.

We can easily see from the expression that Z_0(θ) = 1, so P̃ and P are identical on G_0, and θ has the same distribution under P̃ and P. Under P̃, X has the distribution of a standard Brownian motion whose drift changes from 0 to µ at θ.

By Bayes' theorem, we define

Φ_t := P̃(θ ≤ t | F_t) / P̃(θ > t | F_t) = E^P[Z_t(θ) 1_{θ≤t} | F_t] / E^P[Z_t(θ) 1_{θ>t} | F_t].

On the set {θ > t} we have (t_l − θ∨t_{l−1})^+ = (t_l − θ)^+ = 0 for all l ≥ 1 with t_l ≤ t, so Z_t(θ) = 1 on {θ > t}, and since θ is independent of F_t under P,

E^P[Z_t(θ) 1_{θ>t} | F_t] = P(θ > t) = (1 − π) e^{−λt},

which gives

Φ_t = (e^{λt}/(1 − π)) E^P[Z_t(θ) 1_{θ≤t} | F_t].

We can write

E^P[Z_t(θ) 1_{θ≤t} | F_t] = π Z_t(0) + (1 − π) ∫_0^t λe^{−λu} Z_t(u) du.

Assume that t_{n−1} ≤ t < t_n for some n ≥ 1. Then Z_t(u) = Z_{t_{n−1}}(u) for all u, and Z_{t_{n−1}}(u) = 1 for all t_{n−1} ≤ u < t_n. So

E^P[Z_t(θ) 1_{θ≤t} | F_t]
= π Z_{t_{n−1}}(0) + (1 − π) ∫_0^{t_{n−1}} λe^{−λu} Z_{t_{n−1}}(u) du + (1 − π) ∫_{t_{n−1}}^t λe^{−λu} Z_{t_{n−1}}(u) du
= (1 − π) e^{−λt_{n−1}} Φ_{t_{n−1}} + (1 − π)(e^{−λt_{n−1}} − e^{−λt}).

It follows that

Φ_t = (e^{λt}/(1 − π)) E^P[Z_t(θ) 1_{θ≤t} | F_t] = e^{λ(t−t_{n−1})} Φ_{t_{n−1}} + e^{λ(t−t_{n−1})} − 1 = e^{λ(t−t_{n−1})}(Φ_{t_{n−1}} + 1) − 1,

which is the case t_{n−1} ≤ t < t_n in (3.3).

Now we derive Φ_{t_n} conditionally on Φ_{t_{n−1}}. For u ≥ t_{n−1} we have Z_{t_{n−1}}(u) = 1, so

Z_{t_n}(u) = Z_{t_{n−1}}(u) exp{ (X_{t_n} − X_{t_{n−1}}) µ (t_n − u∨t_{n−1})^+ / (t_n − t_{n−1}) − µ²((t_n − u∨t_{n−1})^+)² / (2(t_n − t_{n−1})) }.

So we can write

Φ_{t_n} = (e^{λt_n}/(1 − π)) E^P[Z_{t_n}(θ) 1_{θ≤t_n} | F_{t_n}]
= (e^{λt_n}/(1 − π)) [ (π Z_{t_{n−1}}(0) + (1 − π) ∫_0^{t_{n−1}} λe^{−λu} Z_{t_{n−1}}(u) du) exp{(X_{t_n} − X_{t_{n−1}})µ − (µ²/2)(t_n − t_{n−1})}
+ (1 − π) ∫_{t_{n−1}}^{t_n} λe^{−λu} Z_{t_{n−1}}(u) exp{ (X_{t_n} − X_{t_{n−1}}) µ (t_n − u)/(t_n − t_{n−1}) − µ²(t_n − u)²/(2(t_n − t_{n−1})) } du ]
= exp{(X_{t_n} − X_{t_{n−1}})µ − (µ²/2)(t_n − t_{n−1})} e^{λ(t_n−t_{n−1})} Φ_{t_{n−1}}
+ ∫_{t_{n−1}}^{t_n} λ e^{λ(t_n−u)} exp{ (X_{t_n} − X_{t_{n−1}}) µ (t_n − u)/(t_n − t_{n−1}) − µ²(t_n − u)²/(2(t_n − t_{n−1})) } du.

Applying the change of variables w = t_n − u, and writing z = ΔX_{t_n}/√Δt_n, we obtain

Φ_{t_n} = exp{µz√Δt_n + (λ − µ²/2)Δt_n} Φ_{t_{n−1}} + ∫_0^{Δt_n} λ exp{(λ + µz/√Δt_n)w − µ²w²/(2Δt_n)} dw = j(Δt_n, Φ_{t_{n−1}}, z),

which is the case where t coincides with an observation time in (3.3).

Lemma 3.2.1. For any sequence τ̂, the posterior probability process Π_t^τ̂ is a piecewise deterministic Markov process.

Proof. By (3.3), the current value of the process Φ^τ̂ at an observation time τ_k depends only on the previous value Φ_{τ_{k−1}}^τ̂, the increment of τ̂ and the increment of X; it does not depend on the path before τ_{k−1}. Between observation times, the evolution of the process is deterministic (exponential). Therefore the process has random jump times, between which it is deterministic, i.e. it is a piecewise deterministic Markov process; the same then holds for Π^τ̂ = Φ^τ̂/(1 + Φ^τ̂).

Note that this process is not time-homogeneous.

Lemma 3.2.2. Define a discrete-time filtration F^τ̂ indexed by k by setting F_k^τ̂ = G_{τ_k}^τ̂. For any sequence τ̂ and every τ_k ∈ τ̂, Π_{τ_k}^τ̂ ∈ F_k^τ̂. Furthermore, Π_t^τ̂ ∈ F_k^τ̂ for all t < τ_{k+1}.

Proof. By (3.3), every Π_{τ_k}^τ̂ can be written as

Π_{τ_k}^τ̂ = f(Π_{τ_{k−1}}^τ̂, τ_k − τ_{k−1}, X_{τ_k} − X_{τ_{k−1}}),

where f is a function easily derived from (3.3). We see that Π_{τ_{k−1}}^τ̂ ∈ F_{k−1}^τ̂, τ_k − τ_{k−1} ∈ F_{k−1}^τ̂, and X_{τ_k} − X_{τ_{k−1}} ∈ F_k^τ̂. Therefore Π_{τ_k}^τ̂ ∈ F_k^τ̂.

Based on the filtration F_k^τ̂, we also know what τ_{k+1} is. Since the evolution of Π between τ_k and τ_{k+1} is deterministic, we also have full information to determine Π_t for t < τ_{k+1}.

3.2.3 A Fixed Point Approach

In this section we prove that a fixed point approach yields the value function of the detection problem with discrete costly observations, together with an optimal strategy. Furthermore, we can provide the value function and an optimal strategy when stopping between observation times is allowed; and when the number of observations is restricted, we can use the fixed point approach with a limited number of iterations.

Let us first define a set of functions L on which we will define our operator:

Definition 3.2.3. Set L

L := {l Borel-measurable : [0, 1] → [0, 1], 0 ≤ l(π) ≤ 1 − π}.

Let us now introduce an operator L to characterize our choices as described above:

Definition 3.2.4. Operator L

Let L be an operator acting on L such that for all l ∈ L,

(L l)(π) = min(1 − π, inf_{t≥0} E_π[l(Π_t) + c ∫_0^t Π_u du + d]).

Let us assume that Û is a fixed point of the operator L, i.e. Û = L Û. We can thus write

Û(π) = min(1 − π, inf_{t≥0} E_π[Û(Π_t) + c ∫_0^t Π_u du + d]).

Similarly, we will now prove that the fixed point Û, if it exists, equals the value function U and is the unique fixed point; we will then provide an associated optimal strategy. Finally, we will prove the existence of the fixed point and how to find it.

Let us first define the continuation and stopping sets, and a sequence of specific observation times τ̂∗ = {τ_1∗, τ_2∗, ...}, which we will prove to be an optimal strategy:

Definition 3.2.5. Continuation and Stopping Set

C := {π : Û(π) < 1 − π},
D := {π : π ∉ C}.

Definition 3.2.6. Observation Times

Define the observation times by the following recursive construction:

t∗(π) = inf{t : Û(π) = E_π[Û(Π_t) + c ∫_0^t Π_u du + d]} if π ∈ C, and t∗(π) = ∞ if π ∈ D;

τ_0∗ := 0,
τ_{k+1}∗ := τ_k∗ + t∗(Π_{τ_k∗}), k ≥ 0.

Let m ∈ {0, 1, ...} be the index of the last finite observation time, i.e. m = max{k : τ_k∗ < ∞}, so that τ_m∗ < ∞ and τ_{m+1}∗ = ∞.

We will prove the uniqueness of the fixed point via the following claims:

Claim 3.2.3. Let τ̂ be any sequence of observation times. Then

k ↦ Û(Π_{τ_k}) + c ∫_0^{τ_k} Π_u du + dk

is a submartingale with respect to F_k^τ̂.

Claim 3.2.4. Let τ̂∗ and m be as in Definition 3.2.6. Then

k ↦ Û(Π_{τ∗_{k∧m}}) + c ∫_0^{τ∗_{k∧m}} Π_u du + d(k∧m)

is a martingale with respect to F_k^τ̂∗.

Claim 3.2.5. Û = U, and Û is the unique fixed point of L.
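By analogy with Section 2.3, the fixed point can be sought by iterating L, now starting from the upper bound l_0(π) = 1 − π since the operator takes an infimum. The Python sketch below estimates one application of L by Monte Carlo, reusing the hypothetical j_update from the earlier sketch; it takes σ = 1 as in the derivation above, and the grids and parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def apply_L(l_vals, pis, c, d, lam, mu, t_grid, n_mc=1000):
        """One application of
        (L l)(pi) = min(1 - pi, inf_t E_pi[l(Pi_t) + c * int_0^t Pi_u du + d])."""
        out = np.empty_like(l_vals)
        for i, pi in enumerate(pis):
            phi0 = pi / (1.0 - pi)
            best = np.inf
            for t in t_grid:
                # deterministic running cost: Pi_u = 1 - e^{-lam u} / (phi0 + 1)
                run = c * (t - (1.0 - np.exp(-lam * t)) / (lam * (phi0 + 1.0)))
                # draw the disorder time and the observed increment of X (sigma = 1)
                theta = np.where(rng.random(n_mc) < pi, 0.0,
                                 rng.exponential(1.0 / lam, n_mc))
                z = (mu * np.maximum(t - theta, 0.0)
                     + np.sqrt(t) * rng.standard_normal(n_mc)) / np.sqrt(t)
                phi = np.array([j_update(t, phi0, zi, mu, lam) for zi in z])
                pi_t = phi / (1.0 + phi)
                best = min(best, np.interp(pi_t, pis, l_vals).mean() + run + d)
            out[i] = min(1.0 - pi, best)
        return out

    pis = np.linspace(0.0, 0.99, 40)
    l = 1.0 - pis                    # start from the upper bound l_0(pi) = 1 - pi
    for _ in range(10):
        l = apply_L(l, pis, c=0.5, d=0.05, lam=0.5, mu=1.0,
                    t_grid=[0.25, 0.5, 1.0, 2.0])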

Remark 3.2.5. From the proof we can see that the process we observe need not itself be Markovian. For any optimal stopping problem with discrete costly observations, whether maximizing or minimizing some payoff functional, we can use a fixed point approach and the super/submartingale argument to find its unique fixed point, provided the following criteria are satisfied:

(i) the underlying process Π^τ̂ appearing in the value function is Markovian;

(ii) the process X_{τ_k} which we observe, along with the sequence τ_1, τ_2, ..., τ_k, generates a filtration F_k^τ̂ such that Π_{τ_k}^τ̂ ∈ F_k^τ̂ for all k.

As we have already proven in Section 2.5, when stopping at any time is allowed, the situation is even simpler here: the posterior probability process Π_t is deterministic until the next observation time.
