# American option prices and optimal exercise boundaries under Heston Model–A Least-Square Monte Carlo approach

## Full text


### DIVISION OF APPLIED MATHEMATICS

MÄLARDALEN UNIVERSITY SE-721 23 VÄSTERÅS, SWEDEN


Bachelor thesis in mathematics / applied mathematics

Date:

15th June 2020

Project name:

American option prices and optimal exercise boundaries under Heston Model–A Least-Square Monte Carlo approach

Author(s):

Supervisor(s): Ying Ni Reviewer: Doghonay Arjmand Examiner: Anatoliy Malyarenko Comprising: 15 ECTS credits


## Abstract

Pricing American options has always been problematic due to their early exercise characteristic. As no closed-form analytical solution exists for any of the widely used models, many numerical approximation methods have been proposed and studied. In this thesis, we investigate the Least-Square Monte Carlo (LSMC) method of Longstaff & Schwartz for pricing American options under the two-dimensional Heston model. Through extensive numerical experimentation, we put the LSMC to the test and investigate four different continuation functions for it. In addition, we consider seven different combinations of Heston model parameters. We analyse the results and select the optimal continuation function according to our criteria. We then uncover and study the early exercise boundary of an American put option as the initial volatility and other parameters of the Heston model are varied.


## Contents

1. Chapter 1
   * 1.1. Introduction
2. Chapter 2
   * 2.1. Preliminaries
     * 2.1.1. Options
     * 2.1.2. Stochastic Processes
   * 2.2. The Heston Model
3. Chapter 3
   * 3.1. The Least Square Monte Carlo Simulation
     * 3.1.1. Continuation Functions
     * 3.1.2. Simulation steps
     * 3.1.3. Variance reduction methods
4. Chapter 4
   * 4.1. Numerical Results
     * 4.1.1. Early Exercise Boundary using LSMC
5. Chapter 5
   * 5.1. Conclusion

Appendix A

Appendix B


## List of Figures

4.1. Plot of relative error against κ

4.2. Early Exercise Boundary for American put option with T = 0.5, κ = 7, ρ = −0.8, when K/S = 1, evaluated using 5 time steps (top-left), 10 time steps (top-right), 50 time steps (bottom-left), and 100 time steps (bottom-right) (100,000 simulated paths)

4.3. Early Exercise Boundary for American put option evaluated at four different initial volatilities V0, with T = 0.5, κ = 7, ρ = −0.8, σv = 0.9, θ = 0.16, using 100 time steps and 100,000 simulated paths

4.4. Early Exercise Boundary for American put option evaluated at four different correlations (ρ), with T = 0.5, κ = 7, σv = 0.9, V0 = 0.0625, and θ = 0.16, using 100 time steps and 100,000 simulated paths

4.5. Early Exercise Boundary for American put option evaluated at four different long-term average variances (θ), with V0 = 0.0625, T = 0.5, κ = 7, ρ = −0.8, σv = 0.9, using 100 time steps and 100,000 simulated paths

4.6. Early Exercise Boundary for American put option evaluated at four different volatility of volatility values (σv), with V0 = 0.0625, T = 0.5, κ = 7, ρ = −0.8


## List of Tables

4.1. Heston Parameters

4.2. American put price when κ = 7, T = 0.5, and ρ = 0.1

4.3. American put price when κ = 25, T = 0.5, and ρ = 0.1

4.4. American put price when κ = 7, T = 0.5, and ρ = −0.8

4.5. American put price when κ = 25, T = 0.5, and ρ = −0.8

4.6. American put price when κ = 0.01, T = 0.5, and ρ = −0.8

4.7. American put price when κ = 100, T = 0.5, and ρ = −0.8

4.8. American put price when κ = 7, T = 5/252, and ρ = −0.8

4.9. Estimated computational time for each continuation function, conducted on a Lenovo Ideapad 330 with an Intel Core i5-8250U processor and 8 GB of RAM


## Chapter 1

### Introduction

Options have existed for more than two thousand years: according to Aristotle (writing around 350 B.C.), Thales of Miletus bought rights to use olive presses in expectation of an unusually large harvest [1]. Since then, options have been used widely and for different purposes in the financial industry. In 1900, Louis Bachelier was the first to describe the fluctuations in stock prices using Brownian motion [3]. In the 1970s, Fischer Black, Myron Scholes and Robert Merton made their breakthrough: assuming that the stock price follows a Geometric Brownian Motion, they derived the Black-Scholes (BS) partial differential equation using continuous-time no-arbitrage arguments and obtained the closed-form BS formula, which changed the way derivatives are priced from their underlying assets [2].

The BS model assumes a constant volatility over the life span of an option. The constant volatility assumption stems from the early perception that asset returns are normally distributed, and it does not hold in practice. One way to see this is to set the market option price equal to the corresponding BS option price and solve for the volatility parameter of the BS model: repeating this process for different options yields different implied volatilities, contradicting the constant volatility assumption [2]. Scott [4] took another approach, arguing that volatility changes over time and that the changes are unpredictable; he also suggested that volatility has a tendency to revert to a long-run average. This mean-reverting feature motivated researchers to price European options where the underlying asset is driven by stochastic mean-reverting volatility processes. Among others, Heston considered pricing European options under stochastic volatility driven by various types of mean-reverting processes [5]. However, the Heston model with stochastic volatility has no analytical solution for an American option.

Our object of study is pricing an American put option under the Heston model. What characterises American options is that they can be exercised early, at any time up to maturity. At every time step, the holder of the option has to know whether it is optimal to exercise early or to continue holding the option. Theoretically, this is done by simply comparing the current immediate exercise payoff to what is called the continuation value (i.e. the value of the option if


it is held till the next time step). If the continuation value is higher than the immediate payoff then it is not optimal to early exercise and the option would be held, therefore the value of the option at that time step will be the continuation value, and vice versa.

There have been attempts to approximate American option prices using numerical approaches and simulation techniques. An implicit finite-difference scheme for underlying assets under the influence of stochastic volatility is one such numerical approach, developed by [6]; it uses a multigrid algorithm for the fast iterative solution of the resulting discrete linear complementarity problems. Simulation techniques have also been used to price options with multiple stochastic factors [7]. In most cases, however, the options were European, and from a computational point of view it was considered impossible to use simulation methods to price American options [8]. The reason is the positive probability of early exercise: pricing an American option requires computing the optimal early exercise policy, yet along any simulated path there is only one realised future at any time. Basing exercise decisions on that single future leads to a biased result, as the exercise strategy should instead be calculated recursively.

A simulation-based method to price American options under the one-dimensional Brownian motion was proposed by Longstaff and Schwartz in 2001 [9]. This method estimates the conditional expectation of the payoff from keeping the option alive at each possible exercise point from a simple least-squares cross-sectional regression on the simulated paths. In other words, they illustrate how to price different types of path-dependent options using the Least Squares Monte Carlo (LSMC) approach. To our knowledge there have not been many studies of the LSMC under the Heston model; one paper that followed the approach of Longstaff and Schwartz is [10], where the LSMC approach is used to uncover the early exercise boundary using a set of Laguerre polynomials as the continuation functions. The early exercise boundary is then employed in a new numerical American option pricing technique that improves on the original LSMC of Longstaff and Schwartz [9].

In this thesis, we implement the LSMC approach for pricing an American put option under the Heston model. Our work differs from [10] in that it revolves solely around the LSMC technique. Our contribution consists of two main parts. First, we investigate the following four continuation functions within the LSMC: the Laguerre polynomials used in [10], a polynomial of S(t) and V(t) as explained in [11], a polynomial of log(S/K)/σ as explained in [12], and the shifted Legendre polynomials as proposed by [13]. We then select the optimal continuation function by conducting extensive numerical experiments; the criteria for selection include several factors such as bias, relative error and computational time. Second, we use the LSMC to uncover the early exercise boundary of an American put option and investigate how changing some of the parameters affects the early exercise region.

In the next chapter, we review the necessary preliminaries for later sections, and we introduce and explain the Heston model. In Chapter 3, we introduce the idea of the LSMC and present the four continuation functions to be investigated and the variance reduction methods we use. Chapter 4 contains the analysis of the numerical results obtained from the LSMC and of the early exercise boundaries. Finally, we present our conclusions in Chapter 5.


## Chapter 2

### Options

An option is a financial contract which gives the holder the right, but not the obligation, to buy or sell an asset at a predetermined price, the strike price K. There are two basic types of options:

• A Call Option gives the holder of the option the right to buy an asset by a certain date, namely maturity T , for a certain price.

• A Put Option gives the holder the right to sell an asset by a certain date, namely maturity T, for a certain price.

There are several types of options, but the most popular are American options and European options. American options can be exercised at any time up to the maturity date, whereas European options can only be exercised at the maturity date [2]. Due to this difference, the prices of American and European options on the same asset, with the same strike and maturity, differ.

The payoff of the European call is:

Φ_Call(S(T)) = (S(T) − K)⁺ = S(T) − K if S(T) ≥ K, and 0 if S(T) < K, (2.1)

and for the European put:

Φ_Put(S(T)) = (K − S(T))⁺ = K − S(T) if K ≥ S(T), and 0 if K < S(T), (2.2)

where S(T ) is the price of the asset at maturity time T .

For American options the payoff functions are the same as above, but they apply at any exercise time t, 0 ≤ t ≤ T, since American options can be exercised at any time up to the maturity date.
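The piecewise payoffs (2.1) and (2.2) translate directly into code. A minimal Python sketch (the function names are our own):

```python
def call_payoff(s_T: float, strike: float) -> float:
    """European call payoff (S(T) - K)^+ from equation (2.1)."""
    return max(s_T - strike, 0.0)


def put_payoff(s_T: float, strike: float) -> float:
    """European put payoff (K - S(T))^+ from equation (2.2)."""
    return max(strike - s_T, 0.0)
```

For an American option the same functions give the immediate exercise value at any time t ≤ T.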


### Stochastic Processes

Definition 1. A stochastic process is a mathematical model of random movements in time: a random variable X(t) parametrised by time t ∈ T is called a stochastic process, where the set of time epochs is T = [0, T] [14].

Definition 2. A stochastic process X = {X_t : t ≥ 0} is a Lévy process if it has the following properties:

• X_0 = 0 with probability one;

• Independent increments: for any 0 ≤ t_1 < t_2 < · · · < t_n < ∞, the increments X_{t_2} − X_{t_1}, X_{t_3} − X_{t_2}, . . . , X_{t_n} − X_{t_{n−1}} are independent;

• Stationary increments: the probability distribution of any increment X_t − X_s depends only on the length t − s of the time interval, and increments over equally long time intervals are identically distributed;

• Continuity in probability: for any ε > 0 and t ≥ 0, lim_{h→0} P(|X_{t+h} − X_t| > ε) = 0.

The Wiener process is an example of a Lévy process.

Definition 3. Let z(t), t ≥ 0, be a stochastic process defined on the probability space (Ω, F, P), where Ω is a set (the sample space of possible outcomes), F is a σ-algebra of subsets of Ω (the events), and P is the probability measure. The process z(t) is called a Wiener process if

• it has independent increments,

• the increment z(t + s) − z(t) is normally distributed with mean 0 and variance s, independently of t, and

• it has continuous sample paths and z(0) = 0.

### The Heston Model

The Heston model is a popular stochastic variance model, first introduced by Steven Heston in 1993 [5]. The model treats both the asset price and the variance as stochastic processes. In this thesis, we assume that we live in a risk-neutral world, and therefore the risk-neutral Heston model (1993) is given by the following stochastic differential equations:

dS(t) = rS(t)dt + √V(t) S(t) dW1(t). (2.3)

The variance V(t) follows the square-root process proposed by Cox, Ingersoll, and Ross (1985) [15]:

dV(t) = κ[θ − V(t)]dt + σv √V(t) dW2(t). (2.4)


Here:

• S(t) is the asset price at time t,

• r is the risk-free interest rate,

• V(t) is the variance at time t,

• θ is the long-term average of the variance,

• κ is the mean reversion rate of the variance process V(t),

• σv is the instantaneous volatility of volatility,

• Wn(t) for n = 1, 2 are Wiener processes.

In addition, it is given that κ, θ, σv > 0, and the correlation between the two Wiener processes is given by dW1(t)dW2(t) = ρdt. The parameter κ indicates the degree of volatility clustering. The process specifying the variance is called a mean-reverting square-root process, and mean reversion is a desired property for stochastic volatility. The first term on the right-hand side of equation (2.4) indicates that when the current variance is larger (or smaller) than the long-term average of the variance, i.e. V(t) > θ (V(t) < θ), the drift pulls the variance back down (or up) towards θ.
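The mean-reverting pull of the drift term in (2.4) can be seen in a small simulation. The sketch below uses a basic Euler step with full truncation (taking max(V, 0) under the square root), a common practical fix for negative discretised variances that is our own choice rather than part of the model; the parameter values are purely illustrative:

```python
import random


def simulate_cir(v0, kappa, theta, sigma_v, dt, n_steps, rng):
    """Euler scheme for the square-root variance process (2.4).

    Full truncation (max(V, 0) inside drift and diffusion) keeps the
    discretised variance from producing a negative square root."""
    v = v0
    path = [v0]
    for _ in range(n_steps):
        v_pos = max(v, 0.0)
        dw = rng.gauss(0.0, dt ** 0.5)
        v = v + kappa * (theta - v_pos) * dt + sigma_v * v_pos ** 0.5 * dw
        path.append(v)
    return path


rng = random.Random(0)
# Start far above theta = 0.16; the drift pulls the variance back down.
path = simulate_cir(v0=0.5, kappa=7.0, theta=0.16, sigma_v=0.2,
                    dt=0.01, n_steps=10_000, rng=rng)
# Average after a burn-in period: close to the long-run mean theta.
long_run_avg = sum(path[1000:]) / len(path[1000:])
```

Starting at V(0) = 0.5, well above θ = 0.16, the long-run average of the path settles near θ, illustrating the mean-reverting behaviour.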


## Chapter 3

### The Least Square Monte Carlo Simulation

The Least-Square Monte Carlo (LSMC) was first presented by [9] for approximating the value of American options under the Geometric Brownian Motion (GBM):

dS(t) = rS(t)dt + σ S(t)dW (t).

Contrary to European options, American options have no closed-form solution and are mostly priced using simulation techniques or semi-analytical methods. The problematic part in pricing American options by simulation is the possibility of early exercise before maturity, which raises the question, at each time step, of whether to exercise early or to hold the option longer. At each time step, the holder of an American option has to compare the continuation value to the early exercise value; the option is exercised before maturity if the latter is higher than the expected payoff from continuation. The optimal exercise strategy is therefore governed by the conditional expectation of the payoff from keeping the option alive. One intuitive but inefficient idea is to estimate the continuation value by simulating I new sub-paths from each time step of each simulated path. This nested simulation is computationally infeasible, however, as the total number of paths grows exponentially.

That is where the LSMC comes into play. According to [9], the continuation value can be estimated using least-squares regression. The payoff from continuation is regressed on a set of functions of the state variable S(t) as done in [9], and an estimate of the conditional expectation is given by the fitted value from the regression. By estimating this continuation value at every time step at which the option can be exercised, the optimal exercise strategy for each simulated path is obtained by comparing the continuation value to the current exercise payoff; the value of the option at that time step is the higher of the two.

In this thesis, we adapt the LSMC idea to the two-dimensional Heston model given in equations (2.3) and (2.4) rather than the one-dimensional GBM originally used by [9]; therefore the continuation functions we choose to study ought to be functions of both state variables S(t) and V(t). To approximate the continuation function, Longstaff & Schwartz [9] assumed that it is square integrable and hence lies in a Hilbert space, with inner product ⟨f(x), h(x)⟩ = ∫ f(x)h(x)dx. Any function in this space can be approximated by a linear combination of basis functions [16]. Thus we can write it as:

C(S(t), V(t), t_k) = ∑_{k=0}^{∞} a_k Φ_k(S(t), V(t), t_k), (3.1)

where {Φ_k(S(t), V(t), t_k)}_{k=0}^{∞} form a basis in the space of continuation functions and {a_k}_{k=0}^{∞} are the coefficients to be estimated by linear regression. There are several candidates for the continuation functions to choose from. In this thesis, we study the following four candidates.

### Continuation Functions

We consider an American put option under the Heston model (2.3) and (2.4). Let U (S(t),V (t),t) be the option price at time t with the underlying spot price S(t) and spot variance V (t). The option price at an arbitrary time t is given by:

U(S(t), V(t), t) = max{C(S(t), V(t), t), Φ(S(t))}, (3.2)

i.e. the maximum of the continuation value and the early exercise value. We propose the following four models for approximating the continuation functions.

1. The first continuation function that we consider uses the weighted Laguerre polynomials, first suggested by Longstaff & Schwartz [9] and later extended by [10], who chose the following continuation function in their LSMC method:

C(S(t), V(t), t_k) ≈ a(k)_0 + a(k)_1 L_0(S(t)/K) + a(k)_2 L_1(S(t)/K) + a(k)_3 L_0(S(t)/K) · V(t)/θ, (3.3)

where L_i is the i:th weighted Laguerre polynomial with weight exp(−X/2), so that L_0(X) = exp(−X/2) and L_1(X) = exp(−X/2)(1 − X); θ is the long-run mean of the variance in the Heston model, and a(k)_i are the coefficients to be estimated by regressing the continuation cash flows against the basis functions, the superscript (k) indicating that a separate regression is performed at each time step. Their choice of the weighted Laguerre polynomials is possibly due to the fact that these polynomials form an orthonormal basis on the interval [0, ∞) (i.e. ⟨f(x), h(x)⟩ = 0 and √⟨f(x), f(x)⟩ = 1) [17].

2. The second approach for estimating the continuation value is used in Chapter 12 of [11], where polynomials in simple powers of the state variables are used, a natural multi-dimensional extension of the continuation function used by Longstaff & Schwartz [9]. The continuation function is given by:

C(S(t), V(t), t_k) ≈ a(k)_0 + a(k)_1 S(t)V(t) + a(k)_2 S(t) + a(k)_3 V(t) + a(k)_4 S(t)² + a(k)_5 V(t)², (3.4)

where a(k)_i are the coefficients to be estimated as above.


3. For the third approach, we consider the "log-moneyness per unit of volatility", which is inspired by [12]. In that work, a similar variant of log-moneyness, scaled by the square root of maturity, is used as a sub-optimal early exercise rule in American option pricing, and the resulting approximation is still plausible. This motivates us to consider a continuation function in a single variable y that absorbs both the prevailing stock price and the volatility level: y = log(S(t)/K)/σ, where σ = √V(t). Our continuation function is then a fourth-degree polynomial in y:

C(S(t), V(t), t_k) ≈ a(k)_0 + a(k)_1 y + a(k)_2 y² + a(k)_3 y³ + a(k)_4 y⁴. (3.5)

4. The last continuation function we use is based on the shifted Legendre polynomials, suggested by [13]. These polynomials are orthogonal on (0, 1) [17]. The continuation function is the following:

C(S(t), V(t), t_k) ≈ a(k)_0 + a(k)_1 p_1(S(t)/K) + a(k)_2 p_2(S(t)/K) + a(k)_3 p_1(S(t)V(t)/(θK)), (3.6)

where the p_i are the shifted Legendre polynomials given by Rodrigues' formula:

p_k(x) = ((−1)^k / (4^k k!)) d^k/dx^k ((1 − (2x − 1)²)^k) = (1/k!) d^k/dx^k ((x² − x)^k). (3.7)
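The four regressor sets (3.3)–(3.6) can be collected into small row-builder functions. A Python sketch under our own naming; the weight exp(−X/2) in the Laguerre terms and the argument S(t)V(t)/(θK) in the last Legendre term follow our reading of (3.3) and (3.6):

```python
import math


def laguerre_basis(s, v, strike, theta):
    """Regressors of (3.3): weighted Laguerre polynomials L0, L1 with
    weight exp(-x/2), evaluated at moneyness S/K, plus a cross term V/theta."""
    x = s / strike
    l0 = math.exp(-x / 2.0)
    l1 = math.exp(-x / 2.0) * (1.0 - x)
    return [1.0, l0, l1, l0 * v / theta]


def power_basis(s, v):
    """Regressors of (3.4): simple powers of the state variables."""
    return [1.0, s * v, s, v, s * s, v * v]


def log_moneyness_basis(s, v, strike):
    """Regressors of (3.5): powers of y = log(S/K)/sigma, sigma = sqrt(V)."""
    y = math.log(s / strike) / math.sqrt(v)
    return [1.0, y, y ** 2, y ** 3, y ** 4]


def legendre_basis(s, v, strike, theta):
    """Regressors of (3.6): shifted Legendre polynomials
    p1(x) = 2x - 1 and p2(x) = 6x^2 - 6x + 1 from (3.7)."""
    p1 = lambda x: 2.0 * x - 1.0
    p2 = lambda x: 6.0 * x ** 2 - 6.0 * x + 1.0
    return [1.0, p1(s / strike), p2(s / strike), p1(s * v / (theta * strike))]
```

Each function returns one regression row; stacking the rows over all in-the-money paths gives the design matrix for the cross-sectional regression at a given time step.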

### Simulation steps

In order to simulate the processes in the Heston model, which are time-continuous, we need to discretise them into a finite number of time steps. Following the approach of [10], we choose the Euler scheme and obtain the following discretisation of the Heston model:

S(j + 1) = S(j) + rS(j)∆t + √V(j) S(j)∆W1,
V(j + 1) = V(j) + κ(θ − V(j))∆t + σv √V(j) (ρ∆W1 + √(1 − ρ²) ∆W2), (3.8)

where S(j) and V(j) are the stock price and variance at the j:th time step. Now that we have the discretisation, the simulation goes as follows:

Step 1: Simulate I paths of the two-dimensional market model (3.8) with M + 1 time steps, producing X(t, i) for t ∈ {0, ∆t, 2∆t, . . . , T}, i ∈ {1, . . . , I}, and ∆t = T/M.

Step 2: At maturity time T, the option value is given by its payoff function U(T, i) = Φ_Put(S(T, i)), which is positive if the option is in-the-money at maturity and zero otherwise; that is, U(T, i) = (K − S(T, i))⁺.


Step 3: Iterate backwards over t = T − ∆t, . . . , 0:

• at each time step, regress the realised continuation cash flows against the basis functions of the state variables,

• use the coefficients obtained from the linear regression to estimate the continuation value (Cv) for every simulated path,

• set the value of the option to the maximum of its continuation value and the immediate exercise value.

Step 4: At t = 0, we obtain the price of the option by averaging the LSMC estimator over all I paths:

Û(0)_LSMC = (1/I) ∑_{i=1}^{I} U(0, i). (3.9)
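Steps 1–4 can be sketched end-to-end. The following Python implementation (numpy assumed available) is illustrative rather than the thesis's MATLAB code: it uses the power basis (3.4), regresses only on in-the-money paths as in [9], and applies full truncation to negative discretised variances, all of which are our own implementation choices:

```python
import numpy as np


def lsmc_american_put_heston(s0, v0, strike, r, kappa, theta, sigma_v, rho,
                             maturity, n_steps, n_paths, seed=0):
    """LSMC price of an American put under the Euler-discretised
    Heston model (3.8), following Steps 1-4. A sketch, not thesis code."""
    rng = np.random.default_rng(seed)
    dt = maturity / n_steps
    s = np.full(n_paths, float(s0))
    v = np.full(n_paths, float(v0))
    s_paths = np.empty((n_steps + 1, n_paths))
    v_paths = np.empty((n_steps + 1, n_paths))
    s_paths[0], v_paths[0] = s, v
    # Step 1: simulate correlated paths for S and V.
    for j in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rng.standard_normal(n_paths)
        dw1 = np.sqrt(dt) * z1
        dw2 = np.sqrt(dt) * (rho * z1 + np.sqrt(1.0 - rho ** 2) * z2)
        v_pos = np.maximum(v, 0.0)  # full truncation of negative variances
        s = s + r * s * dt + np.sqrt(v_pos) * s * dw1
        v = v + kappa * (theta - v_pos) * dt + sigma_v * np.sqrt(v_pos) * dw2
        s_paths[j + 1], v_paths[j + 1] = s, v
    # Step 2: option value at maturity is the put payoff.
    cashflow = np.maximum(strike - s_paths[-1], 0.0)
    # Step 3: backward induction with least-squares regression.
    for j in range(n_steps - 1, 0, -1):
        cashflow *= np.exp(-r * dt)  # discount one step back
        s_j, v_j = s_paths[j], np.maximum(v_paths[j], 0.0)
        exercise = np.maximum(strike - s_j, 0.0)
        itm = exercise > 0.0  # regress on in-the-money paths only
        if itm.sum() > 0:
            basis = np.column_stack([np.ones(itm.sum()),
                                     s_j[itm] * v_j[itm], s_j[itm], v_j[itm],
                                     s_j[itm] ** 2, v_j[itm] ** 2])
            coeff, *_ = np.linalg.lstsq(basis, cashflow[itm], rcond=None)
            continuation = basis @ coeff
            do_exercise = exercise[itm] > continuation
            idx = np.where(itm)[0][do_exercise]
            cashflow[idx] = exercise[idx]  # early exercise replaces later cash flow
    # Step 4: average the discounted cash flows over all I paths.
    return float(np.exp(-r * dt) * cashflow.mean())
```

With the Table 4.1 parameters (S(0) = 10, r = 0.1, V(0) = 0.0625, θ = 0.16, σv = 0.9) and, say, K = 10, κ = 7, ρ = −0.8, T = 0.5, the estimate should land in the neighbourhood of the benchmark values reported in Chapter 4, up to discretisation and Monte Carlo error.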

### Variance reduction methods

Variance reduction methods can be used to increase the efficiency of Monte Carlo (MC) simulation. These methods rely on exploiting tractable features of a model to correct simulation outputs and reduce their variability. One of the most effective techniques for improving the efficiency of MC simulation is the control variate method [18]. It exploits information about the errors in estimates of known quantities to reduce the error in an estimate of an unknown quantity. To illustrate how the method works, let Y_1, . . . , Y_n be outputs from n replications of a simulation; Y_i could be the discounted payoff of a derivative security on the i:th simulated path [18]. Suppose that the Y_i are independent and identically distributed and that we intend to estimate E[Y_i]. The sample mean is Ȳ = (Y_1 + · · · + Y_n)/n. This estimator is unbiased and converges with probability 1 as n → ∞. On each replication we also calculate another output X_i along with Y_i. The pairs (X_i, Y_i), i = 1, . . . , n, are i.i.d., and the expectation E[X] of the X_i is known. Then for any fixed b we have

Y_i(b) = Y_i − b(X_i − E[X]) (3.10)

from the i:th replication, and we compute the sample mean

Ȳ(b) = Ȳ − b(X̄ − E[X]) = (1/n) ∑_{i=1}^{n} (Y_i − b(X_i − E[X])). (3.11)

This is a control variate estimator; the observed error X̄ − E[X] serves as a control in estimating E[Y]. As an estimator of E[Y], the control variate estimator above is unbiased, because

E[Ȳ(b)] = E[Ȳ − b(X̄ − E[X])] = E[Ȳ] = E[Y], (3.12)

and it is consistent, because with probability 1

lim_{n→∞} (1/n) ∑_{i=1}^{n} Y_i(b) = lim_{n→∞} (1/n) ∑_{i=1}^{n} (Y_i − b(X_i − E[X])) = E[Y − b(X − E[X])] = E[Y]. (3.13)


Each Y_i(b) has variance

Var[Y_i(b)] = Var[Y_i − b(X_i − E[X])] = σ_Y² − 2bσ_Xσ_Yρ_XY + b²σ_X² ≡ σ²(b), (3.14)

where σ_X² = Var[X], σ_Y² = Var[Y], and ρ_XY is the correlation between X and Y. The control variate estimator Ȳ(b) has variance σ²(b)/n, while the ordinary sample mean Ȳ (which corresponds to b = 0) has variance σ_Y²/n. Hence, the control variate estimator has a smaller variance than the standard estimator if b²σ_X² < 2bσ_Xσ_Yρ_XY. The optimal coefficient b∗ minimises the variance (3.14) and is given by

b∗ = (σ_Y/σ_X) ρ_XY = Cov[X, Y]/Var[X]. (3.15)

Substituting this value into (3.14) and simplifying, we find that the ratio of the variance of the optimally controlled estimator to that of the uncontrolled estimator is

Var[Ȳ − b∗(X̄ − E[X])] / Var[Ȳ] = 1 − ρ²_XY. (3.16)
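A toy example makes (3.10)–(3.16) concrete. Here we estimate E[e^U] for U uniform on [0, 1] (true value e − 1) using the control X = U, whose mean E[X] = 1/2 is known, with b∗ estimated from the sample as in (3.15); the whole example is illustrative and not taken from the thesis:

```python
import math
import random


def control_variate_mean(ys, xs, x_mean):
    """Control variate estimator (3.11) with the optimal coefficient
    b* of (3.15) estimated from sample covariance and variance."""
    n = len(ys)
    y_bar = sum(ys) / n
    x_bar = sum(xs) / n
    cov = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / (n - 1)
    var = sum((x - x_bar) ** 2 for x in xs) / (n - 1)
    b_star = cov / var
    # Y_bar(b*) = Y_bar - b*(X_bar - E[X])
    return y_bar - b_star * (x_bar - x_mean)


rng = random.Random(1)
us = [rng.random() for _ in range(50_000)]
ys = [math.exp(u) for u in us]           # Y = e^U, with E[Y] = e - 1
est = control_variate_mean(ys, us, 0.5)  # E[X] = E[U] = 1/2 is known
```

Since e^U and U are strongly correlated, ρ²_XY in (3.16) is close to 1 and the controlled estimator is far tighter than the plain sample mean.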

Antithetic Variates

This method reduces variance by introducing negative dependence between pairs of replications. It is based on the observation that if U is uniformly distributed over [0, 1], then 1 − U is too. If we generate a path using (U_1, . . . , U_n) as inputs, we can generate a second path using (1 − U_1, . . . , 1 − U_n) without changing the probability law of the simulated process. The variables U_i and 1 − U_i form an antithetic pair in the sense that a large value of one is accompanied by a small value of the other. This suggests that a large or small output computed from the first path may be balanced by the value computed from the antithetic path, resulting in a reduction in variance. These observations extend to other distributions through the inverse transform method: F⁻¹(U) and F⁻¹(1 − U) both have distribution F but are antithetic to each other because F⁻¹ is monotone. In a simulation driven by independent standard normal random variables, antithetic variates can be implemented by pairing a sequence (Z_1, Z_2, . . . ) of i.i.d. N(0, 1) variables with the sequence (−Z_1, −Z_2, . . . ), whether or not they are sampled through the inverse transform method. If the Z_i are used to simulate the increments of a Brownian path, then the −Z_i simulate the increments of the reflection of the path about the origin. This suggests that running a pair of simulations using the original path and then its reflection may result in lower variance. To illustrate this further, assume we want to estimate an expectation E[Y] and that some implementation of antithetic sampling produces a sequence of pairs of observations (Y_1, Ỹ_1), (Y_2, Ỹ_2), . . . , (Y_n, Ỹ_n). The key features of the antithetic variates method are the following:

• the pairs (Y_1, Ỹ_1), (Y_2, Ỹ_2), . . . , (Y_n, Ỹ_n) are i.i.d.;

• for each i, Y_i and Ỹ_i have the same distribution, though ordinarily they are not independent.


We name a random variable with the common distribution of the Y_i and Ỹ_i as Y. The antithetic variates estimator is the average of all 2n observations:

Ŷ_AV = (1/2n) (∑_{i=1}^{n} Y_i + ∑_{i=1}^{n} Ỹ_i) = (1/n) ∑_{i=1}^{n} (Y_i + Ỹ_i)/2. (3.17)

The rightmost expression in (3.17) shows that Ŷ_AV is the sample mean of the n independent observations

(Y_1 + Ỹ_1)/2, (Y_2 + Ỹ_2)/2, . . . , (Y_n + Ỹ_n)/2. (3.18)

By the Central Limit Theorem, assuming σ_AV < ∞,

(Ŷ_AV − E[Y]) / (σ_AV/√n) → N(0, 1) as n → ∞, (3.19)

with

σ_AV² = Var[(Y_i + Ỹ_i)/2]. (3.20)

Antithetic sampling reduces variance if

Var[Ŷ_AV] < Var[(1/2n) ∑_{i=1}^{2n} Y_i], (3.21)

in other words, if

Var[Y_i + Ỹ_i] < 2Var[Y_i]. (3.22)

The variance on the left can be written as

Var[Y_i + Ỹ_i] = Var[Y_i] + Var[Ỹ_i] + 2Cov[Y_i, Ỹ_i] = 2Var[Y_i] + 2Cov[Y_i, Ỹ_i], (3.23)

since Y_i and Ỹ_i have the same distribution and hence the same variance. Thus, the condition for antithetic sampling to reduce variance becomes

Cov[Y_i, Ỹ_i] < 0. (3.24)

In order to increase the efficiency of our simulation and reduce the variance, we considered generating an additional antithetic path for each simulated path. The option values from both paths were then averaged at the end of each simulation.
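A toy illustration of the estimator (3.17): estimating E[e^Z] for Z ~ N(0, 1), whose true value is e^{1/2}, pairing each draw Z with −Z. Since e^z and e^{−z} move in opposite directions, the covariance condition (3.24) holds; the example is ours, not from the thesis:

```python
import math
import random


def antithetic_mean(n_pairs, rng):
    """Antithetic variates estimator (3.17) for E[e^Z], Z ~ N(0, 1).

    Each pair (Y, Y_tilde) = (e^Z, e^{-Z}) is negatively correlated,
    so condition (3.24) is satisfied and the variance is reduced."""
    pair_means = []
    for _ in range(n_pairs):
        z = rng.gauss(0.0, 1.0)
        y, y_tilde = math.exp(z), math.exp(-z)  # antithetic pair
        pair_means.append(0.5 * (y + y_tilde))  # one term of (3.18)
    return sum(pair_means) / n_pairs


rng = random.Random(2)
est = antithetic_mean(50_000, rng)  # true value is exp(0.5)
```

Here Cov[e^Z, e^{−Z}] = 1 − e < 0, so each pair average in (3.18) has lower variance than two independent draws would.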

Price correction using the European counterpart

In our simulation, we use the European option price as a control variate. We price the European option by Monte Carlo simulation, giving V_E^LSMC (see Appendix B), and compare this price to the semi-analytical price V_E obtained from the MATLAB built-in function, which uses the finite difference method. The difference between the two is the simulation deviation [2], which we use to correct the American option LSMC price V_A^LSMC, assuming the same deviation holds for the American option as well. Hence, the corrected price of our American option is given by:

V_A = V_A^LSMC − (V_E^LSMC − V_E). (3.25)
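Rearranged for the corrected price, (3.25) is a one-line correction; a sketch with our own function name:

```python
def corrected_american_price(v_a_lsmc, v_e_lsmc, v_e_exact):
    """Apply the European control variate correction of (3.25):
    the European MC deviation (v_e_lsmc - v_e_exact) is assumed to
    carry over to the American LSMC price and is subtracted from it."""
    return v_a_lsmc - (v_e_lsmc - v_e_exact)
```

For example, if the European MC price overshoots the semi-analytical price by 0.02, the American LSMC price is lowered by the same 0.02.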


## Chapter 4

### Numerical Results

For the simulation we use MATLAB, in which we create a function that prices an American put option under the Heston model using the LSMC with the four different continuation functions mentioned in the previous chapter. The function generates 500,000 paths, of which 250,000 are antithetic paths. The simulation prices the option over 100 time steps for the 6-month maturity and 10 time steps for the shorter maturity of one week. The code for performing the LSMC with one of the continuation functions (Legendre polynomials) is listed in Appendix A.

To conduct extensive experimental studies, the pricing is done for seven different combinations of the parameters (κ, T, ρ). The results are presented to four decimals. For each set of parameters, we consider three different levels of moneyness K/S. To obtain a better estimate, we run the simulation 31 times and take the average as the LSMC put price; this also allows us to report the standard error.

The LSMC put prices are then compared to the benchmark price (P_BM), which is obtained from the MATLAB built-in function optByHestonFD; it uses the finite difference method to approximate the American put prices.

The Heston parameters that are used for the simulation and hold for all combinations are presented in Table 4.1:

| S(0) | r | V(0) | θ | σv |
|------|-----|--------|------|-----|
| 10 | 0.1 | 0.0625 | 0.16 | 0.9 |

Table 4.1: Heston Parameters

The results for the seven combinations of parameters are provided in Tables 4.2–4.8. The tables include the continuation functions, moneyness, benchmark prices (P_BM), LSMC prices (P_LSMC), bias, standard error (S.E.) and relative error (R.E.). The bias is the difference between the LSMC price and the benchmark price (Bias = P_LSMC − P_BM). The standard error is defined as

S.E. = σ_prices/√31,

where σ_prices is the sample standard deviation of the 31 simulated trials. The relative error is, hence,

R.E. = |P_LSMC − P_BM| / P_LSMC.
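The per-table statistics can be computed from the 31 trial prices as follows (a sketch with our own naming):

```python
import math


def summarize_trials(prices, benchmark):
    """Average LSMC price, bias, standard error and relative error
    of repeated LSMC trials (31 independent runs in the thesis)."""
    n = len(prices)
    p_lsmc = sum(prices) / n                       # averaged LSMC price
    var = sum((p - p_lsmc) ** 2 for p in prices) / (n - 1)
    std_err = math.sqrt(var) / math.sqrt(n)        # S.E. = sigma / sqrt(n)
    bias = p_lsmc - benchmark                      # Bias = P_LSMC - P_BM
    rel_err = abs(p_lsmc - benchmark) / p_lsmc     # R.E.
    return p_lsmc, bias, std_err, rel_err
```

For instance, three trial prices 0.80, 0.82, 0.81 against a benchmark of 0.80 give an averaged price of 0.81 and a bias of 0.01.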

| Continuation Function | Moneyness | P_BM | P_LSMC | Bias | S.E. | R.E. |
|---|---|---|---|---|---|---|
| Laguerre Polynomial | 0.7 | 0.0519 | 0.0537 | 0.0017 | 0.0001 | 0.0324 |
| Laguerre Polynomial | 1 | 0.8004 | 0.8003 | -0.0001 | 0.0004 | 0.0001 |
| Laguerre Polynomial | 1.2 | 2.1104 | 2.0432 | -0.0672 | 0.0006 | 0.0329 |
| Polynomial of S(t) and V(t) | 0.7 | 0.0519 | 0.0538 | 0.0019 | 0.0001 | 0.0349 |
| Polynomial of S(t) and V(t) | 1 | 0.8004 | 0.8012 | 0.0008 | 0.0003 | 0.0009 |
| Polynomial of S(t) and V(t) | 1.2 | 2.1104 | 2.0424 | -0.0679 | 0.0006 | 0.0333 |
| Polynomial of log(S/K)/σ | 0.7 | 0.0519 | 0.0538 | 0.0019 | 0.0001 | 0.0351 |
| Polynomial of log(S/K)/σ | 1 | 0.8004 | 0.8005 | 0.0001 | 0.0004 | 0.0001 |
| Polynomial of log(S/K)/σ | 1.2 | 2.1104 | 2.0432 | -0.0672 | 0.0006 | 0.0329 |
| Legendre Polynomial | 0.7 | 0.0519 | 0.0538 | 0.0018 | 0.0001 | 0.0345 |
| Legendre Polynomial | 1 | 0.8004 | 0.8005 | 0.0001 | 0.0004 | 0.0001 |
| Legendre Polynomial | 1.2 | 2.1104 | 2.0425 | -0.0679 | 0.0005 | 0.0333 |

Table 4.2: American put price when κ = 7, T = 0.5, and ρ = 0.1.

| Continuation Function | Moneyness | P_BM | P_LSMC | Bias | S.E. | R.E. |
|---|---|---|---|---|---|---|
| Laguerre Polynomial | 0.7 | 0.0687 | 0.0709 | 0.0022 | 0.0001 | 0.0312 |
| Laguerre Polynomial | 1 | 0.8852 | 0.8843 | -0.0009 | 0.0004 | 0.0010 |
| Laguerre Polynomial | 1.2 | 2.1775 | 2.1181 | -0.0594 | 0.0006 | 0.0280 |
| Polynomial of S(t) and V(t) | 0.7 | 0.0687 | 0.0709 | 0.0023 | 0.0001 | 0.0319 |
| Polynomial of S(t) and V(t) | 1 | 0.8852 | 0.8838 | -0.0014 | 0.0004 | 0.0016 |
| Polynomial of S(t) and V(t) | 1.2 | 2.1775 | 2.1193 | -0.0582 | 0.0006 | 0.0275 |
| Polynomial of log(S/K)/σ | 0.7 | 0.0687 | 0.0710 | 0.0023 | 0.0001 | 0.0325 |
| Polynomial of log(S/K)/σ | 1 | 0.8852 | 0.8844 | -0.0008 | 0.0003 | 0.0009 |
| Polynomial of log(S/K)/σ | 1.2 | 2.1775 | 2.1184 | -0.0591 | 0.0006 | 0.0279 |
| Legendre Polynomial | 0.7 | 0.0687 | 0.0709 | 0.0023 | 0.0001 | 0.0324 |
| Legendre Polynomial | 1 | 0.8852 | 0.8839 | -0.0013 | 0.0004 | 0.0014 |
| Legendre Polynomial | 1.2 | 2.1775 | 2.1192 | -0.0584 | 0.0006 | 0.0275 |

Table 4.3: American put price when κ = 25, T = 0.5, and ρ = 0.1.


| Continuation Function | Moneyness | P_BM | P_LSMC | Bias | S.E. | R.E. |
|---|---|---|---|---|---|---|
| Laguerre Polynomial | 0.7 | 0.0983 | 0.1014 | 0.0031 | 0.0001 | 0.0304 |
| Laguerre Polynomial | 1 | 0.8089 | 0.8041 | -0.0049 | 0.0005 | 0.0061 |
| Laguerre Polynomial | 1.2 | 2.0448 | 1.9422 | -0.1026 | 0.0008 | 0.0528 |
| Polynomial of S(t) and V(t) | 0.7 | 0.0983 | 0.1013 | 0.0029 | 0.0001 | 0.0291 |
| Polynomial of S(t) and V(t) | 1 | 0.8089 | 0.8042 | -0.0048 | 0.0004 | 0.0059 |
| Polynomial of S(t) and V(t) | 1.2 | 2.0448 | 1.9427 | -0.1021 | 0.0007 | 0.0526 |
| Polynomial of log(S/K)/σ | 0.7 | 0.0983 | 0.1015 | 0.0032 | 0.0001 | 0.0319 |
| Polynomial of log(S/K)/σ | 1 | 0.8089 | 0.8029 | -0.0059 | 0.0004 | 0.0074 |
| Polynomial of log(S/K)/σ | 1.2 | 2.0448 | 1.9432 | -0.1016 | 0.0007 | 0.0523 |
| Legendre Polynomial | 0.7 | 0.0983 | 0.1014 | 0.0031 | 0.0001 | 0.0307 |
| Legendre Polynomial | 1 | 0.8089 | 0.8037 | -0.0052 | 0.0005 | 0.0064 |
| Legendre Polynomial | 1.2 | 2.0448 | 1.9421 | -0.1027 | 0.0007 | 0.0529 |

Table 4.4: American put price when κ = 7, T = 0.5, and ρ = −0.8.

| Continuation Function | Moneyness | PBM | PLSMC | Bias | S.E | R.E |
|---|---|---|---|---|---|---|
| Laguerre Polynomial | 0.7 | 0.0905 | 0.0928 | 0.0023 | 0.0001 | 0.0243 |
| | 1 | 0.8943 | 0.8860 | -0.0083 | 0.0005 | 0.0094 |
| | 1.2 | 2.1581 | 2.0814 | -0.0766 | 0.0006 | 0.0368 |
| Polynomial of S(t) and V(t) | 0.7 | 0.0905 | 0.0927 | 0.0022 | 0.0001 | 0.0237 |
| | 1 | 0.8943 | 0.8856 | -0.0087 | 0.0005 | 0.0098 |
| | 1.2 | 2.1581 | 2.0810 | -0.0771 | 0.0007 | 0.0370 |
| Polynomial of log(K/S)/σ | 0.7 | 0.0905 | 0.0927 | 0.0022 | 0.0001 | 0.0237 |
| | 1 | 0.8943 | 0.8851 | -0.0092 | 0.0005 | 0.0103 |
| | 1.2 | 2.1581 | 2.0818 | -0.0763 | 0.0010 | 0.0366 |
| Legendre Polynomial | 0.7 | 0.0905 | 0.0928 | 0.0023 | 0.0001 | 0.0246 |
| | 1 | 0.8943 | 0.8855 | -0.0088 | 0.0005 | 0.0100 |
| | 1.2 | 2.1581 | 2.0811 | -0.0770 | 0.0007 | 0.0370 |

Table 4.5: American put price.


| Continuation Function | Moneyness | PBM | PLSMC | Bias | S.E | R.E |
|---|---|---|---|---|---|---|
| Laguerre Polynomial | 0.7 | 0.0690 | 0.0743 | 0.0053 | 0.0001 | 0.0714 |
| | 1 | 0.4311 | 0.4651 | 0.0340 | 0.0004 | 0.0730 |
| | 1.2 | 2.0000 | 1.8247 | -0.1753 | 0.0005 | 0.0961 |
| Polynomial of S(t) and V(t) | 0.7 | 0.0690 | 0.0742 | 0.0052 | 0.0001 | 0.0702 |
| | 1 | 0.4311 | 0.4644 | 0.0333 | 0.0004 | 0.0716 |
| | 1.2 | 2.0000 | 1.8243 | -0.1757 | 0.0006 | 0.0963 |
| Polynomial of log(K/S)/σ | 0.7 | 0.0690 | 0.0741 | 0.0051 | 0.0001 | 0.0692 |
| | 1 | 0.4311 | 0.4649 | 0.0338 | 0.0004 | 0.0727 |
| | 1.2 | 2.0000 | 1.8244 | -0.1756 | 0.0006 | 0.0963 |
| Legendre Polynomial | 0.7 | 0.0690 | 0.0742 | 0.0052 | 0.0001 | 0.0703 |
| | 1 | 0.4311 | 0.4647 | 0.0336 | 0.0004 | 0.0722 |
| | 1.2 | 2.0000 | 1.8248 | -0.1752 | 0.0006 | 0.0960 |

Table 4.6: American put price when κ = 0.01, T = 0.5, and ρ = −0.8.

| Continuation Function | Moneyness | PBM | PLSMC | Bias | S.E | R.E |
|---|---|---|---|---|---|---|
| Laguerre Polynomial | 0.7 | 0.0822 | 0.0846 | 0.0024 | 0.0001 | 0.0283 |
| | 1 | 0.9147 | 0.9069 | -0.0078 | 0.0005 | 0.0086 |
| | 1.2 | 2.2000 | 2.1288 | -0.0711 | 0.0007 | 0.0334 |
| Polynomial of S(t) and V(t) | 0.7 | 0.0822 | 0.0844 | 0.0022 | 0.0001 | 0.0260 |
| | 1 | 0.9147 | 0.9072 | -0.0075 | 0.0005 | 0.0083 |
| | 1.2 | 2.2000 | 2.1278 | -0.0721 | 0.0007 | 0.0339 |
| Polynomial of log(K/S)/σ | 0.7 | 0.0822 | 0.0844 | 0.0022 | 0.0001 | 0.0265 |
| | 1 | 0.9147 | 0.9078 | -0.0069 | 0.0005 | 0.0076 |
| | 1.2 | 2.2000 | 2.1286 | -0.0714 | 0.0005 | 0.0335 |
| Legendre Polynomial | 0.7 | 0.0822 | 0.0844 | 0.0022 | 0.0001 | 0.0266 |
| | 1 | 0.9147 | 0.9070 | -0.0077 | 0.0005 | 0.0085 |
| | 1.2 | 2.2000 | 2.1276 | -0.0723 | 0.0007 | 0.0340 |

Table 4.7: American put price.


| Continuation Function | Moneyness | PBM | PLSMC | Bias | S.E | R.E |
|---|---|---|---|---|---|---|
| Laguerre Polynomial | 0.7 | 2.54 × 10^-10 | 2.5423 × 10^-10 | -1.0297 × 10^-13 | 0.0000 | 0.0004 |
| | 1 | 0.1451 | 0.1440 | -0.0010 | 0.0001 | 0.0072 |
| | 1.2 | 2.0000 | 1.9775 | -0.0225 | 0.0001 | 0.0114 |
| Polynomial of S(t) and V(t) | 0.7 | 2.54 × 10^-10 | 2.5423 × 10^-10 | -1.0297 × 10^-13 | 0.0000 | 0.0004 |
| | 1 | 0.1451 | 0.1441 | -0.0009 | 0.0007 | 0.0068 |
| | 1.2 | 2.0000 | 1.9778 | -0.0222 | 0.0002 | 0.0112 |
| Polynomial of log(K/S)/σ | 0.7 | 2.54 × 10^-10 | 2.5423 × 10^-10 | -1.0297 × 10^-13 | 0.0000 | 0.0004 |
| | 1 | 0.1451 | 0.1441 | -0.0009 | 0.0009 | 0.0067 |
| | 1.2 | 2.0000 | 1.9776 | -0.0224 | 0.0001 | 0.0113 |
| Legendre Polynomial | 0.7 | 2.54 × 10^-10 | 2.5423 × 10^-10 | -1.0297 × 10^-13 | 0.0000 | 0.0004 |
| | 1 | 0.1451 | 0.1441 | -0.0009 | 0.0007 | 0.0067 |
| | 1.2 | 2.0000 | 1.9775 | -0.0225 | 0.0001 | 0.0114 |

Table 4.8: American put price when κ = 7, T = 5/252, and ρ = −0.8.


Tables (4.2–4.8) show that the number of simulations (I = 500,000) was sufficient to obtain a good approximation regardless of the continuation function. For all the parameter combinations except Table (4.6), where κ = 0.01, the bias is less than or equal to 10 cents and the relative error is less than or equal to 5%. Figure (4.1) shows that the relative error is low for all values of κ, except for κ = 0.01, which gives a relative error slightly greater than 7%.

Figure 4.1: Plot of relative error against κ

The high relative error for κ = 0.01 may be due either to our simulation or to the benchmark built-in function. There is no evidence in our results to distinguish which of the two causes it. However, in certain cases we observed a few bugs in the built-in MATLAB function. Resolving this question is beyond the scope of this thesis and should be investigated further.

Overall, the simulation performed well under the changes of parameters, and all the continuation functions provided fairly good approximations. The resulting values are so close to each other that it would be unreasonable to choose the best continuation function based only on bias and relative error.

Thus, we also consider the estimated computational time of each function, as it is a crucial factor in financial applications. The computational time for each continuation function is presented in Table (4.9):


| Continuation Function | Estimated time |
|---|---|
| Laguerre Polynomials | 3.1 mins |
| Polynomial of S(t) and V(t) | 5 mins |
| Polynomial of log(K/S)/σ | 1.6 mins |
| Legendre Polynomials | 3 mins |

Table 4.9: Estimated computational time for each continuation function, measured on a Lenovo Ideapad 330 with an Intel Core i5-8250U processor and 8 GB of RAM.
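One reason the bases differ in cost is that Laguerre-type regressors carry an exponential weight, whereas Legendre polynomials are plain polynomials. The rough Python micro-benchmark below (our own sketch; the basis definitions are illustrative, not the thesis's exact regressors, and absolute timings are machine-dependent) contrasts the two; t_laguerre is typically larger because of the exp() call per evaluation:

```python
import math
import timeit

def laguerre_basis(x):
    # weighted Laguerre polynomials L0..L2 with the exponential weight exp(-x/2)
    w = math.exp(-x / 2.0)
    return [w, w * (1.0 - x), w * (1.0 - 2.0 * x + 0.5 * x * x)]

def legendre_basis(x):
    # Legendre polynomials P0..P2: plain polynomials, no transcendental calls
    return [1.0, x, 0.5 * (3.0 * x * x - 1.0)]

xs = [i / 1000.0 for i in range(1000)]
t_laguerre = timeit.timeit(lambda: [laguerre_basis(x) for x in xs], number=50)
t_legendre = timeit.timeit(lambda: [legendre_basis(x) for x in xs], number=50)
```

In the LSMC, such a basis is evaluated once per path per time step, so even a small per-call overhead accumulates over 500,000 paths.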

Based on the computational time, we can conclude that the polynomial of log(K/S)/σ is significantly faster than the rest: it takes only 1.6 minutes to simulate 500,000 paths. This function combines the stock price S and the strike K into a single variable, log(K/S)/σ, which results in a polynomial of one variable. A limitation of this continuation function, however, is that when the put option is deep out of the money, the argument log(K/S)/σ blows up towards negative infinity due to its logarithmic nature. Hence, in the next section we proceed only with the second-fastest continuation function, namely the Legendre polynomials.
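To make the deep out-of-the-money limitation concrete, the following Python sketch (our own illustration, not the thesis code; the rescaling x = 2S/K − 1 for the Legendre basis is an illustrative assumption) evaluates the regressor log(K/S)/σ as the put moves deeper out of the money:

```python
import math

def log_moneyness_regressor(S, K, sigma):
    # tends to minus infinity as the put goes deep out of the money (S >> K)
    return math.log(K / S) / sigma

def legendre_basis(S, K):
    # Legendre polynomials P0..P2 at x = 2*(S/K) - 1: bounded for moderate S/K
    x = 2.0 * (S / K) - 1.0
    return [1.0, x, 0.5 * (3.0 * x * x - 1.0)]

K, sigma = 100.0, 0.9
vals = [log_moneyness_regressor(S, K, sigma) for S in (100.0, 200.0, 400.0)]
# vals decreases steadily as S grows, with no lower bound
```

The unbounded regressor can dominate the regression design matrix for deep out-of-the-money paths, whereas the polynomial bases stay numerically tame over the moneyness range considered here.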

### Early Exercise Boundary using LSMC

The early exercise boundary is an interesting object when pricing American options. It represents, at each time step, the range of stock prices for which it is optimal to exercise the option early. It is optimal to exercise when the payoff of the option is greater than or equal to its continuation value Cv, that is, when (K − S(t))+ ≥ Cv. The decision to exercise or hold an American call or put option at any time t depends only on the time t and the underlying stock value S(t). For an American call on a non-dividend-paying stock, early exercise is never optimal regardless of the model [2]. For an American put option, we assume there is an early exercise boundary S(t) = B(t) that splits the stock prices S(t) into the following two regions:

S(t) > B(t): hold on to the option;
S(t) ≤ B(t): exercise the put option early.   (4.1)

The boundary values are calculated by an LSMC algorithm similar to the one used in the previous section to price the put option. We simulate the process under the risk-neutral Heston model using the Euler discretisation scheme and approximate the continuation value at each time step by linear regression. The continuation value is then used to uncover the early exercise boundary. To illustrate the exercise region, Figure (4.2) shows an at-the-money put option in four scenarios with different numbers of time steps. The number of simulated paths was kept constant while the number of time steps was increased. As the number of time steps increased, the curve became smoother, defining clearer exercise and holding regions. This yields two important insights. First, it is feasible to use the LSMC method to obtain a smooth early exercise boundary with a sufficiently large number of time steps. Second, the early exercise boundary defined in (4.1) is monotonically increasing with time, the same pattern as in the Black-Scholes model.
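The procedure of backward induction plus boundary extraction can be sketched in Python as follows (our own illustrative sketch, not the thesis's MATLAB code; assumptions: Euler scheme with reflected variance, a quadratic polynomial basis in S, regression on in-the-money paths only, and B(t) estimated as the largest stock price at which exercise occurs at step t; the parameter values in the example are hypothetical):

```python
import numpy as np

def lsmc_put_heston(S0, K, T, r, V0, kappa, theta, sigma_v, rho, N, M, seed=0):
    """Price an American put by LSMC under an Euler-discretised Heston model,
    and return a crude estimate B[t] of the early exercise boundary."""
    rng = np.random.default_rng(seed)
    dt = T / N
    S = np.full(M, float(S0))
    V = np.full(M, float(V0))
    paths = np.empty((N + 1, M))
    paths[0] = S
    for t in range(N):
        w1 = rng.standard_normal(M)
        w2 = rho * w1 + np.sqrt(1.0 - rho * rho) * rng.standard_normal(M)
        S = S + r * S * dt + np.sqrt(V) * S * w1 * np.sqrt(dt)
        V = np.abs(V + kappa * (theta - V) * dt
                   + sigma_v * np.sqrt(V) * w2 * np.sqrt(dt))  # reflected variance
        paths[t + 1] = S

    cash = np.maximum(K - paths[N], 0.0)   # option value along each path
    B = np.full(N + 1, np.nan)             # boundary estimate per time step
    for t in range(N - 1, 0, -1):
        St = paths[t]
        itm = K - St > 0.0                 # regress on in-the-money paths only
        disc = np.exp(-r * dt) * cash      # discounted next-step values
        if itm.sum() > 3:
            X = np.vander(St[itm], 3)      # quadratic polynomial basis in S
            coef, *_ = np.linalg.lstsq(X, disc[itm], rcond=None)
            cont = np.vander(St, 3) @ coef # estimated continuation values
        else:
            cont = disc
        exercise = itm & (K - St >= cont)
        cash = np.where(exercise, K - St, disc)
        if exercise.any():
            B[t] = St[exercise].max()      # largest price where exercise is optimal
    price = np.exp(-r * dt) * cash.mean()  # discount the final step to time 0
    return price, B

# hypothetical at-the-money example, loosely in the style of the thesis's parameters
price, B = lsmc_put_heston(S0=100.0, K=100.0, T=0.5, r=0.03, V0=0.0625,
                           kappa=7.0, theta=0.16, sigma_v=0.9, rho=-0.8,
                           N=50, M=2000, seed=1)
```

Taking the maximum exercised stock price per step is only one plausible boundary estimator; a finer approach would search for the root of payoff minus continuation value on a price grid.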

Figure 4.2: Early exercise boundary for an American put option with T = 0.5, κ = 7, ρ = −0.8 and K/S = 1, evaluated using 5 time steps (top left), 10 time steps (top right), 50 time steps (bottom left) and 100 time steps (bottom right); 100,000 simulated paths.

Figure 4.3 uncovers the early exercise boundary at four different values of the initial volatility, namely V0 = 0.05, V0 = 0.1, V0 = 0.2 and V0 = 0.3. We observe that as the initial volatility increases, the early exercise region shrinks. This is intuitive, as the expected payoff from holding the option is higher when volatility is higher.

Figure 4.4 uncovers the early exercise boundary at four different values of the correlation coefficient ρ, namely ρ = −0.8, ρ = −0.4, ρ = 0.1 and ρ = 0.8. We observe that as ρ increases, the early exercise region also increases.

Figure 4.5 uncovers the early exercise boundary at four different values of the long-term average variance θ, namely θ = 0.01, θ = 0.2, θ = 0.5 and θ = 1. We observe that as θ increases, the early exercise region decreases.

Figure 4.6 uncovers the early exercise boundary at four different values of the volatility of volatility σv, namely σv = 0.1, σv = 0.5, σv = 1 and σv = 1.5. We observe that as σv increases, the early exercise region decreases.


Figure 4.3: Early exercise boundary for an American put option evaluated at four different initial volatilities V0, with T = 0.5, κ = 7, ρ = −0.8, σv = 0.9 and θ = 0.16, using 100 time steps and 100,000 simulated paths.

Figure 4.4: Early exercise boundary for an American put option evaluated at four different correlations ρ, with T = 0.5, κ = 7, σv = 0.9, V0 = 0.0625 and θ = 0.16, using 100 time steps and 100,000 simulated paths.


Figure 4.5: Early exercise boundary for an American put option evaluated at four different long-term average variances θ, with V0 = 0.0625, T = 0.5, κ = 7, ρ = −0.8 and σv = 0.9, using 100 time steps and 100,000 simulated paths.

Figure 4.6: Early exercise boundary for an American put option evaluated at four different volatility of volatility values σv, with V0 = 0.0625, T = 0.5, κ = 7 and ρ = −0.8, using 100 time steps and 100,000 simulated paths.


## Chapter 5

### Conclusion

We presented the Least-Squares Monte Carlo (LSMC) method for simulating the risk-neutral Heston model. For this simulation, we considered four different continuation functions under a number of different parameter combinations. From the simulation we calculated the bias, relative error and standard error, and noted the computational time. The continuation functions had similar relative errors and biases, so it is hard to base our conclusion on the relative error values alone. The Legendre polynomials were selected as the best continuation function in our case because they involve neither exponential weights nor logarithmic expressions: exponential weights are computationally expensive, and the logarithmic expression log(K/S) explodes quickly when the option is deep out of the money. The use of antithetic paths decreases the variance, as shown mathematically in chapter 4. After investigating the early exercise boundary and the corresponding plots, we observe that, with sufficiently many time steps and simulated paths, the LSMC provides a smooth, monotonically increasing early exercise boundary. We also investigated the effect of changing some of the Heston model's parameters, namely the volatility of volatility (σv), the initial variance (V0), the correlation coefficient (ρ) and the long-term average variance (θ). We observed that as the initial volatility (V0), the volatility of volatility (σv) and the long-term average variance (θ) increase, the early exercise region decreases, while the early exercise region increases as the correlation (ρ) increases. In the future, one could investigate the LSMC under different pricing models and with better benchmark prices, and experiment with other continuation functions; further research is therefore required.


## Bibliography

[1] Aristotle, "Politics," 1998.

[2] J. C. Hull, Options, Futures and Other Derivatives. Upper Saddle River, NJ: Prentice Hall, 2009.

[3] L. Bachelier, Louis Bachelier’s theory of speculation: the origins of modern finance. Princeton University Press, 2011.

[4] L. O. Scott, "Option pricing when the variance changes randomly: Theory, estimation, and an application," Journal of Financial and Quantitative Analysis, vol. 22, no. 4, pp. 419–438, 1987.

[5] S. L. Heston, "A closed-form solution for options with stochastic volatility with applications to bond and currency options," The Review of Financial Studies, vol. 6, no. 2, pp. 327–343, 1993.

[6] N. Clarke and K. Parrott, "Multigrid for American option pricing with stochastic volatility," Applied Mathematical Finance, vol. 6, no. 3, pp. 177–195, 1999.

[7] J. Barraquand, "Numerical valuation of high dimensional multivariate European securities," Management Science, vol. 41, no. 12, pp. 1882–1891, 1995.

[8] J. Y. Campbell, A. W. Lo, and A. C. MacKinlay, The Econometrics of Financial Markets. Princeton University Press, 1997.

[9] F. A. Longstaff and E. S. Schwartz, "Valuing American options by simulation: A simple least-squares approach," The Review of Financial Studies, vol. 14, no. 1, pp. 113–147, 2001.

[10] F. AitSahlia, M. Goswami, and S. Guha, "American option pricing under stochastic volatility: An efficient numerical approach," Computational Management Science, vol. 7, no. 2, pp. 171–187, 2010.

[11] Y. Hilpisch, Derivatives Analytics with Python: Data Analysis, Models, Simulation, Calibration and Hedging. John Wiley & Sons, 2015.

[12] A. Medvedev and O. Scaillet, "Pricing American options under stochastic volatility and stochastic interest rates," Journal of Financial Economics, vol. 98, no. 1, pp. 145–159, 2010.


[13] L. Stentoft, "Assessing the least squares Monte Carlo approach to American option valuation," Review of Derivatives Research, vol. 7, no. 2, pp. 129–168, 2004.

[14] M. Kijima, Stochastic processes with applications to finance. Chapman and Hall/CRC, 2016.

[15] J. C. Cox, J. E. Ingersoll Jr, and S. A. Ross, “A theory of the term structure of interest rates,” in Theory of valuation, pp. 129–164, World Scientific, 2005.

[16] H. L. Royden and P. Fitzpatrick, Real Analysis, vol. 32. New York: Macmillan, 1988.

[17] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. US Department of Commerce, National Bureau of Standards, Applied Mathematics Series 55, 1965.

[18] P. Glasserman, Variance Reduction Techniques, pp. 185–279. New York, NY: Springer New York, 2003.


### Appendix .A

Listing 1: MATLAB Code for LSMC with Legendre polynomials

function [P] = hestonlegendre(S0,k,T,r,V0,kappa_s,theta_s,sigma,rho,N,M)
% Pricing American put options with Least-Squares Monte Carlo under H93
%
% S0       initial stock price
% k        strike price
% T        maturity
% r        interest rate
% V0       initial variance
% kappa_s  mean-reversion rate
% theta_s  long-term average variance
% sigma    volatility of volatility
% rho      correlation coefficient of the Wiener processes w1, w2
% N        number of time steps
% M        number of simulated paths
%
% NB: all variables with suffix "1" (such as Put1) relate to the antithetic paths

dt = T/N;                          % time step

% Matrices for the stock price and variance paths
D  = zeros(N+1,M);  VAR  = zeros(N+1,M);
D1 = zeros(N+1,M);  VAR1 = zeros(N+1,M);

% Simulate the Euler-discretised Heston model
for j = 1:M
    S  = zeros(N+1,1);  V  = zeros(N+1,1);  S(1)  = S0;  V(1)  = V0;
    S1 = zeros(N+1,1);  V1 = zeros(N+1,1);  S1(1) = S0;  V1(1) = V0;
    for i = 1:N
        % correlated Wiener increments w1, w2
        w1 = randn;
        w2 = randn;
        w2 = w1*rho + w2*sqrt(1 - rho*rho);
        S(i+1) = S(i) + r*S(i)*dt + sqrt(V(i))*S(i)*w1*sqrt(dt);
        V(i+1) = V(i) + kappa_s*(theta_s - V(i))*dt + sigma*sqrt(V(i))*w2*sqrt(dt);
        V(i+1) = sqrt(V(i+1)^2);   % reflect the variance to keep it non-negative
        % antithetic paths use the negated increments
        S1(i+1) = S1(i) + r*S1(i)*dt - sqrt(V1(i))*S1(i)*w1*sqrt(dt);
        V1(i+1) = V1(i) + kappa_s*(theta_s - V1(i))*dt - sigma*sqrt(V1(i))*w2*sqrt(dt);
        V1(i+1) = sqrt(V1(i+1)^2);
    end
    % store the simulated and antithetic path values
    D(:,j)  = S;   VAR(:,j)  = V;
    D1(:,j) = S1;  VAR1(:,j) = V1;
end

% Payoff matrices for all simulated paths at every time step
Put  = zeros(N+1,M);  payoff_put  = max(k - D, 0);
Put1 = zeros(N+1,M);  payoff_put1 = max(k - D1, 0);

% At maturity the option price is the immediate payoff
% (row 1 of Put corresponds to time T, row N+1 to time 0)
Put(1,:)  = max(k - D(N+1,:), 0);
Put1(1,:) = max(k - D1(N+1,:), 0);

% Variables used in the continuation function
y = D/k;                  y1 = D1/k;
z = (D.*VAR)/(k*theta_s); z1 = (D1.*VAR1)/(k*theta_s);

% LSMC backward induction from T-dt down to time 0
b_p   = zeros(M,1);  b_p1   = zeros(M,1);
A_put = zeros(M,4);  A_put1 = zeros(M,4);
for i = 2:N+1
    t = N+2-i;   % forward time index corresponding to this backward step
    for g = 1:M
        % discounted option values one step ahead; multiplying by the sign of
        % the payoff keeps only the in-the-money paths
        b_p(g,1)  = exp(-r*dt)*Put(i-1,g)*sign(payoff_put(t,g));
        b_p1(g,1) = exp(-r*dt)*Put1(i-1,g)*sign(payoff_put1(t,g));
        % Legendre-type basis evaluated at time t
        A_put(g,:)  = [1, 4*y(t,g)-2,  (-16*y(t,g)+48*(2*y(t,g)-1)^2),  4*z(t,g)-2]*sign(payoff_put(t,g));
        A_put1(g,:) = [1, 4*y1(t,g)-2, (-16*y1(t,g)+48*(2*y1(t,g)-1)^2), 4*z1(t,g)-2]*sign(payoff_put1(t,g));
    end
    % omit the all-zero (out-of-the-money) rows
    A_p  = [A_put,  b_p ];  A_p  = A_p( any(A_p, 2), :);
    A_p1 = [A_put1, b_p1];  A_p1 = A_p1(any(A_p1,2), :);

    % linear regression for the continuation-function coefficients x_put
    x_put  = A_p( :, 1:end-1) \ A_p( :, end);
    x_put1 = A_p1(:, 1:end-1) \ A_p1(:, end);

    for j = 1:M
        % approximate the continuation value on each path and update the price
        cv_put   = x_put'  * [1, 4*y(t,j)-2,  (-16*y(t,j)+48*(2*y(t,j)-1)^2),  4*z(t,j)-2]';
        Put(i,j) = max(k - D(t,j), cv_put);
        cv_put1   = x_put1' * [1, 4*y1(t,j)-2, (-16*y1(t,j)+48*(2*y1(t,j)-1)^2), 4*z1(t,j)-2]';
        Put1(i,j) = max(k - D1(t,j), cv_put1);
    end
end

% Mean over all paths of the time-0 values (stored in the last row)
put  = mean(Put(end,:));
put1 = mean(Put1(end,:));

% Average the plain and antithetic estimates
P = (put + put1)/2;
end


### Appendix .B

Listing 2: MATLAB Code for European Put options with MC

function [P] = hestonEU(S0,k,T,r,V0,kappa_s,theta_s,sigma,rho,N,M)
% Pricing European put options with Monte Carlo under H93
% (parameters as in Listing 1)

dt = T/N;   % time step

put  = zeros(1,M);
put1 = zeros(1,M);

% Simulate the Euler-discretised Heston model
for j = 1:M
    S  = zeros(N+1,1);  V  = zeros(N+1,1);  S(1)  = S0;  V(1)  = V0;
    S1 = zeros(N+1,1);  V1 = zeros(N+1,1);  S1(1) = S0;  V1(1) = V0;
    for i = 1:N
        % correlated Wiener increments w1, w2
        w1 = randn;
        w2 = randn;
        w2 = w1*rho + w2*sqrt(1 - rho*rho);
        S(i+1) = S(i) + r*S(i)*dt + sqrt(V(i))*S(i)*w1*sqrt(dt);
        V(i+1) = V(i) + kappa_s*(theta_s - V(i))*dt + sigma*sqrt(V(i))*w2*sqrt(dt);
        V(i+1) = V(i+1)*(V(i+1) > 0);   % absorb negative variance at zero
        % antithetic paths use the negated increments
        S1(i+1) = S1(i) + r*S1(i)*dt - sqrt(V1(i))*S1(i)*w1*sqrt(dt);
        V1(i+1) = V1(i) + kappa_s*(theta_s - V1(i))*dt - sigma*sqrt(V1(i))*w2*sqrt(dt);
        V1(i+1) = V1(i+1)*(V1(i+1) > 0);
    end
    % at maturity the option price is the discounted payoff
    put(j)  = max(k - S(end),  0)*exp(-r*T);
    put1(j) = max(k - S1(end), 0)*exp(-r*T);
end

% Average of the plain and antithetic estimates
P = (mean(put) + mean(put1))/2;
end


### Appendix .C

Bachelor Degree Objectives

In this section we present, as objectively as the writers can be, how our thesis satisfies the requirements for a bachelor thesis specified in the document "How to Write a Thesis?" published by Sergei Silvestrov, Anatoliy Malyarenko and Dmitrii Silvestrov.

Objectives:

1. For Bachelor degree, student should demonstrate knowledge and understanding in the major field of study, including knowledge of the field’s scientific basis, knowledge of applicable methods in the field, specialisation in some part of the field and orientation in current research questions.

2. For Bachelor degree, the student should demonstrate the ability to search, collect, evalu-ate and critically interpret relevant information in a problem formulation and to critically discuss phenomena, problem formulations and situations.

3. For Bachelor degree, the student should demonstrate the ability to independently identify, formulate and solve problems and to perform tasks within specified time frames.

4. For Bachelor degree, the student should demonstrate the ability to present orally and in writing and discuss information, problems and solutions in dialogue with different groups.

5. For Bachelor degree, the student should demonstrate the ability, in the major field of study, to make judgements with respect to scientific, societal and ethical aspects.

Fulfillment:

1. Objective 1: We have done a fair literature review on numerical methods for pricing American options, specifically the least-squares Monte Carlo simulation method (LSMC). Starting from the first paper that introduced the method, we investigated different papers where the LSMC was implemented under different models and with various continuation functions. We also explain thoroughly the idea of the LSMC and its implementation.

2. Objective 2: We looked into the Heston model presented in 1993 and the motivation for why stochastic volatility makes more sense than the constant-volatility assumption used before.

3. Objective 3: We formulated our problem as implementing the LSMC under the two-dimensional Heston model. Our goal was to evaluate the goodness of the LSMC under the Heston model using different continuation functions, both for pricing the options and for uncovering the early exercise boundary. We succeeded in getting the work done on time.

4. Objective 4: We had several fruitful discussions with our supervisor, where we presented our work and progress to date.

5. Objective 5: We tried our best to refer to and acknowledge any work or individual that has contributed to the content of our thesis.
