
Department of Physics, Umeå University

Price Vector Recalculation Optimization

June 19, 2017

Student: Sara Ekman, saek0034@student.umu.se
Supervisor: Jonas Nylén, jonas.nylen@cinnober.com
Examiner: Lisa Hed, lisa.hed@umu.se


Abstract

Real-time risk calculations with a high degree of accuracy are an attractive element for a clearing house, since risk calculations are performed in order to decide the size of the capital required from its members. During a trading day, a lot can happen on the financial market, resulting in large instrument price movements which in turn affect the accuracy of the real-time risk calculations. It is therefore interesting to investigate how one can optimize the recalculation frequency of instrument prices. In this thesis, different financial instruments are examined and, depending on the complexity of the pricing, each instrument falls into either the fast pricing or the slow pricing category. A fast pricing instrument is an instrument whose price can be found using an analytical expression. A slow pricing instrument is an instrument which requires simulations in order to find its price. For the fast pricing instruments, futures contracts and European options are investigated. The goal is to find an optimization strategy that knows when recalculations are required. To that end, market movements are registered and a predetermined target value is set, together with the constraint that the mean relative error over all scenario prices is not allowed to exceed this target value. Implementing different strategies results in a strategy that gives the lowest relative mean difference in the number of calculations required, compared to an optimal strategy, during a trading day. This strategy considers the relative error of the stock price process in order to determine when to recalculate.

For the slow pricing instruments, arithmetic Asian call options are considered. For this instrument type, the focus lies on finding a good approximation method which is also computationally fast. If the approximated instrument price is close to the real price, the number of recalculations is optimized. Different approximation methods are implemented and analyzed, where both general and instrument-specific methods are considered. An analytical Levy approximation method, which is an instrument-specific method, fits the real prices well, so that no recalculation is required. A general modified Least-Squares Monte Carlo method which uses historical information is the fastest approximation method but requires recalculations. Either the approximation method needs to fit the real prices well (so that no recalculation is needed), or a strategy for determining when recalculations are required is needed. The latter approach would have been interesting to study further for the slow pricing instruments.


Sammanfattning

Real-time risk calculations with high accuracy are an attractive tool for clearing houses, since the risk calculations are used to determine how much the clearing members need to pay to the clearing house. During a trading day, a lot can happen on the market, leading to large changes in instrument prices, which in turn affect the accuracy of the real-time risk calculations. This is why it is interesting to investigate how the recalculation frequency of instrument prices can be optimized. In this thesis, different financial instruments are examined and, depending on how complex the pricing is, the instruments fall either into the fast pricing category or the slow pricing category.

A fast pricing instrument is an instrument whose price can be determined with an analytical expression, while a slow pricing instrument requires simulations to determine its price. For the fast pricing instruments, futures contracts and European options are the instruments examined. The goal is then to find an optimization strategy that knows when instrument prices should be recalculated. To determine this, market movements are registered together with a predetermined target value, with the constraint that the mean relative error over all scenario prices may never exceed this target value. After different optimization strategies have been implemented, we obtain a strategy that gives the lowest relative difference in the number of calculations, compared to an optimal strategy, during a trading day. This strategy considers the relative error of the stock price process to determine when recalculations should be performed.

For the slow pricing instruments, arithmetic Asian call options are the instrument type examined. Here the focus lies on finding good approximations of the instrument prices that are also computationally fast. If we manage to approximate the instrument prices so that they are close to the exact prices, the number of recalculations is optimized. Different approximation methods were implemented, where both general and instrument-specific methods were analyzed. An analytical Levy approximation (which is instrument specific) is the method whose prices lie closest to the exact instrument prices, since no recalculations were required. A general modified Least-Squares Monte Carlo method that uses historical information is the fastest method but also required recalculations. This means that either the approximated instrument prices must lie close to the exact ones (so that no recalculations are required), or a strategy is needed that can determine when the approximated prices should be recalculated. The latter approach would have been interesting to investigate further.


Contents

1 Introduction
  1.1 Background
  1.2 Goal and Purpose
  1.3 Disposition
  1.4 Delimitations
2 Theory
  2.1 Financial instruments
    2.1.1 Stock price process
    2.1.2 Risk neutral measure Q
    2.1.3 Futures
    2.1.4 European options
    2.1.5 Asian options
  2.2 Volatility
    2.2.1 EWMA - model
    2.2.2 GARCH(1,1) - model
    2.2.3 Intra-day volatility
  2.3 Risk measures
    2.3.1 Value at risk: VaR
    2.3.2 Expected shortfall: ES
  2.4 Approximations
    2.4.1 Taylor approximation
    2.4.2 Control Variate
    2.4.3 Control Variate on arithmetic Asian Call
    2.4.4 Delta-Gamma derivatives estimation
    2.4.5 Least Square Monte Carlo
    2.4.6 Levy's Approximation
  2.5 Optimization
    2.5.1 Optimization problem
    2.5.2 Errors
    2.5.3 Optimization problem: approximations
    2.5.4 Motivation: Optimizing instrument price vectors
3 Method
  3.1 Data
  3.2 Scenario-generating
  3.3 Instrument set-up
  3.4 Fast pricing instruments
    3.4.1 Solution generation
    3.4.2 Optimization strategies
      3.4.2.1 Constant recalculation
      3.4.2.2 Relative change in stock price
      3.4.2.3 Relative error of stock price
      3.4.2.4 Relative error of instrument price
    3.4.3 Evaluation of strategies
  3.5 Slow pricing instruments
    3.5.1 Solution generation
    3.5.2 Approximation methods
      3.5.2.1 Least Square Monte Carlo simulation (LSMC)
      3.5.2.2 Historical Least Square Monte Carlo simulation (HLSMC)
      3.5.2.3 Levy Approximation
      3.5.2.4 Delta-Gamma Approximation
      3.5.2.5 Calibration of methods
    3.5.3 Evaluation of approximation methods
4 Results
  4.1 Fast pricing instruments - Optimization strategies
    4.1.1 Futures
    4.1.2 European options
  4.2 Slow pricing instruments - Approximation methods
5 Discussion
  5.1 Fast pricing instruments - Optimization strategies
    5.1.1 Futures
    5.1.2 European options
  5.2 Slow pricing instruments - Approximation methods
  5.3 Conclusion
  5.4 Future work
6 References
Appendices
A Appendix: Distribution of scenarios
B Appendix: Determination of ε_target


1 Introduction

1.1 Background

Cinnober creates financial solutions for exchanges, clearing houses, banks and brokerage firms. They develop systems for trading and real-time clearing of financial transactions. A clearing house acts as a third party in large financial transactions, where the buyer and the seller are clearing members. The purpose of a clearing house is to add efficiency and stability to the financial market [1]. The clearing house takes on the role of seller towards the clearing member buyer and the role of buyer towards the clearing member seller. In this way, no communication between the clearing members has to be sustained, since the clearing house takes responsibility for clearing the transaction. This entails that if a clearing member defaults, meaning it will not be able to fulfill its part of a trade, the clearing house steps in and clears the transaction. The counter-party risk is thus removed from the clearing members and moved onto the clearing house [2]. The big advantage of using a clearing house is therefore that a clearing member only needs to do business with, and hence only needs to trust, the clearing house. Since the clearing house takes on both sides of a trade, its portfolio is always balanced and it faces no market risk. The size of the exposure is also reduced for the clearing members, since netting is allowed when a clearing house is present. Netting exposures across the clearing members means that if a clearing member should pay a certain amount to the clearing house for one transaction, but at the same time should receive an amount from the clearing house for another transaction, these payments are netted [3]. In Figure 1 the effect of using a clearing house can be observed.

Figure 1 – Illustration of the network when several bilateral contracts are set between six clearing members (CM1-CM6), compared with the network when a clearing house (CH) is present.

As can be seen in Figure 1, the left-hand side of the picture shows a network consisting of several bilateral contracts between six different clearing members, which gives rise to a rather messy network. The right-hand side of the picture shows how much simpler the network becomes when a clearing house is present.

In order for the clearing house to manage the counter-party risk it faces, it collects collateral from the clearing members. Collateral is anything that can be converted into cash on short notice and can be used by the clearing house, if a clearing member defaults, to settle the other part of the trade. The clearing house only accepts collateral with a low liquidity risk. The collected collateral can decrease in value, depending on what type of collateral has been pledged by the clearing member, which is why a haircut is often applied to the value of the collateral [4]. A haircut of 5% means that only 95% of the value of the collateral can be accounted for. In order to determine the size of the collateral required in a trade, the clearing house performs risk calculations on the existing portfolios.

If a clearing member were to default, the clearing house has a protocol for how to handle the situation. First, the clearing house uses the collateral received from the defaulting clearing member. If this collateral does not cover the losses, the clearing house uses a fund called the "default fund contribution", a fund to which usually all clearing members contribute (as the privilege of making use of the clearing house). At this point, however, the clearing house only uses the contribution from the defaulting clearing member. Then, if the losses are still not covered, the clearing house can use some of its own equity to try to cover the remaining losses, in order to protect the non-defaulting clearing members as far as possible. If this is not enough, the next step in the protocol is to use the contributions from the non-defaulting members in the default fund, and in addition the clearing house can ask the surviving clearing members for further contributions (up to a certain amount). This procedure is called "rights of assessment". The last step for the clearing house in order to cover the remaining losses is to use its own remaining equity. If the losses are still not covered after this last step, the clearing house becomes insolvent. If this were to happen, it would be disastrous for the clearing house and would in fact have a major impact on the financial market [3].

Real-time risk calculations on existing portfolios with a high degree of accuracy are an attractive element for a clearing house when deciding the size of the collateral required from the clearing members, and since Cinnober creates solutions for clearing houses, they are of interest to Cinnober as well. The risk calculations should estimate how much certain instruments are expected to lose at most, given a certain probability level. Two commonly used risk measures are value at risk and expected shortfall. If the estimated instrument prices differ from the real prices, the real-time risk calculations will also differ and hence lose accuracy. Currently, recalculations of price vectors are done at predetermined times, where a price vector contains the prices of an instrument for several different scenarios at one point in time. One outcome of this is that recalculations may be done even though they are not necessary, meaning that unnecessary computational power is used. Another outcome is that during some short time period the market may be very volatile, which implies that the price is wrongly set and hence the accuracy drops. This in turn leads to misleading risk estimations. Since recalculation of price vectors is quite time consuming, a model that optimizes the recalculation frequency of the price vectors, given market movements such as changes in the underlying prices, volatility and interest rates, is of interest. Also important is a method that estimates the errors between the current estimated instrument prices, given market movements, and the previously calculated instrument prices. Optimizing when to recalculate instrument prices hence gives real-time risk calculations with a high degree of accuracy.

1.2 Goal and Purpose

The main goal of this project is to investigate how to optimize the recalculation frequency and timing of the instrument price vectors used for risk estimation. In order to optimize the recalculation frequency of instrument price vectors we face different optimization problems depending on what type of instrument is considered. We divide the instrument types into two sub-categories: fast pricing instruments and slow pricing instruments. Fast pricing instruments are instruments for which there exists an analytical expression to price the instrument, while slow pricing instruments require simulations to price the instrument. It is also important to find methods that are suitable for real-world applications, which is why the models should be back-tested on real-world scenarios with real-time instruments.

The purpose of this project is to help Cinnober find a method that optimizes the recalculation frequency of instrument prices, as part of the risk estimation used to determine the size of the collateral needed by the clearing house.

1.3 Disposition

The layout of this report is as follows. This section, Section 1, contains an introduction and background to the project as well as the delimitations made. Section 2 covers the relevant theory needed for this thesis, including the financial instruments concerned, how one can estimate volatility, the two risk measures value at risk and expected shortfall, and the approximation methods that can be used to approximate the instrument prices (for the slow pricing instruments). It ends with an optimization section where the optimization problems are presented together with how one can estimate the errors stemming from the simplifications made. Section 3 explains the data used, how the real-world scenarios needed for risk calculations have been generated, and the instrument set-up considered in this thesis. Further, this section covers the optimization strategies corresponding to the fast pricing instruments, followed by how the implemented strategies are evaluated. The approximation methods needed for the slow pricing instruments, and how these methods are evaluated, finish the section. Section 4 contains the main results for the fast pricing and slow pricing instruments. The final section, Section 5, covers a discussion of the results obtained, followed by conclusions with comments on whether or not the goal has been reached. Finally, some ideas for future work, if the project were to be extended, are presented.


1.4 Delimitations

This section covers the delimitations made in this thesis. First, we only cover an optimization of the financial instruments futures contracts, European options and arithmetic Asian call options. The motivation for choosing these instruments is that both fast pricing instruments (futures and European options) and slow pricing instruments (arithmetic Asian call options) are interesting to look at, since the pricing methods are different. The instrument set-up is also limited: three different strike prices and four different times to maturity, together with ten different stocks as underlying, are considered. The results presented only concern one trading day and have hence not been verified to hold for another day.

Since the optimization problem is tackled in different ways depending on the instrument type, we want to find a method that optimizes the recalculation frequency and timing of instrument price vectors. When arithmetic Asian call options (slow pricing instruments) are considered, a delimitation is made regarding how to know when recalculations are required; for these instruments, the focus instead lies on finding good approximation methods. The last delimitation concerns the risk calculations: only the optimization of the price vectors needed for the risk estimations is covered, and hence the risk measures themselves are not estimated.


2 Theory

This section covers the relevant theory required for this thesis. It starts with the financial instruments considered, how volatility can be estimated, and which risk measures are most common to use and how they work. Then, different approximation methods for estimating the price, mainly concerning the slow pricing instruments, are presented. Lastly, an optimization section presents the optimization problem for both the fast pricing and slow pricing instrument types, as well as how errors can be used to determine when recalculations need to be performed. The last part of that section covers a motivation for why the relative errors of the profit and loss vectors are not used to determine when recalculations are required.

2.1 Financial instruments

In the financial market there exist several different types of financial instruments, and this section presents the instruments considered in this thesis. It also covers how the stock price process can be modeled and explains what the risk neutral measure is. The financial instruments presented are divided into two categories: fast pricing instruments and slow pricing instruments. Futures contracts and European options are placed in the fast pricing category and the arithmetic Asian call option in the slow pricing category.

2.1.1 Stock price process

The stock price process, S_t, of a non-dividend paying stock is often assumed to follow a geometric Brownian motion [5]

\[ dS_t = \mu S_t \, dt + \sigma S_t \, dW_t \]

where μ is the constant expected rate of return, σ is the constant volatility of the stock and W_t is a Wiener process at time t. Applying Itô's formula [6], the solution becomes

\[ S_t = S_0 \exp\!\left[ \left( \mu - \tfrac{1}{2}\sigma^2 \right) t + \sigma W_t \right] \]

and to update the stock price path on discrete time intervals, t_1 < t_2 < ... < t_m, the following recursive formula can be used

\[ S_{t_{j+1}} = S_{t_j} \exp\!\left[ \left( \mu - \tfrac{1}{2}\sigma^2 \right)(t_{j+1} - t_j) + \sigma \sqrt{t_{j+1} - t_j} \, Z_{j+1} \right] \qquad (1) \]

where Z_j ∼ N(0, 1) and j = 1, ..., m−1 [7]. This makes the stock price process, S_t, follow a log-normal distribution.
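As a concrete illustration, the recursive formula (1) translates directly into code. The following is a minimal sketch, assuming NumPy and illustrative parameter values; the function name and grid are chosen here for the example only.

```python
import numpy as np

def simulate_gbm_paths(s0, mu, sigma, times, n_paths, seed=0):
    """Simulate stock price paths on the time grid `times` using the recursion (1)."""
    rng = np.random.default_rng(seed)
    dt = np.diff(times)                          # t_{j+1} - t_j
    paths = np.empty((n_paths, len(times)))
    paths[:, 0] = s0
    for j, h in enumerate(dt):
        z = rng.standard_normal(n_paths)         # Z_{j+1} ~ N(0, 1)
        paths[:, j + 1] = paths[:, j] * np.exp((mu - 0.5 * sigma**2) * h
                                               + sigma * np.sqrt(h) * z)
    return paths

# Example: 5 paths over one year on a daily grid (illustrative values).
grid = np.linspace(0.0, 1.0, 253)
sample_paths = simulate_gbm_paths(s0=100.0, mu=0.05, sigma=0.2, times=grid, n_paths=5)
```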


2.1.2 Risk neutral measure Q

If no arbitrage possibilities exist, the market is said to be arbitrage free. An arbitrage possibility is an opportunity to gain money without taking on any risk. The market is arbitrage free if and only if there exists a risk neutral measure (a martingale measure) Q such that the price of a financial derivative, Φ(S_T), is given by the risk neutral valuation formula

\[ P(t, s) = e^{-r(T - t)} \, E^{Q}_{t,s}\!\left[ \Phi(S_T) \right] \]

where r is the risk free interest rate, t ∈ [0, T], s ∈ R_+, and the Q-dynamics of S_t are given by

\[ dS_t = r S_t \, dt + \sigma_{t, S_t} S_t \, d\bar{W}_t \]

where \bar{W} is a Q-Wiener process [6].

2.1.3 Futures

In a futures contract, an agreement is set between two parties to buy or sell an asset for a predetermined price at a predetermined time. When the contract is initially set, at t = 0, the futures price is denoted F(0; T, S), where T corresponds to the time when the contract expires and S corresponds to the asset that should be delivered at T [6]. The futures price for a contract on a non-dividend paying stock at time t is given by

\[ F(t; T, S) = S_t \, e^{r(T - t)} \qquad (2) \]

where t < T, S_t is the spot price of the asset at time t and r is the risk free interest rate. It is important to note that no payment is made when the contract is entered. As time passes, the futures price is updated at predetermined time intervals on the market. The difference between the updated futures market price and the initially set futures market price is received by the holder of the contract; in this way the contract is balanced and settled. This cash-flow stream is called "marking to market". If the futures price drops, the buyer needs to pay the difference to the seller of the contract. When time T is reached, the holder of the contract should pay the price F(T; T, S) to the seller and receive the asset S. However, at time T no gain or loss is obtained, since the futures price equals the spot price. This is why most futures contracts are closed before the time of maturity [6]. The change in the futures price with respect to a change in the underlying stock can be expressed as

\[ \frac{\partial F}{\partial S} = \Delta_F = e^{r(T - t)} \]

where Δ_F is called the futures delta.
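Since equation (2) is a closed-form expression, the futures price and its delta are immediate to compute. The following is a minimal sketch with illustrative function names and parameter values.

```python
import math

def futures_price(s_t, r, tau):
    """F(t; T, S) = S_t * exp(r * (T - t)), with tau = T - t."""
    return s_t * math.exp(r * tau)

def futures_delta(r, tau):
    """dF/dS = exp(r * (T - t))."""
    return math.exp(r * tau)

# Example: spot 100, 2% risk free rate, half a year to expiry (illustrative values).
print(futures_price(100.0, 0.02, 0.5), futures_delta(0.02, 0.5))
```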

2.1.4 European options

A European option gives the holder of the contract the right, but not the obligation, to buy or sell a predetermined asset for a predetermined price (the strike price K) at the time of maturity T. A European call option gives the holder of the contract (the buyer) the right, but not the obligation, to buy the underlying asset at time T. A European put option gives the holder of the contract the right, but not the obligation, to sell the underlying asset at time T. The holder of the contract needs to pay the underwriter of the contract a price, since the holder is the one who decides whether or not to exercise the option at time T. The most commonly used formula to price a European option is the Black-Scholes formula. This formula is based on the assumptions that the risk-free interest rate is constant and that the stock price process follows a geometric Brownian motion with constant drift and volatility [5]. The price at time t of a European call option, C_E(t, S_t), on a non-dividend paying stock with price process S_t is given by

\[ C_E(t, S_t) = S_t N[d_1(t, S_t)] - e^{-r(T - t)} K N[d_2(t, S_t)] \qquad (3) \]

where K is the strike price, N is the cumulative distribution function of the standard normal distribution, and d_1(t, S_t) and d_2(t, S_t) are given by

\[ d_1(t, S_t) = \frac{1}{\sigma\sqrt{T - t}} \left( \ln\frac{S_t}{K} + \left( r + \tfrac{1}{2}\sigma^2 \right)(T - t) \right), \qquad d_2(t, S_t) = d_1(t, S_t) - \sigma\sqrt{T - t} \]

The price of a European put option, P_E(t, S_t), can be found using the put-call parity. For a European put option with the same K and T, the price is given by

\[ P_E(t, S_t) = K e^{-r(T - t)} + C_E(t, S_t) - S_t \qquad (4) \]

The change in the option price with respect to the underlying asset is known as one of the Greeks, Δ. For a call option the change in option price can be expressed as

\[ \Delta_C = \frac{\partial C}{\partial S} = N[d_1] \]

and for a put option the change in option price is

\[ \Delta_P = \frac{\partial P}{\partial S} = N[d_1] - 1 \]

where d_1 is the same as explained above [5].
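The Black-Scholes formulas (3) and (4), together with the call delta N[d1], translate directly into code. The sketch below assumes SciPy's normal CDF and uses illustrative function names and inputs.

```python
import math
from scipy.stats import norm

def bs_call(s, k, r, sigma, tau):
    """European call price, equation (3), with tau = T - t."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return s * norm.cdf(d1) - math.exp(-r * tau) * k * norm.cdf(d2)

def bs_put(s, k, r, sigma, tau):
    """European put price via the put-call parity, equation (4)."""
    return k * math.exp(-r * tau) + bs_call(s, k, r, sigma, tau) - s

def bs_call_delta(s, k, r, sigma, tau):
    """Delta of the call: N[d1]."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    return norm.cdf(d1)

# Example: at-the-money call, three months to maturity (illustrative values).
print(bs_call(100.0, 100.0, 0.02, 0.25, 0.25), bs_call_delta(100.0, 100.0, 0.02, 0.25, 0.25))
```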

2.1.5 Asian options

This section contains the theory behind the Asian option and is based on [7]. An Asian option is an exotic option, since it is more complex than plain vanilla options (such as European and American options). The Asian option is a path-dependent option whose payoff depends on the mean value of the stock price process between predetermined points in the time period. The mean value of the stock price, \bar{S}, can be determined either as the geometric mean or the arithmetic mean. For a geometric Asian option, the mean value of the stock price process on [t, T] is given by

\[ \bar{S}_G = \left( \prod_{j=1}^{m} S_{t_j} \right)^{1/m} \]

where t = t_1 < ... < t_m = T. When pricing a geometric Asian option, one uses the assumption that the stock price process follows a log-normal distribution and that the product of the stock prices also follows a log-normal distribution. Under this assumption, together with the Black-Scholes framework and the risk neutral assumption, the price at time t of a geometric Asian call on a non-dividend paying stock is given by

\[ C_g(t, S_t) = e^{-r(T - \tilde{T})} \left[ e^{-\delta \tilde{T}} S_0 N[d_1] - e^{-r \tilde{T}} K N[d_2] \right] \qquad (5) \]

where \tilde{T} = \frac{1}{m}\sum_{j=1}^{m} t_j, and d_1 and d_2 are given by

\[ d_1 = \frac{\ln(S_0/K) + \left( r - \delta + \tfrac{1}{2}\bar{\sigma}^2 \right)\tilde{T}}{\bar{\sigma}\sqrt{\tilde{T}}}, \qquad d_2 = d_1 - \bar{\sigma}\sqrt{\tilde{T}} \]

where

\[ \bar{\sigma}^2 = \frac{\sigma^2}{m^2 \tilde{T}} \sum_{j=1}^{m} (2j - 1)\, t_{m+1-j} \]

and

\[ \delta = \tfrac{1}{2}\sigma^2 - \tfrac{1}{2}\bar{\sigma}^2 \]

where r − δ is the drift of the geometric mean stock price. The price of a geometric Asian option can hence be obtained, but for an arithmetic Asian option there exists no closed form solution. This is due to the fact that the arithmetic mean of the stock price process does not follow a log-normal distribution. The arithmetic mean value of the stock price process on [t, T] can be described by

\[ \bar{S}_A = \frac{1}{m} \sum_{j=1}^{m} S_{t_j} \]

and in order to price the option under the risk neutral measure explained in Section 2.1.2, the discounted expected payoff can be used. To find the expected payoff of an arithmetic Asian option, Monte Carlo simulations together with variance reduction techniques can be used. In this thesis we use a method called control variate to reduce the variance stemming from standard Monte Carlo simulations. This method is presented in Section 2.4.2, together with different approximation methods used to estimate the price of an arithmetic Asian call option.
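Because the arithmetic average has no closed-form price, a plain Monte Carlo estimate of the discounted expected payoff is the natural baseline before any variance reduction is applied. The following is a minimal sketch under the risk-neutral drift r, with illustrative function names and parameter values.

```python
import numpy as np

def arithmetic_asian_call_mc(s0, k, r, sigma, averaging_times, n_paths=100_000, seed=0):
    """Plain Monte Carlo price of an arithmetic Asian call under the risk neutral measure."""
    rng = np.random.default_rng(seed)
    t_grid = np.concatenate(([0.0], np.asarray(averaging_times, dtype=float)))
    dt = np.diff(t_grid)
    s = np.full(n_paths, s0, dtype=float)
    running_sum = np.zeros(n_paths)
    for h in dt:
        z = rng.standard_normal(n_paths)
        s *= np.exp((r - 0.5 * sigma**2) * h + sigma * np.sqrt(h) * z)
        running_sum += s
    avg = running_sum / len(averaging_times)          # arithmetic mean over t_1, ..., t_m
    payoff = np.maximum(avg - k, 0.0)
    maturity = t_grid[-1]
    return np.exp(-r * maturity) * payoff.mean()      # discounted expected payoff

# Example: monthly averaging over one year (illustrative values).
obs = np.linspace(1 / 12, 1.0, 12)
print(arithmetic_asian_call_mc(100.0, 100.0, 0.02, 0.25, obs))
```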

2.2 Volatility

This section explains how the volatility can be estimated, which is needed when constructing the different daily real-world scenarios presented in Section 3.2 and for the solution generation explained in Section 3.4.1 and Section 3.5.1. The EWMA model is used when creating the solution-generated prices, and the GARCH(1,1) model is used for the construction of the daily real-world scenarios. Lastly, a section concerning the intra-day volatility is presented and used as motivation for an optimization strategy explained in Section 3.4.2.1. This intra-day volatility section contains a theory part together with a simulation part that supports the theory presented.

2.2.1 EWMA - model

One way of estimating the volatility, i.e. the standard deviation of the logarithmic returns y, is to use an exponentially weighted moving average (EWMA) model. In this model, the most recent returns have the greatest impact on the estimation of the volatility. An estimation window is needed in order to control which returns to use when estimating the volatility. With the help of a decay factor, 0 < λ < 1, the weights decline exponentially, giving more weight to the most recent returns [8]. The volatility, \hat{\sigma}_t, can then be estimated by

\[ \hat{\sigma}_t^2 = \frac{1 - \lambda}{\lambda\left(1 - \lambda^{W_E}\right)} \sum_{i=1}^{W_E} \lambda^{i} y_{t-i}^2 \]

where W_E is the length of the estimation window. For daily returns, λ is often set to λ = 0.94 [8].
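A minimal sketch of the EWMA estimator above, assuming a NumPy array of logarithmic returns ordered oldest to newest and λ = 0.94; the function name and test data are illustrative.

```python
import numpy as np

def ewma_volatility(returns, lam=0.94, window=None):
    """EWMA volatility estimate for the most recent day from logarithmic returns."""
    y = np.asarray(returns, dtype=float)
    w_e = len(y) if window is None else window
    recent = y[-w_e:]                               # the last W_E returns, newest last
    i = np.arange(1, w_e + 1)                       # i = 1, ..., W_E
    weights = lam**i
    norm = (1 - lam) / (lam * (1 - lam**w_e))       # normalisation factor from the formula
    var = norm * np.sum(weights * recent[::-1]**2)  # lambda^1 weights the most recent return
    return np.sqrt(var)

# Example with simulated daily returns (illustrative).
rng = np.random.default_rng(1)
print(ewma_volatility(rng.normal(0.0, 0.01, size=250)))
```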

2.2.2 GARCH(1,1) - model

Another commonly used method to estimate the volatility is the generalized autoregressive conditional heteroskedasticity (GARCH) model. The volatility, \hat{\sigma}_t, can be forecasted by

\[ \hat{\sigma}_t^2 = \omega + \alpha y_{t-1}^2 + \beta \hat{\sigma}_{t-1}^2 \qquad (6) \]

where ω, α and β are parameters that need to be estimated. These parameters can be obtained by maximizing a log-likelihood function. The log-likelihood function is given by

\[ \log L = -\frac{T - 1}{2}\log(2\pi) - \frac{1}{2}\sum_{t=2}^{T}\left( \log\!\left(\omega + \alpha y_{t-1}^2 + \beta \hat{\sigma}_{t-1}^2\right) + \frac{y_t^2}{\omega + \alpha y_{t-1}^2 + \beta \hat{\sigma}_{t-1}^2} \right) \qquad (7) \]

As can be seen in equation (7), the time index starts at t = 2, which means that \hat{\sigma}_1 is unknown and needs to be estimated. By calculating the sample variance of {y_t}_{t=1}^{T} one can get an estimate of \hat{\sigma}_1. We also need to put some constraints on the parameters: all parameters need to be positive, i.e. α, ω, β > 0, which ensures that we obtain positive volatility forecasts. The other constraint is that α + β < 1, which ensures covariance stationarity [8].
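The parameters ω, α and β can be obtained numerically by maximizing the log-likelihood (7). The following is a minimal sketch, assuming SciPy's optimizer, a NumPy array of returns and illustrative starting values; the function names are chosen here for the example only.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_neg_loglik(params, y):
    """Negative of the log-likelihood in equation (7) for a GARCH(1,1) model."""
    omega, alpha, beta = params
    t_len = len(y)
    sigma2 = np.empty(t_len)
    sigma2[0] = np.var(y)                      # sigma_1^2 estimated by the sample variance
    for t in range(1, t_len):
        sigma2[t] = omega + alpha * y[t - 1]**2 + beta * sigma2[t - 1]
    ll = -(t_len - 1) / 2 * np.log(2 * np.pi) \
         - 0.5 * np.sum(np.log(sigma2[1:]) + y[1:]**2 / sigma2[1:])
    return -ll

def fit_garch11(y):
    """Fit omega, alpha, beta with omega, alpha, beta > 0 and alpha + beta < 1."""
    bounds = [(1e-8, None), (1e-8, 0.999), (1e-8, 0.999)]
    constraint = {"type": "ineq", "fun": lambda p: 0.999 - p[1] - p[2]}  # alpha + beta < 1
    start = np.array([1e-6, 0.05, 0.90])
    result = minimize(garch11_neg_loglik, start, args=(y,), bounds=bounds,
                      constraints=[constraint])
    return result.x

# Example with simulated returns (illustrative only).
rng = np.random.default_rng(2)
omega_hat, alpha_hat, beta_hat = fit_garch11(rng.normal(0.0, 0.01, size=500))
```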

2.2.3 Intra-day volatility

The seasonality of the intra-day volatility can be described as having a u-shape. This u-shape indicates that the intra-day volatility is typically high at the beginning of the day, lower and somewhat flat in the middle of the day, and increasing again at the end of the day [9]. The intra-day volatility can be calculated from one-minute closing prices, and the following expression is used to calculate the volatility, σ_t, at time t

\[ \sigma_t^2 = \frac{1}{N} \sum_{i=1}^{N} \left( y_{t,i} - \bar{y}_t \right)^2 \]

where N is the number of trading days considered and y_{t,i} is the logarithmic return at time t for scenario i, given by y_{t,i} = \log(C_{t,i}) - \log(C_{t-1,i}), where C is the closing price and \bar{y}_t is the mean value of the logarithmic returns over all scenarios [10]. The volatility can now be obtained for every minute of the trading day by collecting one-minute closing prices during 15 trading days (from 2017-03-22 until 2017-04-11) for the stocks Cisco Systems, Inc., Intel Corporation, Microsoft Corporation and NVIDIA Corporation. By calculating the volatility for every minute during the 15 days and then averaging the volatilities corresponding to the same minute of the trading day, we get an intra-day volatility pattern, which can be observed in Figure 2.

Figure 2 – The figure shows the 1-minute intra-day volatility for four different stocks, using the closing prices of the 15 latest trading days. The intra-day volatility in the upper left corresponds to the Cisco Systems, Inc. stock, the upper right to the Intel Corporation stock, the lower left to the Microsoft Corporation stock and the lower right to the NVIDIA Corporation stock.

As can be observed in Figure 2, at least for the Microsoft Corporation and the Intel Corporation stocks, we obtain the pattern described, with a high volatility at the beginning of the trading day, which decreases during the middle of the trading day and then increases again at the end of the trading day. Looking at the intra-day volatility of the Cisco Systems, Inc. and NVIDIA Corporation stocks, we do not quite obtain a clear increase in volatility at the end of the trading day, but we can at least observe a somewhat high volatility at the beginning of the trading day and a calmer period after approximately two hours.
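Given a matrix of one-minute closing prices with one row per trading day, the per-minute volatility defined above can be computed as in the sketch below. NumPy and the (days x minutes) layout are assumptions made for this illustration.

```python
import numpy as np

def intraday_volatility(closing_prices):
    """Per-minute volatility from a (days x minutes) matrix of 1-minute closing prices."""
    c = np.asarray(closing_prices, dtype=float)
    log_returns = np.diff(np.log(c), axis=1)            # y_{t,i} = log C_{t,i} - log C_{t-1,i}
    mean_per_minute = log_returns.mean(axis=0)           # average over the N trading days
    var_per_minute = ((log_returns - mean_per_minute)**2).mean(axis=0)
    return np.sqrt(var_per_minute)                       # one value per minute of the day

# Example with random prices for 15 days of 390 one-minute closes (illustrative).
rng = np.random.default_rng(3)
prices = 100.0 * np.exp(np.cumsum(rng.normal(0, 0.0005, size=(15, 390)), axis=1))
minute_vol = intraday_volatility(prices)
```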

2.3 Risk measures

There exist several different methods for performing risk calculations, but in this thesis the focus is on value at risk and expected shortfall. These risk measures are not implemented in practice, but they are essential to present in order to understand how the different scenario prices are connected with the risk measures, which the clearing house needs in order to decide the size of the collateral required.

2.3.1 Value at risk: VaR

Value at risk, VaR, is an estimate of the largest amount of money expected to be lost, given a specified probability level and time period. This risk measure can be applied to almost any financial instrument and is one of the most commonly used (after volatility) [8]. The measure is a quantile of the profit/loss distribution and is often denoted VaR_{α·100%}(L), where α corresponds to the probability level and L to the sorted losses. Calculating VaR involves three main steps. First, we need to specify α, i.e. the probability that the losses will not exceed the value of VaR (usually set to 0.95 or 0.99). Second, the time horizon needs to be specified, i.e. the time period over which the losses can occur (most commonly VaR is calculated daily). The last step is to determine the profit/loss probability distribution of the portfolio considered. Usually, one uses historical observations and a statistical model to estimate the probability distribution [8]. Historical observations can be used to create N one-day (or other specified time horizon) real-world scenarios (explained in Section 3.2), which are then applied to the instrument price in order to obtain N scenario prices at time t. Then, by subtracting the scenario prices from the instrument price, we obtain the profit and loss vector at a given time t,

\[ PnL_i(t) = P(t) - P_i(t) \qquad (8) \]

where i corresponds to scenario i and i = 1, ..., N. By sorting the values of the PnL vector in decreasing order (since we consider the loss distribution in increasing order), the VaR value is obtained by picking out the αN-th element of the sorted PnL vector: VaR_{α·100%}(L) = PnL_{(αN)}. Figure 3 gives an illustrative picture of the loss distribution with the corresponding VaR values.

Figure 3 – An illustrative picture describing the loss distribution and the corresponding VaR_95%(L) and VaR_99%(L) values.

In Figure 3, one can see that a higher value of α gives a higher VaR value. Since VaR tells us that the loss will be less than or equal to the VaR value with a probability of at least α, increasing α raises the VaR value, because the guarantee becomes more restrictive.

2.3.2 Expected shortfall: ES

Expected shortfall, ES, can be considered an extension of VaR: where VaR tells us how much we will lose at most, given a certain probability, ES tells us how much we expect to lose, given that the VaR value has been breached. ES_{α·100%}(L) can be obtained by averaging the elements in the sorted PnL vector that are equal to or larger than the VaR value. Both VaR and ES have several advantages, such as being general methods that can be used for almost any financial instrument. Since ES is an extension of VaR, it does not require much additional work to obtain ES once VaR has been calculated. One disadvantage with VaR is that the risk measure is not sub-additive, which means that if VaR is calculated on a portfolio, the portfolio VaR can be higher than the sum of the individual instruments' VaR values. This is not the case when using ES as risk measure. However, VaR is the most commonly used risk measure among financial institutions. The reasons for ES not being equally popular are mainly two: ES has an additional source of error, since the expectation of the tail observations needs to be determined, and ES is more complicated to backtest than VaR [8].
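Both measures follow directly from the sorted PnL vector of equation (8). The sketch below is a minimal illustration that assumes the sign convention that a positive PnL entry represents a loss; it picks the αN-th element of the increasing loss distribution for VaR and averages the tail beyond it for ES. Names and example numbers are illustrative.

```python
import numpy as np

def var_and_es(losses, alpha=0.95):
    """Historical VaR and ES from a vector of N scenario losses (positive = loss)."""
    sorted_losses = np.sort(np.asarray(losses, dtype=float))  # loss distribution, increasing
    n = len(sorted_losses)
    index = int(np.ceil(alpha * n)) - 1                       # the alpha*N-th element (1-indexed)
    var = sorted_losses[index]
    es = sorted_losses[index:].mean()                         # average of the tail at or beyond VaR
    return var, es

# Example: losses built as in equation (8), PnL_i(t) = P(t) - P_i(t) (illustrative numbers).
rng = np.random.default_rng(4)
current_price = 10.0
scenario_prices = current_price + rng.normal(0.0, 0.5, size=1000)
pnl = current_price - scenario_prices
var95, es95 = var_and_es(pnl, alpha=0.95)
```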


2.4 Approximations

For some instrument types there already exist analytical pricing expressions, which is why approximation methods for these instruments are not of interest. For these instruments (futures contracts and European options), the focus is instead on finding strategies to determine when recalculations are required (presented in Section 3.4.2). For the slow pricing instruments (arithmetic Asian call options) we wish to find good approximations of the instrument prices in order to optimize the recalculation frequency. The Taylor approximation is presented first and is actually used in optimizing both the fast pricing and slow pricing instruments. The approximation methods presented after it are used to approximate the price of arithmetic Asian call options.

2.4.1 Taylor approximation

If the first n + 1 derivatives of a function f(x) are continuous in an interval around a point a, then the following holds in this interval

\[ f(x) = f(a) + f'(a)(x - a) + \frac{1}{2} f''(a)(x - a)^2 + \ldots + \frac{1}{n!} f^{(n)}(a)(x - a)^n + R_{n+1}(x) \qquad (9) \]

where R_{n+1}(x) is the error term stemming from the approximation [11]. Here one assumes that the instrument price can be described by a continuous function P(X), where X varies depending on which instrument type is of interest. Considering a European option, for example, the option price at time t depends on P(X) = P(S_t, σ, r, K, T, t) ≡ P(S_t). Starting from this expression and applying a first order Taylor approximation around the initial stock price, S_0, we get

\[ P(S_t) = P(S_0) + \frac{\partial P}{\partial S}(S_t - S_0) + R_2(S_t) \]

Using that ∂P/∂S = Δ, denoting R_2(S_t) = ω and rearranging some terms, we obtain the following expression

\[ |P(S_t) - P(S_0)| = \Delta |S_t - S_0| + \omega \;\Longrightarrow\; |P(S_t) - P(S_0)| \approx \Delta |S_t - S_0| \qquad (10) \]
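Equation (10) says that, to first order, the change in the instrument price is approximately delta times the change in the underlying. The sketch below compares this approximation with an exact Black-Scholes repricing of a European call; the helper function, its name and the parameter values are assumptions made for this illustration.

```python
import math
from scipy.stats import norm

def call_price_and_delta(s, k, r, sigma, tau):
    """Black-Scholes call price (equation (3)) and its delta N[d1]."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    price = s * norm.cdf(d1) - math.exp(-r * tau) * k * norm.cdf(d2)
    return price, norm.cdf(d1)

s0, k, r, sigma, tau = 100.0, 100.0, 0.02, 0.25, 0.25
p0, delta0 = call_price_and_delta(s0, k, r, sigma, tau)

s_new = 103.0                                 # the underlying moves from 100 to 103
p_new, _ = call_price_and_delta(s_new, k, r, sigma, tau)
exact_change = abs(p_new - p0)
approx_change = delta0 * abs(s_new - s0)      # equation (10): |P(S_t) - P(S_0)| ~ delta |S_t - S_0|
print(exact_change, approx_change)            # the gap is the neglected remainder term omega
```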

2.4.2 Control Variate

This section covers a short explanation of the standard Monte Carlo (MC) simulation method, followed by the control variate method, which is a variance reduction technique. This theory section is based on [7]. MC simulations can be used to find an estimate of the instrument price, and standard MC is a commonly used method. It is based on the idea that one simulates N different paths for the stock price process, where the stock price follows a geometric Brownian motion according to equation (1). The price of the option is then calculated for every path, c_i, with the help of the option's payoff structure (the expected payoff with respect to a risk neutral measure Q). The standard MC option price, \hat{c}_N, is then determined by averaging over the obtained path prices

\[ \hat{c}_N = \frac{1}{N} \sum_{i=1}^{N} c_i \]

By using standard MC simulations and the Central Limit Theorem, we get a confidence interval for the estimated price. The error stemming from standard MC is O(1/\sqrt{N}), regardless of dimension.

In order to reduce the error and shrink the confidence interval, different variance reduction techniques can be used, and in this thesis we focus on a variance reduction technique called control variate. This method is one of the most effective procedures for improving MC simulations. The technique uses an option with similar features to the option we wish to price as a control variate, in order to reduce the variance arising from the standard MC simulations. Let \bar{Y} denote the estimated price when using standard MC simulations,

\[ \bar{Y} = \frac{1}{N} \sum_{i=1}^{N} Y_i \]

where Y_i corresponds to the standard MC price in every inner scenario. If we let E[X] be the price of the control variate (the exact price of the similar option), we can then simulate X_i together with Y_i. Y_i tends to overestimate the exact price, E[Y], if X_i − E[X] > 0. This can be corrected by letting

\[ \bar{Y}_i(b) = Y_i - b\left( X_i - E[X] \right) \qquad (11) \]

where b is a correction factor. This coefficient is chosen so as to minimize the variance of Y_i(b),

\[ b = \frac{\rho_{XY}\, \sigma_Y \sigma_X}{\sigma_X^2} \]

where ρ_{XY} is the correlation between X and Y. Unfortunately, since E[Y] is unknown, so are σ_Y and ρ_{XY}. We therefore use an estimate of the correction factor,

\[ b^{*} = \frac{\sum_{i=1}^{N} (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{N} (X_i - \bar{X})^2} \qquad (12) \]

The control variate estimate of the price, which approximates E[Y], can then be expressed as

\[ \bar{Y}(b^{*}) = \frac{1}{N} \sum_{i=1}^{N} \bar{Y}_i(b^{*}) \qquad (13) \]

which has a smaller error than the standard MC price.
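A minimal sketch of equations (11)-(13): the paired samples X_i and Y_i are drawn from the same random inputs, b* is estimated as in (12), and the corrected mean (13) replaces the plain MC average. For simplicity, the control here is the discounted terminal stock price, whose expectation under Q is exactly S_0; this is a stand-in for illustration only, while the thesis uses the geometric Asian price as control (Section 2.4.3), following the same pattern. Names and parameter values are illustrative.

```python
import numpy as np

def asian_call_control_variate(s0, k, r, sigma, averaging_times, n_paths=50_000, seed=0):
    """Control variate MC price of an arithmetic Asian call.

    Y_i is the discounted Asian payoff; the control X_i is the discounted terminal
    stock price, whose exact expectation under Q is E[X] = S_0.
    """
    rng = np.random.default_rng(seed)
    t_grid = np.concatenate(([0.0], np.asarray(averaging_times, dtype=float)))
    dt = np.diff(t_grid)
    s = np.full(n_paths, s0, dtype=float)
    running_sum = np.zeros(n_paths)
    for h in dt:
        z = rng.standard_normal(n_paths)
        s *= np.exp((r - 0.5 * sigma**2) * h + sigma * np.sqrt(h) * z)
        running_sum += s
    maturity = t_grid[-1]
    disc = np.exp(-r * maturity)
    y = disc * np.maximum(running_sum / len(averaging_times) - k, 0.0)            # Y_i
    x = disc * s                                                                   # X_i, E[X] = s0
    b_star = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)   # equation (12)
    y_corrected = y - b_star * (x - s0)                                            # equation (11)
    return y_corrected.mean()                                                      # equation (13)

# Example: monthly averaging over one year (illustrative values).
obs = np.linspace(1 / 12, 1.0, 12)
print(asian_call_control_variate(100.0, 100.0, 0.02, 0.25, obs))
```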

2.4.3 Control Variate on arithmetic Asian Call

In order to find a good estimate of the price of an arithmetic Asian call option, the price of a geometric Asian call option can be used as a control variate. We then let E[X] = C_g, by using
