
School of Education, Culture and Communication

Division of Applied Mathematics

BACHELOR THESIS IN MATHEMATICS / APPLIED MATHEMATICS

Time Series Analysis of Calibrated Parameters of

Two-factor Stochastic Volatility Model

by

Renato E. Rios and Chrysafis Bourelos

Bachelor thesis in mathematics / applied mathematics

DIVISION OF APPLIED MATHEMATICS

MÄLARDALEN UNIVERSITY SE-721 23 VÄSTERÅS, SWEDEN


Date:

2019-05-31

Project name:

Time Series Analysis of Calibrated Parameters of Two-factor Stochastic Volatility Model

Author(s):

Renato E. Rios and Chrysafis Bourelos

Supervisor(s): Ying Ni
Reviewer: Milica Rančić
Examiner: Christopher Engström
Comprising: 15 ECTS credits


Abstract

Stochastic volatility models have become essential for financial modelling and forecasting. The present thesis works with a two-factor stochastic volatility model that is reduced to four parameters. We start by making the case for the model that best fits the data, use that model to produce said parameters, and then analyse the time series of these parameters. Suitable ARIMA models were then used to forecast the parameters and, in turn, the implied volatilities. It was established that fitting the model for different groups of maturities produced better results. Moreover, we managed to reduce the forecasting errors by forecasting according to the different maturity groups.


Acknowledgments

We want to thank Mälardalen University for these three years of intellectual enrichment. We also want to thank our thesis supervisor Ying Ni for her input and ideas. Finally, we would like to extend our utmost appreciation to the student organisation Finance Society for being a platform for personal growth for us. We are truly thankful.


Contents

1 Introduction
  1.1 Background
  1.2 Literature review
  1.3 Problem formulation and aim of the paper
  1.4 Outlining the method
  1.5 Disposition of the paper

2 Theoretical Considerations
  2.1 The underlying two-factor stochastic model
  2.2 Calibration of the first-order approximation
  2.3 Parameter estimation method
    2.3.1 Least squares
  2.4 Time stability
    2.4.1 ARIMA

3 The Data
  3.1 Definitions
  3.2 Underlying assets
  3.3 Option datasets
  3.4 Calibrated parameters
  3.5 Calibration procedure
    3.5.1 One-step calibration
    3.5.2 Calibration of single-factor model

4 Experiment Results
  4.1 Selection process of regression method
  4.2 Parameter analysis
    4.2.1 Parameter time stability
  4.3 Term structure analysis
    4.3.1 Comparing parameter fittings across different maturities
  4.4 Time series analysis of calibrated parameters
    4.4.1 Stationarity
    4.4.2 ARIMA model selection
    4.4.3 ARIMA out-of-sample forecasting

5 Conclusion

Contributions and Objectives

Bibliography

A Appendix
B Appendix
C Appendix


List of Figures

4.1 Time stability of fitted parameters - Stock dataset
4.2 Time stability of fitted parameters - Index dataset
4.3 ABB implied volatilities as a function of LMMR on April 25, 2017. The circles are from ABB data, and the line is b* + aε(LMMR).
4.4 STOXX implied volatilities as a function of LMMR on April 25, 2017. The circles are from STOXX data, and the line is b* + aε(LMMR).
4.5 ABB dataset: The ACF and PACF of the intercept parameter for the calibrated model.
4.6 ABB dataset: The ACF and PACF of the intercept parameter for the calibrated model with difference of degree one.
4.7 Analysis of the residuals for the ARIMA(0, 1, 1) model - parameter $a_1^{\varepsilon}$.
4.8 Out-of-sample forecasting for ABB and STOXX - a1e parameter 10 day forecasts regressing for 1 month and 2 months maturities.
4.9 In-sample forecasting for ABB and STOXX - a1e parameter 10 day forecasts regressing for all τ's.
A.1 Testing for different autoregressive terms p = 2 – parameter a1e.
A.2 Testing for different autoregressive terms p = 3 – parameter a1e.
B.1 ABB: Analysis of residuals – parameter a0d.
B.2 ABB: Analysis of residuals – parameter a2d.
B.3 ABB: Analysis of residuals – parameter a3e.
B.5 STOXX: Analysis of residuals – parameter a1e.
B.6 STOXX: Analysis of residuals – parameter a2d.
B.7 STOXX: Analysis of residuals – parameter a3e.
C.1 10 day forecast of parameter a0d - for maturities 3 to 4 months.
C.2 10 day forecast of parameter a2d - for maturities 5 to 6 months.
C.3 10 day forecast of parameter a3e - for maturities 9 to 12 months.
C.4 10 day forecast of parameter a1e - for maturities 18 to 24 months.
C.5 10 day forecast of parameter a1e - for maturities τ ≤ 5 months.
C.6 10 day forecast of parameter a1e - for maturities.
C.7 10 day forecast of parameter a2d - for all maturities.
C.8 10 day forecast of parameter a3e - for all maturities.


List of Tables

3.1 Sample of calibrated parameters, one-step.
4.1 Absolute and relative errors table for ABB dataset.
4.2 Absolute and relative errors table for STOXX dataset.


Chapter 1

Introduction

1.1

Background

For nearly half a century, the study of derivatives pricing has been of paramount importance to academics and practitioners due to uncertainty in the form of continuously changing market volatility. Derivative instruments such as options are widely used to cope with this inherent uncertainty. Pricing them, given their innate complexity and the advantages they offer, is therefore a prime subject of interest in the financial services industry and academia, not least because one has to pay for these instruments.

To paraphrase Hull [13], options are financial instruments whose values are derived directly from the value of some asset upon which they are written; we call such an asset the "underlying". These derivatives are commonly referred to as contingent claims, which in essence means that a claim is due upon some uncertain condition regarding the underlying asset being met. Two types of options are of interest to us, calls and puts, whose attractiveness is a direct function of market expectations. The former grants the buyer the right to buy the underlying asset at pre-specified dates and prices, whereas the latter allows instead for a sale of the underlying asset in a similar manner. Traditionally, there are two categories of options: American and European. The latter can only be exercised at maturity, as opposed to the former, which can be exercised at any time up to its maturity. These differences in characteristics, although minor, play a major role in the markets [13]. The pricing of these options is mainly conducted using stochastic differential equations, usually referred to in this area of expertise as stochastic volatility models. Although stochastic volatility models are used for the pricing of options, in our investigation we will not go through the whole extent of the pricing process; we will mainly focus on the volatility dynamics by studying the stability of the model over time.

1.2

Literature review

Financial engineering reached a lofty peak when, in 1973 [2], Fischer Black and Myron Scholes derived their formula for derivative asset pricing. This widely renowned market model allowed for pricing derivatives under conditions of constant volatility. Although the model has been widely adopted in finance, its assumption of log-normality has proven inaccurate at best, since markets are notably coerced by the forces of stochastic volatility, and the model was deemed insufficient for capturing market phenomena [12, 11, 14]. Due to the discrepancy between the model and the observations, adaptations and extensions of the Black-Scholes (BS) formula were put in place to account for the stochasticity of the volatility component.

One such adaptation was put forth by Heston [11], who advances a model with a mean-reverting process for the variance¹. He corroborated through the implementation of this model how stochastic volatility models can produce a wide range of effects on the prices of options, as opposed to the BS model.

Although single-factor models such as the one derived by Heston [11] have proven helpful in modelling the leverage effect², which in turn yields a volatility smirk, such single-factor models are unduly restrictive in depicting the relationship between the volatility level and the gradient of the smirk [8]. As an explanation for this restrictiveness, Christoffersen et al. [8] established that, given that the correlation between stock returns and their variances is constant over time, the time-varying nature of the smirk is not properly captured by these models.

¹As inspired by the CIR model, which describes the evolution of interest rates.

²A concept derived from allowing negative correlation between stock returns and their respective variances in single-factor volatility models [7]. Simply put, a decrease in stock prices is closely related to a large increase in variance.

Consequently, to address the structural fallacies of the single-factor models, Christoffersen et al. [8] introduce a two-factor (multi-factor) stochastic volatility model: two variance processes, one with slow mean reversion and the other with fast mean reversion. The empirical study conducted in [8] successfully shows that multi-factor models are much more efficient at inducing the flexibility necessary for more accurate calibration of the short- and long-term volatility levels.

Further along in the research on multi-factor volatility models, Chiarella and Ziveyi [6] derive what is considered to be a subclass of the model developed in [8], where they obtain a semi-analytical expression for the American option price as well as the European option price. The model presented in [6] served as a starting point for the model developed by Canhanga et al. [4], which is the model under consideration in our paper.

Canhanga et al. [4] consider two uncorrelated and finite variance processes; these variances guarantee a solution for such a model under both real-world probabilities³ and risk-neutral probabilities⁴. Canhanga et al. approximate the solution of the model using an asymptotic expansion method and reduce the model to four market parameters. These four market parameters are the ones to be calibrated, as one of the main objectives of this study, and analysed over time to measure the predictive accuracy of the modelled parameters. Furthermore, we will analyse the time stability of the calibrated parameters by means of the statistical technique ARIMA.

³Probability scenarios that assume that investors require a risk premium to compensate for holding risky assets.

⁴A probability scenario where the discounted value of future cash flows is governed by a martingale behaviour.

1.3

Problem formulation and aim of the paper

In this investigation, we are mainly concerned with how well the calibrated models fit the data, and how they behave over time. The point is to establish which model fits the data better, for which dataset, and for which maturities. For this purpose, we formulated a set of questions of interest, based on the aforementioned goal:

1. Do the estimated parameters show stability over time? Are there any significant differences between index options data and stock options data?

2. Does the fitting quality change when fitting the calibrated parameters to the shortest maturities with the two-factor model versus the single-factor model? Are there any significant changes?

3. Do the answers in (2) change depending on the dataset?

4. Is the time series data of the parameters stable enough to generate good forecasts?

These questions will serve as a compass to keep us in the direction of understanding how, and to what degree, the four market parameters of the calibrated model (2.22) influence the volatility smirk. We will investigate questions 1 through 3 by running the data, for index and stock options, through the Least Squares regression method and measuring the calibration quality by studying the stability of the parameters over time. The aim is to choose the model that best fits the data, and also to determine whether the models work better on the index options data, and in that case, which model. Moreover, we will generate forecasts for both sets of data with three different forecast periods (one month, two months and three months) using the time series model. To answer question 3 we will follow a similar calibration procedure, in this case calibrating the parameters for the single-factor model and then studying these parameters over time in a similar manner.

1.4

Outlining the method

We will restate the calibrated formula (2.22) derived by [4] in our custom-made R program and feed the call options data to it. We will analyse the dataset daily, not on a maturity-by-maturity basis, using our primary tool, least squares regression, and obtain daily calibrated parameters. The results of these computations will give us two sets of 263 values of calibrated parameters as output if using stock options data, and 267 if using index options data. We will use both the single- and double-factor stochastic volatility models on both datasets, to highlight their differences in an attempt to confirm previous studies. After this, we will select the superior model and move on to thoroughly analyse the resulting parameters. As a final step, the statistical technique that we will use for measuring the stability over time in a more explicit way will be an ARIMA model. Furthermore, to push the envelope, we will generate forecasts with our ARIMA time series model for both sets of data, with three different forecast periods: one month, two months and three months.

1.5

Disposition of the paper

To avoid repetitiveness, we will confine the full description of the model to our theoretical chapter, Chapter 2. Chapter 3 is devoted to presenting and explaining the data. In Chapter 4 we dissect the results and give a full account of the experiments, and in Chapter 5 we consolidate our findings in a conclusion.


Chapter 2

Theoretical Considerations

In this chapter we will present and thoroughly describe the underlying model of our study and derive the calibrated version of its first-order approximation. The model will be reduced to four calibrated parameters with which we construct an expression for the implied volatility. Furthermore, we describe the regression model, including the two procedures that will be used, and finally we give a short theoretical description of the characteristics of the chosen time series model, which will also explain why it was chosen.

2.1

The underlying two-factor stochastic model

The model under consideration in this paper, as expressed by Canhanga et al. [5], is defined as follows:

$$
\begin{aligned}
dS &= (r - q)S\,dt + \sqrt{V_1}\,S\,dW_1^* + \sqrt{V_2}\,S\,dW_2^*,\\
dV_1 &= \Big(\tfrac{1}{\varepsilon}(\theta_1 - V_1) - \lambda_1 V_1\Big)\,dt + \tfrac{1}{\sqrt{\varepsilon}}\,\xi_1\sqrt{V_1}\,\rho_{13}\,dW_1^* + \tfrac{1}{\sqrt{\varepsilon}}\,\xi_1\sqrt{V_1\big(1-\rho_{13}^2\big)}\,dW_3^*,\\
dV_2 &= \big(\delta(\theta_2 - V_2) - \lambda_2 V_2\big)\,dt + \sqrt{\delta}\,\xi_2\sqrt{V_2}\,\rho_{24}\,dW_2^* + \sqrt{\delta}\,\xi_2\sqrt{V_2\big(1-\rho_{24}^2\big)}\,dW_4^*.
\end{aligned}
\tag{2.1}
$$

Here $(S_t, t\in[0,T])$, $(V_{1t}, t\in[0,T])$ and $(V_{2t}, t\in[0,T])$ are the asset price process and the two variance processes, respectively. The variance processes $V_1, V_2$ are mean-reverting with reversion rates $\frac{1}{\varepsilon}$ and $\delta$, volatilities $\frac{1}{\sqrt{\varepsilon}}\xi_1$ and $\sqrt{\delta}\,\xi_2$, and long-run averages $\theta_1$ and $\theta_2$, respectively. $r$ is the spot risk-free interest rate, $q$ is the rate of continuously compounded dividend yield, and $\lambda_1, \lambda_2$ are two constants determining the market prices of variance risk. The processes $W_i^*$ are independent Brownian motions.
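To make the dynamics in (2.1) concrete, the following is a minimal Euler-Maruyama simulation sketch in R. Every numeric value in it (rates, mean-reversion levels, correlations) is an illustrative assumption of ours, not a calibrated quantity from this thesis.

    # Minimal Euler-Maruyama sketch of model (2.1); all parameter values below
    # are illustrative assumptions, not calibrated values from this thesis.
    set.seed(1)
    n  <- 252                      # daily steps over one year
    dt <- 1 / 252
    r <- 0.02; q <- 0              # risk-free rate, dividend yield
    eps <- 0.01; delta <- 0.05     # fast and slow time scales
    theta1 <- 0.04; theta2 <- 0.04 # long-run variance levels
    xi1 <- 0.3; xi2 <- 0.3         # vol-of-vol parameters
    lambda1 <- 0; lambda2 <- 0     # market prices of variance risk
    rho13 <- -0.5; rho24 <- -0.5   # leverage correlations

    S <- 100; V1 <- theta1; V2 <- theta2
    path <- numeric(n)
    for (k in 1:n) {
      dW <- rnorm(4, sd = sqrt(dt))          # four independent Brownian increments
      S  <- S + (r - q) * S * dt + sqrt(V1) * S * dW[1] + sqrt(V2) * S * dW[2]
      V1 <- V1 + ((theta1 - V1) / eps - lambda1 * V1) * dt +
            (xi1 / sqrt(eps)) * sqrt(V1) * (rho13 * dW[1] + sqrt(1 - rho13^2) * dW[3])
      V2 <- V2 + (delta * (theta2 - V2) - lambda2 * V2) * dt +
            sqrt(delta) * xi2 * sqrt(V2) * (rho24 * dW[2] + sqrt(1 - rho24^2) * dW[4])
      V1 <- max(V1, 0); V2 <- max(V2, 0)     # crude truncation keeps variances nonnegative
      path[k] <- S
    }
    plot(path, type = "l", main = "Simulated asset path")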

We let $U(t, S, v_1, v_2)$ denote the option price at time $t < T$ for underlying spot price $S$ and slow and fast volatility factors $v_1$ and $v_2$. Canhanga et al. [5] express the European option price as the solution of the partial differential equation

$$\left(\frac{1}{\varepsilon}\mathcal{L}_0 + \frac{1}{\sqrt{\varepsilon}}\mathcal{L}_1 + \mathcal{L}_2 + \sqrt{\delta}\,\mathcal{M}_1 + \delta\,\mathcal{M}_2\right)U = 0 \tag{2.2}$$

for

$$
\begin{aligned}
\mathcal{L}_0 &= (\theta_1 - v_1)\frac{\partial}{\partial v_1} + \frac{1}{2}v_1\frac{\partial^2}{\partial v_1^2},\\
\mathcal{L}_1 &= \rho_{13}\,S\,v_1\,\frac{\partial^2}{\partial S\,\partial v_1},\\
\mathcal{L}_2 &= \frac{\partial}{\partial t} + (r-q)S\frac{\partial}{\partial S} + \frac{1}{2}(v_1+v_2)S^2\frac{\partial^2}{\partial S^2} - r - \lambda_1 v_1\frac{\partial}{\partial v_1} - \lambda_2 v_2\frac{\partial}{\partial v_2},\\
\mathcal{M}_1 &= \rho_{24}\,S\,v_2\,\frac{\partial^2}{\partial S\,\partial v_2},\\
\mathcal{M}_2 &= (\theta_2 - v_2)\frac{\partial}{\partial v_2} + \frac{1}{2}v_2\frac{\partial^2}{\partial v_2^2}.
\end{aligned}
\tag{2.3}
$$

Assume that we can express the solution of the price process $U = U^{\varepsilon,\delta}$ in the form

$$U^{\varepsilon,\delta} = U_{0,0} + \sqrt{\delta}\,U_{0,1} + \sqrt{\varepsilon}\,U_{1,0} + \delta\,U_{0,2} + \varepsilon\,U_{2,0} + \dots \tag{2.4}$$

By substituting $U$ in (2.2) with (2.4) and solving the resulting system of differential equations, we obtain the $U_{i,j}$ in (2.4). Furthermore, Canhanga et al. [5] give a detailed derivation of the first-order approximation of the price process of a European option as

$$\tilde{U}^{\varepsilon,\delta} = U_{0,0} + U^{\varepsilon}_{1,0} + U^{\delta}_{0,1} \tag{2.5}$$

where

$$U_{0,0} = U_{BS}, \qquad U^{\varepsilon}_{1,0} = -(T-t)\,\mathcal{B}^{\varepsilon}U_{BS}, \qquad U^{\delta}_{0,1} = (T-t)\,\mathcal{A}^{\delta}U_{BS}. \tag{2.6}$$

Substituting (2.6) into (2.5) yields the first-order approximation

$$\tilde{U}^{\varepsilon,\delta} = U_{BS} + (T-t)\big(\mathcal{A}^{\delta} - \mathcal{B}^{\varepsilon}\big)U_{BS}, \tag{2.7}$$

where

$$\mathcal{B}^{\varepsilon} = -\Upsilon^{\varepsilon}D_1 D_2, \quad \Upsilon^{\varepsilon} = -\frac{\sqrt{\varepsilon}\,\rho_{13}}{2}\left\langle v_1\,\frac{\partial\phi(v_1,v_2)}{\partial v_1}\right\rangle; \quad \mathcal{A}^{\delta} = \Theta^{\delta}D_1\frac{\partial}{\partial v_2}, \quad \Theta^{\delta} = \frac{1}{2}\rho_{24}\,v_2\sqrt{\delta}, \quad D_i = S^i\frac{\partial^i}{\partial S^i}, \quad \tau = T-t. \tag{2.8}$$

Here the notation $\langle\cdot\rangle$ stands for the average with respect to the invariant distribution $\Pi$ of the process $V_1$, i.e.

$$\langle\cdot\rangle = \int \cdot\;\Pi(dv_1), \qquad \bar{\sigma}(v_2) = \sqrt{\int (v_1+v_2)\,\Pi(dv_1)}. \tag{2.9}$$

2.2

Calibration of the first-order approximation

Implied volatility $I$ can be defined as the solution of the following equation,

$$C_{BS}(S, K, r, \tau, I) = C(K, \tau) \tag{2.10}$$

where $C_{BS}$ is the option price given by the Black-Scholes formula at volatility $I$ with time to maturity $\tau$, exercise price $K$ and current underlying asset price $S$, while $C$ is the observed market call price with time to maturity $\tau$ and exercise price $K$.

Below we show the original Black-Scholes formula, where it becomes apparent that, to extract the implied volatility from an observed option price, one has to use numerical methods to identify the volatility that would calibrate the Black-Scholes formula; a short sketch of such an inversion follows the formulas below.

$$C_{BS}(S, K, r, T, \sigma) = S\,N(d_1) - K e^{-r(T-t)}N(d_2) \tag{2.11}$$

where $N(d_1)$ and $N(d_2)$ are values of the cumulative distribution function of the standard normal distribution, with

$$d_1 = \frac{\log\frac{S}{K} + \left(r + \frac{\sigma^2}{2}\right)(T-t)}{\sigma\sqrt{T-t}}, \qquad d_2 = d_1 - \sigma\sqrt{T-t} = \frac{\log\frac{S}{K} + \left(r - \frac{\sigma^2}{2}\right)(T-t)}{\sigma\sqrt{T-t}}. \tag{2.12}$$
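As a minimal illustration of that numerical inversion, the R sketch below solves equation (2.10) with uniroot; the function names bs_call and implied_vol are our own, not from the thesis.

    # Black-Scholes call price, equations (2.11)-(2.12)
    bs_call <- function(S, K, r, tau, sigma) {
      d1 <- (log(S / K) + (r + sigma^2 / 2) * tau) / (sigma * sqrt(tau))
      d2 <- d1 - sigma * sqrt(tau)
      S * pnorm(d1) - K * exp(-r * tau) * pnorm(d2)
    }

    # Implied volatility: the sigma that matches the observed price C, eq. (2.10)
    implied_vol <- function(C, S, K, r, tau) {
      uniroot(function(sig) bs_call(S, K, r, tau, sig) - C,
              interval = c(1e-4, 5))$root
    }

    implied_vol(C = 8.13, S = 100, K = 100, r = 0.02, tau = 1)  # about 0.18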


If we assume that Equation (2.11) can be a realistic model of option price dynamics, then the observed option price $C$ should be equal to the Black-Scholes price, as in Equation (2.10) [5]. To adjust the model result to the observed market values we need to study the difference $I - \bar{\sigma}$ between the implied volatility and the volatility used to compute the two-dimensional BS prices,

$$I - \bar{\sigma} = \sqrt{\varepsilon}\,I_{1,0} + \sqrt{\delta}\,I_{0,1} + \dots \tag{2.13}$$

Therefore,

$$C_{BS}(I) = C_{BS}(\bar{\sigma}) + \left(\sqrt{\varepsilon}\,I_{1,0} + \sqrt{\delta}\,I_{0,1}\right)\frac{\partial C_{BS}(\bar{\sigma})}{\partial\bar{\sigma}} + \dots \tag{2.14}$$

If we consider the prices given by the above equation to be the same as those of Equation (2.5), then we have

$$U_{BS}(I) = U_{BS}(\bar{\sigma}) + \left(\frac{1}{\bar{\sigma}}\Upsilon^{\varepsilon}D_1 + \tau\,\Theta^{\delta}D_1\right)\frac{\partial C_{BS}(\bar{\sigma})}{\partial\bar{\sigma}} + \dots \tag{2.15}$$

For the previous equation to be true it has to be that

$$\sqrt{\varepsilon}\,I_{1,0}\,\frac{\partial U_{BS}(\bar{\sigma})}{\partial\bar{\sigma}} = \frac{1}{\bar{\sigma}}\,\Upsilon^{\varepsilon}D_1\,\frac{\partial U_{BS}(\bar{\sigma})}{\partial\bar{\sigma}} \tag{2.16}$$

and

$$\sqrt{\delta}\,I_{0,1}\,\frac{\partial U_{BS}(\bar{\sigma})}{\partial\bar{\sigma}} = \tau\,\Theta^{\delta}D_1\,\frac{\partial U_{BS}(\bar{\sigma})}{\partial\bar{\sigma}}. \tag{2.17}$$

We can use the Vega and Gamma relations to express the above relations as

$$\frac{\partial U_{BS}(\bar{\sigma})}{\partial\bar{\sigma}} = \tau\,\Theta^{\delta}D_1\,\frac{\partial^2 U_{BS}(\bar{\sigma})}{\partial S^2}. \tag{2.18}$$

Vega is the option price's sensitivity to changes in the volatility of the underlying asset price, and Gamma is the rate of change in an option's Delta with respect to the underlying asset price. Delta is defined as the rate of change in the option's price with respect to the underlying asset price.

Therefore the two equations below hold:

$$\sqrt{\varepsilon}\,I_{1,0} = \frac{\Upsilon^{\varepsilon}}{2\bar{\sigma}}\left(1 - \frac{2r}{\bar{\sigma}^2}\right) + \frac{\Upsilon^{\varepsilon}}{\bar{\sigma}^3}\,\frac{\log(E/S)}{\tau} \tag{2.19}$$

$$\sqrt{\delta}\,I_{0,1} = \tau\,\frac{\Theta^{\delta}}{2}\left(1 - \frac{2r}{\bar{\sigma}^2}\right) + \tau\,\frac{\Theta^{\delta}}{\bar{\sigma}^2}\,\frac{\log(E/S)}{\tau} \tag{2.20}$$

Using the difference between volatilities in Equation (2.13) and the representations of $I_{1,0}$ and $I_{0,1}$ in Equations (2.19) and (2.20), respectively, we can express the implied volatility as

$$I = a_1^{\varepsilon} + \tau\,a_0^{\delta} + \left(\tau\,a_2^{\delta} + a_3^{\varepsilon}\right)\mathrm{LMMR} \tag{2.21}$$

where LMMR, the log-moneyness-to-maturity ratio, is a metric of moneyness, i.e. a measurement of how far a given strike is from the asset spot price, see eq. (2.26), and

$$a_1^{\varepsilon} = \bar{\sigma}^* + \frac{\Upsilon^{\varepsilon}}{2\bar{\sigma}^*}\left(1 - \frac{2r}{(\bar{\sigma}^*)^2}\right); \qquad a_0^{\delta} = \frac{\Theta^{\delta}}{2}\left(1 - \frac{2r}{(\bar{\sigma}^*)^2}\right); \qquad a_2^{\delta} = \frac{\Theta^{\delta}}{(\bar{\sigma}^*)^2}; \qquad a_3^{\varepsilon} = \frac{\Upsilon^{\varepsilon}}{(\bar{\sigma}^*)^2}. \tag{2.22}$$
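Once the four parameters are known, evaluating (2.21) is a one-liner. The sketch below reuses the day-1 ABB values that appear later in Table 3.1, purely for illustration; the function name is our own.

    # Implied volatility from the four calibrated parameters, equation (2.21)
    implied_vol_param <- function(a1e, a0d, a2d, a3e, tau, lmmr) {
      a1e + tau * a0d + (tau * a2d + a3e) * lmmr
    }

    # Day-1 ABB parameters from Table 3.1; tau = 30 trading days, in-the-money strike
    implied_vol_param(a1e = 0.20974, a0d = -0.00012, a2d = -0.08514,
                      a3e = -0.04698, tau = 30 / 252, lmmr = -0.2)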

2.3

Parameter estimation method

2.3.1

Least squares

Single-factor stochastic volatility model regression

The single-factor model is simply a model that does not use both slow- and fast-scale mean reversion, but only one of them. The exclusion of either mean-reversion scale makes the model stiffer and less flexible [8]. If we wish to focus on fast-scale theory only, we set $\delta = 0$, yielding

$$I \approx a_1^{\varepsilon} + a_3^{\varepsilon}\,\mathrm{LMMR}. \tag{2.23}$$

For a calibration using slow-factor theory, we would instead set $\varepsilon = 0$ and $\mathrm{LM} = \tau\,\mathrm{LMMR}$, and plot the log-moneyness against the maturity-adjusted implied volatility². We obtain

$$I \approx \bar{\sigma} + a_0^{\delta}\,\tau + a_2^{\delta}\,\mathrm{LM}. \tag{2.24}$$

²Expressed as $I - b^{\delta}\tau$.
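A minimal R sketch of the two single-factor fits (2.23) and (2.24) is given below. The toy data frame opts (implied volatilities, strikes, spot and maturities) is an invented stand-in for one day of option quotes, and is reused in the sketches that follow.

    # Toy one-day option data; values are invented for illustration only
    opts <- data.frame(
      I   = c(0.22, 0.20, 0.19, 0.21, 0.20, 0.19),
      K   = c(90, 100, 110, 90, 100, 110),
      s   = 100,
      tau = rep(c(30, 91) / 252, each = 3)
    )
    opts$LMMR <- log(opts$K / opts$s) / opts$tau   # log-moneyness ratio, see eq. (2.26)

    fit_fast <- lm(I ~ LMMR, data = opts)          # fast-factor fit, eq. (2.23)

    opts$LM  <- opts$tau * opts$LMMR
    fit_slow <- lm(I ~ tau + LM, data = opts)      # slow-factor fit, eq. (2.24)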


Two-factor stochastic volatility model regression

Initially, we will adopt two methods of regression for the estimation of the two sets of parameters, one set for each dataset. For the first set of parameters we will linearly regress the daily call options data all at once, according to equation (2.21), which is a least-squares regression. The only step in this regression procedure is to obtain the estimates $a_1^{\varepsilon}, a_0^{\delta}, a_2^{\delta}, a_3^{\varepsilon}$ that solve

$$\min_{a_1^{\varepsilon}, a_0^{\delta}, a_2^{\delta}, a_3^{\varepsilon}} \sum_{i,j}\Big[ I(T_i, K_{ij}) - \big(a_3^{\varepsilon}\,\mathrm{LMMR}_{ij} + a_2^{\delta}\,\tau_i\,\mathrm{LMMR}_{ij} + a_0^{\delta}\,\tau_i + a_1^{\varepsilon}\big)\Big]^2 \tag{2.25}$$

Both regression procedures have to be preceded by computing LMMRs for all maturities,

$$\mathrm{LMMR}_{ij} = \frac{\log\left(K_{ij}/s\right)}{\tau_i}, \tag{2.26}$$

where $s$ is the spot price of the underlying and $\tau_i$ is the time to maturity. The intuition behind the concept of log-moneyness is that two options with the same LMMR are the same relative distance away from their respective spot prices. A regression sketch follows below.

While our data allowed for fitting the surface at once, this was not the case in previous research attempts, where a two-step approach was used. To compare the two sets of results, i.e. their respective absolute and relative fitting errors, we also applied a two-step method as in [3], where the data is first fit on a maturity-by-maturity basis, followed by a fitting across the term structure. The first step in the regression procedure is to obtain the estimates $\hat{a}_i$ and $\hat{b}_i$ which solve

$$\min_{a_i, b_i} \sum_{j}\Big[ I(T_i, K_{ij}) - \big(a_i\,\mathrm{LMMR}_{ij} - b_i\big)\Big]^2 \tag{2.27}$$

The second step is to regress the estimates obtained in the previous step to obtain the intercept and slope estimates $\hat{a}^{\varepsilon}$ and $\hat{a}^{\delta}$, respectively, which solve

$$\min_{a_0, a_1} \sum_{i}\big[\hat{a}_i - (a_0 + a_1\tau_i)\big]^2 \tag{2.28}$$

and the intercept and slope $\hat{b}^*$ and $\hat{b}^{\delta}$, respectively, which solve

$$\min_{b_0, b_1} \sum_{i}\big[\hat{b}_i - (b_0 + b_1\tau_i)\big]^2. \tag{2.29}$$

A sketch of this two-step procedure follows below.
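A sketch of the two-step fit (2.27)-(2.29) in R, again on the toy opts data; the names step1, ab, fit_a and fit_b are ours:

    # Step 1: fit intercept and slope maturity by maturity, eq. (2.27)
    step1 <- lapply(split(opts, opts$tau),
                    function(d) coef(lm(I ~ LMMR, data = d)))
    ab   <- do.call(rbind, step1)      # rows = maturities; per-maturity b_i, a_i
    taus <- as.numeric(rownames(ab))

    # Step 2: regress the per-maturity estimates across the term structure
    fit_a <- lm(ab[, "LMMR"] ~ taus)         # slopes, eq. (2.28)
    fit_b <- lm(ab[, "(Intercept)"] ~ taus)  # intercepts, eq. (2.29)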

2.4

Time stability

For a model that is studied over time, it is of paramount empirical importance to assign credibility to the fitted parameters by studying their stability over time. Time stability of the estimated parameters over the studied period implies that the model performs well in capturing the dynamics of the underlying, an essential feature in the modelling of financial instruments. In this investigation, we will apply an Autoregressive Integrated Moving Average (ARIMA) model. This kind of model is used for understanding the data over the period of time being studied or for predicting future points in the series [1].

2.4.1

ARIMA

An ARIMA model has two components, an autoregressive (AR) component and a moving average (MA) component. The AR component is a way of regressing a variable against itself: it allows us to generate forecasts of a variable using a linear combination of past observations of that variable. The MA component, instead of using past observations, uses the past forecast errors to establish a pattern of behaviour in the observations [15]. There are two assumptions in ARIMA modelling:

1. Stationarity - the properties of the series do not depend on the time at which it is observed and are constant over time. White noise and cyclic series are also considered stationary.

2. Univariate - autoregression is about regressing on past values of a single variable; therefore, we use univariate analysis.

Moreover, there are three variables to be considered in a nonseasonal ARIMA(p, d, q) model, which enter the prediction equation sketched after this list:

• p, the number of autoregressive lags of the stationarized series,
• d, the number of nonseasonal differences needed for stationarity, and
• q, the number of moving average terms in the prediction equation.
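For concreteness, the nonseasonal ARIMA(p, d, q) prediction equation can be written, in standard textbook notation (not reproduced from the thesis), as

$$y'_t = c + \sum_{i=1}^{p}\phi_i\,y'_{t-i} + \sum_{j=1}^{q}\theta_j\,\varepsilon_{t-j} + \varepsilon_t,$$

where $y'_t$ is the series after $d$ differences, the $\phi_i$ are the autoregressive coefficients, the $\theta_j$ the moving-average coefficients, and $\varepsilon_t$ is white noise.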

Since in time series data we often find non-stationarity, and we work under the assumption that the data is stationary, we will transform the data to stationarity as the need arises. To find out whether a dataset violates stationarity we look for trends in the mean or variance [10]. Moreover, we will determine which ARIMA model to use by means of the AIC. These models extrapolate a moving-average forecast over the selected forecast periods, based on the local trend established at the end of the series. This practice introduces a bit of conservatism, which by no means reduces the power of the model.


Chapter 3

The Data

In this chapter, we give a detailed account of the market data used in this investigation, as well as how it is used. Specifically, we provide essential definitions for, and describe, the stock and index options data and the sources used, the parameters involved, and the data cleaning procedure.

3.1

Definitions

There are four variables that are fundamental for the study of option pricing, namely the price at which the underlying is trading during the period being studied, the price at which one may exercise the rights provided by owning a call option (the right to buy the stock), the lifetime of the stock option, and the implied volatility. Throughout this paper, we will refer to the first three as spot price (s), strike price (K) and time to maturity (τ)¹, respectively. For our dataset, we were provided with the strike price (K), time to maturity (τ), spot price (s) and the implied volatility (σ).

¹Defined as τ = (T − t)/252, where T is the maturity date, t is the date of issue, and 252 is the number of trading days in a year.

3.2

Underlying assets

We will confine ourselves to the use of ABB and Euro Stoxx 50 call options data spanning December 2016 to December 2017 in both cases. ABB is a representative, non-dividend-paying public company listed on the Nasdaq Stock Exchange, and its stock and options are openly traded in the public market. Euro Stoxx 50, on the other hand, is a blue-chip² stock index put together by STOXX³, whose options are also traded in the open market. The data provider is the company OptionMetrics, a popular provider of historical options and implied volatility data.

²In essence, this means that it includes only large, stable eurozone companies.

³The company that created the index; more detailed information about the index and how it is built can be found at the provider's website.

3.3

Option datasets

Our first dataset, the ABB options data file, contains 2731 spreadsheet entries per month (1301 in December). We compile all these entries into one large dataset containing 34190 total entries of daily closing bid quotes. The second dataset, the Euro Stoxx 50 (STOXX) index options data file, has a less consistent number of entries per month, but totals 34711 entries when compiled into one large dataset.

Each data file contains information on standardized options, both calls and puts, with expirations of (τ =) 30, 60, 91, 122, 152, 182, 273, 365, 547 and 730 calendar days. The standardized implied volatilities in the data files are calculated using a kernel smoothing technique to generate smoothed volatility values at each of the interpolation grid points.

3.4

Calibrated parameters

The word calibration, in our context, simply means the adjustment and reduction of the parameters given in our stochastic volatility model, such that the option prices yielded by the model replicate, with as much accuracy as possible, the market prices for a given set of derivatives, in our case call options. As established in previous chapters, our main aim is not to price the options, but to study the stability over time of the parameters obtained. Through the calibration procedure of the first-order approximation, as conducted in [3], the number of parameters is reduced to four ($a_1^{\varepsilon}, a_0^{\delta}, a_2^{\delta}, a_3^{\varepsilon}$) that will be obtained by regressing against the implied volatility as expressed in equation (2.22). For simplicity, we will assign the following notation to our calibrated parameters⁴: $a_1^{\varepsilon}$ = a1e, $a_0^{\delta}$ = a0d, $a_2^{\delta}$ = a2d and $a_3^{\varepsilon}$ = a3e. This set of notations will be used throughout the remainder of the investigation, and the four will be referred to as the main parameters.

3.5

Calibration procedure

We regress on a daily basis as in [3] and [9]. This regression yields an $\alpha_i$ for each $\tau_i$, and a similar procedure is followed to obtain the $\beta_i$'s. Once the $\alpha_i$'s and $\beta_i$'s are obtained, we proceed to regress each against $\tau_i$. The end product of this process is 263 daily sets of calibrated parameters. These parameters will then be studied over time with an ARIMA time series model.

3.5.1

One-step calibration

In this thesis, instead of regressing maturity by maturity and then across the term structure, as in [9] and [3], we perform what we refer to in this thesis as a one-step regression: we regress the whole dataset at once.

For this one-step calibration procedure, we proceed by lining up the dataset in a similar manner as in the previous calibration. Here too we make use of the least squares method to obtain the calibrated parameters, whose stability over time we will measure as in the previous method.

⁴We will add a two at the end of each parameter throughout the thesis to denote that they are obtained through the two-step calibration.


Table 3.1: One-step: small sample of the parameters calibrated daily. We obtain 263 calibrated parameter sets for the ABB dataset.

Day   a1e       a0d        a2d        a3e
1     0.20974   -0.00012   -0.08514   -0.04698
2     0.19828    0.00761   -0.05432   -0.00996
3     0.20286    0.00317   -0.06330   -0.01356
4     0.19649    0.00585   -0.06610   -0.01544
...   ...        ...        ...        ...
263   0.14809    0.00540   -0.06175   -0.02322

3.5.2

Calibration of single-factor model

The single-factor model calibration procedure is similar to that of the double-factor one. Here, we exclude one of the factors, either the fast mean-reverting factor or the slow mean-reverting factor, meaning that we are left with either (2.23) or (2.24). We proceed by processing the data through both single-factor models. Although the parameters resulting from these models cannot be directly compared to the parameters of the double-factor model, we will use them as a benchmark for measuring the fitting accuracy of the double-factor model parameters under different conditions.


Chapter 4

Experiment Results

In this chapter, we start by selecting a regression method through an analysis of the errors generated. We then comment on the stability, over the studied period, of the fitted parameters resulting from the selected regression method. We use the results of our fittings to produce volatility surfaces and other illustrations describing the data, for both the index and stock options. The objective here is to understand the quality of the fitting across the different models, one- and two-factor, by producing plots, analysing them and comparing them.

Finally, we conduct time series analysis of the calibrated parameters with ARIMA models applied to 12 months of our data. Using the fitted models we forecast the said parameters for the remaining 10 days of our data, essentially an out-of-sample forecast. Given the forecasted parameters, the implied volatility can be calculated according to equation (2.21), and the individual errors for each observation in the last 10 days are then averaged and studied. The option data is restructured into different maturity groups so that valuable inferences can be drawn on the forecasting errors.

4.1

Selection process of regression method

Here, we focus our attention on the selection of a suitable regression method. Our selection process was based on the structure of our data, complemented by an analysis of the errors. Since our data is well organised, that is, 130 implied volatilities along 10 maturities and 13 strikes for every day of observations (263 days for ABB and 267 for the index), we had the opportunity to fit the volatility surface and obtain our parameters at once for every day of observations. Previous research has used a two-step regression process, which we also implemented, arriving at the intuitive conclusion that no significant difference in the fitting error exists. Thus we decided to obtain our calibrated parameters with a straightforward one-step ordinary-least-squares regression.

4.2

Parameter analysis

In this section, we analyse the time stability of the daily parameters over a period of 12 months. The time stability of the parameters is an important requirement for models that attempt to replicate certain dynamics of a moving object, such as the one we are working with here. The objective is to optimise the accuracy of such models by studying them over time, so as to better understand the particularities of the environment. In our case, the object that we subject to our model is an underlying financial asset. By understanding the behaviour of the parameters resulting from the model, in the given context, we will find ourselves equipped with a more stable tool to price, and to hedge against risk, using financial instruments, i.e. options.

4.2.1

Parameter time stability

Figure (4.1) shows how the calibrated parameters for the stock dataset behave over the 263 days of the investigation. We observe that the parameters display larger daily spreads when compared to figure (4.2). This discrepancy in the behaviour of the variances of the two sets of parameters tells us that the errors produced from the stock data parameters are likely to be larger. This is not to say that the parameters are of less value due to their dispersion characteristics. Yet, it could be considered a sign that the model captures the index data better.


Figure 4.1: Time stability of fitted parameters - Stock dataset

Figure 4.2: Time stability of fitted parameters - Index dataset

4.3

Term structure analysis

4.3.1

Comparing parameter fittings across different maturities

To start our analysis of the term structures, we look at the absolute and relative fitting errors resulting from our estimations. We regress for different sets of maturities to establish a more valid set of results. We start by looking at a summary of the errors from the ABB dataset, and then move on to the STOXX index dataset.

For the maturities of less than three months (τ ≤ 3m, 2m) in table (4.1), which we will refer to as the short maturities, we found that the 2-factor model captures the short maturities better than the 1-factor ones. Moreover, we see that the 1-factor fast mean-reversion model does not always do a better job at capturing the shorter maturities than the 1-factor slow mean-reversion model. In addition, we observe that the absolute and relative errors from the 2-factor fitting are the lowest produced, which is in line with our expectations.

Table 4.1: Analysis of relative and absolute fitting errors for different maturities with the two-factor and single-factor models, for the stock data of ABB.

Absolute Error   2-factor   1-factor fast   1-factor slow
τ ≤ 3m           0.00636    0.00939         0.00718
τ ≥ 3m           0.00396    0.00522         0.00497
τ ≤ 2m           0.00581    0.01004         0.00676
τ ≥ 2m           0.00441    0.00583         0.00561
All τ            0.00638    0.00811         0.00779

Relative Error   2-factor   1-factor fast   1-factor slow
τ ≤ 3m           3.69%      5.44%           4.12%
τ ≥ 3m           2.22%      2.95%           2.77%
τ ≤ 2m           3.33%      5.82%           3.85%
τ ≥ 2m           2.50%      3.33%           3.16%
All τ            3.65%      4.68%           4.40%

Looking at figure (4.3), we obtain additional support for our findings from table (4.1). The figure is read from the top-leftmost strand, where we have the shortest maturities, counterclockwise in increasing order of maturity. Here we show graphically how the 1-factor fast mean-reversion model fails to smoothly capture the whole range of maturities. Yet it captures some of the maturities well, as seen in the upper panel of figure (4.3).


Figure 4.3: ABB implied volatilities as a function of LMMR on April 25, 2017. The circles are from ABB data, and the line is b* + aε(LMMR).

In table (4.2), we look at the relative and absolute errors resulting from the fitting of the STOXX index data. We observe here a similar pattern as in the case of the ABB parameters. The fitting errors tend to increase as the τ's increase. Moreover, the differences between the fitting errors of the 1-factor fast and 1-factor slow models seem to follow a similar pattern as for the ABB errors, i.e. the 1-factor slow model continues to perform better.

Furthermore, as the errors generated seem to follow a similar pattern regardless of the dataset, we can conservatively say that there are good grounds to justify further analysis of these patterns. Conducting time series analysis on the data will not only help us model the parameters but also forecast them. For that, we will proceed with the de facto superior model, the two-factor model.

Table 4.2: Analysis of relative and absolute fitting errors for different maturities with the two-factor and single-factor models, for the index data of STOXX.

Absolute Error   2-factor   1-factor fast   1-factor slow
τ ≤ 3m           0.00313    0.00881         0.00423
τ ≥ 3m           0.00359    0.01059         0.00764
τ ≤ 2m           0.00203    0.00780         0.00276
τ ≥ 2m           0.00400    0.01170         0.00821
All τ            0.00700    0.01545         0.01036

Relative Error   2-factor   1-factor fast   1-factor slow
τ ≤ 3m           2.07%      5.81%           2.80%
τ ≥ 3m           1.98%      5.83%           4.34%
τ ≤ 2m           1.46%      5.24%           1.94%
τ ≥ 2m           2.24%      6.55%           4.75%
All τ            4.19%      9.25%           6.34%

The fitting of the shorter maturities, depicted in Figure (4.4), seems to be much better than the same type of fitting resulting from the ABB dataset. Again, we used the 1-factor model. Yet, when fitting the whole range of maturities, the model fails to accurately capture the data.


Figure 4.4: STOXX implied volatilities as a function of LMMR on April 25, 2017. The circles are from STOXX data, and the line is b* + aε(LMMR).


4.4

Time series analysis of calibrated parameters

In this final section, we explore the possibility of establishing a pattern or trend in the calibrated parameters. Previous work has established a certain degree of stability for the parameters: experiments in which up to 10 years of S&P 500 data were analysed have demonstrated the mentioned stability. Since we have daily values of 2 sets of 4 parameters over 13 months, we considered it interesting to assume stationarity for each parameter and thus subject them to forecasting. The study of the implied volatility forecasting error will help us determine the extent to which the calibrated parameters can be modelled.

4.4.1

Stationarity

Since the foundation of time series analysis is stationarity, we will subject the calibrated parameters to the autocorrelation function (ACF) and partial autocorrelation function (PACF) to determine how the time series data points are related. But first, we perform a Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test for stationarity.

In figure (4.1), the parameter $a_1^{\varepsilon}$ seems to have a constant mean, and thus no clear sign of a trend is found; as we established before, the variance also seems steady. We proceed to test that assumption. The KPSS test yielded a p-value of 0.01, which means that, at the 5% significance level, we reject the null hypothesis of stationarity and conclude that there are signs that the data may be non-stationary. It could also be that the series is trend-stationary, i.e. that the parameter a1e tends towards a constant mean around a deterministic trend. On the other hand, looking at the ACF chart in figure (4.5), we find signs of autocorrelation, since a high number of the previous observations are correlated with their future values. We will address this issue by transforming the data into a stationary series using the differencing method, i.e. subtracting $X_{n-1}$ from $X_n$. That is, if the original time series is $X_1, X_2, X_3, \dots, X_n$, by applying a difference of degree one we obtain $X_2 - X_1, X_3 - X_2, \dots, X_n - X_{n-1}$. A sketch of these checks follows below.
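The checks just described can be sketched in a few lines of R. Here a1e stands for the 263-day series of the calibrated intercept parameter; since the thesis data is not reproduced here, it is replaced by a random-walk placeholder. kpss.test comes from the tseries package.

    library(tseries)                            # provides kpss.test
    set.seed(2)
    a1e <- 0.2 + cumsum(rnorm(263, sd = 0.005)) # placeholder for the real series

    kpss.test(a1e)      # small p-value: reject the null of stationarity
    acf(a1e); pacf(a1e) # slowly decaying ACF suggests differencing

    d_a1e <- diff(a1e)  # difference of degree one: X_n - X_{n-1}
    kpss.test(d_a1e)    # the differenced series should now look stationary
    acf(d_a1e)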

(35)

Figure 4.5: ABB dataset: The ACF and PACF of the intercept parameter for the calibrated model.

Figure (4.6) depicts the results of the same tests after making the parameter data stationary by differencing the series. We clearly see that we managed to make the data stationary, as all significant correlation appears to have been removed. At this point we see that d has to be at least 1. Moreover, for selecting an appropriate ARIMA(p, d, q) model, we will use the Akaike information criterion (AIC), which evaluates the relative efficacy of each combination of p, d and q and helps select the best one.

Figure 4.6: ABB dataset: The ACF and PACF of the intercept parameter for the calibrated model with difference of degree one.


4.4.2

ARIMA model selection

For selecting the appropriate ARIMA(p, d, q) model for each parameter, we used the R function auto.arima, which chooses a model by evaluating the Akaike Information Criterion (AIC) over different combinations of p, d and q such that the AIC is minimised,

$$\mathrm{AIC} = -\frac{2}{n}\ln L + \frac{2}{n}(p + q + k), \tag{4.1}$$

where $L$ is the likelihood and $p$ and $q$ are the orders of the AR and MA parts, respectively. By minimising the AIC we find, for example, that the best model for $a_1^{\varepsilon}$ is an ARIMA(0,1,1); a sketch of this step is given below.
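A minimal sketch of this selection step, assuming the forecast package and the placeholder a1e series from the previous sketch:

    library(forecast)
    fit <- auto.arima(a1e, ic = "aic", seasonal = FALSE)
    summary(fit)         # the thesis reports ARIMA(0,1,1) for the real a1e series
    checkresiduals(fit)  # residual ACF and Ljung-Box test, as in figure (4.7)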

Figure 4.7: Analysis of the residuals for the ARIMA(0, 1, 1) model - parameter $a_1^{\varepsilon}$.

Moreover, as we can see in figure (4.7), the residuals fluctuate around zero, the ACF of the residuals shows stationary behaviour, and the QQ-plot captures the distribution of the data well, with the exception of a few points at the extremes of the tails. As for the Ljung-Box statistic, we found adequate p-values for the most part. Therefore we can assume that there are suitable ARIMA models to fit to our parameters, and that these models can be used for forecasting as well.

Although the behaviour of the PACF could suggest increasing the number of autoregressive terms p, we tested different values of p and the results worsened; the corresponding figures can be found in Appendix A. Furthermore, the same analysis procedure was conducted on all the other parameters for both datasets, and the results are included in Appendix B. In general the results were good; some irregularities were found, but they were in the nature of the data and not much could be done about them. Nevertheless, we managed to establish a good enough fit of the data, and we now proceed to experiment with forecasting.

4.4.3

ARIMA out-of-sample forecasting

In figures (4.8) and (4.9) we can see forecasts of the intercept parameter $a_1^{\varepsilon}$ for both the index and ABB. Having fitted suitable ARIMA models using the first 253 and 257 days for ABB and the STOXX index, respectively, we used these models to forecast the last 10 days of implied volatility observations. We specifically chose 10 days because this splits the dataset into two parts: the first 12 months and the remaining 10 days of December 2017. Since the last 10 days of observations were not used to select the ARIMA model, we refer to the method used as out-of-sample forecasting.

After obtaining 10 daily forecasts for every parameter, we used equation (2.21) to calculate the implied volatilities for all options in the last 10 days. The forecast errors are then averaged over all observations in these 10 days, and an average absolute and relative error is calculated. As an experiment on forecast accuracy, we decided to split the data into three different groupings of maturities: 5 groups of 2 maturities, 2 groups of 5, and 1 group of all maturities. It was after this that the ARIMA forecasting was performed; the results can be seen in table 4.3, followed by our comments, and a sketch of the procedure is given below.
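A sketch of this out-of-sample experiment for a single parameter, reusing the placeholder a1e series (263 values, as for ABB): the first 253 days train the model and the last 10 are held out.

    library(forecast)
    train <- a1e[1:253]
    test  <- a1e[254:263]

    fc <- forecast(auto.arima(train, seasonal = FALSE), h = 10)  # 10-day forecast
    fc_vals <- as.numeric(fc$mean)

    mean(abs(fc_vals - test))              # average absolute forecast error
    mean(abs(fc_vals - test) / abs(test))  # average relative forecast error

    # Repeating this for a0d, a2d and a3e and inserting the four forecasts into
    # equation (2.21) gives the implied-volatility forecasts whose averaged
    # errors are reported in table 4.3.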


Figure 4.8: Out-of-sample forecasting for ABB and STOXX - a1e parameter 10 day forecasts regressing for 1 month and 2 months maturities.

These figures are indicative of the ARIMA forecasting capabilities: after finding an appropriate model for our parameter, the same model is used to estimate new values outside the training sample. The models performed well enough and were able to generate a trend based on their moving average properties.

Figure 4.9: In-sample forecasting for ABB and STOXX - a1e parameter 10 day forecasts regressing for all τ's.


By studying table 4.3, we can analyse the errors generated by our forecasting for both datasets. The fitting error, absolute and relative, refers to the ordinary least squares method that we employed previously to obtain our parameters. We notice first that the fitting error does not necessarily indicate the quality of the forecasting error, although there is some correspondence, which can be intuitively attributed to the level of data dispersion. As we mentioned before when looking at figures 4.1 and 4.2, the calibrated ABB parameters manifested greater dispersion than those of STOXX. It is clear from the forecasting quality of the index (STOXX) parameters that this fact did indeed affect the forecasting capabilities of our model, since we achieved the minimum forecasting errors there.

Table 4.3: Error analysis of in-sample forecasting, tested across different maturities.

ABB                  τ1,2     τ3,4     τ5,6     τ9,12    τ16,24   τ1,2,3,4,5  τ6,9,12,16,24  All τ
Abs. fitting error   0.00581  0.00350  0.00352  0.00289  0.00214  0.00670     0.00325        0.00638
Rel. fitting error   3.32%    1.98%    1.98%    1.58%    1.16%    3.88%       1.79%          3.65%
Abs. forecast error  0.02212  0.01129  0.00952  0.00825  0.00401  0.01275     0.00673        0.00972
Rel. forecast error  14.43%   6.59%    5.85%    5.01%    2.55%    7.59%       4.16%          5.90%

Index                τ1,2     τ3,4     τ5,6     τ9,12    τ16,24   τ1,2,3,4,5  τ6,9,12,16,24  All τ
Abs. fitting error   0.00203  0.00159  0.00176  0.00222  0.00277  0.00372     0.00335        0.00700
Rel. fitting error   1.46%    1.02%    1.05%    1.22%    1.41%    2.38%       1.80%          4.19%
Abs. forecast error  0.00547  0.00362  0.00344  0.00167  0.00212  0.00459     0.00351        0.00645
Rel. forecast error  4.33%    2.44%    2.31%    1.05%    1.27%    3.43%       2.11%          4.27%

It is apparent that forecasts are better for certain maturity groupings, namely those that involve larger maturities. The best forecasting performance for ABB was for the data group that included the 1.5- and 2-year maturities, while for STOXX it was the group that included the 9- and 12-month maturities. The pattern repeats itself when we split the data into 2 big groups, each containing 5 maturities: the group with the larger maturities is modelled and forecasted better. In general we deem the quality of the specific forecasts very promising, and we find the idea of further improvement of forecast accuracy conceivable, as per our proposals for further research.


Chapter 5

Conclusion

The concept of the implied volatility surface is widely popular among finance practitioners. Using the Black-Scholes model, we can obtain a cloud of implied volatility values for a set of options across the range of maturities and exercise prices. An infinite number of surfaces pass through these points, but in our case we had the opportunity to work with a parametrized model for implied volatility, inspired by the model developed by Christoffersen et al. [8] and derived by Canhanga et al. [3]. What defines our model is the presence of two independent mean-reversion volatility processes, each of them represented by two parameters in the final calibration (Equation 2.22): ($a_3^{\varepsilon}$ and $a_1^{\varepsilon}$) and ($a_0^{\delta}$ and $a_2^{\delta}$), for the fast and slow mean-reversion factors, respectively.

Parametrising the volatility surface can also provide an indirect approach to pricing options. In our case, the raw parameters that we obtained can be converted into a group of market parameters that could then be used to price an option. Previous work has shown the results of introducing a change in the group market parameters to the parametrized volatility surface; this is common practice among practitioners in order to observe the effects on the surface. We steered away from the group market parameters, as our goal was not the pricing of options per se, but finding ways to optimize the calibration to obtain raw parameters that fit the data as accurately as possible and can work as a foundation for generating good implied volatility forecasts.


After trying the one-step fitting on our data, i.e. obtaining all parameters at once, we concluded that there is no significant difference between the one-step and the Fouque et al. [9] two-step fitting method. Since we managed to spot marginal improvements in the one-step error characteristics, we proceeded with it. We then compared the performance of the two-factor and the one-factor models across different ranges of maturities. It was expected that the two-factor model would better capture the range of volatilities because of its two-dimensional nature. In addition to this well-documented and intuitive fact, we observed the relatively better performance of the slow-factor model compared to the fast-factor one, which can be explained by the presence of the τ factor, which adds the maturity dimension.

Having replicated the explanatory power of the two-factor model, confirming previous work on the subject, we attempted to forecast implied volatilities a number of steps ahead by forecasting the parameters in different data setups. The forecast is out-of-sample, and the data is split according to maturity into three different groupings of maturities to observe the model performance in each. To obtain forecasts we argued for the stationarity of the time series of parameters and used suitable ARIMA models. The forecasted parameters were then used to obtain implied volatilities, which were measured against the real ones in order to study the error in absolute and relative terms. Valuable conclusions are drawn on the performance of our model, in particular its better performance when applied to longer maturity groupings.

In general we were able to demonstrate the value of the parametrisation of the volatility surface. Since implied volatility is sufficient to price options, the significance of reducing an option market to a set of parameters that approximate its volatility surface cannot be overstated. Being able to work with a parametrisation based on a well-documented model [8] certainly had an effect on our results. It is also important that, by using two sets of data, namely option data on ABB and on the Euro Stoxx 50 stock index, we were able to cross-check our results, since in most cases the observed patterns repeat.


Ideas for further research

• Further research can be done on analysing the performance of the fast-factor model, i.e. $I \approx b^* + a^{\varepsilon}\,\mathrm{LMMR}$, on maturities of less than one month, since the fast-factor model could explain implied volatility better for shorter maturities. This is not counterintuitive and has been shown by [9], and it can be observed in our work, i.e. in the negative trend in fitting errors for the limited group of 1 to 4 month maturities that we had in our data.

• Another proposal for further research is using different setups for forecasting, namely rolling ARIMA and/or different forecasting periods. The consistency of the forecasting error findings could then be explored further.

• Finally, the experiments could be repeated with data on more underlying assets, even from different asset classes. The study of the results would enhance the researchers' understanding of the parameters and of the relations between the assets themselves.


Contributions and Objectives

Contributions

In this thesis both authors were equally involved in the work. Moreover, the tasks were carried out at the same place and time, with both partners present. However, at given points one contributed more to certain things while the other contributed to others, particularly issues like implementation and document structuring and editing. The work was in these respects frictionless, as issues of who does what never arose.

Objectives

Objective 1: For Bachelor degree, the student should demonstrate knowledge and understanding in the major field of study, including knowledge of the field's scientific basis, knowledge of applicable methods in the field, specialisation in some part of the field and orientation in current research questions.

The thesis is about calibrating a linear model for implied volatility based on a multi-scale stochastic volatility model. Implied volatility is an important and fundamental concept of financial mathematics, since it is key to the pricing of options, a category of widely used financial derivatives. The objective of this thesis is to analyse the time series of the estimated parameters derived from a two-factor stochastic volatility model and to generate forecasts using ARIMA models. Theories from different fields of mathematics are used: stochastic calculus, vector algebra, probability theory, time series analysis and statistical inference. The background and introduction of the thesis are presented thoroughly. The relevant models were described, and the derivation of the two-factor stochastic volatility model was shown. The model under consideration is contrasted with previous variations derived by academics, and the one with the highest accuracy, i.e. the one that produced the lowest errors, was selected to generate forecasts with.

Objective 2: For Bachelor degree, the student should demonstrate the ability to search, collect, evaluate and critically interpret relevant information in a problem formulation and to critically discuss phenomena, problem formulations and situations.

For this study we relied on previous research that illuminated the way to obtain our parameters and to understand the explanatory power of the different model variations. We heavily used at least two sources, Fouque et al. [9] and Canhanga et al. [3], that were crucial to our understanding of the task, and we drew inspiration from a handful of other previous research works to craft our own experiments. These were based on questions posed in the field of modelling implied volatility and pricing options. It was a rewarding experience, since we discovered a multitude of ways in which research in this particular field can be applied by practitioners in the financial markets.

Objective 3: For Bachelor degree, the student should demonstrate the ability to independently identify, formulate and solve problems and to perform tasks within specified time frames.

We were able to formulate our own questions based on the data and methods that we were given, and consequently to introduce our own experiments, of course using previous work as inspiration. It was satisfying that the results of our experiments were meaningful, and we tried our best to communicate them properly throughout this work. For most of the time we kept to our timetable, and we finished in due time. There was some delay in delivering the corrections after the presentation of the thesis, and we hope for our examiner's understanding.

Objective 4: For Bachelor degree, the student should demonstrate the ability to present and discuss information, problems and solutions, orally and in writing, in dialogue with different groups.

Since the field of financial mathematics is ridden with nuances and various implications, good communication between us was essential to grasp our task to a good extent. Only then were we able to communicate our points during the thesis presentation and answer the questions posed. We also spent a certain amount of time making sure that the fundamentals of our problem, namely option pricing and implied volatility, as well as our own experiments, were well explained and adequately motivated in the thesis body.

Objective 5: For Bachelor degree, the student should demonstrate the ability, in the major field of study, to make judgments with respect to scientific, societal and ethical aspects.

Having understood the implications of our work, we did not hold back from rating the results based on their promising significance for a finance practitioner. Managing risk is a key component of today’s advanced economies, and we like to think that we understand the significance of this work in the field.


Bibliography

[1] Søren Bisgaard and Murat Kulahci. Time series analysis and forecasting by example. John Wiley & Sons, 2011.

[2] Fischer Black and Myron Scholes. The pricing of options and corporate liabilities. Journal of Political Economy, 81(3):637–654, 1973.

[3] Betuel Canhanga, Anatoliy Malyarenko, Jean-Paul Murara, Ying Ni, and Sergei Silvestrov. Numerical studies on asymptotics of European option under multiscale stochastic volatility. Methodology and Computing in Applied Probability, 19(4):1075–1087, 2017.

[4] Betuel Canhanga, Anatoliy Malyarenko, Jean-Paul Murara, and Sergei Silvestrov. Pricing European options under stochastic volatilities models. In Engineering Mathematics I, pages 315–338. Springer, 2016.

[5] Betuel Canhanga, Anatoliy Malyarenko, Ying Ni, and Sergei Silvestrov. Perturbation methods for pricing European options in a model with two stochastic volatilities. 2015.

[6] Carl Chiarella and Jonathan Ziveyi. American option pricing under two stochastic volatility processes. Applied Mathematics and Computation, 224:283–310, 2013.

[7] Andrew A Christie. The stochastic behavior of common stock variances: Value, leverage and interest rate effects. Journal of Financial Economics, 10(4):407–432, 1982.


[8] Peter Christoffersen, Steven Heston, and Kris Jacobs. The shape and term structure of the index option smirk: Why multifactor stochastic volatility models work so well. Management Science, 55(12):1914–1932, 2009.

[9] Jean-Pierre Fouque, George Papanicolaou, Ronnie Sircar, and Knut Sølna. Multiscale stochastic volatility for equity, interest rate, and credit derivatives. Cambridge University Press, 2011.

[10] Everette S Gardner and Eddie McKenzie. Why the damped trend works. Journal of the Operational Research Society, 62(6):1177–1180, 2011.

[11] Steven L Heston. A closed-form solution for options with stochastic volatility with applications to bond and currency options. The Review of Financial Studies, 6(2):327–343, 1993.

[12] John Hull and Alan White. The pricing of options on assets with stochastic volatilities. The Journal of Finance, 42(2):281–300, 1987.

[13] John C Hull. Options, futures and other derivatives. Pearson Education India, 2003.

[14] Mark Rubinstein. Nonparametric tests of alternative option pricing models using all reported trades and quotes on the 30 most active CBOE option classes from August 23, 1976 through August 31, 1978. The Journal of Finance, 40(2):455–480, 1985.


Appendix A

Appendix

Figure A.1: Test for autoregressive term p = 2, ARIMA(2,1,1). As we can see, the p-values for the Ljung-Box test for p = 2 decrease, making the model worse.


Figure A.2: Test for autoregressive term p = 3, ARIMA(3,1,1). As we can see, the p-values for the Ljung-Box test for p = 3 decrease even further than for p = 2, while the other statistics remain largely similar. The model becomes even worse.
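For reference, the comparison behind these two figures can be reproduced along the following lines. This is a minimal sketch only, assuming Python with statsmodels 0.13 or later (the thesis does not state which software was used); param_series is a hypothetical stand-in for one daily calibrated-parameter series.

    # Hedged sketch: compare ARIMA(p,1,1) candidates through the
    # Ljung-Box p-values of their residuals, as in Figures A.1 and A.2.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.stats.diagnostic import acorr_ljungbox

    rng = np.random.default_rng(1)
    param_series = pd.Series(rng.standard_normal(263)).cumsum()  # stand-in data

    for p in (1, 2, 3):
        res = ARIMA(param_series, order=(p, 1, 1)).fit()
        # Ljung-Box p-values of the residuals at lags 1..10; if they shrink
        # as p grows, the residuals are less white and the model is worse.
        lb = acorr_ljungbox(res.resid, lags=range(1, 11))
        print(f"ARIMA({p},1,1): AIC = {res.aic:.1f}, "
              f"min Ljung-Box p-value = {lb['lb_pvalue'].min():.3f}")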


Appendix B

Appendix

Figure B.1: ABB: Analysis of residuals for parameter a0d. We find similarities with a1e; this is a well-behaved parameter. The standardized residuals look good, the ACF of the residuals is nowhere near alarming, the QQ plot captures the residuals well, and the p-values for the Ljung-Box test are comfortably high. Although the data should have a p-value larger than 0.05, it does not; there is not much to do about it.


Figure B.2: ABB: Analysis of residuals for parameter a2d. Here, too, we are in the presence of well-behaved data, with the exception of the QQ plot, which does not capture the extreme ends properly. The p-value for this parameter was higher than 0.05, and we rejected the null hypothesis. In the Ljung-Box plot we found that the data behaves nicely too.

Figure B.3: ABB: Analysis of residuals for parameter a3e. We obtained good results here, though the QQ plot did not capture the extreme points well. Moreover, the p-value for this parameter was over the 0.05 threshold, so we reject the null hypothesis, though in the Ljung-Box plot we see that the p-values are very close to zero.
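The diagnostic panels discussed in these captions (standardized residuals, residual ACF, QQ plot, Ljung-Box p-values) can be drawn in a few lines. Again a minimal sketch under the same assumptions as above; res denotes a fitted ARIMA results object such as the one produced in the Appendix A sketch.

    # Hedged sketch: residual diagnostics in the spirit of Figures B.1-B.3.
    # Assumes `res` is an ARIMAResults object from the previous sketch.
    import matplotlib.pyplot as plt
    from statsmodels.stats.diagnostic import acorr_ljungbox

    # Built-in panel: standardized residuals, histogram, QQ plot, correlogram.
    res.plot_diagnostics(figsize=(10, 8))

    # Ljung-Box p-values per lag, plotted against the 5% threshold.
    lb = acorr_ljungbox(res.resid, lags=range(1, 11))
    fig, ax = plt.subplots()
    ax.scatter(lb.index, lb["lb_pvalue"])
    ax.axhline(0.05, linestyle="--", label="5% level")
    ax.set_xlabel("lag")
    ax.set_ylabel("Ljung-Box p-value")
    ax.legend()
    plt.show()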


Figure B.4: STOXX: Analysis of residuals – parameter a0d.


Figure B.6: STOXX: Analysis of residuals – parameter a2d.


Appendix C

Appendix


Figure C.2: 10-day forecast of parameter a2d, for maturities 5 to 6 months.


Figure C.4: 10-day forecast of parameter a1e, for maturities 18 to 24 months.


Figure C.7: 10-day forecast of parameter a2d, for all maturities.
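All the forecast figures in this appendix follow the same out-of-sample pattern: fit the model on all but the last ten observations, forecast ten steps ahead, and compare against the held-out values. A minimal sketch under the same assumptions as before (Python with statsmodels; param_series is the hypothetical calibrated-parameter series from the Appendix A sketch):

    # Hedged sketch: 10-day out-of-sample ARIMA forecast with confidence
    # bands, in the spirit of the plots in this appendix.
    import matplotlib.pyplot as plt
    from statsmodels.tsa.arima.model import ARIMA

    train, test = param_series.iloc[:-10], param_series.iloc[-10:]
    res = ARIMA(train, order=(1, 1, 1)).fit()

    fc = res.get_forecast(steps=10)
    mean, ci = fc.predicted_mean, fc.conf_int(alpha=0.05)

    fig, ax = plt.subplots()
    ax.plot(train.index[-40:], train.iloc[-40:], label="history")
    ax.plot(test.index, test, label="held-out")
    ax.plot(mean.index, mean, label="forecast")
    ax.fill_between(ci.index, ci.iloc[:, 0], ci.iloc[:, 1], alpha=0.2)
    ax.legend()
    plt.show()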


Appendix D

Grading criteria for degree projects.

Degree projects are graded with respect to the following six criteria. The examiner marks each criterion by 0, 1, or 2 points, and writes a motivation if the mark is less than 2 points.

Realisation of the task

2 points, passed with distinction: the clarity in the description of the task, the execution of the task, and the conclusions are all at a good level.

1 point, pass: if not good, at least a decent level in these respects.
0 points, failed: serious flaws in one or more of these respects.

Mathematical content.

For degree project at undergraduate level

2 points, passed with distinction: content mainly at a level above G1N courses.
1 point, pass: content mainly at G1N level.

0 points, failed: content mainly below G1N level.

For degree project at graduate level

2 points, passed with distinction: content mainly at a level above G1F courses.
1 point, pass: content mainly at G1F level.
0 points, failed: content mainly below G1F level.


Mathematical/logical rigour.

2 points, passed with distinction: the clarity and the correctness of the mathematical presentation are both at a good level.
1 point, pass: if not good, at least a decent level in these respects.
0 points, failed: serious flaws in one or more of these respects.

Literature review.

2 points, passed with distinction: the relevance, the correctness, and the usefulness of the references are all at a good level.
1 point, pass: if not good, at least a decent level in these respects.
0 points, failed: serious flaws in one or more of these respects.

Written presentation.

2 points, passed with distinction: completeness, adherence to conventions, structure, readability, grammar, and spelling are all at a good level.
1 point, pass: if not good, at least a decent level in these respects.
0 points, failed: serious flaws in one or more of these respects.

Oral presentation.

2 points, passed with distinction: quality of content, structure, clarity, time-keeping, and ability to answer questions are all at a good level.
1 point, pass: if not good, at least a decent level in these respects.
0 points, failed: serious flaws in one or more of these respects.

If any criterion receives 0 points, the project receives a failing grade. Otherwise the grade is given by the total sum of points as follows: 10–12 p = VG, 6–9 p = G.


The grade is always reported, also when the grade is “fail”. In the case of a passing grade, no further revisions should be required and no more improvements can be done to achieve a higher grade. (The student may still improve the report before it is published or archived, but this will not affect the grade.) In the case of a failing grade, the examiner states what revisions are required and whether a new oral presentation is also required. The committee of supervisors decides in advance on three dates per year for oral presentations as well as dates for hand-in of revised reports (about a month after presentations).
