
DEGREE PROJECT IN MATHEMATICS, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2020

Swedish Interest Rate Curve Dynamics Using Artificial Neural Networks

BILLY WALLANDER

RICHARD SPÅNBERG


Swedish Interest Rate Curve Dynamics Using Artificial Neural Networks

BILLY WALLANDER
RICHARD SPÅNBERG

Degree Projects in Financial Mathematics (30 ECTS credits)
Master's Programme in Applied and Computational Mathematics (120 ECTS credits)
KTH Royal Institute of Technology year 2020

TRITA-SCI-GRU 2020:051
MAT-E 2020:017

KTH Royal Institute of Technology
School of Engineering Sciences (SCI)
SE-100 44 Stockholm, Sweden
URL: www.kth.se/sci

Abstract

This thesis is a comparative study investigating whether a neural network approach can outperform a principal component analysis (PCA) approach for predicting changes of interest rate curves. Today PCA is the industry standard model for predicting interest rate curves. Specifically, the goal is to better understand the correlation structure between Swedish and European swap rates.

The disadvantage of the PCA approach is that only the information contained in the covariance matrix can be used, and not, for example, whether the curve might behave differently depending on the current state. In other words, some information that might be quite important to the curve dynamics is lost in the PCA approach. This raises the question of whether the lost information is important for prediction accuracy or not.

As previously shown by Alexei Kondratyev in the paper "Learning Curve Dynamics with Artificial Neural Networks", the neural network approach is able to use more of the information in the data and therefore has the potential to outperform the PCA approach.

Our thesis shows that the neural network approach is able to achieve the same or higher accuracy than PCA when performing long term predictions. The results show that the neural network model has the potential to replace the PCA model; however, it is a more time consuming model. Higher accuracy can potentially be achieved if more time is invested in optimizing the network.


Sammanfattning

This is a comparative study whose purpose is to investigate whether more accurate predictions can be achieved by using artificial neural networks (ANN) instead of principal component analysis (PCA) to predict swap rate curves. PCA is today the industry standard for predicting interest rate curves. Specifically, the goal is to better understand the correlation structure between the Swedish and the European swap rates.

One disadvantage of PCA is that the only available information is that stored in the covariance matrix. It may, for example, be the case that the curve behaves very differently depending on whether the current rate levels are high or low. Since such information is lost in the PCA model, the interest lies in investigating how much more accurate the predictions can become if more of the information in the data is used.

As Alexei Kondratyev shows in the paper "Learning Curve Dynamics with Artificial Neural Networks", the ANN model has the potential to replace the PCA model for predicting interest rate curves. This study shows that the ANN model achieves the same or better results compared with the PCA model for longer-horizon predictions.

Swedish title

Dynamiken i svenska räntekurvor med neurala nätverk.


Acknowledgements

First of all we would like to sincerely thank our supervisors at SEB, Victor Shcherbakov and Morten Karlsmark, for designing and supervising this project. Their excellent support whenever we needed it throughout the work on this thesis has been much appreciated.

We are also very grateful to our supervisor at KTH, Professor Anders Szepessy, for providing guidance and knowledge throughout the project. His input during the project has been invaluable.

Stockholm, May 2020

Richard Spånberg and Billy Wallander


List of Acronyms

ANN     Artificial Neural Network
Bps     Basis points (1 basis point = 0.01%)
EURIBOR Euro Interbank Offered Rate
IBOR    Interbank Offered Rate
IRS     Interest Rate Swap
MLP     Multilayer Perceptron
MSE     Mean Squared Error
OIS     Overnight Indexed Swap
PC      Principal Component
PCA     Principal Component Analysis
ReLU    Rectified Linear Unit
STIBOR  Stockholm Interbank Offered Rate
SVD     Singular Value Decomposition
TanH    Hyperbolic Tangent


Contents

1 Introduction
 1.1 Aim of the project
 1.2 Previous Research
 1.3 Company and department
2 Background
 2.1 Fixed Income Instruments
  2.1.1 Zero Coupon Bond
  2.1.2 Fixed Coupon Bond
  2.1.3 Floating Rate Bond
 2.2 Yield
  2.2.1 Short rate
 2.3 Swaps
  2.3.1 IBOR
  2.3.2 Valuing a swap
 2.4 Principal Component Analysis
 2.5 Artificial Neural Network
  2.5.1 Optimizer
  2.5.2 Regularisation
  2.5.3 Example of a neural network calibration
  2.5.4 Universal approximation theorem
3 Data
 3.1 Test data
 3.2 Generated data
  3.2.1 Vasicek model
4 Methodology
 4.1 Principal Component Analysis approach
  4.1.1 Swedish Data
  4.1.2 Swedish and Euro Data
 4.2 Neural Network approach
  4.2.1 Swedish Data
  4.2.2 Swedish and Euro Data
 4.3 Mean Squared Error
5 Results
 5.1 Vasicek model
 5.2 Real data
  5.3.2 Weekly
6 Discussion
7 Conclusion
8 Further developments
A Data
 A.1 Historical Swedish data
 A.2 Historical Euro data
 A.3 Correlation between Swedish and Euro
 A.4 Vasicek Data
B Algebra


1 Introduction

1.1 Aim of the project

Quantitative analysis plays an essential role in investment decisions, and it is therefore of great interest to understand the dynamics of financial instruments as accurately as possible. The dynamics of these instruments depend on several underlying factors, and understanding this dependence is important in order to predict how a change in these factors might alter the overall price of the instrument.

The aim of this project is to analyze the dynamics of Swedish and European interest rate swap curves using Principal Component Analysis (PCA), which is the industry standard for understanding the principal factors that affect interest rate curves. PCA will be used for benchmarking an artificial neural network (ANN) approach. The ANNs investigated in this project are multilayer perceptrons with different activation functions (such as sigmoids and ReLU). The correlation structure between Swedish and European interest rates is of particular interest to the project. The purpose of this thesis is to answer the question: "Can a neural network approach be used for predicting interest rate curve dynamics?"

1.2 Previous Research

There is some previous research on using artificial neural networks to learn curve dynamics. As Kondratyev shows in [1], the ANN approach has the potential to replace the PCA approach, since the PCA approach can be viewed as a limit case of an ANN with strong regularisation. He shows that an ANN has the capacity to extract more information from the training data than the PCA approach, in which all of the available information is contained in the covariance matrix and no information about how the current state affects the curve movement is learned.

With the results from [1] in mind, the purpose of this report is to investigate whether an ANN approach has the potential to replace the PCA approach when predicting swap rates and learning the correlation structure between different markets.

1.3 Company and department

This thesis is conducted in collaboration with the Fixed Income group at the Large Corporates & Financial Institutions division at SEB (Skandinaviska Enskilda Banken). SEB is one of the major banks in Sweden and operates mainly in northern Europe, with around 16000 employees and its headquarters at Kungsträdgården, Stockholm.


SEB is operating through five divisions:

• Large Corporates & Financial Institutions

• Corporate & Private Customers

• Baltic

• Life

• Investment Management

The Fixed Income group specialises in trading interest-bearing instruments and associated derivatives, advisory services and complex financial solutions. Their clients include pension funds, asset managers, insurance companies, central banks, sovereign wealth funds, hedge funds and major companies that invest in interest-bearing securities, and they operate in all major interest-rate markets. The instruments they focus on range from Nordic and European government bonds, mortgage bonds, credit bonds and commercial paper to various index-related products and derivatives.

2 Background

In this section the most relevant theory behind the project is presented. First, financial concepts such as yields and swaps, and how they are valued, are introduced; after that, the two models used in the project (PCA and ANN) are described.

2.1 Fixed Income Instruments

Fixed income instruments are investments that pay the investor a fixed interest or dividend, known in advance, on fixed dates during the investment period. The most common types of fixed income instruments are government and corporate bonds, where the government or corporate borrows money from the investor in order to finance large projects. The default risk on these investments is very low, as it is unlikely that a government or large corporate is unable to meet its payment obligations. They can therefore be considered stable investments where the investor receives a steady income stream. Due to the low risk, the returns on these instruments are often lower than on other investments.

2.1.1 Zero Coupon Bond

A zero coupon bond is a bond in which the investor does not receive any interest payments or dividends, but only a single payment on the maturity date. The investor will at maturity $T$ receive the face value $K$ and on the initiation day $t$ pay the discounted value $p(t, T)$ for the bond, meaning that the return of the bond is the difference between the face value and the discounted value.

The market value for a zero coupon bond with face value 1 and discount rate y(t, T ) is given by

$$p(t, T) = e^{-y(t,T)(T-t)} \cdot 1.$$


2.1.2 Fixed Coupon Bond

A fixed coupon bond is a bond which pays predetermined payments, called coupons, at fixed times to the holder of the bond. It is set up in the following way:

• A number of dates $T_0, \ldots, T_n$ with $\delta = T_i - T_{i-1}$, a coupon $c_i$ and a face value $K$ are specified. $T_0$ is the emission date, whereas $T_1, \ldots, T_n$ are called coupon dates.

• At $T_1, \ldots, T_{n-1}$ the holder of the bond receives the fixed coupon $c_i$.

• At $T_n$ the holder receives the coupon $c_n$ and the face value $K$ of the bond.

To price such a bond one can consider, as in section 22.3.1 in [4], a portfolio of zero coupon bonds with maturities $T_i$, $i = 1, \ldots, n$: the portfolio consists of $c_i$ zero coupon bonds with maturity $T_i$ for $i = 1, \ldots, n-1$, and $c_n + K$ bonds with maturity $T_n$. The price is then given by:

$$p_{\text{fixed}}(t) = K p(t, T_n) + \sum_{i=1}^{n} c_i\, p(t, T_i).$$

The coupons are often determined in terms of returns rather than in monetary terms. The return is typically quoted as a simple rate acting on the face value $K$ over the period $[T_{i-1}, T_i]$. This means that the coupon $c_i$ has the return $r_i$, and it can be expressed as:

$$c_i = r_i (T_i - T_{i-1}) K = r_i \delta_i K, \qquad (1)$$

which gives the bond price in terms of the coupon rates as:

$$p_{\text{fixed}}(t) = K \left( p(t, T_n) + \sum_{i=1}^{n} r_i \delta_i\, p(t, T_i) \right). \qquad (2)$$
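To make the pricing formula concrete, here is a minimal Python sketch (our own illustration, not code from the thesis) that evaluates equation (2), assuming a flat continuously compounded yield for the zero coupon prices:

```python
import numpy as np

def zero_coupon_price(y, t, T):
    """Zero coupon bond price p(t, T) = exp(-y(t, T) * (T - t)) with face value 1."""
    return np.exp(-y * (np.asarray(T) - t))

def fixed_coupon_bond_price(K, r, coupon_dates, y, t=0.0):
    """Price of a fixed coupon bond, equation (2):
    p_fixed(t) = K * (p(t, T_n) + sum_i r_i * delta_i * p(t, T_i))."""
    T = np.asarray(coupon_dates, dtype=float)
    delta = np.diff(np.concatenate(([t], T)))     # delta_i = T_i - T_{i-1}
    p = zero_coupon_price(y, t, T)                # p(t, T_i) for all coupon dates
    return K * (p[-1] + np.sum(r * delta * p))

# Example: face value 100, 2% annual coupon, 5 annual payments, flat 1.5% yield.
print(fixed_coupon_bond_price(K=100.0, r=0.02, coupon_dates=[1, 2, 3, 4, 5], y=0.015))
```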


2.1.3 Floating Rate Bond

There also exist bonds for which the coupon is not quoted as a fixed rate but is instead reset for every coupon period. This means that the coupon is quoted as some reference rate, which is reset every coupon period, plus a spread that remains constant. The following examples are again taken from section 22.3.2 in [4]. In most cases the resetting is determined by some financial benchmark. In the following example we use as coupon rate the spot LIBOR (London Interbank Offered Rate), a benchmark interest rate at which major global banks lend to one another in the international market for short-term loans. The LIBOR rate $L$ is the solution to the equation

$$1 + (T_i - T_{i-1})\, L(T_{i-1}, T_i) = \frac{p(t, T_{i-1})}{p(t, T_i)},$$

which for the spot LIBOR rate, by definition 22.2 in [4], is

$$L(T_{i-1}, T_i) = -\frac{p(T_{i-1}, T_i) - 1}{(T_i - T_{i-1})\, p(T_{i-1}, T_i)}.$$

From equation (1) with $r_i = L(T_{i-1}, T_i)$, $\delta = T_i - T_{i-1}$ and $K = 1$ this gives us

$$c_i = L(T_{i-1}, T_i)\,\delta K = \delta\, \frac{1 - p(T_{i-1}, T_i)}{\delta\, p(T_{i-1}, T_i)} = \frac{1}{p(T_{i-1}, T_i)} - 1. \qquad (3)$$

The value at time $t$ of the term $-1$ paid out at $T_i$ is equal to $-p(t, T_i)$. The value of the term $\frac{1}{p(T_{i-1}, T_i)}$ paid out at $T_i$ can be computed by considering the following scenario:

• At time $t$, buy one $T_{i-1}$-bond, which costs $p(t, T_{i-1})$.

• At $T_{i-1}$ you receive the amount 1 and reinvest this amount in $T_i$-bonds, which gives you $\frac{1}{p(T_{i-1}, T_i)}$ bonds.

• At maturity $T_i$ you receive the amount $\frac{1}{p(T_{i-1}, T_i)}$.

So, the value at $t$ of obtaining the amount $\frac{1}{p(T_{i-1}, T_i)}$ at $T_i$ is given by $p(t, T_{i-1})$, and the total value of the coupon $c_i$ at time $t$ is given by

$$p(t, T_{i-1}) - p(t, T_i),$$

from which we obtain the value of the floating rate bond as

$$p_{\text{float}}(t) = p(t, T_n) + \sum_{i=1}^{n} \big(p(t, T_{i-1}) - p(t, T_i)\big) = p(t, T_0). \qquad (4)$$

Normally $T_0 = t$, which means that $p(t, T_0) = 1$.


2.2 Yield

The yield $y(t, T)$ is the discount rate which, applied to all cash flows, gives a bond a value equal to its market value. A simple example is the yield of a zero coupon bond. Consider the market value $p(t, T)$ of the zero coupon bond with face value 1,

$$p(t, T) = e^{-y(t,T)(T-t)} \cdot 1.$$

The yield for this bond is then given by

$$y(t, T) = -\frac{\ln(p(t, T))}{T - t}.$$

The term structure of interest rates is the relationship between the yield and different terms/maturities (different tenors). A graph showing the term structure of yields is known as the yield curve and it can be used to identify the current state of an economy as it reflects market expectations about future changes in the interest rates.
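As a small illustration of the relation between discount factors and yields (our own sketch; the discount factors are hypothetical numbers):

```python
import numpy as np

def yield_from_price(p, t, T):
    """Yield implied by a zero coupon bond price: y(t, T) = -ln(p(t, T)) / (T - t)."""
    return -np.log(p) / (np.asarray(T) - t)

# Hypothetical discount factors for the 1Y, 5Y and 10Y tenors observed at t = 0.
tenors = np.array([1.0, 5.0, 10.0])
prices = np.array([0.998, 0.985, 0.955])
print(yield_from_price(prices, t=0.0, T=tenors))   # term structure of yields
```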

2.2.1 Short rate

The short rate is the interest rate that applies over a very short period of time, i.e. the rate at which an entity can borrow money for an instantaneously short period. The P-dynamics of the short rate are, by section 24.1 in [4], given by

$$dr(t) = \mu(t, r(t))\,dt + \sigma(t, r(t))\,dW,$$

and the term structure, again by section 24.1 in [4], is given by

$$\begin{cases} F_t + \{\mu - \lambda\sigma\} F_r + \tfrac{1}{2}\sigma^2 F_{rr} - rF = 0, \\ F(T, r) = \varphi(r), \end{cases}$$

where $\mu$ is the drift term, $\sigma$ is the diffusion term and $\lambda$ is the market price of risk.

There are several different short rate models, some examples of such models can be found in section 24.1 in [4], for example the Vasicek model, which will be presented in section 3.2.1, the Ho-Lee model and the Hull-White model.


2.3 Swaps

A fixed-for-float interest rate swap (IRS) is a contract in which the parties exchange a predetermined fixed interest rate (the swap rate) for a floating rate, i.e. they exchange future cash flows with each other. On the initiation date $T_0$ the following are defined:

• the principal $K$,

• the fixed swap rate $R$,

• which floating rate $L$ is to be used, normally an IBOR (Interbank Offered Rate) rate,

• a number of equally spaced dates $T_1, \ldots, T_N$ ($\delta = T_i - T_{i-1}$) on which the payments are going to occur.

The principle of an IRS can be illustrated by Figure 1.

Figure 1: IRS cash flow for the party receiving the fixed payments.

2.3.1 IBOR

IBOR (also known as xIBOR) is a daily reference rate based on the average interest rates at which a number of banks active on the money market are willing to lend unsecured funds to one another at different maturities. There are several different IBOR rates, depending on which currency (market) is considered. The best known IBOR rate is the LIBOR (London Interbank Offered Rate). Today IBOR rates are used as standard reference rates for a range of financial products, but other types of reference rates are being discussed since the LIBOR rates have been shown to have some shortcomings. There are, for example, concerns that the interbank lending market is no longer sufficiently liquid, and there have also been manipulations of LIBOR rates. Work is ongoing to find replacements for the LIBOR rate.

Since this thesis is discussing the Swedish and the European market the focus will be on the STIBOR and the EURIBOR rate.


The STIBOR (Stockholm Interbank Offered Rate) rate is the IBOR rate for a group of panel banks active on the Swedish money market (The Stibor Banks) and it is denominated in SEK. The tenors for the STIBOR are defined as

• Tomorrow Next (Tomorrow to next business day)

• 1 week

• 1 month

• 2 months

• 3 months

• 6 months.

The panel banks for STIBOR consist of 7 of the major banks on the Swedish money market, and the overall responsibility is held by Financial Benchmarks Sweden AB.

The EURIBOR (Euro Interbank Offered Rate) rate is the IBOR rate for the European money market denominated in EUR. The tenors for the EURIBOR are defined as

• 1 week

• 1 month

• 3 months

• 6 months

• 12 months.

The panel banks for EURIBOR consist of 18 banks active on the European money market, and the responsibility is held by The European Money Markets Institute.

2.3.2 Valuing a swap

A swap has a floating leg and a fixed leg, each of which is an asset or a liability depending on which side of the contract you are on. The value of the swap is determined by finding the values of the fixed and the floating legs separately. A simple example of how an IRS can be priced is presented in [4], section 22.3.3:

The value of the swap on the initiation date is set to zero. For this to hold, the values of the fixed and the floating legs must be equal at initiation.


The net cash flow at time step $i$ for the party which has agreed to pay the fixed rate and receive the floating rate is then

$$K\, L(T_{i-1}, T_i)\,\delta - K R \delta = K\big(L(T_{i-1}, T_i) - R\big)\delta,$$

where $K$ is the principal, $L(T_{i-1}, T_i)$ the floating IBOR rate, $R$ the swap rate and $\delta = T_i - T_{i-1}$, $i = 1, \ldots, n$. There are two equivalent ways to value a fixed-for-floating swap: as bonds or as a portfolio of forward rate agreements. Here the pricing with bonds, which can be found in section 22.3.3 in [4], will be presented.

With the prices for the fixed/floating coupon bonds from section 2.1 we can go on and use the same example as in section 22.3.3 in [4] to value the swap.

$$\Pi_{\text{swap}}(t) = p_{\text{fixed}}(t) - p_{\text{float}}(t),$$

where $p_{\text{fixed}}$ and $p_{\text{float}}$ are the prices of the fixed and floating rate bonds respectively, given by equations (2) and (4). The swap rate is set so that the value of the swap is initially zero, meaning that

$$\Pi_{\text{swap}}(T_0) = p_{\text{fixed}}(t) - p_{\text{float}}(t) = 0 \implies p_{\text{fixed}}(t) = p_{\text{float}}(t),$$

with

$$p_{\text{fixed}}(t) = K\left(p(t, T_n) + R\delta \sum_{i=1}^{n} p(t, T_i)\right)$$

and

$$p_{\text{float}}(t) = p(t, T_0).$$

Solving $p_{\text{fixed}} = p_{\text{float}}$ for $R$ gives the swap rate as

$$R = \frac{p(0, T_0) - p(0, T_n)}{\delta \sum_{i=1}^{n} p(0, T_i)}.$$

Since the financial crisis in 2007-2008 there have been some changes to the pricing process mentioned above. The main difference is that the LIBOR rate is no longer considered a risk-free rate and therefore it should no longer be used as the discount rate when pricing such derivatives. When pricing these derivatives today, the overnight indexed swap (OIS) rate is commonly used instead. The OIS discounting process is more complicated than LIBOR discounting and would require considerably more explanation. However, the classical example above still gives a good understanding of how to value an IRS and determine the swap rate. Further reading about OIS discounting can be found in, for example, the papers [6], [7] and [8].
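A minimal sketch of the classical single-curve swap rate formula derived above (our own code; the discount factors are hypothetical, and OIS discounting is deliberately not modelled):

```python
import numpy as np

def par_swap_rate(discount_factors, delta=1.0):
    """Swap rate R = (p(0, T_0) - p(0, T_n)) / (delta * sum_i p(0, T_i)),
    with p(0, T_0) = 1 when the swap starts today."""
    p = np.asarray(discount_factors, dtype=float)   # p(0, T_1), ..., p(0, T_n)
    return (1.0 - p[-1]) / (delta * np.sum(p))

# Hypothetical annual discount factors for a 5-year swap with annual payments.
print(par_swap_rate([0.998, 0.993, 0.986, 0.977, 0.966]))
```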


2.4 Principal Component Analysis

This section is inspired by the papers [9] and [10]. Principal component analysis is a well-known technique for modeling curve dynamics. The main idea behind PCA is to reduce the dimensionality of the data by transforming a set of possibly correlated variables into a smaller set of uncorrelated variables called principal components; in other words, to answer the question "can we find a linear combination of our original basis, to form a new basis that best re-expresses our data?"

Consider the data set matrix X ∈ Rm×n where m is the number of variables and n is the number of samples x. This data is assumed to consist of two sources, the signal and some noise. The goal of the PCA approach is then to find the most meaningful basis to re-express this data set. This new basis will hopefully reveal the hidden structure behind the data and also filter out some of the noise included in it as there are fewer sources of uncertainty included in the new representation.

The new representation $Y \in R^{m \times n}$ of the data set is given by the change of basis

$$PX = Y, \qquad (5)$$

where $P \in R^{m \times m}$ is a linear transformation whose rows $\{p_1, \ldots, p_m\}$ are a set of new basis vectors. This can be shown by expanding the left hand side of equation (5):

$$PX = \begin{pmatrix} p_1 \\ \vdots \\ p_m \end{pmatrix} \begin{pmatrix} x_1 & \cdots & x_n \end{pmatrix} = \begin{pmatrix} p_1 \cdot x_1 & \cdots & p_1 \cdot x_n \\ \vdots & \ddots & \vdots \\ p_m \cdot x_1 & \cdots & p_m \cdot x_n \end{pmatrix}.$$

Here it can be seen that each column $i$ of $Y$ is formed from the dot products of $x_i$ with the corresponding rows of $P$, i.e. a projection of the data onto the new basis vectors of $P$. The covariance matrix $C_X \in R^{m \times m}$ of $X$ is defined as

$$C_X = \frac{1}{n} X X^T, \qquad (6)$$

where the diagonal element $C^X_{i,i} = \sigma_i^2$ is the variance of the variable $x_i$ and the off-diagonal element $C^X_{i,j} = \sigma_{i,j}^2$ is the covariance between the variables $x_i$ and $x_j$, $i \neq j$. The covariance matrix reflects the noise and the redundancy of our data set in the following way:

• The variance gives us information about the signal strength/noise

• The covariance gives us information about the redundancy.


The goal of the PCA approach is to optimize a manipulated covariance matrix CY so that the redundancy is minimized and the signal is maximized. Meaning that CY should have the following properties

• The off-diagonal elements should be zero

• Each successive dimension should be ordered according to decreasing variance, meaning that the first PC explains the most variance, the second PC explains the second most variance, etc.

The principal components represent the directions which explain the maximal variance of the data; hence the maximal amount of information is extracted from each dimension. This means that an optimal rotation of the original basis vectors has to be found. A simple example is illustrated by Figure 2, which demonstrates a 2-D case with the data $X \in R^{N \times 2}$ and the covariance matrix $C \in R^{2 \times 2}$.

Figure 2: Direction of maximal variance in a 2-D case.

In this simple case the new basis (the principal component) would be the vector that lies on the line of best fit to the data. The orthogonal vector shown in the figure is assumed to be noise, and therefore by expressing the data using the principal component, the noise and the dimensionality redundancy of the original basis x and y are minimized.

In multiple dimensions the procedure is similar. First choose a normalized direction which maximizes the variance of the data $X$ and save this vector as a new basis vector $p_1$; then search for a vector orthonormal to the previous basis vectors with maximal variance and store this as the basis vector $p_i$, and repeat until a sufficient amount of the variance of the data set is explained by the new set of basis vectors (principal components) $\{p_1, \ldots, p_m\}$. By constructing $P$ in this way, we assume that it is an orthonormal matrix, i.e. $P^T = P^{-1}$. There is a more general solution using singular value decomposition (SVD), but since the eigenvalue decomposition has been used in the model, it is the example that will be presented here. The example using SVD can be found in [9].

We begin by defining $C_Y$ in the same way as in equation (6):

$$C_Y = \frac{1}{n} Y Y^T,$$

which by equation (5) and some algebra, shown in Appendix B, gives

$$C_Y = P C_X P^T. \qquad (7)$$

Now, by selecting $P$ to be a matrix whose rows are the eigenvectors of $C_X$, and defining $Q$ such that $P \equiv Q^T$, equation (7) can be further expanded, and again by some algebra presented in Appendix B it can be shown that

$$C_Y = \Lambda,$$

where $\Lambda$ is the diagonal matrix containing the eigenvalues of $C_X$.

It can be seen that this choice of $P$ does diagonalize $C_Y$, and the manipulated covariance matrix is obtained as $C_Y = \Lambda$, which gives the approximated covariance matrix $C_X$ as

$$C_X = V \Lambda V^T,$$

where $V$ contains the eigenvectors corresponding to the eigenvalues in $\Lambda$ that explain a sufficient amount of the variance of the data. The significance of each principal component is, by section 3 in [10], given by

$$\psi_i = \frac{\lambda_i}{\sum_{j=1}^{n} \lambda_j},$$

where the $\lambda$ are the diagonal elements of $\Lambda$. The approximated covariance matrix can then be used to calculate expected values of assumed distributions, for example the expected curve movement conditioned on a jump for one of the tenors.

To summarize:

• The principal components are the eigenvectors of the covariance matrix of the data that explain a sufficient amount of the variance in the data.

• PCA is restricted to linear transformations of the original basis.
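The eigen-decomposition described above can be carried out directly with NumPy. The sketch below is our own illustration, with synthetic data standing in for the matrix of daily rate changes:

```python
import numpy as np

def pca_eig(delta_D):
    """PCA of the daily changes via the eigen-decomposition C = V Lambda V^T.
    Rows of delta_D are days, columns are tenors."""
    X = delta_D - delta_D.mean(axis=0)           # centre each tenor
    C = X.T @ X / X.shape[0]                     # covariance matrix, equation (6)
    eigval, eigvec = np.linalg.eigh(C)           # eigh since C is symmetric
    order = np.argsort(eigval)[::-1]             # sort by decreasing variance
    eigval, eigvec = eigval[order], eigvec[:, order]
    psi = eigval / eigval.sum()                  # significance of each PC
    return eigvec, eigval, psi

rng = np.random.default_rng(0)
delta_D = rng.normal(scale=1e-4, size=(2881, 15))  # synthetic stand-in for daily changes
V, lam, psi = pca_eig(delta_D)
print(psi[:3])                                      # variance explained by the first 3 PCs
```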


2.5 Artificial Neural Network

Another approach for predicting yield curves is to use an Artificial Neural Network (ANN).

The idea behind an ANN is inspired by the human brain, in which signals are sent between neurons. An ANN works in a similar way but uses artificial neurons, which receive real numbers as signals from other neurons, sum all weighted incoming signals (often including a bias), process the sum through an activation function and send the result on. The output $s$ of neuron $j$ in layer $i$ can therefore be described by:

$$s^i_j = \sigma\left(\sum_k w^i_{jk}\, s^{i-1}_k + b^i_j\right), \qquad (8)$$

where $\sigma$ is the activation function, the sum runs over the neurons $k$ of the previous layer, $w^i_{jk}$ is the weight from the $k$-th neuron in the previous layer to the $j$-th neuron in the $i$-th layer, and $b^i_j$ is the bias of neuron $j$ in layer $i$. The activation function can be almost any function, but some commonly used choices are presented in Figure 3.

$$\sigma(x) = \max(0, x)$$

$$\sigma(x) = \begin{cases} \max(0, x), & x < 6 \\ 6, & x \geq 6 \end{cases}$$

$$\sigma(x) = \frac{1}{1 + e^{-x}}$$

$$\sigma(x) = \tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}$$

Figure 3: Example of Activation Functions.


A basic structure of an ANN is an input layer, a certain number of hidden layers, and an output layer. Such a network is called a multilayer perceptron (MLP) (see Figure 4).

Figure 4: Example of the structure of a MLP network.

The network trains on a part of the available data and uses the rest for validation. The training starts from a random set of weights and biases; the output given by the ANN is compared with the actual target output, and the weights and biases are then updated, usually using some form of gradient descent:

$$w_{t+1} = w_t - \eta \nabla E(w_t), \qquad (9)$$

where $w$ are the weights (usually including the biases), $E$ is the error function (for example the mean squared error), $\eta$ is the learning rate, i.e. how fast gradient descent updates the weights, and $t$ is the current iteration. This is repeated until either a specified number of epochs (passes through the training data) has been completed or a chosen cost function (for example the mean squared error) has reached a specified threshold.


2.5.1 Optimizer

The optimizer used for this report was the Adam optimizer which is a highly efficient adaptive learning rate optimization algorithm. The algorithm as described in the original paper [12]:

Require: $\eta$: learning rate ($\eta = 0.001$ in the original paper)
Require: $\beta_1, \beta_2 \in [0, 1)$: decay rates for the moment estimates ($\beta_1 = 0.9$, $\beta_2 = 0.999$ in the paper)
Require: $f(\theta)$: objective function, where $\theta$ are the parameters
Require: $\epsilon$: denominator offset to avoid division by zero ($\epsilon = 10^{-8}$ in the original paper)

Initialize: $m_0 = 0$ (first moment vector), $v_0 = 0$ (second moment vector), $t = 0$

while $\theta_t$ has not converged do
    $t = t + 1$
    $g_t = \nabla_\theta f_t(\theta_{t-1})$
    $m_t = \beta_1 \cdot m_{t-1} + (1 - \beta_1) \cdot g_t$   (update biased first moment)
    $v_t = \beta_2 \cdot v_{t-1} + (1 - \beta_2) \cdot g_t^2$   (update biased second moment)
    $\hat{m}_t = m_t / (1 - \beta_1^t)$   (calculate bias-corrected first moment)
    $\hat{v}_t = v_t / (1 - \beta_2^t)$   (calculate bias-corrected second moment)
    $\theta_t = \theta_{t-1} - \eta \cdot \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)$   (update parameters)
end while
return $\theta_t$

Algorithm 1: Adam optimizer as described in its original paper.
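A direct NumPy transcription of Algorithm 1 (our own sketch; the quadratic objective at the end is only a stand-in for a network loss):

```python
import numpy as np

def adam(grad, theta0, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8, steps=1000):
    """Adam optimizer following Algorithm 1: biased first/second moment estimates
    with bias correction before each parameter update."""
    theta = np.asarray(theta0, dtype=float)
    m = np.zeros_like(theta)                    # first moment vector
    v = np.zeros_like(theta)                    # second moment vector
    for t in range(1, steps + 1):
        g = grad(theta)
        m = beta1 * m + (1 - beta1) * g         # update biased first moment
        v = beta2 * v + (1 - beta2) * g**2      # update biased second moment
        m_hat = m / (1 - beta1**t)              # bias-corrected first moment
        v_hat = v / (1 - beta2**t)              # bias-corrected second moment
        theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Stand-in objective f(theta) = ||theta||^2 with gradient 2*theta; minimum at the origin.
print(adam(lambda th: 2 * th, theta0=[1.0, -2.0], steps=5000))
```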

2.5.2 Regularisation

The error function used in this report is the mean squared error with an L2-regularization term to avoid overfitting:

$$E(w) = \frac{1}{N} \sum_{i=1}^{N} \big(f(x_i) - w x_i - b_i\big)^2 + \alpha w^2,$$

where $\alpha$ controls the level of regularization.

2.5.3 Example of a neural network calibration

Let $X \in R^{N \times 3}$ be a set of inputs and $Y \in R^{N \times 5}$ be a set of targets for the neural network in Figure 4. Let $w = [w_1, w_2]^T$ be the weights, where $w_1$ are the weights between the input layer and the hidden layer and $w_2$ are the weights between the hidden layer and the output. Also let the error function be the mean squared error:

$$E(w) = \frac{1}{N} \sum_{i=1}^{N} (Y_i - Y^*_i)^2,$$

where $Y^*$ is the output of the neural network:

$$Y^* = w_2 \cdot \sigma(w_1 \cdot X).$$

The calibration is then done by initializing $w$ with some random weights $w_0$, calculating $Y^*$ using these initial weights,

$$Y^*_0 = w_{0,2} \cdot \sigma(w_{0,1} \cdot X),$$

calculating the mean squared error,

$$E(w_0) = \frac{1}{N} \sum_{i=1}^{N} (Y_i - Y^*_{0,i})^2,$$

and then updating the weights,

$$w_1 = w_0 - \eta \nabla E(w_0),$$

repeating until the results are satisfactory.
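A NumPy sketch of this calibration loop for the network in Figure 4 (3 inputs, one hidden layer, 5 outputs); the hidden width, learning rate, number of epochs and the synthetic data are illustrative assumptions, and tanh stands in for the activation σ:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # inputs
Y = rng.normal(size=(200, 5))                      # targets
W1 = rng.normal(scale=0.1, size=(3, 10))           # input -> hidden weights (w_1)
W2 = rng.normal(scale=0.1, size=(10, 5))           # hidden -> output weights (w_2)
eta = 0.01                                         # learning rate

for epoch in range(500):
    H = np.tanh(X @ W1)                            # hidden activations sigma(w_1 * X)
    Y_hat = H @ W2                                 # network output Y*
    err = Y_hat - Y
    mse = np.mean(err**2)                          # E(w), the mean squared error
    # Backpropagate the error and take one gradient descent step, equation (9).
    dY = 2 * err / err.size
    dW2 = H.T @ dY
    dH = dY @ W2.T
    dW1 = X.T @ (dH * (1 - H**2))                  # tanh'(z) = 1 - tanh(z)^2
    W1 -= eta * dW1
    W2 -= eta * dW2

print(mse)                                          # error after calibration
```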

2.5.4 Universal approximation theorem

The idea of using neural networks to predict curve dynamics is supported by the universal approximation theorem. The universal approximation theorem, which was first proven by George Cybenko [2] (and later expanded on in [3]), states that a feed-forward artificial neural network with one hidden layer can approximate any continuous function on compact subsets of $R^n$ to any accuracy with a finite number of neurons. Mathematically this can be written as follows. Let $\sigma$ be almost any continuous activation function (see [3] for more details) and $f(x)$ be the function we want to approximate. The neural network approximation $f^*(x)$ is described as:

$$f^*(x) = \sum_{i=1}^{N} c_i\, \sigma(w_i^T x + b_i), \qquad (10)$$

where $c_i, b_i \in R$. Then for any $\varepsilon > 0$ there exists an integer $N$ such that

$$|f^*(x) - f(x)| < \varepsilon, \quad x \in C, \qquad (11)$$

where $C$ is a compact subset of $R^n$.

3 Data

The data used in this report is either provided by SEB, is public information, or is generated for testing and evaluation purposes. SEB has provided historical data of IRS rates for the Swedish krona (SEK) with STIBOR3M as floating rate, and for the euro (EUR) with EURIBOR6M as floating rate. The time until maturity is called the tenor, and the tenors for these swaps are 1Y, 2Y, 3Y, 4Y, 5Y, 6Y, 7Y, 8Y, 9Y, 10Y, 12Y, 15Y, 20Y, 25Y and 30Y; the data is given to the fourth decimal. The time frame for this data is from 02-01-2009 to 17-01-2020, which gives 2882 rates (days) per tenor.

[Table: historical IRS rate data, with columns Day, 1Y, 2Y, 3Y, 4Y, 5Y, 6Y, 7Y, 8Y, 9Y, 10Y, 12Y, 15Y, 20Y, 25Y, 30Y; same layout as Table 2.]


The general shape of the rates curve for any given day can be seen in Figure 5. It is essentially the dynamics of this curve that is of interest to predict in this report.

Figure 5: Swedish swap rates evolvement.

Figure 6: Historical Swedish and Euro swap rates.


The initial assumption is that the Swedish and European rates are highly correlated. It can be seen in Figure 6 that the long end (10Y and 30Y) is strongly correlated whereas the short end still seems to be correlated but not as much as the long end.

3.1 Test data

Data for the time period 20-01-2020 to 06-05-2020 was used for measuring the average performances of the methods for several predictions. This data consists of 79 rates per tenor.

Day  1Y        2Y        3Y      4Y        5Y        6Y        7Y        8Y        9Y        10Y       12Y      15Y      20Y       25Y      30Y
1    0.0021    0.00225   0.0026  0.00305   0.003625  0.0042    0.00475   0.0053    0.005825  0.006325  0.0073   0.0084   0.009325  0.00945  0.009325
2    0.0021    0.002275  0.0026  0.003025  0.00355   0.0041    0.004625  0.005175  0.005675  0.006175  0.00715  0.00825  0.009175  0.0093   0.00915
...  ...       ...       ...     ...       ...       ...       ...       ...       ...       ...       ...      ...      ...       ...      ...
79   0.001075  0.000825  0.0009  0.00115   0.001475  0.001875  0.0023    0.0027    0.003125  0.00355   0.00435  0.0051   0.00545   0.00505  0.004475

Table 2: Structure of the IRS rate test data. The data for SEK and EUR have identical structures.


3.2 Generated data

In order to validate that the models are working correctly some artificial data is generated.

The dynamics for this data is known and therefore the results obtained from the PCA and ANN models can be verified easily. This data is generated using the Vasicek model, which will be presented in the following section.

3.2.1 Vasicek model

The Vasicek model is a one-factor model for modeling short rates, which means that it only depends on one source of uncertainty. The Vasicek model is an Ornstein-Uhlenbeck process of the form:

$$dr_t = (b - a r_t)\,dt + \sigma\,dW_t, \qquad (12)$$

where $W_t$ is a Wiener process, $\sigma$ is the instantaneous volatility, $b/a$ is the long term mean level towards which the short rate is pulled, and $a$ is the speed of reversion. By proposition 5.3 in [4], it can be shown using Ito's formula that the solution to (12) is:

$$r_t = r_0 e^{-at} + \frac{b}{a}\left(1 - e^{-at}\right) + \sigma e^{-at} \int_0^t e^{as}\, dW_s. \qquad (13)$$

By Lemma 4.15 in [4], any process $X(t)$ with a deterministic function $\sigma(t)$ of the form

$$X(t) = \int_0^t \sigma(s)\, dW(s)$$

has a Gaussian distribution with parameters

$$E[X(t)] = 0, \qquad Var[X(t)] = \int_0^t \sigma^2(s)\, ds.$$

Equation (13) can therefore be written as:

$$r_t = r_0 e^{-at} + \frac{b}{a}\left(1 - e^{-at}\right) + \epsilon, \qquad \epsilon \sim N\!\left(0,\; \frac{\sigma^2}{2a}\left(1 - e^{-2at}\right)\right),$$

so the short rates can be simulated easily.

By proposition 23.3 in [4] the risk neutral valuation of bond prices is given by the formula

$$p(t, T) = F(t, r(t); T),$$

where

$$F(t, r; T) = E^Q_{t,r}\left[e^{-\int_t^T r(s)\, ds}\right],$$

and $E^Q$ denotes that the expectation is taken with respect to the martingale measure $Q$, assuring us that the market is free of arbitrage.

The term structure for the Vasicek model is given according to proposition 24.3 in [4] as

$$p(t, T) = e^{A(t,T) - B(t,T) r(t)}, \qquad (14)$$

where

$$A(t, T) = \frac{\big(B(t, T) - T + t\big)\big(ab - \tfrac{1}{2}\sigma^2\big)}{a^2} - \frac{\sigma^2 B^2(t, T)}{4a}, \qquad B(t, T) = \frac{1}{a}\left(1 - e^{-a(T-t)}\right),$$

from which we can obtain the yield for each bond by the formula for pricing a zero coupon bond:

$$p(t, T) = e^{-y(t,T)(T-t)} \implies y(t, T) = -\frac{\ln(p(t, T))}{T - t}. \qquad (15)$$

To see how the Vasicek model behaves at different tenors we can bump one of the yields by $x$ basis points (bps) and then calculate the short rate that corresponds to the bumped yield. Combining (14) and (15), the short rate corresponding to the bumped yield is

$$r(t) = \frac{A(t, T) + \big(y(t, T) + x\big)(T - t)}{B(t, T)}, \qquad (16)$$

which can then be used to calculate the yield for the remaining tenors. Here 3000 days are simulated, which gives the data matrix $X \in R^{3000 \times 15}$.

Figure 8: Yield generated from the Vasicek model.

The Vasicek data was generated using the parameters in Table 3.

a = 0.2, b = 0.0045
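A minimal sketch of this data generation (our own code): a and b follow the values above, while σ, r0 and the day-count convention are assumed, since they are not reproduced in the extracted table:

```python
import numpy as np

# Vasicek parameters; a and b as above, sigma and r0 are assumed illustrative values.
a, b, sigma, r0 = 0.2, 0.0045, 0.002, 0.0025
dt, n_days = 1.0 / 252, 3000
tenors = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 20, 25, 30], dtype=float)

def B(tau):
    """B(t, T) in the Vasicek term structure, tau = T - t."""
    return (1 - np.exp(-a * tau)) / a

def A(tau):
    """A(t, T) in the Vasicek term structure, equation (14)."""
    return ((B(tau) - tau) * (a * b - 0.5 * sigma**2)) / a**2 - sigma**2 * B(tau)**2 / (4 * a)

# Simulate the short rate with the exact Gaussian transition from equation (13).
rng = np.random.default_rng(0)
r = np.empty(n_days)
r[0] = r0
for t in range(1, n_days):
    mean = r[t - 1] * np.exp(-a * dt) + (b / a) * (1 - np.exp(-a * dt))
    std = np.sqrt(sigma**2 / (2 * a) * (1 - np.exp(-2 * a * dt)))
    r[t] = mean + std * rng.normal()

# Zero coupon prices p(t, T) = exp(A - B r) and yields y = -ln(p)/(T - t), eqs (14)-(15).
p = np.exp(A(tenors)[None, :] - B(tenors)[None, :] * r[:, None])
X = -np.log(p) / tenors[None, :]          # data matrix X of shape (3000, 15)
print(X.shape)
```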


4 Methodology

This section describes how the theory from Section 2 is used in this project. The assumptions made and the construction of the matrices and model structures are presented.

4.1 Principal Component Analysis approach

The PCA is done on the daily returns $\Delta D$, i.e. the changes from day to day, which are assumed to be sampled from a multivariate random variable $\Delta A \sim N(0, \Sigma)$, $\Sigma \in R^{N \times N}$:

$$D = \begin{pmatrix} R_{1,1} & R_{1,2} & \cdots & R_{1,N} \\ R_{2,1} & R_{2,2} & \cdots & R_{2,N} \\ \vdots & \vdots & \ddots & \vdots \\ R_{T,1} & R_{T,2} & \cdots & R_{T,N} \end{pmatrix} \implies \Delta D = \begin{pmatrix} \Delta R_{1,1} & \Delta R_{1,2} & \cdots & \Delta R_{1,N} \\ \Delta R_{2,1} & \Delta R_{2,2} & \cdots & \Delta R_{2,N} \\ \vdots & \vdots & \ddots & \vdots \\ \Delta R_{T,1} & \Delta R_{T,2} & \cdots & \Delta R_{T,N} \end{pmatrix},$$

$$\left(\Delta R_{t,1}, \Delta R_{t,2}, \ldots, \Delta R_{t,N}\right) \sim \text{i.i.d. } N(0, \Sigma),$$

where $N$ is the number of different tenors, $T$ the number of days included in the data, and $t \in [1, T]$.

The covariance matrix $\Sigma$ of $\Delta A$ is approximated by the method introduced in section 2.4. By this construction the approximation of $\Sigma$ is given by

$$C_{\Delta A} = V_{\Delta D}\, \Lambda_{\Delta D}\, V_{\Delta D}^T,$$

where $\Lambda_{\Delta D}$ is the diagonal matrix containing the eigenvalues (of the covariance matrix of $\Delta D$) that explain a sufficient amount of the variance of the data, and $V_{\Delta D}$ is the matrix containing the corresponding eigenvectors. This covariance matrix can then be used in the calculation of the conditional curve movements given a bump for some of the tenors.

The expected curve movement conditioned on a jump for one or several of the tenors is then calculated in the following way. The changes from day $T$ to $T+1$ are assumed to follow a multivariate Gaussian distribution. $\Delta A$ is partitioned as

$$\Delta A = \left(\Delta A_1, \Delta A_2\right), \qquad \Delta A_1 \in R^{T \times (N-k)}, \quad \Delta A_2 \in R^{T \times k},$$

where $\Delta A_2$ are the known movements of $k$ tenors and $\Delta A_1$ are the unknown movements of the remaining tenors. Using the assumption that the changes follow a multivariate Gaussian distribution,

$$\mu = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix} \quad \text{and} \quad \Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}.$$

With the approximated covariance matrix $C_{\Delta A}$ the assumption is now that the conditional expectation is a multivariate Gaussian with

$$\mu = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix} \quad \text{and} \quad C_{\Delta A} = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix},$$

and by section 4.2 in [14] the expected value of $(\Delta A_1 \mid \Delta A_2 = x) \sim N(\bar{\mu}, \bar{\Sigma})$ is

$$\bar{\mu} = \mu_1 + \Sigma_{12} \Sigma_{22}^{-1} (x - \mu_2), \qquad (17)$$

where $\mu_1$ is the expected value of $\Delta A_1$, $\mu_2$ is the expected value of $\Delta A_2$, $\Sigma_{12}$ is the covariance matrix between $\Delta A_1$ and $\Delta A_2$, and $\Sigma_{22}$ is the covariance matrix of $\Delta A_2$. The output $\bar{\mu}$ in equation (17) is the predicted difference between $R_{T-1}$ and $R_T$. To get the predicted rates $\hat{R}_T$ for $R_T$, the predicted difference is added to the previous rates $R_{T-1}$:

$$\hat{R}_T = R_{T-1} + \bar{\mu}. \qquad (18)$$
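A sketch of the conditional prediction in equations (17) and (18) (our own code; the covariance matrix, the bumped tenor and the data are illustrative):

```python
import numpy as np

def conditional_prediction(R_prev, C, known_idx, known_move, mu=None):
    """Predict the next rates: the movements of the unknown tenors are the conditional
    Gaussian mean mu_1 + Sigma_12 Sigma_22^{-1} (x - mu_2), equation (17), and the
    prediction is R_T = R_{T-1} + mu, equation (18)."""
    n = C.shape[0]
    if mu is None:
        mu = np.zeros(n)                              # daily changes assumed zero-mean
    unknown_idx = np.setdiff1d(np.arange(n), known_idx)
    S12 = C[np.ix_(unknown_idx, known_idx)]           # Sigma_12
    S22 = C[np.ix_(known_idx, known_idx)]             # Sigma_22
    move = np.zeros(n)
    move[known_idx] = known_move                      # the bumped tenors
    move[unknown_idx] = mu[unknown_idx] + S12 @ np.linalg.solve(S22, known_move - mu[known_idx])
    return R_prev + move

# Example with a synthetic covariance matrix for 15 tenors and a -1 bp bump of the 5Y tenor.
rng = np.random.default_rng(0)
D = rng.normal(scale=1e-4, size=(500, 15))
C = np.cov(D, rowvar=False)
R_prev = np.linspace(0.001, 0.009, 15)
print(conditional_prediction(R_prev, C, known_idx=[4], known_move=[-0.0001]))
```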

4.1.1 Swedish Data

The daily Swedish data is sampled with ∆t = 1 and has the form:

$$A^{\text{daily}}_{\text{SEK}} = \begin{pmatrix} R_{1,1} & R_{1,2} & \cdots & R_{1,15} \\ R_{2,1} & R_{2,2} & \cdots & R_{2,15} \\ \vdots & \vdots & \ddots & \vdots \\ R_{2882,1} & R_{2882,2} & \cdots & R_{2882,15} \end{pmatrix} \implies \Delta A^{\text{daily}}_{\text{SEK}} = \begin{pmatrix} \Delta R_{1,1} & \Delta R_{1,2} & \cdots & \Delta R_{1,15} \\ \Delta R_{2,1} & \Delta R_{2,2} & \cdots & \Delta R_{2,15} \\ \vdots & \vdots & \ddots & \vdots \\ \Delta R_{2882,1} & \Delta R_{2882,2} & \cdots & \Delta R_{2882,15} \end{pmatrix},$$

where the columns correspond to the tenors 1Y, 2Y, 3Y, 4Y, 5Y, 6Y, 7Y, 8Y, 9Y, 10Y, 12Y, 15Y, 20Y, 25Y, 30Y.

The weekly data is sampled with $\Delta t = 5$, which corresponds to one business week, and has the form

$$A^{\text{weekly}}_{\text{SEK}} = \begin{pmatrix} R_{1,1} & R_{1,2} & \cdots & R_{1,15} \\ R_{2,1} & R_{2,2} & \cdots & R_{2,15} \\ \vdots & \vdots & \ddots & \vdots \\ R_{576,1} & R_{576,2} & \cdots & R_{576,15} \end{pmatrix} \implies \Delta A^{\text{weekly}}_{\text{SEK}} = \begin{pmatrix} \Delta R_{1,1} & \Delta R_{1,2} & \cdots & \Delta R_{1,15} \\ \Delta R_{2,1} & \Delta R_{2,2} & \cdots & \Delta R_{2,15} \\ \vdots & \vdots & \ddots & \vdots \\ \Delta R_{576,1} & \Delta R_{576,2} & \cdots & \Delta R_{576,15} \end{pmatrix}.$$


4.1.2 Swedish and Euro Data

The full (Swedish and Euro) daily data has the form:

$$A^{\text{daily}}_{\text{full}} = \begin{pmatrix} R^{SEK}_{1,1} & \cdots & R^{SEK}_{1,15} & R^{EUR}_{1,1} & \cdots & R^{EUR}_{1,15} \\ R^{SEK}_{2,1} & \cdots & R^{SEK}_{2,15} & R^{EUR}_{2,1} & \cdots & R^{EUR}_{2,15} \\ \vdots & & \vdots & \vdots & & \vdots \\ R^{SEK}_{2882,1} & \cdots & R^{SEK}_{2882,15} & R^{EUR}_{2882,1} & \cdots & R^{EUR}_{2882,15} \end{pmatrix} \implies \Delta A^{\text{daily}}_{\text{full}} = \begin{pmatrix} \Delta R^{SEK}_{1,1} & \cdots & \Delta R^{EUR}_{1,15} \\ \Delta R^{SEK}_{2,1} & \cdots & \Delta R^{EUR}_{2,15} \\ \vdots & & \vdots \\ \Delta R^{SEK}_{2882,1} & \cdots & \Delta R^{EUR}_{2882,15} \end{pmatrix},$$

where the first 15 columns are the SEK data and the remaining 15 columns are the EUR data. The columns therefore correspond to the tenors 1Y, 2Y, 3Y, 4Y, 5Y, 6Y, 7Y, 8Y, 9Y, 10Y, 12Y, 15Y, 20Y, 25Y, 30Y (SEK), followed by 1Y, 2Y, 3Y, 4Y, 5Y, 6Y, 7Y, 8Y, 9Y, 10Y, 12Y, 15Y, 20Y, 25Y, 30Y (EUR).

The full (Swedish and Euro) weekly data has the form:

$$A^{\text{weekly}}_{\text{full}} = \begin{pmatrix} R^{SEK}_{1,1} & \cdots & R^{SEK}_{1,15} & R^{EUR}_{1,1} & \cdots & R^{EUR}_{1,15} \\ \vdots & & \vdots & \vdots & & \vdots \\ R^{SEK}_{576,1} & \cdots & R^{SEK}_{576,15} & R^{EUR}_{576,1} & \cdots & R^{EUR}_{576,15} \end{pmatrix} \implies \Delta A^{\text{weekly}}_{\text{full}} = \begin{pmatrix} \Delta R^{SEK}_{1,1} & \cdots & \Delta R^{EUR}_{1,15} \\ \vdots & & \vdots \\ \Delta R^{SEK}_{576,1} & \cdots & \Delta R^{EUR}_{576,15} \end{pmatrix}.$$


4.2 Neural Network approach

The idea of how to predict the shape of the rates curve using neural networks is to train the network on the movements of one or several of the tenors. The input to the network is the current rates $R_i(t)$ at time $t$ for the different tenors $i$, together with $k$ known movements $\Delta R_j(t + \Delta t)$ for a subset of the tenors, $k < 15$:

$$\text{Input} = \left[R_1(t), R_2(t), \ldots, R_N(t), \Delta R_j(t + \Delta t), \ldots, \Delta R_k(t + \Delta t)\right] \in R^{N+k},$$

and the output is the movement of the rates for $t + \Delta t$:

$$\text{Output} = \left[\Delta R_1(t + \Delta t), \Delta R_2(t + \Delta t), \ldots, \Delta R_M(t + \Delta t)\right] \in R^M,$$

where $M = N - k$. The predicted rates at day $T_{i+\Delta t}$ are then given by

$$R(T_{i+\Delta t}) = R(T_i) + \Delta R(T_{i+\Delta t}), \qquad (19)$$

where $R(t)$ is the vector containing the $N$ rates used in the input vector and $\Delta R(t)$ are the rate changes obtained from the output vector. The main focus here is on daily and weekly predictions (one business day and five business days), so $\Delta t = 1$ or $\Delta t = 5$ is used.

Creating a good structure for a neural network means finding values for several hyperparameters, such as:

• Number of hidden layers

• Number of neurons in each hidden layer

• Choice of activation functions

• Choice of optimizer

• Learning rate

• Choice of cost function

• Choice of regularization

• Batch-size

• Number of epochs

• Choice of normalization of the data

Due to the large number of parameters, the method of grid searching, in which a large number of combinations of these parameters is tested, was utilized. There is also a problem with creating a general neural network: a network that works well with one set of data might not work well with another. The hyperparameters used were therefore found using grid searching for each dataset that was tried. For this report the neural networks have been built using the Keras library, which runs on top of the TensorFlow library, for Python 3.7.
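As an illustration of how such a network can be set up with Keras (our own sketch, not the exact code used in the thesis; the layer count, width, regularization and learning rate follow the values reported in Section 5.2.1, while the training data here is synthetic):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, regularizers

n_tenors, k = 15, 1                         # 15 SEK tenors plus one bumped ("steering") tenor
alpha, eta = 0.005, 1e-5                    # L2 regularization and learning rate (Section 5.2.1)

model = keras.Sequential(
    [layers.Dense(50, activation="tanh",
                  kernel_regularizer=regularizers.l2(alpha),
                  input_shape=(n_tenors + k,))]
    + [layers.Dense(50, activation="tanh",
                    kernel_regularizer=regularizers.l2(alpha)) for _ in range(3)]
    + [layers.Dense(n_tenors)]              # output: the movement of all 15 tenors
)
model.compile(optimizer=keras.optimizers.Adam(learning_rate=eta), loss="mse")

# Hypothetical training data: current rates plus the known move of the bumped tenor
# as input, next-day changes of all tenors as output.
rng = np.random.default_rng(0)
X_train = rng.normal(scale=1e-3, size=(2000, n_tenors + k))
y_train = rng.normal(scale=1e-4, size=(2000, n_tenors))
model.fit(X_train, y_train, epochs=10, batch_size=32, verbose=0)
```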


4.2.1 Swedish Data

The model for the Swedish data is constructed in the following way. The input vector contains the current state of the Swedish yield curve and also a fixed change (bump) for one or several ($k$) of the tenors:

$$\text{Input} = \left[R_1(t), R_2(t), \ldots, R_{15}(t), \Delta R_j(t + \Delta t), \ldots, \Delta R_k(t + \Delta t)\right] \in R^{15+k}.$$

The output is the movement of the rates for $t + \Delta t$:

$$\text{Output} = \left[\Delta R_1(t + \Delta t), \Delta R_2(t + \Delta t), \ldots, \Delta R_{15}(t + \Delta t)\right] \in R^{15}.$$

The predicted yield curve is then obtained from equation (19).

4.2.2 Swedish and Euro Data

The model with both Swedish and Euro data is constructed in the same way as the Swedish model above. The only difference is that the input and output vectors now contain both the Swedish and Euro rates:

$$\text{Input} = \left[R_1(t), R_2(t), \ldots, R_{30}(t), \Delta R_j(t + \Delta t), \ldots, \Delta R_k(t + \Delta t)\right] \in R^{30+k}.$$

The output is the movement of the rates for $t + \Delta t$:

$$\text{Output} = \left[\Delta R^{SEK}_1(t + \Delta t), \ldots, \Delta R^{SEK}_{15}(t + \Delta t), \Delta R^{EUR}_1(t + \Delta t), \ldots, \Delta R^{EUR}_{15}(t + \Delta t)\right] \in R^{30},$$

and the predicted yield curve is again obtained from equation (19).

4.3 Mean Squared Error

The error metric used when evaluating the accuracy of the models is the mean squared error (MSE). The mean squared error is defined in section 11.2 in [16] as

$$\text{MSE} = E\left[(Y - \hat{Y})^2\right],$$

where $\hat{Y}$ is an estimator of $Y$. The empirical MSE for the yield curve, which is used to evaluate the accuracy of the predictions, is given by

$$\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left(y(t, T_i) - \hat{y}(t, T_i)\right)^2, \qquad (20)$$

where $n$ is the number of tenors included, $T_i$ is the $i$-th tenor, $\hat{y}(t, T_i)$ is the yield predicted by PCA or ANN for tenor $T_i$, and $y(t, T_i)$ is the exact yield from the historical data.


5 Results

This section is going to present the results obtained from the models.

5.1 Vasicek model

To validate that the PCA and neural network approaches are feasible, the data generated from the Vasicek model was used to test whether the models would be able to predict the yield curve when the rate for a tenor is 'bumped' by +100 basis points, i.e. $\Delta R_j(t + \Delta t) = 0.01$. The correct values for the bumped rates can be calculated using (16). Both the PCA and the ANN models were trained on the generated data points presented in section 3.2.1.

Figure 9: Yield curve with the 5Y tenor bumped +100 bps.

It can be observed from Figure 9 and Figure 10 that the PCA approach is able to perfectly predict the bumped yield curve using only 1 PC. This is due to the Vasicek model being a one-factor model, which therefore only has one non-zero eigenvalue and in turn only one principal component. This can be seen using the explained variance metric, which is the ratio $\lambda_i / \sum_j \lambda_j$. For the Vasicek data the first eigenvalue explains 100% of the variance.


5.2 Real data

The real data depends on several underlying factors and will need more principal components than the Vasicek data. The explained variances for the first three eigenvalues sum to 0.99, i.e. they explain 99% of the variance. The first 3 principal components will therefore be used with the real data in this report.

Figure 11: Eigenvectors for the real data. Left: unscaled eigenvectors. Right: eigenvectors scaled by their corresponding eigenvalue (significance).

The procedure to test the performance of the neural network approach is to take the data of a day t for which the rates on day t + ∆t are known, use the data up to and including t to predict the rates on t + ∆t, and take the mean squared error of the predicted rates and the actual rates.

The performance of the neural network was first tested using a time step ∆t of 1 day. The performance was then also tested with ∆t = 5, as this simulates data of a rate updated weekly (one business week). The plotted rates are the rates for SEK, but the data used for both approaches also contains the rates for Euro. Both the SEK and Euro swap rates were bumped to see whether the neural network approach could incorporate the information of the Euro movement into its prediction of the SEK movement better than the PCA approach. The 1Y and 5Y tenors were chosen as 'steering' tenors, i.e. the tenors that are bumped. The 1Y tenor was chosen because it has the shortest time to maturity, and the 5Y tenor was chosen because it is where the explained variance plateaus (see Figure 11).
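The evaluation procedure can be sketched as follows (our own code; predict_next stands for either the PCA prediction (18) or the ANN prediction (19), and the test rates here are synthetic):

```python
import numpy as np

def average_mse(rates, predict_next, dt=1):
    """Average MSE over the test period: predict the curve dt days ahead from each
    day t and compare with the realized curve, equation (20)."""
    errors = []
    for t in range(len(rates) - dt):
        predicted = predict_next(rates[: t + 1], dt)   # uses data up to and including day t
        actual = rates[t + dt]
        errors.append(np.mean((actual - predicted) ** 2))
    return np.mean(errors)

# Example with a trivial "no change" predictor on synthetic test rates (79 days, 15 tenors).
rng = np.random.default_rng(0)
test_rates = np.cumsum(rng.normal(scale=1e-4, size=(79, 15)), axis=0) + 0.005
print(average_mse(test_rates, predict_next=lambda hist, dt: hist[-1], dt=1))
```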


5.2.1 Daily Prediction

The parameters used for predicting the daily rates with the neural network approach:

No. layers   Neurons   α       η         Activation function
4            50        0.005   0.00001   tanH

The following graphs show one day predictions for the SEK data, made by the PCA and ANN models as described in sections 4.1 and 4.2.

Figure 12: Swedish swap rates one day prediction, 1Y Swedish tenor bumped by -0.5 bps.


Figure 14: Swedish swap rates one day prediction, 5Y Swedish tenor bumped by -1 bps.

Figure 15: Absolute error between the predicted rates and the actual rates when bumping 5Y tenor using ∆t = 1.

Figures 12 and 14 show the predicted rates of 17-01-2020 calculated by equation (19) for the neural network approach and by equation (18) for the PCA approach using the daily SEK data in section 4.1.1. The actual rates are also shown for comparison. Figures 13 and 15 show the absolute errors for the different tenors for the same day.


The following graphs show one day predictions for the SEK and EUR data, made by the PCA and ANN models as described in sections 4.1 and 4.2.

Figure 16: Swedish swap rates one day prediction, 1Y Euro tenor bumped by 0.4 bps.

Figure 17: Absolute error between the predicted rates and the actual rates when bumping Euro 1Y tenor using ∆t = 1.


Figure 18: Swedish swap rates one day prediction, 5Y Euro tenor bumped by -0.5 bps.

Figure 19: Absolute error between the predicted rates and the actual rates when bumping Euro 5Y tenor using ∆t = 1.

Figures 16 and 18 show the predicted rates of 17-01-2020, calculated by equation (19) for the neural network approach and by equation (18) for the PCA approach, using the full daily data in section 4.1.2. The actual rates are also shown for comparison. Figures 17 and 19 show the absolute errors for the different tenors for the same day.

Tenor bumped   MSE PCA        MSE ANN
SEK 1Y         4.38 · 10^-8   6.72 · 10^-7
SEK 5Y         2.74 · 10^-8   6.43 · 10^-7
EUR 1Y         2.05 · 10^-6   6.93 · 10^-7
EUR 5Y         3.67 · 10^-8   6.85 · 10^-7

Table 4: Mean squared errors for predicting daily rates.


5.2.2 Weekly Prediction

The parameters used for predicting the weekly rates with the neural network approach for the 1Y, 5Y and Euro 5Y tenors:

No. layers   Neurons   α       η        Activation function
4            50        0.005   0.0001   tanH

For the Euro 1Y tenor:

No. layers   Neurons   α      η         Activation function
4            50        0.05   0.00001   tanH

The following graphs show weekly predictions for the SEK data, made by the PCA and ANN models as described in sections 4.1 and 4.2.

Figure 20: Swedish swap rates one week prediction, 1Y Swedish tenor bumped by 2.5 bps.


Figure 22: Swedish swap rates one week prediction, 5Y Swedish tenor bumped by 0.5 bps.

Figure 23: Absolute error between the predicted rates and the actual rates when bumping 5Y tenor using ∆t = 5.

Figures 20 and 22 show the predicted rates of 17-01-2020 calculated by equation (19) for the neural network approach and by equation (18) for the PCA approach using the weekly SEK data in section 4.1.1. The actual rates are also shown for comparison. Figures 21 and 23 show the absolute errors for the different tenors for the same day.


The following graphs show weekly predictions for the SEK and EUR data, made by the PCA and ANN models as described in sections 4.1 and 4.2.

Figure 24: Swedish swap rates one week prediction, 1Y Euro tenor bumped by -1 bps.

Figure 25: Absolute error between the predicted rates and the actual rates when bumping Euro 1Y tenor using ∆t = 5.


Figure 26: Swedish swap rates one week prediction, 5Y Euro tenor bumped by -0.8 bps.

Figure 27: Absolute error between the predicted rates and the actual rates when bumping Euro 5Y tenor using ∆t = 5.

Figures 24 and 26 show the predicted rates of 17-01-2020, calculated by equation (19) for the neural network approach and by equation (18) for the PCA approach, using the full weekly data in section 4.1.2. The actual rates are also shown for comparison. Figures 25 and 27 show the absolute errors for the different tenors for the same day.

Tenor bumped   MSE PCA        MSE ANN
SEK 1Y         1.42 · 10^-5   1.08 · 10^-6
SEK 5Y         1.68 · 10^-6   1.04 · 10^-6
EUR 1Y         2.25 · 10^-6   1.17 · 10^-6
EUR 5Y         1.16 · 10^-6   1.07 · 10^-6

Table 5: Mean squared errors for predicting weekly rates.

References
