
Modelling Credit Risk: Estimation of Asset and Default Correlation for an SME Portfolio


Academic year: 2022


Credit Risk Department at Handelsbanken

Master Thesis 30 hp
MSc. Industrial Engineering and Management – Risk Management 300 hp

Authors: Yaxum Cedeno, Rebecca Jansson

Supervisors: Bujar Huskaj, Olow Sande


When banks lend capital to counterparties they take on a risk, known as credit risk, which traditionally has been the largest risk exposure for banks. To be protected against potential default losses when lending capital, banks must hold regulatory capital that is based on a regulatory formula for calculating risk weighted assets (RWA). This formula is part of the Basel Accords and is implemented in the legal systems of all European Union member states. The key parameters of the RWA formula are probability of default, loss given default and asset correlation. Banks today have the option to estimate the probability of default and loss given default with internal models; however, the asset correlation must be determined by a formula provided by the legal framework.

This project is a first approach for Handelsbanken to study what would happen if banks were allowed to estimate asset correlation with internal models. We assess two models for estimating the asset correlation of a portfolio of Small and Medium Enterprises (SME). The estimates are compared with the asset correlation given by the regulatory formula and with estimates of another parameter called default correlation. The models are validated using predicted historical data and Monte-Carlo simulations. For the studied SME portfolio, the models give similar estimates of the asset correlation, and the estimates are lower than those given by the regulatory formula. This would imply a lower capital requirement if banks were allowed to use internal models to estimate the asset correlation used in the RWA formula. Default correlation, when not used synonymously with asset correlation, is shown to be a different measure and should not be used in the RWA formula.

Keywords: Basel Capital Accord, Capital Requirements, SME, Portfolio Credit Risk, Monte-Carlo Simulations, Risk Weighted Assets (RWA).


Sammanfattning

When banks lend capital to counterparties they take on a risk, known as credit risk, which has traditionally been the largest risk for banks. To protect themselves against potential losses when lending, banks must hold regulatory capital based on a formula for calculating risk weighted assets (RWA). This formula is part of the Basel framework and is implemented in the legal systems of all EU member states. The key parameters of the RWA formula are probability of default, loss given default and asset correlation. Banks today have the option to estimate probability of default and loss given default with internal models, but asset correlation must be determined by a standard formula given by the framework.

This project is a first approach for Handelsbanken to study what would happen if banks were allowed to estimate asset correlation with internal models. We analyse two models for estimating the asset correlation of a portfolio of Small and Medium Enterprises (SME). The estimates are then compared with the asset correlation given by the framework, and also against a parameter called default correlation. The models used to estimate the correlations are validated using predicted data and Monte-Carlo simulations. For the studied SME portfolio, the two asset correlation models give similar estimates, and these turn out to be lower than the correlation given by the framework.

This would imply a lower capital requirement if banks were allowed to use internal models to estimate the asset correlation used in the RWA formula. If default correlation is not used synonymously with asset correlation, it turns out to be a different measure than asset correlation and should not be used in the RWA formula.

Keywords: Basel Capital Accord, Capital Requirements, SME, Portfolio Credit Risk, Monte-Carlo Simulations, Risk Weighted Assets (RWA).


First, we would like to give great thanks to Bujar Huskaj and Olow Sande for being our supervisors throughout this master thesis; you have contributed invaluable support and engagement. Further, we would like to acknowledge Handelsbanken and especially the Modelling group of the Independent Credit Risk Department for taking extraordinary care of us during this semester. We would also like to acknowledge everyone else who has been involved in our project in some way; it has been a truly inspiring period.


Contents

1 Introduction 2

1.1 Background . . . 2

1.1.1 Asset Correlation vs Default Correlation . . . 3

1.2 Project Description . . . 3

1.3 Purpose . . . 4

1.4 Delimitations . . . 4

1.5 Data . . . 4

1.5.1 SME Portfolio . . . 5

1.5.2 Macro-Economics . . . 6

1.6 Outline . . . 7

2 Credit Risk and Regulations 8

2.1 Basel Committee on Banking Supervision . . . 8

2.2 Internal Rating-Based System . . . 9

2.3 Probability of Default . . . 10

2.4 Risk Weight . . . 10

2.4.1 Asset Correlation . . . 11

2.4.2 Dependence of Asset Correlation, PD and RW . . . 11

3 Mathematical Models of Credit Default 14

3.1 Correlation . . . 14

3.2 Merton’s Model . . . 14

3.3 Derivation of the Risk Weight Formula . . . 15

3.4 Asset Correlation Models . . . 16

3.4.1 Binomial Likelihood . . . 17

3.4.2 Large Portfolio Approximation . . . 17

3.5 Default Correlation Model . . . 19

3.5.1 Joint Default Probability . . . 19

4 Statistical Methods 21

4.1 Principal Component Analysis . . . 21

4.2 Multivariate Regression Analysis & Ordinary Least Squares . . . . 22

4.3 Maximum Likelihood Estimation . . . 23

4.4 Monte-Carlo Simulation . . . 23

4.5 Validation Tests . . . 24

4.5.1 Augmented Dickey-Fuller Test . . . 24

4.5.2 Mean Absolute Error . . . 25

4.5.3 Root Mean Square Error . . . 25

5 Method 26

5.1 Predicting Data . . . 26

5.1.1 PCA . . . 27

5.1.2 Regression Analysis . . . 29

5.2 Estimating Asset Correlation . . . 30


5.2.1 Binomial Likelihood . . . 30

5.2.2 Large Portfolio Approximation . . . 31

5.3 Estimating Default Correlation . . . 31

5.4 Simulation of Default Correlation . . . 32

5.5 Validation of Likelihood Models . . . 32

6 Results 33

6.1 Data Validation . . . 33

6.2 Estimated Asset and Default Correlation . . . 36

6.3 Correlation Coefficient in the RW-Formula . . . 37

6.4 Asset and Simulated Default Correlation . . . 38

6.5 Validation of Likelihood Functions . . . 39

6.5.1 Monte-Carlo with Different Maturity . . . 39

6.5.2 Monte-Carlo with Different Number of Simulations . . . 40

7 Discussion 42

7.1 Modeling Data . . . 42

7.2 Analysis of Estimated Correlation . . . 42

7.3 RW-formula as a benchmark . . . 43

7.4 Conclusion . . . 44

7.5 Further development . . . 45

8 References 46


Abbreviations

BCBS Basel Committee on Banking Supervision
BL Binomial Likelihood
CPI Consumer Price Index
IAPI Inflation-adjusted property index
IRB Internal Rating-Based Approach
JDP Joint Default Probability
LGD Loss Given Default
LPA Large Portfolio Approximation
MAE Mean Absolute Error
MCS Monte-Carlo Simulation
MLE Maximum Likelihood Estimation
OLS Ordinary Least Square Method
PBI Price Base Index
PC Principal Component
PCA Principal Component Analysis
PD Probability of Default
RMSE Root Mean Square Error
RW Risk Weight
RWA Risk Weighted Assets
SME Small Medium Enterprises
TCW TCW-index
UL Unexpected Loss
UR Unemployment
VaR Value at Risk
10Y 10Y Government Bond


1 Introduction

This section aims to introduce the reader to the project and provide the background knowledge needed to follow the article. The introduction covers the background, project description and purpose of the project. An explanation of the two key parameters emphasized throughout the article, default and asset correlation, is provided. Furthermore, the underlying data of the project is presented.

1.1 Background

Deposits provide capital for banks, which they can in turn lend to others at interest. In this way loans are a source of income for the bank, but also a potential source of losses. There is always a risk that a borrower may not be able to repay the principal of a loan or the interest associated with it. This is known as credit risk (Investopedia, 2018a). Governments across the world want the financial sector to be stable, and thus financial institutions are heavily regulated. Regulators require banks to hold a certain amount of capital to protect themselves against risk exposures, following the Basel Capital Accords. Credit risk has traditionally been the greatest risk a bank faces and it is usually the risk for which most regulatory capital is required (Hull, 2015).

The Basel Accords are recommendations of laws and regulations for internationally active banks, issued by the Basel Committee on Banking Supervision (BCBS). The Basel Committee does not have the power to enforce these recommendations; however, the European Union has incorporated the Basel Accords into its legal system to ensure that European banking specificities are appropriately addressed (BIS, 2018). The main objective of bank regulation is to prevent banks from defaulting by making them hold enough capital for their risk exposures. It is obviously not possible to eliminate all risks, but governments aim to reduce the probability of default for banks.

Loss given default (LGD) and probability of default (PD) are well known in the determination of credit risk, while asset correlation has not received as much attention.

The reason for this is mainly the lack of available historical data and the fact that regulators have focused more on PD and LGD (Moody's, 2008). However, in recent years the interest in studying asset correlation has increased significantly, as a result of the enhanced acknowledgement of credit portfolio management and capital frameworks together with the development of credit default swaps (CDS) and collateralized debt obligations (CDO). The three parameters PD, LGD and asset correlation are the key drivers in the risk weight formula, see Equation 3 in Section 2.4, which determines the capital requirement.

It is of high importance for banks to be able to correctly estimate their credit risk exposure. If banks hold excessive capital they might lose investment opportunities.


parameters in the calculation of capital requirements. Today, banks have the opportunity to estimate PD and LGD with internal models; however, the asset correlation must be calculated by a formula given by the legal framework, see Equation 5.

1.1.1 Asset Correlation vs Default Correlation

In this article we will study the measure correlation. Two types of correlation will be examined, asset and default correlation, both of which refer to the dependence between two enterprises. Asset correlation is one of the key drivers in the calculation of the capital requirement and is defined by how one firm's value depends on the value of another firm. Asset correlations may be calculated directly if the asset values are known, for example using stock prices as estimates for asset values. However, asset correlations are often estimated using default data, as we do in this paper. Because of this, asset correlation is sometimes called "default correlation" in the literature, for example in the paper by Demey et al. (2004) titled "Maximum likelihood estimate of default correlations".

Default correlation is defined as the dependence between the default of one firm and the default of another (Moody's, 2008). This default correlation is studied in, for example, "Default correlation and credit risk analysis" by Lucas (1995). To avoid confusion we will always call the correlation of asset values "asset correlation" and the correlation of default events "default correlation". In this article we mainly study asset correlation, since this is the parameter used in the credit risk formula and thus of interest. However, we will also consider default correlation, as proposed by Handelsbanken. To compare default and asset correlations we will use historical default data as well as Monte-Carlo Simulations (MCS). In Section 3.1 the mathematical definition of correlation is presented, which is needed to fully understand the concepts of asset and default correlation.

1.2 Project Description

This article investigates asset and default correlation for a Small and Medium Enterprise (SME) portfolio. The project was proposed by the Credit Risk Department at Handelsbanken and boils down to two main objectives. The first objective is to estimate asset and default correlation using different models in order to compare the two parameters. The second objective is to evaluate how the risk weight formula behaves when the estimated asset correlation is used instead of the asset correlation given by the framework.

We will estimate the asset correlation by implementing two different models, and the default correlation will be estimated by a third model. The models are described in Section 3, as is the mathematical model of defaulting loans underlying the Basel risk weight formula.


1.3 Purpose

It is of interest for banks to use internal models to estimate the parameters in the capital accords, in order to reflect a portfolio as accurately as possible. This thesis is thus a first approach in case Handelsbanken is in the future allowed to estimate the asset correlation in the risk weight formula with internal models. The second aim of this thesis is to determine whether asset and default correlation differ for an SME portfolio.

1.4 Delimitations

To ensure that the project is completed by the deadline, some delimitations are made.

There are various approaches for estimating correlation, and for this thesis three models are selected. This limitation is necessary due to the time frame. The SME portfolio consists of different risk classes; however, due to complexity and the time limitation, the portfolio is assumed homogeneous throughout the article. All obligors are aggregated into one risk class and the only migration is from non-default to default. Hence the estimated asset and default correlations are assumed constant across all obligors in the portfolio.

Another factor to consider is the lack of knowledge about the portfolio. The only available information is that the analyzed portfolio comprises small and medium enterprises; beyond that there is no information about which country or industry the enterprises operate in. To be able to reflect the portfolio by the general state of the economy, the SME portfolio is assumed to consist of Swedish companies. This assumption is carried throughout the project, and the authors are aware that it might affect the results.

1.5 Data

In this article we will use internal default data on the SME portfolio, provided by Handelsbanken, to estimate the asset and default correlation. The data is confidential and is therefore anonymized in the article. In Table 1 the configuration of the data is presented; the portfolio comprises only eleven years of data, 2005-2015. This is a common problem for financial institutions, and hence it is of interest to predict longer time series for an enhanced analysis. One approach to describing the behavior of enterprises is through macro-economic factors (Investopedia, 2018b), and these are therefore taken into consideration in the prediction of new data. In Table 4 we present the selected macro-economic variables, and in Section 5.1 the modeling of the data prediction is explained.


1.5.1 SME Portfolio

By definition an SME portfolio is a portfolio consisting of small and medium enterprises with less than 50 million euros in annual sales. The SME portfolio provided by Handelsbanken comprises the number of counterparties, see Table 1, and the number of defaulted counterparties, see Table 2, at a yearly frequency over the period 2005-2015. The counterparties are divided into different risk classes depending on their creditworthiness. The AAA class contains the highest rated securities, which implies the highest creditworthiness, while D has the lowest creditworthiness. The higher the borrower's probability of default, the larger the bank's credit risk exposure.

Table 1: The SME portfolio, containing number of counterparties each year.

Risk class 2005 2006 · · · 2015

AAA • • · · · •

AA • • · · · •

A • • · · · •

BBB • • · · · •

BB • • · · · •

B • • · · · •

CCC • • · · · •

D • • · · · •

Total companies • • · · · •

Table 2: The SME portfolio, containing number of counterparties that defaulted each year.

Risk class 2005 2006 · · · 2015

AAA • • · · · •

AA • • · · · •

A • • · · · •

BBB • • · · · •

BB • • · · · •

B • • · · · •

CCC • • · · · •

D • • · · · •

Total defaulted companies • • · · · •


Although the SME portfolio is divided into different risk classes, as presented in Table 1 and Table 2, the portfolio is assumed homogeneous. This means that all obligors are aggregated into a single risk class and the only migration is from non-default to default, as shown in Table 3. This is the portfolio that will be analyzed throughout the article.

Table 3: The SME portfolio, containing number of total companies and defaulted companies each year.

2005 2006 · · · 2015

Total companies • • · · · •

Total defaulted companies • • · · · •

1.5.2 Macro-Economics

There is no revealed information about the enterprises in the SME portfolio, and hence it is not feasible to use company-specific information to explain the data set.

However, Handelsbanken proposes to use macro-economic data as the underlying explanation for the SME portfolio, and thus ten variables are selected. The macro-economic variables are presented in Table 4. They were selected based on availability and on their use in the stress test of the European Banking Authority, the authority that tests how well banks in Europe would withstand adverse economic scenarios.

The ten variables cover yearly observations over the period 1990-2015 and are freely available from NASDAQ, Ekonomifakta and Statistics Sweden.

Table 4: Macro-economic data from Sweden.

Variables Unit

Export SEK
Import SEK
GDP SEK
10Y Government Bond, (10Y) (%)
Consumer Price Index, (CPI) SEK
Price Base Index, (PBI) SEK
OMX30 SEK
Unemployment, (UR) (%)
Inflation-adjusted property index, (IAPI) SEK
TCW-index, (TCW) SEK

The development of exports and imports describes the country's relationships with other countries. These factors are sensitive to changes in currencies and the economy. Gross domestic product (GDP) is one of the most important economic indicators.


than placing the money in a savings account, although it yields a low return compared to stocks. The benefit of the product is the low risk, and this factor is an indicator of how the market is moving (SCB, 2018).

The consumer price index, CPI, is a measure of how product prices have developed. It describes the annual average price of a product compared to a product with a price of 100 SEK in 1980. The price base index, PBI, represents the cost of a basket of necessary products for one person. OMX30 is an index comprising the 30 most traded companies on the Stockholm stock exchange (NASDAQ, 2018). The inflation-adjusted property price index describes the country's property prices, tracking the price development of one- or two-family houses and terraced houses from a start value of 100 in 1986 (Ekonomifakta, 2018). The TCW-index is a clear way to see how the SEK has developed compared to a basket of currencies. A high value of the TCW-index indicates that the SEK is weak (Sveriges Riksbank, 2018).

1.6 Outline

This article emphasizes the estimation and analysis of asset and default correlation for an SME portfolio provided by Handelsbanken. To achieve this we apply statistical modeling based on the same underlying theory as the Basel risk weight regulation.

The disposition of the project is as follows. Section 2 covers how the regulations and modeling of credit risk under the Basel Accords have developed over time. Section 3 describes the mathematical theory behind the three models implemented to estimate asset and default correlation: Binomial Likelihood (BL), Large Portfolio Approximation (LPA) and Joint Default Probability (JDP). Section 4 explains the statistical methods applied in this article to achieve the objectives, as well as the data prediction models. Section 5 covers the implementation of the modeling along with the assumptions of the approaches. In Section 6 the results are presented, which are then analyzed and discussed in Section 7.


2 Credit Risk and Regulations

In this section we aim to introduce the reader to credit risk management as well as the development and content of the Basel Accords. In the determination of capital requirements a risk weight (RW) formula is used, in which asset correlation is one of the key drivers together with probability of default and loss given default, see Section 2.4. The risk weight formula for an SME portfolio is presented in Equation 3, and the dependence between asset correlation, probability of default and risk weight is described.

2.1 Basel Committee on Banking Supervision

The Basel Committee on Banking Supervision (BCBS) was founded at the end of 1974 in response to instability in the international banking markets.

Its main purpose was to enhance financial stability worldwide and to strengthen banks' supervision of their own business. The Basel Committee has since established new directives, called the Basel Accords, which are banking regulations. The purpose of the Basel Accords is to make sure that banks have sufficient capital to manage different types of negative economic impacts that could lead to unexpected losses.

The Basel Committee continually endeavors to adapt the regulations to the present economic environment. As of today three regulations have been established: the Basel I, Basel II and Basel III accords. Almost all countries with internationally operating banks follow these frameworks (BIS, 2018).

The Basel I accord was announced in 1988 and introduced the minimum capital requirement. The aim of the regulation was to prevent banks from defaulting due to credit risk exposure by requiring a capital buffer of 8% of risk weighted assets (RWA). In 2004 The New Capital Framework was implemented as Basel II and consisted of three pillars. The first pillar defined the minimum capital requirements as in the previous Basel Accord, expanded to cover operational risk and market risk as well. The second pillar consists of Supervisory Review and the third pillar of Market Discipline (BIS, 2018). Basel III enhanced the Basel II framework, and additional regulations are progressively being implemented at banks until 2019 (BIS, 2017).

Since the Basel Committee does not have authority in any country, the Basel framework has been incorporated into the different countries' legal systems as well as that of the European Union. Swedish banks therefore have the obligation to follow both the Swedish and the European Union's financial legislation. To control this, the Swedish banks frequently report to the Swedish financial supervisory authority, Finansinspektionen (BIS, 2018).


2.2 Internal Rating-Based System

When Basel II was implemented, banks were given the possibility to calculate their own credit risk using their own models. There are three levels for calculating the credit risk components:

1. Standardized Approach (SA)

2. Foundation Internal Rating-Based Approach (FIRB)

3. Advanced Internal Rating-Based Approach (AIRB)

Under the standardized approach banks have to follow Basel's suggested methods and models to calculate the credit risk. The foundation IRB approach gives banks the possibility to estimate PD with internal models. With the advanced IRB approach banks are, in addition, allowed to estimate LGD, exposure at default (EAD) and other parameters linked to RWA. Before banks are permitted to use internal estimates they need approval from the financial supervisors (BIS, 2017). The relation between the risk measures PD, EAD and LGD is given by:

EL = PD · EAD · LGD (1)

The formula displays how banks calculate the expected loss (EL), which is mostly covered by risk premiums and interest rates. However, there is also unexpected loss (UL), defined as losses exceeding the expected loss, as seen in Figure 1.

Figure 1: The line represents a loss distribution.

Unexpected loss is an important measure for banks since there is always a risk of unexpected events in the market which banks must be aware of (Coen, 2000). This is why banks and financial institutions are required to hold a buffer against unexpected losses.
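As a minimal numerical sketch of Equation 1, summed over a small book of loans (all figures below are invented for illustration; they are not taken from the thesis portfolio):

```python
# Expected loss per Equation 1: EL = PD * EAD * LGD, summed over a loan book.
# All figures are hypothetical illustration values, not thesis data.
loans = [
    {"pd": 0.02, "ead": 1_000_000, "lgd": 0.30},  # PD 2%, EAD 1 MSEK, LGD 30%
    {"pd": 0.05, "ead": 250_000, "lgd": 0.45},    # PD 5%, EAD 250 kSEK, LGD 45%
]
el = sum(loan["pd"] * loan["ead"] * loan["lgd"] for loan in loans)
print(round(el))  # total expected loss in SEK -> 11625
```

Note that EL is a pure expectation; the unexpected loss in Figure 1 depends on the shape of the whole loss distribution and cannot be obtained from Equation 1 alone.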

2.3 Probability of Default

Default is the failure to pay principal or interest on a loan or security. It occurs when a debtor is unable to fulfill the legal obligation of repayment (Investopedia, 2018c). When lenders decide whether to issue a loan they must consider the counterparty's risk of default. The PD is described as the likelihood that a repayment will not be made on time (Financial Times, n.d.).

2.4 Risk Weight

The risk weight (RW) is a key parameter in the calculation of RWA, which determines the minimum capital requirement a bank must hold when lending money.

In the calculation of RWA each individual asset within the different risk types is weighted, and the formula to determine the risk weighted assets is given by (BCBS, 2005):

RWA = 12.5 · RW · EAD (2)

where RW is the risk weight of the security expressed as a percentage of the exposure value and EAD is the exposure at default. Depending on the kind of securities assessed, the risk weight formula is modified. The formula to determine RW for securities such as enterprises is derived in Section 3.2 and is given by (The European Union's Official Paper, 2013):

RW = [LGD · Φ((Φ⁻¹(PD) + √ρ · Φ⁻¹(0.999)) / √(1 − ρ)) − LGD · PD] · ((1 + (M − 2.5) · b) / (1 − 1.5 · b)) · 12.5 · 1.06 (3)

where Φ(x) is the cumulative distribution function of a standard normally distributed random variable and Φ⁻¹(z) is the corresponding inverse cumulative distribution function. M is the maturity time of the instruments in the credit portfolio and b is the maturity factor, calculated by:

b = (0.11852 − 0.05478 · ln(PD))² (4)


Furthermore, ρ is the asset correlation, which is a function of PD with a minimum value of 0.12 and a maximum value of 0.24. It is calculated by:

ρ = 0.12 · (1 − e^(−50·PD)) / (1 − e^(−50)) + 0.24 · (1 − (1 − e^(−50·PD)) / (1 − e^(−50))) (5)
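Equation 5 is straightforward to transcribe into code. The sketch below (the probe PD values are arbitrary) illustrates the two properties stated above: ρ(PD) stays within the 0.12-0.24 band and decreases as PD grows:

```python
from math import exp

def rho(pd):
    """Regulatory asset correlation, Equation 5."""
    w = (1 - exp(-50 * pd)) / (1 - exp(-50))  # weight moving from 0 to 1 as PD grows
    return 0.12 * w + 0.24 * (1 - w)

# Probe a few PD levels: rho moves from the 0.24 ceiling down to the 0.12 floor.
for pd in [0.001, 0.01, 0.05, 0.10, 0.31, 0.99]:
    print(f"PD = {pd:5.3f}  rho = {rho(pd):.4f}")
```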

2.4.1 Asset Correlation

In 2001 the Basel Committee issued the second consultative document (CP2) for the calculation of RWA. In the proposal the asset correlation parameter was set to 0.2 for all firms. Criticism and extensive discussion of the AIRB approach arose, mainly because SMEs feared that higher capital costs for banks would lead to higher credit costs for these companies.

The AIRB approach was heavily questioned since in many cases it gave a much higher risk weight than the standardized approach. Enterprises with a very good credit rating received a very low risk weight, while companies with a rating worse than BB- received a significantly higher risk weight. Since SMEs rarely obtain a rating higher than BB-, banks would have had to hold more capital for such companies under CP2.

As a consequence of the criticism of the second consultative document, the Basel Committee proposed a refined version of the RWA function for the AIRB approach in 2004. Some changes were made for the final version of the AIRB approach; in particular, the asset correlation parameter was modified to a function ρ(PD), as presented in Equation 5.

2.4.2 Dependence of Asset Correlation, PD and RW

As shown in Figure 2, the asset correlation ρ(PD) given by Equation 5 declines with increasing values of PD. It equals 0.12 for the highest value of PD, while for the lowest value of PD it equals 0.24. The Basel II approach thus proposes that small firms have low asset correlation. This can be supported by two arguments. First, a large company can be thought of as a portfolio of small companies and therefore has better diversification and, consequently, lower PD.

This means that the idiosyncratic risk is smaller compared to the systematic risk, and the systematic risk is related to the asset correlation. Secondly, business sectors that are known to be very cyclical, and hence depend more on systematic risk, contain a majority of large firms. This indicates that asset correlation increases with firm size, i.e. large firms have high asset correlations (Henneke & Trück, 2006).


Figure 2: Dependence between ρ and PD with the formula retrieved from the regulations.

In Figure 3 the dependence between PD and RW is presented, obtained by applying the risk weight formula, Equation 3. The parameter LGD is set to 30%, the maturity time M is 1, and PD takes on values between 1% and 100%. The maturity factor b and the asset correlation vary with the value of PD and are determined by applying Equation 4 and Equation 5. Increasing values of PD up to 31% give an increasing value of RW, which can be explained by the fact that if an obligor is likely to default, the creditor must hold a larger buffer in order to protect itself. It is also shown that RW trends down for PD values over 31%. This is a drawback of the risk weight formula, and the article will only focus on the increasing part of the risk weight curve, since a loan to an enterprise with a PD value larger than 31% would be most unlikely.
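The hump shape described above can be checked numerically. The following sketch is a direct transcription of Equations 3-5 using Python's statistics.NormalDist (it is not code from the thesis); with LGD = 30% and M = 1 it scans PD on a 1% grid and locates the peak of RW:

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal: N.cdf is Phi, N.inv_cdf is Phi^-1

def rho(pd):
    """Regulatory asset correlation, Equation 5."""
    w = (1 - exp(-50 * pd)) / (1 - exp(-50))
    return 0.12 * w + 0.24 * (1 - w)

def risk_weight(pd, lgd=0.30, m=1.0):
    """Risk weight, Equation 3, with maturity factor b from Equation 4."""
    b = (0.11852 - 0.05478 * log(pd)) ** 2
    r = rho(pd)
    k = lgd * N.cdf((N.inv_cdf(pd) + sqrt(r) * N.inv_cdf(0.999)) / sqrt(1 - r)) - lgd * pd
    return k * (1 + (m - 2.5) * b) / (1 - 1.5 * b) * 12.5 * 1.06

grid = [i / 100 for i in range(1, 100)]     # PD from 1% to 99%
peak = max(grid, key=risk_weight)
print(f"RW is maximal at PD = {peak:.2f}")  # close to 0.31, as in Figure 3
```

Note that with M = 1 the maturity adjustment (1 + (M − 2.5)·b)/(1 − 1.5·b) reduces to 1, so the hump is driven entirely by the interplay between the Φ-term and the subtracted LGD·PD.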


Figure 3: Dependence between RW and PD calculated by the formula retrieved from the legal framework.


3 Mathematical Models of Credit Default

This chapter describes the mathematical theory of the core models applied in this article: the risk weight formula and the three models of asset and default correlation. Merton's model is a well-known one-factor model which is the foundation of the risk weight formula defined in Section 2.4, and in this chapter we present the derivation of Equation 3, see Section 3.3. Further, the two models for estimating asset correlation are derived from Merton's model, as described in Section 3.4, and lastly the joint default probability model for estimating default correlation is derived in Section 3.5.

3.1 Correlation

In order to fully understand the concepts of asset and default correlation, a brief review of the definition of correlation is needed. Let X and Y be random variables with expected values µX and µY and standard deviations σX and σY. The correlation of X and Y is defined by:

ρXY = E[(X − µX)(Y − µY)] / (σX · σY) (6)

Note that if X = Y then the correlation is ρXY = E[(X − µX)²]/σX² = 1, and similarly if X = −Y then the correlation is ρXY = −E[(X − µX)²]/σX² = −1. On the other hand, if X and Y are independent then ρXY = E[(X − µX)] · E[(Y − µY)] / (σX · σY) = 0. In other words, correlation is a measure of how much two variables depend on each other, with 1 being complete positive dependence, -1 being complete negative dependence and 0 being independence (Lucas, 1995).
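The limiting cases above can be verified numerically with simulated data (the samples below are generated standard normals, not thesis data), implementing Equation 6 directly:

```python
import random

def corr(a, b):
    """Sample correlation per Equation 6."""
    n = len(a)
    mu_a, mu_b = sum(a) / n, sum(b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    sd_a = (sum((x - mu_a) ** 2 for x in a) / n) ** 0.5
    sd_b = (sum((y - mu_b) ** 2 for y in b) / n) ** 0.5
    return cov / (sd_a * sd_b)

random.seed(7)
x = [random.gauss(0, 1) for _ in range(50_000)]
z = [random.gauss(0, 1) for _ in range(50_000)]    # independent of x

print(round(corr(x, x), 6))                # complete positive dependence: 1.0
print(round(corr(x, [-v for v in x]), 6))  # complete negative dependence: -1.0
print(abs(corr(x, z)) < 0.05)              # independent samples: near 0
```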

3.2 Merton’s Model

Merton's model is a commonly used model to describe the value of a firm, including the parameter asset correlation. It is an appropriate model for this thesis since the risk weight formula is based on it, as derived in the next Section 3.3. Another reason for its appropriateness is that this article analyzes defaults of loans; from the model we can derive default likelihood functions, as described in Section 3.4, and thus estimate asset correlations.

Conditionally independent credit risk models established from the factor-based approach are popular among risk managers since they are among the few models that can reflect a realistic interrelationship in the default motion and still remain tractable.

In credit risk models it is common to model default events as a discrete random variable Z that follows a Bernoulli law. Thereby Z takes the value 0 or 1, and here Z = 1 indicates default of a firm.

Merton's formula is a well-known one-factor model to describe the value of a firm, proposed by Robert C. Merton in 1974. The value of the assets of an obligor is driven by a common systematic risk factor Y and an obligor-specific risk factor ε_i that is independent across obligors and independent of Y. Both Y and ε_i are standard normally distributed and the asset value of firm i is given by:

$$V_i(T) = \sqrt{\rho}\, Y + \sqrt{1-\rho}\, \varepsilon_i \qquad (7)$$

Where V_i(T) is the value of the assets at a given maturity time T. Note that by definition V_i(T) is a standard normally distributed random variable, since the sum of two independent normally distributed random variables is also normally distributed and the coefficients are chosen such that E[V_i(T)] = 0 and E[V_i(T)²] = 1.

Furthermore, given the definition of correlation in Section 3.1, it is easy to see that the correlation of two asset values V_i and V_k is ρ_{V_i V_k} = ρ, while the correlation of one asset value V_i and the systematic risk factor Y is ρ_{V_i Y} = √ρ. Note that in this model the correlation of two asset values is the same for every pair of assets; because of this, ρ is simply called the asset correlation. Following Gordy (2010) we will often call √ρ a "factor loading".

In the setting of Merton's model a firm will default if V_i(T) falls below a given default threshold γ. If Z_i is the random variable that models the default of firm i, then P(Z_i = 1) = P(V_i(T) < γ). In other words, since V_i(T) is standard normally distributed, the probability of default is given by PD = P(V_i(T) < γ) = Φ(γ), where Φ is the cumulative distribution function of the standard normal distribution. If the systematic risk factor is known, the individual conditional default probability is given by:

$$p(y) = P[V_i(T) < \gamma \mid Y = y] = \Phi\!\left(\frac{\gamma - \sqrt{\rho}\, y}{\sqrt{1-\rho}}\right) \qquad (8)$$
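Equation 8 translates into a few lines of code. The sketch below is illustrative (PD, ρ and y are hypothetical inputs, not values from the thesis):

```python
from math import sqrt
from statistics import NormalDist

_N = NormalDist()

def conditional_pd(pd, rho, y):
    # Equation 8: p(y) = Phi((gamma - sqrt(rho)*y) / sqrt(1 - rho)),
    # with the default threshold gamma = Phi^{-1}(PD).
    gamma = _N.inv_cdf(pd)
    return _N.cdf((gamma - sqrt(rho) * y) / sqrt(1.0 - rho))
```

With ρ = 0 the conditional PD reduces to the unconditional PD, while an adverse systematic outcome (negative y) pushes it above PD, which is exactly the mechanism the risk weight formula stresses at the 99.9% level.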

3.3 Derivation of the Risk Weight Formula

Let L_i denote the loss of the bank if firm i defaults on its loan; then L_i = LGD · Z_i, where LGD is loss given default as usual. The expected loss for the bank is then given by E[L_i] = LGD · PD. However, the aim of the risk weight formulas in the Basel framework is that a bank should have enough capital to cover its losses in 99.9% of states of the economy. Let α denote the adverse quantile of Y such that the probability of an outcome of the systematic risk factor Y worse than α is 0.1%; in other words, P(Y < α) = 0.001. Compare this to Figure 1, depicting expected loss, unexpected loss and stress loss: a bank should hold enough capital to cover expected losses and normal unexpected losses. Conditional on this worse state of the economy, for a single loan the probability of default is:

$$P[V_i(T) < \gamma \mid Y = \alpha] = \Phi\!\left(\frac{\gamma - \sqrt{\rho}\, \alpha}{\sqrt{1-\rho}}\right) \qquad (9)$$

Since V_i(T) is standard normally distributed, from Equation 7 we get:

$$PD = P(V_i(T) < \gamma) = \Phi(\gamma) \qquad (10)$$

Hence the default threshold is given by γ = Φ⁻¹(PD). By the same reasoning we get α = Φ⁻¹(0.001), and using that the standard normal distribution is symmetric, α = −Φ⁻¹(0.999). Then the expected loss of one loan, given this bad state of the economy, is given by:

$$E[L_i \mid Y = \alpha] = LGD \cdot \Phi\!\left(\frac{\Phi^{-1}(PD) - \sqrt{\rho}\, \alpha}{\sqrt{1-\rho}}\right) \qquad (11)$$

Prior to June 2004 the risk weight formula was based on Equation 11. However, the formula was modified to only use unexpected loss, given by:

$$E[L_i \mid Y = \alpha] - LGD \cdot PD \qquad (12)$$

Furthermore, the risk weight was calibrated by multiplying the unexpected loss with a maturity adjustment, a function depending on the maturity time M and the maturity factor b, which resulted in the final Basel AIRB function to determine the regulatory capital (Henneke & Trück, 2006), as seen in Section 2.4:

$$RW = \left( LGD \cdot \Phi\!\left(\frac{\Phi^{-1}(PD) + \sqrt{\rho}\,\Phi^{-1}(0.999)}{\sqrt{1-\rho}}\right) - LGD \cdot PD \right) \cdot \frac{1 + (M - 2.5)\, b}{1 - 1.5\, b} \cdot 12.5 \cdot 1.06 \qquad (13)$$
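The whole chain from PD to RW can be sketched as follows. This is an illustration, not the thesis's implementation: Equations 4 and 5 are not reproduced in this chapter, so the standard Basel corporate forms of the maturity factor b and the regulatory asset correlation are assumed here (the SME firm-size adjustment is omitted).

```python
from math import exp, log, sqrt
from statistics import NormalDist

_N = NormalDist()

def regulatory_rho(pd):
    # Assumed Basel corporate asset correlation (the thesis's Equation 5,
    # without the firm-size adjustment for SMEs).
    k = (1 - exp(-50 * pd)) / (1 - exp(-50))
    return 0.12 * k + 0.24 * (1 - k)

def maturity_b(pd):
    # Assumed Basel maturity factor (the thesis's Equation 4).
    return (0.11852 - 0.05478 * log(pd)) ** 2

def risk_weight(pd, lgd, m):
    # Equation 13: unexpected loss at the 99.9% level times the maturity
    # adjustment, scaled by 12.5 and the calibration factor 1.06.
    rho, b = regulatory_rho(pd), maturity_b(pd)
    stressed_pd = _N.cdf((_N.inv_cdf(pd) + sqrt(rho) * _N.inv_cdf(0.999))
                         / sqrt(1.0 - rho))
    ul = lgd * stressed_pd - lgd * pd
    return ul * (1 + (m - 2.5) * b) / (1 - 1.5 * b) * 12.5 * 1.06
```

At M = 1 the maturity adjustment equals one, and the sketch reproduces the behaviour in Figure 3: RW increases with PD up to roughly 31% and then trends down.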

3.4 Asset Correlation Models

In this Section we describe the two default likelihood models, Binomial Likelihood and Large Portfolio Approximation, that are derived from Merton's model. The models are based on different constraints and they are applied in this thesis to estimate the asset correlation.


3.4.1 Binomial Likelihood

In Equation 7 we studied the asset value of a single obligor. Recall that the value V_i(T) of firm i depends on the systematic risk factor Y and a firm-specific random variable ε_i. Let us denote the number of obligors as n and the number of defaulted obligors as d for each year t. Each obligor's probability of default is modeled as in Equation 8, where Z_i follows a Bernoulli law for each company i, and it is assumed that the default threshold γ and the asset correlation ρ are constant across all obligors in the portfolio and over time. Then, conditional on the systematic risk factor Y = y, the number of defaults d for the portfolio is binomially distributed:

$$P\!\left[\sum_{i=1}^{n} Z_i = d \,\Big|\, Y = y\right] = \binom{n}{d}\, p(y)^d \,\bigl(1 - p(y)\bigr)^{n-d} \qquad (14)$$

Where p(y) is given by Equation 8. The probability density function of the number of defaults is thus the expected value of the probability of d defaults conditional on the systematic risk, by the law of iterated expectations. This implies the following likelihood function for the parameter ρ:

$$L(\rho \mid d, \gamma, n) = \int_{\mathbb{R}} \binom{n}{d}\, \Phi\!\left(\frac{\gamma - \sqrt{\rho}\, y}{\sqrt{1-\rho}}\right)^{\!d} \left(1 - \Phi\!\left(\frac{\gamma - \sqrt{\rho}\, y}{\sqrt{1-\rho}}\right)\right)^{\!n-d} d\Phi(y) \qquad (15)$$

The parameters in the Binomial Likelihood (BL) function can be estimated by maximizing the likelihood (Gordy and Heitfield, 2010). In this article all parameters except the asset correlation are known, and hence Equation 15 will be applied as one of the two models to estimate the asset correlation.
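The maximization of Equation 15 can be sketched numerically as below (illustrative, using SciPy; the yearly default and obligor counts fed in are hypothetical, and the threshold γ is fixed from the pooled average default rate, the convention used later in Section 5.2.1):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.stats import binom, norm

def bl_log_likelihood(rho, defaults, obligors, gamma):
    # Equation 15: for each year, integrate the binomial probability of d
    # defaults out of n over the standard normal systematic factor y.
    ll = 0.0
    for d, n in zip(defaults, obligors):
        def integrand(y, d=d, n=n):
            p = norm.cdf((gamma - np.sqrt(rho) * y) / np.sqrt(1.0 - rho))
            return binom.pmf(d, n, p) * norm.pdf(y)
        ll += np.log(quad(integrand, -8.0, 8.0)[0])
    return ll

def fit_bl_rho(defaults, obligors):
    # Fix the threshold from the pooled average default rate, then maximize
    # the log-likelihood over the asset correlation rho alone.
    gamma = norm.ppf(np.sum(defaults) / np.sum(obligors))
    res = minimize_scalar(lambda r: -bl_log_likelihood(r, defaults, obligors, gamma),
                          bounds=(1e-4, 0.99), method="bounded")
    return res.x
```

On data simulated from the model itself, the fitted ρ lands near the true value, which is also the idea behind the Monte-Carlo validation in Chapter 5.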

3.4.2 Large Portfolio Approximation

This model is based on the law of large numbers, which implies that the actual default rate of a very large portfolio equals the individual probability of default of the obligors in the portfolio. From Merton's model the likelihood function of defaults can be derived, as seen in Equation 15. Conditional on the systematic risk factor y, the defaults happen independently of each other. Therefore, in a very large portfolio the fraction of defaulted obligors X equals the individual probability of default, ensured by the law of large numbers:

$$P[X = p(y) \mid Y = y] = 1 \qquad (16)$$

Where p(y) is given by Equation 8. If Y is known, the fraction of obligors that will default can be predicted with certainty. Although Y is unknown, the probability distribution function of X can be obtained by the law of iterated expectations. The probability of the loss fraction X taking a value of at most x is defined as:


$$P(X \le x) = \int_{\mathbb{R}} P[X = p(y) \le x \mid Y = y]\, \varphi(y)\, dy \qquad (17)$$

Where φ is the probability density function of the standard normal distribution. Since p(y) is decreasing in y, the event p(y) ≤ x is equivalent to y ≥ −y*, where the lower bound −y* satisfies p(−y*) = x, hence:

$$P(X \le x) = \int_{-y^{*}}^{\infty} \varphi(y)\, dy = \Phi(y^{*}) \qquad (18)$$

Recalling the definition of p(y) in Equation 8 we get:

$$y^{*} = \frac{1}{\sqrt{\rho}}\left(\sqrt{1-\rho}\,\Phi^{-1}(x) - \gamma\right) \qquad (19)$$

By combining Equation 18 and Equation 19 we can derive the probability distribution function of the loss fraction X:

$$F(x) := P(X \le x) = \Phi(y^{*}) = \Phi\!\left(\frac{1}{\sqrt{\rho}}\left(\sqrt{1-\rho}\,\Phi^{-1}(x) - \gamma\right)\right) \qquad (20)$$

And hence, taking the derivative with respect to x, the default probability density function is derived. The function depends on x and the parameters ρ and γ. The default probability density function for the yearly default fraction x is:

$$f(x) = \sqrt{\frac{1-\rho}{\rho}} \cdot \exp\!\left\{ \frac{1}{2}\left(\Phi^{-1}(x)\right)^{2} - \frac{1}{2\rho}\left(\gamma - \sqrt{1-\rho}\,\Phi^{-1}(x)\right)^{2} \right\} \qquad (21)$$

Which then can be written as the likelihood function:

$$L(\rho \mid x, \gamma) = \sqrt{\frac{1-\rho}{\rho}} \cdot \exp\!\left\{ \frac{1}{2}\left(\Phi^{-1}(x)\right)^{2} - \frac{1}{2\rho}\left(\gamma - \sqrt{1-\rho}\,\Phi^{-1}(x)\right)^{2} \right\} \qquad (22)$$

The parameters in the Large Portfolio Approximation (LPA) can be estimated by maximizing the likelihood (Schönbucher, 2000). Here too, all parameters except the asset correlation are known, and hence Equation 22 will be applied as the second model to estimate the asset correlation. The two models BL and LPA, presented in Equation 15 and Equation 22, will be the models for estimating asset correlation on the SME portfolio, described in Chapter 5.
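Correspondingly, Equation 22 can be maximized over the observed yearly default rates; the sketch below is illustrative (the rates in the check are synthetic draws from the model itself, and γ is again fixed from the average default rate):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def lpa_log_likelihood(rho, rates, gamma):
    # Equation 22 (the large-portfolio density), summed in logs over the
    # yearly default fractions x.
    u = norm.ppf(np.asarray(rates))
    ll = (0.5 * np.log((1 - rho) / rho)
          + 0.5 * u ** 2
          - (gamma - np.sqrt(1 - rho) * u) ** 2 / (2 * rho))
    return ll.sum()

def fit_lpa_rho(rates):
    gamma = norm.ppf(np.mean(rates))  # threshold from the average default rate
    res = minimize_scalar(lambda r: -lpa_log_likelihood(r, rates, gamma),
                          bounds=(1e-4, 0.99), method="bounded")
    return res.x
```

Unlike the BL model, only default rates are needed here, which is why the LPA is convenient for the predicted data sets where obligor counts are not directly observed.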


3.5 Default Correlation Model

This Section describes the model for estimating default correlation, which is based on joint default probabilities. In the article we compare asset correlation with default correlation, and since the model is well known in portfolio credit risk modeling it is selected for the estimation of default correlation.

3.5.1 Joint Default Probability

Default correlation is built on the approach of estimating bivariate transition probabilities from given default rates. It is defined by discrete events of survival or non-survival for two obligors over one year. The default correlation ρ_d is then calculated based on the standard definition of correlation between two random variables as described in Equation 6 (Lucas, 1995):

$$\rho_d = \frac{\bar{p}_d - p_d\, p_d}{\sqrt{p_d(1-p_d)}\,\sqrt{p_d(1-p_d)}} \qquad (23)$$

Where p̄_d is the average joint probability of a pair of obligors both migrating to default over the entire time period, and p_d equals the average default rate over the entire time period. Let D be the number of obligors migrating to default and N the total number of obligors in a given year; then the joint probability for that year is calculated by:

$$D^2 / N^2 \qquad (24)$$

In practice, default data is given on a yearly basis and joint probabilities are calculated with the same frequency. When the joint probabilities are determined they are aggregated into an average joint probability for the observation period, under the assumption that each year is an independent data set (Servigny & Renault, 2002).

The formula to aggregate the probabilities to calculate the average joint probability is given by:

$$\bar{p}_d = \sum_{t=1}^{n} \frac{N_t}{\sum_{s=1}^{n} N_s} \cdot \frac{(D_t)^2}{(N_t)^2} \qquad (25)$$

Where each year is weighted by its relative size: the formula weights the number of counterparties each year against the total number of counterparties over the entire time period, multiplies the weight with the joint default probability of that year, and finally sums the terms. When the average joint probability has been determined, the default correlation is calculated by applying Equation 23. In this derivation of default correlation only one risk class is considered, since the analyzed SME portfolio is assumed homogeneous, but one should note that the model is applicable to portfolios with several risk classes as well (Servigny & Renault, 2002). We will apply Equation 23 as the model for estimating default correlation in this article, as described in Chapter 5.
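Equations 23-25 combine into a short computation. The sketch below (illustrative, not the thesis's implementation) assumes one homogeneous risk class and yearly default and obligor counts:

```python
import numpy as np

def default_correlation(defaults, obligors):
    # Equation 25: size-weighted average of the yearly joint probabilities
    # (D_t / N_t)^2, followed by Equation 23 for a single risk class.
    D = np.asarray(defaults, dtype=float)
    N = np.asarray(obligors, dtype=float)
    p = D.sum() / N.sum()                            # average default rate
    p_joint = np.sum((N / N.sum()) * (D / N) ** 2)   # average joint probability
    return (p_joint - p * p) / (p * (1.0 - p))
```

A constant yearly default rate gives ρ_d = 0, while year-to-year variation in the rate gives a positive default correlation.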

The three models described in Section 3.4 and 3.5 will be applied as the models for estimating asset and default correlation in this article. To achieve an optimal estimation of asset and default correlation the models are combined with statistical methods which are described in the next chapter.


4 Statistical Methods

In this chapter we present the statistical methods applied in this article, which complement the models described in the previous chapter. As described in Section 1.5, default data is scarce, and thus we apply Principal Component Analysis and Regression Analysis to predict longer time series, presented in Section 4.1 and 4.2.

Maximum Likelihood Estimation, described in Section 4.3, is applied to approximate asset correlation in Equation 15 and 22. Further, we execute Monte-Carlo Simulations for two purposes: first for validation of the two models, and secondly to evaluate the difference between asset and default correlation; the method is explained in Section 4.4. The chapter also includes three validation tests which are presented in Section 4.5.

4.1 Principal Component Analysis

Principal Component Analysis (PCA) is a dimension-reducing method used to obtain a lower-dimensional representation of a data set. It finds a simplified representation of the data by identifying the directions along which the data varies as much as possible. In this article we apply PCA to simplify the modeling of the ten macro-economic variables described in Section 1.5. By conducting the method it becomes easier to estimate new data sets with the multivariate regression analysis described in Section 4.2.

To explain the principal components (PC), let a data set contain n observations of a set of p features, X_1, X_2, ..., X_p. Not all p dimensions are of equal interest, which is why the first PC, Z_1, is the normalized linear combination of X_1, X_2, ..., X_p with the largest variance:

$$Z_1 = \phi_{11} X_1 + \phi_{21} X_2 + \cdots + \phi_{p1} X_p \qquad (26)$$

The coefficients φ_11, φ_21, ..., φ_p1 are referred to as the loadings of the first principal component, i.e. the elements of an eigenvector of the matrix XᵀX, the eigenvectors being ordered by decreasing eigenvalue and the eigenvalues being the variances of the components. Together the loadings form the loading vector of the first principal component, φ_1 = (φ_11, φ_21, ..., φ_p1)ᵀ. By normalizing the loadings we avoid arbitrarily large loadings, which in turn could lead to arbitrarily large variance, hence:

$$\sum_{j=1}^{p} \phi_{j1}^{2} = 1 \qquad (27)$$


The values z_11, ..., z_n1 are referred to as the scores of the first principal component:

$$z_{i1} = \phi_{11}\, x_{i1} + \phi_{21}\, x_{i2} + \cdots + \phi_{p1}\, x_{ip}, \qquad i = 1, \ldots, n \qquad (28)$$

After the first principal component Z_1 is estimated, the second Z_2 can be estimated. The second principal component is uncorrelated with the first and is also the linear combination of X_1, X_2, ..., X_p that has the largest variance among all such linear combinations. This process is repeated until all principal components of the data set are estimated, at most min(n − 1, p) in total. However, the first few principal components are the most representative of the data (James et al., 2013).
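The construction in Equations 26-28 can be sketched via an eigendecomposition of XᵀX on standardized data (an illustration with synthetic data, not the thesis's implementation; a library PCA routine would do the same job):

```python
import numpy as np

def principal_components(X):
    # Standardize the columns, then take the eigenvectors of X^T X as loading
    # vectors (Equations 26-27) ordered by decreasing eigenvalue; the scores
    # (Equation 28) are the projections of the data onto the loadings.
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    eigval, eigvec = np.linalg.eigh(X.T @ X)
    order = np.argsort(eigval)[::-1]
    eigval, eigvec = eigval[order], eigvec[:, order]
    return eigvec, X @ eigvec, eigval / eigval.sum()
```

The returned variance ratios correspond to the "representation" column of Table 7, and the scores of consecutive components are uncorrelated by construction.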

4.2 Multivariate Regression Analysis & Ordinary Least Squares

To predict longer time series of default data we apply Multivariate Regression Analysis. It is a model for describing the dependence between two or more variables, with the principle of constructing a model that best fits the observed data. The method is commonly used for forecasting and the linear regression model is given by:

$$Y_t = \alpha + X_t\, \beta + \varepsilon_t, \qquad t = 1, \ldots, T \qquad (29)$$

Where Y_t is the response to the forecaster variables X_t at time t. In this article X_t will represent macro-economic variables as described in Section 1.5, both in the form of principal components obtained from Equation 26 and in the form of individual macro-economic variables, as described in Section 5.1.2. The parameter β contains the slope coefficients, α is the intercept and ε_t is the error term at time t. The variables have the form:

$$Y = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}, \quad X = \begin{pmatrix} x_{1,1} & \cdots & x_{1,k} \\ \vdots & \ddots & \vdots \\ x_{n,1} & \cdots & x_{n,k} \end{pmatrix}, \quad \beta = \begin{pmatrix} \alpha \\ \beta_1 \\ \vdots \\ \beta_{k-1} \end{pmatrix}, \quad \varepsilon = \begin{pmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_n \end{pmatrix} \qquad (30)$$

Where k is the number of macro-economic factors and n is the number of observations. The coefficients α and β are estimated by the ordinary least squares (OLS) method. For the OLS calculation a new matrix X* is first formed by adding a column of ones to X:

$$X^{*} = \begin{pmatrix} 1 & x_{1,1} & \cdots & x_{1,k} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n,1} & \cdots & x_{n,k} \end{pmatrix} \qquad (31)$$

With the new matrix X* the OLS estimate is given by the formula (McNeil et al., 2005):

$$\begin{pmatrix} \hat{\alpha} \\ \hat{\beta} \end{pmatrix} = \left( X^{*T} X^{*} \right)^{-1} X^{*T}\, Y \qquad (32)$$
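Equation 32 in code (a sketch; the regressors and response below are synthetic, and the column of ones corresponds to the matrix X* with the intercept):

```python
import numpy as np

def ols(X, y):
    # Equation 32: prepend a column of ones to X and solve the normal
    # equations for the intercept alpha and the slope vector beta.
    Xs = np.column_stack([np.ones(len(y)), X])
    coef = np.linalg.solve(Xs.T @ Xs, Xs.T @ y)
    return coef[0], coef[1:]  # alpha, beta
```

With noiseless data the exact coefficients are recovered; with an error term, the OLS estimates minimize the sum of squared residuals.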

4.3 Maximum Likelihood Estimation

Maximum Likelihood Estimation (MLE) is a general method for parametric estimation. The main idea behind the method is to find the parameters of a statistical model that maximize a likelihood function given a set of observations. In this article we will maximize the two likelihood models in Equation 15 and 22 to estimate the asset correlation, assigned the parameter ρ.

Consider a random vector X = (X_1, X_2, ..., X_n) of iid components, describing a set of data where n is the number of observations, with joint probability density function f_X(x; θ) indexed by a parameter vector θ = (θ_1, θ_2, ..., θ_p) of p parameters, for some unknown value of θ.

Given the data, the likelihood function for the unknown parameters θ is L(θ; X) = f_X(x; θ), and the maximum likelihood estimator θ̂ is the value that maximizes the likelihood function, or equivalently the log-likelihood function l(θ; X) = ln L(θ; X). For large n the estimated parameters are expected to be close to the true value of θ. For the vector X with univariate density f the log-likelihood function is given by (McNeil et al., 2005):

$$\ln L(\theta; X) = \ln \prod_{i=1}^{n} f(X_i; \theta) = \sum_{i=1}^{n} \ln L(\theta; X_i) \qquad (33)$$

The implementation of Equation 33 is described in Section 5.2 where we apply Maximum Likelihood Estimation to approximate the asset correlation on the SME portfolio.

4.4 Monte-Carlo Simulation

Monte-Carlo Simulation (MCS) is the general name for mathematical algorithms that are based on random numbers. In this paper we will execute MCS to perform a validation of the two likelihood models in Equation 15 and 22. It will also be utilized for evaluating the difference between the parameters default and asset correlation, as described in Section 5.4.

Let X be a random vector, X = (X_1, ..., X_n), with density function f(x_1, ..., x_n), and let g(x) be a given function. Suppose the objective is to estimate the expected value of the function, calculated by:

$$E[g(X)] = \int\!\!\int \cdots \int g(x_1, \ldots, x_n)\, f(x_1, \ldots, x_n)\, dx_1 \cdots dx_n \qquad (34)$$

To estimate E[g(X)], a random vector X⁽¹⁾ is generated with joint density function f, and Y⁽¹⁾ = g(X⁽¹⁾) is calculated. The procedure is repeated until r iid random variables Y⁽ⁱ⁾ = g(X⁽ⁱ⁾), i = 1, ..., r, have been generated. By the strong law of large numbers (SLLN):

$$\lim_{r \to \infty} \frac{Y^{(1)} + \cdots + Y^{(r)}}{r} = E[Y^{(i)}] = E[g(X)] \qquad (35)$$

Hence the expected value of g(X) can be estimated by the mean value of the generated Y⁽¹⁾, ..., Y⁽ʳ⁾ (Ross, 2010).
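A minimal sketch of the estimator in Equation 35 for a univariate standard normal variable (illustrative; the function g and the number of draws are arbitrary choices):

```python
import numpy as np

def mc_expectation(g, r=200_000, seed=0):
    # Equation 35: estimate E[g(Y)] for standard normal Y by the sample mean
    # of g over r independent draws.
    rng = np.random.default_rng(seed)
    return g(rng.normal(size=r)).mean()
```

For g(y) = y² the estimate approaches Var(Y) = 1 as r grows, with the usual Monte-Carlo error of order 1/√r.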

4.5 Validation Tests

To enhance the credibility of the predicted default data, described in Section 5.1, we apply three different validation measures. To test stationarity of the macro-economic variables the Augmented Dickey-Fuller test is applied, and to verify the predicted default data we use the Mean Absolute Error and the Root Mean Square Error.

4.5.1 Augmented Dickey-Fuller Test

The Augmented Dickey-Fuller test examines whether a data set has a trend or not, and in this article it is applied to evaluate whether the macro-economic variables in Table 4 are stationary. The reason is that in the execution of the regression analysis, described in Section 4.2, it is important to use stationary variables, i.e. data with no trend, in order to achieve the most accurate results. The test takes as null hypothesis that each variable has a unit root, and the test equation is given by:

$$\Delta y_t = \alpha + \delta t + \beta y_{t-1} + c_1 \Delta y_{t-1} + \cdots + c_p \Delta y_{t-p} + \varepsilon_t \qquad (36)$$

Where α is a constant, δ is the coefficient on the time trend, β is the coefficient on y_{t−1} and p is the lag order of the autoregressive process. If the regression applied to the data gives β = 0, the data follows a random walk with a trend, i.e. the data has a unit root (Dickey and Fuller, 1979). Contrary to this, rejecting the unit-root hypothesis implies that the data is stationary; in Table 5 this rejection is reported as β = 1.


4.5.2 Mean Absolute Error

The Mean Absolute Error (MAE) measures the spread between predicted data ŷ and actual data y by taking the average of the absolute deviations over the n observations. When estimating longer time series of default data, as we do in this paper by conducting the regression analysis described in Section 4.2, it is of high importance to verify that the data is realistic. Hence MAE is applied, and the formula is given by (Chai & Draxler, 2014):

$$MAE = \frac{1}{n} \sum_{j=1}^{n} \left| y_j - \hat{y}_j \right| \qquad (37)$$

4.5.3 Root Mean Square Error

As described in the previous Section 4.5.2, we have a corresponding method to measure the spread between predicted data ŷ and actual data y, the Root Mean Square Error (RMSE). This test is also applied to the predicted time series of default data in this article. The formula measures the spread by taking the square root of the mean of the squared discrepancies over all n data points, and is given by (Chai & Draxler, 2014):

$$RMSE = \sqrt{\frac{1}{n} \sum_{j=1}^{n} \left( y_j - \hat{y}_j \right)^2} \qquad (38)$$
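Equations 37 and 38 implemented directly (a sketch; the data points in the checks are hypothetical):

```python
import numpy as np

def mae(y, y_hat):
    # Equation 37: mean absolute error between actual and predicted data.
    return np.mean(np.abs(np.asarray(y, dtype=float) - np.asarray(y_hat, dtype=float)))

def rmse(y, y_hat):
    # Equation 38: root mean square error between actual and predicted data.
    return np.sqrt(np.mean((np.asarray(y, dtype=float) - np.asarray(y_hat, dtype=float)) ** 2))
```

RMSE weights large deviations more heavily than MAE, so RMSE ≥ MAE always holds; comparing the two therefore also indicates whether the prediction errors are evenly spread or dominated by a few outliers.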


5 Method

The aim of this chapter is to explain the model implementation in this thesis. Before the asset and default correlation models are implemented we predict new time series of data, and the approach for this is explained. When the data is validated, asset and default correlations are estimated, and this chapter describes the assumptions made in the implementation of the three approaches. As a supporting argument for the difference between asset and default correlation we execute a Monte-Carlo Simulation to estimate default correlation. Lastly, in order to evaluate whether the selected models are credible, a validation is performed for the two likelihood models.

5.1 Predicting Data

We know from Section 1.5 that the historical default data provided by Handelsbanken is scarce and thus it is of interest to predict longer time series. Longer time series imply more accurate results in the estimation of correlation (Gordy & Heitfield, 2010). For the data prediction we conduct a Multivariate Regression Analysis, as explained in Section 4.2, using historical default rates of the SME portfolio. Before the regression model is implemented it is of high importance to select relevant underlying variables, and thus the ten macro-economic variables presented in Table 4 in Section 1.5.2 are chosen for this purpose.

Before the new data sets are predicted we must ensure that the macro-economic variables are stationary. Non-stationary data is unpredictable and cannot be forecasted, since results obtained from such data may indicate a relationship between two variables where none actually exists (Investopedia, 2018d). In Table 4 the macro-economic variables are presented in different units; an adequate approach to retrieve stationary data is to transform them to percentage differences, calculated as (P_t − P_{t−1})/P_{t−1}, where P_t represents the value of the variable at time t and P_{t−1} the value with one lag.

We transform the ten macro-economic variables to percentage differences and apply the Augmented Dickey-Fuller test, described in Section 4.5.1, to test stationarity. As expected, the test confirms that the variables are stationary, see Table 5, where β = 1 indicates a stationary variable.


Table 5: Macro-economic variables tested with the Augmented Dickey-Fuller test.

Variables β Stationary

Export 1 Yes

Import 1 Yes

GDP 1 Yes

10Y Government Bond, (10Y) 1 Yes

Consumer Price Index, (CPI) 1 Yes

Price Base Index, (PBI) 1 Yes

OMX30 1 Yes

Unemployment, (UP) 1 Yes

Inflation-adjusted property index, (IAPI) 1 Yes

TCW-index, (TCW) 1 Yes

5.1.1 PCA

To simplify the management of the ten macro-economic variables, a Principal Component Analysis is conducted as described in Section 4.1. Before the implementation of PCA the macro-economic data is normalized, since the variables are not on consistent scales, by using the normalization formula:

$$Z = \frac{Y - U}{\sigma} \qquad (39)$$

Where Y is the macro-economic data, U is the mean value of each macro-economic variable and σ is the standard deviation of each variable. In Table 6 the normalized data of the ten macro-economic variables is presented.


Table 6: Normalized macro-economic data by applying Equation 39.

Year Export Import GDP 10Y CPI PBI OMX30 UP IAPI TCW

1990 -0.37 -0.14 2.07 3.20 2.18 0.88 1.00 0.09 0.34 0.18
1991 -0.77 -1.42 0.99 2.77 3.11 -1.61 -0.09 2.39 -0.64 -0.15
1992 -0.97 -0.98 -1.50 0.10 1.33 0.08 -0.52 2.68 -1.66 -0.39
1993 1.32 0.39 -1.41 0.99 0.11 -0.08 0.42 2.05 -3.19 3.42
1994 1.53 1.46 0.82 0.06 0.23 1.55 -0.91 -0.12 -0.33 -0.00
1995 1.87 1.22 1.25 0.19 -0.19 -0.36 2.21 -0.53 -0.58 -0.05
1996 -0.92 -0.96 -0.65 -0.58 -0.20 0.27 -0.71 0.00 -0.70 -1.60
1997 0.92 1.03 0.03 -0.57 -0.73 0.86 -0.45 -0.17 0.30 0.42
1998 0.10 0.59 0.21 -0.80 -0.73 0.70 -0.38 -0.96 0.63 0.17
1999 -0.04 0.02 0.38 -0.60 -0.86 0.30 -0.86 -0.94 0.89 0.01
2000 1.16 1.54 0.68 -0.38 -0.60 2.08 1.91 -0.94 0.87 -0.22
2001 -0.27 -0.56 -0.11 0.16 -0.47 -1.00 -0.45 -0.80 1.04 1.28
2002 -1.03 -1.01 -0.25 0.04 0.41 -1.24 0.46 -0.22 -0.37 -0.47
2003 -0.72 -0.54 -0.08 -0.02 0.00 -1.93 -0.20 0.13 0.21 -0.91
2004 0.67 0.28 0.11 -0.61 -0.00 0.49 0.14 0.22 0.67 -0.40
2005 0.45 0.97 -0.27 -0.60 -0.74 0.14 -0.43 -0.17 0.66 0.10
2006 0.82 0.76 0.76 -0.24 -0.50 0.63 -0.29 -0.57 1.35 -0.31
2007 0.20 0.54 0.69 0.07 -0.15 0.32 0.80 -0.84 0.40 -0.46
2008 -0.09 0.17 -0.59 0.55 -0.04 -0.81 0.35 -0.36 0.35 0.04
2009 -2.66 -2.64 -2.60 -0.88 1.20 -1.79 -0.94 1.32 -0.90 1.46
2010 0.51 0.73 0.92 -0.28 -1.30 1.02 0.91 -0.14 0.90 -1.44
2011 0.12 0.14 -0.19 0.23 -0.42 0.37 0.09 -0.69 -0.25 -1.03
2012 -0.91 -0.94 -1.29 -0.42 0.45 -1.04 -1.54 -0.24 -1.22 -0.38
2013 -1.13 -1.11 -0.74 -0.78 -0.32 0.02 0.39 -0.24 -0.07 -0.68
2014 0.04 0.34 -0.00 -0.83 -0.97 0.18 1.32 -0.34 0.38 0.59
2015 0.18 0.06 0.79 -0.78 -0.75 -0.06 -2.22 -0.56 0.88 0.85

After the data is standardized as presented in Table 6, a principal component analysis is performed. Since we use ten macro-economic variables, the PCA returns ten principal components (PC), each associated with the share of the variation in the macro-economic data that it represents. This is presented in Table 7, where we can see that the first principal component represents the most and the tenth the least. Handelsbanken suggests the use of two PCs as underlying variables for the data prediction, and thus PC1 and PC2 are selected since they represent the macro-economic data the most, with 41.64% and 21.20% respectively.


Table 7: The PC’s representation of the ten macro-economic variables.

PC Representation
PC1 41.64%

PC2 21.20%

PC3 14.90%

PC4 7.82%

PC5 6.72%

PC6 3.45%

PC7 2.21%

PC8 0.91%

PC9 0.87%

PC10 0.23%

5.1.2 Regression Analysis

The principal components together with the historical default rates are used in the implementation of the Multivariate Regression Analysis. The coefficients α, β_1 and β_2 are estimated based on the historical data, and the error term ε_t is assumed to be zero. To achieve coefficients with preferably small error margins the OLS method is applied, see Section 4.2. The estimated coefficients together with the principal components are used to predict new time series of default data for 2005-2015, 1990-2004 and 1991-1994. The Multivariate Regression Analysis is then repeated two more times, now using individual macro-economic variables instead of principal components to predict the data sets: one approach selects GDP and unemployment as underlying variables, while another selects GDP and OMX30, as proposed by Handelsbanken. The data is predicted for the same time periods.

Although default data for the time period 2005-2015 is estimated, we only utilize it as a validation against the historical data. The time period 1990-2004 is combined with the historical data in order to obtain the data set 1990-2015. The default data for 1991-1994 is estimated to evaluate how the correlations are affected during a financial crisis and when only a few observations are used. Thus, in this article we analyze seven different data sets, described in Table 8, to estimate asset and default correlation. We refer to the predicted data sets as "Data 1", "Data 2" and "Data 3" depending on the two underlying variables utilized in the regression analysis, and each data set is assumed homogeneous.


Table 8: The different data sets analyzed in this project.

Data sets Macro-variables Time period

Historical Data - 2005-2015

Data 1 - Full set PC1 & PC2 1990-2015
Data 1 - Extreme PC1 & PC2 1991-1994
Data 2 - Full set GDP & Unemployment 1990-2015
Data 2 - Extreme GDP & Unemployment 1991-1994
Data 3 - Full set GDP & OMX30 1990-2015
Data 3 - Extreme GDP & OMX30 1991-1994

To validate the predicted data sets for 2005-2015, they are compared to the historical default data. The spread between the different trajectories is measured by the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE), as described in Section 4.5. The trajectories are also evaluated graphically, and for further validation it is of interest to interpret how the trajectories move over a longer time series. Hence the "Full" data sets are graphically illustrated over the time period 1990-2015 and compared to the number of bankruptcies among Swedish enterprises over the same time period, see Figure 6 in Section 6.1.

5.2 Estimating Asset Correlation

In this article we estimate asset correlation on the SME portfolio by fitting the default data to two likelihood functions with different constraints. The models are derived from Merton's formula and presented in Equation 15 and 22, and by applying Maximum Likelihood Estimation we can approximate the asset correlation.

5.2.1 Binomial Likelihood

In this approach the Binomial Likelihood (BL) function is applied. For this model the number of defaults and the number of obligors each year must be known in order to fit the likelihood function, as explained in Section 3.4.1. In the SME portfolio provided by Handelsbanken this information is available, whereas for the predicted data sets from the regression analysis we only retrieve default rates. Therefore, the predicted data sets are transformed into yearly numbers of obligors and defaults.

The likelihood function is an integral over the real line involving the default threshold γ and the asset correlation ρ. The default threshold γ is determined from the average default rate, and the starting value of the factor loading is set to √ρ = 0.1. The BL model is fitted by maximum likelihood estimation to approximate the asset correlation ρ for all seven data sets.
