
UPPSALA UNIVERSITY, DEPARTMENT OF STATISTICS

Type 1 error rate and significance levels when using GARCH-type models

Authors: Ellinor Gyldberg & Henrik Bark
Supervisor: Johan Lyhagen

1/31/2019

The purpose of this thesis is to test whether the probability of falsely rejecting a true null hypothesis of a model intercept being equal to zero is consistent with the chosen significance level when modelling the variance of the error term using GARCH (1,1), TGARCH (1,1) or IGARCH (1,1) models. We test this by estimating “Jensen’s alpha” to evaluate alpha trading, using a Monte Carlo simulation based on historical data from the Standard & Poor’s 500 Index and stocks in the Dow Jones Industrial Average Index. We evaluate over simulated daily data ranging over periods of 3 months, 6 months, and 1 year. Our results indicate that the GARCH and IGARCH consistently reject a true null hypothesis less often than the selected 1%, 5%, or 10%, whereas the TGARCH consistently rejects a true null more often than the chosen significance level. Thus, there is a risk of incorrect inferences when using these GARCH-type models.


Contents

1. Introduction
2. Theoretical Framework
   2.1. The Capital Asset Pricing Model
   2.2. Jensen's alpha
   2.3. GARCH-type models
   2.4. The Monte Carlo method
3. Previous Research
4. Method
   4.1. Dealing with convergence issues
5. Results
6. Discussion
References
   Articles
   Online resources


1. Introduction

One of the most central concepts of finance is risk-adjusted profit. As an investor, you make decisions based on your preferences regarding the trade-offs between higher risks and higher rates of return. Investors continually devise new strategies hoping to “beat the market”, which means achieving a profit exceeding that which is required to compensate the investors for the risks taken.

From works such as Engle (1982), French, Schwert & Stambaugh (1987), Schwert (1988), Schwert & Seguin (1990), we know that financial time series exhibit several characteristics that violate assumptions necessary for the ordinary least squares (OLS) estimator to be used. Thus, since Bollerslev introduced the generalized auto-regressive conditional heteroscedasticity (GARCH) model in 1986, such models have become the standard for making financial computations and estimations. Auto-regressive conditional heteroscedasticity means that both the variance of previous error terms and the sizes of those error terms influence the current variance, which is a typical characteristic of financial data. As GARCH models are designed to model these types of volatility patterns, it makes sense to use this type of model for financial data. There are some extensions of the standard GARCH model which have become commonly used as well, such as the threshold-asymmetric GARCH (TGARCH) model developed by Rabemananjara & Zakoian (1993), and the integrated GARCH (IGARCH) model developed by Nelson (1990). The TGARCH model takes into account the fact that positive and negative shocks may have different effects on volatility. The IGARCH model is a restricted version of the GARCH model, where the persistent parameters sum up to one. In this thesis, we focus on the GARCH (1,1), TGARCH (1,1), and IGARCH (1,1) models, meaning that both the autoregressive and the moving average parts of the variance model are only directly dependent on the last period. We have chosen this lag length because it is the standard in financial literature and because it has been shown to be superior in works such as "A Comparison of Volatility Models: Does Anything Beat a GARCH (1,1)?" by Hansen & Lunde (2005).

The purpose of this thesis is to test the null hypothesis rejection rate for a true null hypothesis (risk of type 1 error) when evaluating investment strategies using "Jensen's alpha", a measurement developed by Michael Jensen in his 1968 paper "The Performance of Mutual Funds". Jensen's alpha is estimated by regressing the development of the price of an asset against a market index, and the intercept term represents a positive risk-adjusted profit compared to the index. Here, the null hypothesis of no significant risk-adjusted profit is rejected when we estimate an intercept value significantly different from zero in the "out of sample" period.

In this thesis, we test whether the rate at which a true null hypothesis is rejected is consistent with the decided upon significance level when using GARCH, TGARCH, and IGARCH models for testing investment strategies using the measurement, “Jensen’s alpha”. We study this using a Monte Carlo simulation, where we know that any rejection of the null hypothesis is caused by chance. As the data is simulated, we can be sure that there are no underlying qualities in any of our “stocks” that make them perform better relative to our “index”.

Monte Carlo studies are performed by generating data as if random based on a known distribution in order to analyse different potential outcomes of a known process. This method is described, for example, in the book "Simulation Methodology for Statisticians, Operations Analysts and Engineers" by Lewis and Orav. In our thesis, we attempt to perform a study which is as close as possible to a real study evaluating the profitability of a trading strategy. In order to do so, we simulate stock return data and market index return data using estimates of the GARCH parameters of historical data. We base our simulations of the "market index" on the Standard & Poor's 500 Index, and our simulations of our "stocks" on the stocks included in the Dow Jones Industrial Average Index. The data on the Standard & Poor's 500 Index and the stocks of the Dow Jones Industrial Average was downloaded from Yahoo Finance through the R package "tseries", Trapletti and Hornik (2018).

We simulate data ranging over periods of 6 months, 1 year, and 2 years. The second half of each simulation is used as the evaluation period, so these periods are 3 months, 6 months, and 1 year long, respectively. Each analysis is based on 1000 replications. Using this simulated data, we perform an analysis which is designed as if we were testing an active trading strategy using historical alpha values as a basis of portfolio formation, where the three GARCH-type models mentioned above are used to model the variance of the error term.

Although we study the null hypothesis rejection rate in the context of using “Jensen’s alpha” to evaluate the profitability of alpha trading specifically, these results should be generalizable for any analysis where GARCH-type models are used to model the variance of the error term, and where the intercept value is of interest.

The contribution of this thesis to the field of financial econometrics is to test whether or not the rejection rate of a true null hypothesis of an intercept term being zero is equal to the decided upon significance level using the GARCH, TGARCH, and IGARCH models, which has not been done before. If the rejection rate of a true null hypothesis is not equal to the decided upon significance level, it could lead to incorrect inferences and wrong conclusions about the usefulness of the active trading strategies being tested. This makes our research question central to the entire field of finance. If studies draw incorrect conclusions about the usefulness of investment strategies to achieve a positive risk-adjusted profit, this might influence not only the academic understanding of financial markets but also how individuals make their investment decisions, which, in turn, could impact how financial markets function.

Our results show that the GARCH and IGARCH models consistently reject a true null hypothesis less often than the selected significance level, whereas the TGARCH model consistently rejects a true null hypothesis more often than the selected significance level.

We use a proportions test to determine if the observed deviations from the expected frequencies are significant. When testing how often the models reject true null hypotheses at the 5% and 10% significance levels, we can reject the null hypothesis that the type 1 error is equal to the desired type 1 error rate at every commonly used significance level. When the models are tested at the 1% significance level, our estimated type 1 error rate deviates significantly from the desired significance level in 7 out of 9 cases. We can thus conclude that none of these models are reliable with respect to rejecting a true null hypothesis of an intercept being equal to zero at the rate which corresponds to the desired significance levels. As mentioned before, this could mean that inferences regarding the profitability of active trading strategies evaluated using these models are incorrect.

This thesis is outlined as follows:

In Section 2, the theoretical framework is outlined, followed by Section 3, which deals with previous research. Section 4 describes our method, and Section 5 goes through our results.

Section 6 discusses the results and concludes the thesis.

2. Theoretical Framework

In order to understand the background of this study, we first need to take a look at the financial theory behind the estimations performed to test the rejection rate of the null hypothesis when using GARCH-type models to model the variance of the error term. This is done in Sections 2.1 and 2.2, which is followed by a description of GARCH-type models in Section 2.3. Finally, Section 2.4 offers a description of the background of the Monte Carlo method.

2.1. The Capital Asset Pricing Model

The basis for the Capital Asset Pricing Model (CAPM) was laid out by Harry M. Markowitz in his paper, "Portfolio selection" (1952). CAPM was then introduced by four different authors, independently of each other: by Jack Treynor in his 1961 paper, "Market Value, Time, and Risk", by William F. Sharpe in his 1964 paper, "Capital Asset Prices: a Theory of Market Equilibrium under Conditions of Risk", by John Lintner in his 1965 paper, "The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets", and by Jan Mossin in "Equilibrium in a Capital Asset Market" from 1966.

The CAPM is used to determine the rate of return that would, in theory, be required for an individual asset to be added to a well-diversified portfolio. This model builds on the idea that rational investors will demand to be compensated for taking risks – the higher the risk, the higher the rate of return must be for investors to want to hold the asset. There are two forms of risk, systematic (non-diversifiable) and non-systematic (diversifiable). The systematic risk is common to all securities, while the non-systematic risk is the risk associated with owning individual assets. If you invest in a large number of assets, the specific risk of owning the individual assets will “even out” on average. The larger the number of assets you own, the closer you will be to only being subject to the market risk. When you own a portfolio that is only subject to market risk, the risk you are taking will depend on how sensitive your portfolio is to market risk. You know that your portfolio will follow the movements of the market, but the volatility of your portfolio will depend on how much your stocks react to the market volatility.

However, the composition of your portfolio will not depend on your risk preferences – you can decrease the risk of your investment by investing some money in risk-free assets. Thus, the composition of your portfolio will only depend on how well compensated you will be for accepting the systematic risk associated with your assets. In this situation, you will only want to add assets to your portfolio if they will compensate you for the market risk at least as well as the assets you already own. This suggests that there must be a unique expected return required for each unique level of systematic risk. This happens because assets with rates of return above this level will be in high demand, which will cause their prices to increase, thus lowering the rate of return, and assets below this level will be in low demand, causing their prices to decrease, which increases their rate of return.

Thus, the CAPM relates the expected excess return of the asset to the expected excess market return and the market risk of the asset. The excess return is calculated as the rate of return of an asset (or of the market), minus the risk-free rate. The relationship described by the CAPM implies that when we deflate the expected risk premium for securities with their beta coefficients (which measure the asset's sensitivity to the market risk), the reward-to-risk ratio for any asset in the market is equal to that of any other security, and thus the market reward-to-risk ratio. This relationship is represented by the following model:

$E(r_i) = r_f + \beta_i \left( E(r_m) - r_f \right)$   (1)

where $E(r_i)$ is the expected return of asset $i$, $E(r_m)$ is the expected market return, and $r_f$ is the risk-free rate. $\beta_i$ is a measure of how the volatility of the expected excess asset return relates to the expected excess market return, described by the following expression:

$\beta_i = \dfrac{\operatorname{Cov}(r_i, r_m)}{\operatorname{Var}(r_m)}$   (2)

Some assumptions must be fulfilled for CAPM to hold. Investors must be rational and risk-averse; there must be no possibility of arbitrage; and returns must follow a normal distribution. Investors must have access to a risk-free rate of return, and investors must have the option to take a long or short position of any size in any asset, including the risk-free asset. The capital market must be perfect, meaning that there are no "market imperfections" such as taxes or transaction costs, that information is freely available to all investors, and that there are a large number of buyers and sellers on the market. This ensures that all assets are being correctly priced. If the assumption of risk-averse investors is fulfilled, investors given a choice between two assets with equal expected returns but unequal variances will prefer the asset with the lower variance. This is a necessary assumption because if investors were instead risk neutral, they would always choose the asset with the highest expected return regardless of the volatility of the asset, which would make the CAPM model collapse. However, in reality, investors do take risk into account when making investment decisions, and they want to be compensated for each additional unit of risk.

There is some criticism of the CAPM model as well. Although the model is elegant, critics such as Roll (1977) and French (2016) argue that it might be incomplete, and cannot be tested on its own.


2.2. Jensen’s alpha

The financial term "alpha" was first introduced by Michael Jensen in his paper, "The Performance of Mutual Funds in the Period 1945–1964" from 1968. Since he was studying whether mutual fund managers could "beat the market", he extended the market model from the CAPM by introducing the Greek letter alpha to capture the risk-adjusted abnormal returns. The estimate of alpha for a successful mutual fund manager would be greater than zero, while a negative estimate of alpha would suggest that the manager performed worse than the market. Note that in the CAPM the intercept term $\alpha_i$ is restricted to be equal to zero, since the theory predicts that no assets should have (positive or negative) abnormal returns. However, empirical values may deviate from zero.

The same model and interpretation can be used to evaluate the performance of assets as well. If an asset's return is greater than its risk-adjusted expected return, the asset has a positive alpha. The formula for an asset $i$ is:

$r_{i,t} - r_{f,t} = \alpha_i + \beta_i \left( r_{m,t} - r_{f,t} \right) + \varepsilon_{i,t}$   (3)

where $r_{i,t}$ is the realized portfolio return, $r_{m,t}$ is the return of the market, and $r_{f,t}$ is the risk-free rate of return. $\beta_i$ is the beta value of the asset, which is a measurement of the relationship between the asset volatility and market volatility. It is calculated using equation (2) in Section 2.1.

While $\beta_i$ captures the systematic risk of the asset, $\alpha_i$ is the part of the asset return that is not explained by the systematic risk.

This extended market model is used in this thesis to model risk-adjusted profits. Thus, the estimated alpha value for our simulated "stocks", $\hat{\alpha}_i$, is used as a basis for portfolio formation as well as for evaluating the performance of these simulated portfolios.
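As a point of reference only, the minimal R sketch below shows how equation (3) is commonly estimated with OLS for a single asset; the simulated returns, the true beta of 1.1, and the zero true alpha are all illustrative assumptions, and the thesis itself replaces the OLS error-variance assumption with the GARCH-type models described in Section 2.3.

```r
# Minimal sketch (illustrative data): Jensen's alpha as the intercept of the
# market model in equation (3), estimated here with plain OLS.
set.seed(1)
n  <- 252                                     # one year of daily observations
rf <- rep(0.00008, n)                         # hypothetical daily risk-free rate
market_ret <- rf + rnorm(n, mean = 0.0003, sd = 0.010)
stock_ret  <- rf + 1.1 * (market_ret - rf) + rnorm(n, sd = 0.015)  # true alpha = 0

fit <- lm(I(stock_ret - rf) ~ I(market_ret - rf))

alpha_hat <- coef(fit)[1]                     # Jensen's alpha (intercept)
beta_hat  <- coef(fit)[2]                     # market beta
alpha_p   <- summary(fit)$coefficients[1, 4]  # p-value for H0: alpha = 0
```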

2.3. GARCH-type models

In order to understand why various GARCH-type models are used for modelling financial time series, we must first take a closer look at why we cannot use the ordinary least squares (OLS) estimator for financial data. As described by e.g. Varga & Rappai (2002), the assumptions that need to be fulfilled for OLS to be the best linear unbiased estimator are:

1. The linear regression model is linear in parameters.

2. The conditional mean of the error term is zero.

3. No perfect multicollinearity.

4. The error terms are normally distributed.

5. Homoscedasticity and no autocorrelation.

Authors such as Miller & Scholes (1972), Brenner & Smidt (1977), Martin & Klemkosky (1975), Belkaoui (1977), Brown (1977), and Bey & Pinches (1980) find evidence of heteroscedasticity when estimating the market model. For financial data, the current volatility depends on the volatility of previous periods as well as absolute values of previous returns. The normality assumption is often not fulfilled for this type of data, as financial data tends to have "fat tails", which means that extreme outcomes are more common than in the normal distribution. Heteroscedasticity and autocorrelation can also be present.

According to Varga & Rappai, if OLS were used while heteroscedasticity is present, we would risk the following issues:

1. Inefficient OLS estimators, which means that while the point estimates are unbiased, they will not have the minimum variance.

2. Standard error estimates might be biased. This means that the risk of making a type 1 error, rejecting a true null hypothesis, would be different from the decided upon significance level. This could lead to incorrect inferences.

3. When regressing the return of an individual asset upon the market return, the results will suffer from an underestimation of the coefficient of determination of the effect of the market return upon the individual asset. This means that systematic (non-diversifiable) risk will be understated, and diversifiable (non-systematic) risk will be overstated.

Thus, using OLS for modelling financial time series would lead to problems in terms of incorrect standard errors for the estimated parameters, which leads to biased test statistics and a risk of erroneous conclusions.

In 1982, Robert F. Engle introduced the ARCH model, which he developed in order to handle conditional heteroscedasticity in e.g. financial time series. This model is described in his seminal paper, "Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation". In the ARCH model, the conditional variance of the current time period is modelled as a function of the squared error terms of previous time periods.

In 1986, the ARCH model was extended to a generalized ARCH (GARCH) model by Tim Bollerslev in his paper, “Generalized autoregressive conditional heteroscedasticity”. Although the ARCH model can handle heteroscedasticity in an autoregressive time series, Bollerslev’s GARCH model also allows for an auto-regressive moving average (ARMA) term. Since then, GARCH models have become the standard for making financial computations and estimations. Because the volatility on the financial market depends on the volatility of previous periods as well as absolute values of previous returns, it makes sense to use this type of model in finance.

Over time, many different GARCH-type models have been developed, such as the threshold-asymmetric GARCH (TGARCH) model and the integrated GARCH (IGARCH) model. TGARCH models the conditional standard deviation rather than the conditional variance. This means that when using a TGARCH model, we take into account that positive and negative shocks may have different effects on the volatility of an asset price. This makes sense in a financial setting, because a negative shock to the price of a stock (or market index) is likely to cause an increase in the volatility of the consecutive periods, while a positive shock of the same size might cause a smaller increase in volatility. The IGARCH model is a restricted version of the GARCH model, where the persistent parameters sum up to one. This means that a unit root is imposed on the GARCH process.

In this thesis, we focus on testing the characteristics of GARCH-type (1,1) models, where the variance in the current period is modelled as a direct function of the last period’s variance and value of the error term.

The market model is estimated by fitting generalized autoregressive conditional heteroscedasticity (GARCH), threshold-asymmetric GARCH (TGARCH), and integrated GARCH (IGARCH) models to the variance of the error term. The TARCH model, which is the foundation for the TGARCH model, was developed by Rabemananjara & Zakoian (1993). The TARCH model was then extended into the TGARCH model by Jean-Michel Zakoian in 1994. The IGARCH model was developed by Nelson (1990).

The Student’s t-distribution is used because it has fat tails (compared to the normal distribution), which is shown for example in Peiró’s paper “The distribution of stock returns: international evidence” (1994) to be better suited for fitting stock market returns.

As mentioned above, GARCH models are designed to capture some common properties of financial data, such as volatility clustering, tail heaviness, leptokurtosis of the marginal distribution, as well as dependence without autocorrelation. In addition, according to Rabemananjara & Zakoian, the TGARCH model also captures what is called “leverage effects”.

These properties of financial data can be described in further detail. A small glossary is provided here:

Tail heaviness is when extreme values are more common than in the normal distribution. This is true, for example, for the Student’s t-distribution, which is used for the GARCH, TGARCH, and IGARCH models in this thesis.

Volatility clustering is when periods of high volatility are followed by additional periods of high volatility, and periods of low volatility are often followed by more periods of low volatility. Note that GARCH type models allow the conditional variance of the residual to evolve according to an autoregressive-type process, which captures volatility clustering.

Leptokurtosis of the marginal distribution means having greater kurtosis than in the normal distribution, which means that more of the probability mass lies in the tails, so extreme values are more likely than under the normal distribution.

Dependence without autocorrelation means that there is some form of serial dependence that does not take the form of a linear correlation.

Leverage effects occur when there is a tendency for volatility to increase more following a large price decrease, compared to the period following a price increase of the same magnitude.

Note that the letters chosen to denote the GARCH parameters in this thesis are different from the standard representation. This is done in order to avoid confusion between the GARCH-type models and the market model, which are normally represented using the same Greek letters to represent the parameters.

The GARCH (1,1) model introduced by Bollerslev (1986) is:

$\sigma_t^2 = \omega + a\,\varepsilon_{t-1}^2 + b\,\sigma_{t-1}^2$   (4)

where $\sigma_t^2$ is the variance for the period $t$, $\omega$ is a long-run variance parameter, and $a$ represents the effect of $\varepsilon_{t-1}^2$, which is the square of the last period's error term. The term $b$ represents the effect of $\sigma_{t-1}^2$, which is the last period's variance.
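For intuition, the recursion in equation (4) can be written out directly; the following R sketch simulates a GARCH (1,1) variance path using hypothetical parameter values and, for brevity, Gaussian rather than Student's t innovations.

```r
# Illustrative GARCH(1,1) recursion from equation (4); the parameter values
# omega, a and b are hypothetical and chosen only to satisfy a + b < 1.
set.seed(1)
n     <- 1000
omega <- 1e-6; a <- 0.05; b <- 0.90

sigma2 <- numeric(n)                      # conditional variances
eps    <- numeric(n)                      # error terms
sigma2[1] <- omega / (1 - a - b)          # start at the unconditional variance
eps[1]    <- sqrt(sigma2[1]) * rnorm(1)

for (t in 2:n) {
  sigma2[t] <- omega + a * eps[t - 1]^2 + b * sigma2[t - 1]
  eps[t]    <- sqrt(sigma2[t]) * rnorm(1) # Gaussian noise here; the thesis uses Student's t
}
```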

For a GARCH-type model to be used, a variance greater than zero is required, which means that the time series must take on different values at different points in time. Mathematically, this means that the following three conditions must be fulfilled:

$\omega > 0$, $a \ge 0$, and $b \ge 0$.

The expression for the unconditional variance in the GARCH (1,1) is given by the expression:

$\sigma^2 = \dfrac{\omega}{1 - a - b}$   (5)

which implies that:

$a + b < 1$.

These terms are subject to the following four constraints:

$\omega > 0$, $a \ge 0$, $b \ge 0$, and $a + b < 1$.

The TGARCH (1,1) model introduced by Zakoian (1994) can be represented thus:

$\sigma_t = \omega + a\,|\varepsilon_{t-1}| + \gamma\, d_{t-1} |\varepsilon_{t-1}| + b\,\sigma_{t-1}$   (6)

where $\gamma$ is the leverage parameter which captures the asymmetric effects of past shocks. The size of $\gamma$ determines the magnitude of the leverage effect, and when it is equal to zero, we get back to the standard GARCH. The parameter $d_{t-1}$ is a dummy variable used to differentiate positive and negative shocks, i.e. $d_{t-1} = 1$ if $\varepsilon_{t-1} < 0$ and $d_{t-1} = 0$ if $\varepsilon_{t-1} \ge 0$. The interpretation of the other parameters is the same as for the GARCH (1,1) model. Just as for the GARCH, a positive variance is assumed.

As mentioned in the theoretical framework section, one assumption made when using the CAPM is that the risk of asset returns is fully captured by their variance. If investors react differently to positive and negative shocks, variance may not be an adequate measurement of risk. In this case, asymmetric models such as the TGARCH model should be preferable to symmetric models, such as the GARCH and IGARCH models. In their 2013 paper, "Comparing the performances of GARCH-type models in capturing the stock market volatility in Malaysia", Lim & Sek conclude that asymmetric GARCH-type models (such as the TGARCH model) perform better at explaining stock market volatility during financial crises, whereas symmetric GARCH-type models perform better during normal periods.

According to e.g. Bollerslev & Engle (1993), a common finding in studies of high-frequency financial data is the presence of an approximate unit root. The Integrated GARCH (IGARCH) model, developed by Nelson (1990), is a special case of the GARCH model, where the persistent parameters sum up to one and thus a unit root is imposed on the process. If the process describing the error variances is approximately a unit root process, this model will be a better fit than the standard GARCH model. The generalized condition in this case is:

$\sum_{i=1}^{q} a_i + \sum_{j=1}^{p} b_j = 1$

For an IGARCH (1,1), this simplifies to:

$a + b = 1$

where $a$, as before, is the effect of the square of the last period's error term, and $b$ denotes the effect of the last period's variance.

2.4. The Monte Carlo method

The Monte Carlo method was developed by Stanislaw Ulam in 1946, and the paper Ulam wrote with Nicholas Metropolis, ”The Monte Carlo Method”, was published in 1949. The method relies on generating data as if random, based on a known distribution. Doing this allows us to study different potential outcomes of a process. This method is, among other things, commonly used for financial applications such as forecasting future development of e.g. stock prices or commodity prices. If the process of interest is known, we can study the distribution of outcomes by performing a large number of simulations. By turning chance into proportions, it is easier to interpret and compare different alternative courses of action.

In his 2003 paper, “So You Think You’ve Got Trivials”, Shlomo S. Sawilowsky discusses the characteristics of the Monte Carlo method and Monte Carlo simulations. The basic idea is to obtain information about the distribution of outcomes of a function or variable by allowing the function or variable to run its course enough times to produce a sufficiently sized sample in order to study the properties of the outcomes. A simple example of this concept is to imagine a coin being tossed several times, the results of which are then used to draw conclusions about the properties of the coin. A famous example of the Monte Carlo method is when William Sealy Gosset (pen name Student) managed to support his derivation of the t-statistic sampling distribution by conducting a Monte Carlo simulation.

When computers became a common tool in statistics, several new opportunities were created for Monte Carlo simulations. For example, instead of actually going through the time-consuming process of tossing a coin many times, you could use a programmed computer process. In this case, you could let the computer draw pseudo-random numbers from a uniform distribution on the range [0,1], and let 0-0.499 represent heads and 0.500-1 represent tails. In "Recent advances and future prospects for Monte Carlo" by Forrest B. Brown, the author describes an example from history: during the Manhattan Project this form of simulation was used by having ENIAC computers simulate the distance a neutron could travel through different kinds of material. In this case, it was shown to be impossible to use mathematical methods, and a Monte Carlo method was required.
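The coin-toss illustration above can be written in a few lines of R; this is only a toy sketch of the idea, not part of the thesis's simulation design.

```r
# Toy Monte Carlo: pseudo-random uniform draws on [0, 1], with values
# below 0.5 counted as heads, as in the coin-toss example above.
set.seed(42)
n_tosses <- 10000
u     <- runif(n_tosses)     # pseudo-random numbers from a uniform distribution
heads <- u < 0.5
mean(heads)                  # share of heads; close to 0.5 for a fair "coin"
```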

An important fact to point out is that variables produced by a computer are never truly random; instead, they are pseudo-random, which means that they behave as if randomly drawn from a known distribution.

3. Previous Research

Since the works of Robert F. Engle (1982) and Tim Bollerslev (1986), there has been work done on developing improved GARCH-type models, as well as studying different kinds of GARCH processes and their properties. There have also been many studies using the various GARCH models, for example in finance. There have been simulation studies that have examined the properties of GARCH models when measuring volatility for different kinds of financial assets. However, as far as we know, there are no published studies focusing on the estimate of the intercept term when GARCH is used for modelling the variance of the error term.

In this section, we will first give a brief overview of the literature about GARCH models, including the most commonly used extensions of the GARCH model beyond those presented in Section 2.3. We will then introduce the reader to the financial context by giving a brief overview of studies testing trading strategies, followed by descriptions of selected studies regarding the GARCH-type models and their performance when used to model financial data and estimate the CAPM.

Some examples of extensions of the standard GARCH are the TGARCH and IGARCH models (which are outlined in Section 2.3), the GJR-GARCH model, and the EGARCH model. The GJR-GARCH is named after its developers Lawrence R. Glosten, Ravi Jagannathan and David E. Runkle, who introduced the model in their paper "On the Relation between the Expected Value and the Volatility of the Nominal Excess Return on Stocks" (1993). The GJR-GARCH is similar to the TGARCH model in the sense that it captures leverage effects, but models conditional variance rather than conditional standard deviation. The Exponential GARCH (EGARCH) model was developed by Daniel B. Nelson and presented in his article "Conditional Heteroskedasticity in Asset Returns: A New Approach" (1990). The EGARCH is an extension of the GARCH model that allows the sign and the magnitude of the shocks to have separate effects on the volatility.

In addition to developing different extensions of the GARCH model, several studies have been made about the properties of ARCH/GARCH when predicting volatility on financial markets. Some examples are Franses & Dijk (1996), Karolyi (1995), Chou (1988), Agnolucci (2009), Lee & Liu (2014), and Lamoureux & Lastrapes (1990).

Let us now take a look at the financial context. The topic of risk-adjusted returns is a central part of the field of finance. Many different strategies of earning abnormal returns have been tested over the years, and some of these strategies use previous returns or return sequences when trying to form portfolios that might beat the market. Some authors test return-reversal strategies, that is, strategies investing in historical “loser portfolios”, and other authors test strategies of investing in historical “winner portfolios”. A few examples of authors that use historical returns for portfolio formation are De Bondt and Thaler (1985), Rosenberg, Reid & Lanstein (1985), Lehmann (1990), Jegadeesh and Titman (1993), De Groot, Huij & Zhou (2011), and Piccoli, Chaudhury, Souza & Da Silvia (2017).

Since Tim Bollerslev's 1986 paper, "Generalized autoregressive conditional heteroscedasticity", use of the GARCH model has become standard in financial studies. In their study, "Heteroscedasticity and efficient estimates of beta" from 2002, Varga & Rappai use the Capital Asset Pricing Model (CAPM) to investigate whether the heteroscedasticity observed in the United States stock market is also visible in smaller markets, like the Budapest stock market. By doing so, the authors try to determine whether the use of GARCH models is motivated for estimating CAPM when studying markets other than, for example, the Dow Jones Industrial Average. Their results show that conditional heteroscedasticity is also a characteristic for financial markets outside the United States, which means that GARCH should be used to model the variance of the error term when estimating models such as CAPM on data from other countries as well.

In the 2013 paper, "Comparing the performances of GARCH-type models in capturing the stock market volatility in Malaysia" by Lim & Sek, empirical data from the Malaysian stock market is used to test the performance of the GARCH, EGARCH and TGARCH models. The performance of these models is compared based on mean squared error, root mean squared error and mean absolute percentage error. As mentioned in Section 2.3, their findings indicate that asymmetric GARCH models such as the TGARCH perform better for capturing the volatility during financial crises, whereas symmetric GARCH models perform better during pre- and post-crisis periods.

In the paper, "Predicting the success of volatility targeting strategies: Application to equities and other asset classes" (2015) by Perchet, de Carvalho, Heckel & Moulin, the authors test the performance of different models from the GARCH family. Historical returns are used to perform a Monte Carlo study where 5 000 simulated scenarios are used to study the performance of a volatility targeting strategy when applied to equities, corporate and government bonds, and commodities. They study the GARCH, GJR-GARCH, and the IGARCH models. In addition to this, they test the GARCH model on simulated data with Student's t-noise, which means that the distribution has fatter tails compared to the normal distribution. The authors find that volatility clustering and fat tails are the two most important aspects to take into account to increase the explanatory power when modelling this type of time series. Out of all the models tested, the IGARCH model performs best both in forecasting and controlling for variance on realised returns.

In the master’s thesis, “Forecasting the Volatility in Financial Assets using Conditional Variance Models” by Swanson & Hultman from 2017, the authors test the performance of the ARCH, GARCH, IGARCH, exponential GARCH (EGARCH), and GJR-GARCH models on the London Bullion Market Gold Price, the OMXS30, and the USD/EUR exchange rate. Their results are inconsistent, and suggest that none of these models consistently perform better than the others.


4. Method

Daily adjusted close data on the Standard & Poor's 500 Index as well as stocks in the Dow Jones Industrial Average between 2007 and 2017 was downloaded from Yahoo Finance (https://finance.yahoo.com) through the "tseries" package in R. This data is used to estimate the GARCH parameters as the basis for our simulations.
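A hedged sketch of this download step is given below; it uses tseries::get.hist.quote(), the ticker "^GSPC" for the S&P 500, and log returns, all of which are illustrative choices rather than the exact code used in the thesis.

```r
# Illustrative download of daily adjusted close prices via the "tseries"
# package; the ticker, dates and return definition are assumptions.
library(tseries)

sp500_prices <- get.hist.quote(instrument = "^GSPC",
                               start = "2007-01-01", end = "2017-12-31",
                               quote = "AdjClose", provider = "yahoo",
                               compression = "d")

sp500_ret <- diff(log(sp500_prices))   # daily log returns from adjusted closes
```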

Out of the 30 stocks of the Dow Jones Industrial Average composition selected from the 1st of February, only 28 are used. It is not possible to find data on Dow DuPont Inc. and Visa Inc. for the entire period. After the merger of Dow Chemical and DuPont in 2017, both of the old tickers were taken down from every webpage we searched. Visa was not publicly traded until March 2008, which might mean that we do not get a composition of simulated stocks which fully reflects the US stock market. It is hard to speculate about what kind of bias this might introduce.

Since the purpose of this thesis is not to study empirical data but rather to use the data as a basis for our simulations to test the performance of the three GARCH-type models, the missing stocks should not be a serious problem.

Choosing the time frame is a compromise between using as much information as possible about how these time series have behaved in the past but also avoiding using data from both before and after a structural shift. By using data ranging from 2007 to 2017, we include the 2007-2008 financial crisis in our data set. This means our simulated "stocks" might be slightly more volatile than stock prices generally are between financial crises. Because of this, the results of our study should be generalizable for how the tested models behave, both when they are used on data from relatively volatile and relatively calm periods. However, our results might also not capture how these models perform at capturing the variance during crisis periods or more normal periods as accurately as they would have done if we had chosen to use either only data from the financial crisis or only data from the post-crisis period.

Financial calculations are usually performed on excess return, which is the difference between the return of the stock or market index and the risk-free rate of return. In this thesis, US treasury bills with a maturity of 3 months are used as a proxy for the risk-free rate. This data was gathered from the website of the Federal Reserve Bank of St. Louis (https://www.stlouisfed.org) and converted from an annualized discount rate to a simple daily interest rate. Using the simple interest rate rather than the compound interest rate means that we do not calculate the interest as if the accumulated interest of previous periods is reinvested.
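As a hedged illustration of this conversion, the sketch below turns an annualized percentage rate into a simple daily rate; the thesis does not state its exact day-count convention, so the 252 trading days per year used later in this section is assumed here.

```r
# Hypothetical conversion of an annualized T-bill rate (in percent) into a
# simple, non-compounded daily rate; 252 trading days per year is assumed.
tbill_annual_pct <- 1.50                     # e.g. a quoted 3-month rate of 1.50%
rf_daily <- (tbill_annual_pct / 100) / 252   # simple daily risk-free rate
rf_daily
```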

We use empirical data spanning 10 years (2520 observations) to estimate parameter values when simulating our time series. Using these values, we simulate time series which are 6 months, 1 year, and 2 years long, respectively. The first half of each simulated time series is then used as "in sample", and the second half is used as the "out of sample" period used for the evaluation. This means that the evaluation periods are 3 months, 6 months, and 1 year long, respectively. The number of trading days differs between different years, but in this thesis we have chosen to use 252 trading days per year. Thus, our evaluation periods consist of 63, 126, and 252 observations respectively. These time frames are chosen because trading strategies are commonly evaluated over these lengths of time. By using set seed values for our simulations, we ensure that the time series data used when evaluating the different models is identical.
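The following R sketch, built on the sp500_ret series from the earlier download sketch, illustrates this step under the assumption that "rugarch" is used for both estimation and simulation; the seed, object names, and the plain GARCH (1,1) specification are illustrative.

```r
# Hedged sketch: estimate GARCH(1,1) parameters on empirical returns, then
# simulate one 2-year path with a fixed seed and split it into halves.
library(rugarch)

spec <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                   mean.model     = list(armaOrder = c(0, 0), include.mean = TRUE),
                   distribution.model = "std")            # Student's t innovations

emp_fit <- ugarchfit(spec, data = sp500_ret)              # 10 years of empirical data

sim <- ugarchsim(emp_fit, n.sim = 504, m.sim = 1, rseed = 123)  # 2 simulated "years"
sim_ret <- as.numeric(fitted(sim))

in_sample  <- sim_ret[1:252]      # simulated "historical" half, for portfolio formation
out_sample <- sim_ret[253:504]    # evaluation period (here 1 year = 252 trading days)
```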

The market model including alpha, described by equation (3) in Section 2.2, is then fitted to these time series. This is done using various GARCH (1,1) models to model the variance of the error term.

Note that we fit GARCH models to our data three times: firstly, we fit a GARCH (1,1) model to our empirical data to find parameter values for our simulations; secondly, we fit the GARCH-type model being tested to the first half of our simulated data to form portfolios; and thirdly, we fit the GARCH-type model being tested to evaluate the performance of our portfolios. In the latter two steps, we estimate the market model by using the simulated "market index" as an external regressor.

We use the packages “rugarch” by Ghalanos & Kley (2019) and “fGarch” by Wuertz, Setz, Chalabi, Boudt, Chausse & Miklovac (2017) in R to fit both historical and simulated data to the GARCH (1,1) model as well as for simulating our data, obtaining alpha values to facilitate portfolio construction, and evaluating our portfolios.
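A hedged sketch of such a fit with "rugarch" is shown below; the objects stock_excess and index_excess are hypothetical vectors of simulated excess returns, and the "sGARCH" specification stands in for whichever of the three GARCH-type models is being tested (for example "iGARCH", or model = "fGARCH" with submodel = "TGARCH").

```r
# Hedged sketch: the market model of equation (3), with the simulated market
# index as an external regressor in the mean equation and a GARCH(1,1)
# variance; stock_excess and index_excess are hypothetical data objects.
library(rugarch)

spec_mm <- ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model     = list(armaOrder = c(0, 0), include.mean = TRUE,
                        external.regressors = matrix(index_excess, ncol = 1)),
  distribution.model = "std")                  # Student's t errors

fit_mm  <- ugarchfit(spec_mm, data = stock_excess, solver = "hybrid")

alpha_i <- coef(fit_mm)["mu"]                  # the intercept, i.e. Jensen's alpha
beta_i  <- coef(fit_mm)["mxreg1"]              # the market beta
```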

In this thesis, we simulate data on the excess return for the stocks and market index. This means that the simulated stock return data corresponds to the expression $(r_{i,t} - r_{f,t})$, and the simulated market index return data corresponds to $(r_{m,t} - r_{f,t})$ in model (3) in Section 2.2.

We want to test how these models perform in a setting which is as realistic as possible, and in this thesis we have chosen to design our study as if we had tested the profitability of a trading strategy where stocks with historical alpha values greater than zero are held. The steps of portfolio creation are:

1. We calculate the historical alpha value for each simulated "stock".

2. We fit the market model to the simulated “historical” data (the first half of each simulation), which gives us the estimates of historical alpha and beta values for each stock, as well as the p-values for the estimated alphas. These estimations are calculated using GARCH, TGARCH, or IGARCH to model variance.

3. Using this information, we construct portfolios consisting of stocks with historical alpha values greater than 0. Note that since we are working with simulated data, if these stocks continue to outperform the simulated market index during our evaluation period, we know that it is due to chance.

4. The weights for each portfolio sum to 1 in order to make the point estimates comparable even though the number of stocks differs. In this study, the stocks included in each portfolio are equally weighted, as sketched after the hypotheses below.

The weights are calculated thus: $w_i = 1/N$, where $N$ is the number of stocks included in the portfolio.

5. The market model is then estimated for the evaluation period data. If this investment strategy works, the alpha values for the long-held portfolios would be greater than zero.

Thus, when evaluating the performance of the portfolios during the evaluation period, one-sided t-tests are used for all portfolios.

Here, the null hypothesis is that the portfolio's alpha value during the evaluation period is smaller than or equal to zero, and the alternative hypothesis is that it is greater than zero, i.e. $H_0: \alpha_p \le 0$ and $H_1: \alpha_p > 0$.
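The sketch below illustrates steps 3 and 4 above in R; the named vector of "historical" alpha estimates is hypothetical, and only the stocks with positive estimates enter the equally weighted portfolio.

```r
# Hedged sketch of portfolio formation: keep simulated stocks with a positive
# in-sample alpha estimate and give them equal weights that sum to 1.
alpha_hist <- c(stock1 = 0.0002, stock2 = -0.0001,   # hypothetical in-sample
                stock3 = 0.0004, stock4 = -0.0003)   # alpha estimates

selected <- names(alpha_hist)[alpha_hist > 0]        # historical alpha > 0
weights  <- setNames(rep(1 / length(selected), length(selected)), selected)

weights          # equal weights, one per selected stock, summing to 1
sum(weights)     # check: equals 1
```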

The p-values for the estimates of the alpha are then saved, and a one-population proportion test is performed, where we test whether the share of alpha values significant at the 10% significance level is equal to 0.1, at the 5% significance level is equal to 0.05, and at the 1% significance level is equal to 0.01.

The one-population proportion test is used to test whether a population proportion is equal to a pre-specified value. This test is described in the 2014 course book “Statistical hypothesis testing with SAS and R” by Taeger & Kuhnt. The formula for the test statistic is:

$Z = \dfrac{\hat{p} - p_0}{\sqrt{p_0 (1 - p_0)/n}}$

where $\hat{p}$ is the observed sample proportion, $p_0$ is the pre-determined value, and $n$ is the sample size.

As usual for Z-statistics, we reject the null hypothesis if $|Z| > z_{1-\alpha/2}$, where $z_{1-\alpha/2}$ is the $(1-\alpha/2)$ percentile of the standard normal distribution.
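As a hedged illustration, the test statistic above can be computed directly in R, or via the built-in prop.test() without continuity correction; the counts used here are purely illustrative.

```r
# Illustrative one-population proportion test: is the share of "significant"
# alphas equal to the nominal significance level p0?
n_sims   <- 1000
n_signif <- 43                        # hypothetical: 43 of 1000 p-values below 0.10
p_hat    <- n_signif / n_sims
p0       <- 0.10

Z <- (p_hat - p0) / sqrt(p0 * (1 - p0) / n_sims)
p_value <- 2 * pnorm(-abs(Z))         # two-sided p-value
c(Z = Z, p.value = p_value)

# Equivalent chi-square form of the same test (no continuity correction):
prop.test(n_signif, n_sims, p = p0, correct = FALSE)
```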

In this thesis, 1000 simulations are performed for each unique length of evaluation, and the same simulated data is used when evaluating the GARCH, TGARCH, and IGARCH models for 3 months, 6 months, and 1 year, respectively. This means that the differences between the models' performance do not depend on random differences in the simulations.

For our simulation, we chose to work with daily data, which is the highest frequency at which stock price data is freely available. In their 2008 paper, “Temporal Aggregation of GARCH Models: Conditional Kurtosis and Optimal Frequency”, Breuer and Jandacka conclude that the assumption of strong GARCH effects seems to be better justified for high frequency data. They also conclude that although a high frequency time series may exhibit strong GARCH characteristics, an aggregated version of these time series will not necessarily remain a strong GARCH process. Based on empirical data (3-month US treasury bills, Swiss Market Index, Dow Jones Industrial Average, JPY/EUR), it is suggested that daily data satisfies the assumption of strong GARCH effects better than other frequencies.

As our study uses data on the stocks in the Dow Jones Industrial Average, just like the study by Breuer & Jandacka, we also decided to use daily data, which their study indicated is the optimal frequency.

There is no perfect alternative method, but the closest alternative would be to run the same calculations on historical data of some closely watched stocks, such as those in the Dow Jones Industrial Average Index. Because of the role of the US in the world economy, the US stock market is likely more closely watched than many other markets, and the largest companies should attract the most interest. However, we would then just have one outcome per time period, and we cannot know if there are indeed real underlying factors that contribute to the stock return for a given stock, such as the company having an unusually skilled CEO. For this reason, we would not be sure whether or not a rejection of the null hypothesis would truly be due to chance. Using simulated data takes care of this issue, which is why this method is used in this thesis.

The purpose of our simulation is to create time series which behave like financial time series, but for which the null hypothesis of alpha being equal to zero is true. This means that alpha value estimates greater than zero are due to chance, and if the estimated relationship should continue in the consecutive periods, this too will be due to chance. Thus, the alpha values for our simulated portfolios during the evaluation period should, on average, be close to zero. Although there is no way to directly test the successfulness of our simulation design on its own, we know that if it works as intended, the average estimated alpha value for our portfolios during the evaluation period should be equal to zero. If the average estimated alpha value is close to zero, approximately 50% of our estimated alpha values should be greater than zero. This means that examining the size of the share of estimated alpha values being greater than zero could hint at how our simulation performs in terms of successfully producing data for which the null hypothesis is true. If our study design is successful, this share should be close to 50%. The reason why we look at the share of estimates of alpha being greater than zero rather than the average estimated alpha value is because it is hard to interpret the size of this estimate without having any value to relate it to. The results of this evaluation of our method are presented in the Results section.

4.1. Dealing with convergence issues

Unfortunately, we encountered an issue we are unable to fully resolve: estimations calculated using the "rugarch" package sometimes do not converge. According to the creator of the package, this is a known problem, and happens around 5-10% of the time. Information about this package can be found on webpages such as Unstarched (http://www.unstarched.net) and Cran R-project (https://cran.r-project.org).

In order to mitigate this problem, we determine our estimations in three steps:

1. We estimate the relationship between the simulated stock and the simulated market index using OLS.

2. We use the error terms from the regression estimated in Step 1 to estimate the market model using a model from the GARCH family to model error term variance.

3. The parameter estimates from Step 2 are then used as starting values when fitting the market model to our simulated stock, using a model from the GARCH family to model the variance of the error term. This means that the iteration starts from the values estimated in Step 2.

We also use the “hybrid” solver, which cycles through all available solvers if the first one tested fails to converge. By doing so, we encounter convergence issues more rarely, but they still occur.
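A hedged sketch of the three-step work-around and the "hybrid" solver is given below, using "rugarch"; the data objects are the same hypothetical stock_excess and index_excess vectors as in the earlier sketch, and passing the residual-fit estimates via setstart() is our reading of the procedure rather than the thesis's exact code.

```r
# Hedged sketch of the convergence work-around: OLS fit, GARCH fit to the OLS
# residuals, then a full market-model fit started from those estimates.
library(rugarch)

# Step 1: OLS regression of the simulated stock on the simulated market index
ols_fit <- lm(stock_excess ~ index_excess)

# Step 2: GARCH(1,1) fitted to the OLS residuals (no mean term)
spec_resid <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                         mean.model     = list(armaOrder = c(0, 0), include.mean = FALSE),
                         distribution.model = "std")
resid_fit <- ugarchfit(spec_resid, data = residuals(ols_fit))

# Step 3: full market-model fit, started from the Step 2 estimates and using
# the "hybrid" solver, which cycles through solvers if the first one fails
spec_mm <- ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model     = list(armaOrder = c(0, 0), include.mean = TRUE,
                        external.regressors = matrix(index_excess, ncol = 1)),
  distribution.model = "std")
setstart(spec_mm) <- as.list(coef(resid_fit))

fit_mm <- ugarchfit(spec_mm, data = stock_excess, solver = "hybrid")
```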

Issues with convergence are said to be caused by a poor model fit, e.g. due to outliers, and one way to deal with this problem is by using only the "middle" 98% of the data. However, this solution would mean that we are no longer studying data which behaves like financial data. Instead, we run the estimations on the data as is, and report the cases where we have convergence issues as a separate variable of evaluation.

Because the observations suffering from convergence issues must be excluded, we run the risk of a biased result. Although it is impossible to know what the lost data looks like, it would be reasonable to think that in the case of convergence issues, results with significant parameter estimates would be overrepresented in the analyses which do converge and underrepresented in the analyses which do not converge. If this is the case, the estimated share of significant p-values should suffer from an upwards bias, which would increase with the share of non-converging estimations. In our Results section, we compare how the different models perform in this regard, and how the number of observations influences the rate of non-convergence.

5. Results

In this section, the results are presented. First, we look at the share of significant estimations of portfolio alpha for our various models and lengths of evaluation at the 10%, 5%, and 1% significance levels. In order to ensure that our simulations have been successful in producing data for which the null hypothesis of the intercept alpha being equal to zero is true, we look at the share of estimations of alpha being greater than zero. If our study design is successful, this share should be close to 50%. Finally, we take a look at the rate of convergence issues and discuss the implications.

Table 1.

Share of the portfolios formed which are estimated to significantly outperform the market index at the 10% significance level, using GARCH, TGARCH and IGARCH for evaluation periods of 1 year, 6 months, and 3 months, respectively.

A proportions test is used to evaluate these differences from the desired share of 0.1. The significance level of this test is represented using asterisks. Estimates significantly different from 0.1 at the 5% level are marked with one asterisk (*), estimates significant at the 1% level are marked with two asterisks (**), and estimates significant at the 0.1% level are marked with three asterisks (***).

Model 1 year 6 months 3 months

GARCH 0.0070 *** 0.0433 *** 0.0125 ***

TGARCH 0.1800 *** 0.1902 *** 0.1729 ***

IGARCH 0.0080 *** 0.0072 *** 0.0098 ***

As we can see in Table 1, the share of estimated intercepts significantly greater than zero at the 10% level when using the GARCH model is between 0.7% and 4.3%, which is less often than the expected 10% of the time. When using the TGARCH model, this share is instead between 17% and 19%, which is more often than the expected 10% of the time. Using the IGARCH model, this number is instead between 0.7% and 1%, which is less often than the expected 10% of the time.

When we use a proportions test to analyse these probabilities, we can conclude that all three models produce significant estimates of the intercept term at frequencies significantly different from the selected 10% level.

Table 2.

Share of the portfolios formed which are estimated to significantly outperform the market index at the 5% significance level, using GARCH, TGARCH and IGARCH for evaluation periods of 1 year, 6 months, and 3 months, respectively.

A proportions test is used to evaluate these differences from the desired share of 0.05. The significance level of this test is represented using asterisks. Estimates significantly different from 0.05 at the 5% level are marked with one asterisk (*), estimates significant at the 1% level are marked with two asterisks (**), and estimates significant at the 0.1% level are marked with three asterisks (***).

Model 1 year 6 months 3 months

GARCH 0.0040 *** 0.0206 *** 0.0062 ***

TGARCH 0.1460 *** 0.1552 *** 0.1434 ***

IGARCH 0.0040 *** 0.0031 *** 0.0054 ***

As we can see in Table 2, when the GARCH and IGARCH models are used, intercepts estimated to be significantly greater than zero at the 5% level occur between 0.31% and 2.06% of the time, which is less often than the expected 5% of the time. When the TGARCH model is used, this happens between 14.34% and 15.52% of the time, which is more often than the expected 5% of the time. We conclude that all three models produce significant estimates of the intercept term at frequencies significantly different from the selected 5% level.

Table 3.

Share of the portfolios formed which are estimated to significantly outperform the market index at the 1% significance level, using GARCH, TGARCH and IGARCH for evaluation periods of 1 year, 6 months, and 3 months, respectively.

A proportions test is used to evaluate these differences from the desired share of 0.01. The significance level of this test is represented using asterisks. Estimates significantly different from 0.01 at the 5% level are marked with one asterisk (*), estimates significant at the 1% level are marked with two asterisks (**), and estimates significant at the 0.1% level are marked with three asterisks (***).

Model 1 year 6 months 3 months

GARCH 0.0000 ** 0.0041 0.0021 *

TGARCH 0.1130 *** 0.1031 *** 0.1047 ***

IGARCH 0.0010 ** 0.0000 ** 0.0033

As we can see in Table 3, for the 1 year evaluation period, the GARCH model produces significant estimates of the intercept term being greater than zero 0% of the time. For the shorter evaluation periods, this number is between 0.21% and 0.41% of the time. Thus, it consistently produces significant estimates less often than the expected 1% of the time. The TGARCH model gives significant estimates of the intercept term being greater than zero between 10% and just over 11% of the time, which is more often than the expected 1%. The IGARCH model gives significant estimates at the 1% level of the intercept term being greater than zero between 0% and 0.33% of the time, which is less often than the expected 1% of the time.

Using a proportions test to analyse these probabilities, we can conclude that in 7 out of 9 cases, the models produce significant estimates of the intercept term at frequencies significantly different from the selected 1% level.

As we can see in the three tables above, there is a clear pattern of the GARCH and IGARCH models rejecting the true null hypothesis less often than expected, whereas the TGARCH rejects the null more often than expected. This is true for all three tested significance levels, which means that there is a risk of drawing incorrect inferences when using any of these models.

Although we do not examine the power of the test (the probability to reject a false null hypothesis) in this thesis, we know there is a trade-off between a stricter significance criterion and a high power, which is taken into account when deciding upon which significance level to use. This means that when using the GARCH and IGARCH models, the test of the intercept term is likely to have a much lower power than desired, whereas the use of a TGARCH model is likely to give us a higher power and higher risk of making a type 1 error.

Table 4.

Share of portfolios formed for which the estimated alpha is greater than zero

Model 1 year 6 months 3 months

GARCH 0.5266 0.4923 0.5010

TGARCH 0.4920 0.5115 0.5066

IGARCH 0.5111 0.5232 0.5092

The purpose of our simulation was to create a setting where the null hypothesis of alpha being equal to zero is true. As we can see in Table 4, approximately 50% of the estimated alpha values are greater than zero. This means that the average estimate of alpha is close to zero, which suggests that we have been successful in creating a simulation with the desired properties. Had these values been much larger or smaller than 50%, we would have had reason to doubt the success of our study design.

Table 5.

Number of simulations when one or more models failed to converge (out of 1000)

Model 1 year 6 months 3 months

GARCH 3 31 38

TGARCH 0 1 17

IGARCH 6 29 81

As we can see in Table 5, longer evaluation periods make convergence issues less likely. The model which converges most often is the TGARCH model, followed by the GARCH model. For the evaluations performed using data ranging over 1 year and 3 months, the IGARCH model fails to converge more often than the other two models. For the evaluations performed using data ranging over 6 months, the GARCH model fails to converge more often than the other two models.

Although there is risk of bias due to the missing values for the shorter evaluation periods, the loss for estimations using 1 year of data is between 0% and 0.6%, so it is unlikely that such would cause substantial bias in these cases. For evaluations using 6 months of data, the loss ranges between 0.1% and 3.1%, and for evaluations using 3 months of data, the loss is between 1.7% and 8.1%. If we go back to Tables 1-3 and compare the analyses using different lengths of evaluation, we can also note that there is no clear pattern in what happens to the significant share of p-values associated with the point estimates when the evaluation period is shortened.

This suggests that the missing values do not introduce a substantial bias into the analyses using shorter evaluation periods. Even if we choose to trust only the estimations using a 1-year evaluation period, we see a clear pattern of the GARCH and IGARCH models rejecting the true null hypothesis significantly less often than the chosen significance level would imply, and of the TGARCH model rejecting it significantly more often. We can thus conclude that none of these models performs as expected with respect to the type 1 error rate being equal to the chosen significance level.


6. Discussion

The purpose of this thesis is to determine whether the rate at which we reject the true null hypothesis of the intercept term being zero equals the chosen level of significance when using GARCH (1,1), TGARCH (1,1), or IGARCH (1,1) to model the variance of the error term. One context in which the intercept is of particular interest is when "Jensen's alpha" is used to evaluate asset performance: the return of the asset of interest is regressed against a market index, and the intercept represents risk-adjusted profit. This is a common measure of asset performance relative to an index.

We examine whether the risk of type 1 error equals the significance level using a Monte Carlo simulation. Historical data on the Standard & Poor's 500 Index and individual stocks in the Dow Jones Industrial Average are used to estimate GARCH parameters, which form the basis of our simulations of realistic stock and market index returns over periods of 6 months, 1 year, and 2 years. To make the study as similar as possible to a real test of a trading strategy, we treat the first half of each simulated series as "historical" data and use the alpha values estimated over this period to form portfolios. The second half of each series is then used for evaluation, so the performance of the GARCH models is tested on data ranging over 3 months, 6 months, and 1 year, respectively. The alpha of each portfolio during the evaluation period is estimated using the three GARCH-type models, and the significance of the point estimate of alpha is assessed with a one-sided test. Finally, we test whether the proportion of p-values smaller than the chosen significance level (i.e., the share of falsely rejected true null hypotheses) is indeed equal to 0.1, 0.05, and 0.01, respectively.
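A minimal sketch, under stated assumptions, of the kind of estimation described above (this is not the authors' implementation): Jensen's alpha is the intercept of a market-model regression with GARCH(1,1) errors, fitted here with the Python arch package. The parameter label "Const" follows arch's naming for the intercept of an LS mean model and may need adjusting.

```python
# Sketch: estimate alpha with GARCH(1,1) errors and derive a one-sided p-value
# for H1: alpha > 0 from the reported two-sided p-value.
import numpy as np
from arch import arch_model

def one_sided_alpha_test(portfolio_excess, market_excess):
    """Return the estimated alpha and a one-sided p-value for H1: alpha > 0."""
    x = np.asarray(market_excess).reshape(-1, 1)  # single exogenous regressor
    res = arch_model(portfolio_excess, x=x, mean="LS",
                     vol="GARCH", p=1, q=1).fit(disp="off")
    alpha_hat = res.params["Const"]
    p_two_sided = res.pvalues["Const"]
    p_one_sided = p_two_sided / 2 if alpha_hat > 0 else 1 - p_two_sided / 2
    return alpha_hat, p_one_sided
```

Repeating such an estimation over the simulated evaluation periods and recording how often the one-sided p-value falls below 0.10, 0.05, and 0.01 yields the empirical rejection rates that are compared with the nominal significance levels.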

Our results indicate that none of the three GARCH-type models performs as expected. The TGARCH model consistently produces significant intercept estimates at rates higher than the chosen significance level, whereas the GARCH and IGARCH models consistently do so at lower rates. At the 5% and 10% levels, these deviations are significant at every commonly used significance level; at the 1% level, they are significant in 7 out of 9 analyses. We can thus conclude that none of these models rejects the true null hypothesis at the rate we would expect for the given significance level. This means there is a risk of incorrect inferences when these GARCH-type models are used to evaluate the intercept term, as is done, for example, in portfolio evaluation.


One idea for future research is to test other GARCH models, such as the NGARCH, EGARCH, and GJR-GARCH. It might be revealing to see whether the pattern of symmetric models having too low rejection rates and asymmetric models having too high rejection rates persists for other GARCH-type specifications. Another important topic is to develop methods for dealing with rejection rates of a true null hypothesis that differ significantly from the chosen significance level. It would also be interesting to examine the power of the test for the intercept of the market model under different GARCH specifications for the error term.


References

Articles

Agnolucci, Paolo "Volatility in crude oil futures: A comparison of the predictive ability of GARCH and implied volatility models" (2009), Energy Economics, 31 (2), pp. 316-321

Belkaoui, Ahmed "Canadian Evidence of Heteroscedasticity in the Market Model" (1977), Journal of Finance, 32 (4), pp. 1320-1324

Bey, Roger P. and Pinches, George E. "Additional Evidence of Heteroscedasticity in the Market Model" (1980), Journal of Financial and Quantitative Analysis, 15 (2), pp. 299-322

Bollerslev, Tim "Generalized Autoregressive Conditional Heteroscedasticity" (1986), Journal of Econometrics, 31 (3), pp. 307-327

Bollerslev, Tim and Engle, Robert F. "Common Persistence in Conditional Variances" (1993), Econometrica, 61 (1), pp. 167-186

De Bondt, Werner F. M. and Thaler, Richard "Does the Stock Market Overreact?" (1985), The Journal of Finance, 40 (3), pp. 793-805

Brenner, Menachem and Smidt, Seymour "A Simple Model of Non-Stationarity of Systematic Risk" (1977), The Journal of Finance, 32 (4), pp. 1081-1092

Breuer, Thomas and Jandacka, Martin "Temporal Aggregation of GARCH Models: Conditional Kurtosis and Optimal Frequency" (2008) (Accessed at http://ssrn.com/abstract=967824)

Brown, Forrest B. "Recent advances and future prospects for Monte Carlo" (2010) (Accessed at https://ccse.jaea.go.jp/ja/conf/snamc2010/brown.pdf)

Brown, Stephen "Heteroscedasticity in the Market Model: A Comment" (1977), The Journal of Business, 50 (1), pp. 80-83

Chou, Ray Y. "Volatility Persistence and Stock Valuations: Some Empirical Evidence Using GARCH" (1988), Journal of Applied Econometrics, 3, pp. 279-294

Engle, Robert "Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation" (1982), Econometrica, 50 (4), pp. 987-1007

Franses, Philip H. and van Dijk, Dick "Forecasting stock market volatility using (non-linear) GARCH models" (1996), Journal of Forecasting, 15, pp. 229-235

French, Jordan "Back to the Future Betas: Empirical Asset Pricing of US and Southeast Asian Markets" (2016), International Journal of Financial Studies, 4 (15)

French, Kenneth R. and Schwert, G. William and Stambaugh, Robert F. "Expected stock returns and volatility" (1987), Journal of Financial Economics, 19 (1), pp. 3-29

Glosten, Lawrence R. and Jagannathan, Ravi and Runkle, David E. "On the Relation between the Expected Value and the Volatility of the Nominal Excess Return on Stocks" (1993), The Journal of Finance, 48 (5), pp. 1779-1801

De Groot, Wilma and Huij, Joop and Zhou, Weili "Another Look at Trading Costs and Short-Term Reversal Profits" (2011), Journal of Banking & Finance, 36 (2), pp. 371-382

Hansen, Peter and Lunde, Asger "A forecast comparison of volatility models: does anything beat a GARCH(1,1)?" (2005), Journal of Applied Econometrics, 20 (7), pp. 873-889

Jegadeesh, Narasimhan and Titman, Sheridan "Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency" (1993), The Journal of Finance, 48 (1), pp. 65-91

Jensen, Michael "The Performance of Mutual Funds in the Period 1945–1964" (1968), The Journal of Finance, 23 (2), pp. 389-416

Karolyi, G. Andrew "A Multivariate GARCH Model of International Transmissions of Stock Returns and Volatility: The Case of the United States and Canada" (1995), Journal of Business & Economic Statistics, 13 (1), pp. 11-25

Lamoureux, Christopher G. and Lastrapes, William D. "Persistence in variance, structural change, and the GARCH model" (1990), Journal of Business & Economic Statistics, 8 (2), pp. 225-234

Lee, Dongkeun and Liu, David "Monte-Carlo Simulations of GARCH, GJR-GARCH and constant volatility on NASDAQ-500 and the 10 year treasury" (2014) (Accessed at https://sites.duke.edu/djepapers/files/2016/10/leedongkeundjepapers.pdf)

Lehmann, Bruce N. "Fads, Martingales, and Market Efficiency" (1990), The Quarterly Journal of Economics, 105 (1), pp. 1-28

Lim, Ching Mun and Sek, Siok Kun "Comparing the performances of GARCH-type models in capturing the stock market volatility in Malaysia" (2013), Procedia Economics and Finance, 5, pp. 478-487

Lintner, John "The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets" (1965), The Review of Economics and Statistics, 47 (1), pp. 13-37

Markowitz, Harry "Portfolio Selection" (1952), The Journal of Finance, 7 (1), pp. 77-91

Martin, John D. and Klemkosky, Robert C. "Evidence of Heteroscedasticity in the Market Model" (1975), The Journal of Business, 48 (1), pp. 81-86

Metropolis, Nicholas and Ulam, Stanisław "The Monte Carlo Method" (1949), Journal of the American Statistical Association, 44 (247), pp. 335-341

Miller, M. H. and Scholes, M. "Rates of Return in Relation to Risk: A Reexamination of Some Recent Findings" (1972), Studies in the Theory of Capital Markets (23), edited by Jensen, M. C., Praeger: New York

Mossin, Jan "Equilibrium in a Capital Asset Market" (1966), Econometrica, 34 (4), pp. 768-783

Nelson, Daniel B. "Stationarity and Persistence in the GARCH(1,1) Model" (1990), Econometric Theory, 6 (3), pp. 318-334

Peiró, Amado "The distribution of stock returns: international evidence" (1994), Applied Financial Economics, 4 (6), pp. 431-439

Perchet, Romain and de Carvalho, Raul Leote and Heckel, Thomas and Moulin, Pierre "Predicting the success of volatility targeting strategies: Application to equities and other asset classes" (2015), The Journal of Alternative Investments, 18 (3), pp. 21-38

Piccoli, Pedro and Chaudhury, Mo and Souza, Alceu and da Silva, Wesley Vieira "Stock overreaction to extreme market events" (2017), North American Journal of Economics and Finance, 41, pp. 97-111

Rabemananjara, R. and Zakoian, J. M. "Threshold Arch Models and Asymmetries in Volatility" (1993), Journal of Applied Econometrics, 8 (1), pp. 31-49

Roll, Richard "A critique of the asset pricing theory's tests Part I: On past and potential testability of the theory" (1977), Journal of Financial Economics, 4 (2), pp. 129-176
