Department of Business Administration

Master's Program in Accounting & Master's Program in Finance
Master's Thesis in Business Administration III, 30 Credits, Spring 2020

Supervisor: Jörgen Hellström

Implied Volatility and Historical Volatility: Empirical Evidence About the Content of Information and Forecasting Power

Mohammad Aljaid & Mohammed Diaa Zakaria


ABSTRACT

This study examines whether the implied volatility index provides more information for forecasting volatility than historical volatility within GARCH family models. To this end, volatility is forecast in two main markets: the United States, through the widely used Standard & Poor's 500 index and its corresponding volatility index, the VIX, and Europe, through the Euro Stoxx 50 and its corresponding volatility index, the VSTOXX. To evaluate in-sample information content, the conditional variance equations of GARCH (1,1) and EGARCH (1,1) are augmented with implied volatility as an explanatory variable. Realized volatility, generated from daily squared returns, is employed as a proxy for true volatility. To examine out-of-sample forecast performance, one-day-ahead rolling forecasts are generated and evaluated with Mincer–Zarnowitz and encompassing regressions, and the predictive power of implied volatility is assessed using the mean squared error (MSE). The findings suggest that including implied volatility as an exogenous variable in the conditional variance of GARCH models improves model fit and decreases volatility persistence. Furthermore, the significance of the implied volatility coefficient suggests that implied volatility contains pertinent information for explaining variation in the conditional variance. Implied volatility is found to be a biased forecast of realized volatility, and the empirical findings of the encompassing regression tests imply that the implied volatility index does not surpass historical volatility in forecasting future realized volatility.

Keywords: Implied Volatility, Mincer–Zarnowitz Regression, GARCH Model, Realized Volatility, Predictive Power.

Table of Contents

1 Introduction
1.1 Background
1.2 GARCH and EGARCH Models
1.3 Problematization
1.4 Knowledge Gap
1.5 Research Question
1.6 The Purpose of The Study
1.7 Delimitations
1.8 Definitions of Terms
2 Theoretical Methodology
2.1 Choice of Research Area
2.2 Research Philosophy and Perspectives
2.3 The Paradigm
2.4 Ontological Assumptions
2.5 Epistemological Assumption
2.6 Axiological Assumption
2.7 Rhetorical Assumption
2.8 Research Approach and Methodological Assumption
2.9 Research Method
2.10 Research Design
2.11 Data Collection Method in Quantitative Research
3 Theoretical Framework
3.1 Review of Prior Studies
3.2 Efficient Market Hypothesis
3.3 The Random Walk Theory
3.4 No Riskless Arbitrage
3.5 Black and Scholes Model
3.6 Choice of The Theory
4 Fundamental Concepts of Volatility
4.1 Volatility
4.3 Modelling Volatility
5 Practical Methodology
5.1 Research Question and Hypotheses
5.2 Data Sample
5.3 Input Parameters
5.4 Statistical Tests
5.5 Preliminary Data Analysis
5.6 Model Specifications
6 Empirical Testing and Findings
6.1 In-sample Performance
6.2 Out-of-sample Volatility Forecasts Evaluation
6.3 Mincer–Zarnowitz Regression Test
6.4 Encompassing Regression Test
7 Conclusion
7.1 Conclusion
7.2 Further Research
7.3 Implications
7.4 Contributions
7.5 Ethical and Societal Considerations
8 References

List of Figures

Figure 1.1 Markets wake up with a jolt to the implications of COVID-19 (The Economist, 2020)
Figure 4.1 Time plot of the VIX index of the European equity index, Jan 2005 to Dec 2019
Figure 4.2 Time plot of the returns of OMXS30, the stock market index for Stockholm, Jan 2005 to Dec 2019
Figure 5.1 Time plots of stock indexes for the European stock market and the S&P 500, Jan 2005 to Dec 2019
Figure 5.2 Daily co-movement of the S&P 500 index and its respective volatility index, i.e., the VIX index, Jan 2005 to Dec 2019
Figure 5.3 Daily co-movement of the Euro Stoxx 50 index and its respective volatility index, i.e., the VSTOXX index, Jan 2005 to Dec 2019
Figure 5.4 Time-series plots of returns for the S&P 500 Index and the Euro Stoxx 50 Index, Jan 2005 to Dec 2019
Figure 6.1 Time-series plot of daily squared returns for the Eurozone stock market, Jan 2005 to Dec 2019
Figure 6.2 Time-series plot of daily squared returns for the S&P 500 stock market index, Jan 2005 to Dec 2019

List of Tables

Table 5.1 Descriptive statistics: January 2005 to December 2019
Table 5.2 Different specifications of ARIMA(p,d,q) models, Eurozone (Euro Stoxx 50)
Table 5.3 Different specifications of ARIMA(p,d,q) models, USA (S&P 500)
Table 6.1 Estimation of GARCH, EGARCH and their corresponding VIX
Table 6.2 Estimation of GARCH, EGARCH and their corresponding VIX
Table 6.3 Descriptive statistics for realized volatility
Table 6.4 Mincer–Zarnowitz regression test
Table 6.5 Encompassing regression test


1 Introduction

1.1 Background

For a long time, the United States has been considered the financial hub of the world and the most influential player in global markets. As we write these words, the world is facing a global health disaster in the form of the COVID-19 outbreak that originated in China, and the outbreak has cast its shadow over the global financial markets. News from Italy, which at the time had the highest cluster of infections outside Asia, led to an 8.92% fall in the S&P 500 index on February 28 (Clifford, 2020). Amid this global health crisis, markets are nervous, and investors are looking for a safe haven for their investments. As uncertainty rises, investors are trying to assess which assets are most affected by the shock. Declining copper prices are an influential indicator that markets are slowing down. So far, the most affected stocks appear to be those of firms that depend on remote supply chains, such as carmakers and airlines, which had to cancel flights in and out of China as many countries restricted air traffic in an effort to limit the spread of the virus (The Economist, 2020). Consequently, the stocks most interconnected with China, such as oil firms, have plunged sharply. Meanwhile, gold prices have reached their highest levels in seven years (Lewis, 2020), the US dollar exchange rate has declined as the Federal Reserve tries to support the stock market (Smialek & Tankersley, 2020), and the yield on ten-year Treasury bonds fell to 1.29% on February 27 (Smith, 2020).

At the time of writing, the full impact and repercussions of the COVID-19 crisis on the global market are unknown, and a global sense of unease and discomfort prevails among investors, who expect a further and deeper crack in the global economy caused by the virus outbreak. Before this crisis, markets were doing well, and SEB Bank had forecast a soft landing with a milder recession (SEB Bank, 2019, p. 6). Now the dominant concerns among investors are the opaque structure of financial instruments that depend on low volatility and the bloated credit market.


Volatility has also increased significantly, as shown in Figure 1.1, highlighting the significance of fluctuations and time-varying financial uncertainty and their impact on other economic variables (The Economist, 2020). Generally speaking, uncertainty is defined as volatility of random variables that market participants cannot anticipate. It is worth noting that uncertainty can have negative impacts on employment, investment, consumption, and fund allocation decisions (Jurado et al., 2015, p. 1177). In the same vein, Kearney (2000, p. 31-32) argues that significant movements in stock returns can have crucial effects on a firm's financing decisions, investment decisions, and other economic cycles. The importance of studying volatility dates back to the 1987 financial crash, since which scholars have paid great attention to volatility, its causes, and its effects.

It is well acknowledged that stockbrokers, particularly in the stock market, base their investment decisions not only on domestic information from the local market but also on information generated by global markets. This is due to the accelerating globalization of financial markets, driven by the comparatively free flow of commodities and funds alongside the upheaval in information technology (Koutmos & Booth, 1995, p. 747). Rational investment demands quick reaction to, and accurate evaluation of, new information. This implies that market assessment is reasonable and logical and that every stock will be valued to earn returns consistent with its risk. It also suggests that only anticipated events shift prices in a predictable way; shocks and unexpected events cannot follow a forecast model, so financial shocks are uncorrelated. According to the efficient market theory, any new information arriving on the market should be quickly incorporated into stock prices. However, not all events are predictable: once new information arrives in the stock market, prices adjust rapidly to reflect it. The significance of new information arriving in the stock market is assessed by its ability to change investors' attitudes toward risk and return (Birru & Figlewski, 2010, p. 1).

It is important to realize the disastrous impacts of financial crises, which are often seen in collapsing asset prices, falling real estate and stock prices, and failures in the banking sector, accompanied by rising unemployment and recession in production (Reinhart & Rogoff, 2009, p. 466). Under these circumstances, financial traders and organizations seek tools that help them assess the risks they are exposed to and the extent to which expected returns are proportional to those risks. Perhaps the earliest of these tools is bond duration, which in practice measures the sensitivity of a bond's market value to changes in market interest rates; more broadly, a portfolio manager can immunize the value of a portfolio by matching its duration to the desired time horizon (Reilly & Sidhu, 1980, p. 58). Other risk management tools have since emerged, including but not limited to the CAPM, the Black-Scholes option-pricing model, stress testing, value-at-risk (VaR), RiskMetrics, and CreditMetrics.

In practice, valuation models measure risk through market volatility, and fluctuations in market volatility influence the expected returns on all financial instruments. Accurate measurement of changes in volatility can therefore yield a coherent picture of variations in returns over time (Harvey & Whaley, 1992, p. 43). Observably, the volatilities embodied in option prices are often used to generate information about future market volatility. Broadly speaking, implied volatilities represent the expectations of market traders about market volatility in the short term. It is worth noting that implied volatility is usually extracted by matching observed market option prices with the theoretical prices calculated according to the Black-Scholes model (Goncalves & Guidolin, 2006, p. 1591). More importantly, implied volatilities are sufficiently important that they are routinely reported in financial news services and closely followed by many financial professionals. For all these reasons, the accurate estimation of implied volatility and its information content has received great attention in the financial literature (Corrado & Miller, 2005, p. 340).

Against this background, the implied volatility extracted from option prices is generally accepted as a device for predicting the future volatility of the underlying financial asset over the lifetime of the option. More importantly, under the efficient market hypothesis, implied volatility should contain all the information incorporated in other variables, including historical volatility, for explaining future variation in volatility (Yu et al., 2010, p. 1; Christensen & Prabhala, 1998, p. 126). Consistently, Becker et al. (2006, p. 139) argue that implied volatility should absorb all relevant conditioning information if it is to be a satisfactory tool for predicting the future volatility of underlying assets. Comparatively, Canina and Figlewski (1993, p. 659) argue that the main reason for the general acceptance of implied volatility as the best predictor of future volatility is simply that it represents the market's best expectation of future volatility.

By the same token, Day and Lewis (1992, p. 267) considered the volatility embodied in option prices a measure for predicting the average volatility of the underlying asset over the remaining life of the option, and argued that the predictive power of implied volatility reflects the extent to which the informational content of option prices is subsumed in their volatilities. Moreover, Fleming (1998, p. 318) argues that all inputs to the option pricing model except the volatility of the underlying asset are objectively determined. It follows that implied volatility subsumes all available information in option prices only if the option pricing model is correct and the market is efficient, i.e., all relevant information is impounded in option prices; hence implied volatility should outperform historical volatility in predicting the future volatility of the underlying asset. In contrast, Pong et al. (2004, p. 2541-2542) start from the observation that the future volatility of a stock price can be forecast either by studying the past behaviour of that price, i.e., historical information, or by using the implied volatility that can be recovered from the value of an option written on the stock.

In 1993, the Chicago Board Options Exchange (CBOE) launched the CBOE Volatility Index, i.e., the VIX index, originally constructed to reflect the market's expectation of volatility over the next 30 days implied by at-the-money S&P 100 index option prices. The VIX quickly became the principal barometer of US stock market volatility. It is commonly displayed in leading financial publications such as the Wall Street Journal and Barron's, as well as in financial news on CNBC, Bloomberg TV, and CNN/Money, and is popularly regarded as a "fear gauge" (CBOE Volatility Index, 2019, p. 3). Ten years later, on September 22, 2003, the CBOE updated the structural design of the implied volatility index and introduced an improved volatility index based on the S&P 500, which is now widely known as VIX (Pati et al., 2018, p. 2553). The driving factor behind using options on the S&P 500 rather than the S&P 100 is that the S&P 500 is the primary US stock market index not only for the options market but also for hedging (Fernandes et al., 2014, p. 1).

It is important to realize that the only principal difference between the VIX and a stock price index, such as the DJIA, is that the VIX measures volatility while the latter measures stock prices (Whaley, 2009, p. 98). The VIX is regularly used by investors and reflects their view of future stock market volatility (Whaley, 2000, p. 12). Generally speaking, high VIX values indicate anxiety and turmoil in the stock market, with stock prices falling to unusually low levels and often followed by sharp rebounds. Low VIX values, on the other hand, reflect optimism among market traders, preparing the market for stability and increasing the probability of market advances (Fernandes et al., 2014, p. 2).

1.2 GARCH and EGARCH Models

Volatility is a measure of risk when valuing an asset; hence, it is crucial to measure the volatility connected to asset prices. One of the most distinctive features of asset price volatility is that it is unobservable. Many models have been developed to estimate and measure the volatility of asset prices, as well as to provide accurate estimates of their future volatility (Tsay, 2013, p. 176-177).

Following Day and Lewis (1992, p. 268), one of the most widely used models for forecasting future (expected) market volatility is the GARCH model, which builds on the relationship between the historical volatility of the market and the conditional, or expected, market risk premium. A distinctive feature of a GARCH forecasting model is its statistical technique, which allows the volatility of asset returns to change over time according to a generalized autoregressive conditional heteroscedasticity process. This makes the GARCH model the most appealing choice for this study, as we intend to examine whether the forecasting model can be enhanced. However, the GARCH model does not differentiate between positive and negative shocks to volatility, so we also consider the EGARCH model, with the intention of enhancing predictive power and comparing the two models. A minimal sketch of fitting both baseline models follows.
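The snippet below is a minimal sketch, ours rather than the thesis's estimation code, of how GARCH (1,1) and EGARCH (1,1) can be fitted in Python with the arch package; the returns series here is a simulated placeholder, and augmenting the variance equation with implied volatility, as done later in the thesis, would require a custom likelihood, since arch does not expose exogenous variance regressors directly.

```python
# Minimal sketch: fit GARCH(1,1) and EGARCH(1,1) to daily returns.
# `returns` is a placeholder; in the thesis it would be daily percentage
# log returns on the S&P 500 or Euro Stoxx 50.
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(0)
returns = pd.Series(rng.standard_normal(3000))  # stand-in for real index returns

# Symmetric GARCH(1,1): positive and negative shocks move variance equally.
garch = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1).fit(disp="off")

# EGARCH(1,1) with an asymmetry term (o=1), so negative shocks can raise
# variance more than positive shocks of the same magnitude.
egarch = arch_model(returns, mean="Constant", vol="EGARCH", p=1, o=1, q=1).fit(disp="off")

print(garch.params)   # mu, omega, alpha[1], beta[1]
print(egarch.params)  # mu, omega, alpha[1], gamma[1], beta[1]
print(garch.forecast(horizon=1).variance.iloc[-1])  # one-day-ahead variance
```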

1.3 Problematization

In practice, traditional finance scholars have used the past behaviour of asset prices to develop models for estimating future volatility; forecasting future volatility in this way is backward-looking by nature. The alternative approach to projecting the future is to use the implied volatility embodied in option prices. The rationale is the common view that option prices fundamentally depend on expectations of future volatility, so expected volatility can be estimated by reconciling market option prices with the prices implied by the Black-Scholes formula (Dumas et al., 1998, p. 2060). According to Liu et al. (2013, p. 861), implied volatility provides a better estimate of market uncertainty because it incorporates historical volatility information as well as investors' anticipation of future market conditions reflected in current option prices. It is widely believed that implied volatility represents the best proxy for future volatility: if the options market is fully efficient, the implied volatility embodied in option prices should reflect the best forecast of future volatility (Bentes, 2017, p. 243).

On the contrary, the findings documented by Becker et al. (2006, p. 152) point out that there is no significant relationship between implied volatility and future volatility, and that the implied volatility index contains no information beyond that in other variables. Alternatively, Christensen and Prabhala (1998, p. 125) concluded that implied volatility surpasses historical volatility in forecasting future volatility, adding that implied volatility includes all past information, unlike volatility extracted from historical information alone. Their argument was that if the options market is efficient, implied volatility should serve as the best estimate of future volatility, and they attributed the difference between their results and those of previous studies to a longer time series of data and to changes in the financial system around the 1987 crash. More recently, Pati et al. (2018, p. 2566) concluded that implied volatility contains relevant information beyond that suggested by GARCH family models; however, volatility estimated from the past behaviour of the asset also contains relevant information, and implied volatility is a biased forecast of future volatility.

Our problematization stems from the common view in previous research that implied volatility produces a better forecast of the volatility of the underlying asset over the lifetime of the option (Yu et al., 2010, p. 1; Liu et al., 2013, p. 861), which means it is a short-term forecast. Volatility forecasts based on historical behaviour, on the other hand, can cover longer horizons than implied volatility, but at the expense of forecast accuracy. In other words, we have two estimation approaches: implied volatility gives accurate estimates, but only for the short term, while historical volatility gives a longer estimation horizon, but less accurately. The dilemma at hand is whether we can derive or develop an estimator that combines the advantages of both approaches, a hybrid model that forecasts volatility over a horizon longer than the lifetime of the option while remaining accurate.

1.4 Knowledge Gap

The knowledge gap stems from the following arguments. First, owing to the growing turmoil and shakiness of stock markets, volatility has attracted great attention from researchers, and the core of the debate centres on the predictive power of implied volatility and its informational content (Bentes, 2017, p. 241). The empirical evidence on this issue, i.e., on the importance of implied volatility in forecasting the future and its ability to absorb all available information, remains inconclusive and without consensus. Previous studies such as Latane and Rendleman (1976), Fleming (1998), Christensen and Prabhala (1998), Jiang and Tian (2005), Fung (2007), Chang and Tabak (2007), and Pati et al. (2018) presented results emphasizing, to varying degrees, that implied volatility is better than historical volatility at forecasting future volatility. In contrast, studies such as Canina and Figlewski (1993), Pong et al. (2004), Becker et al. (2006), Becker et al. (2007), and Mohammad and Sakti (2018) documented results confirming that historical volatility is better than implied volatility at forecasting future volatility and contains additional relevant information compared with the volatility extracted from option prices. Clearly stated, the competing empirical explanations of the implied volatility phenomenon remain inconclusive and incomplete. The failure of the literature to reach a definite conclusion about the predictive power of implied volatility has motivated us to conduct this study, as further research in this area is still required to settle the confusion and contribute to building consensus.

Second, a basic limitation of implied volatility as a forecasting tool is that it only applies over the remaining life of the option, i.e., it is a short-term prediction. Extending the prediction horizon of implied volatility would therefore be a considerable contribution to the finance literature. Third, in line with Pati et al. (2018, p. 2553), we argue that the implied volatility produced by indexes has not been researched much in the literature; examining the values of implied volatility indexes against their corresponding stock market indexes will therefore contribute to finance theory and the literature. Given the differences in depth, liquidity, and regulatory structure between the two stock markets, it is interesting to conduct the study on a new set of data for these market indices. This may support previously reported results and increase the robustness of our study.

1.5 Research Question

The discussion above led us to develop the following research question:

"Can implied volatility be used to improve the in- and out-of-sample performance of the GARCH (1,1) and EGARCH (1,1) models?"

To answer this question, we formulated the following sub-questions:

• Does implied volatility add information when included as an explanatory variable in the conditional variance equation?

• Does implied volatility contain information about future realized volatility?

• Is the implied volatility forecast unbiased?

• Is implied volatility better than historical volatility at explaining future realized volatility (i.e., does it have greater predictive power)?

• Does the volatility forecast based on implied volatility encompass all relevant information contained in historical volatility?

1.6 The Purpose of The Study

This study seeks to address whether implied volatility includes more information than historical volatility for forecasting stock price movements, and to provide new insights into the volatility of the underlying assets on which options are written. We also aim to investigate the predictive power of implied volatility by studying two major global indices: the S&P 500, one of the most followed stock market indices in the USA, and the Euro Stoxx 50, which represents the stock market of the Eurozone.


Additionally, we investigate the informational content of implied volatility in comparison with historical volatility. For this purpose, we examine whether the implied volatility forecast is biased by exploring how implied volatility reacts to both good and bad events in the stock markets. Furthermore, the empirical evidence helps explain implied volatility movements in the US and Europe, which together account for the overwhelming majority of financial transactions at the global level.

To assess the predictive performance of the forecasting models, we utilize Mincer-Zarnowitz (MZ) regressions. Equally important, we employ several econometric models so as to increase the robustness of the documented results. By all means, this study seeks to provide useful implications for portfolio managers, risk managers, and options traders, who may formulate trading strategies that produce earnings by recognizing mispriced options, as well as for academic researchers.
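As an illustration of these evaluation regressions, the following sketch runs a Mincer-Zarnowitz regression and an encompassing regression with statsmodels; all three series are simulated placeholders rather than the thesis data, and unbiasedness corresponds to an intercept of zero and a slope of one in the MZ regression.

```python
# Hedged sketch of the out-of-sample evaluation regressions.
# rv: realized-volatility proxy; iv, hv: implied- and historical-volatility
# forecasts. All three are simulated placeholders here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
rv = rng.chisquare(df=2, size=500)         # realized volatility proxy
iv = rv + rng.normal(0.2, 0.5, size=500)   # implied-volatility forecast
hv = rv + rng.normal(0.0, 0.8, size=500)   # GARCH/historical forecast

# Mincer-Zarnowitz: rv_t = a + b * iv_t + e_t; unbiasedness requires a=0, b=1.
mz = sm.OLS(rv, sm.add_constant(iv)).fit()
print(mz.params, mz.rsquared)

# Encompassing: rv_t = a + b1 * iv_t + b2 * hv_t + e_t. If b2 stays significant,
# the historical forecast carries information that implied volatility misses.
enc = sm.OLS(rv, sm.add_constant(np.column_stack([iv, hv]))).fit()
print(enc.params, enc.pvalues)

# Mean squared error, used to rank the competing forecasts.
print(((rv - iv) ** 2).mean(), ((rv - hv) ** 2).mean())
```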

1.7 Delimitations

Some expected limitations should be noted. First, we use data only from markets that have introduced implied volatility indices, so our study is limited to these markets, and the findings will be valid only for them or for countries with similar economies. Second, and more importantly, there is a lack of finance theories that constitute foundations for this type of study; the documented results will therefore be practical in nature rather than theoretical.

1.8 Definitions of Terms

Implied volatility: The volatility embodied in option prices, which ordinarily contains information about the market's expectations of the future volatility of the underlying asset over the lifetime of the option. In practice, implied volatility is calculated by matching market option prices to Black-Scholes theoretical prices and then solving for the unknown volatility, given the price of the underlying asset and the terms of the option contract (Goncalves & Guidolin, 2006, p. 1591).
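As a concrete illustration of that procedure, the sketch below, our example with hypothetical inputs rather than code from the thesis, backs the implied volatility out of a quoted European call price by root-finding on the Black-Scholes formula:

```python
# Back out implied volatility from a European call price by inverting
# the Black-Scholes formula with Brent's root finder.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    """Solve bs_call(..., sigma) = observed market price for sigma."""
    return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-6, 5.0)

# Hypothetical at-the-money call: spot 100, strike 100, six months, 1% rate.
print(implied_vol(price=10.45, S=100, K=100, T=0.5, r=0.01))  # roughly 0.36
```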

Implied volatility index (VIX): Like equity market indexes (S&P 500, DJIA, OMXS30), except that the VIX measures volatility while the others measure prices. It was launched in 1993 to provide a benchmark of the expected short-term volatility of the underlying assets and, more importantly, a volatility level on which derivative contracts can be written. The VIX is forward-looking, conveying the market's expectation of volatility in the short term (Whaley, 2009, p. 98). High VIX levels signal pessimistic expectations and tend to accompany sharp falls in equity prices, whereas low VIX levels reflect optimism and rising equity prices (Fernandes et al., 2014, p. 2).

Historical volatility: In the finance literature, this conventionally denotes the statistical deviation of observations from their mean value over a chosen time interval (Poon & Granger, 2003, p. 480).
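In symbols (our notation, not the thesis's), the usual estimator over a window of n returns is the sample standard deviation, often annualized by multiplying by the square root of 252 for daily data:

```latex
\hat{\sigma} \;=\; \sqrt{\frac{1}{n-1}\sum_{t=1}^{n}\left(r_t-\bar{r}\right)^{2}},
\qquad
\bar{r} \;=\; \frac{1}{n}\sum_{t=1}^{n} r_t
```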


ARCH model: The name stands for autoregressive conditional heteroscedasticity. The model is motivated by stylized facts of return series, such as volatility clustering: the classical linear regression model's (CLRM) assumption of a constant error variance does not make sense for such time series, so the model allows the variance of the error term to vary over time (Brooks, 2014, p. 423; Engle, 1982, p. 987).

GARCH model: An extension of Engle's ARCH model that allows the conditional variance to follow an ARMA process (Enders, 2015, p. 129). The general idea behind the GARCH model is that the variance is a function not only of lagged squared returns but also of lagged variance (Bollerslev, 1986, p. 309).
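Written out (our notation), the conditional variance equations behind these definitions, including an implied-volatility-augmented GARCH (1,1) of the kind the thesis estimates later, are:

```latex
\begin{aligned}
\text{ARCH}(1):\qquad              & \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 \\
\text{GARCH}(1,1):\qquad           & \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2 \\
\text{GARCH}(1,1)\text{ + IV}:\qquad & \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2 + \delta\,\mathrm{IV}_{t-1}^2
\end{aligned}
```

Here the epsilon term is the lagged return innovation, the lagged sigma-squared term is the previous conditional variance, and the sum of alpha and beta measures volatility persistence; a significant delta coefficient is the in-sample evidence that implied volatility adds information.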


2 Theoretical Methodology

This chapter is mainly concerned with the philosophical standpoint of this research. It delivers the framework used to construct and conduct the research in terms of research philosophy and perspective, the paradigm used, the ontological and epistemological stance, and the sampling method. Furthermore, it motivates each position the researchers have adopted, in order to establish the validity of the adopted knowledge stance and give a complete picture of the methodological stance.

2.1 Choice of Research Area

The estimation and forecasting of the future volatility of financial variables makes an important contribution to finance theory, owing to the growing role of volatility in various financial applications such as the pricing of financial derivatives, portfolio management, risk management, hedging, and asset allocation (Pati et al., 2018, p. 2552). Accordingly, correct estimation of this factor is necessary for sound financial decision making. Broadly, academics and financial market practitioners, such as investors, speculators, market regulators, and policymakers, traditionally build their expectations about volatility on the past behaviour of financial variables; in other words, they use the historical behaviour of financial instruments to explain expected variation and forecast future changes. Despite their usefulness, such approaches are by nature backward-looking. A second way to explore volatility is to estimate future volatility from option prices, on the understanding that option pricing fundamentally depends on expected forward volatility. Accurate option prices mean accurate expected future volatility; consequently, accurate estimation of the future movements of financial variables relies on accurate prices of the options written on them (Dumas et al., 1998, p. 2059).

Furthermore, options traders in practice consider implied volatility the most important input, not only in determining the option price but also in anticipating the future fluctuations of the underlying asset, thereby improving their ability to forecast the stock market (Mohammad & Sakti, 2018, p. 431). All of this motivates us to select implied volatility as our research area; further study is needed to assess its role in improving stock market forecasts. More importantly, with an educational background in finance and accounting, we find it both interesting and important to explore and examine implied volatility, realized volatility, volatility modelling and forecasting, and the information content of implied volatility in the European countries and the USA, thereby contributing to the ongoing discussion of the informational efficiency of implied volatility compared with realized volatility, and seeking to develop the concept of implied volatility so that it can be implemented in the financial industry.


2.2 Research Philosophy and Perspectives

Conducting research on a specific matter is an intricate task indeed. There is never a single way, angle, or perspective for any subject of study, nor is there a unified literature on how to define research (Collis & Hussey, 2013, p. 3). In everyday life, our perception and philosophical view of subjects and events, together with our purposes, eventually dictate how we choose to solve a problem. Hence, a philosophical and theoretical framework is essential for scientific research: it provides the founding stones and guidelines for undertaking it (Collis & Hussey, 2013, p. 43), as well as the rationale behind the research, a framework for interpreting and illuminating the results, and an understanding of the context of the social phenomenon (Bryman, 2012, p. 19).

2.3 The Paradigm

A paradigm is a set of criteria that guides how research should be undertaken (Collis & Hussey, 2013, p. 43); according to Saunders et al. (2012, p. 723), it is "a set of basic and taken-for-granted assumptions which underwrite the frame of reference, mode of theorizing and ways of working in which a group operates." Likewise, Bryman (2003, p. 4) defines a paradigm as the set of concepts and conventions that dictates what should be studied, how research should be carried out, and how findings should be interpreted. Hence, the philosophical assumptions of the research shape the research strategy, the data collection methods, and the data processing (Saunders et al., 2012, p. 104). The researcher should be able to argue for the chosen philosophical standpoint and motivate the choice of certain points of view over other philosophical stances (Saunders et al., 2012, p. 104).

Generally, there are two main paradigms: interpretivism and positivism. Picture a positivist researcher as a biologist or experimental physicist conducting an experiment in the lab: the positivist examines observable and measurable factors under controlled conditions, describes and measures the effect of changing one distinct factor on the whole phenomenon, and records the outcome. This is natural, as the approach was developed in the laboratory to understand natural phenomena. Positivism perceives a world governed by cause and effect, with mechanical relationships between its components (Saunders et al., 2012, p. 105). Knowledge originates from experiments that test an existing theory, with results that can be scientifically verified (Collis & Hussey, 2013, p. 44). The main trait of this paradigm is that it perceives social reality as "objective and singular", so that examining that reality does not affect it (Collis & Hussey, 2013, p. 43).

On the other hand, with the rise of the social sciences and of variables like culture, language, ethnic orientation, and other human characteristics such as values, customs, traditions, and religion, the perceived reality of a social phenomenon is no longer singular; it is largely subjective, since social reality is the perception formed in our minds about an event (Collis & Hussey, 2013, p. 43). For example, the same word, gesture, or behaviour could be considered friendly or hostile depending on the social circumstances. Scientists realized that social behaviour cannot be studied in isolation from society: it must be studied in its social context, and understanding the motives behind behaviour is essential (Saunders et al., 2012, p. 107). Positivism could not serve well here, since social and cultural values are relative, and it is almost impossible to reach consensus on a specific perception of a social phenomenon. Positivism was deemed inadequate, and interpretivism emerged to investigate social and behavioural phenomena and to integrate geographical, social, psychological, and cultural aspects. The researcher's own interpretation became a key factor in explaining the social phenomenon (Collis & Hussey, 2013, p. 45).

In this study, whose research question is "Can implied volatility be used to improve the in- and out-of-sample performance of the GARCH (1,1) and EGARCH (1,1) models?", we choose to work under the positivist approach. We seek an objective view of reality, which enables us to test previous theories, gain further evidence, provide more extensive analysis, and scientifically examine the conflicting evidence of previous research as well as our own hypotheses. We consider this approach adequate for our research, whereas the interpretivist approach is much less appropriate for quantitative research of this kind and would distort its objectivity.

2.4 Ontological Assumptions

Ontology is mainly concerned with the nature of social reality relative to social entities (Bryman, 2012, p. 32). This issue matters because it positions the researcher with respect to social reality and social actors. An objective ontological stance regards social reality as external to social entities: it cannot be influenced by actors, it constrains them, and it exists whether or not actors acknowledge it (Collis & Hussey, 2013, p. 49).

On the other side, social entities or actors are the essential building blocks of interpretivism (Saunders et al., 2012, p. 106). Social reality can be influenced by social entities; hence it is forged and described according to the entities' perceptions of one another (Bryman, 2012, p. 32), resulting in a more subjective view of social reality.

In this research, objectivity is the more appropriate stance, since we intend to conduct empirical tests to validate previous evidence scientifically and to test our hypotheses. We need to acquire a deeper understanding of implied volatility and how it changes in relation to its underlying assets. To clarify the nature of the relationships among the factors and answer the research question, objectivity is necessary, whereas interpretivism would introduce distortions into the picture of reality and could create flaws and limitations in our research process.

2.5 Epistemological Assumption

Epistemology refers not only to how knowledge is acquired but also to the yardstick we use to substantiate that knowledge and create a science (Collis & Hussey, 2013, p. 47). Similarly, according to the Lexico Dictionary by Oxford (www.lexico.com), "Epistemology is the theory of knowledge, especially with regard to its methods, validity, and scope as it distinguishes justified belief from opinion." In other words, it concerns the accredited tools we can use to obtain information.

For positivist researchers, information is acknowledged only if it is observable, measurable, and verifiable via appropriate empirical tools; the independence of the researcher also plays a vital role in maintaining the objectivity of the research (Collis & Hussey, 2013, p. 47).

Contrariwise, since the interpretive researcher regards social reality as subjective and participates in formulating it, the researcher's involvement and opinions are important for conveying their understanding of that reality. Hence, the interpretive researcher is not independent and also acquires evidence from participants in the social reality (Collis & Hussey, 2013, p. 47).

In our research, we adopt a positivist epistemological approach, which we believe is appropriate for our empirical testing. Generalizability is a major advantage of the positivist approach, and we hope our research findings will be useful and generalizable. Needless to say, our independence is important in this research, so interpretivism as a general approach would be inadequate.

2.6 Axiological Assumption

Axiology refers to the role of the researcher's values and ethics in the research process (Saunders et al., 2007, p. 134). Our values and ethics contribute greatly not only to how we view things but also to our sense of right and wrong and, ultimately, to the moral compass that guides our behaviour and actions. The researcher must be aware of the impact of their ethics and values on the research and decide whether that effect is positive or negative (Saunders et al., 2007, p. 134).

Because positivist researchers consider social reality to be independent, beyond reach, and uninfluenced by them (Collis & Hussey, 2013, p. 49), they have a greater proclivity towards neutralising the effect of their values on the research (Collis & Hussey, 2013, p. 48).

In contrast, interpretive researchers acknowledge that they have an axiological stance and that research cannot be value-free (Bryman, 2016, p. 39). They see the involvement of values and morals as essential to providing a meaningful explanation of the research findings (Collis & Hussey, 2013, p. 49); these values and morals may belong to the researcher as well as to the research subjects (Bryman, 2012, p. 40).

In this research, we study the relationship between changes in option prices and changes in the underlying assets. We see ourselves as independent of the subject of study and keen to present an objective view of our findings; we consider our research quasi value-free. Hence, our axiological stance throughout the research is closer to the positivist one.


2.7 Rhetorical Assumption

Following Collis and Hussey (2013, p. 48), language serves as the medium through which knowledge is communicated, so it is important to consider the language used in an academic paper a complementary part of the research paradigm. As mentioned, positivists see themselves as independent of social reality and the research subject and seek more objectivity and less bias, which makes it logical to use the passive voice when presenting ideas and explaining findings.

On the contrary, since interpretivists may involve their own ideas, beliefs, and moral stances, they can use the active voice to express their ideas and incorporate their values when explaining the research findings (Collis & Hussey, 2013, p. 48).

Considering the above, we believe the positivist approach is the more suitable for our research, as it will help us produce more measurable and generalizable results.

2.8 Research Approach and Methodological Assumption

Every researcher uses theory in their research; the question of research approach is therefore a question about the purpose of the research relative to the theory used. This is an important aspect, as the researcher needs to know whether the aim is to build a theory or to test one, since this shapes the research design (Saunders et al., 2012, p. 134). The research approach also helps the researcher make informed decisions about the research strategy, gain a distinct comprehension of the study findings, and soundly understand the kind of conclusions the research supports (Sekaran & Bougie, 2016, p. 30). The researcher can choose between the inductive, deductive, and abductive approaches.

Inductive research arises when a researcher wonders about the reason behind an observation (Sekaran & Bougie, 2016, p. 26). This approach is usually not based on previously developed theories (Saunders et al., 2016, p. 52) and is usually consistent with generating a theory, or evolving one to obtain a richer theoretical perspective, to answer the researcher's questions (Sekaran & Bougie, 2016, p. 26). In the inductive approach, commonly used in qualitative research, the researcher studies the participants' values and the relationships between them (Saunders et al., 2016, p. 168). Data collection under the inductive approach does not follow a single standard and may change during the research, as it is an interactive and natural process (Saunders et al., 2016, p. 168). Since the inductive approach is exploratory, it may use structured or semi-structured interviews to gather information (Collis & Hussey, 2013, p. 4).

On the other hand, the deductive approach is "scientific research [that] pursues a step-by-step, logical, organized, and rigorous method (a scientific method) to find a solution to a problem" (Sekaran & Bougie, 2016, p. 23). Here the researcher embraces a clear theoretical stance and begins collecting and analyzing data with the intention of testing that theory (Saunders et al., 2016, p. 52). A deductive strategy is most often coherent with a quantitative research approach (Bryman, 2016, p. 32) and usually tries to establish cause-and-effect relationships between concepts and research objects (Saunders et al., 2016, p. 146). Unlike the inductive approach, testing theories deductively requires gathering a large amount of data, and processing that data requires an advanced, structured methodology (Saunders et al., 2016, p. 150).

Between testing theory by deduction and generating theory by induction lies the abductive research strategy: "a form of reasoning with strong ties with induction that grounds social scientific accounts of social worlds [in] the perspectives and meanings of participants in those social worlds" (Bryman, 2016, p. 709). Abduction acts as a mediator between the two previous approaches: researchers gather data to explore a social phenomenon, identify main themes, and illuminate patterns in order to build a new theory or reshape an existing one, and then test the reshaped theory (Saunders et al., 2016, p. 145).

In our research, we will use an abductive approach, as it lets us adapt our data collection methods, research design, and data processing. This gives us a crafted research strategy tailored to our research question regarding changes in implied volatility and their relation to changes in the underlying asset's price. We believe the abductive approach gives us an agile research strategy, and the ability to combine quantitative and qualitative techniques when needed will empower us to shed more light on the conflicting evidence at hand.

2.9 Research Method

There are two main categories to choose from when designing research: quantitative and qualitative methods (Saunders et al., 2016, p. 164). The difference between them emerges from the kind of data itself (Neuman, 2014, p. 167): quantitative research is usually distinguished by collecting and processing numerical data, whereas qualitative research uses non-numerical data (Johnson & Christensen, 2014, p. 82). Practically speaking, however, business research often needs to combine both methods (Saunders et al., 2016, p. 165).

Positivism is mostly associated with the quantitative approach (Saunders et al., 2016, p. 167), and the quantitative approach is mostly connected to the deductive approach, which is usually used when the purpose of the research is to test theory. The qualitative approach, meanwhile, is often associated with interpretivism (Saunders et al., 2009, p. 52) and is often used to gain a deeper understanding of the theory itself or to investigate a certain phenomenon (Johnson & Christensen, 2014, p. 82).

In our research, the quantitative approach will be the main approach. It enables us to use statistical and mathematical models to test our formulated hypothesis on whether the predictive power of implied volatility can be extended over periods that exceed the lifetime of the option itself.


2.10 Research Design

Research design is often described as the road map the researcher follows to answer the research question (Saunders et al., 2016, p. 145); similarly, it "is the plan or strategy you will use to investigate your research" (Johnson & Christensen, 2014, p. 182). According to Collis and Hussey (2013, p. 97), choosing a research design matters because it gives the researcher a comprehensive plan for answering the research question in the best possible way. Since research approaches are quite different, an inaccurately chosen design can result in "miscommunication and misunderstandings" (Neuman, 2014, p. 167). The kind of research depends on its intended goal and can be separated into four types: descriptive, explanatory, exploratory, and predictive (Collis & Hussey, 2013, p. 3).

The main goal of descriptive research is to give a detailed account of the state of the components of a social phenomenon (Johnson & Christensen, 2014, p. 547). Descriptive research describes the characteristics of a situation, social phenomenon, or relationship (Neuman, 2014, p. 167); its target is not to explain causal relations between variables but to portray the variables and illustrate the relationships between them (Johnson & Christensen, 2014, p. 547). The outcome of such a study is usually an overall depiction of the social phenomenon (Neuman, 2014, p. 39).

After descriptive research comes the explanatory, or analytical, research design, which can be seen as an extension of descriptive research (Collis & Hussey, 2013, p. 5). Johnson and Christensen (2014, p. 547) define explanatory research as "testing hypotheses and theories that explain how and why a phenomenon operates as it does." As the definition suggests, researchers using this design are keen not only to describe the social phenomenon as it is but also to understand and analyse the relationships between its variables, study the mechanism through which the phenomenon occurs, and test their formulated hypotheses (Neuman, 2014, p. 40).

Third comes exploratory research which, as the name suggests, sheds light on a social phenomenon or case about which little is known and begins to investigate it (Johnson & Christensen, 2014, p. 582). In other words, it investigates poorly understood subjects and aims to produce an initial idea about them (Neuman, 2014, p. 38). The exploratory approach offers a useful way to wonder about the phenomena around us and acquire new knowledge (Saunders et al., 2016, p. 174), and it gives the researcher more malleability and flexibility in the research process (Saunders et al., 2016, p. 175).

The last type of research design is predictive research, which refers to "research focused on predicting the future status of one or more dependent variables based on one or more independent variables" (Johnson & Christensen, 2014, p. 870). Its main aim is to generate a generalization, based on a previously formulated hypothesis, that gives the ability to predict the future outcome or behaviour of a certain phenomenon (Collis & Hussey, 2013, p. 5).

In our study, we consider our research exploratory. Although there is a considerable literature on implied volatility, and it has been shown to provide superior results compared with historical volatility, implied volatility measures are only valid during the option's life; consequently, little is known about its long-term predictive power. We therefore use quantitative data to investigate and test our formulated hypotheses, aiming to provide empirical evidence on the predictive power of implied volatility in the long term.

2.11 Data Collection Method in Quantitative Research

Under the positivist paradigm with a quantitative approach, the first concern is collecting a representative sample that captures the features of interest in the desired population (Neuman, 2014, p. 38). Since it is not possible to study every piece of information available in the population, we must take a sample.

Sample method

When sampling for quantitative research, the researcher needs to be aware of the kind of sample chosen. Broadly, there are representative samples and biased samples (Johnson & Christensen, 2014, p. 344); in other words, the researcher can collect data through probability (representative) sampling or non-probability sampling (Saunders et al., 2016, p. 275). A representative sample captures all the features and characteristics of the original population but is smaller in size (Johnson & Christensen, 2014, p. 344); equivalently, "the chance, or probability, of each case being selected from the population is known and is usually equal for all cases" (Saunders et al., 2009, p. 213). A representative sample achieves high precision, is efficient for quantitative research, and is highly cost-effective relative to the efficiency it provides (Neuman, 2014, p. 247-248).

Alternatively, a biased sample is one chosen by the researcher using a common selection criterion that makes it consistently different from the original population (Johnson & Christensen, 2014, p. 344). Non-probability sampling techniques are considered less demanding than probability sampling in terms of the mathematical processing of data (Neuman, 2014, p. 248). There are three sub-techniques of non-probability sampling: convenience sampling, snowball sampling, and quota sampling. Convenience sampling is "a non-random sample in which the researcher selects anyone he or she happens to come across" (Neuman, 2014, p. 248); the selection criteria are accessibility, readiness, and the effortless availability of individuals (Neuman, 2014, p. 248). However, this method is not suitable for all research, as it may produce a very distorted picture of the population (Neuman, 2014, p. 248).


In contrast, quota sampling is "a non-random sample in which the researcher first identifies general categories into which cases or people will be placed and then selects cases to reach a predetermined number in each category" (Neuman, 2014, p. 249). It is thus a designed selection that aims to form a more representative picture of the population by targeting a specific criterion in each group so as to resemble the population studied (Johnson & Christensen, 2014, p. 363). Although it may seem an accurate method, representing all possible categories in the population is sometimes hard to achieve (Neuman, 2014, p. 249).

Furthermore, snowball sampling is a method whereby the researcher asks participants to refer individuals from their social network who meet a criterion the researchers are interested in and who are willing to participate in the study (Johnson & Christensen, 2014, p. 365). The method relies on the exponential effect of human social networks: a few participants at the beginning lead to progressively more over time (Neuman, 2014, p. 275).

Our research tests the formulated research question: “Can implied volatility be used to improve in- and out-of-sample performance of the GARCH (1,1) and EGARCH (1,1) models?”. Since quantitative research can rely on a single data-collection technique, we base our research on a 15-year sample of data gathered from Eikon. We draw the samples from the most widely followed indexes, which we consider representative of the studied markets. Hence, we chose two stock market indexes: the Standard and Poor’s 500 for the USA and the Euro Stoxx 50 for Europe. We believe that the chosen time interval of 15 years provides enough data to conduct our research, and that this time span, which includes the rise and fall of the international financial crisis and its repercussions, enables us to validate our findings.
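
As an illustration of the data preparation this implies, the following is a minimal sketch assuming the Eikon export is a CSV of daily closing levels; the file and column names ('sp500.csv', 'Date', 'Close') are hypothetical placeholders, not the actual export format:

```python
import numpy as np
import pandas as pd

# Load daily closing levels exported from Eikon; file and column names
# are illustrative placeholders.
px = pd.read_csv('sp500.csv', parse_dates=['Date'], index_col='Date')['Close']

ret = np.log(px).diff().dropna()   # daily log returns
rv_proxy = ret ** 2                # squared daily returns as realized-volatility proxy

print(ret.describe())
```

The same steps would be repeated for the Euro Stoxx 50 and for the corresponding volatility index series (VIX and VSTOXX).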

Literature and source criticism

To conduct this research, we relied only on appropriate literature and theories that have a sound basis. We chose the Umeå University library as our main source of literature, supplemented by Google Scholar. To ensure the accuracy and reliability of the data collected, we used Eikon to extract our data.

We have relied on and referred to articles from well-respected journals, choosing the most relevant and useful resources we could find on implied volatility, the cost of volatility, volatility as a fear gauge, the predictive power of volatility, and the mathematical models we employ. We chose variants of the GARCH and Black-Scholes models together with Mincer–Zarnowitz (MZ) regression for calculating and processing the data, as these were the most appropriate tools we could find and they provide our research with a sound basis to build on. We were keen to use adequate keywords, definitions, and terminology to promote the cohesion and homogeneity of the research. Citations and references were applied according to the Umeå Business School thesis manual.

3 Theoretical Framework

The main goal of this part is to present a review of previous literature on the subject of the research, together with theories that are in line with our study, followed by the choice-of-theory part. The choice-of-theory part presents the motivations and arguments behind our choice, as well as the driving factors that support our view of the theoretical foundation of the study. In other words, the theory is selected so that it is tailored to our expected findings.

3.1 Review of Prior Studies

In the financial literature, it is widely accepted that there is a negative relationship between implied volatility indexes and stock returns. Researchers have also found that stock market volatility responds asymmetrically to the direction of changes in stock returns: volatility is more sensitive to adverse return shocks than to positive ones (Shaikh & Padhi, 2016, p. 28). However, the relationship between implied volatility and realized volatility, in terms of informational content and forecasting power, is less clear, and the empirical evidence on the role of implied volatility in forecasting future volatility is conflicting. In the following, we present a summary of selected articles that have studied these concepts.
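
This asymmetry, often called the leverage effect, is what the EGARCH model referred to in our research question is designed to capture. As an illustration (the notation is ours, not taken from the cited study), a standard EGARCH(1,1) specification of the log conditional variance is:

```latex
% Standard EGARCH(1,1); z_t are standardized residuals
\ln \sigma_t^2 = \omega + \beta \ln \sigma_{t-1}^2
  + \alpha \left( |z_{t-1}| - \mathrm{E}|z_{t-1}| \right) + \gamma z_{t-1},
\qquad z_t = \varepsilon_t / \sigma_t
```

A negative estimate of $\gamma$ means that negative shocks raise next-period volatility by more than positive shocks of the same magnitude, which is exactly the asymmetry documented by Shaikh and Padhi (2016).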

Latane and Rendleman (1976) suggested that the volatility embodied in option prices can be calculated under the assumption that all investors and options traders behave according to the Black-Scholes model; in other words, that implied volatility equals the actual volatility of the underlying stock. They motivated this assumption by noting that the ability to estimate implied volatility correctly relies on a reasonable estimation of the impact of dividend payments, transaction costs, time differences, taxes, and so on. They proposed a methodology based on a weighted average of implied standard deviations as a tool to measure the future volatility of stock returns. Their findings confirm that the standard deviations implicit in market prices differ from those suggested by the Black-Scholes model; hence, the model prices do not capture all the factors that actually determine options traders’ decisions. More importantly, they suggested that implied standard deviations are superior forecasts of volatility to those derived from historical volatility.

Similarly, Harvey and Whaley (1992) investigated the dynamic behaviour of implied volatility to evaluate market options, basing their study on implied volatility from S&P 100 index options. Their findings indicate that the options market is efficient and that changes in future volatility can be forecasted using implied volatility. Alternatively, Day and Lewis (1992) examined the informational content of implied volatility derived from call options on the S&P 500 index using GARCH and EGARCH models. They added implied volatility as an explanatory variable to the conditional variance equation to capture its informational content. However, the documented results showed conflicting evidence: neither implied volatility nor the conditional volatility of the GARCH and EGARCH models could capture realized volatility completely.
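
To make the Day and Lewis approach concrete, the following is a minimal sketch, not code from any of the cited studies, of how a GARCH(1,1) conditional variance augmented with lagged implied volatility could be estimated by Gaussian maximum likelihood; the variable names, the variance initialization, and the placeholder data are our own assumptions:

```python
# Minimal GARCH(1,1)-X sketch: conditional variance augmented with lagged
# squared implied volatility, estimated by Gaussian maximum likelihood.
import numpy as np
from scipy.optimize import minimize

def garch_x_nll(params, r, iv):
    # sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1} + delta*iv_{t-1}^2
    omega, alpha, beta, delta = params
    sigma2 = np.empty_like(r)
    sigma2[0] = np.var(r)                       # initialize at sample variance
    for t in range(1, len(r)):
        sigma2[t] = (omega + alpha * r[t-1]**2
                     + beta * sigma2[t-1] + delta * iv[t-1]**2)
    sigma2 = np.maximum(sigma2, 1e-12)          # guard against non-positive variance
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r**2 / sigma2)

# Placeholder series; in practice r would be demeaned daily index returns and
# iv the volatility index rescaled to daily units, e.g. VIX / (100 * sqrt(252)).
rng = np.random.default_rng(42)
r = rng.standard_normal(1000) * 0.01
iv = np.full(1000, 0.01)

res = minimize(garch_x_nll, x0=[1e-6, 0.05, 0.90, 0.05], args=(r, iv),
               method='L-BFGS-B',
               bounds=[(1e-12, None), (0, 1), (0, 1), (None, None)])
print(dict(zip(['omega', 'alpha', 'beta', 'delta'], res.x)))
```

A test on the estimated delta then addresses the in-sample question: if delta is significantly different from zero, implied volatility carries information about the conditional variance beyond what past returns already provide.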

Moreover, Canina and Figlewski (1993) countered the previous research. They analysed over 17,000 OEX call option prices on the S&P 100 and reported a surprising result that sharply conflicts with the traditional perception: implied volatility has very little informational content about future realized volatility. They also rejected the assumption that options traders’ decisions are irrational; however, they could not rule out irrationality completely, while still considering traders to be efficient. Consistently, Lamoureux and Lastrapes (1993) explored the behaviour of implied volatility compared to the volatility of the underlying assets. They rejected the joint hypothesis that options markets are efficient and that option pricing models work correctly. On the other hand, Jorion (1995) presented supportive findings, reporting that implied volatility had more informational content than historical volatility. He investigated the predictive power and informational content of implied volatility derived from foreign currency options from the Chicago Mercantile Exchange. However, implied volatility was found to be a biased forecast of future volatility.

In the same fashion, Xu and Taylor (1995) compared the predictive power of implied volatility and historical volatility forecasts using a sample of four exchange rates spanning 1985 to 1991. They showed that volatility forecasts based on implied volatility contain more information than those based on historical volatility. However, the documented results also failed to reject the hypothesis that past returns carry no information beyond that provided by option prices. Furthermore, Christensen and Prabhala (1998) supported the findings of Xu and Taylor (1995) by confirming that implied volatility subsumes all available information in options markets. They reported that implied volatility dominates historical volatility as a forecasting tool for future market volatility. They strengthened their results by employing a larger data sample, explaining the deviations of previous studies by the regime shift around 1987.

Fleming (1998) investigated the forecasting performance of S&P 100 implied volatility. He found that despite the bias of implied volatility forecasts relative to realized volatility, implied volatility subsumes valuable information about realized volatility. He suggested that a linear model that corrects the bias of implied volatility would be a useful tool for forecasting market expectations of future volatility. In contrast, Pong et al. (2004) provided evidence quite different from previous studies, suggesting that historical volatility is more accurate than implied volatility in forecasting future market volatility, and that the informational content of historical volatility in forecasting future volatility surpasses that derived from implied volatility.
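
The linear correction Fleming proposes is closely related to the Mincer–Zarnowitz regression employed later in this thesis. As an illustration (the notation is ours), realized volatility is regressed on the implied volatility forecast:

```latex
% Mincer–Zarnowitz regression of realized on implied volatility
RV_t = a + b \, IV_t + u_t
```

An unbiased forecast requires the joint hypothesis $a = 0$ and $b = 1$; when it is rejected, the fitted values $\hat{a} + \hat{b}\,IV_t$ provide exactly the kind of bias-corrected forecast Fleming suggests.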

Becker et al. (2006) challenged Fleming’s (1998) findings. They examined the ability of implied volatility to incorporate all available information based on the VIX index. Their hypothesis was that implied volatility, being extracted from option prices, should not only reflect available published information but also represent the best market expectation of the future volatility of the underlying assets. They concluded that implied volatility is not fully efficient with respect to all the factors relevant to volatility forecasting. They showed that although there is a positive correlation between the implied volatility index and the future volatility of the underlying asset, implied volatility does not improve the performance of future volatility forecasts. Becker et al. (2007) continued this research to obtain a better comprehension of the informational content of implied volatility. They investigated the VIX index and its relevance to forecasting models, asking whether the VIX index contains more information than historical volatility. Their research showed that implied volatility does not include any further information than historical volatility.
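
The question Becker et al. pose, whether the VIX adds anything beyond historical volatility, is the one a forecast encompassing regression is built to answer; a sketch in our notation, where HV denotes a historical-volatility forecast:

```latex
% Illustrative encompassing regression
RV_t = a + b \, IV_t + c \, HV_t + u_t
```

If $b$ is significant while $c$ is not, implied volatility encompasses the information in historical volatility; the reverse pattern, which Becker et al.’s results point toward, implies that implied volatility adds no further information.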

Yu et al. (2010) investigated the predictive power of the volatility embodied in options traded both over the counter (OTC) and on exchanges. The documented findings indicated that the predictive power of implied volatility is superior to that of historical volatility regardless of whether the options were traded OTC or on exchanges. Conversely, Bentes (2017) provided contradicting evidence in his study of the relationship between implied volatility indexes and realized volatility, using monthly data from the BRIC countries. He sought to explore the informational content of implied volatility and its role in explaining realized volatility. Notably, he utilized statistical methods such as autoregressive distributed lag (ADL) and error correction (EC) models and compared the results with those obtained by OLS regression. The results showed that implied volatility is not efficient for any of the BRIC countries, although it was unbiased for India.
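
For concreteness, an autoregressive distributed lag specification of the kind Bentes employs could, in its simplest ADL(1,1) form, be written as follows; the exact lag orders in his study may differ, and the notation is ours:

```latex
% Illustrative ADL(1,1) linking realized and implied volatility
RV_t = c + \phi \, RV_{t-1} + \theta_0 \, IV_t + \theta_1 \, IV_{t-1} + \varepsilon_t
```

Unlike a static OLS regression, the lag structure lets the information in implied volatility be assessed jointly with the persistence of realized volatility itself.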

Recently, Mohammad and Sakti (2018) investigated the informational content of the volatility embodied in call options in the Malaysian stock market, using daily data for 100 trading days between 2013 and 2014. The documented findings indicate that implied volatility does not contain relevant information about the future market volatility of the underlying assets; furthermore, forecasts based on implied volatility are less accurate than predictions based on historical volatility. Alternatively, Pati et al. (2018) found conflicting evidence when they studied the informational content of implied volatility indexes relative to their corresponding stock market indexes for three markets: India, Australia, and Hong Kong. The documented findings showed that implied volatility is a biased forecast of future stock market volatility; however, it contains relevant information that helps explain future realized volatility.

3.2 Efficient Market Hypothesis

The efficient market hypothesis emerged in the mid-1960s through the efforts of the Nobel prize winner Paul Samuelson, who promoted Bachelier’s work among other economists with his empirical studies (MacKinlay & Lo, 1999, p. 3). The efficient market hypothesis (EMH) paved the way for economists and mathematicians such as Fama (1965, p. 55) to formulate the random walk theory based on the same fundamental assumptions as the efficient market hypothesis.
