DEGREE PROJECT IN TECHNOLOGY, FIRST CYCLE, 15 CREDITS
STOCKHOLM, SWEDEN 2018

How Google Search Trends Can Be Used as Technical Indicators for the S&P500-Index

A Time Series Analysis Using Granger’s Causality Test

ALBIN GRANELL

FILIP CARLSSON


How Google Search Trends Can Be Used as Technical Indicators for the S&P500-Index

A Time Series Analysis Using Granger’s Causality Test

ALBIN GRANELL FILIP CARLSSON

Degree Projects in Applied Mathematics and Industrial Economics
Degree Programme in Industrial Engineering and Management
KTH Royal Institute of Technology, year 2018

Supervisors at KTH: Jörgen Säve-Söderbergh, Julia Liljegren
Examiner at KTH: Henrik Hult


TRITA-SCI-GRU 2018:182
MAT-K 2018:01

Royal Institute of Technology
School of Engineering Sciences (KTH SCI)
SE-100 44 Stockholm, Sweden
URL: www.kth.se/sci


Abstract

This thesis studies whether Google search trends can be used as indicators for movements in the S&P500 index. Using Granger's causality test, the level of causality between movements in the S&P500 index and Google search volumes for certain keywords is analyzed. The results of the analysis are used to form an investment strategy based entirely on Google search volumes, which is then backtested over a five-year period using historic data. The causality tests show that 8 of 30 words indicate causality at a 10 % level of significance, where one word, mortgage, indicates causality at a 1 % level of significance. Several investment strategies based on search volumes yield higher returns than the index itself over the considered five-year period, where the best performing strategy beats the index by over 60 percentage points.


Hur Google-söktrender kan användas som tekniska indikatorer för S&P500-indexet: en tidsserieanalys med hjälp av Grangers kausalitetstest

Sammanfattning

Denna uppsats studerar huruvida Google-söktrender kan användas som indikatorer för rörelser i S&P500-indexet. Genom Grangers kausalitetstest studeras kausalitetsnivån mellan rörelser i S&P500 och Google-sökvolymer för särskilt utvalda nyckelord. Resultaten i denna analys används i sin tur för att utforma en investeringsstrategi enbart baserad på Google-sökvolymer, som med hjälp av historisk data prövas över en femårsperiod. Resultaten av kausalitetstestet visar att 8 av 30 ord indikerar en kausalitet på en 10%-ig signifikansnivå, varav ett av orden, mortgage, påvisar kausalitet på en 1%-ig signifikansnivå. Flera investeringsstrategier baserade på sökvolymer genererar högre avkastning än indexet självt över den prövade femårsperioden, där den bästa strategin slår index med över 60 procentenheter.


Acknowledgements

We would like to thank our supervisors at the Royal Institute of Technology (KTH), Pär Jörgen Säve-Söderbergh and Julia Liljegren, for their support before and throughout the study.


Contents

1 Introduction
  1.1 Background
  1.2 Objective
  1.3 Problem Statement
  1.4 Limitations
  1.5 Previous Research

2 Theoretical Framework
  2.1 Technical Indicators
  2.2 Financial Theory
    2.2.1 Efficient Market Hypothesis (EMH)
    2.2.2 Behavioural Finance
  2.3 Mathematical Framework
    2.3.1 Vector Autoregression (VAR)
    2.3.2 VAR Order Selection
    2.3.3 Stable VAR Process
    2.3.4 Stationarity
    2.3.5 Augmented Dickey-Fuller Test
    2.3.6 OLS Estimation of VAR Parameters
    2.3.7 Breusch-Godfrey Test
    2.3.8 Granger-Causality
    2.3.9 F-Statistics for Granger-Causality

3 Method
  3.1 Word Selection
  3.2 Data Collection
    3.2.1 Search Data
    3.2.2 S&P500-Index
  3.3 Investment Strategies
  3.4 Outline

4 Results
  4.1 Transformation of Data
  4.2 Selection of Lag Order
  4.3 Model Validation
  4.4 Granger-Causality Tests
  4.5 Backtesting Investment Strategies
    4.5.1 Strategy 1
    4.5.2 Strategy 2
    4.5.3 Strategy 3

5 Discussion
  5.1 Interpretation of Results
    5.1.1 Granger-Causality Test
    5.1.2 Investment Strategies
    5.1.3 Comparison to Previous Findings
    5.1.4 Financial Implications
  5.2 Sources of Errors
    5.2.1 Mathematical Sources of Errors
    5.2.2 Errors From Data Collection
    5.2.3 Nature of the Financial Market
  5.3 Further Research
  5.4 Conclusion

References

A Appendix
  A.1 Augmented Dickey-Fuller Test
  A.2 Strategy 1 Returns
  A.3 Strategy 2 Returns
  A.4 Strategy 3 Returns


1 Introduction

1.1 Background

At the beginning of the 21st century, papers, books, TV broadcasting and radio were the main sources of information. Today this has changed, as the Internet has developed and changed our way of living. Nowadays, top news appears as pop-up notifications on smartphones only minutes, sometimes even seconds, after an event occurs, and information is never more than an online search away.

Simultaneously with this rapid change, Google has become the number one search engine worldwide, with trillions of searches every year and a 91 % online search market share as of February 2018.[1]

In 2010, Google's Executive Chairman Eric Schmidt claimed that the amount of information gathered over two days equals the accumulated amount from the dawn of mankind up to 2003.[2] The new era of big data creates new possibilities, and several businesses see it as the holy grail for finally being able to predict who will buy their products, and where and when.[3] Despite the emergence of big data, the use of information has not kept pace: only about one percent of the data collected is analyzed.[4] Thus, there are many unexplored possibilities in the new era of big data.

Today's most commonly used technical trading indicators have not been influenced by the increase in big data, as they are still mainly based on momentum calculated from trading volumes, volatility and historical returns of the considered asset.[5] Such indicators are used by investors to analyze price charts of financial assets in order to, for example, predict future stock price movements. Unlike fundamental analysis, in which investors try to determine whether a company is under- or overvalued, technical analysis does not consider the fundamental value of the stock. Instead, indicators are used to identify patterns and thereby predict short-term movements in the price of the considered asset.[6]

1.2 Objective

The thesis investigates whether there exists a causal relationship between online search activity and the overall performance of the stock market. Today, many investors base their trading on technical indicators or key performance indicators, such as price-earnings ratios, earnings per share and historic returns. However, as a result of the Internet's, and in particular Google's, increasing influence on people's day-to-day lives, it is reasonable to believe that data from online activity could potentially reflect the overall state of the economy.

As further discussed in section 1.5, there is no prevailing consensus on the topic, as previous studies come to different conclusions using various methods. The objective of the thesis is to find mathematically substantiated evidence, through Granger-causality tests, that Google search volumes can be used as a technical indicator for movements in the S&P500 index. Furthermore, based on the results of the causality tests, the thesis aims to find a trading algorithm using Google search volumes that, in a backtest, can be shown to give a higher return than the index itself over a five-year period.

1.3 Problem Statement

The problem statement is broken down into two general questions underlying the thesis:

• Can Google search volumes be used as a technical indicator for the S&P500 index?

• Can a successful investment strategy be based on these potential findings?

1.4 Limitations

The thesis only considers the S&P500 index and 30 selected keywords. Search volumes and index prices are limited to the period March 24th, 2013 to March 24th, 2018. As the stock markets and overall economies of different countries may vary, it is not reasonable to assume a single global trend in the economy in the sense of cause-effect mechanisms from Google searches. Thus, in order for the search data to best represent the trend of the American stock market (i.e. the S&P500 index), the search data is geographically limited to the United States.

1.5 Previous Research

Previous research on Google search trends and their predictive ability on the financial market has been conducted at different scales, using various approaches and reaching different conclusions. This section presents a selection of the studies, the tests conducted and their findings.

Several studies have managed to demonstrate the predictive properties of Google search volumes on different economic and social indicators. Varian et al. showed how Google Trends can be used to forecast near-term economic indicators such as automobile sales, travel destinations and unemployment rates.[7] Four years later, Kristoufek et al. used an autoregressive approach to show that Google search volumes also significantly increase the accuracy of predictions of suicide rates in the UK, compared to forecasting from historical rates alone.[8]

In 2013, Moat, Preis et al. empirically studied the relationship between Google search trends and the financial market with a quantifying approach. By analyzing the search volumes, the study identified patterns that could be interpreted as "early warning signs" of upcoming stock market moves. This was done using a hypothetical trading algorithm, the "Google Trends Strategy", which determines the type of investment action (buy/sell) based on whether a week's search volume for a certain word is higher or lower than the average volume of the past three weeks. The strategy was tested theoretically on the Dow Jones Industrial Average (DJIA) over the period 2004-2011, using numerous keywords (98 in total). The results indicate that certain words might serve as technical indicators: the strategy for the word "debt" yielded a return of 326 %, compared with only 16 % for the buy-and-hold strategy.[9]


Perlin et al. modeled Google search volumes together with volatility, log-returns on market indices and trade volume, respectively, as bivariate vector autoregression models. In addition to the modelling, two-way Granger-causality tests were performed on the models. The study covered four different markets, using the most frequently occurring finance-related words in four economic textbooks. The findings indicated causality between some words and the stock market. The keyword "stock" was claimed to have the most significant overall impact on the market, where an increase in search volumes caused volatility to increase and weekly returns to decrease.[10]

Challet et al. were not able to show that Google search volumes predict future returns better than historic returns themselves. Using non-linear machine learning methods and backtesting strategies, their conclusion was that search volumes and historical returns share many properties, and that Google Trends data could sometimes even be equivalent to the returns themselves. However, they were not able to show any predictive properties of the Google Trends data.[11]


2 Theoretical Framework

2.1 Technical Indicators

In order to determine whether Google search volumes can be used as a technical indicator for the S&P500 index, the definition of a technical indicator has to be introduced. A technical indicator is a tool for predicting short-term movements in asset prices using historical trading data. One description of technical indicators reads:

The essence of technical indicators is a mathematical transformation of a financial symbol price aimed at forecasting future price changes.

This provides an opportunity to identify various characteristics and patterns in price dynamics which are invisible to the naked eye.[12]

Thus, in order to be valid as a technical indicator, the data has to have predictive properties on the considered asset. As there are no established rules or definitions for how these predictive properties are measured, they will throughout this thesis be measured by causality, where a high level of causality implies good predictive properties.

2.2 Financial Theory

There are several theories behind the mechanics of the financial markets. This section presents some of the most well known theories, in order to give further insight to what causes market movements according to conventional financial theory.

2.2.1 Efficient Market Hypothesis (EMH)

The efficient market hypothesis (EMH) states that at any given time, in any liquid market, the prices of securities reflect all available information. Eugene Fama presents EMH in three different degrees depending on the information set that is of interest: weak, semi-strong and strong form.[13]

The weak form of EMH has its foundation in the historical price data available on the securities market, and claims that previous information about the stock price is not sufficient for determining the future direction of security prices. Returns in the stock market are instead modelled as a "fair game". In other words, the prices of securities follow a random walk, and the expected return conditioned on today's information is zero.

By including all publicly available information on securities in the information set, the semi-strong form of EMH is obtained. The model states that stock prices adjust rapidly after the release of new public information. As a consequence, current stock prices reflect the information set, and fundamental analysis cannot achieve excess returns.

The strong form of EMH assumes that all available information, both public and private, is factored into security prices. An additional assumption for this model is, however, that no investor or group has monopolistic access to certain information.

The main implication of Fama's theory is that there is no systematic way (e.g. stock picking using fundamental analysis) for investors to outperform the financial market in the long run. This is a consequence of the prevailing competition in the market, which adjusts prices instantly after new information is made available.

2.2.2 Behavioural Finance

Since the development of the EMH, new theoretical models have emerged which in some cases contradict the fundamentals of the EMH. Behavioural finance is a relatively new field that, combining psychology and conventional economics, attempts to explain why investors act irrationally. R.J. Shiller's paper "Do Stock Prices Move Too Much to be Justified by Subsequent Changes in Dividends?" is commonly cited as the beginning of behavioural finance, as he demonstrated that stocks fluctuate too much to be justified by rational theories of stock valuation.[14] Since the release of his paper, much more research on the subject has been done, and the psychological aspect of the economy has been recognized as an important influence in the aftermath of several financial crises, as investor psychology has tended to deepen the crises.[15]

In 1986, Fischer Black introduced the concept of noise as a contrast to information; in the sense of a large number of small events, noise is often a causal factor much more powerful than a small number of large events. Factors such as hype, inaccurate data and inaccurate ideas are the essence of noise. According to Black, noise is what keeps us from knowing the expected return of a stock or portfolio, and thus what makes our observations imperfect. Different beliefs must arise from different information, which Black explains as noise often being treated as information. In this way, noise can explain some of the anomalies of the EMH.[16]

2.3 Mathematical Framework

This section presents the multiple time series framework used in the mathematical part of the thesis. Unless otherwise specified, the theory is collected from the literature by Helmut Lütkepohl.[17]

2.3.1 Vector Autoregression (VAR)

In order to perform a structural analysis of the respective time series, the concept of a vector autoregressive (VAR) model has to be introduced. In this setting K autoregressive time series are expressed as linear combinations of each other with a predetermined order:

\[
y_t = \nu + A_1 y_{t-1} + \dots + A_p y_{t-p} + u_t, \qquad t = 0, \pm 1, \pm 2, \dots \tag{1}
\]

Here $y_t = (y_{1,t}, \dots, y_{K,t})'$ is a $(K \times 1)$ vector containing the K autoregressive processes, the $A_i$ are $(K \times K)$ coefficient matrices, the variable p defines the model's order (i.e. the number of lags used to model $y_t$), and $u_t$ is a K-dimensional, serially uncorrelated innovation process with expected value zero and non-singular covariance matrix.

In order to derive the properties of the VAR model, it is convenient to look at the model of order one. Recursively substituting $y_t$ starting at some point, say $t = 1$, yields the following:

\[
\begin{aligned}
y_1 &= \nu + A_1 y_0 + u_1, \\
y_2 &= \nu + A_1 y_1 + u_2 = \nu + A_1(\nu + A_1 y_0 + u_1) + u_2 \\
    &= (I_K + A_1)\nu + A_1^2 y_0 + A_1 u_1 + u_2, \\
    &\;\;\vdots \\
y_t &= (I_K + A_1 + \dots + A_1^{t-1})\nu + A_1^t y_0 + \sum_{i=0}^{t-1} A_1^i u_{t-i}.
\end{aligned}
\]

2.3.2 VAR Order Selection

There are several criteria that can be used for VAR order selection. Two commonly used ones are presented by Lütkepohl: the Akaike Information Criterion (AIC) and the Schwarz Criterion (SC). The different properties of these criteria will affect the estimated lag order, depending on the properties of the time series to which they are applied.

Akaike Information Criterion

Based on the idea of optimizing the maximum likelihood estimate while maintaining model parsimony (simplicity), Akaike derived a criterion for model selection.

\[
\mathrm{AIC}(m) = \ln\left|\tilde{\Sigma}_u(m)\right| + \frac{2}{T}\,(\text{number of freely estimated parameters}) = \ln\left|\tilde{\Sigma}_u(m)\right| + \frac{2mK^2}{T}
\]

Here $\tilde{\Sigma}_u$ represents the maximum likelihood estimate of the white noise covariance matrix, T is the sample size and K is the dimension of the time series. The lag m is selected so that AIC(m) is minimized. Hence, there is a trade-off between model fit and the number of estimated parameters. In addition, it can be shown that the AIC asymptotically overestimates the true lag order.

Schwarz Criterion

Schwarz derived a slightly different criterion for model selection that also deals with the trade-off between lack of fit and the number of estimated parameters, but penalizes additional lags harder. As a consequence, the lag order estimated by SC rarely overestimates the true lag.

\[
\mathrm{SC}(m) = \ln\left|\tilde{\Sigma}_u(m)\right| + \frac{\ln T}{T}\,(\text{number of freely estimated parameters}) = \ln\left|\tilde{\Sigma}_u(m)\right| + \frac{\ln(T)\, mK^2}{T}
\]


Comparison of Criteria

In addition to AIC and SC, there exist other criteria for VAR order selection, such as the Final Prediction Error (FPE), similar to AIC, and the Hannan-Quinn Criterion (HQ), similar to SC. As stated by Lütkepohl, there is no common consensus regarding which criterion to use. The criteria do, however, have properties that affect their behaviour under different circumstances.

As the sample size increases (i.e. T → ∞), the probability of selecting the true lag order differs between criteria. AIC and FPE are said to be inconsistent estimators, meaning that the estimated lag order does not converge asymptotically to the true one, whereas SC and HQ are strongly consistent. However, since AIC and FPE put larger emphasis on the forecast prediction error, models with order selection based on these criteria often have better predictive capabilities, even though the lag order is not necessarily correct.
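To make the criteria concrete, the following is a minimal sketch of how such a comparison can be run with the VARselect function from the R package 'vars', which is also used later in section 3.4. The two series here are illustrative stand-ins for the data described in section 3.2:

    library(vars)

    # Two illustrative weekly series standing in for the differenced
    # S&P500 returns and search volumes described in section 3.2.
    set.seed(1)
    y <- cbind(returns = rnorm(260), searches = rnorm(260))

    # Lag orders suggested by the four criteria, for up to 10 lags.
    sel <- VARselect(y, lag.max = 10, type = "const")
    sel$selection  # named vector: AIC(n), HQ(n), SC(n), FPE(n)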

2.3.3 Stable VAR process

It can be shown that, in order for the VAR process to be well defined (stable), the eigenvalues of $A_1$ need to have absolute values less than one. This condition can be generalized to the VAR(p) model using the fact that it can be expressed as a VAR(1) model by stacking p consecutive observations into a Kp-dimensional vector. The corresponding VAR(1) model can be defined as:

\[
Y_t = \boldsymbol{\nu} + \mathbf{A} Y_{t-1} + U_t \tag{2}
\]

where

\[
Y_t := \begin{pmatrix} y_t \\ y_{t-1} \\ \vdots \\ y_{t-p+1} \end{pmatrix} \ (Kp \times 1), \qquad
\boldsymbol{\nu} := \begin{pmatrix} \nu \\ 0 \\ \vdots \\ 0 \end{pmatrix} \ (Kp \times 1), \qquad
U_t := \begin{pmatrix} u_t \\ 0 \\ \vdots \\ 0 \end{pmatrix} \ (Kp \times 1),
\]

\[
\mathbf{A} := \begin{pmatrix}
A_1 & A_2 & \cdots & A_{p-1} & A_p \\
I_K & 0 & \cdots & 0 & 0 \\
0 & I_K & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & I_K & 0
\end{pmatrix} \ (Kp \times Kp).
\]


In the generalized form, the model is said to be stable if:

\[
\det(I_{Kp} - \mathbf{A}z) \neq 0 \quad \text{for } |z| \leq 1 \tag{3}
\]
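As an illustration of condition (3), the stability of a VAR can be checked numerically by computing the eigenvalues of the companion matrix A; for a fitted model, the roots function in the 'vars' package does the equivalent check. A sketch for a bivariate VAR(2) with made-up coefficients:

    # All eigenvalues of the companion matrix must lie strictly inside
    # the unit circle for the process to be stable.
    A1 <- matrix(c(0.5, 0.1, 0.0, 0.4), nrow = 2)  # assumed coefficients
    A2 <- matrix(c(0.2, 0.0, 0.1, 0.1), nrow = 2)
    K <- 2
    A <- rbind(cbind(A1, A2),
               cbind(diag(K), matrix(0, K, K)))    # (Kp x Kp) companion form

    max(Mod(eigen(A)$values))  # < 1 implies the VAR(2) process is stable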

2.3.4 Stationarity

Lütkepohl's Proposition 2.1 states that if a VAR process is stable, it is also stationary. Hence, the stability condition is often referred to as the stationarity condition. The reverse implication, however, does not necessarily hold. Consequently, stationarity of the subprocesses of a VAR model is a necessary condition for the model's stability.

In order for a process to be stationary, its mean and autocovariances should be time invariant. Thus, the following two conditions must hold:

\[
E[y_t] = \mu \tag{4}
\]

\[
E[(y_t - \mu)(y_{t-h} - \mu)'] = \Gamma_y(h) \tag{5}
\]

2.3.5 Augmented Dickey-Fuller Test

A way to analyze whether a time series is stationary or not is to evaluate the possible presence of a unit root. The presence of a unit root implies that the time series is integrated of order one, meaning that its first difference will be stationary. A unit root is said to exist for a process $y_t$ if its characteristic equation has a root equal to unity, z = 1. That is:

\[
y_t = \nu + a_1 y_{t-1} + a_2 y_{t-2} + \dots + a_p y_{t-p} + \varepsilon_t \tag{6}
\]

with the corresponding characteristic equation

\[
1 - a_1 z - a_2 z^2 - \dots - a_p z^p = 0 \tag{7}
\]

A way of evaluating the presence of unit roots is to perform an Augmented Dickey-Fuller test. The test evaluates the null hypothesis that a unit root is present by first modelling the time series as:

\[
\Delta y_t = \gamma y_{t-1} + \sum_{s=1}^{p} a_s \Delta y_{t-s} + \varepsilon_t \tag{8}
\]

The test statistic (tau statistic) is then defined as $DF_\tau = \hat{\gamma}/SE(\hat{\gamma})$, and follows a Dickey-Fuller distribution whose critical values can be collected from a Dickey-Fuller table.[18]
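As a brief illustration, the ADF test is implemented in several R packages; the following minimal sketch uses adf.test from the 'tseries' package (urca::ur.df is an alternative) on an artificial random walk, which has a unit root while its first difference does not:

    library(tseries)

    set.seed(1)
    x <- cumsum(rnorm(260))  # random walk: integrated of order one

    adf.test(x)        # large p-value: the unit-root null is not rejected
    adf.test(diff(x))  # small p-value: reject the unit root (stationary)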

2.3.6 OLS Estimation of VAR Parameters

The least squares estimators $B = [\nu, A_1, \dots, A_p]$ can be obtained with several approaches, one of which is to use a multivariate approach and solve for all equations simultaneously. However, the problem can also be rewritten so that the coefficients are obtained by applying ordinary least squares to each equation individually. First, the LS estimator is written in vectorized form:

\[
\hat{\mathbf{b}} = \operatorname{vec}(\hat{B}') = \left(I_K \otimes (ZZ')^{-1}Z\right)\operatorname{vec}(Y') \tag{9}
\]

Let $b_k'$ be the k-th row of B, so that $b_k$ contains all parameters of the k-th equation. With $y_{(k)}$ defined as the time series available for the k-th variable, i.e. $y_{(k)} = [y_{k1}, \dots, y_{kT}]'$, the OLS estimator of the model $y_{(k)} = Z'b_k + u_{(k)}$ becomes:

\[
\hat{b}_k = (ZZ')^{-1} Z y_{(k)} \tag{10}
\]

where $u_{(k)} = [u_{k1}, \dots, u_{kT}]'$.

2.3.7 Breusch-Godfrey Test

The Breusch-Godfrey test is used to test for residual autocorrelation, which may have a negative impact on the VAR model. To construct the test, a VAR model for the error vector is assumed, i.e. $u_t = D_1 u_{t-1} + \dots + D_h u_{t-h} + v_t$, where $v_t$ is white noise. A null hypothesis of no autocorrelation in the residuals is then set up:

\[
H_0: D_1 = \dots = D_h = 0, \qquad H_1: D_j \neq 0 \text{ for some } j \in \{1, 2, \dots, h\}
\]

Using the Lagrange multiplier principle, Lütkepohl's Proposition 4.8 states that under the null hypothesis the test statistic has the asymptotic distribution:

\[
\lambda_{LM}(h) \xrightarrow{d} \chi^2(hK^2) \tag{11}
\]

This property is used to calculate the p-value of the test. If $\Pr[X > \lambda_{LM}(h)]$, where X follows a $\chi^2(hK^2)$ distribution, is greater than α, the null hypothesis of no autocorrelation cannot be rejected at the α level. Conversely, if the probability is less than α, there is evidence of residual autocorrelation at significance level α.
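For reference, the Breusch-Godfrey test for VAR residuals is available in the R package 'vars' as serial.test with type "BG"; a minimal sketch on an illustrative fitted model:

    library(vars)

    set.seed(1)
    y   <- cbind(returns = rnorm(260), searches = rnorm(260))
    fit <- VAR(y, p = 1, type = "const")

    # Breusch-Godfrey LM test on the VAR residuals, h = 5 lags.
    serial.test(fit, lags.bg = 5, type = "BG")
    # A p-value above 0.05 means the null of uncorrelated residuals
    # cannot be rejected at the 5 % level.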

2.3.8 Granger-Causality

Granger-causality is a concept of causality defined by Clive W. J. Granger.[19] It gives a formal mathematical definition of causality which, under suitable conditions, works well within the VAR framework. Let $\Omega_t$ denote the information set containing all relevant information available up to time t. Furthermore, let $z_t(h \mid \Omega_t)$ denote the MSE-minimizing h-step predictor of the process $z_t$ at time t, based on the information given by $\Omega_t$, and let $\Sigma_z(h \mid \Omega_t)$ denote the corresponding forecast MSE. According to Granger's definition of causality, $x_t$ is said to cause $z_t$ if it can be shown that:

\[
\Sigma_z(h \mid \Omega_t) < \Sigma_z\!\left(h \mid \Omega_t \setminus \{x_s \mid s \leq t\}\right) \quad \text{for at least one } h = 1, 2, 3, \dots \tag{12}
\]

A practical problem with the implementation of this definition is the choice of $\Omega_t$. Usually all relevant information up to time t is not available, and thus $\Omega_t$ is replaced by $\{z_s, x_s \mid s \leq t\}$, i.e. all past and present information in the considered pair of processes.

2.3.9 F-Statistics for Granger-Causality

In time series analysis, the F-statistic is used to test the null hypothesis that there is no Granger causality. Formally, this hypothesis can be expressed using the VAR coefficients as:

\[
H_0: \alpha_{12,j} = 0 \text{ for } j = 1, 2, \dots, p, \qquad H_1: \alpha_{12,j} \neq 0 \text{ for some } j = 1, 2, \dots, p
\]

The F-statistic for the test can be derived from the distribution of the Wald statistic, given as Proposition 3.5 in Lütkepohl:

\[
\lambda_W = (C\hat{\beta} - c)'\left[C\left((ZZ')^{-1} \otimes \hat{\Sigma}_u\right)C'\right]^{-1}(C\hat{\beta} - c) \xrightarrow{d} \chi^2(N) \tag{13}
\]

Here C is an $(N \times (K^2 p + K))$ matrix of ones and zeros, selecting the coefficients for which causality is tested. From the properties of the $\chi^2$ distribution, $\lambda_W$ can be related to an F-distributed random variable by noting the following relationship between the two distributions:

\[
N \cdot F(N, T) \xrightarrow[T \to \infty]{d} \chi^2(N) \tag{14}
\]

Here N is given by the lag order times K (the number of time series) and T is the sample size. Thus, for a large sample the variable $\lambda_F = \lambda_W / N$ is approximately F-distributed. As in the F-statistic for regression with non-stochastic regressors, the denominator degrees of freedom are set equal to the sample size minus the number of estimated parameters. Hence, the approximate distribution becomes:

\[
\lambda_F \approx F(N,\ KT - K^2 p - K) \tag{15}
\]

The null hypothesis is rejected at significance level α if $\lambda_F$ exceeds the critical value $F_{1-\alpha}(N, KT - K^2 p - K)$. Equivalently, the decision can be based on the p-value: the probability, under the null hypothesis, of observing a test statistic at least as large as the one obtained.
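In practice, this F-test is what the causality function in the R package 'vars' reports for a fitted model; a minimal sketch with illustrative series names:

    library(vars)

    set.seed(1)
    y   <- cbind(returns = rnorm(260), searches = rnorm(260))
    fit <- VAR(y, p = 2, type = "const")

    # F-test of H0: "searches" does not Granger-cause "returns".
    causality(fit, cause = "searches")$Granger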


3 Method

3.1 Word selection

There is limited previous research on the predictive properties of Google search data with respect to stock markets. Thus, there is no natural way or established theory for how to choose words for analyzing causalities. Without evidence-based selection criteria, there are many words that could potentially have predictive properties on the index. The selection therefore aims to cover a diverse set of words, based on common financial terms supported by a set of intuitively chosen words.

To cover some of the most basic and common financial terms, the 20 words listed in 20 English Words for Finance You Simply Must Know by FluentU will be tested. The words, with FluentU's definitions, are presented below:[20]

• Debt - Debt refers to any kind of borrowing such as loans, mortgages, etc. Debts are a way for you or your company to borrow money (usually for large purchases) and repay it at a later date with interest.

• Interest rate - Interest is the amount the bank (or other moneylender, which is any person or organization that gives you money) will charge you or your company for the money you borrow from them. That amount, or interest rate, is expressed as a percentage of the loan.

• Investment - The noun investment refers to money that you put into your business, property, stock, etc., in order to make a profit or earn interest.

• Capital - Capital refers to your money or assets.

• Cash outflow - Cash outflow refers to the money that your company spends on its expenses and other business activities.

• Revenue - Your revenue is the amount of money your company makes from the sale of goods and services.

• Profit - Profit describes the amount of revenue your company gains after excluding expenses, costs, taxes, etc. The goal of every business is to make profit.

• Loss - In finance, we often hear the phrase profit and loss. Loss is when you lose money. It’s the opposite of profit, and it’s a word that no one in finance ever wants to hear. Still, it’s something that can happen when a company makes less money than it spends.

• Bull market - A bull market is a financial market situation where stock prices are up (just like the bull’s horns) as a result of investor confidence and the expectations of a strong market.

• Bear market - A bear market is the opposite of a bull market. In a bear market, stock prices are falling and the financial market is down—the bear’s paws are facing downwards, and coming down on its enemies.


• Rally - As you know, stock markets go up and down. A stock market rally is when a large amount of money is entering the market and pushing stock prices up.

• Stocks - The word stocks is a general term used to describe the ownership certificates of any company. The holder of a company's stocks is a stockholder. As a stockholder, you're entitled to a share of the company's profit based on the number of stocks you hold.

• Shares - Some companies divide their capital into shares and offer them for sale to create more capital for the company.

• Overdraft - An overdraft is when you spend more money than you have in your bank account. The bank will often make you pay an overdraft fee if you do this.

• Credit rating - The credit rating of a person or company is either a formal evaluation or an estimate of their credit history, and it indicates their potential ability to repay any new loans.

• Long term loan - Sometimes businesses need to buy assets, equipment, inventory and other things. Banks offer long term loans for businesses that need to borrow a large amount of money for a longer period of time.

• Short term loan - As a business or individual, you can borrow money from the bank for short periods of time. A short-term loan is usually repaid in less than five years.

• Mortgage - A mortgage is a loan in which your property—most commonly your house—will be held by a bank or other moneylender as collateral. You'll receive a loan for the value of the property. This means the moneylender will hold your property until your loan has been fully repaid.

• Collateral - Collateral is something valuable, such as a property you own, that you pledge (temporarily give to) a bank, financial company or other moneylender as a guarantee of your loan repayment.

• Recession - When we talk about a recession, we’re referring to a period of significant (major) decline in a country’s economy that usually lasts months or years.

Furthermore, there are words, not included in the 20 already chosen, that intuitively are considered interesting to investigate. These are described and motivated below:

• Crisis - Searches for crisis, in financial terms, could reflect an overall concern for the near future of the stock market and might indicate constrained behaviour among investors.

• S&P500 - S&P500 is a stock index containing 500 American stocks and is the index that this thesis tests causality against. It is believed that increased searches on this index potentially could signal an increased appetite for investments.


• SPX - SPX is the market ticker for the S&P500 index. It serves as an abbreviated unique identifier for the index, used for getting real-time information on the security.

• Amazon - One of the biggest companies in the world, which also constitutes a significant part of the S&P500 index. Its online search volumes could also reflect consumer behaviour in the economy, due to its e-commerce business.

• Restaurants - It is believed that in an economy doing well, people tend to eat out to a greater extent.

• Risk - Risk and reward often go hand in hand. A sudden change in search volumes for risk could reflect either a change in risk appetite or a greater concern about future fluctuations in the economy.

• Dividend - It is believed that if people are willing to invest more, it is intuitive to assume that more investors will research certain stocks and their dividend policies.

• Gold - A sudden change in the demand for gold could either symbolize an increased demand for investments in general, or the opposite, an increased demand for fixed assets.

• Taxes - The tax system is relatively complex, both concerning taxes themselves and the taxation of returns on investments. An increased interest in these details could reflect higher returns on the market.

• Inflation - Inflation forecasts are often seen as indicators of the future state of the economy. Hence, searches for inflation could potentially reflect investors' view of the future of the market.

For the intuitively chosen words, the category under which the search data is collected from Google differs. Some of the words are obviously finance related, and their search data will thus be collected using the "Finance" category. This prevents searches on other topics, e.g. the risk that your food gets burnt in the oven, from being falsely included. However, Amazon, Restaurants and Gold will be collected from searches across all categories.

3.2 Data collection

The analysis will consider two types of data which are Google search volumes and historical prices of the S&P500 index. Below, the data collection method and sources of data are presented.

3.2.1 Search data

The search volume data will be obtained from the publicly available tool Google Trends, which provides historical Google search volumes for different words. These can be segmented using several criteria, such as location or category. In order to make comparisons between different terms easier, Google normalizes the search volumes on a scale from 1 to 100 by dividing each data point by the total search volume of the specified time period and geographical area.


The data presented by Google Trends is an unbiased sample of Google search data, and only a percentage of the total search volume is used to compile the trend data. Non-real-time data can be collected from 2004 up until 36 hours prior to the search, and weekly data is presented each Sunday. Google Trends does not provide weekly search data for time periods longer than five years; for longer periods, the data is presented on a monthly basis. The period March 24th, 2013 to March 24th, 2018 provides 260 data points, which is considered enough to obtain reliable results in the causality tests. Thus, the data used in the analysis covers this five-year period.

Search volumes for words with a small number of searches are not available, and in the case of multiple searches for the same word by the same person during a short time period, the algorithm only accounts for the first search. However, the search does not need to be for the specific word alone, as the algorithm also accounts for searches where the word is included in a longer phrase.[21]

3.2.2 S&P500-Index

The S&P500 index consists of 500 American stocks and is widely considered the best single gauge of large-cap U.S. equities.[22] The index is capitalization-weighted, meaning that each stock is weighted by the market value of its outstanding shares. It is designed to measure the performance of the broad domestic economy through changes in the stock market, and contains stocks from all industries with a market capitalization of at least $6.1 billion.[23] The index expresses the total market value of the shares against a base level of 10 set in the base period 1941-1943. It opened 2018 at 2,683.73 on January 2nd.[24]

The historical prices of the S&P500 will be collected from Yahoo Finance for the same period as the search volume data. The index only trades on business days, and therefore the closing price of the last business day before each Sunday is used to match the search volumes, which are stamped on Sundays.
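A sketch of how this data collection could be scripted is shown below. The thesis does not name specific tools for the downloads themselves, so the packages here ('gtrendsR' for Google Trends and 'quantmod' for Yahoo Finance) are assumptions, and their interfaces and rate limits may differ:

    library(gtrendsR)  # third-party Google Trends client (assumed)
    library(quantmod)  # Yahoo Finance downloads (assumed)

    # Weekly US search volumes for one keyword over the five-year window.
    trend    <- gtrends("mortgage", geo = "US", time = "2013-03-24 2018-03-24")
    searches <- trend$interest_over_time$hits

    # S&P500 prices; the close of the last business day of each week is
    # later matched against the Sunday-stamped search data.
    getSymbols("^GSPC", src = "yahoo", from = "2013-03-24", to = "2018-03-24")
    weekly_close <- Cl(to.weekly(GSPC))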

3.3 Investment Strategies

This section describes three different investment strategies that will be backtested using historic data. In order to perform the desired backtests, a simplification is made by ignoring transaction fees. This is believed to have a minor impact on the actual returns, as fees are usually small in comparison to the invested capital.

To simplify the descriptions of the investment strategies, the following four notations are introduced:

p(i) - price of the S&P500 index in week i
n(i) - Google search volume in week i
v(i) - value of the portfolio in week i
R(i) - return of the investment made in week i

Strategy 1

As stated in section 1.5, Preis et al. claimed that it was possible to yield a 326 % return by selling and buying the DJIA index according to changes in Google search volumes. To verify whether it is possible to achieve similar returns on the S&P500 index, the same strategy will be tested on the best performing words in the causality tests.

Preis et al. used a strategy where, in week i, they invested in the index at price p(i) and sold one week later at price p(i+1) if n(i) > n(i-1). Conversely, if n(i) < n(i-1), they used the possibility of shorting, i.e. selling in week i at price p(i) and buying back at price p(i+1) one week later. Thus, depending on whether a long or short investment is made, the corresponding return is given by R(i) = 1 + [p(i+1) - p(i)]/p(i) if long, or R(i) = 1 + [p(i) - p(i+1)]/p(i) if short. Hence, v(i+1) = v(i) · R(i).

This strategy description assumes a positive causality, i.e. that higher search volumes indicate a rising price of the S&P500 index. However, this may not be the case, as some words may have a negative causality, implying that higher search volumes precede a decrease in price. Thus, before the strategy is tested, the coefficient from the corresponding VAR model will be analyzed. If it is negative, the method described above is reversed: the index is bought if n(i) < n(i-1) and a short position is taken if n(i) > n(i-1).
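A compact sketch of this backtest logic in R, for a positively causal keyword; the input vectors are illustrative and would in practice come from the data described in section 3.2:

    # Strategy 1: long for week i+1 if n(i) > n(i-1), otherwise short.
    strategy1 <- function(p, n, v0 = 1) {
      v <- v0
      for (i in 2:(length(p) - 1)) {
        R <- if (n[i] > n[i - 1]) {
          1 + (p[i + 1] - p[i]) / p[i]   # long: buy at p(i), sell at p(i+1)
        } else {
          1 + (p[i] - p[i + 1]) / p[i]   # short: sell at p(i), buy back at p(i+1)
        }
        v <- v * R                       # v(i+1) = v(i) * R(i)
      }
      v  # final portfolio value per unit of starting capital
    }

    # Illustrative inputs.
    set.seed(1)
    p <- cumprod(c(100, 1 + rnorm(259, 0.001, 0.02)))  # weekly index prices
    n <- sample(100, 260, replace = TRUE)              # weekly search volumes
    strategy1(p, n)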

Strategy 2

The second strategy uses a three-week moving average of the Google search volume in order to better capture the trend in search activity. Thus, if n(i) is greater than the three-week moving average, i.e. [n(i-3) + n(i-2) + n(i-1)]/3, an investment is made at p(i) and sold one week later at p(i+1). Conversely, if n(i) < [n(i-3) + n(i-2) + n(i-1)]/3, a short position is taken, selling the index at p(i) and buying it back one week later at p(i+1).

Just like in Strategy 1, it has to be taken into consideration whether the considered search volume has a positive or negative causality. In the case of a negative causality, the same changes as in Strategy 1 are made.

Strategy 3

In the third strategy, combinations of technical indicators (i.e. search trends) are used. The idea of this algorithm is to stay fully invested at all times and, when both search trends indicate a negative movement of the index, take a short position. Thus, for positively causal search trends, if n1(i) < n1(i-1) and n2(i) < n2(i-1), a negative indication is given and a short position is taken. The return the following week is hence R(i) = 1 + [p(i) - p(i+1)]/p(i), and otherwise R(i) = 1 + [p(i+1) - p(i)]/p(i).

The idea of this algorithm is to have a return that at most times mimics the index, and to identify opportunities where a short position would yield positive returns. However, since the number of possible combinations of search trends is large, the strategy is only tested on pairwise combinations of the five best performing search trends from the causality test, as sketched below.
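A sketch of the corresponding decision rule for two positively causal search trends; as above, the function is illustrative rather than the exact implementation used in the thesis:

    # Strategy 3: stay long unless both search volumes fell last week,
    # in which case take a short position for the coming week.
    strategy3 <- function(p, n1, n2, v0 = 1) {
      v <- v0
      for (i in 2:(length(p) - 1)) {
        short <- n1[i] < n1[i - 1] && n2[i] < n2[i - 1]
        R <- if (short) {
          1 + (p[i] - p[i + 1]) / p[i]
        } else {
          1 + (p[i + 1] - p[i]) / p[i]
        }
        v <- v * R
      }
      v
    }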

3.4 Outline

When testing for Granger-causality, certain key assumptions regarding the characteristics of the underlying data (see section 2.3) have to be ensured for valid results. Hence, the testing in this thesis is divided into six phases based on the Box-Jenkins method: stationarity checks of the data, transformation of non-stationary data, selection of the appropriate lag order for each VAR process, estimation of model parameters, model checking (i.e. residual analysis) and the Granger-causality test itself.

Generally, historic stock price data shows clear trends and is thus not considered a stationary process. However, a stationary process can normally be obtained by taking the first-order difference of the prices. Thus, the tests performed on the stock market throughout this thesis refer to the weekly returns of the S&P500 index.

To ensure stationarity of the Google search trends data, individual Augmented Dickey-Fuller (ADF) tests will be performed on each time series. The ADF test evaluates the null hypothesis that a unit root is present, which is equivalent to the process being non-stationary, and rejects the hypothesis if the time series is stationary. If the hypothesis is not rejected at a significance level of 5 % or better, the data is considered non-stationary and in need of further transformation.

As described, some search data will be considered non-stationary, which would introduce uncertainty into the VAR model. Hence, for the data sets where the null hypothesis is not rejected in the ADF test, transformations will be necessary. As with the clearly non-stationary S&P500 data, the search data that do not pass the ADF test will be transformed by taking the first-order difference.

Vector autoregression and statistical analysis are conveniently performed in R with the 'vars' package. As stated, before estimating the VAR model, an appropriate lag order has to be determined. This will be done with the package's built-in VARselect function, which provides lag estimates from the four different criteria AIC, HQ, SC and FPE, briefly defined in section 2.3.2. The lag order will be decided from the Schwarz criterion (SC), since it penalizes the hardest for introducing more lags into the model. The motivation is based on the following three aspects:

• It is believed that too much lag will introduce more disturbance from noise in the model (i.e. other factors affecting the index price)

• The tests are based on weekly data. Hence, allowing too much lag in the model, with the delayed effects it implies, might stretch the horizon up to months

• AIC (and FPE), which allow more lag, often yield inconsistent estimates for larger datasets

When the VAR order has been selected, the model parameters will be estimated. This will be done using ordinary least squares, as described in section 2.3.6, with the VAR function.

In order to validate the selected model and ensure that it suitably describes the data, residual analyses will be performed. These analyses evaluate the whiteness of the residuals from the fitted model using the Lagrange multiplier test, which tests the null hypothesis of uncorrelated residuals. If the test rejects the null hypothesis of uncorrelated error terms at a significance level of 5 % or better, the model is reevaluated with a higher lag order.

Finally, the test for causality will be performed. A VAR model with the appropriate lag order, determined in the previous stage, will be estimated for each search trend together with the weekly returns of the S&P500 index. This will be done using the package's VAR function, modelling each pair of time series as described in equations (1) and (2). For each VAR model, the Granger-causality will then be evaluated using the package's built-in causality function.

The steps are summarized as follows:

1. Make sure that each time series is stationary. If not, transform the data by taking the first-order difference

2. Select appropriate VAR order using Schwarz criterion

3. Fit the VAR model with the determined lag order from step two

4. Analyze the whiteness of the residuals from each model. If errors are serially correlated, go back to step two and increase the lag until the issue is resolved

5. Perform Granger-Causality tests on the resulting models
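Under the assumptions above, these five steps can be sketched end-to-end in R for a single keyword; the series are illustrative and the thresholds follow the 5 % levels stated earlier:

    library(vars)
    library(tseries)

    set.seed(1)
    returns  <- rnorm(260)          # weekly S&P500 returns (illustrative)
    searches <- cumsum(rnorm(260))  # raw weekly search volumes (illustrative)

    # 1. Stationarity: difference the search data if the ADF test fails.
    if (adf.test(searches)$p.value > 0.05) {
      searches <- diff(searches)
      returns  <- returns[-1]       # keep the two series aligned
    }
    y <- cbind(returns, searches)

    # 2. Lag order from the Schwarz criterion.
    p <- VARselect(y, lag.max = 10, type = "const")$selection["SC(n)"]

    # 3.-4. Fit, then raise the lag until the residuals pass the BG test.
    repeat {
      fit <- VAR(y, p = p, type = "const")
      if (serial.test(fit, lags.bg = 5, type = "BG")$serial$p.value > 0.05) break
      p <- p + 1
    }

    # 5. Granger-causality test.
    causality(fit, cause = "searches")$Granger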


4 Results

4.1 Transformation of Data

As stated, an initial test of the stationarity of the data was performed. An overview of the tests is shown below; the detailed test statistics are found in Appendix A.1. The tests showed that most search data were non-stationary and hence required a transformation by taking the first-order difference. For four keywords, however, the hypothesis of a unit root was rejected at the desired level: cash outflow, bear market, long term loan and taxes. Thus, no transformation was made for the search volumes of those words.

Table 1: Test summary: Stationarity

Search Word        Differenced Data (Yes/No)
Debt               Yes
Interest Rate      Yes
Investment         Yes
Capital            Yes
Cash Outflow       No
Revenue            Yes
Profit             Yes
Loss               Yes
Bear Market        No
Bull Market        Yes
Rally              Yes
Stocks             Yes
Shares             Yes
Overdraft          Yes
Credit Rating      Yes
Long Term Loan     No
Short Term Loan    Yes
Mortgage           Yes
Collateral         Yes
Recession          Yes
Crisis             Yes
S&P500             Yes
SPX                Yes
Amazon             Yes
Restaurants        Yes
Risk               Yes
Dividend           Yes
Gold               Yes
Taxes              No
Inflation          Yes

4.2 Selection of lag order

As stated in section 3.4, the selection of lag order was based on the Schwarz criterion (SC). The obtained lag orders were, however, preliminary, as they might have to be changed due to systematic model errors found in the model validation. The results from the SC selection are presented in Table 2 (a).


4.3 Model Validation

After selecting the appropriate VAR order using the Schwarz criterion, each model was fitted using least squares. Subsequently, the models had to be validated to ensure that there were no systematic errors in the fitted models. This was done by analyzing the whiteness of the residuals using a Lagrange multiplier test, more specifically the Breusch-Godfrey LM test. The results are shown below, where the left table displays the initial tests, with the VAR order determined by SC, and the right table displays the models that did not pass the initial test and had to be re-fitted with a higher lag order.

Table 2: Test summary: Whiteness of Residuals

(a) Test of primary models

Search Word        Lag   Chi-Squared    P-value
Debt               1     8.2712         0.08214
Interest Rate      1     6.9889         0.1365
Investment         2     20.779 ***     0.00776
Capital            4     20.952         0.05108
Cash Outflow       1     6.0312         0.1968
Revenue            1     5.9297         0.2045
Profit             1     10.658 **      0.03069
Loss               1     9.5166 **      0.04941
Bear Market        1     9.2606         0.05491
Bull Market        2     23.795 ***     0.002481
Rally              2     10.267 **      0.03617
Stocks             2     3.2734         0.916
Shares             2     15.86 **       0.04442
Overdraft          1     18.156 ***     0.00115
Credit Rating      2     22.317 ***     0.004362
Long Term Loan     1     3.4674         0.4829
Short Term Loan    3     14.265         0.2841
Mortgage           1     4.3012         0.3668
Collateral         2     15.519 **      0.04981
Recession          3     15.799         0.2006
Crisis             2     15.456         0.05086
S&P500             2     17.164 **      0.02844
SPX                2     24.047 ***     0.00225
Amazon             1     2.5649         0.633
Restaurants        2     16.773 **      0.03256
Risk               1     14.107 ***     0.006961
Dividend           1     9.7634 **      0.04461
Gold               1     13.316 ***     0.009829
Taxes              1     9.1184         0.05821
Inflation          1     12.676 **      0.01297

Significance codes: '***' 1 %, '**' 5 %, '*' 10 %

(b) Re-fitted models with smallest approved lag order

Search Word        New Lag   Chi-Squared    P-value
Investment         3         20.987         0.05057
Profit             3         9.6057         0.6505
Loss               2         9.9324         0.2698
Bull Market        3         18.987         0.08885
Rally              4         20.324         0.206
Shares             3         12.963         0.3717
Overdraft          3         15.66          0.2073
Credit Rating      3         15.403         0.2201
Collateral         3         10.982         0.5305
S&P500             4         23.403         0.1034
SPX                4         19.964         0.2218
Restaurants        7         24.605         0.6492
Risk               3         19.218         0.08339
Dividend           3         12.839         0.3809
Gold               2         14.147         0.07802
Inflation          2         10.295         0.2449


4.4 Granger-Causality Tests

When all models had been fitted correctly, the tests for Granger-causality could be performed. Eight of the 30 search words tested indicated a causal ability on the index price. For three words, stocks, mortgage and restaurants, the hypothesis of non-causal behaviour could be rejected at a level of five percent or better. The summarized test results, containing the F-statistics and corresponding p-values, are shown below together with the properties of the final models.

Table 3: Test summary: Granger-Causality

Search Word        Order of Difference   Lag Order   F-statistic     P-value
Debt               1                     1           1.7599          0.1852
Interest Rate      1                     1           1.3425          0.2471
Investment         1                     3           3.1188 *        0.09111
Capital            1                     3           1.5677          0.1963
Cash Outflow       0                     1           0.36579         0.5456
Revenue            1                     1           0.72322         0.3955
Profit             1                     3           0.39771         0.7547
Loss               1                     2           0.83172         0.4359
Bear Market        0                     1           2.2307          0.1359
Bull Market        1                     3           0.83109         0.4772
Rally              1                     4           0.47143         0.7567
Stocks             1                     2           3.5698 **       0.02888
Shares             1                     3           1.6444          0.1782
Overdraft          1                     3           1.8456          0.1379
Credit Rating      1                     3           1.7837          0.1493
Long Term Loan     0                     1           0.10893         0.7415
Short Term Loan    1                     3           2.1005 *        0.09927
Mortgage           1                     1           7.4454 ***      0.00658
Collateral         1                     3           0.54829         0.6495
Recession          1                     3           2.3391 *        0.0727
Crisis             1                     2           3.0007 *        0.05064
S&P500             1                     4           2.0484 *        0.0865
SPX                1                     4           0.53746         0.7083
Amazon             1                     1           1.4008          0.2371
Restaurants        1                     7           2.0753 **       0.04477
Risk               1                     3           1.4277          0.2338
Dividend           1                     3           1.7055          0.1649
Gold               1                     2           0.21574         0.806
Taxes              0                     1           0.058257        0.8094
Inflation          1                     2           0.59977         0.5493

Significance codes: '***' 1 %, '**' 5 %, '*' 10 %


4.5 Backtesting Investment Strategies

This section presents the results of the three investment strategies defined in section 3.3.

4.5.1 Strategy 1

The plot below displays the resulting five-year returns from Strategy 1 for each keyword with a significance of 10 % or better in the Granger-causality test. The search data for mortgage, which had the highest level of Granger-causality, also yielded the highest return, outperforming the S&P500 index by 30 percentage points. On the remaining search trends, Strategy 1 performed significantly worse; for some keywords the strategy resulted in negative returns, despite the index increasing by 80 % over the testing period.

A full list of all returns from Strategy 1 is found in Appendix A.2.

Figure 1: Investments based on weekly differences


4.5.2 Strategy 2

The results from applying Strategy 2 (i.e. basing the trading decisions on the three-week moving average of the search volumes) are displayed in the plot below. The overall trend is similar to the results of Strategy 1: investing based on the search volumes of mortgage was the only strategy that beat the buy-and-hold strategy of the index, now outperforming the index by 42 percentage points. The general return of the strategy applied to the remaining search trends was also slightly improved.

A full list of all returns from Strategy 2 is found in Appendix A.3.

Figure 2: Investments based on three weeks’ moving average


4.5.3 Strategy 3

In Strategy 3, an attempt to yield excess returns was made using combinations of search trends. The overall result of this strategy was significantly better than the previous two investment strategies and is displayed in the plot below. The best performing combination was mortgage and recession, which outperformed the index by over 60 percentage points. The second and third best strategies were also combinations with mortgage, paired with stocks and crisis respectively.

The detailed returns from Strategy 3 are found in Appendix A.4.

Figure 3: Pairwise combination of top-five search trends


5 Discussion

In this section the results from Section 4 are explained, contrasted and evaluated from both a mathematical and financial perspective. This is followed by a recommendation for further research on the subject and lastly the conclusions of the thesis are presented.

5.1 Interpretation of Results

Here the results from the causality tests and the subsequent investment strategies are analyzed. The indicated causalities are explained, and an attempt is made to explain the successful or unsuccessful performance of the investment strategies. A comparison with previous findings on the topic is also presented. Lastly, implications for current financial theory are discussed.

5.1.1 Granger-Causality Test

The interpretation of a Granger-causality test is that it evaluates whether the future value of a time series can be predicted with better precision (smaller prediction error) provided an additional set of data from a second time series.

If this is the case, the second time series is said to precede the desired time series, which throughout this thesis is a stock index.

For eight of the 30 series of search data evaluated, the hypothesis of non-causality on the S&P500 index could be rejected at a significance level of 10 % or better. The keywords related to these search trends were investment, stocks, short term loan, mortgage, recession, crisis, S&P500 and restaurants.

Thus, better predictions of the future movements of the index could be made using the historical data from these search trends. For restaurants and stocks, the hypothesis was rejected at a level of 5 % and for mortgage it was rejected at a level of 1 %.

The relationship between certain search trends on Google and the performance of the stock market is not evident and hard to explain explicitly. However, based on the results presented in this thesis, speculations can be made on a general level regarding why individuals perform certain searches at a given time and how this could be reflected in the stock market.

When the subprime mortgage market collapsed in 2007, a series of events followed that eventually led to what is claimed to be the worst financial crisis since the Great Depression in the 1930s. Hence, it is not unrealistic that the search volumes for an emotionally charged word such as mortgage, closely associated with a financial crisis, show the highest level of causality. The fact that mortgage has a negative causality is also reasonable: increasing mortgage-related searches could reflect a more restrictive and pessimistic view of the financial market, where investors worry about the future state of the economy. Furthermore, the search volumes for recession and crisis, words which can likewise be associated with negative views of the future economic state, also indicated a negative causal behaviour.


S&P500 and investment can both be related to overall demand on the financial market, from which their positive causal relationship with the stock index could be explained: more investors seeking investments creates an increased demand that drives up general stock prices. By the same reasoning, increased searches for short term loan and stocks should also be associated with a higher demand for short-term investments. However, the resulting VAR models used for the Granger tests indicated a negative causal relationship for these two words.

The only non-finance-related search word for which the null hypothesis could be rejected was restaurants. The logic behind including restaurants in the test was the belief that searches for restaurants relate to individuals' tendency to eat out, which could reflect an increased willingness to spend and invest money. It should be highlighted that this model required a lag of seven weeks, which in itself makes sense, since the hypothetical relationship is indirect; however, it also complicates the model somewhat. Furthermore, the coefficients for different lags were inconsistent, taking both positive and negative values, meaning that an increase in search volumes one week could indicate a positive return of the index the next week but a negative return the week after. This makes this particular result harder to interpret. However, since the sum of the coefficients over the different lags in the model was positive, search volumes for restaurants were believed to have a positive correlation with the stock index.

In addition to the successful test results, some results were not in line with what one might have expected. For instance, previous studies indicated that a causal behaviour could be expected for the word debt. This was not the case here, whereas the hypothesis of non-causality could be rejected for short term loan, a financial contract that incurs a debt upon the credit taker. Perhaps the fact that a debt can result from many different types of loans makes the correlation more complex and harder to detect mathematically. Looking at the words for which the hypothesis was rejected, the embryo of a trend can be found: the words are either directly related to demand on the stock market or closely related to an economic crisis.

5.1.2 Investment Strategies

As presented in section 4.5, in all three strategies there were search volumes, or combinations of search volumes, that yielded greater returns than the 77.17 % return of the index itself. For Strategies 1 and 2, where the investment decisions were based on search volumes for a single keyword, only the strategies applied to searches for mortgage yielded a greater return than the index over the five-year testing period. This is an interesting result: out of the eight words for which the null hypothesis was rejected, only the Granger-causality test for the mortgage data rejected the hypothesis at the 1 % level, and mortgage was also the only search data that yielded excess returns over the testing period. A natural train of thought is to assume that a certain level of causality (i.e. a significance level of 1 %) is necessary in order to be compatible with these two rather naive investment strategies.

References
