Rebalancing 2.0 - A Macro Approach to Portfolio Rebalancing


DEGREE PROJECT IN MATHEMATICS, SECOND CYCLE, 30 CREDITS

STOCKHOLM, SWEDEN 2020

Rebalancing 2.0 - A Macro Approach to Portfolio Rebalancing

RAWAND SULTANI

KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF ENGINEERING SCIENCES


Rebalancing 2.0 - A Macro Approach to Portfolio Rebalancing

RAWAND SULTANI

Degree Projects in Financial Mathematics (30 ECTS credits)
Master's Programme in Applied and Computational Mathematics
KTH Royal Institute of Technology, year 2020

Supervisor at COIN AB: Joakim Ahlinder
Supervisor at KTH: Boualem Djehiche
Examiner at KTH: Boualem Djehiche


TRITA-SCI-GRU 2020:047
MAT-E 2020:013

Royal Institute of Technology
School of Engineering Sciences
KTH SCI
SE-100 44 Stockholm, Sweden
URL: www.kth.se/sci


Abstract

Portfolio rebalancing has become a popular tool for institutional investors over the last decade. Adaptive asset allocation, an approach suggested by William Sharpe, is a new approach to portfolio rebalancing that takes the market capitalization of asset classes into consideration when setting the normal portfolio and adapting it to a risk profile. The purpose of this thesis is to compare the traditional approach to portfolio rebalancing with the adaptive one. The comparison consists of backtesting and two simulation methods (Monte Carlo and Latin Hypercube Sampling), which are compared computationally in terms of running time and memory usage. The comparisons were done in Excel and in R respectively. It was found that both asset allocation approaches gave similar results in terms of the relevant risk measurements, but that the traditional approach was a cheaper and easier alternative to implement and may therefore be preferable to the adaptive approach from a practical perspective.

The sampling methods showed no difference in memory usage, but Monte Carlo sampling had around 50% lower average running time while at the same time being easier to implement.


Svensk titel: Rebalansering 2.0-En makro strategi till portfölj rebalansering

Sammanfattning

Portfolio rebalancing has become a popular tool for institutional investors over the last decade. Adaptive asset allocation, a strategy proposed by William Sharpe, is a type of rebalancing in which the market capitalization of the asset classes is taken into account while adapting the portfolio to a risk profile.

The purpose of this work is to evaluate the traditional strategy against the adaptive one, where the comparison consists of backtesting (applying the strategies to historical data) and two simulation methods (Monte Carlo and LHS).

The implementations of the simulations are compared with respect to running time and memory usage. The comparisons were done in Excel and in R respectively. The results of the study show that both strategies gave similar results with respect to the risk measures included, but that the traditional strategy was cheaper and easier to implement and may therefore be the preferable strategy from a practical perspective. The simulation methods showed no difference in memory usage, but Monte Carlo was both easier to implement and had around 50% lower running time on average.


Acknowledgements

First and foremost, I want to thank my supervisor at COIN Investment Consulting Group, Joakim Ahlinder, for his continuous support and feedback and for giving me the opportunity to conduct this thesis. I would also like to express my sincerest gratitude to Pawel Herman at EECS for his support and willingness to help even though he had no obligation to do so. Last but not least, I would like to thank my supervisor Boualem Djehiche for his guidance during the process.


Contents

1 Purpose and research question
    1.1 Purpose
    1.2 Research Question

2 Introduction
    2.1 Asset Allocation
    2.2 Rebalancing
    2.3 Previous Research
        2.3.1 Adaptive Asset Allocation Policies
        2.3.2 Research Gap
    2.4 Limitations
        2.4.1 Assumptions and Restrictions
        2.4.2 Dataset
    2.5 Algorithms
        2.5.1 Monte Carlo algorithms
        2.5.2 Las Vegas Algorithms
        2.5.3 Running Time and Memory Usage Analysis

3 Methods
    3.1 Tools
    3.2 Adaptive Asset Allocation in Practice
    3.3 Types of Rebalancing
    3.4 Sampling Distribution
    3.5 Backtesting
    3.6 Risk measurements
        3.6.1 Yearly return
        3.6.2 Yearly standard deviation
        3.6.3 Value-at-Risk (VaR)
        3.6.4 Empirical Value-at-Risk
        3.6.5 Expected-Shortfall (ES)
        3.6.6 Empirical Expected-Shortfall
        3.6.7 Sharpe ratio
    3.7 Algorithms
        3.7.1 Monte Carlo Sampling Algorithm
        3.7.2 Latin Hypercube Sampling Algorithm

4 Results
    4.1 Backtesting
        4.1.1 Time & Threshold, 60-40 policy
        4.1.2 Time & Threshold, 50-50 policy
        4.1.3 Time & Threshold, 40-60 policy
        4.1.4 Summary of Time & Threshold Strategy
        4.1.5 Time
        4.1.6 Time, 60-40 policy
        4.1.7 Time, 50-50 policy
        4.1.8 Time, 40-60 policy
        4.1.9 Summary of Time Strategy
    4.2 Distribution Selection
        4.2.1 Density Plots
        4.2.2 QQ-plots
        4.2.3 Autocorrelation and Heteroscedasticity
        4.2.4 Empirical CDF
    4.3 MC simulations
        4.3.1 Time Threshold, 60-40 policy
        4.3.2 Time Threshold, 50-50 policy
        4.3.3 Time Threshold, 40-60 policy
        4.3.4 Summary of Time and Threshold Strategy
        4.3.5 Time strategy, 60-40 policy
        4.3.6 Time strategy, 50-50 policy
        4.3.7 Time strategy, 40-60 policy
        4.3.8 Summary of Time strategy
    4.4 LHS simulations
        4.4.1 Time Threshold, 60-40 policy
        4.4.2 Time Threshold, 50-50 policy
        4.4.3 Time Threshold, 40-60 policy
        4.4.4 Summary of Time and Threshold strategy
        4.4.5 Time strategy, 60-40 policy
        4.4.6 Time strategy, 50-50 policy
        4.4.7 Time strategy, 40-60 policy
        4.4.8 Summary of Time strategy
    4.5 Comparison of MC & LHS
        4.5.1 Average Running Time
        4.5.2 Memory Usage

5 Discussion
    5.1 Traditional Asset Allocation vs Adaptive Asset Allocation
    5.2 Monte Carlo Sampling vs LHS Sampling

6 Conclusions

Bibliography


Chapter 1

Purpose and research question

1.1 Purpose

The purpose of this thesis is to compare an adaptive asset allocation policy with a traditional asset allocation policy, building mainly on the work of Sharpe [1]. The argument Sharpe [1] provides for motivating this strategy is that even though the value of individual products within an asset class can change often over time, this does not imply that the value of the asset class itself changes. Looking at the value of the asset class itself is therefore of interest.

Rebalancing strategies have gained popularity among institutional investors in recent years, one reason being that investors can maintain the same risk profile in a portfolio over time while using diversification between different asset classes [2]. The comparison will be conducted through backtesting and two different sampling methods, referred to in this thesis as Monte Carlo sampling and Latin Hypercube Sampling. The reason for using backtesting alongside simulations is to see what the outcome of the strategies implies in a stochastic, forward-looking sense, while at the same time comparing it with how the outcome of the different rebalancing strategies would have looked from a real-world historical perspective. This study will furthermore compare the two simulation methods with regard to results, implementation, average running time and memory usage in order to evaluate which of them is best suited for these kinds of simulations. Besides the simulations and the backtesting, this study will also compare the results using risk measurements such as Value-at-Risk and Expected Shortfall, to name a few.



1.2 Research Question

From the purpose stated, this study will focus on answering two questions.

The first is the comparison of the outcomes of the two asset allocation strategies with regard to the risk measurements, and of how feasible the strategies are in practice. The motivation for this question is that even though one strategy might outperform the other, it might also be riskier, which is relevant to investors: comparing performance without taking the accompanying risk into account would be misleading. The second question concerns the sampling methods: how they differ in results, in implementation, and in average running time and memory usage. From this the following research questions can be formulated:

• How well does an adaptive asset allocation policy compare to a traditional asset allocation policy with regard to the risk measurements and their practical implications?

• What are the differences between the two sampling methods with regard to their implementations, computational aspects and effect on the results?


Chapter 2

Introduction

2.1 Asset Allocation

Asset allocation can be described as a risk-and-return balancing investment philosophy focusing on asset classes such as equity or bonds, and is today a portfolio policy widely used by both institutional and individual investors [3].

A possible explanation for this can be derived from the fact that the view of diversification has changed. The traditional view was that as long as all of the investment capital was not allocated to one single product, the portfolio was diversified. Diversification has since expanded to a broader view, where diversification is done between asset classes rather than within them [3]. The origin of this new paradigm of diversification was the famous paper Portfolio Selection by Harry Markowitz [4], released in 1952.

Markowitz's paper is said to have laid the foundation for modern portfolio theory and is still relevant today for portfolio optimization; among other things, it brought to light the effects that diversification between asset classes has on a portfolio. Markowitz showed and recognised the interrelationship between different asset classes, which in turn introduced a new, third dimension to portfolio theory: the diversification effect. The diversification effect is a measure of how the overall return and volatility characteristics are impacted when a different asset class is added to the portfolio. In relation to this, Gibson [3] adds that the difference in return and volatility characteristics between the asset classes will smooth the overall portfolio volatility, leading to less risk in the portfolio.

According to Gibson [3], the investment management business is changing by becoming more focused on achieving clients' financial goals, placing more of an emphasis on fitting asset allocation policies rather than on beating the market. There is however some critique of asset allocation. Booth [5] argues that even though asset allocation has been promoted for years, it has done very little to prevent investors from significant losses, and that the set of asset classes is fairly arbitrary and static. Furthermore, Booth [5] argues that the type of passive investing which asset allocation gives rise to does not exclude the possibility that active investing can consistently outperform a passive approach, and that there is support for both claims. The study The Equal Importance of Asset Allocation and Active Management [6] reaffirms this when trying to derive the relative importance of the two different strategies in explaining the variability of returns within a peer group, where the authors conclude that both strategies are of equal relative importance in determining portfolio return differences within a peer group. Although there is critique, there have been occurrences where rebalancing strategies have performed relatively well in terms of buying low and selling high. The following figure illustrates this:

Figure 2.1: Example of 50-50 rebalancing between fixed income and stocks

Although the critique of asset allocation is justified, as are its upsides, it should be mentioned that asset allocation comes in many forms. For example, Perold and Sharpe [7] mention that there are buy-and-hold strategies as well as more dynamic ones. In this thesis the focus and the context will involve the latter, exploring a particular strategy, namely portfolio rebalancing, sometimes called Strategic Asset Allocation.


2.2 Rebalancing

Within the dynamic strategies of asset allocation, it is common for asset allocating investors to set target percentage values for the value of each asset class in the portfolio [1]. A portfolio is rebalanced when the value of the asset classes deviates from the target percentages, either by buying an asset whose price has decreased or by selling an asset whose price has increased. How much an asset is allowed to deviate from its target percentage is defined by the limits of the asset class. Which limits to use, and with what interval of time between rebalancings, is a point of investigation. It is important to recognize that without rebalancing, the risk and return characteristics of the portfolio change over time. If we for example have a 50-50 portfolio consisting of stocks and bonds, the risk and return characteristics will change over time depending on how the prices of the assets evolve. The consequence is that the risk profile drifts and may no longer be the risk and return profile desired by the investor [2]. Rebalancing can be incorporated in many ways with regard to how often one rebalances and what the limits of the asset classes are. It also introduces a cost for investors when purchasing assets, which indicates that finding the optimal rebalancing solution for optimizing returns while minimizing risks and costs is not trivial. In fact, Zilbering, Jaconetti, and Kinniry Jr [2] conclude that there is no optimal rebalancing strategy regarding limits and frequency; rebalancing is simply a useful tool for remaining at the same risk-and-return characteristics. Although there is no optimal rebalancing strategy, the authors conclude in a general sense that semi-annual or annual monitoring and/or rebalancing with limits of 5% is a reasonable balance between risk and costs, stating that these parameters offer sufficient risk control at low cost.
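To make the drift concrete, consider a 50-50 stock-bond portfolio that is never rebalanced while stocks return 8% and bonds 2% per year. The numbers and the function below are purely illustrative, not from the thesis (which works in Excel and R; this is a Python sketch):

```python
def drifted_weights(w_stock, r_stock, r_bond, years):
    """Stock weight of an unrebalanced stock-bond mix after `years` years."""
    # Each sleeve compounds at its own rate; the weights then renormalize.
    stock = w_stock * (1 + r_stock) ** years
    bond = (1 - w_stock) * (1 + r_bond) ** years
    return stock / (stock + bond)

# After 10 years the 50-50 portfolio has drifted well past its target:
print(round(drifted_weights(0.50, 0.08, 0.02, 10), 3))  # 0.639
```

Stocks now make up about 64% of the portfolio, so the investor silently holds a riskier profile than the one originally chosen.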

Both Sharpe [1] and Zilbering, Jaconetti, and Kinniry Jr [2] explain that the rebalancing strategy can be hard to implement in reality from the perspective of an investor, since the strategy is described as contrarian by Sharpe [1] and therefore takes an emotional toll on the investor. This occurs when, for example, an investor keeps buying an asset that keeps falling in price, or keeps selling off an asset rising in price, which may be very trying for some investors [2]. Furthermore, Sharpe [1] mentions that this strategy is not even possible if all investors choose to follow it, because of its contrarian nature: if the price of an asset rises and the limits demand that you sell off some of your investment, who will the buyer be if all actors in the market want to sell? It is thus impossible for all investors to be contrarian, and hence impossible for everyone to follow a traditional asset allocation policy [1].

Rebalancing has so far been described as investing in multiple asset classes with specified target percentage values. Sharpe [1] describes this as a traditional asset allocation policy, recognized by setting fixed percentage values for the different asset classes in the portfolio and rebalancing with the purpose of holding the value of each asset as close to its target percentage as possible, or at least within its limits. In this thesis, traditional asset allocation will refer to the same thing, but the focus is centered around what Sharpe [1] calls adaptive asset allocation policies: a rebalancing strategy adapting the percentage values of the asset classes to the percentage which each asset class constitutes of the total market capitalization of all relevant assets. The reason why this is called adaptive is that as the market capitalization changes, so should the target percentages. Sharpe [1] claims that this should be preferred over traditional asset allocation policies, since it does not require transactions with other investors as soon as market prices change.

2.3 Previous Research

2.3.1 Adaptive Asset Allocation Policies

This thesis is mainly based on the work of Sharpe [1] and can be seen as an expansion of it. This section outlines his work, states what the research gap is, and describes how this thesis can provide further insight on the subject of adaptive asset allocation policies.

Sharpe [1] begins the paper by explaining portfolio rebalancing and traditional asset allocation, which consists of setting target percentage values for the desired value that the different asset classes shall have in the portfolio and sticking with them over time. In the context of traditional asset allocation the author goes on to explain the contrarian nature of rebalancing, which has already been stated above. Sharpe continues by illustrating a problem related to traditional asset allocation policies, using an example of an asset allocation policy consisting of stocks and bonds, with two US-based indexes representing these asset classes: the Wilshire 5000 stock index and the Barclays Capital U.S. Aggregate Bond Index. Sharpe states that the goal of the portfolio is that 60% of the value should come from stocks and the rest from bonds. He then shows a time series chart of the ratio of the value of U.S. stocks to U.S. stocks plus bonds.

Figure 2.2: Ratio of the value of US stocks to US stocks + Bonds

From the chart it is clear that the ratio varies over time. In relation to this, he states that at a certain point in time the portfolio needs to be rebalanced by buying more stocks, which in the short term makes the portfolio riskier than intended, and that at another point in time the portfolio needs to sell off stocks in order to rebalance. The point of this is to illustrate that in the short term the fund does not follow the desired asset allocation policy. It should be noted, however, that in the long term the risk follows the intended 60/40 policy. Sharpe argues that the long term does not matter for investors, referring to Keynes's famous quote "in the long run we are all dead". Sharpe therefore suggests two strategies for constructing a portfolio better suited to represent the desired target percentages. The first suggestion is to use optimization based on reverse optimization, a strategy which will not be discussed in this thesis. The second is what the author describes as adaptive asset allocation policies.

An adaptive asset allocation policy is described as an asset allocation policy taking the market capitalization of asset classes into consideration when constructing the normal portfolio. If we for example had a market consisting solely of bonds and stocks, and the market capitalization of stocks constituted 70% of the total market capitalization, this should be reflected in the allocation policy and hence in the portfolio. As the market capitalization changes over time, so should the percentages in the portfolio, which is why the policy is called adaptive. The following figure illustrates this, showing the relationship between the market capitalization of the two asset classes.


Figure 2.3: Relationship between the market capitalization of our different asset classes since 1990

2.3.2 Research Gap

The research gap, and the continuation of Sharpe's work, will focus mainly on two areas. First, Sharpe [1] concludes his report by suggesting that in order to incorporate the adaptive asset allocation policy it is necessary to obtain market capitalization data for the asset classes. This is something which the author suggests is not straightforward, since finding the market capitalization of indexes is in the hands of the index providers and not the investors themselves.

Furthermore, the author suggests using monthly data, which this study intends to do. Second, Sharpe [1] does not conduct any backtesting or simulation of this policy on a monthly basis to see what results can arise from the adaptive allocation policy, and suggests that doing so could inspire discussions amongst institutional investors regarding adaptive asset allocation policies and hence make them more comfortable with it. By using monthly data for conducting both backtesting and simulations, both of these suggestions by Sharpe will be addressed.

2.4 Limitations

2.4.1 Assumptions and Restrictions

In order to conduct this thesis, a few assumptions and limitations are made.

First and foremost, this thesis will assume that all capital in financial markets consists of equity and bonds; no other asset class, such as alternative investments like real estate, will be taken into consideration. The main reason for this is that it is simply too demanding, and outside the scope of this study, to find and derive the market capitalization of the world's entire stock of financial assets. Another reason is that this is consistent with Sharpe [1], on which this thesis is based, since he uses stocks and bonds to illustrate the concepts of adaptive asset allocation policies. Furthermore, this study will use market capitalization indexes of two different asset classes in order to quantify them; to be clear, these will serve as proxies, since it is, again, outside the scope of the thesis to derive the exact market capitalization of the asset classes. The index used for equity market capitalization is U.S. based, and there are a number of reasons for this. To begin with, Gibson [3] states that around 30 years ago the U.S. financial markets constituted around 100% of the world's liquid capital market, and hence they have always held a dominant role in this context. With globalisation and emerging markets, however, this ratio is around 40% today, but this thesis will assume that this is sufficient for it to serve as a proxy for the financial markets. For bonds, the Barclays US Aggregate Bond Index will be used. In total there are 349 points of monthly data spanning 29 years, from 1990-12-31 to 2019-12-31.

2.4.2 Dataset

The index which will be used as a proxy for equity is the S&P 500, alongside a monthly market capitalization index provided by Siblisresearch [8]. Furthermore, the monthly fixed income market capitalization will be represented by the Barclays US Aggregate Bond Index, where monthly data have been provided by Bloomberg [9]. The data will be monthly, since it would otherwise not be consistent with Sharpe's [1] suggestion for expanding his work. The data points span from 1990-12-31 to 2019-12-31, leading to a total of 349 data points, or 29 years of monthly data.

2.5 Algorithms

The simulations for this thesis will involve algorithms of a probabilistic character, rendering different results for the performance of the different asset allocation policies. The algorithms can also be called randomized algorithms according to Cormen et al. [10], since they involve generating random numbers, which is synonymous with sampling. The simulations will, in short, sample from suitably fitted distributions for the variables of interest, such as the returns of the asset classes, and then see how well the adaptive approach performs in relation to the traditional approach. The sampling will be conducted in two different ways, as already stated: what is referred to as the Monte Carlo way of sampling, which is simply random sampling, and Latin Hypercube Sampling (LHS), which is sampling from a stratified cumulative distribution function. LHS is, in simpler terms, sampling from unique intervals in the 0-1 region, which can then be fitted to a distribution of choice. Both of these sampling techniques are what gives rise to the probabilistic nature of the algorithms.
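The difference between the two sampling schemes can be sketched as follows. This is an illustrative Python sketch of the general idea, not the thesis's R code: plain Monte Carlo draws uniforms independently, while one-dimensional LHS divides [0, 1) into n equal strata, draws exactly one point per stratum, and shuffles the order; either set of uniforms can then be pushed through the inverse CDF of the chosen distribution.

```python
import random

def mc_uniform(n):
    # Plain Monte Carlo: n independent uniform draws on [0, 1).
    return [random.random() for _ in range(n)]

def lhs_uniform(n):
    # Latin Hypercube (one dimension): one draw from each of the
    # n equal-width strata [i/n, (i+1)/n), then shuffle the order.
    samples = [(i + random.random()) / n for i in range(n)]
    random.shuffle(samples)
    return samples

u = lhs_uniform(10)
# Each of the 10 strata contains exactly one sample:
print(sorted(int(x * 10) for x in u))  # [0, 1, 2, ..., 9]
```

The stratification is what reduces the sampling variability of LHS relative to plain Monte Carlo for a fixed sample size.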

Within the field of randomized algorithms there are two main types, called Monte Carlo algorithms and Las Vegas algorithms, where the former is not to be confused with the Monte Carlo sampling method used in this thesis [11].

2.5.1 Monte Carlo algorithms

As mentioned in Motwani and Raghavan [11], randomized algorithms belonging to the Monte Carlo class are recognized by the fact that they may fail or give wrong results, but their runtime does not depend on the randomness; they will therefore finish and not run indefinitely.

2.5.2 Las Vegas Algorithms

Motwani and Raghavan [11] describe Las Vegas algorithms as always returning the correct results and succeeding, but not guaranteeing that the program will finish in a desired time; they can theoretically run indefinitely.
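The distinction can be illustrated with a toy example of my own (not from the cited sources): a Monte Carlo algorithm runs a fixed number of steps but may return an inaccurate answer, whereas a Las Vegas algorithm always returns a correct answer but takes a random number of steps.

```python
import random

def pi_monte_carlo(n):
    """Monte Carlo class: fixed runtime (n trials), only an approximate answer."""
    hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
               for _ in range(n))
    return 4.0 * hits / n  # an estimate of pi, not the exact value

def find_marked_las_vegas(items, marked):
    """Las Vegas class: the answer is always correct, but the number
    of random guesses needed is itself random (and unbounded in theory)."""
    guesses = 0
    while True:
        guesses += 1
        i = random.randrange(len(items))
        if items[i] == marked:
            return i, guesses  # returned index is guaranteed correct
```

In this terminology, the simulations of this thesis are Monte Carlo algorithms: the runtime is fixed by the number of samples, while the output varies from run to run.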

2.5.3 Running Time and Memory Usage Analysis

The randomized algorithm in this thesis will be, regardless of sampling method, a Monte Carlo algorithm, since its running time is not randomized and it will always return a result. There is however no clear right or wrong answer, nor a certain event we will be looking for; the outcome is simply how the portfolio performed. For analyzing the time complexity of randomized algorithms, Cormen et al. [10] suggest describing the running time as the average-case running time, since the randomized part of the algorithm can be viewed as an input. An important distinction in the analysis of randomized algorithms in Cormen et al. [10] is whether the input to the algorithm is random or whether the algorithm itself makes random choices; the authors state that if the algorithm makes random choices as it goes along, the expected running time is of interest. The expected running time can be derived with the use of indicator variables in case there are certain events we are looking for, as described by Cormen et al. [10]. To exemplify and relate this to our work, this could for example be comparing with a benchmark and using indicator variables for when we beat the benchmark in terms of Sharpe ratio. Otherwise, if the input is random, which is the case in this thesis, the average running time is of interest.

The memory usage of the algorithms will be of interest when comparing the sampling methods, in order to see which of them is more or less computationally demanding. Olsson, Sandberg, and Dahlblom [12] suggest in their study that LHS might be a more efficient way of sampling than simple Monte Carlo, in the sense that they observed the same result using 50% less computer performance. It is therefore of interest to measure the space needed by the two different sampling methods and compare them. This is of equal relevance for the average running time, meaning the comparison of the sampling methods will be done in two categories, one being space and the other time, in order to give an idea of the computational performance. The differences in results will also be addressed.
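As a sketch of how such a comparison can be instrumented (the thesis does this in R; the Python standard-library tools below, and the stand-in sampler, are only illustrative), the average running time can be taken over repeated calls and the peak memory via an allocation tracer:

```python
import random
import time
import tracemalloc

def sampler(n):
    # Stand-in for either sampling method under test.
    return [random.random() for _ in range(n)]

def profile(fn, n, repeats=20):
    """Return (average wall-clock time, peak allocated bytes) for fn(n)."""
    # Average running time over `repeats` runs.
    start = time.perf_counter()
    for _ in range(repeats):
        fn(n)
    avg_time = (time.perf_counter() - start) / repeats

    # Peak memory allocated during a single run.
    tracemalloc.start()
    fn(n)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return avg_time, peak

avg, peak = profile(sampler, 100_000)
print(f"avg time {avg:.4f}s, peak memory {peak} bytes")
```

Running the same harness over both samplers, with the same sample size and number of repeats, gives the two comparison categories used in this thesis.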

Furthermore, it is important to make a distinction regarding the algorithm being discussed in this section. The randomized algorithm being addressed is not the entire scope of relevant code for conducting this work. Since the code iterates over limits, policies and datasets, which are all of fixed length, it makes little sense to have a serious discussion about time complexity. To address this briefly: all iterations have constant boundaries and no iteration depends on any kind of input. Because of this, the entire code, including the randomized part, has a time complexity of O(1). Hence it makes more sense to focus on the randomized part, which is what is addressed by Cormen et al. [10] in relation to probabilistic analysis.


Chapter 3

Methods

3.1 Tools

For backtesting, Microsoft Excel has been used. There are mainly two reasons for this. The first is that before any backtesting began, there was already a rebalancing file implemented by Coin. This was however only for traditional rebalancing, so changes had to be made to implement the adaptive asset allocation policy. The second reason for the use of Excel is that the tool allows for changing strategy, policy, limits and more in an efficient way. It is important to remember that there are many combinations of rebalancing, and this was taken into account when choosing a proper tool.

For the simulation part, R has been used alongside the tool RStudio. The motivation is that the language has good support and tools for statistical and mathematical work.

3.2 Adaptive Asset Allocation in Practice

Adaptive asset allocation as suggested by Sharpe [1] is not simply looking at the market capitalization and letting it decide the normal portfolio, and thereby the investor's risk profile. An adaptive asset allocation policy can loosely be described as a tweak or adjustment of the current normal portfolio: the market capitalization of the asset classes has a say in the policy but does not change it substantially unless there has been an abnormal increase or decrease in market capitalization. The strategy is implemented by the following algorithm, which Sharpe [1] describes:


1. For each asset class, calculate a factor k_i defined as k_i = V_0 / V_h, where V_0 is the current (today's) market capitalization and V_h a historic value of the market capitalization. This is done for all asset classes, so with two asset classes you get two factors. These will be referred to as asset factors.

2. Once this is done, multiply each asset class's normal weight by its asset factor and sum the products. With two asset classes and a 60-40 policy this gives 60% · k_1 + 40% · k_2 = sum. This sum can be above or below 100%, depending on the factors and policies.

3. Once the sum has been determined, the new normal weight of each asset class is the share that its normal weight, multiplied by its asset factor, constitutes of the sum. For example, say that the sum is 110%, the equity factor is 1.15 and the equity normal weight is 60%. We would then have a new equity normal weight of:

(0.6 · 1.15) / 1.10 ≈ 62.7% (3.1)

As we can see, the market capitalization is not the only thing deciding the normal portfolio; rather, the desired weights are adjusted according to the market capitalization of the asset classes.
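The steps above can be sketched in code. The thesis implements this in Excel and R; the following is a minimal Python illustration with my own function names, and the bond factor in the example is back-solved from the stated sum of 110% (a value not given explicitly in the text):

```python
def adaptive_weights(normal_weights, asset_factors):
    """Adjust normal portfolio weights by market-capitalization asset factors.

    normal_weights: target weights of the normal portfolio (sum to 1.0)
    asset_factors:  one factor k_i per asset class (step 1)
    """
    # Step 2: multiply each normal weight by its asset factor and sum.
    scaled = [w * k for w, k in zip(normal_weights, asset_factors)]
    total = sum(scaled)
    # Step 3: each scaled weight's share of the sum is the new normal weight.
    return [s / total for s in scaled]

# Example from the text: 60-40 policy, equity factor 1.15, sum 110%,
# which implies a bond factor of (1.10 - 0.60 * 1.15) / 0.40 = 1.025.
weights = adaptive_weights([0.60, 0.40], [1.15, 1.025])
print(round(weights[0], 4))  # 0.6273, the new equity normal weight
```

Note that the renormalization in step 3 guarantees the new weights still sum to 100% whatever the factors are.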

3.3 Types of Rebalancing

Zilbering, Jaconetti, and Kinniry Jr [2] mention that there are three different ways rebalancing can be conducted. The first can be described as the time strategy, which rebalances on a time basis, for example every week, month or quarter. The second can be described as the threshold strategy, which only checks whether the asset classes have surpassed their respective limits; from a chronological perspective this means that rebalancing can happen at any time. The final strategy is the time and threshold strategy, a combination of the previous two, which is conducted by looking at the portfolio weights at a regular time interval and rebalancing if any asset class has surpassed its limits.


CHAPTER 3. METHODS

In this thesis no rebalancing with the pure threshold strategy will be evaluated, since it would be excessive and not meaningful: the data is monthly, and checking whether the threshold is reached every month is the same thing as the time and threshold strategy. Zilbering, Jaconetti, and Kinniry Jr [2] also suggest that the threshold strategy requires daily observed data in order to be meaningful relative to the two other strategies.

3.4 Sampling Distribution

For the simulations it is crucial to choose a well-motivated distribution for sampling. Examples of what needs to be sampled are the returns of the asset classes, given by the proxies used, and the market capitalization factors (referred to as asset factors above). The choice of distribution was motivated by determining from density plots what kind of distribution the data resembles; a couple of distributions with a similar pattern were then selected for further evaluation. After comparing QQ-plots and empirical CDFs between the normal and the student-t distribution, the student-t distribution was deemed the more appropriate one, mainly based on the plots but also because the student-t distribution exhibits heavier tails than the normal distribution. This property is advantageous since extreme values appear more easily, so the absolute worst cases are taken into account rather than understated, as the normal distribution might implicitly do.

Since the student-t distribution requires a degrees-of-freedom parameter, the function fit.st from the QRM package in R has been used to estimate the degrees of freedom best suited for the distributions. This is what can be observed in all plots with a fitted student-t distribution in the results section. The details of the distributions can be found in section 4.2.4.
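The thesis estimates the degrees of freedom with fit.st from R's QRM package; a comparable maximum-likelihood fit can be sketched in Python with scipy.stats.t.fit. The simulated input below is a stand-in for the proxy return data, not the thesis data.

```python
import numpy as np
from scipy import stats

# Stand-in for the monthly proxy returns (the real data is not reproduced here)
rng = np.random.default_rng(1)
monthly_returns = rng.standard_t(df=5, size=348) * 0.04

# Maximum-likelihood fit of a location-scale student-t, comparable to
# what fit.st from R's QRM package estimates
df, loc, scale = stats.t.fit(monthly_returns)
```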

All of this work will be done in R and RStudio, since R has all the functionality needed to conduct the work described in the previous paragraph. Hence there is no need to build any statistical evaluation manually, which decreases the risk of erroneous methods and results.

3.5 Backtesting

Backtesting will be conducted on historical monthly data stretching from 1990-12-31 up until and including 2019-12-31. There are many different ways rebalancing can be conducted. For starters, an investor can have different policies for the asset classes. A policy is simply the set of asset weights in the portfolio, e.g. 60% in equity and 40% in fixed income. Three policies will be evaluated in this part: 60-40, 50-50 and 40-60. For every policy, the different rebalancing strategies mentioned in section 3.3 will be applied.

For every rebalancing strategy there will be different rebalancing frequencies: monthly, quarterly and annually. Besides this, four limits will be tested: 1%, 5%, 7% and 10%. For every limit the yearly return and standard deviation will be presented alongside the Sharpe ratio. For each limit the number of transactions will also be stated, in order to find out how expensive each combination of parameters is. All of the mentioned policies will be evaluated over three time periods of roughly 10 years each: 1991-2000, 2001-2010 and 2011-2019. The reason is to compare the strategy with different starting and end points and over a shorter horizon, since equities perform better in the long term, which leads to a greater shift in portfolio weight, an effect that is desirable to exclude.
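A hedged sketch of the time-and-threshold backtest loop described above. All names are hypothetical, the thesis performs the backtest in Excel, and details such as transaction costs are omitted here.

```python
import numpy as np

def backtest(returns, normal_weights, limit, check_every):
    """Time & threshold backtest over a sequence of per-period return
    vectors (one entry per asset class). Rebalances back to the normal
    weights at every `check_every`-th period, but only when some weight
    has drifted more than `limit` from its normal value."""
    normal = np.asarray(normal_weights, dtype=float)
    w = normal.copy()
    value, path, transactions = 1.0, [1.0], 0
    for t, r in enumerate(returns, start=1):
        growth = w * (1.0 + np.asarray(r, dtype=float))
        value *= growth.sum()
        w = growth / growth.sum()                     # weights drift with returns
        if t % check_every == 0 and np.max(np.abs(w - normal)) > limit:
            w = normal.copy()                         # rebalance
            transactions += 1
        path.append(value)
    return np.array(path), transactions

# Equity gains 10% per month, bonds flat: quarterly checks, 1% limit
path, transactions = backtest([[0.10, 0.0]] * 12, [0.6, 0.4],
                              limit=0.01, check_every=3)
```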

3.6 Risk measurements

3.6.1 Yearly return

Yearly return will in this thesis refer to the compound annual growth rate and can be defined as:

(V_final / V_start)^(1/t) − 1    (3.2)

where V is the value of the portfolio. In this thesis four time periods will be evaluated: 1991-2019, 1991-2000, 2001-2010 and 2011-2019, which implies that the t in the above equation will be 29, 10 or 9 years.
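Equation (3.2) as a small illustrative helper:

```python
def yearly_return(v_start, v_final, years):
    """Equation (3.2): compound annual growth rate."""
    return (v_final / v_start) ** (1.0 / years) - 1.0

yearly_return(1.0, 2.0, 10)  # ≈ 0.0718: doubling over 10 years is 7.18% per year
```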

3.6.2 Yearly standard deviation

The yearly standard deviation is defined as:

σ_annual = σ_returns · √freq    (3.3)

where σ_annual is the annual standard deviation, σ_returns the standard deviation of the returns at a certain frequency, and freq the data frequency: 12 for monthly data, 4 for quarterly data, and so on.
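Equation (3.3) in code (using the sample standard deviation, an assumption since the thesis does not state which estimator is used):

```python
import numpy as np

def yearly_std(returns, freq):
    """Equation (3.3): sigma_annual = sigma_returns * sqrt(freq),
    with freq = 12 for monthly data, 4 for quarterly, and so on."""
    return np.std(returns, ddof=1) * np.sqrt(freq)

yearly_std([0.01, -0.01, 0.02, 0.0], freq=12)
```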



3.6.3 Value-at-Risk (VaR)

Let X denote the return of some asset and R_0 the risk-free rate, and let L = −X/R_0 be viewed as the discounted loss of the asset. The Value-at-Risk at level p is then defined in terms of L as [13]:

VaR_p = min{m : P(L ≤ m) ≥ 1 − p}

In statistical terms, the Value-at-Risk at level p can be described as the negative (1−p)-quantile of X, and the above can be rewritten as:

VaR_p(X) = F_L^{-1}(1 − p)    (3.4)

For this thesis, the returns of the traditional and adaptive asset allocation policies resembled a fitted normal distribution better than a student-t distribution. Hence the theoretical Value-at-Risk will assume a normal distribution.
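Under the normality assumption stated above, the theoretical VaR can be sketched as follows (taking R_0 = 1 for simplicity, so the loss is just −X):

```python
from scipy import stats

def normal_var(mu, sigma, p):
    """VaR at level p of a return X ~ N(mu, sigma): the negative
    (1 - p)-quantile of X (assuming R_0 = 1, so the loss L = -X)."""
    return -stats.norm.ppf(1.0 - p, loc=mu, scale=sigma)

normal_var(mu=0.005, sigma=0.02, p=0.95)  # ≈ 0.0279, i.e. a 2.79% loss
```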

3.6.4 Empirical Value-at-Risk (V̂aR)

The empirical Value-at-Risk assumes an underlying empirical distribution and is based on the empirical quantiles. Hult et al. [13] define the empirical quantile at level p of a random variable X with n samples as:

F_{n,X}^{-1}(p) = X_{⌊n(1−p)⌋+1,n}    (3.5)

Let L describe the discounted loss as in the definition of Value-at-Risk. Given a sample of discounted losses L_1, ..., L_n, the empirical Value-at-Risk at level p is given by:

V̂aR_p = L_{⌊np⌋+1,n}    (3.6)

where L_{1,n} ≥ ... ≥ L_{n,n} is the sample ordered from largest (L_{1,n}) to smallest (L_{n,n}).
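Equation (3.6) in code: sort the losses in decreasing order and pick the (⌊np⌋+1)-th largest, which is 0-based index ⌊np⌋.

```python
import numpy as np

def empirical_var(losses, p):
    """Equation (3.6): the (floor(n*p) + 1)-th largest loss,
    i.e. index floor(n*p) in the decreasingly sorted sample."""
    L = np.sort(np.asarray(losses))[::-1]   # L_{1,n} >= ... >= L_{n,n}
    return L[int(np.floor(len(L) * p))]

empirical_var(np.arange(1, 101), p=0.05)   # 6th largest of 1..100 -> 95
```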

3.6.5 Expected-Shortfall (ES)

As Hult et al. [13] mention, the Value-at-Risk is one of the most popular measures of risk, but it has some serious drawbacks. The biggest one is that it ignores what remains of the left tail (beyond level p) of the distribution of X (the return of an asset). The consequence is that catastrophic scenarios can hide farther out in the left tail.

A way to counter this problem is to look at the average VaR below the level p. This average is what is referred to as Expected-Shortfall and can be defined as:

ES_p(X) = (1/p) ∫_0^p VaR_u(X) du    (3.7)

3.6.6 Empirical Expected-Shortfall (ÊS)

Similar to the definition of the empirical Value-at-Risk, the empirical Expected-Shortfall at level p with n samples can be defined as:

ÊS_p = (1/p) [ Σ_{k=1}^{⌊np⌋} L_{k,n}/n + (p − ⌊np⌋/n) · L_{⌊np⌋+1,n} ]    (3.8)
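Equation (3.8) as a sketch; when n·p is an integer it reduces to the average of the ⌊np⌋ largest losses.

```python
import numpy as np

def empirical_es(losses, p):
    """Equation (3.8): average of the largest losses beyond level p,
    with an interpolation term when n*p is not an integer."""
    L = np.sort(np.asarray(losses, dtype=float))[::-1]   # decreasing order
    n = len(L)
    k = int(np.floor(n * p))
    return (L[:k].sum() / n + (p - k / n) * L[k]) / p

empirical_es(np.arange(1, 101), p=0.05)   # mean of the 5 largest of 1..100 -> 98
```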

3.6.7 Sharpe ratio

This ratio is defined as:

(E_R − r_f) / σ_Y    (3.9)

where E_R is the expected yearly return, r_f the risk-free rate and σ_Y the yearly standard deviation. The risk-free rate will in this case be the American 3-month treasury yield rate.
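Equation (3.9) as a one-line helper (the input values below are made up for illustration, not thesis results):

```python
def sharpe_ratio(expected_return, risk_free, yearly_std):
    """Equation (3.9): (E_R - r_f) / sigma_Y."""
    return (expected_return - risk_free) / yearly_std

sharpe_ratio(0.076, 0.02, 0.087)  # ≈ 0.64
```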

3.7 Algorithms

This section addresses the algorithms used for sampling data when running the simulations. As previously mentioned, two sampling methods will be used.

Monte Carlo methods refer to a range of stochastic methods, and although Monte Carlo is a class rather than a specific method, there are four things all of the methods have in common [14]:

• Define a domain of possible outputs

• Generate inputs randomly from a probability distribution over the domain

• Perform a deterministic computation on the inputs

• Aggregate the results

Both of the sampling methods belong to the Monte Carlo class. The first sampling method will be referred to simply as Monte Carlo sampling, which in essence means drawing a number of random samples from a chosen distribution [15].

Regardless of sampling method, the sampling algorithm can be described by the following pseudocode:

Algorithm 1: Randomized segment

for i in 1:iterations do
    /* Sample n points from a certain distribution, representing
       months, quarters or years over 29, 10 or 9 years. The sampling
       looks different depending on whether it is LHS or MC. */
    mcFactorsBonds  <- samplefunc(n, distribution)
    returnsBonds    <- samplefunc(n, distribution)
    mcFactorsEquity <- samplefunc(n, distribution)
    returnsEquity   <- samplefunc(n, distribution)
    Calculate outcome of portfolio...
end for

The last part of the pseudocode is omitted since it is not very relevant to the topic of randomized algorithms and their computational aspects. What is relevant, however, is how the sampling is implemented. To begin with, the student-t distribution was found to be the best-fitting one based on a number of factors outlined in the results section. R has a dedicated function for random generation from a t-distribution, called rt, and this is what has been used. For LHS, the lhs R package was used, meaning that the LHS sampling is implemented by the authors of the lhs package. This is important to take into consideration when measuring time and space usage.

The algorithms will be measured using R's Sys.time function for running time; for memory usage, specifically the change in memory usage, the function mem_change from the pryr package will be used. Since it is preferable to see what happens to the average running time as the number of iterations increases, the measurements will be presented for 100, 1000 and 10000 iterations. The memory usage will be compared between the two sampling methods for 1000 iterations.
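An analogous measurement can be sketched in Python with time.perf_counter and tracemalloc in place of Sys.time and pryr::mem_change (illustrative only; the thesis measurements are made in R):

```python
import time
import tracemalloc
import numpy as np

def timed(fn, *args):
    """Measure wall-clock time and peak traced memory of one call,
    analogous to Sys.time and pryr::mem_change in the thesis."""
    tracemalloc.start()
    t0 = time.perf_counter()
    out = fn(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return out, elapsed, peak

# e.g. compare the two samplers at 100, 1000 and 10000 draws
samples, secs, peak_bytes = timed(np.random.standard_t, 5, 10_000)
```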

3.7.1 Monte Carlo Sampling Algorithm

This simulation can be described by the following steps:

1. Draw a number of random samples from an appropriate distribution for the asset factors of bonds and equities, as well as for the returns of bonds and equities. The number of samples drawn also depends on whether the whole period or a 10-year period is being simulated.

2. Once the samples have been drawn, calculate returns and risk measurements such as the Sharpe ratio, VaR and ES.

3. Repeat this a large number of times; the more iterations, the better the accuracy [15]. In this thesis 10000 iterations will be used.

4. Average every measurement over the iterations to serve as the final result.
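Steps 1-4 can be sketched as a crude Monte Carlo loop. Everything below is illustrative: a single asset, hypothetical student-t parameters, and a bare compound-return outcome instead of the full set of risk measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_simulate(iterations=10_000, n=120, df=5, scale=0.03):
    """Crude Monte Carlo over the steps above, reduced to one asset:
    per iteration, sample n periodic returns from a student-t (step 1),
    compute an outcome (step 2), repeat (step 3) and average (step 4).
    df and scale are illustrative, not the thesis's fitted parameters."""
    outcomes = np.empty(iterations)
    for i in range(iterations):
        returns = rng.standard_t(df, size=n) * scale   # step 1
        outcomes[i] = np.prod(1.0 + returns) - 1.0     # step 2
    return outcomes.mean()                             # step 4

estimate = mc_simulate(iterations=1_000)
```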

3.7.2 Latin Hypercube Sampling Algorithm

Latin Hypercube Sampling is a Monte Carlo method which has gained popularity due to its effectiveness over standard Monte Carlo methods such as importance sampling. The main difference between LHS and crude Monte Carlo is stratification, which means dividing the cumulative distribution function (CDF) into n intervals, where n is the number of data points to be sampled. Each data point is then sampled randomly from one of the intervals. The algorithm can briefly be described as follows:

1. Find the CDF of the distribution and the number of points needed for sampling.

2. Stratify the CDF into n intervals.

3. Sample from each interval randomly once.

4. Repeat steps 2-4 stated in section 3.7.1.
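A minimal sketch of LHS for a student-t target via the inverse-CDF transform (the thesis instead relies on the R lhs package; parameters and names here are illustrative):

```python
import numpy as np
from scipy import stats

def lhs_t_sample(n, df, rng):
    """Latin Hypercube sample of size n from a (standard) student-t:
    stratify the CDF's [0, 1] range into n equal intervals, draw one
    uniform point inside each interval, shuffle, and map the points
    through the inverse CDF (steps 1-3 above)."""
    u = (rng.random(n) + np.arange(n)) / n   # one point per stratum
    rng.shuffle(u)
    return stats.t.ppf(u, df)

rng = np.random.default_rng(42)
sample = lhs_t_sample(120, df=5, rng=rng)
```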


Chapter 4

Results

The results will mainly consist of tables where the two different strategies can be read from the column headers, denoted by an M or a T. M stands for Macro and refers to the adaptive asset allocation policy; T stands for Traditional and denotes the traditional approach. The tables differ between the strategies (time and threshold versus time strategy). For the time and threshold strategy, limits play a role since they decide how often the portfolio rebalances; therefore the tables associated with this strategy have, for every data frequency (monthly, quarterly and annually), four limits (1%, 5%, 7% and 10%). For the time strategy, limits are omitted since the portfolio rebalances as often as the data frequency.

Furthermore, the backtesting will not include the risk measurements VaR or ES or their empirical equivalents, since the backtesting does not involve any probability distribution; it only serves to answer how the strategies would have performed historically. The lack of a probability distribution also makes including solely the empirical equivalents less meaningful, since it is the comparison between the empirical and the theoretical risk measurements that is of interest.

The results are summarized by taking the average of the measurements, except transactions, across all time intervals in order to give a concise but still representative picture of the different asset allocation policies. Instead of an average, the transaction row shows the sum of transactions over all four time periods. Averaging over all of the time periods implies, for example, that there will be three tables (one for each policy) for the time and threshold strategy and the time strategy respectively.




4.1 Backtesting

4.1.1 Time & Threshold, 60-40 policy

60-40 policy

| Frequency | Limit | Yearly return M | Yearly return T | Yearly std M | Yearly std T | Sharpe M | Sharpe T | Transactions M | Transactions T |
|---|---|---|---|---|---|---|---|---|---|
| Monthly | 1% | 11.9% | 11.85% | 8.21% | 7.74% | 1.15 | 1.53 | 45 | 31 |
| Monthly | 5% | 11.95% | 11.86% | 8.33% | 7.89% | 1.13 | 1.5 | 3 | 3 |
| Monthly | 7% | 11.94% | 11.9% | 8.63% | 7.89% | 1.38 | 1.51 | 2 | 2 |
| Monthly | 10% | 11.99% | 11.96% | 8.57% | 7.89% | 1.1 | 1.52 | 1 | 1 |
| Quarterly | 1% | 11.93% | 11.88% | 8.08% | 8.04% | 1.17 | 1.18 | 27 | 13 |
| Quarterly | 5% | 11.93% | 11.9% | 8.21% | 8.22% | 1.15 | 1.15 | 5 | 3 |
| Quarterly | 7% | 12.15% | 12.19% | 8.28% | 8.32% | 1.17 | 1.16 | 4 | 2 |
| Quarterly | 10% | 12.13% | 12.% | 8.39% | 8.44% | 1.11 | 1.12 | 1 | 1 |
| Annually | 1% | 12.45% | 12.2% | 10.6% | 10.4% | 1.17 | 1.18 | 8 | 4 |
| Annually | 5% | 12.51% | 12.32% | 10.8% | 10.6% | 1.16 | 1.16 | 2 | 2 |
| Annually | 7% | 12.51% | 12.32% | 10.78% | 10.63% | 1.16 | 1.16 | 2 | 2 |
| Annually | 10% | 12.49% | 12.51% | 11.2% | 10.7% | 1.12 | 1.17 | 1 | 1 |

Table 4.1: Time & Threshold strategy, 1991-2000

60-40 policy

| Frequency | Limit | Yearly return M | Yearly return T | Yearly std M | Yearly std T | Sharpe M | Sharpe T | Transactions M | Transactions T |
|---|---|---|---|---|---|---|---|---|---|
| Monthly | 1% | 2.19% | 2.31% | 9.96% | 10.0% | 0.22 | 0.23 | 64 | 64 |
| Monthly | 5% | 2.36% | 2.39% | 9.82% | 9.87% | 0.21 | 0.21 | 12 | 12 |
| Monthly | 7% | 2.38% | 2.5% | 9.72% | 9.75% | 0.25 | 0.26 | 4 | 4 |
| Monthly | 10% | 2.56% | 2.72% | 9.82% | 10.0% | 0.26 | 0.27 | 2 | 2 |
| Quarterly | 1% | 2.59% | 2.62% | 11.0% | 11.0% | 0.23 | 0.21 | 24 | 16 |
| Quarterly | 5% | 2.78% | 2.77% | 10.8% | 10.9% | 0.26 | 0.25 | 10 | 5 |
| Quarterly | 7% | 3.23% | 2.85% | 10.4% | 10.8% | 0.31 | 0.26 | 5 | 4 |
| Quarterly | 10% | 2.77% | 2.45% | 10.8% | 10.7% | 0.26 | 0.23 | 4 | 1 |
| Annually | 1% | 3.04% | 2.71% | 12.35% | 12.47% | 0.25 | 0.22 | 10 | 4 |
| Annually | 5% | 2.96% | 2.67% | 12.22% | 12.59% | 0.21 | 0.21 | 5 | 3 |
| Annually | 7% | 2.8% | 2.35% | 12.9% | 12.3% | 0.22 | 0.19 | 4 | 2 |
| Annually | 10% | 2.73% | 2.35% | 12.9% | 12.4% | 0.21 | 0.19 | 2 | 2 |

Table 4.2: Time & Threshold strategy, 2001-2010

60-40 policy

| Frequency | Limit | Yearly return M | Yearly return T | Yearly std M | Yearly std T | Sharpe M | Sharpe T | Transactions M | Transactions T |
|---|---|---|---|---|---|---|---|---|---|
| Monthly | 1% | 7.17% | 7.1% | 7.29% | 7.32% | 0.98 | 0.97 | 41 | 23 |
| Monthly | 5% | 7.16% | 7.15% | 7.48% | 7.4% | 0.96 | 0.97 | 2 | 2 |
| Monthly | 7% | 7.19% | 7.21% | 7.40% | 7.33% | 0.97 | 0.98 | 2 | 2 |
| Monthly | 10% | 7.26% | 7.26% | 7.58% | 7.6% | 0.96 | 0.96 | 1 | 1 |
| Quarterly | 1% | 7.26% | 7.25% | 7.48% | 7.51% | 0.97 | 0.97 | 26 | 11 |
| Quarterly | 5% | 7.32% | 7.32% | 7.48% | 7.52% | 0.98 | 0.97 | 2 | 2 |
| Quarterly | 7% | 7.3% | 7.31% | 7.65% | 7.69% | 0.95 | 0.95 | 2 | 2 |
| Quarterly | 10% | 7.44% | 7.44% | 7.66% | 7.66% | 0.97 | 0.97 | 1 | 1 |
| Annually | 1% | 7.37% | 7.32% | 8.58% | 8.43% | 0.86 | 0.87 | 7 | 4 |
| Annually | 5% | 7.34% | 7.38% | 8.64% | 8.67% | 0.85 | 0.85 | 2 | 1 |
| Annually | 7% | 7.34% | 7.38% | 8.65% | 8.71% | 0.85 | 0.85 | 1 | 1 |
| Annually | 10% | 7.44% | 7.44% | 8.73% | 8.72% | 0.85 | 0.85 | 1 | 1 |

Table 4.3: Time & Threshold strategy, 2011-2019

60-40 policy

| Frequency | Limit | Yearly return M | Yearly return T | Yearly std M | Yearly std T | Sharpe M | Sharpe T | Transactions M | Transactions T |
|---|---|---|---|---|---|---|---|---|---|
| Monthly | 1% | 7.79% | 7.86% | 8.56% | 8.59% | 0.872 | 0.871 | 184 | 90 |
| Monthly | 5% | 7.61% | 7.68% | 8.63% | 8.62% | 0.876 | 0.876 | 23 | 13 |
| Monthly | 7% | 7.59% | 7.59% | 8.68% | 8.72% | 0.873 | 0.876 | 10 | 8 |
| Monthly | 10% | 7.50% | 7.52% | 8.75% | 8.78% | 0.887 | 0.891 | 5 | 5 |
| Quarterly | 1% | 7.62% | 7.59% | 8.78% | 8.83% | 0.864 | 0.855 | 77 | 38 |
| Quarterly | 5% | 7.67% | 7.64% | 8.79% | 8.93% | 0.869 | 0.852 | 23 | 11 |
| Quarterly | 7% | 7.63% | 7.82% | 8.89% | 8.94% | 0.854 | 0.871 | 11 | 9 |
| Quarterly | 10% | 7.81% | 7.54% | 8.99% | 9.00% | 0.865 | 0.835 | 6 | 3 |
| Annually | 1% | 7.73% | 7.64% | 10.55% | 10.66% | 0.73 | 0.713 | 25 | 13 |
| Annually | 5% | 7.76% | 7.68% | 10.56% | 10.87% | 0.731 | 0.713 | 9 | 6 |
| Annually | 7% | 7.85% | 7.55% | 10.56% | 10.85% | 0.74 | 0.693 | 8 | 5 |
| Annually | 10% | 7.76% | 7.67% | 10.71% | 10.84% | 0.721 | 0.705 | 6 | 4 |

Table 4.4: Time & Threshold strategy, 1991-2019


4.1.2 Time & Threshold, 50-50 policy

50-50 policy

| Frequency | Limit | Yearly return M | Yearly return T | Yearly std M | Yearly std T | Sharpe M | Sharpe T | Transactions M | Transactions T |
|---|---|---|---|---|---|---|---|---|---|
| Monthly | 1% | 11.25% | 11.18% | 6.98% | 6.45% | 1.61 | 1.73 | 46 | 32 |
| Monthly | 5% | 11.3% | 11.27% | 7.09% | 6.61% | 1.59 | 1.7 | 4 | 3 |
| Monthly | 7% | 11.32% | 11.26% | 7.28% | 6.61% | 1.55 | 1.7 | 2 | 2 |
| Monthly | 10% | 11.39% | 11.39% | 7.37% | 6.61% | 1.55 | 1.72 | 1 | 1 |
| Quarterly | 1% | 11.28% | 11.22% | 7.05% | 7.0% | 1.6 | 1.6 | 27 | 13 |
| Quarterly | 5% | 11.27% | 11.25% | 7.17% | 7.19% | 1.57 | 1.56 | 5 | 3 |
| Quarterly | 7% | 11.52% | 11.38% | 7.26% | 7.32% | 1.59 | 1.55 | 4 | 2 |
| Quarterly | 10% | 11.37% | 11.36% | 7.48% | 7.39% | 1.52 | 1.51 | 2 | 1 |
| Annually | 1% | 11.78% | 11.5% | 9.54% | 9.29% | 1.23 | 1.21 | 8 | 4 |
| Annually | 5% | 11.83% | 11.63% | 9.71% | 9.55% | 1.22 | 1.22 | 2 | 2 |
| Annually | 7% | 11.83% | 11.63% | 9.71% | 9.55% | 1.22 | 1.22 | 2 | 2 |
| Annually | 10% | 12.13% | 11.84% | 9.62% | 9.63% | 1.26 | 1.23 | 1 | 1 |

Table 4.5: Time & Threshold strategy, 1991-2000

50-50 policy

| Frequency | Limit | Yearly return M | Yearly return T | Yearly std M | Yearly std T | Sharpe M | Sharpe T | Transactions M | Transactions T |
|---|---|---|---|---|---|---|---|---|---|
| Monthly | 1% | 2.92% | 3.01% | 8.46% | 8.49% | 0.31 | 0.35 | 61 | 61 |
| Monthly | 5% | 3.03% | 3.16% | 8.25% | 8.33% | 0.37 | 0.38 | 11 | 11 |
| Monthly | 7% | 3.07% | 3.2% | 8.21% | 8.24% | 0.37 | 0.39 | 4 | 4 |
| Monthly | 10% | 3.25% | 3.42% | 8.34% | 8.56% | 0.39 | 0.1 | 2 | 2 |
| Quarterly | 1% | 3.25% | 3.28% | 9.25% | 9.21% | 0.35 | 0.36 | 27 | 16 |
| Quarterly | 5% | 3.49% | 3.4% | 8.98% | 9.22% | 0.39 | 0.37 | 10 | 7 |
| Quarterly | 7% | 3.89% | 3.63% | 8.65% | 9.11% | 0.15 | 0.1 | 5 | 5 |
| Quarterly | 10% | 3.43% | 3.09% | 9.04% | 8.97% | 0.38 | 0.31 | 4 | 1 |
| Annually | 1% | 3.68% | 3.37% | 10.45% | 10.54% | 0.35 | 0.32 | 10 | 4 |
| Annually | 5% | 3.59% | 3.32% | 10.31% | 10.66% | 0.35 | 0.31 | 5 | 3 |
| Annually | 7% | 3.43% | 2.99% | 11.08% | 10.47% | 0.31 | 0.29 | 4 | 2 |
| Annually | 10% | 3.36% | 2.99% | 10.99% | 10.47% | 0.31 | 0.29 | 2 | 2 |

Table 4.6: Time & Threshold strategy, 2001-2010

50-50 policy

| Frequency | Limit | Yearly return M | Yearly return T | Yearly std M | Yearly std T | Sharpe M | Sharpe T | Transactions M | Transactions T |
|---|---|---|---|---|---|---|---|---|---|
| Monthly | 1% | 6.47% | 6.46% | 6.23% | 6.26% | 1.01 | 1.03 | 39 | 25 |
| Monthly | 5% | 6.51% | 6.49% | 6.44% | 6.38% | 1.01 | 1.02 | 2 | 2 |
| Monthly | 7% | 6.52% | 6.52% | 6.26% | 6.31% | 1.01 | 1.03 | 2 | 2 |
| Monthly | 10% | 6.61% | 6.6% | 6.57% | 6.58% | 1.01 | 1. | 1 | 1 |
| Quarterly | 1% | 6.59% | 6.57% | 6.36% | 6.39% | 1.01 | 1.03 | 27 | 12 |
| Quarterly | 5% | 6.65% | 6.65% | 6.44% | 6.41% | 1.03 | 1.01 | 2 | 2 |
| Quarterly | 7% | 6.61% | 6.71% | 6.58% | 6.34% | 1. | 1.06 | 2 | 2 |
| Quarterly | 10% | 6.63% | 6.75% | 6.62% | 6.57% | 1. | 1.03 | 1 | 1 |
| Annually | 1% | 6.71% | 6.65% | 7.51% | 7.35% | 0.89 | 0.9 | 7 | 4 |
| Annually | 5% | 6.67% | 6.71% | 7.59% | 7.62% | 0.88 | 0.88 | 2 | 1 |
| Annually | 7% | 6.67% | 6.71% | 7.59% | 7.62% | 0.88 | 0.88 | 1 | 1 |
| Annually | 10% | 6.78% | 6.78% | 7.69% | 7.67% | 0.88 | 0.88 | 1 | 1 |

Table 4.7: Time & Threshold strategy, 2011-2019

50-50 policy

| Frequency | Limit | Yearly return M | Yearly return T | Yearly std M | Yearly std T | Sharpe M | Sharpe T | Transactions M | Transactions T |
|---|---|---|---|---|---|---|---|---|---|
| Monthly | 1% | 7.28% | 7.29% | 7.27% | 7.3% | 0.997 | 0.994 | 192 | 92 |
| Monthly | 5% | 7.38% | 7.32% | 7.3% | 7.36% | 1.007 | 0.989 | 22 | 13 |
| Monthly | 7% | 7.39% | 7.43% | 7.39% | 7.404% | 0.996 | 0.999 | 10 | 8 |
| Monthly | 10% | 7.56% | 7.41% | 7.37% | 7.62% | 1.02 | 0.968 | 8 | 5 |
| Quarterly | 1% | 7.39% | 7.35% | 7.34% | 7.39% | 1.003 | 0.991 | 78 | 41 |
| Quarterly | 5% | 7.49% | 7.45% | 7.32% | 7.48% | 1.019 | 0.992 | 25 | 13 |
| Quarterly | 7% | 7.41% | 7.54% | 7.46% | 7.54% | 0.988 | 0.995 | 11 | 9 |
| Quarterly | 10% | 7.65% | 7.31% | 7.48% | 7.57% | 1.018 | 0.960 | 8 | 3 |
| Annually | 1% | 7.51% | 7.41% | 9.06% | 9.15% | 0.825 | 0.806 | 25 | 13 |
| Annually | 5% | 7.53% | 7.46% | 9.09% | 9.38% | 0.825 | 0.791 | 10 | 6 |
| Annually | 7% | 7.54% | 7.33% | 9.09% | 9.37% | 0.825 | 0.779 | 8 | 5 |
| Annually | 10% | 7.6% | 7.46% | 9.24% | 9.37% | 0.819 | 0.793 | 6 | 4 |

Table 4.8: Time & Threshold strategy, 1991-2019
