
Hypothetical Scenario Analysis in Modeling of Market Risk


Academic year: 2021



Hypothetical Scenario Analysis in Modeling of Market Risk

Emma Berglund
emma.berglund87@gmail.com

Supervisor
Carl Magnus Lundin
calu07@handelsbanken.se
Handelsbanken

Examiner
Lars-Daniel Öhman
lars-daniel.ohman@math.umu.se


Summary

The crises of the last decade have created a need to manage risk in a more comprehensive way. New regulations and guidelines aim to give banks and other financial institutions warning signals at an early stage of financial turbulence in order to avoid large losses. One approach is to use stressed Value at Risk, VaR, since market behavior clearly differs under financial stress.

A common method for estimating VaR is to use empirical data from financially stressed periods. The drawback is that this yields a backward-looking measure that only accounts for crises that have already occurred. Using such risk models presupposes that historical data is a good basis for measuring risk and that it follows a known statistical process. This has, however, proven to be an overrated method, and a need to use hypothetical scenarios has therefore emerged. This thesis examines what happens to the correlations between a portfolio's assets in a stressed period compared with a calm one, and then investigates whether these correlations can be used to stress the VaR figure and warn of potential losses.

The models used are the FHS Unconditional Student's t-model and MV-GARCH (1,1). The former generates random numbers from a t-distribution and correlates them according to correlations computed from the historical data set. The historical data is then scaled by the variance at each time step and further correlations are computed; this is part of filtered historical simulation, FHS. The multivariate GARCH model uses the observation and the variance of the previous time step, together with the correlation of the assets' standardized residuals, when computing the return of the next time step. The time series simulated from these models are used to compute VaR figures intended to mimic the corresponding figures obtained from the historical data.

The computed correlations are modified when creating the hypothetical scenarios. The modifications use the relative changes in the correlations between a calm period and the subsequent crisis; these are multiplied by the correlations over the entire time period. The correlations during the crises themselves are also used, as well as a constant relative change in correlation without empirical support. To test whether correlation can be used to stress the VaR figure, the modified correlations are applied in the models and simulations generate new VaR figures. These are then compared with the ordinary VaR figures from the simulations and from the historical data.

The modification that stresses VaR most effectively is using the correlation from the turbulent period from mid-2002 to mid-2003. The results show, however, too small a change in the VaR figure to rely entirely on correlation as a parameter for stressing VaR.


Abstract  

The last decade's crises have created a need to manage risk in a more comprehensive way. New guidelines and regulations aim to give banks and other financial institutions warning signals at an early stage of financial turmoil to avoid big losses. One way is to use stressed Value at Risk, VaR, as market behavior clearly differs under financial stress.

A common method to estimate VaR is to use empirical data from financially stressed periods. The downside of this is that it gives a retrospective measure that only takes crises that have already occurred into account. The use of risk management models assumes that historical data is a good basis for risk measurement and that history will follow a known statistical process. This has been proven to be an overrated method, and a need to use hypothetical scenarios has emerged.

This work examines what happens in the correlations between a portfolio's assets in a stressful time compared to the correlations in a calm period. It also examines whether these correlations can be used to stress the ordinary VaR to alert for potential losses.

The models used are the FHS Unconditional Student's t-model and MV-GARCH (1,1). The first model generates random numbers from a t-distribution and correlates them according to the historical data set. The data set is scaled with the variance of each time step before the correlation is calculated; this is called filtered historical simulation, FHS. The multivariate GARCH model uses the observation and the variance in the previous time step, and also the asset correlation, in the calculation of the return in the next time step.

The correlations are modified in order to create the hypothetical scenarios. The modifications used are the relative changes in the correlations between a calm and a turbulent time period; these are multiplied by the correlations for the entire period. The correlations in the crises are also applied to the whole time period, as is a constant relative change in the correlation without empirical support. To test whether the correlation can be used to stress the VaR, the modified correlations are applied in the models and simulations generate new VaR numbers. These are compared with both the ordinary VaR from the simulation and the historical data set.

The modification that most effectively stresses the VaR is when the correlation from a turbulent time from mid-2002 to mid-2003 is used. The results show, however, that the change in VaR is too small to rely completely on the correlation parameter to stress the VaR.


Table of Contents

1 Introduction
2 Theoretical background
   2.1 Value at Risk
   2.2 Monte Carlo Simulation
   2.3 Unconditional Student's t-distribution
   2.4 Filtered Historical Simulation
   2.5 GARCH (1,1)
   2.6 Multivariate GARCH (1,1)
   2.7 Cholesky's decomposition
3 Methodology
   3.1 Portfolio
      3.1.1 Distribution analysis
      3.1.2 Autocorrelation and Heteroskedasticity
      3.1.3 Subintervals of the time series
   3.2 Simulation with Unconditional Student's t
      3.2.1 FHS Unconditional t-model
   3.3 Simulation with MV-GARCH (1,1)
   3.4 Evaluation of the models
      3.4.1 Root mean square error
      3.4.2 Backtesting
   3.5 Hypothetical data
   3.6 Stressed VaR
4 Results
   4.1 Correlation analysis
      4.1.1 Correlations calculated by the historical data
      4.1.2 Correlations calculated by the filtered historical data
      4.1.3 Comparison between correlations before and after FHS
   4.2 Simulation with modified correlations
5 Conclusions
   5.1 A comparison between historical and modified simulated VaR-curves
      5.1.1 Difference of exceedances between the modified and historical VaR
      5.1.2 Mean difference of relative change between modified and historical VaR
      5.2.1 Difference of exceedances with simulated VaR
      5.2.2 Mean difference of relative change with historical VaR
   5.3 Summary
6 Discussion
   6.1 Advantages and disadvantages of the models
      6.1.1 FHS Unconditional Student's t-model
      6.1.2 MV-GARCH (1,1)
   6.2 Recommendations for implementation
7 References
Appendix 1
   Backtesting
Appendix 2
   Correlation matrices
   Simulation with Unconditional Student's t without FHS
   Simulation with Unconditional Student's t with FHS
   Simulation figures


1 Introduction

The demands of banking authorities resulting from the financial crises of the last decade have created a need to manage risk in a more distinct and comprehensive way. New guidelines and regulations have recently been developed to meet the needs of today's risk management. These regulations aim to alert banks and other financial institutions to possible losses at an early stage in order to reduce them.

The new regulations include the requirement to produce better methods to test stressed measurements such as stressed Value at Risk, VaR (EBA European Banking Authority, 2012). A well-used method to manage stressed VaR is to use historical data from financially stressed times. However, there are drawbacks to using historical data. One of the problems is that historical data gives a retrospective measure and only considers events that have already occurred. This means that most risk management models have relied on the assumptions that historical data is a good basis for risk measurement and that it will follow a known statistical process. This has, however, been proven to be overestimated in times of financial turmoil. The assumption that a market that has been in stable conditions for a period of time will remain in good condition in the near future does not foresee vulnerability or possible shocks. Correlation and other relationships have also been proven unreliable in times of turmoil. In fact, history has shown that the market's behavior in financially stressed times is clearly different from its behavior in non-stressed times. The assumption that history would follow a known statistical process led to an underestimation of the market developments and extreme events at the end of the last decade, resulting in losses far higher than expected (Basel Committee on Banking Supervision, 2009).

New stress testing techniques have been developed since the crisis. One basic method is to change one parameter in the model while keeping the others constant to test the sensitivity. A way to implement this is to make use of hypothetical analysis of events that are unlikely to occur but plausible. Hypothetical analysis means that calculations are made on data that do not contain the true history, but rather a modified variant.

The purpose of this thesis is to provide a complement to ordinary VaR calculations and stress tests by analyzing hypothetical scenarios. The hypothetical data is inspired by earlier stressed events in the historical data. The correlation is studied to determine if it can be used to generate hypothetical scenarios with fluctuations similar to the historical fluctuations of a portfolio's time series.

The correlation between the assets in a fixed portfolio will be studied to demonstrate how different categories of assets are related to each other. This analysis is also performed in order to observe whether differences in the correlation occur between financially stressed and calm periods of time, between different stressed periods and between different calm periods. Market circumstances change over time, which creates a problem when using data that span a long historical period. One of the reasons is fluctuations in volatility. Therefore, it is necessary to find a way to filter the historical events by the volatility, study what happens with the correlations and compare those with the results before the filtering.

The models FHS Unconditional Student's t-model and MV-GARCH (1,1) are used to give a simulation of the history and enable changes in the correlations. The methodological questions are whether the correlations during a crisis, or the relative changes in correlations between a calm period and a crisis, can be used to stress the ordinary VaR. It is also of interest to study whether there is some general way to modify the correlations and stress the VaR without knowing what the correlation looked like or how it behaved earlier in time. To test this, the correlation changes from a calm period to a stressed one are multiplied with the ordinary correlation, and the new modified simulated VaR-curve can be obtained and analyzed. To test if a correlation during a crisis can be used, it is simply applied to the whole time period.


2 Theoretical background

This section gives an overview of the theory behind the models and other mathematical concepts. A description of the risk measure used is found in the first subsection, followed by a derivation of how to make use of Monte Carlo simulation in making estimates. Finally, the models used in the simulations are described.

2.1 Value at Risk

Value at Risk, VaR, is a common risk measure and is widely used in the market risk departments of financial institutions. It is a simple and intuitive way of measuring a position's risk exposure. For the institution, VaR represents a position's maximum loss during a certain time period for a given probability α. An illustration of the VaR for a curve from a normal distribution is shown in Figure 1.

 

Figure 1. Shows the VaR at the significance level 1 − α. The yellow area has the value α.

The formula for calculating the VaR is the following:

$$ VaR_{1-\alpha}(X) = \sup\{x \in \mathbb{R} : P(X \leq x) \leq \alpha\} $$    (1)

where X is a random variable and α is the probability.
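As a concrete illustration, Equation (1) can be estimated from a sample of returns by taking the empirical α-quantile. This is a minimal sketch; the function name and the synthetic data are illustrative, not part of the thesis:

```python
import numpy as np

def var_historical(returns, alpha=0.01):
    """Empirical one-day VaR at probability alpha, per Equation (1):
    the threshold x with P(X <= x) <= alpha, estimated by the sample
    alpha-quantile. Reported with flipped sign, as a positive loss."""
    returns = np.asarray(returns, dtype=float)
    return -np.quantile(returns, alpha)

rng = np.random.default_rng(0)
sample = rng.standard_normal(100_000) * 0.01  # synthetic daily returns
print(var_historical(sample, alpha=0.01))     # close to 2.326 * 0.01
```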

2.2 Monte Carlo Simulation

Simulations utilizing Monte Carlo methods are a commonly used technique for estimating mathematical systems. When it is difficult to analytically calculate the value of a parameter, this method can be used. The method relies on generating random numbers from a certain probability distribution.

To illustrate the method, assume one wants to calculate the expected value of a function f(x) under a given probability density function ψ(x), where x ∈ ℝ. The expected value is given by

$$ \mu = E_{\psi(x)}[f(x)] = \int f(x)\,\psi(x)\,dx $$    (2)

This can be simulated by drawing values x_i from the given distribution ψ(x), where i = 1, 2, ..., N, and calculating f_i = f(x_i). This gives the Monte Carlo estimator

$$ \hat{\mu} := \frac{1}{N}\sum_{i=1}^{N} f(x_i) $$    (3)
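For instance, the estimator in Equation (3) recovers E[X²] = 1 when ψ is the standard normal density; the integrand f below is an arbitrary illustrative choice:

```python
import numpy as np

# Monte Carlo estimate of mu = E[f(X)], per Equations (2)-(3):
# draw x_1, ..., x_N from psi and average f(x_i).
rng = np.random.default_rng(42)

f = lambda x: x ** 2                # illustrative integrand
x = rng.standard_normal(1_000_000)  # draws from psi = standard normal
mu_hat = f(x).mean()                # (1/N) * sum of f(x_i)
print(mu_hat)                       # E[X^2] = 1 for a standard normal
```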


2.3 Unconditional Student's t-distribution

It has been suggested that many financial time series can be accurately modeled by random numbers from a Student's t-distribution, and such simulations have been proven successful in earlier VaR estimations (Tsay, R., 2002), (Alexander, C., Sheedy, E., 2008). The simulation makes use of the mean-adjusted returns, i.e. the mean of the time series is calculated and subtracted from each value in the time series. The mean-adjusted returns a_t have the following distribution

$$ a_t \sim T(\nu)\,\sigma\left(\frac{\nu-2}{\nu}\right)^{1/2}, $$    (4)

where T(ν) is the standardized Student's t-distribution with ν degrees of freedom. The mean is zero and σ is the standard deviation of the time series. In the one-dimensional case, the VaR is calculated as

$$ VaR_{1-\alpha} = T_\nu^{-1}(\alpha)\left(\frac{\nu-2}{\nu}\right)^{1/2}\sigma. $$    (5)
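Equation (5) can be evaluated directly with a Student's t inverse CDF; this sketch uses SciPy's `stats.t.ppf` for T_ν⁻¹, with illustrative parameter values:

```python
import numpy as np
from scipy import stats

def var_student_t(sigma, nu, alpha=0.01):
    """VaR per Equation (5): the alpha-quantile of the standardized
    Student's t, rescaled by ((nu - 2) / nu)^(1/2) so the distribution
    has unit variance, then scaled by the sample standard deviation."""
    return stats.t.ppf(alpha, df=nu) * np.sqrt((nu - 2) / nu) * sigma

# A daily standard deviation of 1.5% with 4 degrees of freedom:
print(var_student_t(sigma=0.015, nu=4, alpha=0.01))  # a negative quantile
```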

2.4 Filtered Historical Simulation

A problem with simulations based on historical events is that the volatility seems to change over time. To get observations that are drawn from the standardized empirical return distribution, the returns r are filtered in the following way

$$ a_t = \frac{r_t}{\sigma_{t,\mathrm{est}}} $$    (6)

where σ_{t,est} is estimated with a GARCH(1,1), which is described in the next section, see section 2.5.
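The filtering step in Equation (6) can be sketched as follows. Note that the volatility here is an EWMA estimate standing in for the GARCH(1,1) estimate the thesis uses; the function name and the smoothing constant are illustrative assumptions:

```python
import numpy as np

def filter_returns(r, lam=0.94):
    """Standardize returns per Equation (6), a_t = r_t / sigma_t, with
    an EWMA variance estimate standing in for the GARCH(1,1) one."""
    r = np.asarray(r, dtype=float)
    var = np.empty_like(r)
    var[0] = r.var()  # initialize at the sample variance
    for t in range(1, len(r)):
        var[t] = lam * var[t - 1] + (1 - lam) * r[t - 1] ** 2
    return r / np.sqrt(var)
```

The filtered series should have roughly unit variance, so observations from calm and turbulent periods become comparable before correlations are computed.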

2.5 GARCH (1,1)

The autoregressive conditional heteroskedastic (ARCH) model takes the returns in previous time steps into account in predictions. One drawback of this method is that it ignores changes in variance, which could be advantageous to take into account. The Generalized Autoregressive Conditional Heteroskedasticity model (GARCH) extends the simpler ARCH model by taking the conditional variance into consideration.

Although the distribution of the observations in a time series is unknown, GARCH gives a reliable simulation of financial time series. This is due to the capture of the conditional variance described above.

Consider a process

$$ X_t = C + a_t $$    (7)

where a_t is the mean-adjusted log return such that a_t = r_t − μ_t and C is the mean of the historical observations. The GARCH(1,1) is described as follows

$$ \sigma_t^2 = \alpha_0 + \alpha_1 a_{t-1}^2 + \beta_1 \sigma_{t-1}^2 $$    (8)

Maximum likelihood is used to estimate the parameters. The approach is to maximize the logarithm of L(θ), which is defined as

$$ L(\theta) = \prod_{t=1}^{T} f(\theta) $$    (9)

where f(θ) is the density function, in this case the normal density function

$$ f(\theta) = \frac{1}{\sqrt{2\pi\sigma_t^2}}\, e^{-\frac{a_t^2}{2\sigma_t^2}}. $$    (10)

The logarithm of both sides gives

$$ \log L(\theta) = \sum_{t=1}^{T} \log f(\theta) $$    (11)

and the log likelihood function log L(θ) that will be maximized for the normal probability function is rewritten as

$$ \log L(\theta) = -\frac{T}{2}\log 2\pi - \frac{1}{2}\sum_{t=1}^{T}\log \sigma_t^2 - \frac{1}{2}\sum_{t=1}^{T}\frac{a_t^2}{\sigma_t^2} $$    (12)

where θ denotes all the unknown parameters in σ_t² and a_t.
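A minimal sketch of the estimation: Equation (12) (as a negative log-likelihood) is minimized numerically over (α₀, α₁, β₁). The initialization of the variance recursion and the starting values are assumptions for illustration, not taken from the thesis:

```python
import numpy as np
from scipy.optimize import minimize

def garch11_negloglik(params, a):
    """Negative Gaussian log-likelihood of Equation (12) under a
    GARCH(1,1) variance recursion for mean-adjusted returns a."""
    omega, alpha1, beta1 = params
    var = np.empty(len(a))
    var[0] = a.var()  # common convention: start at the sample variance
    for t in range(1, len(a)):
        var[t] = omega + alpha1 * a[t - 1] ** 2 + beta1 * var[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(var) + a ** 2 / var)

def fit_garch11(a):
    """Maximum likelihood fit of (omega, alpha1, beta1)."""
    a = np.asarray(a, dtype=float)
    x0 = np.array([0.1 * a.var(), 0.05, 0.90])  # rough starting values
    bounds = [(1e-12, None), (0.0, 1.0), (0.0, 1.0)]
    res = minimize(garch11_negloglik, x0, args=(a,),
                   bounds=bounds, method="L-BFGS-B")
    return res.x
```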

2.6 Multivariate GARCH (1,1)

The constant correlation multivariate GARCH was proposed by Tim Bollerslev (1990). It provides the possibility of simulating a mixture of time series with respect to correlations.

Let a_t be the returns with mean zero and

$$ a_t \mid \mathcal{F}_{t-1} \sim N(0, H_t) $$    (13)

where F_{t−1} is all the available information up through time t − 1 and

$$ H_t = D_t R_t D_t $$    (14)

and R_t is the correlation matrix which, in the constant correlation case, is R_t = R. Note that Equation 14 requires that the correlation matrix is positive definite. D_t in this equation is the diagonal matrix of standard deviations √h_it for the i-th time series, each modeled by a univariate GARCH(1,1) as follows

$$ h_{it} = \alpha_{i0} + \alpha_{i1} r_{i,t-1}^2 + \beta_{i1} h_{i,t-1} $$    (15)

The coefficients α_{i0}, α_{i1} and β_{i1} for the respective time series have the same restrictions as the corresponding coefficients in Equation 8 and are estimated by maximum likelihood. The log-likelihood in the multivariate case is defined as

$$ \log L(\theta) = -\frac{1}{2}\sum_{t=1}^{T}\left(k\log 2\pi + 2\log|D_t| + \log|R_t| + \epsilon_t' R_t^{-1}\epsilon_t\right) $$    (16)

where θ denotes all the unknown parameters in D_t and ε_t, and ε_t ~ N(0, R_t) are the standardized residuals in the univariate GARCH model.
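One simulation step under the constant-correlation model can be sketched as: build H_t = D_t R D_t from the per-asset GARCH variances (Equations 14-15) and draw the next return vector from N(0, H_t) (Equation 13). The two-asset numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])            # constant correlation matrix
h_t = np.array([0.015, 0.012]) ** 2   # per-asset conditional variances
D_t = np.diag(np.sqrt(h_t))           # diagonal of standard deviations
H_t = D_t @ R @ D_t                   # Equation (14): H_t = D_t R D_t

a_t = rng.multivariate_normal(np.zeros(2), H_t)  # Equation (13)
print(a_t)
```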


2.7 Cholesky's decomposition

To create a linear relationship between a set of independent variables, Cholesky's decomposition can be used. Let ρ be a positive definite n×n matrix. If an upper triangular matrix U exists such that ρ = UᵀU, then the matrix U is called the Cholesky factor of ρ.
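This is how the decomposition is typically used to impose a target correlation on independent draws, as in the FHS Unconditional Student's t-model. A small sketch (NumPy returns the lower factor L with ρ = LLᵀ, so U = Lᵀ):

```python
import numpy as np

rng = np.random.default_rng(7)

rho = np.array([[1.0, 0.6],
                [0.6, 1.0]])           # target correlation, positive definite
U = np.linalg.cholesky(rho).T          # Cholesky factor with rho = U^T U
z = rng.standard_normal((200_000, 2))  # independent standard normal draws
x = z @ U                              # rows of x now have correlation rho

print(np.corrcoef(x.T)[0, 1])          # close to 0.6
```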


3 Methodology

This section describes the portfolio used in the thesis: a distribution and autocorrelation analysis of the different time series in the portfolio, and how the time period is divided into financially calm and turbulent subintervals. The models used in the simulations are the FHS Unconditional t-model and MV-GARCH(1,1); the models, together with an evaluation of them, are presented in this section. The end of the section describes how the hypothetical data is created and how the stressed VaR is defined.

3.1 Portfolio

The portfolio used in this study consists of a single share in each of the indices OMXS30, FTSE100 and S&P500, one American dollar expressed in SEK and one British pound expressed in SEK. The time series used to track the changes of the latter two will thus be the foreign exchange rates USDSEK and GBPSEK.

The asset OMXS30 is a market value-weighted index for the Stockholm Stock Exchange and consists of the 30 most traded stocks. FTSE100 is a share index which consists of the 100 companies with the highest capitalization on the London Stock Exchange. S&P500 is a stock market index with the 500 top publicly traded American companies. These three indices, as well as the foreign exchange rates corresponding to the same countries as the indices, are chosen because banks commonly invest in them.

The time series consist of the daily relative changes from 1994-01-03 to 2011-07-11, where OMXS30, USDSEK and GBPSEK are calculated from the prices in SEK. However, the relative changes in the time series S&P500 and FTSE100 are calculated in the countries' respective currencies; see their prices over time in Figure 3. To obtain the series corresponding to a relative change calculated in SEK, the exchange rate USDSEK would have to be used. If this conversion were used, the correlation between S&P500 and USDSEK would be strengthened in a way that does not only reflect the market conditions. Therefore, the fact that the international indices' relative changes depend on another currency than SEK will be ignored, and they are treated as if they were calculated from SEK.

As the VaR of interest is on a one-day horizon, the change in the portfolio can be considered as the relative change from the day before if 1 SEK had been invested in each asset. To illustrate how the value of the different assets has changed over time, the price for every asset is observed from Bloomberg on 2011-07-03 and the daily prices are calculated backwards with the daily relative changes

$$ P_{t-1} = \frac{P_t}{1 + p_t} $$    (17)

where P_t is the price on day t and p_t is the daily relative change from day t − 1 to t. The daily prices from 1994-01-03 to 2011-07-03 are obtained for the currencies and shown in Figure 2. Note that the index prices in Figure 3 are expressed in different currencies.
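The backward recursion of Equation (17) can be sketched as below; the function name is illustrative:

```python
import numpy as np

def prices_backwards(p_last, rel_changes):
    """Reconstruct a price series from the last observed price and the
    daily relative changes, per Equation (17): P_{t-1} = P_t / (1 + p_t)."""
    prices = np.empty(len(rel_changes) + 1)
    prices[-1] = p_last
    for t in range(len(rel_changes) - 1, -1, -1):
        prices[t] = prices[t + 1] / (1.0 + rel_changes[t])
    return prices
```

Running the recursion forward again, P_t = P_{t-1}(1 + p_t), reproduces the observed series, which is an easy consistency check.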


Figure 2. The daily prices for USD and GBP from 1994-01-03 to 2011-07-11.

Figure 3. The daily prices for OMXS30, FTSE100 and S&P500 from 1994-01-03 to 2011-07-11.

The daily relative changes, also denoted as returns, of the currencies and indices over time are shown in Figures 4 and 5.



Figure 4. The daily relative changes of the currencies from 1994-01-03 to 2011-07-11.

 

Figure 5. Daily relative changes of the indices from 1994-01-03 to 2011-07-11.

The daily relative changes of the assets are added together for each day and represent the portfolio, see Figure 6.


Figure 6. The daily relative changes of each asset added together for each day represent the portfolio.

3.1.1 Distribution analysis

A simulation of the portfolio requires an understanding of its behavior as well as of the behavior of each asset. In this section an analysis of the distribution is made for the different assets and the portfolio, with histograms fitted with a normal distribution and a t-distribution. The distributions' parameters are estimated and the fits are obtained from the density functions. There is also a quantile-quantile plot, qq-plot, of the different time series' quantiles versus the quantiles of a normal distribution and of a t-distribution. As mentioned before, the standardized Student's t-distribution is commonly used in modeling financial time series. Therefore, the tests are limited to this distribution and the normal distribution.

To get a quantitative comparison between the two tested distributions, a chi-square goodness-of-fit test is done. The chi-square statistic

$$ \chi^2 = \sum_{i=1}^{N}\frac{(O_i - E_i)^2}{E_i} $$    (18)

is computed, where the data is grouped into bins, E_i is the expected count and O_i the observed count from the time series. The number of bins, N, is set to ten and the test statistic is compared with the chi-square distribution. The number of degrees of freedom is set to N − 3. The parameters needed for the respective distributions are estimated by maximum likelihood. The null hypothesis of the test is that the data comes from the tested distribution. The results of the test are presented at the end of this section.
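A sketch of the test against a fitted normal, with ten equiprobable bins and N − 3 degrees of freedom as described above; the binning choice and function name are assumptions for illustration:

```python
import numpy as np
from scipy import stats

def chi2_gof_normal(x, n_bins=10):
    """Chi-square goodness-of-fit statistic of Equation (18) against a
    normal distribution with ML-fitted mean and standard deviation."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()  # ML estimates under normality
    # Equiprobable bins under the fitted normal, so E_i = n / n_bins
    edges = stats.norm.ppf(np.linspace(0.0, 1.0, n_bins + 1), mu, sigma)
    edges[0], edges[-1] = x.min() - 1.0, x.max() + 1.0  # cover all data
    observed, _ = np.histogram(x, bins=edges)
    expected = len(x) / n_bins
    chi2 = np.sum((observed - expected) ** 2 / expected)
    p_value = stats.chi2.sf(chi2, df=n_bins - 3)
    return chi2, p_value
```

Heavy-tailed data produces a far larger statistic than genuinely normal data, which is what drives the rejection of normality reported at the end of this section.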



 

Figure 7. Histogram of the assets' and the portfolio's distributions vs. the fitted normal distribution (red line).

 

Figure 8. The assets' and the portfolio observations' quantiles (vertical axis) vs. the normal distribution (horizontal axis).

As can be seen in Figure 7, all six time series distributions have more pronounced peaks than the fitted line from the normal distributions. This is an indication that the distributions probably have heavier tails. It is also clear in the qq-plots in Figure 8 that the ends of the curves are shifted from the red line corresponding to the normal distribution. If the top of the curve is shifted to the left, it is an indication that the distribution of the time series has heavier tails than the reference distribution.

To obtain the Student's t-distribution parameter, the degrees of freedom, the maximum likelihood approach described in section 2.5 is used. However, the Student's t density function is used, defined as follows

$$ f(\theta) = \frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)\sqrt{\nu\pi}\,\sigma}\left(1 + \frac{(x-\mu)^2}{\nu\sigma^2}\right)^{-\frac{\nu+1}{2}} $$    (19)

where x are the observations in the time series, μ is the mean, σ is the standard deviation and ν is the degrees of freedom. θ represents the unknown parameters ν and μ, and σ is given by

$$ \sigma = \sqrt{\frac{\nu}{\nu-2}}, \quad \text{for } \nu > 2 $$    (20)

 

Figure 9. Histogram of the assets' and the portfolio's distributions vs. the Student's t distribution (red line).

(19)

 

Figure 10. The assets' and the portfolio observations' quantiles (vertical axis) vs. the Student's t distribution (horizontal axis) with 4 degrees of freedom for the currencies and 3 for the indices.

A comparison between Figures 7 and 9 indicates clearly that the series' distributions follow the reference distribution more closely in Figure 9. The qq-plots in Figure 10 also show signs that the samples' quantiles follow the quantiles of a Student's t distribution more closely than those of a normal distribution in Figure 8.

In the following table some general information about the time series is presented.

Table 1. General information about the time series.

                      OMXS30      S&P500      FTSE100     USD          GBP
Mean                  4.1·10^-4   3.0·10^-4   1.8·10^-4   -3.7·10^-5   -2.8·10^-5
Standard deviation    0.015       0.012       0.011       0.0071       0.0060
Skewness              0.24        0.21        0.055       -0.15        -0.083
Kurtosis              6.9         15.0        9.7         6.0          6.4

Skewness is a measure of the asymmetry of the distribution. The value is positive, negative or undefined and indicates whether the right tail, the left tail or neither is longer than the other. The skewness is defined as

s = E[(x − μ)³] / σ³        (21)

The  skewness  of  a  sample  without  correcting  for  bias  can  be  estimated  as  follows:  

s₁ = [ (1/n) Σ_{i=1}^{n} (x_i − x̄)³ ] / [ (1/n) Σ_{i=1}^{n} (x_i − x̄)² ]^{3/2}        (22)

Kurtosis is a measure of the shape of the distribution, more specifically of its peak. It indicates how much the distribution deviates from a normal distribution, which has a kurtosis of three. A distribution with kurtosis greater than three is called leptokurtic and is characterized by a sharper peak and fatter tails. The kurtosis is defined as

k = E[(x − μ)⁴] / σ⁴        (23)

and the bias-uncorrected sample estimate is obtained from the following equation

k₁ = [ (1/n) Σ_{i=1}^{n} (x_i − x̄)⁴ ] / [ (1/n) Σ_{i=1}^{n} (x_i − x̄)² ]²        (24)
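The bias-uncorrected estimators in Equations 22 and 24 can be sketched as follows (a minimal numpy illustration, not the thesis implementation):

```python
import numpy as np

def sample_skewness(x):
    """Bias-uncorrected sample skewness, Equation 22."""
    d = x - x.mean()
    return (d ** 3).mean() / (d ** 2).mean() ** 1.5

def sample_kurtosis(x):
    """Bias-uncorrected sample kurtosis, Equation 24."""
    d = x - x.mean()
    return (d ** 4).mean() / (d ** 2).mean() ** 2

# Sanity check: a large normal sample has skewness near 0 and kurtosis near 3.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
```

Applied to the return series, values such as those in Table 1, with kurtosis well above three, point to heavy tails.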

The kurtosis values of all assets in Table 1 indicate that they come from a distribution with heavy tails, for example the student's t-distribution. This was expected since, as mentioned before, it is well known that distributions of financial returns often have heavy tails.

The chi-square statistic corresponds to a value 1 − p, where a p-value less than 0.05 strengthens a rejection of the null hypothesis at a 95 percent confidence level. The p-value of the null hypothesis stating that the portfolio's time series is normally distributed is 1.37 ⋅ 10⁻¹¹, and the corresponding p-value for the student's t-distribution is 0.302.

3.1.2  Autocorrelation  and  Heteroskedasticity  

To determine the dependency between returns, the autocorrelation function, ACF, is calculated. If the return a_t is correlated with a_{t−k}, where k is the lag, the correlation is called autocorrelation. The estimates used are those of Box, Jenkins and Reinsel, specifically

z_k = c_k / c_0        (25)

where z_k is the autocorrelation at lag k and

c_k = (1/n) Σ_{t=1}^{n−k} (a_t − ā)(a_{t+k} − ā),   k = 0, 1, 2, …, K        (26)

Here a_t is the return sequence and ā is the sample mean. If the observations in the sequence a_t are squared, and ā is the mean of the squared series, the test is for heteroskedasticity instead.

In Figures 11 through 15 the ACF of all the time series is illustrated. The blue lines represent a 95 percent confidence interval.


  Figure  11.  The  autocorrelation  function  of  the  time  series  for  OMXS30  and  its  heteroskedasticity  to  the  right.  

  Figure  12.  The  autocorrelation  function  of  the  time  series  for  S&P500  and  its  heteroskedasticity  to  the  right.  


  Figure  13.  The  autocorrelation  function  of  the  time  series  for  FTSE100  and  its  heteroskedasticity  to  the  right.  

  Figure  14.  The  autocorrelation  function  of  the  time  series  for  USD  and  its  heteroskedasticity  to  the  right.  


  Figure  15.  The  autocorrelation  function  of  the  time  series  for  GBP  and  its  heteroskedasticity  to  the  right.  

The figures show that there is autocorrelation in all assets, and clearly heteroskedasticity as well. This suggests that it is inappropriate to simulate the returns without taking the autocorrelation into account. One way around this is to filter the innovations by, for example, their conditional variances. The filtered innovations are then assumed to be independent and identically distributed.

3.1.3  Subintervals  of  the  time  series  

It is of interest to determine what happens to the correlations between a financially calm and a financially stressed period. To simplify, the whole data period is divided into subintervals. The intervals are chosen by delimiting the financial shocks in time, treating the time around them as stressed periods and the intervals before and after as calm periods. The different calm periods are not necessarily equally calm, but rather relatively calm compared with their surroundings.


In  Figure  16  the  prices  of  OMXS30  are  illustrated.    

  Figure  16.  The  prices  of  OMXS30  in  SEK  over  time.    

Some large losses can be discerned in the figure above, and these periods in Figure 16 are clearly more volatile than their surroundings. The volatility clusters of the time series shown in Figure 6 do not necessarily occur at exactly the same time as the price falls, which makes it difficult to decide when a shock starts. The shocks are therefore defined roughly, by selecting time periods where the loss is more than 20 percent within 50 days for the index OMXS30 (shown in Figure 16), and where the portfolio variant described in section 3.2 has a daily relative change of less than −0.09 at least four times during the same 50-day period. Periods that meet both conditions are considered financially stressed. The exact dates are picked by arbitrarily selecting a date where the relative changes start to increase and, after the upturn, a date where the relative changes have decreased again, see Figure 6. The subintervals are presented in Table 2.

Table  2.  The  dates  of  the  subintervals.    

Subinterval   Period from    Period to
Period 1      1994-01-04     1998-07-15
Period 2*     1998-07-16     1998-10-08
Period 3      1998-10-09     2002-06-14
Period 4*     2002-06-17     2003-04-08
Period 5      2003-04-09     2008-05-16
Period 6*     2008-05-19     2009-04-28
Period 7      2009-04-29     2011-07-11

*Periods 2, 4 and 6 are considered to be stressed.
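The shock-selection rule described above can be formalized in one possible way as follows; reading the 20-percent loss condition as a maximum drawdown within the trailing 50-day window is an assumption, and the data below are synthetic:

```python
import numpy as np

def stressed_flags(prices, rel_changes, drop=0.20, window=50,
                   daily_limit=-0.09, min_hits=4):
    """Flag each day whose trailing window satisfies both conditions:
    a drawdown of more than `drop` within the window, and at least
    `min_hits` daily relative changes below `daily_limit`."""
    prices = np.asarray(prices, float)
    rel = np.asarray(rel_changes, float)
    flags = np.zeros(len(prices), dtype=bool)
    for t in range(window, len(prices)):
        p = prices[t - window:t]
        drawdown = 1.0 - p / np.maximum.accumulate(p)
        big_loss = drawdown.max() > drop
        many_shocks = np.sum(rel[t - window:t] < daily_limit) >= min_hits
        flags[t] = big_loss and many_shocks
    return flags

# Synthetic example: six consecutive -10% days around day 150.
rel = np.zeros(300)
rel[150:156] = -0.10
prices = 100.0 * np.cumprod(1.0 + rel)
flags = stressed_flags(prices, rel)
```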

3.2  Simulation  with  Unconditional  Student’s  t  

As argued in previous sections, the assets' return series seem to have heavy tails, and using student's t-distributed innovations in the simulation is therefore a natural choice.


To make use of the model, one generates random numbers from a student's t-distribution such that a matrix of the same size as the portfolio is obtained. It is generated with three degrees of freedom for the indices and four for the currencies; these values were obtained by the maximum likelihood method. The correlation is calculated from the mean-adjusted returns in the different time periods of the historical data. To take the correlation into account, the Cholesky decomposition is used to correlate the generated random numbers.

Let a vector r_{i,j} contain the generated numbers for time period i and asset number j, where i = 1, 2, …, 7 and j = 1, 2, …, 5. To obtain generated numbers with the same variance as the historical data, a vector a_{i,j} is created as follows

a_{i,j} = σ_{i,j} r_{i,j}        (27)

where σ_{i,j} = σ̂_{i,j} ((ν − 2)/ν)^{1/2} is a constant based on σ̂_{i,j}, the standard deviation of asset j in time period i. A matrix can now be obtained as follows

a_i = (a_{i,1}  a_{i,2}  a_{i,3}  a_{i,4}  a_{i,5})        (28)

A correlated matrix corresponding to the historical data is given by

A_i = a_i U_i        (29)

where U_i is the Cholesky factor of the correlation matrix calculated from the historical data in time period i. To obtain the whole portfolio estimate, for the entire time period, the calculation in Equation 29 is done for all time periods and the results are put together in the correct time sequence. To obtain the simulated time series corresponding to the portfolio's time series, the mean of the time series is added to the simulated mean-adjusted returns.
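The construction in Equations 27, 28 and 29 can be sketched for a single time period as follows; the equicorrelation matrix stands in for a period's historical correlation estimate, and the volatilities are the Table 1 values (a minimal sketch, not the thesis code):

```python
import numpy as np

rng = np.random.default_rng(2)
n_days, n_assets = 1000, 5
dof = np.array([3, 3, 3, 4, 4])             # 3 d.o.f. for indices, 4 for currencies
sigma_hist = np.array([0.015, 0.012, 0.011, 0.0071, 0.0060])  # Table 1

# Unit-variance student's t draws (a t_v draw has variance v/(v-2)).
r = rng.standard_t(dof, size=(n_days, n_assets)) * np.sqrt((dof - 2) / dof)

# Equation 27: scale each asset's draws by its historical volatility.
a = r * sigma_hist

# Equation 29: impose the period's correlation via the Cholesky factor U
# (upper triangular, chosen so that U'U equals the correlation matrix).
corr = np.full((n_assets, n_assets), 0.4) + 0.6 * np.eye(n_assets)  # assumed
U = np.linalg.cholesky(corr).T
A = a @ U
```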

3.2.1  FHS  Unconditional  t-­‐model    

The problem with simulations based on historical events, as in this case, is that volatility may influence the correlations and thereby provide misleading information. To get around this problem, Filtered Historical Simulation (FHS) is used.

Consider the matrix r, which represents the historical data for all five assets in the portfolio. To get observations drawn from a standardized empirical return distribution, r is filtered. Hence, a sequence a_{i,j} is obtained, where i and j are the same as above. A description of filtered historical simulation is found in section 2.4.

The correlations for the different time periods are now calculated from the filtered events in the same way as described above. One might argue that the standard deviations estimated by GARCH(1,1) could be used after filtering the historical innovations, but for simplicity the same standard deviations as in Equation 27 are used in the simulations. The FHS Unconditional t-model is given by Equations 27 and 28, and by

A_i = a_i U_i        (30)

where U_i is the Cholesky factor of the correlation matrix calculated from the filtered historical data in time period i.
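The filtering step can be sketched as below, with illustrative GARCH(1,1) coefficients standing in for estimated ones:

```python
import numpy as np

def garch_filter(a, omega, alpha, beta):
    """Divide mean-adjusted returns by their GARCH(1,1) conditional
    volatility, leaving approximately i.i.d. standardized innovations."""
    n = len(a)
    sigma2 = np.empty(n)
    sigma2[0] = a.var()                      # initialize at the sample variance
    for t in range(1, n):
        sigma2[t] = omega + alpha * a[t - 1] ** 2 + beta * sigma2[t - 1]
    return a / np.sqrt(sigma2)

rng = np.random.default_rng(4)
a = rng.normal(0, 0.01, 1000)
z = garch_filter(a, omega=1e-6, alpha=0.08, beta=0.9)  # illustrative coefficients
```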

The 𝑉𝑎𝑅 is calculated as in Equation 1 every day, from the latest 252 days. The first 252 days in the historical data are therefore not compared with any 𝑉𝑎𝑅 estimate.
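Assuming that Equation 1 takes the one-day VaR as the empirical α-quantile of the trailing 252 returns, reported as a positive loss figure, the rolling calculation might be sketched as:

```python
import numpy as np

def rolling_var(returns, alpha=0.01, window=252):
    """One-day VaR each day from the trailing `window` returns.

    Assumed reading of Equation 1: the (lower) empirical alpha-quantile
    of the window, reported as a positive loss."""
    returns = np.asarray(returns, dtype=float)
    out = np.full(len(returns), np.nan)      # the first 252 days stay undefined
    for t in range(window, len(returns)):
        out[t] = -np.quantile(returns[t - window:t], alpha)
    return out

rng = np.random.default_rng(3)
var99 = rolling_var(rng.normal(0, 0.01, 1500), alpha=0.01)
```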

3.3  Simulation  with  MV-­‐GARCH  (1,1)  

Simulating a portfolio containing different assets may require taking into account that the returns vary differently, in order to get a satisfying and realistic result. The MV-GARCH uses a constant correlation approach: although the correlations remain constant, the conditional heteroskedasticity is still allowed to be time-varying.

In  this  work  the  interpretation  of  Kevin  Sheppard  (2001,  2003),  based  on  the  theory  of  Tim  Bollerslev   (1990)  is  used.    

The whole time period is used to estimate the coefficients for each asset's time series. This means that the coefficients in Equation 15 are determined over the same time period as the prediction period. Predictions with GARCH normally use coefficients estimated over an appropriate calibration period, but in this case, when the aim is to answer the question of whether the data can be stressed through the correlations, no calibration period is used. In these estimations the mean-adjusted returns are used, as in the Unconditional Student's t-model. From the covariance matrix obtained in Equation 14, the observation at time step t is obtained as follows

r_t = H_t^{1/2} η_t        (31)

where E[η_t η_t′] = I, the identity matrix.
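Under the constant correlation assumption the conditional covariance factors as H_t = D_t R D_t, so D_t L, with L the Cholesky factor of R, is a valid square root to use in Equation 31. A one-step sketch, where R and the conditional standard deviations d_t are assumed values:

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed constant correlation matrix R and one day's conditional
# standard deviations d_t from the univariate GARCH(1,1) recursions.
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
d_t = np.array([0.015, 0.012, 0.011])

# H_t = D_t R D_t, so H_t^(1/2) can be taken as D_t L with L L' = R.
L = np.linalg.cholesky(R)
H_sqrt = np.diag(d_t) @ L

eta = rng.standard_normal(3)   # innovations with E[eta eta'] = I
r_t = H_sqrt @ eta             # one simulated observation, Equation 31
```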

One can argue that the coefficient estimation should be based on student's t-distributed innovations, but a model that assumes normally distributed innovations in the coefficient estimation is known to perform well (Gouriéroux, C., 1997).

The time series corresponding to the portfolio simulated by this model is used to calculate a 𝑉𝑎𝑅, computed in the same way as described in section 3.2.1.

3.4  Evaluation  of  the  models  

This section describes two tests performed on the models with the objective of evaluating them. The results of the tests are also presented.

3.4.1  Root  mean  square  error  

To test how well the models estimate the historical data, a root mean square error (RMSE) is calculated for both models as follows

RMSE = [ (1/n) Σ_{t=1}^{n} (x_t − x̂_t)² ]^{1/2}

where x_t denotes the historical observations and x̂_t the simulated ones.

Table 3. The RMSE between the historical data and the simulations. The RMSE values for FHS Unconditional Student's t are a mean over 100 iterations. The RMSE for MV-GARCH(1,1) is obtained from one iteration.

          Model                           RMSE
Returns   FHS Unconditional Student's t   0.0470
          MV-GARCH(1,1)                   0.0478

          Model                           RMSE         RMSE         RMSE        RMSE
                                          (α = 0.01)   (α = 0.05)   (α = 0.1)   (α = 0.2)
𝑉𝑎𝑅       FHS Unconditional Student's t   0.0141       0.0103       0.0083      0.0067
          MV-GARCH(1,1)                   0.0146       0.0085       0.0068      0.0050

The information in Table 3 indicates that the models perform almost equally well. The unconditional model shows a slightly better RMSE for both the returns and the 𝑉𝑎𝑅 at α = 0.01. For the other probabilities, however, the RMSE of the 𝑉𝑎𝑅 calculations from the MV-GARCH model is noticeably smaller than for the other model. Since the 𝑉𝑎𝑅 curve is analyzed in the tests of the stress effects through correlation, the MV-GARCH seems to be more accurate.

3.4.2  Backtesting  

A backtest is used to test the models' performance. First, the number of historical events that exceed their 𝑉𝑎𝑅 curve is calculated. Using the historical data and the 𝑉𝑎𝑅 curves simulated by the models, the exceedances can then be compared across the different 𝑉𝑎𝑅 curves. It is also of interest to compare this outcome with the statistically expected number of exceedances. This is done for α = 0.01, α = 0.05, α = 0.1 and α = 0.2, to see whether the stability varies across probabilities. In the section Backtesting in Appendix 1, a backtest is presented where the simulated innovations' exceedances of the historical 𝑉𝑎𝑅 curve are calculated and compared with the statistically expected exceedances.
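The exceedance counting behind this comparison can be sketched as follows; the VaR curve is again assumed to be reported as a positive loss figure:

```python
import numpy as np

def count_exceedances(returns, var_curve):
    """Number of days on which the realized loss exceeds the reported VaR."""
    returns = np.asarray(returns, dtype=float)
    var_curve = np.asarray(var_curve, dtype=float)
    ok = np.isfinite(var_curve)              # skip days with no VaR estimate
    return int(np.sum(returns[ok] < -var_curve[ok]))

def expected_exceedances(n_days, alpha):
    """Statistically expected count: alpha times the number of tested days."""
    return alpha * n_days
```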

For  𝛼 = 0.01  

   

Figure  17.  The  difference  in  exceedances  by  the  historical  data  on  the  𝑽𝒂𝑹-­‐curves.  The  diagram  to  the  left  is  a   comparison  between  the  𝑽𝒂𝑹-­‐curves  of  the  models  and  the  historical  𝑽𝒂𝑹-­‐curve.  The  right  diagram  is  a  comparison   between  the  𝑽𝒂𝑹-­‐curves  of  the  models  and  the  statistical  expected  number  of  exceedances  for  the  given  probability.  



Table  4.  The  number  of  exceedances  in  each  period  for  the  given  probability.  

                 Period 1   Period 2   Period 3   Period 4   Period 5   Period 6   Period 7
FHS Uncon. t     11         7          5          4          15         6          5
MV-GARCH(1,1)    26         7          9          6          32         6          7
Historical       13         4          7          7          13         5          6
Statistical      9.17       0.61       9.46       2.05       12.83      2.37       5.52

 

The number of exceedances for the 𝑉𝑎𝑅 curve generated by the FHS Unconditional Student's t-model is more consistent with the historical 𝑉𝑎𝑅 curve than that of MV-GARCH(1,1). The number of exceedances for the unconditional model also seems to be more consistent with the statistically expected number. The right diagram in Figure 17 supports the conclusion that the FHS Unconditional Student's t-model performs better for α = 0.01, which also followed from the RMSE values in Table 3.

For  𝛼 = 0.05  

   

Figure 18. The difference in exceedances by the historical data on the 𝑽𝒂𝑹-curves. The diagram to the left is a comparison between the 𝑽𝒂𝑹-curves of the models and the historical 𝑽𝒂𝑹-curve. The right diagram is a comparison between the 𝑽𝒂𝑹-curves of the models and the statistically expected number of exceedances for the given probability.

Table 5. The number of exceedances in each period for the given probability.

                 Period 1   Period 2   Period 3   Period 4   Period 5   Period 6   Period 7
FHS Uncon. t     48         22         65         28         98         18         26
MV-GARCH(1,1)    51         15         39         24         85         30         16
Historical       52         12         47         22         75         15         18
Statistical      45.85      3.05       47.3       10.25      64.15      11.85      27.6

 

With this 𝑉𝑎𝑅 probability, the MV-GARCH(1,1) model's 𝑉𝑎𝑅 curve performs better, as it is closer to both the historical and the statistically expected number of exceedances.


References
