
Catastrophe, Ruin and Death

- Some Perspectives on Insurance Mathematics

Erland Ekheden


Abstract

This thesis gives some perspectives on insurance mathematics related to life insurance and/or reinsurance.

Catastrophes and large accidents resulting in many lost lives are unfortunately known to happen over and over again. A new model for the occurrence of catastrophes is presented; it models the number of catastrophes, how many lives are lost, how many of the lost lives are insured by a specific insurer, and the cost of the resulting claims. This makes it possible to calculate the price of reinsurance contracts linked to catastrophic events.

Ruin is the result if claims exceed the initial capital and the premiums collected by an insurance company. We analyze the Cramér–Lundberg approximation for the ruin probability and give an explicit rate of convergence in the case where claims are bounded by some upper limit.

Death is known to be the only thing that is certain in life. Individual life spans are, however, random; models for and statistics of mortality are important for, amongst others, life insurance companies, whose payments ultimately depend on people being alive or dead.

We analyze the stochasticity of mortality and perform a variance decomposition where the variation in mortality data is attributed either to the covariates age and time, to unexplained systematic variation, or to random noise due to a finite population. We suggest a mixed regression model for mortality and fit it to data from the US and Sweden, including prediction intervals for future mortalities.

© Erland Ekheden, Stockholm 2014

ISBN 978-91-7447-935-5

Printed in Sweden by Universitetsservice US-AB, Stockholm 2014

Distributor: Department of Mathematics, Stockholm University


Pause

Sometimes time, as it were, pauses for a moment and something wholly unexpected happens.

The world changes every day, but sometimes it is never the same again.

Alf Henrikson


List of Papers

The following papers, referred to in the text by their Roman numerals, are included in this thesis.

PAPER I: Ekheden, E. and Hössjer, O. (2014). Pricing Catastrophe Risk in Life (re)Insurance. Scandinavian Actuarial Journal, 2014(4), 353-367.

DOI:10.1080/03461238.2012.695747

PAPER II: Ekheden, E. and Silvestrov, D. (2011). Coupling and Explicit Rate of Convergence in Cramér–Lundberg Approximation for Reinsurance Risk Processes. Communications in Statistics - Theory and Methods, 40(19-20), 3524-3539.

DOI: 10.1080/03610926.2011.581176

PAPER III: Ekheden, E. and Hössjer, O. (2014). Analysis of the Stochasticity of Mortality Using Variance Decomposition. In Modern Problems in Insurance Mathematics, D. Silvestrov and A. Martin-Löf (eds.), 199-222, EAA Series, Springer International.

DOI: 10.1007/978-3-319-06653-0_13

PAPER IV: Ekheden, E. and Hössjer, O. (2014). Multivariate Time Series Modeling, Estimation and Prediction of Mortalities. Submitted.

Reprints were made with permission from the publishers.


Author’s contribution

Paper I: E. Ekheden did the data analysis, programming and writing; the model was developed jointly with O. Hössjer.

Paper II: Joint work by E. Ekheden and D. Silvestrov.

Papers III & IV: E. Ekheden did the data analysis and program-

ming, the writing and model was developed jointly with O. Hössjer.


Acknowledgements

There are many to thank who, in one way or another, have contributed to this thesis coming into being.

Above all I want to thank my supervisors, Dmitrii Silvestrov and Ola Hössjer. Without your support, ideas, discussions and proofreading, there would have been no thesis.

Erik Alm, whose interest in catastrophes laid the foundation for what has now become a whole thesis.

Fredrik Olsson, for tips and advice on mastering R.

Christer Borell, who raised my thinking a level.

Lars Gråsjö, who gave me confidence that I would actually manage to pursue a doctorate one day.

Gunnar Roos, who gave me fresh oxygen and energy.

Svante Silvén, who made sure I did not lose my interest in mathematics.

Present and former colleagues at the Division of Mathematical Statistics, who have contributed both to a pleasant atmosphere and to interesting conversations: Jens, Christina, Rolf, Gudrun, Susanna, Jan-Olov, Disa, Maria, Joanna, Mathias and others.

My beloved family and dear wife, who have stood by me through thick and thin.

Sigyn Knutsson, my mother, deeply missed, always in my heart.


Contents

Abstract
List of Papers
Author's contribution
Acknowledgements
List of Figures

I Introduction

1 Insurance mathematics
   1.1 Collective model
      1.1.1 Life and non-life insurance
      1.1.2 Reinsurance
   1.2 Catastrophes
      1.2.1 Extreme values
   1.3 Ruin
   1.4 Death
      1.4.1 Mortality improvements
      1.4.2 Two-way mortality tables

2 Overview of Papers
   2.1 Paper I
   2.2 Paper II
   2.3 Papers III & IV
      2.3.1 Paper III
      2.3.2 Paper IV
   2.4 Summary

Swedish Summary (Sammanfattning)

References

II Papers

List of Figures

1.1 Illustration of an excess of loss reinsurance contract. The reinsurer pays the part (dashed) of claims exceeding the retention S = 4 to the cedent. The reinsurer only observes (is notified of) claims 2 and 4, those in excess of the retention.

1.2 An illustration of a ruin process with initial capital u = 3. At t = 6.3 there is a claim resulting in negative capital, i.e. ruin.

1.3 Empirical one-year death risks q̂_x for Swedish females and males, 2011.

1.4 Plots, for Swedish data, of estimates of logit(q_xt) and q_xt for various ages x and calendar years t. The points are ordered linearly along the horizontal axis, where the first set of points is for age 60, years 1980 to 2011, and then the remaining ages 61, ..., 90 line up from left to right.


Part I

Introduction


1. Insurance mathematics

Certain types of random events can have a negative effect on individuals and corporations. A car, a house or a factory might burn down. An individual may die unexpectedly young, leaving children and a large mortgage behind, or live on in poverty long after the savings account is emptied.

To protect oneself from the economic effects of such events, one can buy protection: insurance. Insurance companies accept risks in exchange for a premium. Insurance relies on the law of large numbers and the central limit theorem, according to which the sum of a large number of random variables is, relatively speaking, much less random than the variables themselves.

Insurance is the swapping of a deterministic payment, the premium P, for a stochastic amount, the contingent claim amount as defined in the insurance contract.

1.1 Collective model

For an understanding of insurance mathematics, the collective model, introduced by Lundberg (1903), is paramount. The total claim amount up to time t is given by

\[ S = S(t) = \sum_{i=1}^{N(t)} Z_i. \tag{1.1} \]

We see from (1.1) that claims arrive according to some stochastic counting process with arrival times 0 ≤ T_1 < T_2 < ..., that the number of claims at time t is N(t) = max{i; T_i ≤ t}, and that the cost of the i:th claim is Z_i.

It is intuitively clear that in order to have a viable insurance operation, the premiums must be at least as large as the expected total claims E[S]. Assuming that the Z_i are independent and identically distributed and independent of N(t), with finite first moments of N(t) and Z_i, we have

\[ E[S] = E[N(t)] \cdot E[Z_1]. \]

Thus, in order to calculate the premium, one needs to study the claim frequency process N(·) and the claim severities Z_i. The fair or pure premium is such that P = E[S]. In practice the premiums must be higher, to accommodate the costs of running the insurance company: office, personnel, IT, marketing and so on.
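As an illustration of (1.1), a minimal Monte Carlo sketch of the pure premium; it is not part of the thesis, and the Poisson intensity and log-normal severity parameters below are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_total_claims(lam=100.0, t=1.0, n_sims=20_000):
    """Simulate S(t) = sum_{i=1}^{N(t)} Z_i with N(t) ~ Poisson(lam * t)
    and i.i.d. log-normal claim severities (hypothetical parameters)."""
    counts = rng.poisson(lam * t, size=n_sims)
    # Draw all severities at once, then split them between the simulations.
    severities = rng.lognormal(mean=0.0, sigma=1.0, size=counts.sum())
    pieces = np.split(severities, np.cumsum(counts)[:-1])
    return np.array([piece.sum() for piece in pieces])

S = simulate_total_claims()
# Pure premium P = E[S] = E[N(t)] E[Z_1]; here 100 * exp(0.5) ~ 164.9.
print("estimated pure premium:", S.mean())
```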

Even if P ≥ E[S], there is a risk that at some point in time, due to the inherent randomness, P < S. With claims exceeding premiums, some extra capital is needed to pay the claims. How much extra capital is needed for an insurer to almost surely be able to fulfill its obligations? By the central limit theorem, the relative fluctuation of the total claims around their mean decreases as the number of insurance policies grows.

The classical model for the dynamics of capital in an insurance company is the following:

\[ c(t) = c_0 + p \cdot t - \sum_{i=1}^{N(t)} Z_i. \tag{1.2} \]

The amount of capital, c(t), is the sum of an initial capital c_0 plus premiums (earned linearly with time) minus the claim amount up to time t. An important question in the classical setting is: what is the probability of ruin, the event that the capital at some point in time becomes negative?
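A sketch of how the ruin probability of (1.2) could be estimated by simulation over a finite horizon. The exponential claims, premium loading and horizon are illustrative assumptions, not values from the thesis; for exponential claims the classical theory gives the exact infinite-horizon answer ψ(u) = (1/(1+θ)) e^{-θu/((1+θ)µ)} to compare against.

```python
import numpy as np

rng = np.random.default_rng(2)

def ruin_probability(u=10.0, lam=1.0, mu=1.0, theta=0.2,
                     horizon=200.0, n_sims=5_000):
    """Estimate P(ruin before `horizon`) for c(t) = u + p*t - sum Z_i,
    with Poisson(lam) arrivals, Exp(mean mu) claims and premium rate
    p = (1 + theta) * lam * mu. Between claims the capital only grows,
    so ruin can only occur at claim instants; we check only there."""
    p = (1.0 + theta) * lam * mu
    ruined = 0
    for _ in range(n_sims):
        t, claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)   # next claim arrival time
            if t > horizon:
                break
            claims += rng.exponential(mu)     # claim severity
            if u + p * t - claims < 0.0:      # capital negative: ruin
                ruined += 1
                break
    return ruined / n_sims

# Exact infinite-horizon value for these parameters is about 0.157.
print("simulated ruin probability:", ruin_probability())
```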

This model does not include investments. In practice, returns on investments and the associated financial risks are very important for insurance operations. There is a rich literature on financial mathematics, see for example Björk (2009) or Hult et al. (2012). However, we will not treat financial risk in this dissertation.

In classical ruin theory the time horizon is infinite. In a modern regulatory framework like Solvency II, the time horizon is limited to one year, and the capital requirement c_0 is set so that the ruin probability is less than 0.5% for the next year.

Some important questions for insurance mathematics can be summarized as follows:

(a) What is the claim cost? It is often divided into:
   i. What is the claim frequency?
   ii. What is the claim severity?

(b) What is the ruin probability and capital requirement?

1.1.1 Life and non-life insurance

Life insurance is insurance where the payment depends on one (or two) persons' lives; for example, a life policy may pay a lump sum in case the insured person dies, or pay an annuity for as long as the insured person stays alive. A life insurance contract can be active for a long time; a pension insurance can first have a savings period of 40 years and then pay out an annuity for 25 years. The effect of interest over long periods is important, and the discounting of payment streams to present value is a vital part of calculating premiums and provisions. What makes life insurance special in this regard is that payments are discounted not only with interest but also with mortality.

Answering (a)ii in life insurance is easy: the benefits are defined in the insurance contract (if person x dies before the age y, the insurer will pay z monetary units to beneficiary w). Therefore claim severity is a known quantity.

The opposite is true for non-life insurance, defined "e contrario" as all insurance that is not life insurance, typically the insurance of property and casualty (also known as P&C), where claim amounts in general are stochastic. A motor insurance claim might be the cost of a new bumper or of a new car. Here one must try to find a distribution that fits the claim severity and estimate its parameters.

Estimating a claim cost is in practice an iterative process. Before an insurance policy is sold, the premium must be calculated. This involves finding the expected claim cost, in a process known as pricing. Pricing can involve anything from qualified guesswork, in the case of a new insurance type where there is no historical data to analyze, to the use of complex generalized linear models (GLMs) in cases where a long history of detailed data exists and it is possible to estimate how different factors, such as age, residential area, yearly mileage etc., affect the expected claim cost.


Once sold, the insurance company must set up a provision to cover the future claim costs associated with the policy. The policy covers events that occur during a specified time period (often a year), but claims can be reported with some delay, and in some cases it can take a long time before the final claim cost is known. For instance, if a person is injured in an accident, considerable time can pass before one can decide how well the person has recovered and what must be considered permanent damage. Hence, it can take years before the final claim cost associated with the policy is truly known. During this time the reserves must be updated according to the new information that is received. This process is known as reserving, see for example Taylor (2000).

1.1.2 Reinsurance

One way to manage insurance risk is through reinsurance. Reinsurance is insurance for insurers. A reinsurance contract can protect the direct insurer (or cedent, as it is commonly referred to) from the effects of unusually high claim frequency or from severe claims exceeding a certain retention (threshold level). Such an "excess of loss" contract is illustrated in Figure 1.1. Another way is to split the risks and premiums in a given proportion (say 50/50) between cedent and reinsurer in a "quota share" contract. The reinsurer then reimburses 50% of each claim, regardless of size.
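To make the two contract types concrete, here is a small sketch, in my own notation mirroring Figure 1.1, of the reinsurer's share of a single claim under each cover.

```python
def excess_of_loss(claim: float, retention: float, limit: float = float("inf")) -> float:
    """Reinsurer pays the part of the claim above the retention, capped at `limit`."""
    return min(max(claim - retention, 0.0), limit)

def quota_share(claim: float, share: float = 0.5) -> float:
    """Reinsurer reimburses a fixed proportion of every claim."""
    return share * claim

# Claims in the style of Figure 1.1, with retention S = 4:
for z in (3.0, 6.0, 2.0, 9.0):
    print(z, excess_of_loss(z, retention=4.0), quota_share(z))
```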

But reinsurance is not only protection against extreme events; one important use is to lessen capital requirements by mitigating part of the risk. This is especially useful for relatively new or fast-growing insurance companies, which can face high sales costs (commissions to brokers, etc.) that constitute a considerable part of the premium, while the sale of a policy immediately gives rise to a debt (the technical provision). By reinsuring part of the risk, the debt is lowered to a corresponding degree.

1.2 Catastrophes

When we build models for (a)i, the claim frequency, it is often assumed that claims arrive independently of each other. If that is not the case, then the law of large numbers may not hold, especially over shorter time periods, and the smoothing effect of collecting several risks in one portfolio is lost. Generally speaking, claims do arrive seemingly independently of each other, but there are events where this is not the case, for example a fire that spreads and burns down several neighboring buildings. Events resulting in several insurance claims are denoted as catastrophic.

[Figure 1.1: Illustration of an excess of loss reinsurance contract. The reinsurer pays the part (dashed) of claims exceeding the retention S = 4 to the cedent. The reinsurer only observes (is notified of) claims 2 and 4, those in excess of the retention.]

Insurance companies have to control their concentration of risks, for example by not giving fire insurance to an entire building block, in order not to expose themselves to unnecessary catastrophe risk.

Thinking of catastrophes, natural catastrophes like hurricanes, floods and earthquakes spring to mind. Such perils can, and regularly do, cause enormous insurance losses. For models of natural perils, see Woo (1999).

A model for catastrophes can be incorporated into (1.1) by interpreting T_i not as the time of the i:th claim, but rather as that of the i:th catastrophe.

Lack of data is a challenge, since extreme events, almost by definition, are rare. For an insurer it is often not possible to model catastrophe risk working only with its own experience. Instead, specialized consultancy firms, large reinsurance brokers and reinsurers, with the resources to collect a lot of catastrophe data, provide advanced models for catastrophes that can be used to analyze an insurer's exposure to different perils, and serve as a guide for how much reinsurance to buy. For insurance of property, some geographic areas are known to be more exposed than others; it might be a seismically active area or one with recurring storms. In life insurance it is hard to control concentration risk, since people move around.

1.2.1 Extreme values

The mathematical treatment of extremes, rare and large events, is called extreme value theory. Heavy-tailed distributions, or just "heavy tails", are a key concept in this area. The (right-)tail behavior of a distribution is characterized by the speed at which

\[ 1 - F(x) \to 0 \quad \text{as } x \to \infty. \]

Most commonly used distributions, such as the exponential, normal and gamma, have exponentially decaying or lighter tails, meaning that

\[ \exists\, \lambda > 0 : \; e^{\lambda x} (1 - F(x)) \to 0 \quad \text{as } x \to \infty. \]

There are, in contrast, distributions for which

\[ \forall\, \lambda > 0 : \; e^{\lambda x} (1 - F(x)) \to \infty \quad \text{as } x \to \infty. \]

These are said to have heavy tails, and important examples are the Pareto and log-normal distributions.

The classical theory of extremes concerns the limiting distribution of a properly scaled maximum M_n = max(X_1, ..., X_n) of a sequence of independent and identically distributed random variables X_i with some given distribution, see Resnick (1987).

From the insurance perspective, we are interested not only in the maxima but also in the behavior a bit out in the tail. The tail behavior is important, as it governs the risk of very costly claims. A way to analyze the tail is to use the Peaks over threshold (POT) method.

To be more specific, if X is a random variable with distribution function F, we study the distribution of exceedances over a threshold u,

\[ F_u(x) = P(X \le x + u \mid X > u) = \frac{F(x + u) - F(u)}{1 - F(u)}. \]

For large u, the excess distribution F_u can, under some conditions, be approximated by the generalized Pareto distribution (GPD), see Pickands (1975). It has cumulative distribution function

\[ G_{(u,\sigma,\xi)}(x) = 1 - \left[ 1 + \xi (x - u)/\sigma \right]^{-1/\xi}, \tag{1.3} \]

where u ∈ ℝ, x ≥ u and σ > 0. If X ~ GPD(u, σ, ξ), then

\[ E[X] = u + \frac{\sigma}{1 - \xi} \quad \text{when } \xi < 1 \]

and

\[ \mathrm{Var}(X) = \frac{\sigma^2}{(1 - \xi)^2 (1 - 2\xi)} \quad \text{when } \xi < 1/2. \]

The Pareto distribution has a heavy tail: if ξ ≥ 1/2 the variance does not exist, and if ξ ≥ 1 the same holds for the expected value.

We can interpret a random sequence {(T_i, Z_i), i = 1, 2, ...} as a marked Poisson process, see Jacobsen (2006), where the mark Z_i is the total claim amount resulting from event i. (The claims themselves do not form a Poisson process, since such a process with probability one has no two events occurring at the same time.)

By thinning the events to include only those larger than a certain threshold u, we can use the POT model to motivate a (generalized) Pareto distribution for the total claim severities. Pareto distributions have been shown to give a good fit, for example to wind storm losses, see Rootzén and Tajvidi (1997), and to claims from Danish industrial fires, see Hult et al. (2012).
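A sketch of the POT approach using scipy, assuming that library is available; the simulated Lomax (Pareto II) claims and the 95% threshold are illustrative choices, not the thesis's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical claim data with a Pareto-type tail (tail index 2, so xi = 1/2).
claims = stats.lomax.rvs(c=2.0, scale=1.0, size=5000, random_state=rng)

u = np.quantile(claims, 0.95)            # threshold: empirical 95% quantile
excesses = claims[claims > u] - u        # exceedances over the threshold

# Fit a GPD to the excesses; floc=0 since the excesses start at zero.
xi, _, sigma = stats.genpareto.fit(excesses, floc=0)
print(f"u = {u:.2f}, shape xi = {xi:.2f}, scale sigma = {sigma:.2f}")
# xi should land near 0.5: a heavy tail with finite mean but infinite
# variance, cf. the moment conditions for the GPD above.
```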

While popular in non-life applications, extreme value theory seems not to have been extensively applied to life insurance.

The most famous model for life catastrophes is due to Strickler (1960). Strickler used data from the Statistical Bulletin of the Metropolitan Life Insurance Company in New York, which had supplied summaries of the accidents in the US claiming five lives or more during the period 1946–1950.

The annual number of deaths per million of population resulting from accidents claiming m or more lives was approximated by the function

\[ A(m) = 8 \cdot 100^{1/m} \cdot m^{-1/3}. \]
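As a worked example, for accidents claiming at least m = 5 lives the formula gives

\[ A(5) = 8 \cdot 100^{1/5} \cdot 5^{-1/3} \approx 8 \cdot 2.512 \cdot 0.585 \approx 11.8 \]

annual deaths per million of population.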

From this equation he derived an elegant pricing formula. Drawbacks of Strickler's model are that there is no statistical method for updating A(m) in accordance with new data, that it assumes a constant, deterministic rate of catastrophes, and that it is limited to catastrophes claiming at most 1500 lives. Some smaller adjustments to Strickler's model have been proposed, see for instance Harbitz (1992) and Alm (1990). These modifications have, however, not addressed the main weaknesses of the model.

1.3 Ruin

Classical risk theory, or collective risk theory, is the study of an insurance company's risk business as formulated in (1.2). The aspect of the model that is most studied is the risk of ruin: the probability

\[ \psi(u) = P\left( u + p \cdot t - \sum_{i=1}^{N(t)} Z_i < 0 \ \text{for some } t > 0 \right) \]

that the insurer cannot fulfill its liabilities, which happens if the total claims at some point in time exceed the collected premiums plus the initial capital u. See Figure 1.2.

We refer to the original works by Lundberg (1903, 1909, 1926) and Cramér (1930, 1955), where the theory connected with the celebrated Cramér–Lundberg approximation for the ruin probability was developed. This approximation has the form of the asymptotic relation

\[ e^{\rho u} \psi(u) \to \pi \quad \text{as } u \to \infty, \tag{1.4} \]

where ρ is the Lundberg exponent, given as the solution of the corresponding functional equation.
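For the classical compound Poisson model, with claim intensity λ, premium rate p and claim moment generating function h(s) = E[e^{sZ}], this functional equation is the well-known Lundberg equation

\[ \lambda \left( h(\rho) - 1 \right) = p\,\rho. \]

For exponential claims with mean µ and premium loading θ, so that p = (1 + θ)λµ, it solves explicitly to ρ = θ / ((1 + θ)µ).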

[Figure 1.2: An illustration of a ruin process with initial capital u = 3. At t = 6.3 there is a claim resulting in negative capital, i.e. ruin.]

A probabilistic approach was proposed by Feller (1971), who used renewal theory in an elegant way to obtain the asymptotic relation (1.4), and by Gerber (1979), who showed how the Cramér–Lundberg approximation can be derived using martingale theory.

Generalizations of the somewhat simplistic classical risk model have been made in several directions. We refer to the works by Grandell (1991) and Schmidli (1997) for the corresponding results related to doubly stochastic risk models. Related results for ruin in a finite horizon and for models with heavy-tailed claims can be found in Embrechts, Klüppelberg and Mikosch (1997) and Asmussen (2000), upper and lower bounds for ruin probabilities in Kalashnikov (1997) and Rolski, Schmidli, Schmidt and Teugels (1999), and asymptotic expansions of ruin probabilities for perturbed classical risk processes in Gyllenberg and Silvestrov (2000, 2008).

1.4 Death

Our lives are all too fragile. It is impossible to insure oneself against death, but one can protect one's survivors from the demise of the breadwinner. Life insurance companies started in the 18th century, and one of the earliest, the Swedish "Civilstatens Enke- och Pupillcassa", founded in 1740, still exists today. A problem at that time was the lack of mortality statistics, which led to financial problems for the company due to larger losses than expected. Perhaps the founders had not read "Annuities upon Lives" (1725), the first textbook on life insurance mathematics, written by the famous Abraham de Moivre. For more reading about the history of actuarial science, see Haberman and Sibbett (1995).

In order to produce a mortality table we have to keep track not only of all deaths, but also of the number of individuals alive. Sweden is perhaps the first country that started to collect such statistics. From 1751 the church had to register all births and deaths. This early start of data collection, with good quality, and the fact that Sweden has been at peace since 1814, have made Swedish mortality data popular among researchers.

Once you have a table of numbers, it is appealing to find a pattern, a formula or law that explains it. A formula makes calculations easier. Gompertz suggested such a law in 1825, and Makeham successfully extended the formula in 1860 into one that is still in use today, at least in Scandinavia. It has the form

\[ \mu_x = a + b \exp(cx), \]

where µ_x is the death intensity, or force of mortality, at age x. Closely related is the one-year death risk q_x,

\[ q_x = 1 - \exp\left( -\int_0^1 \mu_{x+s} \, ds \right). \]
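A small numerical sketch of the relation between µ_x and q_x; the Makeham parameters below are illustrative, not fitted values.

```python
import numpy as np

def makeham_mu(x, a=0.0005, b=0.00003, c=0.10):
    """Makeham force of mortality mu_x = a + b*exp(c*x) (illustrative parameters)."""
    return a + b * np.exp(c * x)

def one_year_death_risk(x, n_grid=1001):
    """q_x = 1 - exp(-integral_0^1 mu_{x+s} ds); the integral is
    approximated by averaging mu over a fine grid on [0, 1]."""
    s = np.linspace(0.0, 1.0, n_grid)
    return 1.0 - np.exp(-np.mean(makeham_mu(x + s)))

for age in (30, 60, 90):
    print(age, round(one_year_death_risk(age), 4))
```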

The general shape of the mortality curve, plotted on a logit scale, is seen in Figure 1.3. We have a so-called bathtub shape, see Klein and Moeschberger (2003): a relatively high infant (first-year) mortality, then a drop, and then mortality starts to rise again around the age of 13. After that, mortality rises quite quickly until around 25, and, for males, stays flat for a few years before it starts to increase approximately linearly. At very high ages the curve tends to level out a bit.

[Figure 1.3: Empirical one-year death risks q̂_x for Swedish females and males, 2011.]

1.4.1 Mortality improvements

Improvements in living standards (vaccines, hygiene, nutrition, antibiotics, housing standards, etc.) have for over a century given rise to a development where mortality goes down and people live longer and longer. The ages at which the improvements have been most pronounced have changed over time. First, mortality went down in the active ages, the 20s, 30s and 40s. Over the last thirty years we have seen rapid improvements, 1-2% per year, at ages over 65, see Figure 1.4. During the 20th century actuarial societies and life insurers have been aware of this process, and new mortality tables have regularly been developed, with some extra margin for future improvements.

[Figure 1.4: Plots, for Swedish data, of estimates of logit(q_xt) and q_xt for various ages x and calendar years t. The points are ordered linearly along the horizontal axis, where the first set of points is for age 60, years 1980 to 2011, and then the remaining ages 61, ..., 90 line up from left to right.]

Longevity is a term describing the fact that we live longer and longer; it is also often used to denote the risk that future mortality improvements will be greater than anticipated. Why is this a risk? Insurance contracts are long and often contain guarantees of one sort or another. For a lifelong annuity (pension), an assumption on mortality is used to calculate the annuity payment given the initial capital. It is clear that the payments can be higher if the pension is expected to be paid out over 20 years than over 25 or 30. If people live longer than expected and the insurer cannot decrease the payments, losses will occur.

In order to model longevity, we let the mortality rate µ_x depend not only on age x, but also on calendar time t. Lee and Carter (1992) introduced such a stochastic model incorporating mortality improvements:



\[ \log \hat{\mu}_{xt} = \alpha_x + \beta_x \kappa_t + \varepsilon_{xt}, \]

where µ̂_xt is the observed death rate at age x during calendar year t, with the constraints Σ_t κ_t = 0 and Σ_x β_x = 1 on the parameters, and error terms ε_xt.
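A common way to estimate the Lee-Carter parameters is a singular value decomposition of the centered log-rates; the sketch below uses simulated rates in place of real data, and production fits usually re-estimate κ_t so that predicted and observed total deaths match.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical log death rates: ages in rows, calendar years in columns.
n_ages, n_years = 31, 40
base = np.linspace(-7.0, -2.0, n_ages)[:, None]
log_mu = base - 0.02 * np.arange(n_years) + 0.05 * rng.standard_normal((n_ages, n_years))

alpha = log_mu.mean(axis=1)              # alpha_x: average log rate per age
resid = log_mu - alpha[:, None]          # centered log rates
U, s, Vt = np.linalg.svd(resid, full_matrices=False)

b, k = U[:, 0], s[0] * Vt[0]             # leading rank-one component b k^T
beta, kappa = b / b.sum(), k * b.sum()   # rescale so that sum_x beta_x = 1
# sum_t kappa_t = 0 holds automatically, since the rows of `resid` are centered.
print(beta.sum(), round(kappa.sum(), 10))
```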

The Lee-Carter model has received a lot of attention in the literature, with improvements of the estimation procedure as well as extensions with cohort effects, different distributional assumptions on the error terms, etc. For overviews and further references, see for example Renshaw and Haberman (2006), Booth and Tickle (2008), Cairns, Blake and Dowd (2008) and Barrieu et al. (2012).

Some more elaborate models, in the same spirit as Lee-Carter's, have been shown to be sensitive to input data. Such lack of robustness is a drawback when predictions of future mortality rates are made, see for instance Cairns (2013).

In order to fit a model such as Lee-Carter's we must first estimate the raw death rates. Assuming that N_xt is the number of individuals of age x alive at the beginning of calendar year t, the number of deaths among them within one year,

\[ D_{xt} \mid q_{xt} \sim \mathrm{Bin}(N_{xt}, q_{xt}), \tag{1.5} \]

is assumed to have a binomial distribution, with a mortality rate q_xt that can be estimated as

\[ \hat{q}_{xt} = \frac{D_{xt}}{N_{xt}}. \tag{1.6} \]

To be more precise, demographers use not N_xt but the exposure-to-risk E_xt, the total time during calendar year t that people spend at age x, taking into account that one's age switches from x to x + 1 at the birthday (thus contributing to E_{x+1,t} from the birthday onward).

While the Makeham mortality curve is smooth, real mortality data is not. There is typically a lot of noise in the data, stemming from the fact that we have finite populations. It is then natural to turn from a deterministic model like Makeham's to a stochastic one, a step that seems to have been somewhat overlooked. Are we observing real changes in q_xt, or just random fluctuations in the estimates q̂_xt thereof?

1.4.2 Two-way mortality tables

It is neat to have a formula into which you plug age x and calendar year t and out pops q_xt. A continuous formula for µ_xt is appealing, since it is natural to think that the risk of dying changes gradually over time, not with a jump every birthday or New Year's Eve, and it makes it possible to give nice analytical expressions for the calculation of premiums and provisions. However, from a practical point of view, with spreadsheet software readily available, tabulated one-year death risks and summation formulas are easier to work with than continuous expressions and integrals. An ordinary mortality table lists values of q_x; with a model for longevity it can be extended to a two-way table, consisting of values q_xt predicted for a set of future calendar years t.


2. Overview of Papers

2.1 Paper I

Paper I develops a new model for catastrophe risk in life insurance, the risk that many insured lives are lost in a single event. The model is adapted to the pricing of catastrophe excess of loss (Cat XL) reinsurance contracts, protecting a ceding company from the economic consequences of a catastrophe.

The model is hierarchical and straightforward to combine with Monte Carlo simulation. First, the catastrophes, defined as events claiming at least k lives, are modeled with the POT model as a key ingredient. They are seen as a marked Poisson process with marks equal to the total claim Z_i of each catastrophe. This involves the number of lives lost, X_i, which is assumed to follow a discrete generalized Pareto distribution (1.3). In more detail, the total claim of the i:th catastrophe has the form

\[ Z_i = f\left( \sum_{j=1}^{Y_i} Z_{ij} \right), \]

where Y_i ≤ X_i is the number of the lost lives that belong to a single insurer, provided at least M (the lower limit for a catastrophe according to the contract) of its customers' lives are lost, and otherwise Y_i equals 0; the Z_ij are the individual claims of the i:th catastrophe, and f is a non-linear function whose form depends on the reinsurance contract. For a Cat XL contract,

\[ f(x) = \min\left( \max(x - S, 0), L \right), \]

where S is the retention level and L is the maximal liability.

Since catastrophe data at this level is available, it is possible to estimate the parameters of both the Poisson intensity and the Pareto distribution.


Secondly, to assess how a catastrophe affects a single life insurer, we use a truncated beta-binomial model,

\[
\begin{aligned}
Y_i &= Y_i^0 \, 1_{\{Y_i^0 \ge M\}}, \\
Y_i^0 \mid X_i, p_i &\sim \mathrm{Bin}(X_i, p_i), \\
p_i \mid X_i &\sim \mathrm{Beta}\big( d(X_i)\pi,\; d(X_i)(1 - \pi) \big),
\end{aligned}
\]

based on both the insurer's market share π and the size X_i of the catastrophe. In a relatively small catastrophe we expect more dependence among the lost lives; in a very large one we expect less dependence, so that the amount of overdispersion relative to a binomial distribution decreases, as modelled by an increasing function x → d(x).

Lastly, we use information about the distribution of sums insured to assign a claim cost Z_ij to each lost life.

This enables us to simulate the claim distribution and make sensitivity analyses for the parameters. This is useful not only for the pricing of reinsurance contracts but also for the design of a reinsurance program, i.e. for deciding how much reinsurance protection an insurer should buy.
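A hedged Monte Carlo sketch of the hierarchy described above; all numerical parameters (the Poisson intensity, the GPD shape and scale, the market share, the form of d(x) and the sum-insured distribution) are invented for illustration and are not the estimates of Paper I.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_cat_xl_year(lam=3.0, xi=0.6, sigma=20.0, k=20, M=3,
                         market_share=0.02, S=5e6, L=20e6):
    """One year of catastrophe claims to a reinsurer under a Cat XL cover,
    following the structure of the model; illustrative parameters only."""
    total = 0.0
    for _ in range(rng.poisson(lam)):                    # catastrophes this year
        # Lives lost: discretized generalized Pareto above the threshold k.
        x = int(k + sigma / xi * ((1.0 - rng.random()) ** (-xi) - 1.0))
        # Insured lives lost: beta-binomial thinning with a hypothetical d(x),
        # increasing in x so that overdispersion decreases for large events.
        d = 1.0 + 0.1 * x
        p = rng.beta(d * market_share, d * (1.0 - market_share))
        y = rng.binomial(x, p)
        if y < M:                                        # below contract threshold
            continue
        z = rng.lognormal(13.0, 1.0, size=y).sum()       # sums insured (hypothetical)
        total += min(max(z - S, 0.0), L)                 # Cat XL payout f(z)
    return total

payouts = np.array([simulate_cat_xl_year() for _ in range(10_000)])
print("estimated pure Cat XL premium:", payouts.mean())
```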

2.2 Paper II

In this paper we go back to the Cramér–Lundberg approximation and give an explicit rate of convergence in the case where claims are bounded by some upper limit R. This is typically the case in reinsurance models; for instance, R corresponds to the liability L of the Cat XL contract of Paper I. Using coupling arguments, we obtain inequalities which give explicit rates of convergence in the Cramér–Lundberg approximation (1.4),

\[ \left| e^{\rho u} \psi(u) - \pi \right| \le e^{-\beta u} K_R(\beta), \quad u \ge 0, \tag{2.1} \]

with an explicit expression for the constant K_R(β) and a parameter β_R > 0 such that our results guarantee that the inequality (2.1) holds for 0 < β < β_R.

We define reinsurance risk processes and the corresponding reinsurance ruin probabilities. Then we describe the coupling construction, which lets us interpret the normalized ruin probability and the corresponding limit in the asymptotic relation (1.4) as one-dimensional distributions of two coupled regenerative processes. Finally, we use this coupling construction to obtain the explicit rates of convergence in the Cramér–Lundberg approximation represented in relation (2.1) above.

2.3 Papers III & IV

Papers III and IV both analyze models of mortality.

2.3.1 Paper III

Our starting point in Paper III is to view mortality as a multivariate time series with calendar years as observations. We have data with observations of D_xt and E_xt, from which we calculate q̂_xt according to (1.6).

To find a model for q_xt, we analyze the stochasticity in mortality data from the US, the UK and Sweden. It is common to work with log(q_xt), since it is approximately linear in age over a wide range of ages, but we use logit(q_xt), since this is the canonical link for probabilities in a generalized linear model.

We perform an exploratory data analysis, both on logit-transformed mortality data and, in order to remove linear trends, on the increments of the logit-transformed data. For Swedish and UK data we observe a high degree of randomness once the linear trend is removed, while there is more structure left in the US data.

We formalize this in terms of a two-factor model with age and calendar year as covariates, using the mixed regression model

\[
\begin{aligned}
\mathrm{logit}\, \hat{q}_{xt} &= \mathrm{logit}(q_{xt}) + \varepsilon^b_{xt} \\
&= \alpha_x + \beta_x (t - \bar{t}) + \varepsilon^s_{xt} + \varepsilon^b_{xt} \\
&= m_{xt} + \varepsilon^s_{xt} + \varepsilon^b_{xt},
\end{aligned}
\tag{2.2}
\]

with a logistic link function and an age-specific linear time trend, centered around a conveniently chosen time point t̄. Based on (2.2), the variance of the observations is decomposed, for a randomly chosen age x and calendar year t, into three parts: the binomial risk σ_b² = Var(ε^b_xt), the variance due to random mortality variation in a finite population; the systematic risk σ_exp² = Var(m_xt), explained by the covariates; and the unexplained systematic risk σ_s² = Var(ε^s_xt), the variance that comes from real changes in mortality rates not captured by the covariates. When the unexplained systematic risk component vanishes, (2.2) reduces to a linear logistic regression model.

The amount of unexplained variance caused by binomial risk provides a limit on the resolution that can be achieved by a model. Conditionally on a specific mortality rate q_xt, it is given by

\[ \mathrm{Var}(\varepsilon^b_{xt} \mid q_{xt}) = E\left[ \mathrm{Var}\big( \mathrm{logit}(\hat{q}_{xt}) \mid q_{xt} \big) \right] \approx E\left[ \frac{1}{N_{xt}\, q_{xt} (1 - q_{xt})} \right], \tag{2.3} \]

where the variance of the transformed binomial variable is computed by means of a Gauss approximation. It is clear that the binomial risk is inversely proportional to the population size N_xt. The above variance decomposition can be used as a model selection tool, for selecting the number of covariates and regression parameters of the deterministic part of the regression function, and for testing whether unexplained systematic variation should be explicitly modeled or not. The test is based on comparing the relative sizes of the estimates of σ_b² and the total unexplained variance σ_unexp² = σ_s² + σ_b². For a small population, the unexplained systematic risk component σ_s² is typically excluded.
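A sketch of the decomposition's central comparison under simplified assumptions: per-age ordinary least squares stands in for the full mixed-model fit, and the data are simulated from a pure linear-logistic model, so the unexplained variance should land close to the binomial floor (2.3).

```python
import numpy as np

rng = np.random.default_rng(6)

def logit(p):
    return np.log(p / (1 - p))

ages, years = np.arange(60, 91), np.arange(1980, 2012)
N = np.full((ages.size, years.size), 50_000)        # population per cell (hypothetical)
lin = -9.0 + 0.09 * (ages[:, None] - 20) - 0.01 * (years - years.mean())
q = 1.0 / (1.0 + np.exp(-lin))                      # true linear-logistic rates
D = rng.binomial(N, q)                              # simulated death counts
qhat = D / N

tc = years - years.mean()                           # centered time covariate
resid_var = 0.0
for i in range(ages.size):                          # per-age OLS of logit(qhat) on time
    coef = np.polyfit(tc, logit(qhat[i]), deg=1)
    resid_var += np.var(logit(qhat[i]) - np.polyval(coef, tc))
resid_var /= ages.size                              # total unexplained variance

binom_var = np.mean(1.0 / (N * qhat * (1 - qhat)))  # binomial floor, cf. (2.3)
print(f"unexplained {resid_var:.2e} vs binomial floor {binom_var:.2e}")
```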

In agreement with (2.3), the population size turns out to be crucial. For Swedish data, the simple logistic regression model works very well, leaving only a small fraction of unexplained systematic risk, whereas for UK and US data the amount of unexplained systematic risk is larger, so that more elaborate models might work better.

2.3.2 Paper IV

Paper IV builds on the results from Paper III by suggesting a model with an explicit expression for the unexplained systematic variation and a procedure for estimating it. More specifically, we employ the mixed regression model (2.2) for mortality data, which can be decomposed into a deterministic trend component explained by the covariates age and calendar year, a multivariate Gaussian time series part not explained by the covariates, and binomial risk. The multivariate Gaussian time series has components

\[ \varepsilon^s_{xt} = \eta_{xt} + c_x \zeta_t + d_x \sum_{s=1}^{|t - \tilde{t}|} \kappa_{\tilde{t} + \mathrm{sgn}(t - \tilde{t})\, s}, \tag{2.4} \]

where all η_xt, ζ_t and κ_t are independent normal random variables with zero mean. The first term of (2.4) represents white noise and is caused, for instance, by a heterogeneous population. The second term, c_x ζ_t, represents period effects, such as catastrophes and influenzas, that affect many age classes in a similar way, as specified by c_x. The third term is a two-sided random walk, centered around the time point t̃. It incorporates random departures from a linear trend, similarly across age classes, as specified by d_x. When the multivariate Gaussian time series component (2.4) is absent from (2.2), we get a linear logistic regression model, as used in Paper III for a small population.
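A sketch of how the unexplained component (2.4) could be simulated; the loadings c_x, d_x and the standard deviations are placeholders, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_eps_s(n_ages=31, n_years=40, t_mid=20,
                   sd_eta=0.02, sd_zeta=0.03, sd_kappa=0.01):
    """Simulate eps^s_{xt} = eta_{xt} + c_x*zeta_t + d_x * (two-sided random
    walk centered at t_mid), cf. (2.4); all parameters are hypothetical."""
    c = np.linspace(0.5, 1.5, n_ages)                # period-effect loadings c_x
    d = np.linspace(0.8, 1.2, n_ages)                # random-walk loadings d_x
    eta = sd_eta * rng.standard_normal((n_ages, n_years))
    zeta = sd_zeta * rng.standard_normal(n_years)
    kappa = sd_kappa * rng.standard_normal(n_years)  # innovations kappa_t
    walk = np.zeros(n_years)                         # walk[t_mid] = 0 at the centre
    for t in range(t_mid + 1, n_years):              # forward from the centre
        walk[t] = walk[t - 1] + kappa[t]
    for t in range(t_mid - 1, -1, -1):               # backward from the centre
        walk[t] = walk[t + 1] + kappa[t]
    return eta + c[:, None] * zeta + d[:, None] * walk

print(simulate_eps_s().std(axis=0)[:5])
```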

The mixed regression model is fitted to mortality data from the United States and Sweden, with the aim of providing prediction intervals for future mortality, as well as smoothing historical data, using the best linear unbiased predictor (Robinson, 1991). We find that the form of the Gaussian time series has a large impact on the width of the prediction intervals; a random walk component adds significantly to the width. Such a component is found in the US data, but not in the Swedish, possibly because of the relatively larger amount of random noise in the Swedish data. This finding poses some new questions on proper model selection.

2.4 Summary

This thesis gives some perspectives on insurance mathematics related to life insurance and/or reinsurance: catastrophe, ruin and death.

Paper I is about catastrophes, events resulting in many lost lives, which unfortunately are known to happen over and over again. Our new model for catastrophes includes the number of catastrophes, how many lives are lost, how many of the lost lives are insured by a specific insurer, and the cost of the resulting claims. This makes it possible to calculate the price of reinsurance contracts linked to catastrophic events.


A fancy model is of little use without data from which the model parameters can be estimated. We analyze two datasets, an international one for catastrophes claiming at least 20 lives and one for Swedish accidents with at least 5 dead. For practical pricing in countries other than Sweden, data on accidents claiming 3–19 lives would be useful. We assume that the catastrophes follow a Poisson process; an extension of the model would be to allow other types of claim processes. Leppisaari (2014) considers a more general class of Poisson point processes, but based on statistical tests and model comparisons he concludes that our model, with a generalized Pareto distributed number of deaths, fits well.

Paper II is about ruin. Ruin is the result if claims exceed the initial capital and the premiums collected by an insurance company. We analyze the Cramér–Lundberg approximation for the ruin probability and give an explicit rate of convergence in the case where claims are bounded by some upper limit. Further studies in this direction could include numerical simulations to find out how fast the convergence is in various scenarios.

Paper III deals with the stochastic nature of mortality. Death, known to be the only thing that is certain in life, exhibits a large portion of randomness. Models and statistics of mortality rates are important for, amongst others, life insurance companies, whose payments ultimately depend on people being alive or dead. We analyze the stochasticity of mortality and perform a variance decomposition where the variation in mortality data is attributed either to the covariates age and time, to unexplained systematic variation, or to random noise due to a finite population.

In Paper IV we give analytical formulas for prediction intervals of age- and time-specific mortalities, based on quantiles of a predictive distribution. It would be of interest to derive prediction intervals also for the whole reserve that the insurance company must hold. However, since we have modeled mortality as a Gaussian process on the logit probability scale, it is a non-trivial task to derive the predictive distribution of the reserve. The reason is that simple analytical expressions are available only on the logit probability scale, and quantiles of sums of random variables do not transform easily under the logit transformation. One possibility would be to approximate the analytical expression by means of a Taylor expansion, or to resort to simulations.


Swedish Summary (Sammanfattning)

Insurance mathematics is about mastering the randomness associated with insurance. In some sense it is chance that decides whether and when a claim occurs and how large it becomes. Likewise, it is chance that decides whether we fall ill or when we die. The randomness is greatest at the individual level; when we study larger groups or collectives, a pattern emerges. Therefore we can say that, at the group level, young drivers are more dangerous in traffic than older ones, and smokers on average live seven years less than non-smokers. This is an expression of the law of large numbers: the larger the group, the less randomness in the total, aggregated outcome. This is the foundation of insurance: that we can share the risk of costly claims, or the risk of becoming so old that we have used up our savings, with others, and pay a predictable premium instead.

This thesis takes up some different aspects of insurance mathematics.

A classical branch of insurance mathematics is ruin theory. It provides a model for the risk that an insurance company becomes insolvent (is ruined), i.e. that the claims become larger than the premiums paid in, due to a random accumulation of claims or unusually large claims. One of the papers concerns how fast the convergence is in the so-called Cramér–Lundberg approximation of the ruin probability.

Life insurance companies assume that people die one by one and independently of each other. This is true most of the time, but sometimes accidents or catastrophes occur in which several people, from a handful to thousands, die at the same time. Such events can have large consequences for an insurance company, and to protect themselves from this risk, companies can buy reinsurance cover from special reinsurance companies. This thesis presents a new model for catastrophe risk, based on catastrophe data collected from all over the world.

For a life insurance company that has promised to pay a pension for as long as someone lives, it is of the utmost importance to have a good idea of how long that period may be. If the life span is overestimated, savers choose other companies; if it is underestimated, bankruptcy threatens, since more money than expected must be paid out. For a long time we have had a positive trend, namely that the risk of dying at a given age decreases year by year. This means that on average we live longer and longer. It is of course of interest to try to model this development, and there are several proposals in the literature, of which Lee-Carter from 1992 is the best known.

We begin by examining the role of chance in mortality data. Since we have finite populations, there will always be a certain noise in the form of random year-to-year variations without any real cause, which makes it hard to discern real, systematic changes unless they are large enough. This is an aspect that has not previously been addressed in mortality modeling, and we find that for Swedish data a simple linear model goes a long way towards explaining the data.

The reason is that the inherent random noise is large in a relatively small country like Sweden; it gives us a low resolution. In countries with larger populations, such as the UK and the US, the random noise is smaller, and then more advanced models can be used. We introduce such a model that accommodates systematic random effects; for example, random events such as a severe influenza or a strong heat wave can raise mortality in a particular year. The model estimates, and provides prediction intervals for, future mortality.


References

Alm, E. (1990). Catastrophes can also hit life assurance. First - A Journal for Skandia International.

Asmussen, S. (2000). Ruin Probabilities. World Scientific, Singapore.

Barrieu, P., Bensusan, H., El Karoui, N., Hillairet, C., Loisel, S., Ravanelli, C. and Salhi, Y. (2012). Understanding, modelling and managing longevity risk: key issues and main challenges. Scandinavian Actuarial Journal 2012(3), 203-231.

Björk, T. (2009). Arbitrage Theory in Continuous Time. 3rd ed., Oxford University Press, Oxford.

Booth, H. and Tickle, L. (2008). Mortality modelling and forecasting: A review of methods. Annals of Actuarial Science 3, I/II, 3-43.

Cairns, A.J.G., Blake, D. and Dowd, K. (2008). Modelling and management of mortality risk: a review. Scandinavian Actuarial Journal 2008(2-3), 79-113.

Cramér, H. (1930). On the Mathematical Theory of Risk. Skandia Jubilee Volume, Stockholm.

Cramér, H. (1955). Collective Risk Theory. Skandia Jubilee Volume, Stockholm.

Embrechts, P., Klüppelberg, C. and Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance. Applications of Mathematics, 33, Springer, Berlin.

Feller, W. (1971). An Introduction to Probability Theory and its Applications, Vol. II. Wiley, New York.

Gerber, H.U. (1979). An Introduction to Mathematical Risk Theory. Huebner Foundation Monographs, Philadelphia.

Grandell, J. (1991). Aspects of Risk Theory. Springer, New York.

Gyllenberg, M. and Silvestrov, D. (2000). Nonlinearly perturbed regenerative processes and pseudo-stationary phenomena for stochastic systems. Stochastic Processes and their Applications, 86, 1-27.

Gyllenberg, M. and Silvestrov, D. (2008). Quasi-Stationary Phenomena in Nonlinearly Perturbed Stochastic Systems. De Gruyter Expositions in Mathematics, 44, Walter de Gruyter, Berlin.

Haberman, S. and Sibbett, T.A. (eds.) (1995). History of Actuarial Science. Pickering, London.

Harbitz, M. (1992). Catastrophe covers in life assurance. Transactions of the International Congress of Actuaries, Montreal.

Hult, H., Lindskog, F., Hammarlid, O. and Rehn, C.J. (eds.) (2012). Risk and Portfolio Analysis: Principles and Methods. Springer, New York.

Jacobsen, M. (2006). Point Process Theory and Applications: Marked Point and Piecewise Deterministic Processes. Birkhäuser, Boston.

Kalashnikov, V. (1997). Geometric Sums: Bounds for Rare Events with Applications. Mathematics and Its Applications, 413, Kluwer, Dordrecht.

Klein, J.P. and Moeschberger, M.L. (2003). Survival Analysis: Techniques for Censored and Truncated Data. 2nd ed., Springer, New York.

Lee, R.D. and Carter, L.R. (1992). Modelling and forecasting U.S. mortality. Journal of the American Statistical Association 87(419), 659-671.

Leppisaari, M. (2014). Modeling catastrophic deaths using EVT with a microsimulation approach to reinsurance pricing. Scandinavian Actuarial Journal, to appear.

Lundberg, F. (1903). I. Approximerad framställning av sannolikhetsfunktionen. II. Återförsäkring av kollektivrisker. Almqvist & Wiksell, Uppsala.

Lundberg, F. (1909). Über die Theorie der Rückversicherung. In: VI Internationaler Kongress für Versicherungswissenschaft, Bd. 1, Wien, 877-955.

Lundberg, F. (1926). Försäkringsteknisk riskutjämning. F. Englunds boktryckeri AB, Stockholm.

Pickands, J. (1975). Statistical inference using extreme order statistics. Annals of Statistics 3, 119-131.

Renshaw, A.E. and Haberman, S. (2006). A cohort-based extension to the Lee-Carter model for mortality reduction factors. Insurance: Mathematics and Economics 38(3), 556-570.

Resnick, S.I. (1987). Extreme Values, Regular Variation, and Point Processes. Springer, New York.

Robinson, G.K. (1991). That BLUP is a good thing: The estimation of random effects. Statistical Science 6(1), 15-32.

Rolski, T., Schmidli, H., Schmidt, V. and Teugels, J. (1999). Stochastic Processes for Insurance and Finance. Wiley Series in Probability and Statistics, Wiley, New York.

Rootzén, H. and Tajvidi, N. (1997). Extreme value statistics and wind storm losses: a case study. Scandinavian Actuarial Journal 1997(1), 70-94.

Schmidli, H. (1997). An extension to the renewal theorem and an application to risk theory. The Annals of Applied Probability 7, 121-133.

Taylor, G. (2000). Loss Reserving: An Actuarial Perspective. Kluwer Academic Press, Boston.

Woo, G. (1999). The Mathematics of Natural Catastrophes. Imperial College Press, London.


Part II

Papers

