
On some properties of elliptical distributions

Fredrik Armerin

Abstract

We look at a characterization of elliptical distributions in the case when finiteness of moments of the random vector is not assumed. Some additional results regarding elliptical distributions are also presented.

Keywords: Elliptical distributions, multivariate distributions.

JEL Classification: C10.


1 Introduction

Let the $n$-dimensional random vector $X$ have finite second moments and the property that the distribution of every random variable of the form $h^T X + a$, for every $h \in \mathbb{R}^n$ and $a \in \mathbb{R}$, is determined by its mean and variance. Chamberlain [2] showed that if the covariance matrix of $X$ is positive definite, then this is equivalent to $X$ being elliptically distributed. There are, however, elliptical distributions that do not even have finite first moments. In this note we show that a random vector is elliptically distributed if and only if it fulfils a condition generalizing the moment condition above, a condition that can be defined even if the random vector does not have finite moments of any order.

In portfolio analysis, if $r$ is an $n$-dimensional random vector of returns and there is a risk-free rate $r_f$, then the expected utility of holding the portfolio $(w, w_0) \in \mathbb{R}^n \times \mathbb{R}$ is given by
$$\mathrm{E}\left[u\left(w^T r + w_0 r_f\right)\right],$$
where $u$ is a utility function (we assume that this expected value is well defined). See e.g. Back [1], Cochrane [3] or Munk [5] for the underlying theory. If the distribution of $w^T r + r_f$ only depends on its mean and variance, then
$$\mathrm{E}\left[u\left(w^T r + w_0 r_f\right)\right] = U\left(w^T \mu + w_0 r_f,\, w^T \Sigma w\right) \qquad (1)$$
for some function $U$ (this is one of the applications considered in Chamberlain [2]). If we only consider bounded utility functions, then the expected value is well defined even if $r$ does not have any finite moments. Below we show that if Equation (1) holds for every bounded and measurable function $u$, then this is in fact a defining property of elliptical distributions, i.e. $r$ must be elliptically distributed if Equation (1) holds for every bounded and measurable $u$.
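As a numerical illustration of Equation (1) (not part of the original argument), the following Python sketch uses normally distributed returns, one particular elliptical family, and a bounded utility. It checks by Monte Carlo that the expected utility of a portfolio is recovered from the pair $(w^T\mu + w_0 r_f, w^T\Sigma w)$ alone, via $U(x, y) = \mathrm{E}[u(x + \sqrt{y}\,Z)]$ with $Z \sim N(0,1)$. All parameter values are arbitrary.

# Illustrative sketch of Equation (1) for normally distributed returns.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.05, 0.03, 0.07])                      # mean returns (arbitrary)
A = np.array([[0.20, 0.00, 0.00],
              [0.05, 0.10, 0.00],
              [0.02, 0.03, 0.15]])
Sigma = A @ A.T                                        # covariance matrix of the returns
rf, w, w0 = 0.01, np.array([0.5, 0.3, 0.2]), 1.0
u = np.tanh                                            # a bounded, measurable utility

# Left-hand side of (1): Monte Carlo over the n-dimensional return vector r.
r = rng.multivariate_normal(mu, Sigma, size=400_000)
lhs = u(r @ w + w0 * rf).mean()

# Right-hand side: a function of the scalar pair (w^T mu + w0*rf, w^T Sigma w) only.
m, s2 = w @ mu + w0 * rf, w @ Sigma @ w
rhs = u(m + np.sqrt(s2) * rng.standard_normal(400_000)).mean()

print(lhs, rhs)                                        # agree up to Monte Carlo error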

The results presented in this note are previously known, or are basic generalizations of known results.

2 Basic definitions

The general reference for this section is McNeil et al. [4].

Definition 2.1 An $n$-dimensional random vector $X$ has a spherical distribution if
$$UX \overset{d}{=} X$$
for every orthogonal $n \times n$ matrix $U$, i.e. for every $n \times n$ matrix $U$ such that $UU^T = U^T U = I$.


Theorem 2.2 Let $X$ be an $n$-dimensional random vector. The following are equivalent.

(i) The random vector $X$ has a spherical distribution.

(ii) There exists a function $\psi$ of a scalar variable such that
$$\mathrm{E}\left[e^{ih^T X}\right] = \psi(h^T h)$$
for every $h \in \mathbb{R}^n$.

(iii) For every $h \in \mathbb{R}^n$,
$$h^T X \overset{d}{=} \|h\|\, X_1,$$
where $\|h\|^2 = h^T h$.

We call $\psi$ in the theorem above the characteristic generator of $X$, and write $X \sim S_n(\psi)$ if $X$ is $n$-dimensional and has a spherical distribution with characteristic generator $\psi$.
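As a quick numerical check of property (iii) in Theorem 2.2 (an illustration, not from the paper), the sketch below uses a standard normal vector, which is spherical, and compares the empirical distributions of $h^T X$ and $\|h\| X_1$ with a two-sample Kolmogorov-Smirnov test; the choice of $h$ and of the test are arbitrary.

# Simulation check of h^T X =d ||h|| X_1 for a spherical (standard normal) vector.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
n, n_sim = 4, 100_000
X1 = rng.standard_normal((n_sim, n))      # draws of a spherical vector
X2 = rng.standard_normal((n_sim, n))      # independent copy for the comparison
h = np.array([1.0, -2.0, 0.5, 3.0])

lhs = X1 @ h                              # samples of h^T X
rhs = np.linalg.norm(h) * X2[:, 0]        # samples of ||h|| X_1

print(ks_2samp(lhs, rhs).pvalue)          # a large p-value supports equality in distribution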

For a strictly positive integer $n$ we let $e_k$, $k = 1, \ldots, n$, denote the vectors of the standard basis in $\mathbb{R}^n$. If $X \sim S_n(\psi)$ for some characteristic generator $\psi$, then by choosing first $h = e_k$ and then $h = -e_k$ we get from property (iii) in Theorem 2.2 that
$$-X_k \overset{d}{=} X_k,$$
i.e. each component of a random vector which has a spherical distribution is a symmetric random variable. By choosing first $h = e_k$ and then $h = e_\ell$ and again using (iii) in Theorem 2.2, we get
$$X_k \overset{d}{=} X_\ell$$
for every $k, \ell = 1, \ldots, n$, i.e. every component of a spherically distributed random vector has the same distribution.

Definition 2.3 An $n$-dimensional random vector $X$ is said to be elliptically distributed if
$$X \overset{d}{=} \mu + AY,$$
where $\mu \in \mathbb{R}^n$, $A$ is an $n \times k$ matrix and $Y \sim S_k(\psi)$. With $\Sigma = AA^T$ we write $X \sim E_n(\mu, \Sigma, \psi)$ in this case.

The characteristic function of $X \sim E_n(\mu, \Sigma, \psi)$ is given by
$$\mathrm{E}\left[e^{ih^T X}\right] = e^{ih^T \mu}\psi(h^T \Sigma h).$$
If $X$ has finite mean, then $\mu = \mathrm{E}[X]$, and if $X$ has finite variance, then we can choose the representation so that $\Sigma = \mathrm{Var}(X)$.


If $X \sim E_n(\mu, \Sigma, \psi)$, $B$ is a $k \times n$ matrix and $b$ is a $k \times 1$ vector, then
$$BX + b \sim E_k(B\mu + b, B\Sigma B^T, \psi).$$
Alternatively, if $X \sim E_n(\mu, \Sigma, \psi)$, then
$$BX + b \overset{d}{=} B\mu + b + BAY,$$
where $Y \sim S_n(\psi)$ and $AA^T = \Sigma$. Finally, when $\Sigma$ is a positive definite matrix we have the equivalence
$$X \sim E_n(\mu, \Sigma, \psi) \iff \Sigma^{-1/2}(X - \mu) \sim S_n(\psi).$$
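The following Python sketch (an illustration under arbitrary choices, not part of the paper) builds an elliptical vector as in Definition 2.3, $X = \mu + AY$ with $Y$ spherical (a uniform direction on the sphere times an exponential radius), and checks numerically that $\Sigma^{-1/2}(X - \mu)$ is rotation invariant, in line with the equivalence above.

# Construct X = mu + A Y with Y spherical and check sphericity of Sigma^{-1/2}(X - mu).
import numpy as np
from scipy.stats import ks_2samp, ortho_group

rng = np.random.default_rng(2)
n, n_sim = 3, 200_000

# Spherical Y: uniform direction on the unit sphere times an arbitrary radius.
G = rng.standard_normal((n_sim, n))
S = G / np.linalg.norm(G, axis=1, keepdims=True)
R = rng.exponential(scale=1.0, size=(n_sim, 1))
Y = R * S                                             # Y ~ S_n(psi) for some psi

mu = np.array([1.0, -2.0, 0.5])
A = np.array([[2.0, 0.0, 0.0],
              [0.3, 1.0, 0.0],
              [0.1, 0.2, 0.5]])
Sigma = A @ A.T
X = mu + Y @ A.T                                      # X ~ E_n(mu, Sigma, psi)

# Sigma^{-1/2} via the spectral decomposition (Sigma is positive definite here).
evals, evecs = np.linalg.eigh(Sigma)
Sigma_inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
Z = (X - mu) @ Sigma_inv_sqrt                         # draws of Sigma^{-1/2}(X - mu)

# Rotation invariance: h^T Z and h^T(U Z) should have the same distribution.
U = ortho_group.rvs(n, random_state=42)               # a fixed orthogonal matrix
h = np.array([1.0, 2.0, -1.0])
half = n_sim // 2                                     # independent halves for the test
print(ks_2samp(Z[:half] @ h, (Z[half:] @ U.T) @ h).pvalue)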

3 Characterizing elliptical distributions

The following proposition shows the structure of any elliptically distributed random vector.

Proposition 3.1 Let $\mu \in \mathbb{R}^n$ and let $\Sigma$ be an $n \times n$ symmetric and positive semidefinite matrix. For an $n$-dimensional random vector $X$ the following are equivalent.

(i) $X \sim E_n(\mu, \Sigma, \psi)$.

(ii) We have
$$h^T X \overset{d}{=} h^T \mu + \sqrt{h^T \Sigma h}\, Z$$
for any $h \in \mathbb{R}^n$, where $Z$ is a symmetric random variable with $\mathrm{E}\left[e^{itZ}\right] = \psi(t^2)$.

Proof. (i) ⇒ (ii): If $X \sim E_n(\mu, \Sigma, \psi)$, then for every $h \in \mathbb{R}^n$ and some matrix $A$ such that $AA^T = \Sigma$,
$$h^T X \overset{d}{=} h^T \mu + h^T A Y = h^T \mu + (A^T h)^T Y \overset{d}{=} h^T \mu + \|A^T h\|\, Y_1 = h^T \mu + \sqrt{h^T A A^T h}\, Y_1 = h^T \mu + \sqrt{h^T \Sigma h}\, Y_1.$$
Since $Y$ has a spherical distribution, $Y_1$ is a symmetric random variable with characteristic function $\mathrm{E}\left[e^{itY_1}\right] = \psi(t^2)$.

(ii) ⇒ (i): If $X$ has the property that
$$h^T X \overset{d}{=} h^T \mu + \sqrt{h^T \Sigma h}\, Z$$
for every $h \in \mathbb{R}^n$, where $\mathrm{E}\left[e^{itZ}\right] = \psi(t^2)$, then
$$\mathrm{E}\left[e^{ih^T X}\right] = e^{ih^T \mu}\,\mathrm{E}\left[e^{i\sqrt{h^T \Sigma h}\, Z}\right] = e^{ih^T \mu}\psi(h^T \Sigma h),$$
i.e. $X \sim E_n(\mu, \Sigma, \psi)$. $\Box$

Note that the previous proposition is true even if Σ is only a positive semidefinite matrix.

Remark 3.2 With the same notation as in Proposition 3.1, if the random vector $X$ has the property that
$$h^T X \overset{d}{=} h^T \mu + \sqrt{h^T \Sigma h}\, Z$$
holds for every $h \in \mathbb{R}^n$, and $\Sigma$ has at least one non-zero diagonal element (the only case when this does not hold is when $\Sigma = 0$), then $Z$ must be symmetric. To see this we assume, without loss of generality, that $\Sigma_{11} > 0$. Now first choose $h = e_1$, and then $h = -e_1$. We get
$$X_1 \overset{d}{=} \mu_1 + \sqrt{\Sigma_{11}}\, Z \quad \text{and} \quad -X_1 \overset{d}{=} -\mu_1 + \sqrt{\Sigma_{11}}\, Z$$
respectively, or
$$\frac{X_1 - \mu_1}{\sqrt{\Sigma_{11}}} \overset{d}{=} Z \quad \text{and} \quad \frac{-(X_1 - \mu_1)}{\sqrt{\Sigma_{11}}} \overset{d}{=} Z$$
respectively. It follows that $Z \overset{d}{=} -Z$.

Using the representation
$$h^T X \overset{d}{=} h^T \mu + \sqrt{h^T \Sigma h}\, Z$$
we see that finiteness of moments of the vector $X$ is equivalent to finiteness of the moments of the random variable $Z$. This representation is also a practical way of both defining new and understanding well-known elliptical distributions. When $Z \sim N(0, 1)$ we get the multivariate normal distribution, and when $Z \sim t(\nu)$ we get the multivariate $t$-distribution with $\nu > 0$ degrees of freedom. The multivariate $t$-distribution with $\nu \in (0, 1]$ is an example of an elliptical distribution which does not have finite mean.
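As an illustration of this representation (not part of the original text), the following Python sketch simulates the multivariate $t$-distribution as $X = \mu + \sqrt{\nu/W}\,AG$ with $W \sim \chi^2_\nu$ independent of the standard normal vector $G$, and checks numerically that $(h^T X - h^T\mu)/\sqrt{h^T\Sigma h}$ has a $t(\nu)$-distribution; all numerical values are arbitrary, and $\nu \in (0,1]$ gives the case without finite mean.

# The representation h^T X =d h^T mu + sqrt(h^T Sigma h) Z for the multivariate t.
import numpy as np
from scipy.stats import t as student_t, kstest

rng = np.random.default_rng(3)
n, nu, n_sim = 3, 0.8, 200_000                 # nu in (0, 1]: no finite mean

mu = np.array([0.0, 1.0, -1.0])
A = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.2, 0.3, 1.0]])
Sigma = A @ A.T

G = rng.standard_normal((n_sim, n))
W = rng.chisquare(nu, size=(n_sim, 1))
X = mu + np.sqrt(nu / W) * (G @ A.T)           # multivariate t_nu(mu, Sigma)

h = np.array([2.0, -1.0, 0.5])
Z = (X @ h - h @ mu) / np.sqrt(h @ Sigma @ h)  # should have a t(nu)-distribution
print(kstest(Z, student_t(df=nu).cdf).pvalue)  # a large p-value supports this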

Now assume that the random vector $X$ has the property that the distribution of $h^T X + a$ is determined by its mean and variance for every $h \in \mathbb{R}^n$ and $a \in \mathbb{R}$. If we let $\mu = \mathrm{E}[X]$ and $\Sigma = \mathrm{Var}(X)$, which we assume is a positive definite matrix, then Chamberlain [2] showed that $X$ must be elliptically distributed. Hence in this case, if
$$\mathrm{E}\left[h_1^T X + a_1\right] = \mathrm{E}\left[h_2^T X + a_2\right] \quad \text{and} \quad \mathrm{Var}(h_1^T X + a_1) = \mathrm{Var}(h_2^T X + a_2),$$
then we must have
$$h_1^T X + a_1 \overset{d}{=} h_2^T X + a_2.$$
This property can, with notation as above, be rewritten as follows: if
$$h_1^T \mu + a_1 = h_2^T \mu + a_2 \quad \text{and} \quad h_1^T \Sigma h_1 = h_2^T \Sigma h_2,$$
then
$$h_1^T X + a_1 \overset{d}{=} h_2^T X + a_2.$$

It turns out that this condition, which is well defined for any $X \sim E_n(\mu, \Sigma, \psi)$ even if no moments exist, is a defining property of elliptical distributions if $\Sigma$ is a positive definite matrix.

Proposition 3.3 Let $\mu \in \mathbb{R}^n$ and let $\Sigma$ be an $n \times n$ symmetric and positive definite matrix. For an $n$-dimensional random vector $X$ the following are equivalent.

(i) $X \sim E_n(\mu, \Sigma, \psi)$ for some characteristic generator $\psi$.

(ii) For any measurable and bounded $f : \mathbb{R} \to \mathbb{R}$ and any $h \in \mathbb{R}^n$ and $a \in \mathbb{R}$,
$$\mathrm{E}\left[f(h^T X + a)\right] = F(h^T \mu + a, h^T \Sigma h)$$
for some function $F : \mathbb{R} \times \mathbb{R}_+ \to \mathbb{R}$.

(iii) If
$$h_1^T \mu + a_1 = h_2^T \mu + a_2 \quad \text{and} \quad h_1^T \Sigma h_1 = h_2^T \Sigma h_2$$
for $h_1, h_2 \in \mathbb{R}^n$ and $a_1, a_2 \in \mathbb{R}$, then
$$h_1^T X + a_1 \overset{d}{=} h_2^T X + a_2.$$

For a proof of this, see Section A.1. It is possible to reformulate this proposition without using the constants $a$, $a_1$ and $a_2$.

Proposition 3.4 Let $\mu \in \mathbb{R}^n$ and let $\Sigma$ be an $n \times n$ symmetric and positive definite matrix. For an $n$-dimensional random vector $X$ the following are equivalent.

(i) $X \sim E_n(\mu, \Sigma, \psi)$ for some characteristic generator $\psi$.

(ii) For any measurable and bounded $g : \mathbb{R} \to \mathbb{R}$ and any $h \in \mathbb{R}^n$,
$$\mathrm{E}\left[g(h^T(X - \mu))\right] = G(h^T \Sigma h)$$
for some function $G : \mathbb{R}_+ \to \mathbb{R}$.

(iii) If
$$h_1^T \Sigma h_1 = h_2^T \Sigma h_2$$
for $h_1, h_2 \in \mathbb{R}^n$, then
$$h_1^T(X - \mu) \overset{d}{=} h_2^T(X - \mu).$$

For a proof, see Section A.2.

In Propositions 3.3 and 3.4 we assumed that the matrix $\Sigma$ was positive definite. The implications (i) ⇒ (ii) and (ii) ⇒ (iii) in these propositions are still valid when $\Sigma$ is only positive semidefinite (and the general characterization of elliptical distributions in Proposition 3.1 also holds in this case). The implications (iii) ⇒ (i) in the propositions above are not true in general, as is seen in the following example.

Example 3.5 Let
$$\Sigma = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad X = \begin{pmatrix} U \\ 0 \end{pmatrix},$$
where $U \sim N(0, 1)$. In this case we have
$$h_1^T \Sigma h_1 = h_2^T \Sigma h_2 \;\Rightarrow\; h_1^T X \overset{d}{=} h_2^T X,$$
so $X$ has property (iii) from Proposition 3.4. But $X$ is not spherically distributed. This follows from the fact that every component of a spherically distributed random vector must have the same distribution. By letting $\mu = [0\ 0]^T$ it is possible to also construct a counterexample to the implication (iii) ⇒ (i) in Proposition 3.3. $\Box$
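A small numerical companion to Example 3.5 (illustrative only, with arbitrary test vectors): it checks that linear forms $h^T X$ with equal $h^T\Sigma h$ have the same distribution, while the two components of $X$ clearly do not.

# Numerical check of Example 3.5: X = (U, 0) with U ~ N(0, 1).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)
U1 = rng.standard_normal(100_000)
U2 = rng.standard_normal(100_000)                  # independent copy for the test
X1 = np.column_stack([U1, np.zeros_like(U1)])      # draws of X = (U, 0)
X2 = np.column_stack([U2, np.zeros_like(U2)])

Sigma = np.array([[1.0, 0.0], [0.0, 0.0]])
h1, h2 = np.array([1.0, 3.0]), np.array([-1.0, 7.0])
assert h1 @ Sigma @ h1 == h2 @ Sigma @ h2          # equal h^T Sigma h
print(ks_2samp(X1 @ h1, X2 @ h2).pvalue)           # h1^T X and h2^T X: same distribution
print(X1[:, 0].var(), X1[:, 1].var())              # but the two components of X differ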

A Proofs

A.1 Proof of Proposition 3.3

(i) ⇒ (ii): We know that there exists a symmetric random variable $Z$ such that
$$h^T X \overset{d}{=} h^T \mu + \sqrt{h^T \Sigma h}\, Z$$
for any $h \in \mathbb{R}^n$. Hence for any measurable and bounded $f$,
$$\mathrm{E}\left[f(h^T X + a)\right] = \mathrm{E}\left[f\left(h^T \mu + \sqrt{h^T \Sigma h}\, Z + a\right)\right] = F\left(h^T \mu + a, h^T \Sigma h\right),$$
where
$$F(x, y) = \mathrm{E}\left[f(x + \sqrt{y}\, Z)\right].$$

(ii) ⇒ (iii): Fix $t \in \mathbb{R}$ and let $f_1(x) = \sin tx$ and $f_2(x) = \cos tx$ (which are two bounded and measurable functions). Define $F_i$, $i = 1, 2$, by
$$\mathrm{E}\left[f_i(h^T X + a)\right] = F_i(h^T \mu + a, h^T \Sigma h).$$
Now take any $h_1, h_2 \in \mathbb{R}^n$ and $a_1, a_2 \in \mathbb{R}$ such that
$$h_1^T \mu + a_1 = h_2^T \mu + a_2 \quad \text{and} \quad h_1^T \Sigma h_1 = h_2^T \Sigma h_2.$$
Then
$$\begin{aligned}
\mathrm{E}\left[e^{it(h_1^T X + a_1)}\right] &= \mathrm{E}\left[\cos(t(h_1^T X + a_1)) + i\sin(t(h_1^T X + a_1))\right] \\
&= F_2(h_1^T \mu + a_1, h_1^T \Sigma h_1) + iF_1(h_1^T \mu + a_1, h_1^T \Sigma h_1) \\
&= F_2(h_2^T \mu + a_2, h_2^T \Sigma h_2) + iF_1(h_2^T \mu + a_2, h_2^T \Sigma h_2) \\
&= \mathrm{E}\left[\cos(t(h_2^T X + a_2)) + i\sin(t(h_2^T X + a_2))\right] \\
&= \mathrm{E}\left[e^{it(h_2^T X + a_2)}\right].
\end{aligned}$$
Since this holds for any $t \in \mathbb{R}$ we have
$$h_1^T X + a_1 \overset{d}{=} h_2^T X + a_2.$$

(iii) ⇒ (i): Take $h \in \mathbb{R}^n$ and let
$$h_1 = \Sigma^{-1/2}h, \quad a_1 = -h^T \Sigma^{-1/2}\mu \quad \text{and} \quad h_2 = \|h\|\Sigma^{-1/2}e_1, \quad a_2 = -\|h\|\, e_1^T \Sigma^{-1/2}\mu.$$
Then
$$h_1^T \Sigma h_1 = \|h\|^2 \quad \text{and} \quad h_2^T \Sigma h_2 = \|h\|^2.$$
We also have
$$h_1^T \mu + a_1 = h^T \Sigma^{-1/2}\mu + (-h^T \Sigma^{-1/2}\mu) = 0$$
and
$$h_2^T \mu + a_2 = \|h\|\, e_1^T \Sigma^{-1/2}\mu + (-\|h\|\, e_1^T \Sigma^{-1/2}\mu) = 0.$$
It follows that
$$h_1^T X + a_1 \overset{d}{=} h_2^T X + a_2 \;\Leftrightarrow\; h^T \Sigma^{-1/2}(X - \mu) \overset{d}{=} \|h\|\, e_1^T \Sigma^{-1/2}(X - \mu).$$
This shows that
$$\Sigma^{-1/2}(X - \mu) \sim S_n(\psi),$$
which, since $\Sigma$ is a positive definite matrix, is equivalent to $X \sim E_n(\mu, \Sigma, \psi)$. $\Box$

A.2 Proof of Proposition 3.4

(i) ⇒ (ii): There exists a symmetric random variable $Z$ such that
$$h^T X \overset{d}{=} h^T \mu + \sqrt{h^T \Sigma h}\, Z$$
for any $h \in \mathbb{R}^n$. It follows that for any bounded and measurable $g$,
$$\mathrm{E}\left[g(h^T(X - \mu))\right] = \mathrm{E}\left[g\left(\sqrt{h^T \Sigma h}\, Z\right)\right] = G\left(h^T \Sigma h\right),$$
where
$$G(x) = \mathrm{E}\left[g(\sqrt{x}\, Z)\right].$$

(ii) ⇒ (iii): Fix $t \in \mathbb{R}$ and let $g_1(x) = \sin tx$ and $g_2(x) = \cos tx$ (which are two bounded and measurable functions). Define $G_i$, $i = 1, 2$, by
$$\mathrm{E}\left[g_i(h^T(X - \mu))\right] = G_i(h^T \Sigma h).$$
Now take any $h_1, h_2$ such that
$$h_1^T \Sigma h_1 = h_2^T \Sigma h_2.$$
Then
$$\begin{aligned}
\mathrm{E}\left[e^{ith_1^T(X - \mu)}\right] &= \mathrm{E}\left[\cos(th_1^T(X - \mu)) + i\sin(th_1^T(X - \mu))\right] \\
&= G_2(h_1^T \Sigma h_1) + iG_1(h_1^T \Sigma h_1) \\
&= G_2(h_2^T \Sigma h_2) + iG_1(h_2^T \Sigma h_2) \\
&= \mathrm{E}\left[\cos(th_2^T(X - \mu)) + i\sin(th_2^T(X - \mu))\right] \\
&= \mathrm{E}\left[e^{ith_2^T(X - \mu)}\right].
\end{aligned}$$
Since this holds for any $t \in \mathbb{R}$ we have
$$h_1^T(X - \mu) \overset{d}{=} h_2^T(X - \mu).$$

(iii) ⇒ (i): Take $h \in \mathbb{R}^n$ and let
$$h_1 = \Sigma^{-1/2}h \quad \text{and} \quad h_2 = \|h\|\Sigma^{-1/2}e_1.$$
Then $h_1^T \Sigma h_1 = \|h\|^2$ and $h_2^T \Sigma h_2 = \|h\|^2$. Hence
$$h_1^T \Sigma h_1 = h_2^T \Sigma h_2,$$
and it follows that
$$h_1^T(X - \mu) \overset{d}{=} h_2^T(X - \mu) \;\Leftrightarrow\; h^T \Sigma^{-1/2}(X - \mu) \overset{d}{=} \|h\|\, e_1^T \Sigma^{-1/2}(X - \mu).$$
Since $\Sigma$ is a positive definite matrix, this shows, as in the proof of Proposition 3.3, that
$$X \sim E_n(\mu, \Sigma, \psi). \qquad \Box$$


References

[1] Back, K. E. (2010), "Asset Pricing and Portfolio Choice Theory", Oxford University Press.

[2] Chamberlain, G. (1983), "A Characterization of the Distributions That Imply Mean-Variance Utility Functions", Journal of Economic Theory 29, p. 185-201.

[3] Cochrane, J. H. (2001), "Asset Pricing", Princeton University Press.

[4] McNeil, A. J., Frey, R. & Embrechts, P. (2005), "Quantitative Risk Management", Princeton University Press.

[5] Munk, C. (2013), “Financial Asset Pricing Theory”, Oxford University Press.
