
Antieigenvalues of Wishart Matrices

Simon Calderon

Department of Mathematics, Linköping University

LiTH-MAT-EX--2020/11--SE

Credits: 16 hp
Level: G2

Supervisor: Martin Singull, Department of Mathematics, Linköping University
Examiner: Xiangfeng Yang


Abstract

In this thesis we derive the distribution of the first antieigenvalue of a random matrix with distribution W ∼ W_p(n, I_p) for p = 2 and p = 3. For p = 2 we present a proof that the square of the first antieigenvalue has distribution β((n − 1)/2, 1). For p = 3 we prove that the probability density function can be expressed using a sum of hypergeometric functions.

Besides the main objective, the thesis seeks to introduce the theory of multivariate statistics and antieigenvalues.

Keywords:

Antieigenvalues, Wishart matrix, Hypergeometric function, Multivariate statistics


Sammanfattning

In this report we derive the distribution of the first antieigenvalue of random matrices with distribution W ∼ W_p(n, I_p) for p = 2 and p = 3. For the case p = 2 a proof is presented that the square of the first antieigenvalue has the distribution β((n − 1)/2, 1). For p = 3 we prove that the probability density function can be written as a sum of hypergeometric functions.

Besides its main purpose, the report also contains an introduction to multivariate statistics and antieigenvalues.

Keywords:

Antieigenvalues, Wishart matrix, Hypergeometric function, Multivariate statistics


Acknowledgements

First of all I would like to thank Dr. Martin Singull for the idea for this project and for his guidance and patience with this drawn-out thesis. I would also like to thank Dr. Xiangfeng Yang for being the examiner of this thesis.

A special thanks to Erik Sätterqvist for interesting discussions during the start of this thesis. I would also like to thank Sam Olsson for being the opponent.


Nomenclature

X, Y, Z, ...    Random variables
x, y, z, ...    Random vectors
X, Y, Z         Random matrices
λ_n             The n-th eigenvalue
µ_n             The n-th antieigenvalue
Σ               Covariance matrix
α, β, γ, ...    Elements of R^n
α'              Transpose of α
R^{m×n}         The set of all real-valued m × n matrices
|A|             The determinant of A
a_ij            The entry in row i and column j of a matrix A


Contents

1 Introduction
 1.1 Disposition
2 Preliminaries
 2.1 Linear algebra
 2.2 Real analysis
 2.3 Statistics
 2.4 Introduction to multivariate statistics
 2.5 Eigenvalues of random matrices
3 Antieigenvalues
 3.1 Introduction
 3.2 A formal definition
 3.3 Testing for sphericity
 3.4 Probability density function for the antieigenvalues
 3.5 Derivation of the probability density function for p = 3
 3.6 Graphs of f_{µ_1²}(x) for p = 3 and various n
4 Discussion and further research
 4.1 Simplifying and approximating the PDF
 4.2 Solution for an arbitrary choice of p


Chapter 1

Introduction

We know from linear algebra that an eigenvector of a matrix A is a vector that remains parallel to itself under transformation by A; the corresponding eigenvalue (denoted λ) is the factor by which the eigenvector is scaled. As a kind of opposite of an eigenvector, the first antieigenvector of a matrix A is the vector most turned under transformation by A, the second antieigenvector is the vector most turned under some orthogonality restrictions, and so on. Furthermore, the corresponding first antieigenvalues (denoted µ_1 and ν_1) are the cosine and sine of the angle turned for the first antieigenvector.

A random matrix is a matrix whose entries are random variables. We can study distributions of matrices, and in this thesis we are especially interested in matrices with a Wishart distribution, denoted A ∼ W_p(n, Σ).

This thesis seeks to derive the distribution of µ_1² for a random matrix A ∼ W_3(n, I_3).

1.1 Disposition

• Chapter 2 is devoted to the prerequisites for the thesis. This includes definitions and results from linear algebra, real analysis and (univariate) statistics, and an introduction to the field of multivariate statistics.
• Chapter 3 introduces the theory of antieigenvalues. This chapter contains the main results of the thesis, which are the derivations of the probability distribution of µ_1² for p = 2 and p = 3.


Chapter 2

Preliminaries

The main matter of this thesis revolves around the field of multivariate statistics. This chapter will serve as an introduction to the necessary multivariate statistics, which can be found in Section 2.3 and onward. However, before we get into the multivariate statistics we need to introduce some definitions and theorems from linear algebra (Section 2.1) and real analysis (Section 2.2).

2.1 Linear algebra

Definition 2.1.1. The trace of an n × n square matrix A = (a_ij), denoted tr A, is the sum of the diagonal elements of the matrix, i.e., tr(A) = Σ_{i=1}^{n} a_ii. We will also use the notation etr(A) = exp(tr A).

The trace is invariant under the choice of basis in which we express the matrix A. Hence tr A = Σ_{i=1}^{n} λ_i, where λ_1, ..., λ_n are the eigenvalues of A.

Definition 2.1.2. The vectorization of an r × s matrix A, denoted vec(A), is the rs × 1 vector given by vec(A) = (a_11 ... a_r1 a_12 ... a_rs)'.

The vectorization of a matrix A can, informally speaking, be explained as stacking the columns of A into a new column vector.


Definition 2.1.3. The Kronecker product of an n × m matrix A and a p × q matrix B is defined as
$$A \otimes B = \begin{pmatrix} a_{11}B & a_{12}B & \cdots & a_{1m}B\\ a_{21}B & a_{22}B & \cdots & a_{2m}B\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1}B & a_{n2}B & \cdots & a_{nm}B \end{pmatrix}.$$
Hence, if A is n × m and B is p × q, then A ⊗ B is an np × mq matrix. The Kronecker product does not commute, i.e., in general A ⊗ B ≠ B ⊗ A. See [4] for further details about the Kronecker product.
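As a quick numerical illustration (a sketch added here, not part of the original text), the Kronecker product and its non-commutativity can be checked with NumPy; the matrices below are arbitrary examples.

```python
import numpy as np

# A is 2 x 2 and B is 2 x 3, so A kron B is (2*2) x (2*3) = 4 x 6.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1, 2],
              [3, 4, 5]])

AB = np.kron(A, B)   # block matrix with blocks a_ij * B
BA = np.kron(B, A)   # block matrix with blocks b_ij * A

print(AB.shape, BA.shape)          # (4, 6) (4, 6)
print(np.array_equal(AB, BA))      # False: A kron B != B kron A in general
```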

Definition 2.1.4. A positive matrix is a matrix whose elements are all positive, i.e., a_ij > 0 for all i and j.

Definition 2.1.5. A positive definite matrix (not to be confused with a positive matrix) is a matrix whose eigenvalues all satisfy λ_i > 0. If the eigenvalues satisfy λ_i ≥ 0 the matrix is called positive semidefinite.

For more reading on matrices, see [5].

2.2 Real analysis

In the field of multivariate statistics we need various special functions. For this thesis we are interested in the beta function, the gamma function and its generalisations, and the hypergeometric function. The following section introduces these functions and some relevant properties.

In statistics (amongst other branches of mathematics) we need a continuous variant of the factorial function, i.e., some function that satisfies f(x + 1) = xf(x).

Definition 2.2.1. The gamma function is defined as
$$\Gamma(z) = \int_0^\infty t^{z-1} e^{-t}\, dt,$$
for z ∈ C \ {−n : n ∈ N}.

This definition satisfies Γ(z + 1) = zΓ(z) and Γ(n) = (n − 1)! for n ∈ N.

We will return to proving some relevant properties of the gamma function, but first we need to introduce another closely related function: the beta function.


Definition 2.2.2. The beta function is defined as
$$B(m, n) = \int_0^1 t^{m-1}(1-t)^{n-1}\, dt.$$

We will also need the following two theorems.

Theorem 2.2.1. The beta function has the following property:
$$B(m, n) = \frac{\Gamma(m)\,\Gamma(n)}{\Gamma(m+n)}.$$

Proof. We intend to prove the equivalent equation Γ(m)Γ(n) = Γ(m + n)B(m, n). By the definition of the gamma function we get
$$\Gamma(m)\Gamma(n) = \int_0^\infty u^{m-1} e^{-u}\, du \int_0^\infty v^{n-1} e^{-v}\, dv = \int_0^\infty\!\int_0^\infty u^{m-1} v^{n-1} e^{-u-v}\, du\, dv.$$
Using the substitution u = xy, v = x(1 − y) gives the Jacobian |J(x, y)| = x and the following integral:
$$\Gamma(m)\Gamma(n) = \int_{x=0}^{\infty}\int_{y=0}^{1} (xy)^{m-1}\left(x(1-y)\right)^{n-1} e^{-x}\, x\, dy\, dx.$$
Factoring the integral into one integral for each variable yields
$$\Gamma(m)\Gamma(n) = \int_{x=0}^{\infty} x^{m+n-1} e^{-x}\, dx \int_{y=0}^{1} y^{m-1}(1-y)^{n-1}\, dy.$$
We see that the integrals match the definitions of the gamma and beta functions. Thus we have
$$\Gamma(m)\Gamma(n) = \Gamma(m+n)\, B(m, n).$$
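The relation between the beta and gamma functions is easy to confirm numerically. The following sketch (an addition, with arbitrarily chosen arguments) compares both sides using SciPy's implementations.

```python
from scipy.special import beta, gamma

# Check B(m, n) = Gamma(m) * Gamma(n) / Gamma(m + n) for a few argument pairs.
for m, n in [(1.0, 1.0), (2.5, 3.0), (0.5, 7.25)]:
    lhs = beta(m, n)
    rhs = gamma(m) * gamma(n) / gamma(m + n)
    print(f"m={m}, n={n}: B={lhs:.12f}, Gamma ratio={rhs:.12f}")
```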

Theorem 2.2.2. The beta function satisfies the identity
$$B(m, n) = 2\int_0^1 x^{2m-1}\left(1-x^2\right)^{n-1} dx.$$

Proof. We start with the definition of the beta function. Substituting t = x² gives dt = 2x dx and
$$B(m, n) = \int_0^1 t^{m-1}(1-t)^{n-1}\, dt = \int_0^1 x^{2(m-1)}\left(1-x^2\right)^{n-1} 2x\, dx = 2\int_0^1 x^{2m-1}\left(1-x^2\right)^{n-1} dx.$$


Theorem 2.2.3 (Legendre duplication formula). The gamma function has the property
$$\Gamma(z)\,\Gamma(z + 1/2) = 2^{1-2z}\sqrt{\pi}\,\Gamma(2z).$$

Proof. Theorem 2.2.1 for m = n = z gives the equality
$$\frac{\Gamma(z)\Gamma(z)}{\Gamma(2z)} = \int_0^1 t^{z-1}(1-t)^{z-1}\, dt.$$
Substituting t = (1 + x)/2 yields
$$\frac{\Gamma(z)\Gamma(z)}{\Gamma(2z)} = 2^{2-2z}\int_0^1 \left(1-x^2\right)^{z-1} dx.$$
By Theorem 2.2.2 we have that
$$2^{2-2z}\int_0^1 \left(1-x^2\right)^{z-1} dx = 2^{1-2z}\, B(1/2, z),$$
and therefore
$$\frac{\Gamma(z)\Gamma(z)}{\Gamma(2z)} = 2^{1-2z}\, \frac{\Gamma(1/2)\,\Gamma(z)}{\Gamma(z + 1/2)}.$$
Rearranging and using the fact that Γ(1/2) = √π gives us
$$\Gamma(z)\,\Gamma(z + 1/2) = 2^{1-2z}\sqrt{\pi}\,\Gamma(2z).$$

Definition 2.2.3. The incomplete gamma function is defined as
$$\Gamma(s, x) = \int_x^\infty t^{s-1} e^{-t}\, dt.$$
This gives Γ(s, 0) = Γ(s).

In multivariate statistics we have the need for a multivariate generalization of the gamma function.

Definition 2.2.4. The multivariate gamma function is defined by one of two equivalent definitions. The first definition is as an integral over the set of all positive definite p × p matrices, that is,
$$\Gamma_p(a) = \int_{V > 0} \operatorname{etr}(-V)\, |V|^{a - (p+1)/2}\, dV.$$
The second definition is as a product of gamma functions:
$$\Gamma_p(a) = \pi^{p(p-1)/4} \prod_{j=1}^{p} \Gamma\!\left(a - \frac{j-1}{2}\right).$$
For p = 1 we have the ordinary gamma function.

Theorem 2.2.4. The multivariate gamma function has the property
$$\Gamma_p(a) = \pi^{(p-1)/2}\, \Gamma(a)\, \Gamma_{p-1}\!\left(a - \tfrac{1}{2}\right).$$

Proof. This follows directly from the second definition of the multivariate gamma function.
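The product form of the multivariate gamma function is straightforward to evaluate. The sketch below (not from the thesis) compares a direct implementation of the product definition with SciPy's multigammaln, which returns log Γ_p(a); the values of a and p are arbitrary.

```python
import numpy as np
from scipy.special import gammaln, multigammaln

def log_multigamma(a, p):
    """log Gamma_p(a) computed from the product definition."""
    j = np.arange(1, p + 1)
    return (p * (p - 1) / 4) * np.log(np.pi) + gammaln(a - (j - 1) / 2).sum()

a, p = 5.0, 3
print(log_multigamma(a, p))   # product definition
print(multigammaln(a, p))     # SciPy's log multivariate gamma; the values should agree
```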

Definition 2.2.5. The rising Pochhammer symbol is defined as
$$(q)_n = \begin{cases} 1, & n = 0,\\ q(q+1)\cdots(q+n-1), & n \geq 1. \end{cases}$$

For our purposes, we are only interested in the rising Pochhammer symbol for the following definition.

Definition 2.2.6. The hypergeometric function is defined for |z| < 1 as
$$_2F_1(a, b; c; z) = \sum_{n=0}^{\infty} \frac{(a)_n (b)_n}{(c)_n} \frac{z^n}{n!}, \qquad a, b, c \in \mathbb{R}.$$
This function has some properties that we are interested in. If a or b is a negative integer the series terminates to the following polynomial:
$$_2F_1(-m, b; c; z) = \sum_{n=0}^{m} (-1)^n \binom{m}{n} \frac{(b)_n}{(c)_n}\, z^n.$$
We also have the following special case:
$$_2F_1(a, 1; 1; z) = (1-z)^{-a}.$$

Theorem 2.2.5. It can be shown that
$$\int_0^\infty x^{\mu-1} e^{-\beta x}\, \Gamma(v, \alpha x)\, dx = \frac{\alpha^{v}\, \Gamma(\mu + v)}{\mu\, (\alpha + \beta)^{\mu + v}}\; {}_2F_1\!\left(1, \mu + v; \mu + 1; \frac{\beta}{\alpha + \beta}\right),$$
for Re(α + β) > 0, Re(µ) > 0 and Re(µ + v) > 0.
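Theorem 2.2.5 can be checked numerically. In the sketch below (an addition; the parameter values are arbitrary choices satisfying the stated conditions) the left-hand side is computed by quadrature, writing the unregularized upper incomplete gamma function as Γ(v, αx) = Γ(v)·gammaincc(v, αx), and the right-hand side is evaluated with SciPy's hypergeometric function.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammaincc, hyp2f1

mu, v, alpha, beta = 3.0, 2.5, 0.7, 1.3   # arbitrary values with mu, mu + v, alpha + beta > 0

def integrand(x):
    # x^(mu - 1) * exp(-beta x) * Gamma(v, alpha x), with Gamma(v, .) unregularized
    return x ** (mu - 1) * np.exp(-beta * x) * gamma(v) * gammaincc(v, alpha * x)

lhs, _ = quad(integrand, 0, np.inf)

rhs = (alpha ** v * gamma(mu + v)
       / (mu * (alpha + beta) ** (mu + v))
       * hyp2f1(1, mu + v, mu + 1, beta / (alpha + beta)))

print(lhs, rhs)   # the two values should agree to quadrature accuracy
```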


2.3 Statistics

When we study multivariate statistics we make use of some results and definitions from univariate statistics. We will in particular encounter the beta distribution and likelihood ratio tests.

Definition 2.3.1. The beta distribution is a family of distributions defined on [0, 1] with the probability density function
$$f_X(x) = \frac{1}{B(\alpha, \beta)}\, x^{\alpha-1}(1-x)^{\beta-1},$$
where α > 0 and β > 0. See [1] for more details.

When testing hypotheses we will need a test; for this thesis we make use of the likelihood ratio test.

Definition 2.3.2 (Likelihood ratio test). Consider testing the hypothesis H_0: θ = θ_0 versus H_1: θ = θ_1. Let L(θ_i | x) be the likelihood function given that θ = θ_i. We define the likelihood ratio, denoted Λ(x), as
$$\Lambda(x) = -2\ln\!\left(\frac{L(\theta_0 \mid x)}{\sup_{\theta \neq \theta_0} L(\theta \mid x)}\right).$$
When performing a likelihood ratio test we reject H_0 if Λ(x) > c and not otherwise.

2.4 Introduction to multivariate statistics.

Multivariate statistics is the simultaneous observation and analysis of multiple random variables. For a more extensive treatment of the subject see [7] and [8], which have served as sources for the following sections.

We begin by defining our multivariate counterparts to the univariate random variables.

Definition 2.4.1. A random vector x is a column vector with random variables as entries, i.e., x = (X_1, X_2, ..., X_n)'.

Definition 2.4.2. A random p × q matrix X is a p × q matrix with random variables as entries.


Now that we have a definition for multivariate random variables, the natural question to ask is if and how we can define multivariate analogues of the expected value and variance, i.e., µ and σ².

Definition 2.4.3 (Expected value). The expected value of a vector x = (X_1, X_2, ..., X_n)' is given by
$$\mu = E(x) = \begin{pmatrix} E(X_1)\\ E(X_2)\\ \vdots\\ E(X_n) \end{pmatrix}.$$

Definition 2.4.4 (Covariance matrix). The covariance matrix Σ for a vector x = (X_1, X_2, ..., X_n)' is given by
$$\Sigma = \begin{pmatrix} \operatorname{var}(X_1) & \operatorname{cov}(X_1, X_2) & \cdots & \operatorname{cov}(X_1, X_n)\\ \operatorname{cov}(X_2, X_1) & \operatorname{var}(X_2) & \cdots & \operatorname{cov}(X_2, X_n)\\ \vdots & \vdots & \ddots & \vdots\\ \operatorname{cov}(X_n, X_1) & \operatorname{cov}(X_n, X_2) & \cdots & \operatorname{var}(X_n) \end{pmatrix}.$$

Theorem 2.4.1. Σ is a positive definite matrix. See [8] for a proof.

The parameters µ and Σ work in a similar way to µ and σ² in univariate statistics.

Definition 2.4.5 (Sphericity). A covariance matrix is said to be spherical if Σ = σ²I_p.

Definition 2.4.6 (Multivariate normal distribution). A k × 1 random vector X = (X_1, ..., X_k)' is said to be k-variate normal if and only if α'X is univariate normal for all α ∈ R^k (to avoid triviality, assume α ≠ 0).

Similarly to how we notate the normal distribution for univariate variables, we denote that X has a multivariate normal distribution with expected value µ and covariance matrix Σ by X ∼ N_k(µ, Σ).

The multivariate normal distribution has the following probability density function in k dimensions:
$$f_X(x) = (2\pi)^{-k/2}\, |\Sigma|^{-1/2} \exp\!\left(-\tfrac{1}{2}(x - \mu)'\Sigma^{-1}(x - \mu)\right).$$


Definition 2.4.7. A p × p random matrix X is said to have a Wishart distribution with scale matrix Σ and n degrees of freedom (denoted X ∼ W_p(n, Σ)) if X has the probability density function
$$f_X(x) = \frac{1}{2^{np/2}\, |\Sigma|^{n/2}\, \Gamma_p\!\left(\frac{n}{2}\right)}\, |x|^{(n-p-1)/2}\, \operatorname{etr}\!\left(-\tfrac{1}{2}\Sigma^{-1}x\right),$$
where Σ is a positive definite p × p matrix and n > p − 1.
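Wishart matrices are easy to work with numerically. The sketch below (an illustration added here) draws a matrix from W_p(n, Σ) with scipy.stats.wishart and evaluates the density above at the draw; p, n and Σ are arbitrary choices.

```python
import numpy as np
from scipy.stats import wishart

p, n = 3, 10
Sigma = np.eye(p)

W_dist = wishart(df=n, scale=Sigma)   # X ~ W_p(n, Sigma)
W = W_dist.rvs(random_state=0)        # one p x p draw

print(W)
print(W_dist.pdf(W))                  # density f_X evaluated at the draw
print(n * Sigma)                      # E[X] = n * Sigma, a useful sanity check against many draws
```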

Theorem 2.4.2. Let X_1, X_2, ..., X_n be independently and identically distributed N_p(µ, Σ). The maximum likelihood estimator of µ and the sample covariance matrix are
$$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i \qquad\text{and}\qquad S = \frac{1}{n-1}\sum_{i=1}^{n} (X_i - \bar{X})(X_i - \bar{X})',$$
respectively. See [8] for a proof.

Theorem 2.4.3. If X_1, X_2, ..., X_n are independent random vectors with X_i ∼ N_p(µ, Σ), then
$$(n-1)S = \sum_{i=1}^{n} (X_i - \bar{X})(X_i - \bar{X})' \sim W_p(n-1, \Sigma).$$
See [8] for a proof.

This is analogous to the χ² distribution for univariate random variables, which motivates the further study of the Wishart distribution.
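A small Monte Carlo sketch (added here; sample sizes and Σ are arbitrary) of the theorem above: the average of many simulated scatter matrices (n − 1)S should be close to (n − 1)Σ, the mean of the W_p(n − 1, Σ) distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 3, 8, 20000
mu = np.zeros(p)
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])

scatter_sum = np.zeros((p, p))
for _ in range(reps):
    X = rng.multivariate_normal(mu, Sigma, size=n)   # n observations from N_p(mu, Sigma)
    C = X - X.mean(axis=0)
    scatter_sum += C.T @ C                            # the scatter matrix (n - 1) S

print(scatter_sum / reps)      # Monte Carlo mean of (n - 1) S
print((n - 1) * Sigma)         # theoretical mean of W_p(n - 1, Sigma)
```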

2.5 Eigenvalues of random matrices

We can define random eigenvalues just as for ordinary matrices.

Definition 2.5.1. An eigenvalue λ of a random matrix A is a random variable which for some eigenvector v satisfies Av = λv.

It is possible to calculate the distribution of λ if the distribution of A is known. For Wishart matrices it is not easy to derive the distribution of an individual eigenvalue. It is, however, easier to find the joint distribution.

Theorem 2.5.1 (Eigenvalues of a Wishart matrix). Let W ∼ W_p(n, I_p), p ≤ n. Then the probability density function of the random eigenvalues λ_1 > λ_2 > ... > λ_p is given by
$$f_\lambda(\lambda_1, \lambda_2, \ldots, \lambda_p) = C_{p,n} \exp\!\left(-\frac{1}{2}\sum_{i=1}^{p}\lambda_i\right) \prod_{i=1}^{p} \lambda_i^{\frac{1}{2}(n-p-1)} \prod_{i<j} (\lambda_i - \lambda_j),$$
where
$$C_{p,n} = \frac{\pi^{p^2/2}}{2^{pn/2}\, \Gamma_p(p/2)\, \Gamma_p(n/2)}.$$

See [6] or [8] for a proof. We will use this theorem in the next chapter when we derive the distribution of the first antieigenvalue.
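Although the joint density above is awkward to visualise directly, the eigenvalues themselves are easy to simulate. The sketch below (an addition) draws W ∼ W_p(n, I_p) as Z'Z, with Z an n × p matrix of independent standard normal entries, and returns the ordered eigenvalues.

```python
import numpy as np

def wishart_eigenvalues(p, n, rng):
    """Ordered eigenvalues lambda_1 >= ... >= lambda_p of W ~ W_p(n, I_p)."""
    Z = rng.standard_normal((n, p))          # n independent N_p(0, I_p) rows
    W = Z.T @ Z                              # W ~ W_p(n, I_p)
    return np.sort(np.linalg.eigvalsh(W))[::-1]

rng = np.random.default_rng(1)
print(wishart_eigenvalues(p=3, n=10, rng=rng))
```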


Chapter 3

Antieigenvalues

3.1 Introduction

We know from basic linear algebra that an eigenvector v of a matrix A is a vector that remains parallel to itself under transformation by A, i.e., that Av = λv for some λ ∈ R. The corresponding constant λ is known as an eigenvalue. It stands to reason, then, that the antieigenvector of a matrix A is the vector most turned under transformation by A; the two corresponding antieigenvalues µ and ν are the cosine and sine of the angle turned.

For a more extensive introduction, see [3].

3.2 A formal definition

We need a more formal definition of the antieigenvalues and antieigenvectors than some hard-to-pinpoint notion of "most turned".

Definition 3.2.1 (Antieigenvalues). The following definition is due to Gustafson (1968) and can be found in [3]:
$$\mu = \inf_{x \neq 0} \frac{\langle Ax, x\rangle}{\|x\|\,\|Ax\|}, \qquad \nu = \inf_{\varepsilon > 0}\, \sup_{\|x\| \leq 1} \|(\varepsilon A - I)x\|.$$

Let φ(A) be the maximum angle turned under transformation by A. Following [3] we can identify the quantities above as µ = cos φ(A) and ν = sin φ(A).


If we order the eigenvalues λ_1 ≥ λ_2 ≥ ... ≥ λ_p ≥ 0 we can express the antieigenvalues of A in terms of the eigenvalues of A:
$$\mu = \frac{2\sqrt{\lambda_1 \lambda_p}}{\lambda_1 + \lambda_p} \qquad\text{and}\qquad \nu = \frac{\lambda_1 - \lambda_p}{\lambda_1 + \lambda_p}.$$
This obviously satisfies µ² + ν² = 1.

Theorem 3.2.1. The antieigenvectors of a positive semidefinite matrix A come as a pair given by
$$x_\pm = \left(\frac{\lambda_p}{\lambda_1 + \lambda_p}\right)^{1/2} x_1 \pm \left(\frac{\lambda_1}{\lambda_1 + \lambda_p}\right)^{1/2} x_p,$$
where x_1 and x_p are any normed vectors from the respective eigenspaces. A proof can be found in [3].

We can define the antieigenvalue and antieigenvector of a random matrix in a similar way to how we define eigenvectors and eigenvalues of random matrices. For the remainder of this thesis we are most interested in µ, in particular µ².

Example (Antieigenvalue and antieigenvector of a 2 × 2 matrix). Let
$$A = \begin{pmatrix} 1 & 0\\ 0 & 4 \end{pmatrix}.$$
This gives λ_1 = 4 and λ_2 = 1. Using the formula µ = 2√(λ_1λ_2)/(λ_1 + λ_2) gives µ = 4/5 and φ(A) = arccos(4/5) ≈ 37°. Theorem 3.2.1 gives the antieigenvectors
$$x_\pm = \frac{1}{\sqrt{5}} \begin{pmatrix} \pm 2\\ 1 \end{pmatrix}.$$
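The example can be verified numerically. The sketch below (an addition) scans unit vectors x = (cos t, sin t)', computes the cosine of the angle between x and Ax, and compares the minimum with µ = 4/5.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 4.0]])

# Unit vectors x = (cos t, sin t)' and the cosine of the angle between x and Ax.
t = np.linspace(0.0, 2.0 * np.pi, 100001)
X = np.stack([np.cos(t), np.sin(t)])                     # 2 x N, each column a unit vector
AX = A @ X
cosines = np.einsum('ij,ij->j', X, AX) / np.linalg.norm(AX, axis=0)

mu = cosines.min()
print(mu, 4 / 5)                      # numerical minimum vs 2*sqrt(l1*l2)/(l1 + l2)
print(np.degrees(np.arccos(mu)))      # maximum turning angle, roughly 36.9 degrees
i = cosines.argmin()
print(X[:, i] * np.sqrt(5))           # minimiser, proportional to (2, 1)' up to sign
```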

3.3 Testing for sphericity

Until now we have not mentioned what motivates the study of antieigenvalues. The antieigenvalues of a Wishart matrix are a recurring phenomenon in multivariate statistics. Here we will show one example: testing for sphericity (independence).

Consider, for W ∼ W_p(n, Σ), testing the hypothesis H_0: Σ = σ²I_p versus H_1: Σ ≠ σ²I_p. The likelihood ratio test statistic becomes
$$\Lambda = \frac{|W|^{n/2}}{\left(\frac{1}{p}\operatorname{tr} W\right)^{np/2}} = (\Lambda^*)^{n/2}, \qquad\text{where}\qquad \Lambda^* = \frac{|W|}{\left(\frac{1}{p}\operatorname{tr} W\right)^{p}} = \frac{\prod_{i=1}^{p} \lambda_i}{\left(\frac{1}{p}\sum_{i=1}^{p} \lambda_i\right)^{p}}.$$

See [8] for further details.

However, Venables (1976) observed that sphericity can be tested with a union-intersection test procedure, such that for any nonnull vector a we have the equivalent hypothesis
$$H_0: \Sigma = \sigma^2 I_p \iff H_0: \bigcap_{a:\, a'a = 1} \left\{ H_a: (a'\Sigma a)(a'\Sigma^{-1} a) = 1 \right\},$$
and the alternative
$$H_1: \Sigma \neq \sigma^2 I_p \iff H_1: \bigcup_{a:\, a'a = 1} \left\{ A_a: (a'\Sigma a)(a'\Sigma^{-1} a) > 1 \right\}.$$

By the union-intersection principle one can show that the hypothesis H_0 is rejected versus H_1 if $\hat{\mu}_1^2 \leq c$, where $\hat{\mu}_1$ is the estimated first antieigenvalue, i.e., $\hat{\mu}_1^2 = 4\hat{\lambda}_1\hat{\lambda}_p/(\hat{\lambda}_1 + \hat{\lambda}_p)^2$, and c is chosen so that the size of the test is α. Hence, we need the distribution of $\hat{\mu}_1^2$.
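The following sketch (an addition, with arbitrary sample sizes) shows how the statistic $\hat{\mu}_1^2$ would be computed from data; the critical value c is left unspecified, since choosing it requires the distribution derived in the following sections.

```python
import numpy as np

def first_antieigenvalue_sq(W):
    """Estimated mu_1^2 = 4 l_1 l_p / (l_1 + l_p)^2 from the extreme eigenvalues of W."""
    lam = np.linalg.eigvalsh(W)              # eigenvalues in ascending order
    l_min, l_max = lam[0], lam[-1]
    return 4.0 * l_max * l_min / (l_max + l_min) ** 2

rng = np.random.default_rng(2)
p, n = 3, 20
X = rng.standard_normal((n + 1, p))          # n + 1 observations from N_p(0, I_p): a spherical case
Xc = X - X.mean(axis=0)
W = Xc.T @ Xc                                # scatter matrix, distributed as W_p(n, I_p) under H_0
print(first_antieigenvalue_sq(W))            # reject H_0: Sigma = sigma^2 I_p when this is <= c
```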

3.4 Probability density function for the antieigenvalues

Since we can express the antieigenvalues in terms of eigenvalues, and we have a joint probability density function for the eigenvalues, the question arises whether we can derive a probability density function for the antieigenvalues. This is a problem which is unsolved for an arbitrary choice of p and n; however, the special case p = 2 has the following result.

Proposition 3.4.1. For a matrix A ∼ W_2(n, I_2) the squared first antieigenvalue has the probability density function
$$f_{\mu_1^2}(x) = \frac{\Gamma((n-1)/2 + 1)}{\Gamma((n-1)/2)\,\Gamma(1)}\, x^{(n-1)/2 - 1}, \qquad 0 < x < 1,$$
i.e., µ_1² ∼ β((n − 1)/2, 1).


Proof. We know from Theorem 2.5.1 that
$$f_\lambda(\lambda_1, \lambda_2) = c_{2,n}\, e^{-\frac{1}{2}\sum_{i=1}^{2}\lambda_i} \prod_{i=1}^{2} \lambda_i^{\frac{1}{2}(n-3)} \prod_{i<j} (\lambda_i - \lambda_j),$$
where λ_1 ≥ λ_2. We then introduce the following change of variables:
$$\lambda_1(x, y) = \frac{y\left(1 + \sqrt{1-x}\right)}{2}, \qquad \lambda_2(x, y) = \frac{y\left(1 - \sqrt{1-x}\right)}{2}.$$
This change of variables gives µ_1² = x. The condition λ_1 ≥ λ_2 gives the set D = {y : y ≥ 0} and the Jacobian becomes J = y(1 − x)^{−1/2}/4. Integrating the function from Theorem 2.5.1 over D gives us
$$f_{\mu_1^2}(x) = c_{2,n} \int_0^\infty \left(\frac{y}{2}\left(1+\sqrt{1-x}\right) - \frac{y}{2}\left(1-\sqrt{1-x}\right)\right)\left(\frac{xy^2}{4}\right)^{(n-1)/2-1} e^{-y/2}\, \frac{y}{4\sqrt{1-x}}\, dy.$$
Simplifying and factoring out everything that does not depend on y gives the integral
$$f_{\mu_1^2}(x) = c_{2,n}\, 2^{1-n}\, x^{(n-1)/2-1} \int_0^\infty y^{n-1} e^{-y/2}\, dy.$$
By substituting y = 2t, dy = 2 dt, we get
$$f_{\mu_1^2}(x) = c_{2,n}\, 2^{1-n}\, x^{(n-1)/2-1} \int_0^\infty 2^n t^{n-1} e^{-t}\, dt.$$
By moving the factor 2^n outside the integral we see that the remaining integral is by definition equal to Γ(n). This and the definition of c_{2,n} yield
$$f_{\mu_1^2}(x) = \frac{2\pi^2}{2^n\, \Gamma_2(1)\, \Gamma_2(n/2)}\, x^{(n-1)/2-1}\, \Gamma(n).$$
Now we make use of the fact that Γ_2(a) = √π Γ(a) Γ(a − 1/2):
$$f_{\mu_1^2}(x) = \frac{2^{1-n}\,\pi}{\Gamma(1)\,\Gamma(1/2)\,\Gamma(n/2)\,\Gamma((n-1)/2)}\, x^{(n-1)/2-1}\, \Gamma(n).$$
Since Γ(1/2) = √π we have
$$f_{\mu_1^2}(x) = \frac{2^{1-n}\,\sqrt{\pi}}{\Gamma(1)\,\Gamma(n/2)\,\Gamma((n-1)/2)}\, x^{(n-1)/2-1}\, \Gamma(n).$$
Theorem 2.2.3 for z = n/2 yields
$$\frac{2^{1-n}\,\sqrt{\pi}\,\Gamma(n)}{\Gamma(n/2)} = \Gamma\!\left(\frac{n}{2} + \frac{1}{2}\right).$$
Plugging this in gives us
$$f_{\mu_1^2}(x) = \frac{\Gamma((n-1)/2 + 1)}{\Gamma((n-1)/2)\,\Gamma(1)}\, x^{(n-1)/2-1},$$
and the proof is complete.
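Proposition 3.4.1 can be checked by simulation. The sketch below (an addition; n and the number of replications are arbitrary) simulates µ_1² for W ∼ W_2(n, I_2) and compares the empirical distribution with β((n − 1)/2, 1) through a Kolmogorov-Smirnov test and the theoretical mean (n − 1)/(n + 1).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, reps = 6, 50000

mu2 = np.empty(reps)
for k in range(reps):
    Z = rng.standard_normal((n, 2))
    lam = np.linalg.eigvalsh(Z.T @ Z)                    # eigenvalues of W ~ W_2(n, I_2)
    mu2[k] = 4.0 * lam[0] * lam[1] / (lam[0] + lam[1]) ** 2

print(stats.kstest(mu2, 'beta', args=((n - 1) / 2, 1)))  # should not reject beta((n-1)/2, 1)
print(mu2.mean(), (n - 1) / (n + 1))                     # empirical vs theoretical mean
```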

3.5 Derivation of the probability density function for p = 3

We seek to find an expression for the probability density function of µ_1² for a matrix W with W ∼ W_3(n, I_3). We know from Theorem 2.5.1 that
$$f_\lambda(\lambda_1, \lambda_2, \lambda_3) = c_{3,n}\, e^{-\frac{1}{2}\sum_{i=1}^{3}\lambda_i} \prod_{i=1}^{3} \lambda_i^{\frac{1}{2}(n-4)} \prod_{i<j} (\lambda_i - \lambda_j),$$
where λ_1 ≥ λ_2 ≥ λ_3. We then introduce the following change of variables:
$$\lambda_1 = \frac{y}{2}\left(1 + \sqrt{1-x}\right), \qquad \lambda_2 = z, \qquad \lambda_3 = \frac{y}{2}\left(1 - \sqrt{1-x}\right).$$
This change of variables gives µ_1² = x and the Jacobian J = y/(4√(1−x)). Expressed in the new variables the probability density function is
$$f(x, y, z) = c_{3,n} \prod_{i<j}(\lambda_i - \lambda_j)\, \left(\frac{xy^2 z}{4}\right)^{\frac{1}{2}(n-4)} e^{-(y+z)/2}.$$
The condition λ_1 ≥ λ_2 ≥ λ_3 results in the set D = {(y, z) : y ≥ 0, (y/2)(1 − √(1−x)) ≤ z ≤ (y/2)(1 + √(1−x))} and
$$f_{\mu_1^2}(x) = \int_D c_{3,n} \prod_{i<j}(\lambda_i - \lambda_j)\, \left(\frac{xy^2 z}{4}\right)^{\frac{1}{2}(n-4)} e^{-(y+z)/2}\, \frac{y}{4\sqrt{1-x}}\, dy\, dz.$$
We begin by expanding the product inside the integral:
$$\prod_{i<j}(\lambda_i - \lambda_j) = (\lambda_1 - \lambda_2)(\lambda_1 - \lambda_3)(\lambda_2 - \lambda_3) = \sqrt{1-x}\left(zy^2 - z^2 y - \frac{y^3 x}{4}\right).$$


Substituting this into the original integral yields
$$f_{\mu_1^2}(x) = \int_D c_{3,n} \left(zy^2 - z^2 y - \frac{y^3 x}{4}\right) \left(\frac{xy^2 z}{4}\right)^{\frac{1}{2}(n-4)} e^{-(y+z)/2}\, \frac{y}{4}\, dy\, dz.$$
Using the linearity of the integral and factoring out anything not dependent on y or z we get
$$f_{\mu_1^2}(x) = c_{3,n}\, 2^{2-n}\, x^{n/2-2} \left[ \int_D y^{n-1} z^{n/2-1} e^{-y/2} e^{-z/2}\, dy\, dz - \int_D y^{n-2} z^{n/2} e^{-y/2} e^{-z/2}\, dy\, dz - \frac{x}{4}\int_D y^{n} z^{n/2-2} e^{-y/2} e^{-z/2}\, dy\, dz \right].$$
We now proceed to treat each term separately. We get that
$$I_1(x) = \int_D y^{n-1} z^{n/2-1} e^{-y/2} e^{-z/2}\, dy\, dz = \int_0^\infty y^{n-1} e^{-y/2} \int_{\frac{y}{2}(1-\sqrt{1-x})}^{\frac{y}{2}(1+\sqrt{1-x})} z^{n/2-1} e^{-z/2}\, dz\, dy.$$
We substitute u = z/2, which yields 2 du = dz and the following integral:
$$I_1(x) = 2^{n/2} \int_0^\infty y^{n-1} e^{-y/2} \int_{\frac{y}{4}(1-\sqrt{1-x})}^{\frac{y}{4}(1+\sqrt{1-x})} u^{n/2-1} e^{-u}\, du\, dy.$$
Using the incomplete gamma function we evaluate the inner integral to
$$I_1(x) = 2^{n/2} \int_0^\infty y^{n-1} e^{-y/2} \left[\Gamma\!\left(\frac{n}{2}, \frac{y}{4}\left(1-\sqrt{1-x}\right)\right) - \Gamma\!\left(\frac{n}{2}, \frac{y}{4}\left(1+\sqrt{1-x}\right)\right)\right] dy$$
$$= 2^{n/2} \int_0^\infty y^{n-1} e^{-y/2}\, \Gamma\!\left(\frac{n}{2}, \frac{y}{4}\left(1-\sqrt{1-x}\right)\right) dy - 2^{n/2} \int_0^\infty y^{n-1} e^{-y/2}\, \Gamma\!\left(\frac{n}{2}, \frac{y}{4}\left(1+\sqrt{1-x}\right)\right) dy.$$
Now we make use of Theorem 2.2.5 to evaluate the above integrals. After some simplification we get
$$I_1(x) = \frac{2^{5n/2}\left(1-\sqrt{1-x}\right)^{n/2}\Gamma\!\left(\frac{3n}{2}\right)}{n\left(3-\sqrt{1-x}\right)^{3n/2}}\, {}_2F_1\!\left(1, \frac{3n}{2}; n+1; \frac{2}{3-\sqrt{1-x}}\right) - \frac{2^{5n/2}\left(1+\sqrt{1-x}\right)^{n/2}\Gamma\!\left(\frac{3n}{2}\right)}{n\left(3+\sqrt{1-x}\right)^{3n/2}}\, {}_2F_1\!\left(1, \frac{3n}{2}; n+1; \frac{2}{3+\sqrt{1-x}}\right).$$
A similar method of proof gives the following results:
$$I_2(x) = -\int_D y^{n-2} z^{n/2} e^{-y/2} e^{-z/2}\, dy\, dz = -\frac{2^{5n/2-1}\left(1-\sqrt{1-x}\right)^{n/2+1}\Gamma\!\left(\frac{3n}{2}\right)}{(n-1)\left(3-\sqrt{1-x}\right)^{3n/2}}\, {}_2F_1\!\left(1, \frac{3n}{2}; n; \frac{2}{3-\sqrt{1-x}}\right) + \frac{2^{5n/2-1}\left(1+\sqrt{1-x}\right)^{n/2+1}\Gamma\!\left(\frac{3n}{2}\right)}{(n-1)\left(3+\sqrt{1-x}\right)^{3n/2}}\, {}_2F_1\!\left(1, \frac{3n}{2}; n; \frac{2}{3+\sqrt{1-x}}\right)$$
and
$$I_3(x) = -\frac{x}{4}\int_D y^{n} z^{n/2-2} e^{-y/2} e^{-z/2}\, dy\, dz = -\frac{x\, 2^{5n/2-1}\left(1-\sqrt{1-x}\right)^{n/2-1}\Gamma\!\left(\frac{3n}{2}\right)}{(n+1)\left(3-\sqrt{1-x}\right)^{3n/2}}\, {}_2F_1\!\left(1, \frac{3n}{2}; n+2; \frac{2}{3-\sqrt{1-x}}\right) + \frac{x\, 2^{5n/2-1}\left(1+\sqrt{1-x}\right)^{n/2-1}\Gamma\!\left(\frac{3n}{2}\right)}{(n+1)\left(3+\sqrt{1-x}\right)^{3n/2}}\, {}_2F_1\!\left(1, \frac{3n}{2}; n+2; \frac{2}{3+\sqrt{1-x}}\right).$$
This gives the following expression for the probability density function:
$$f_{\mu_1^2}(x) = c_{3,n}\, x^{n/2-2}\, 2^{2+3n/2}\, \Gamma\!\left(\frac{3n}{2}\right)\Bigg[
\frac{\left(1-\sqrt{1-x}\right)^{n/2}}{n\left(3-\sqrt{1-x}\right)^{3n/2}}\, {}_2F_1\!\left(1, \frac{3n}{2}; n+1; \frac{2}{3-\sqrt{1-x}}\right)
- \frac{\left(1+\sqrt{1-x}\right)^{n/2}}{n\left(3+\sqrt{1-x}\right)^{3n/2}}\, {}_2F_1\!\left(1, \frac{3n}{2}; n+1; \frac{2}{3+\sqrt{1-x}}\right)
- \frac{\left(1-\sqrt{1-x}\right)^{n/2+1}}{2(n-1)\left(3-\sqrt{1-x}\right)^{3n/2}}\, {}_2F_1\!\left(1, \frac{3n}{2}; n; \frac{2}{3-\sqrt{1-x}}\right)
+ \frac{\left(1+\sqrt{1-x}\right)^{n/2+1}}{2(n-1)\left(3+\sqrt{1-x}\right)^{3n/2}}\, {}_2F_1\!\left(1, \frac{3n}{2}; n; \frac{2}{3+\sqrt{1-x}}\right)
- \frac{x\left(1-\sqrt{1-x}\right)^{n/2-1}}{2(n+1)\left(3-\sqrt{1-x}\right)^{3n/2}}\, {}_2F_1\!\left(1, \frac{3n}{2}; n+2; \frac{2}{3-\sqrt{1-x}}\right)
+ \frac{x\left(1+\sqrt{1-x}\right)^{n/2-1}}{2(n+1)\left(3+\sqrt{1-x}\right)^{3n/2}}\, {}_2F_1\!\left(1, \frac{3n}{2}; n+2; \frac{2}{3+\sqrt{1-x}}\right)\Bigg].$$


3.6 Graphs of f_{µ_1²}(x) for p = 3 and various n

We want to get an intuition for how the function behaves for various n; therefore we have plotted graphs for n = 3, n = 5, n = 10 and n = 25. The plots were done with Matlab.

[Figure: graphs of f_{µ_1²}(x) for p = 3, one panel each for n = 3, n = 5, n = 10 and n = 25.]

The tendency seems to be that the probability of larger µ_1² increases as n grows.
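The original Matlab figures are not reproduced here. The sketch below (an addition) produces comparable pictures by Monte Carlo simulation of µ_1² for W ∼ W_3(n, I_3) with n = 3, 5, 10, 25, using Matplotlib; the histogram settings and the number of replications are arbitrary.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
p, reps = 3, 20000

fig, axes = plt.subplots(2, 2, figsize=(8, 6))
for ax, n in zip(axes.ravel(), [3, 5, 10, 25]):
    mu2 = np.empty(reps)
    for k in range(reps):
        Z = rng.standard_normal((n, p))
        lam = np.linalg.eigvalsh(Z.T @ Z)                 # eigenvalues of W ~ W_3(n, I_3)
        mu2[k] = 4.0 * lam[0] * lam[-1] / (lam[0] + lam[-1]) ** 2
    ax.hist(mu2, bins=60, density=True)                   # empirical density of mu_1^2
    ax.set_title(f"n = {n}")
    ax.set_xlabel("mu_1^2")
fig.tight_layout()
plt.show()
```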


Chapter 4

Discussion and further research

4.1 Simplifying and approximating the PDF

The probability density function f_{µ_1²}(x) is very long and impractical to use; if it could be simplified further, that would be of interest. There are various transformations of the hypergeometric function, such as the quadratic or Euler transformations. It would be interesting to investigate whether a simplification using one of the aforementioned transformations is possible.

An approximation of the probability density function would also be of interest. Since previous simulations have shown that the function resembles a linear mixture of beta distributions, it seems plausible that there is some way to approximate the probability density function using such a linear mixture of beta densities.

4.2 Solution for an arbitrary choice of p

Although it is probably possible to use the same method of proof for p ≥ 4, the number of terms needed grows too rapidly for the method to be feasible. Another method of proof would therefore be of interest.


Bibliography

[1] G. Casella and R.L. Berger. Statistical Inference. Brooks/Cole, 2002.
[2] I.S. Gradshteyn and I.M. Ryzhik. Table of Integrals, Series, and Products. Academic Press, Inc., 1965.
[3] K. Gustafson. Antieigenvalue Analysis. World Scientific Publishing, 2012.
[4] R.A. Horn and C.R. Johnson. Topics in Matrix Analysis. Cambridge University Press, 1991.
[5] R.A. Horn and C.R. Johnson. Matrix Analysis, 2nd edition. Cambridge University Press, 2013.
[6] A.T. James. The distribution of the latent roots of the covariance matrix. Ann. Math. Stat., 31(1):151–158, 1960.
[7] R. Johnson and D. Wichern. Applied Multivariate Statistical Analysis. Pearson Education Limited, 2014.
[8] R.J. Muirhead. Aspects of Multivariate Statistical Theory. John Wiley and Sons, 1982.


Linköping University Electronic Press

Copyright

The publishers will keep this document online on the Internet, or its possible replacement, from the date of publication barring exceptional circumstances.

The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.

