
Limit theorems for generalizations of GUE

random matrices

MARTIN BENDER

Doctoral thesis Stockholm, Sweden 2008


TRITA-MAT-08-MA-05 ISSN 1401-2278 ISRN KTH/MAT/DA--08/04--SE ISBN 978-91-7178-973-0 KTH Matematik SE-100 44 Stockholm SWEDEN

Academic dissertation which, with the permission of the Royal Institute of Technology (Kungl Tekniska högskolan), is submitted for public examination for the degree of Doctor of Philosophy in mathematics on Friday, June 13, 2008, at 10:00 in Kollegiesalen, F3, Kungl Tekniska högskolan, Lindstedtsvägen 26, Stockholm.

© Martin Bender, May 2008. Printed by Universitetsservice US-AB.


Abstract

This thesis consists of two papers devoted to the asymptotics of random matrix ensembles and measure-valued stochastic processes which can be considered as generalizations of the Gaussian unitary ensemble (GUE) of Hermitian matrices H = A + A†, where the entries of A are independent identically distributed (iid) centered complex Gaussian random variables.

In the first paper, a system of interacting diffusing particles on the real line is studied; special cases include the eigenvalue dynamics of matrix-valued Ornstein-Uhlenbeck processes (Dyson's Brownian motion). It is known that the empirical measure process converges weakly to a deterministic measure-valued function and that the appropriately rescaled fluctuations around this limit converge weakly to a Gaussian distribution-valued process. For a large class of analytic test functions, explicit formulae are derived for the mean and covariance functionals of this fluctuation process.

The second paper concerns a family of random matrix ensembles interpolating between the GUE and the Ginibre ensemble of n × n matrices with iid centered complex Gaussian entries. The asymptotic spectral distribution in these models is uniform in an ellipse in the complex plane, which collapses to an interval of the real line as the degree of non-Hermiticity diminishes. Scaling limit theorems are proven for the eigenvalue point process at the rightmost edge of the spectrum, and it is shown that a non-trivial transition occurs between Poisson and Airy point process statistics when the ratio of the axes of the supporting ellipse is of order n^{-1/3}.

Sammanfattning (Swedish summary)

This thesis consists of two scientific papers concerning limit theorems for random matrices and measure-valued stochastic processes. The models studied can be regarded as generalizations of the Gaussian unitary ensemble (GUE) of Hermitian n × n matrices H = A + A†, where A is a matrix whose entries are independent, identically distributed, centered, complex Gaussian random variables.

Paper I considers a system of interacting diffusing particles on the real line; certain special cases of this model can be interpreted as the eigenvalue dynamics of matrix-valued Ornstein-Uhlenbeck processes (Dyson's Brownian motion). It is known that the empirical measure process converges weakly to a deterministic measure-valued function and that the fluctuations around this limit, in a suitable scaling, converge weakly to a distribution-valued Gaussian process. For a large class of analytic test functions, explicit formulae are derived for the mean and covariance functionals of this fluctuation process.

Paper II treats a family of random matrix ensembles interpolating between the GUE and the Ginibre ensemble, consisting of matrices A as above. For this model the eigenvalues are complex and asymptotically uniformly distributed in an ellipse in the complex plane. Scaling limit theorems are proven, for a general choice of the interpolation parameter, for the eigenvalue with maximal real part and for the eigenvalue point process around it. When the ratio of the axes of the asymptotic ellipse is of order n^{-1/3}, a transition occurs between the Airy point process and Poisson process behaviours typical of the GUE and the Ginibre ensemble, respectively.


Acknowledgments

I am deeply indebted to my advisor Kurt Johansson for his constant support and encouragement. His wealth of fruitful ideas and mathematical expertise is remarkable, as is the humility with which he has shared it.

My friends and colleagues at the department of mathematics have contributed to an excellent ambiance of generosity, both mathematically and socially. In particular, I would like to thank Alan Sola for his careful proofreading and valuable comments.

I am grateful to my father for the leap of faith into mathematics he once made, and the influence it has had on me.

Finally, I thank him and the rest of my family for always being there. Martin Bender


Contents

1 Random matrix theory
2 The Gaussian unitary ensemble
3 Limit theorems for the GUE
4 Random measures
5 Non-Hermitian random matrices and the ellipse ensemble
6 Dyson's Brownian motion and the log Coulomb gas
7 Overview of paper I
8 Overview of paper II
Bibliography

List of Papers

Paper I: Global fluctuations in general β Dyson's Brownian motion
Stochastic Process. Appl. 118, no. 6 (2008), 1022–1042.

Paper II: Edge scaling limits for a family of non-Hermitian random matrix ensembles


Introduction

1 Random matrix theory

The theory of random matrices is a branch of probability theory, with many connections to other parts of mathematics, originating in and motivated by mathematical physics. More specifically, in the study of highly excited energy levels (eigenvalues) of complicated quantum systems, such as atomic nuclei, the exact properties of the Hamilton operator of the system are often either not known or too complicated to be analyzed. Nevertheless, the eigenvalues exhibit quite characteristic statistical properties which do not depend on the details of the operator. Therefore the Hamiltonian (an operator on an infinite-dimensional Hilbert space) can be modeled by a random operator on a space of finite (but large) dimension, represented by a matrix with random elements. A review of the subject from a mathematical physics point of view can be found in [11].

In mathematical terms, the general program of random matrix theory is thus to consider a so-called random matrix ensemble, consisting of a probability measure P_n on some space M(n) of n × n matrices, and analyze the asymptotic properties of the induced probability measure on the eigenvalues as n tends to infinity. This is in general very difficult, since the eigenvalues are the zeros of a polynomial of degree n in the matrix elements. For certain choices of the measure P_n, however, the eigenvalue measure can be calculated explicitly and detailed results about the asymptotics of the eigenvalues can be proven. In many cases, such theorems have subsequently been generalized to larger classes of measures, and are often conjectured to hold true in even greater generality; these claims are referred to as universality conjectures.

The models studied in this thesis are generalizations in different directions of the Gaussian Unitary Ensemble (GUE), which is perhaps the most well-studied random matrix ensemble. In fact, much of the random matrix literature can be considered to concern the GUE or some natural generalization of it, so an appropriate starting point is the definition and elementary properties of this model. After a brief discussion of those limit theorems for the GUE which will subsequently be generalized, the GUE is put into more general contexts so that it becomes apparent how the model can be naturally extended. The specific models studied are then introduced, and finally summaries of the included papers are provided.

This introduction is intended to be accessible to a general mathematical audience and provide the background material required to follow the included articles. Most of the material is standard and lack of specific references does not imply any claim of originality. For a general introduction to random matrix theory, see e.g. the standard reference work [18].

2 The Gaussian unitary ensemble

2.1 Definition

For each positive integer n, let H_n denote the set of Hermitian n × n matrices, that is, the set of complex n × n matrices A = (a_{jk})_{j,k=1}^n such that a_{jk} = \overline{a_{kj}}, or A = A†. Since an element A ∈ H_n is uniquely specified by n^2 real parameters, for instance the set {Re(a_{jk})}_{j≤k} ∪ {Im(a_{jk})}_{j>k}, H_n can be identified with R^{n^2}. The GUE is the probability measure

dP_n(A) = C_n e^{-\operatorname{Tr} A^2} dA = C_n \exp\Big( -\sum_{j=1}^n a_{jj}^2 - 2 \sum_{1 \le j < k \le n} |a_{jk}|^2 \Big) dA    (2.1)

on H_n, where dA is Lebesgue measure on R^{n^2} and C_n is a normalizing constant. It is clear from the definition that the diagonal entries and the real and imaginary parts of the sub-diagonal entries of a GUE matrix are independent centered normal random variables of variance 1/2 (on the diagonal) and 1/4, respectively. Since the density

F_n(A) = C_n e^{-\operatorname{Tr} A^2}

of the GUE measure satisfies

F_n(U† A U) = F_n(A)    (2.2)

for every unitary n × n matrix U, the GUE is unitarily invariant, meaning that for every continuous function ϕ : H_n → R and U ∈ U(n),

\int_{H_n} ϕ(U† A U) dP_n(A) = \int_{H_n} ϕ(A) dP_n(A).    (2.3)

In fact, up to different normalizations, the GUE is the unique unitarily invariant probability measure on H_n such that the matrix entries are independent.

2.2 The eigenvalue measure

If λ_{1,n} ≥ … ≥ λ_{n,n} are the n real eigenvalues of A ∈ H_n, put

λ(A) = (λ_{1,n}, …, λ_{n,n}),

and for any permutation σ ∈ Sn, define

λ_σ(A) = (λ_{σ(1),n}, …, λ_{σ(n),n}).

The unitary invariance allows one to explicitly calculate the induced probability measure P̃_n on the (unlabeled) eigenvalues, defined, for every Borel set B ⊂ R^n, by

P̃_n(B) := P_n({A ∈ H_n : λ_σ(A) ∈ B for some σ ∈ S_n}).    (2.4)

For every A ∈ H_n there is a unitary matrix U ∈ U(n) such that U† A U = Λ(λ(A)) = diag(λ(A)), and U is uniquely determined if (say) the first non-zero element in each column is required to be real and positive. The space Ω_n of such unitary matrices can be parameterized by n(n − 1)/2 real variables, p = (p_1, …, p_{n(n−1)/2}), so, disregarding a set of Lebesgue measure 0, this assignment defines a bijection

Ψ_n : R^n_{ord} × Ω_n → H_n,
(λ, p) ↦ U(p) Λ(λ) (U(p))†,

where R^n_{ord} = {x ∈ R^n : x_1 > … > x_n}. It can be shown that the Jacobian of this transformation has the form

Jac(Ψ_n)(λ, p) = Δ_n(λ)^2 g_n(p)

for some integrable function g_n on Ω_n, where

Δ_n(λ) := \prod_{1 ≤ j < k ≤ n} (λ_{k,n} − λ_{j,n}) = det(λ_{k,n}^{j−1})_{j,k=1}^n    (2.5)

is called the Vandermonde determinant. Considering for a moment the induced probability measure on the ordered eigenvalues, a change of variables and the unitary invariance property (2.3) give, for any Borel set B ⊂ R^n_{ord},

P̃^{ord}_n(B) := P_n({A ∈ H_n : λ(A) ∈ B}) = \int_{\{A ∈ H_n : λ(A) ∈ B\}} F_n(A) dA
= \int_{B × Ω_n} F_n(U(p) Λ(λ) U(p)†) Jac(Ψ_n) dλ dp
= \int_{B × Ω_n} F_n(Λ(λ)) Δ_n(λ)^2 g_n(p) dλ dp
= \frac{1}{Z'_n} \int_B |Δ_n(λ)|^2 e^{−\sum_{j=1}^n λ_{j,n}^2} dλ,

where Z'_n is a normalizing constant. Apart from a normalization factor n! due to permutations of the eigenvalues, this measure clearly coincides with the restriction to R^n_{ord} of the joint distribution of the unlabeled eigenvalues. Thus P̃_n is absolutely continuous with respect to Lebesgue measure on R^n and has density

f_n(x_1, …, x_n) = \frac{1}{Z_n} |Δ_n(x)|^2 e^{−\sum_{j=1}^n x_j^2},    (2.6)

where

Z_n = n! Z'_n = \int_{R^n} |Δ_n(x)|^2 e^{−\sum_{j=1}^n x_j^2} dx

is a normalization constant (the partition function).

2.3 Marginal densities and the Hermite kernel

The particular structure of the eigenvalue density for the GUE allows for the marginal densities to be calculated. After suitable row operations on the Vandermonde determinant it is seen that it can be written

Δ_n(x) = \Big( \prod_{j=0}^{n−1} \frac{1}{κ_j} \Big) det(p_{j−1}(x_k))_{j,k=1}^n

for an arbitrary choice of polynomials p_j of degree j, where κ_j is the leading coefficient of p_j. Putting B_n = (b_{jk})_{j,k=1}^n, where b_{jk} = p_{j−1}(x_k) e^{−x_k^2/2}, it follows that

f_n(x) = C_{n,κ} (det B_n)^2 = C_{n,κ} det(B_n^T B_n) = C_{n,κ} det(K_n(x_j, x_k))_{j,k=1}^n,    (2.7)

where

K_n(x, y) = \sum_{k=0}^{n−1} p_k(x) p_k(y) e^{−(x^2 + y^2)/2},    (2.8)

and C_{n,κ} = Z_n^{−1} (κ_0 κ_1 ⋯ κ_{n−1})^{−2}. Choosing p_j as the orthonormal polynomials with respect to the measure e^{−x^2} dx, i.e. as the normalized Hermite polynomials h_j, the relation

\int_R K_n(x, y) K_n(y, z) dy = \int_R \Big( \sum_{j,k=0}^{n−1} h_j(x) h_k(z) e^{−(x^2 + z^2)/2} h_j(y) h_k(y) e^{−y^2} \Big) dy
= \sum_{j,k=0}^{n−1} h_j(x) h_k(z) e^{−(x^2 + z^2)/2} δ_{jk} = K_n(x, z)    (2.9)

is satisfied. Let K_n^H denote the function (2.8) for this particular choice of polynomials; it is called the correlation kernel of the GUE, also known as the Hermite kernel. Using the definition of the determinant, it is not difficult to show that (2.9) implies that

\int_R det(K_n^H(x_j, x_k))_{j,k=1}^m dx_m = (n − m + 1) det(K_n^H(x_j, x_k))_{j,k=1}^{m−1}.

Repeated application of this formula finally allows for the m-dimensional marginal density u_n^m of P̃_n to be explicitly calculated from (2.7);

u_n^m(x_1, …, x_m) = \int_{R^{n−m}} f_n(x_1, …, x_n) dx_{m+1} ⋯ dx_n = \frac{(n − m)!}{n!} det(K_n^H(x_j, x_k))_{j,k=1}^m,    (2.10)

for 0 ≤ m ≤ n, where the normalization is determined by the requirement u_n^0 = 1. The Hermite kernel can be simplified by means of the Christoffel-Darboux formula, which states that, for any sequence of orthonormal polynomials (p_j)_{j=0}^∞ and any positive integer k,

\sum_{j=0}^{k−1} p_j(x) p_j(y) = \frac{κ_{k−1}}{κ_k} \, \frac{p_k(x) p_{k−1}(y) − p_{k−1}(x) p_k(y)}{x − y},

where κ_j is the leading coefficient of p_j. Hence

K_n^H(x, y) = \sqrt{\frac{n}{2}} \, \frac{h_n(x) h_{n−1}(y) − h_{n−1}(x) h_n(y)}{x − y} \, e^{−(x^2 + y^2)/2}.    (2.11)
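The Christoffel-Darboux reduction is easy to verify numerically. The following sketch (not part of the thesis; the helper names are ours) generates the weighted orthonormal Hermite functions ψ_k(x) = h_k(x) e^{−x²/2} by their three-term recurrence and checks that the sum (2.8) and the closed form (2.11) agree:

```python
import numpy as np

def hermite_functions(x, n):
    # psi_k(x) = h_k(x) e^{-x^2/2} for k = 0..n-1, where the h_k are orthonormal
    # with respect to e^{-x^2} dx; built by the three-term recurrence
    # psi_{k+1} = sqrt(2/(k+1)) x psi_k - sqrt(k/(k+1)) psi_{k-1}.
    psi = [np.pi ** -0.25 * np.exp(-x * x / 2)]
    if n > 1:
        psi.append(np.sqrt(2.0) * x * psi[0])
    for k in range(1, n - 1):
        psi.append(np.sqrt(2.0 / (k + 1)) * x * psi[k]
                   - np.sqrt(k / (k + 1.0)) * psi[k - 1])
    return psi

def kernel_sum(x, y, n):
    # Definition (2.8) with the choice p_k = h_k (weight already folded in).
    px, py = hermite_functions(x, n), hermite_functions(y, n)
    return sum(a * b for a, b in zip(px, py))

def kernel_cd(x, y, n):
    # Christoffel-Darboux form (2.11).
    px, py = hermite_functions(x, n + 1), hermite_functions(y, n + 1)
    return np.sqrt(n / 2.0) * (px[n] * py[n - 1] - px[n - 1] * py[n]) / (x - y)

print(abs(kernel_sum(0.3, -0.7, 12) - kernel_cd(0.3, -0.7, 12)))  # tiny
```

The two expressions agree to machine precision, which is a useful sanity check before using (2.11) in asymptotic computations.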

3 Limit theorems for the GUE

Since all marginal densities of the GUE eigenvalue distribution can be expressed as determinants of the Hermite kernel (2.11), the study of the large n behaviour of the eigenvalues reduces to asymptotic analysis of the Hermite polynomials. It should be emphasized that, although the theorems proven in this way apply only to the GUE, most results are conjectured (and in many cases, proven) to carry over to more general situations. Loosely speaking, these universality conjectures claim that the local statistical behaviour of the eigenvalues is independent of the particular choice of probability measure on the matrix elements.

The questions one might ask about the asymptotics of the eigenvalues fall into two basic categories. On the one hand, questions about the global behaviour, concerning properties of the spectrum as a whole, without distinguishing the individual eigenvalues, and on the other hand, questions pertaining to the local properties, such as nearest neighbour distances, largest eigenvalue distribution, and so on.

3.1 Wigner’s semi-circle law

Let {ξ_j^n}_{j=1}^n := {λ_{1,n}, …, λ_{n,n}} denote the set of unlabeled eigenvalues of an n × n GUE matrix A. It turns out that, with the normalization chosen, the largest eigenvalue of the GUE will be of order \sqrt{2n}, so to capture the global behaviour, define the rescaled eigenvalues ξ̃_j^n = ξ_j^n/\sqrt{2n}. A natural way to study the global spectrum is to consider the linear statistics, which, for an appropriate class of real functions ϕ, are defined as the random variables

⟨X_n, ϕ⟩ := \int ϕ(s) dX_n(s) = \frac{1}{n} \sum_{j=1}^n ϕ(ξ̃_j^n),

where

X_n = \frac{1}{n} \sum_{j=1}^n δ_{ξ̃_j^n}

is a random probability measure, the empirical measure. A classical result on the global eigenvalue distribution, in its weak form due to Wigner and true for general Hermitian matrices with independent entries with variances as in the GUE, is the following:

Theorem 3.1 Let {ξ_j^n}_{j=1}^n be the eigenvalues of an n × n GUE matrix A and put ξ̃_j^n = ξ_j^n/\sqrt{2n}. For every bounded continuous real function ϕ,

\lim_{n→∞} E_n\Big[ \frac{1}{n} \sum_{j=1}^n ϕ(ξ̃_j^n) \Big] = \int_R ϕ(t) dµ(t),    (3.1)

where

dµ(t) = \frac{2}{π} \sqrt{1 − t^2} \, χ_{\{|t| ≤ 1\}} dt    (3.2)

is Wigner’s semi-circle law.

This result can be strengthened; in the terminology of random probability measures, the empirical measure X_n converges weakly in M(R) (in fact, even almost surely) to the deterministic measure µ ∈ M(R), see Section 4.1.
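Theorem 3.1 is also easy to probe by simulation. The sketch below (ours, not from the thesis) samples one GUE matrix with the normalization dP_n(A) ∝ e^{−Tr A²} and compares the empirical measure of the rescaled eigenvalues with the semi-circle mass of an interval:

```python
import numpy as np

def gue_eigenvalues(n, rng):
    # GUE with density proportional to exp(-Tr A^2): variance 1/2 on the
    # diagonal, variance 1/4 for real and imaginary parts below the diagonal.
    x = rng.normal(scale=0.5, size=(n, n))
    y = rng.normal(scale=0.5, size=(n, n))
    h = np.tril(x, -1) + 1j * np.tril(y, -1)
    h = h + h.conj().T + np.diag(rng.normal(scale=np.sqrt(0.5), size=n))
    return np.linalg.eigvalsh(h)

def semicircle_mass(a, b):
    # mu([a, b]) for dmu(t) = (2/pi) sqrt(1 - t^2) dt, via the antiderivative
    # F(t) = (t sqrt(1 - t^2) + arcsin t)/pi.
    F = lambda t: (t * np.sqrt(1 - t * t) + np.arcsin(t)) / np.pi
    return F(b) - F(a)

rng = np.random.default_rng(0)
n = 400
lam = gue_eigenvalues(n, rng) / np.sqrt(2 * n)  # rescaled spectrum, ~ [-1, 1]
emp = np.mean((lam > -0.5) & (lam < 0.5))
print(emp, semicircle_mass(-0.5, 0.5))          # close already for n = 400
```

Even a single matrix of moderate size reproduces the semi-circle mass of an interval to a few percent, illustrating the self-averaging (almost sure convergence) mentioned above.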

3.2 Global fluctuations

Theorem 3.1 can be seen as a law of large numbers for the linear statistics, and it is natural to study the rate of convergence, i.e. an analogue of the central limit theorem. Note that if {z_j}_{j=1}^∞ is a set of independent, µ-distributed random variables, it follows from the central limit theorem that

\sqrt{n} \Big( \frac{1}{n} \sum_{j=1}^n ϕ(z_j) − \int_R ϕ(t) dµ(t) \Big) \xrightarrow{d} N(0, σ^2[ϕ]),    (3.3)

where

σ^2[ϕ] = \int_R (ϕ(t))^2 dµ(t) − \Big( \int_R ϕ(t) dµ(t) \Big)^2.    (3.4)

The eigenvalues of a GUE matrix are clearly not independent; in fact, the tendency towards regular spacing, level repulsion, between eigenvalues, apparent in the form of the Vandermonde determinant, is so strong that the fluctuations of ⟨X_n, ϕ⟩ around its mean are not of order n^{−1/2}, as the central limit theorem would suggest, but of order n^{−1}.

Theorem 3.2 (Johansson, 1998, [15]) Let {ξ_j^n}_{j=1}^n be the eigenvalues of an n × n GUE matrix A and put ξ̃_j^n = ξ_j^n/\sqrt{2n}. If ϕ ∈ C^{2+ε}(R) ∩ L^∞(R) for some ε > 0, then

n \Big( \frac{1}{n} \sum_{j=1}^n ϕ(ξ̃_j^n) − \int_R ϕ(t) dµ(t) \Big) \xrightarrow{d} N(0, σ^2[ϕ]),    (3.5)

where

σ^2[ϕ] = \frac{1}{2π^2} \int_{−1}^{1} \frac{ϕ(s)}{\sqrt{1 − s^2}} \Big( p.v. \int_{−1}^{1} \frac{ϕ'(t) \sqrt{1 − t^2}}{s − t} dt \Big) ds = \frac{1}{π^2} \sum_{k=0}^{∞} k \Big( \int_0^π ϕ(\cos θ) \cos(kθ) dθ \Big)^2.    (3.6)

Theorem 3.2 can equivalently be expressed in terms of convergence of the Fourier transform (characteristic function) of ⟨X_n, ϕ⟩; for every s ∈ R,

\lim_{n→∞} E_n\Big[ \exp\Big( is \Big( \sum_{j=1}^n ϕ(ξ̃_j^n) − n \int_R ϕ(x) dµ(x) \Big) \Big) \Big] = e^{−σ^2[ϕ] s^2/2}.    (3.7)

The techniques used to prove this theorem involve analysis of the Stieltjes transform of the empirical measure X_n, where the Stieltjes transform of a finite measure ν on R is the analytic function ν̂ on C \ R defined by

ν̂(z) = \int_R \frac{dν(x)}{x − z}.    (3.8)

The variance (3.6) in Theorem 3.2 is a complicated, but explicit, quadratic functional of ϕ. However, if one allows for complex test functions of the form ϕ_z(x) = (x − z)^{−1} for z ∈ C \ R, the complex "variance" σ^2[ϕ_z], defined by (3.7), takes the simple form

σ^2[ϕ_z] = \frac{1}{4(1 − z^2)^2}.

The central limit theorem proven in [15] applies in much greater generality than just the GUE case cited in Theorem 3.2. There are also many other related results for various classes of random matrix ensembles, see for instance [1], [3], [6], [19] and [23].
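The series form of (3.6) is convenient for numerics. As an illustration (ours, with a hypothetical helper sigma2), the sketch below evaluates the limiting variance by a midpoint rule; for ϕ(x) = x² only the k = 2 coefficient survives, and the series sums to 1/8:

```python
import numpy as np

def sigma2(phi, kmax=60, m=20_000):
    # Second expression in (3.6):
    #   (1/pi^2) * sum_{k>=1} k * ( int_0^pi phi(cos t) cos(k t) dt )^2,
    # truncated at kmax, with the integrals computed by the midpoint rule.
    theta = (np.arange(m) + 0.5) * np.pi / m
    vals = phi(np.cos(theta))
    dtheta = np.pi / m
    total = 0.0
    for k in range(1, kmax + 1):
        c = np.sum(vals * np.cos(k * theta)) * dtheta
        total += k * c * c
    return total / np.pi ** 2

print(sigma2(lambda x: x ** 2))  # ≈ 1/8
print(sigma2(lambda x: x))       # ≈ 1/4
```

For ϕ(x) = x the only surviving coefficient is k = 1, with ∫₀^π cos²θ dθ = π/2, giving σ² = 1/4; both values are reproduced by the truncated series.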

3.3 Largest eigenvalue distribution

One of the most striking results on the local statistics of the GUE is the scaling limit law of the largest eigenvalue, λ_{1,n}, due to Tracy and Widom. The Airy function, Ai : R → R, is a C^∞ function defined by

Ai(x) = \frac{1}{2π} \int_{−∞}^{∞} e^{i(t + iδ)^3/3 + i(t + iδ)x} dt,

for δ > 0, independently of the choice of δ, by Cauchy's theorem. Let the Airy kernel, K_A : R^2 → R, be the function

K_A(x_1, x_2) = \frac{Ai(x_1) Ai'(x_2) − Ai'(x_1) Ai(x_2)}{x_1 − x_2},    (3.9)

defined by continuity for x_1 = x_2. An alternative representation of the Airy kernel is given by the double integral formula

K_A(x_1, x_2) = \frac{1}{4π^2} \iint_{γ,γ} \frac{e^{iu^3/3 + ix_1 u + iv^3/3 + ix_2 v}}{i(u + v)} \, du \, dv,    (3.10)

where γ is the contour R ∋ t ↦ γ(t) = t + iδ for some δ > 0.

Theorem 3.3 (Tracy, Widom, 1994, [26]) Let λ_{1,n} be the largest eigenvalue of an n × n GUE matrix A. Then

\lim_{n→∞} P_n\Big[ 2n^{2/3} \Big( \frac{λ_{1,n}}{\sqrt{2n}} − 1 \Big) ≤ t \Big] = F_{TW}(t),    (3.11)

where the Tracy-Widom distribution, F_{TW}, is given by

F_{TW}(t) = \sum_{m=0}^{∞} \frac{(−1)^m}{m!} \int_{(t,∞)^m} det(K_A(x_j, x_k))_{j,k=1}^m d^m x.    (3.12)

Tracy and Widom also proved that the distribution function (3.12) has a representation

F_{TW}(t) = \exp\Big( −\int_t^∞ (x − t) q^2(x) dx \Big),    (3.13)

where q(t) is the unique solution to the Painlevé II equation

q''(t) = t q(t) + 2 q(t)^3,

with the asymptotic behaviour

q(t) ∼ Ai(t) as t → ∞.

For comparison, the classical extreme value theorem for a set {z_i}_{i=1}^n of independent, identically F-distributed random variables, for a large class of probability distributions F, states that the appropriately shifted and rescaled maximum converges in distribution to the Gumbel distribution, F_G(t) = e^{−e^{−t}} (see e.g. [20] for necessary and sufficient conditions on F). To take a specific example, if z_i ∈ N(0, 1/2), then

\lim_{n→∞} P\Big[ 2 \log n \Big( (\log n)^{−1/2} \max_{1≤i≤n} z_i − 1 + \frac{\log(4π \log n)}{4 \log n} \Big) ≤ t \Big] = F_G(t).    (3.14)

There are several interesting features of Theorem 3.3, for one thing the probability distribution F_{TW} itself, which, in the last few decades, has emerged in a series of seemingly unrelated contexts in the scaling limit of various discrete probabilistic models, see for example [2], [13] and [14]. It thus appears to be a natural extreme value distribution, quite distinct from the classical extreme value distributions for iid random variables such as F_G. Note also the scaling factor n^{2/3}, which grows much faster than the corresponding logarithm in (3.14); again this is a manifestation of the rigidity of the eigenvalues.
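A small Monte Carlo experiment (ours, not part of the thesis) makes the edge rescaling in Theorem 3.3 concrete: already for moderate n, the rescaled largest eigenvalue 2n^{2/3}(λ_{1,n}/√(2n) − 1) fluctuates on scale O(1) around a value slightly below zero (the mean of F_TW is ≈ −1.77):

```python
import numpy as np

rng = np.random.default_rng(2)

def gue_largest(n):
    # Largest eigenvalue of one GUE draw (normalization dP_n ∝ exp(-Tr A^2)).
    x = rng.normal(scale=0.5, size=(n, n))
    y = rng.normal(scale=0.5, size=(n, n))
    h = np.tril(x, -1) + 1j * np.tril(y, -1)
    h = h + h.conj().T + np.diag(rng.normal(scale=np.sqrt(0.5), size=n))
    return np.linalg.eigvalsh(h)[-1]

n, reps = 100, 200
scaled = np.array([2 * n ** (2 / 3) * (gue_largest(n) / np.sqrt(2 * n) - 1)
                   for _ in range(reps)])
print(scaled.mean(), scaled.std())  # O(1) fluctuations, centered below zero
```

Note the contrast with (3.14): no logarithms appear, and the entire distribution of the maximum lives on a single O(1) scale after the n^{2/3} rescaling.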

4 Random measures

Random matrix models in general, and the GUE in particular, are most naturally studied in the framework of point processes or, more generally, random measures. For a complete account, see [17]. The general setup is as follows.

Let Λ be a complete separable metric space. A boundedly finite measure µ on Λ is a Borel measure such that µ(A) < ∞ for every bounded Borel set A ⊂ Λ. Let M̂(Λ) be the set of all boundedly finite measures on Λ, and define B(M̂(Λ)) to be the smallest σ-algebra on M̂(Λ) such that the mappings M̂(Λ) ∋ µ ↦ µ(A) ∈ R ∪ {∞} are measurable for each Borel set A ⊂ Λ. (Actually, M̂(Λ) can itself be given the structure of a complete separable metric space, and B(M̂(Λ)) is the Borel σ-algebra with respect to this metric structure.)

A random measure on Λ is a random element of M̂(Λ), i.e. a measurable function from a probability space (Ω, B, P) to (M̂(Λ), B(M̂(Λ))).

4.1 Measure-valued and distribution-valued processes

In paper I, stochastic processes taking measures and distributions (linear functionals) as values are considered. A sketch of the abstract definitions of these objects is presented in this section as a background, although the technical details will not be used in the sequel.

A random probability measure on Λ is defined in the same way as a general random measure, but with M̂(Λ) replaced by the space M(Λ) of Borel probability measures on Λ.

Because of the metric space structure of M(Λ), the space C([0, ∞), M(R)) of continuous functions from [0, ∞) to M(R) is a measurable space with the Borel σ-algebra with respect to the topology of uniform convergence on compact sets. A probability measure-valued process can now be defined as a random element of C([0, ∞), M(R)).

To generalize the concept of a random measure on R slightly, consider some normed linear space U(R) of test functions on R and let U′(R) be the space of continuous linear functionals on U(R), which is again a normed linear space and therefore a measurable space with the Borel σ-algebra. A random distribution is a random element of U′(R).

For any fixed T > 0, the space C([0, T], U′(R)) of continuous functions from [0, T] to U′(R) is also a normed linear space with norm ‖u‖ = \sup_{0≤t≤T} ‖u_t‖_{U′(R)}, so, equipped with the Borel σ-algebra with respect to the norm topology, C([0, T], U′(R)) is a measurable space too. A distribution-valued process is a random element of C([0, T], U′(R)). (In fact, distribution-valued processes can be defined even when the space U of test functions is not normed but only a nuclear Fréchet space; the technicalities will be omitted here.)

Weak convergence in C([0, ∞), M(R)) of a sequence (µ_n^t)_{n=1}^∞ of probability measure-valued processes to a limit µ^t is equivalent to convergence of the finite dimensional distributions, which in this setting means that, for every finite set {ϕ_j}_{j=1}^k of bounded continuous real functions and non-negative real numbers {t_j}_{j=1}^k,

\Big( \int ϕ_1 dµ_n^{t_1}, …, \int ϕ_k dµ_n^{t_k} \Big) \xrightarrow{d} \Big( \int ϕ_1 dµ^{t_1}, …, \int ϕ_k dµ^{t_k} \Big).

Similarly, the sequence (u_n^t)_{n=1}^∞ of distribution-valued processes converges weakly in C([0, T], U′(R)) to u^t if and only if

( u_n^{t_1}(ϕ_1), …, u_n^{t_k}(ϕ_k) ) \xrightarrow{d} ( u^{t_1}(ϕ_1), …, u^{t_k}(ϕ_k) ),

for every finite set {ϕ_j}_{j=1}^k ⊂ U(R) of test functions and {t_j}_{j=1}^k ⊂ [0, T].

4.2 Point processes

Let N(Λ) ⊂ M̂(Λ) be the space of boundedly finite simple counting measures on Λ, meaning that µ(A) ∈ N for every bounded Borel set A ⊂ Λ and µ({x}) ∈ {0, 1} for every point x ∈ Λ, if µ ∈ N(Λ). Define the σ-algebra B(N(Λ)) as in the general random measure case. A point process (or random counting measure) on Λ is a random element of N(Λ). Since every µ ∈ N(Λ) has the form

µ = \sum_{i∈I} δ_{ξ_i}

for some at most countable set ξ = {ξ_i}_{i∈I} ⊂ Λ of distinct points such that ξ ∩ A is finite for every bounded set A ⊂ Λ, N(Λ) can be identified with the family of such sets.

Let ν be a reference measure on Λ (in the cases considered here, Λ will be R or R^2 and ν Lebesgue measure). If ξ is a point process on Λ and, for some n ≥ 1, there exists a measurable function ρ_n : Λ^n → R such that for every bounded measurable function ϕ : Λ^n → R,

E\Big[ \sum_{ξ_{k_j} ∈ ξ} ϕ(ξ_{k_1}, …, ξ_{k_n}) \Big] = \int_{Λ^n} ϕ(x_1, …, x_n) ρ_n(x_1, …, x_n) d^n ν(x),    (4.1)


then ρ_n is called an n-point correlation function of ξ. (The sum on the left hand side of (4.1) is over all n-tuples of distinct points, including permutations, of ξ.) Note that if there is a fixed non-negative integer m such that |ξ| = m almost surely, the n-point correlation functions are simply multiples of the n-dimensional marginal densities. It can be shown that if a point process has correlation functions ρ_n for every n ≥ 1, then

E\Big[ \prod_j (1 + ϕ(ξ_j)) \Big] = \sum_{n=0}^{∞} \frac{1}{n!} \int_{Λ^n} \Big( \prod_{j=1}^n ϕ(x_j) \Big) ρ_n(x_1, …, x_n) d^n ν(x)    (4.2)

for every bounded measurable function ϕ with bounded support (by definition, the product over the empty index set is 1). Here the product on the left hand side is over all particles in the point process; since there are only finitely many particles in each bounded set, the product is finite for each realization. A comprehensive introduction to point processes in general can be found in [5].

4.3 Determinantal point processes

A point process for which all correlation functions exist and are of the form

ρ_n(x_1, …, x_n) = det(K(x_i, x_j))_{i,j=1}^n    (4.3)

for some measurable function K : Λ^2 → C (the correlation kernel) is called a determinantal (point) process. An accessible introduction specifically to determinantal point processes is provided in [16].

The term "kernel" alludes to the fact that if K is continuous, it can be considered as the kernel of the (compact) integral operator

T_K : L^2(Λ) → L^2(Λ), \quad (T_K f)(s) = \int_Λ K(s, t) f(t) dν(t).

If T_K is locally trace class (see [24]), the right hand side of Equation (4.2) coincides with the Fredholm determinant

det(I + ϕT_K) = \prod_j (1 + λ_j(ϕT_K)),

where λ_1(T) ≤ λ_2(T) ≤ … are the (at most countably many) eigenvalues of a compact operator T on L^2(Λ), and ϕT_K is the integral operator with kernel ϕ(s)K(s, t).

For the remainder of this section, let ξ be a determinantal point process on Λ = R with correlation kernel K unless otherwise stated; everything will apply to determinantal processes ζ = {ζ_j} = {(ξ_j, η_j)} on Λ = R^2 as well, with the modifications explicitly mentioned.

In general, ξ can have (countably) infinitely many and arbitrarily large points, with high probability. However, if there is a t ∈ R such that E[|ξ ∩ (t, ∞)|] < ∞, ξ is said to have a rightmost or last particle almost surely (for the case Λ = R^2, substitute (t, ∞) × R for (t, ∞)). The last particle distribution function, F, of ξ is then defined as

F(t) = P[ |ξ ∩ (t, ∞)| = 0 ].

Choosing ϕ(x) = −χ_{(t,s)}(x) and formally taking the s → ∞ limit in (4.2) suggests that the last particle distribution can be written

F(t) = \sum_{m=0}^{∞} \frac{(−1)^m}{m!} \int_{(t,∞)^m} det(K(x_j, x_k))_{j,k=1}^m d^m x;    (4.4)

indeed this can be justified provided a last particle is known to exist.

As a simple, somewhat degenerate example, consider the Poisson process on R with intensity λ(x), which can be defined as a determinantal point process ξ_P = {ξ_j^P} on R with correlation kernel K_P(x_1, x_2) = λ(x_1) δ_{x_1 x_2}. This definition is equivalent to the standard characterization in terms of finite dimensional distributions, i.e. joint distributions of random variables of the form |ξ_P ∩ B_j| for bounded disjoint Borel sets B_j ⊂ R, which for the Poisson process are independent and Po(\int_{B_j} λ(x) dx)-distributed. The independence property corresponds to the vanishing of the correlation kernel off the diagonal, so for general determinantal point processes there is a complicated dependence structure between the particles. If the intensity λ(x) decreases rapidly enough as x → ∞, ξ_P will have a last particle almost surely; to be specific, let λ(x) = e^{−x}. Then, by (4.4),

F_P(t) = \sum_{m=0}^{∞} \frac{(−1)^m}{m!} \int_{(t,∞)^m} det(K_P(x_j, x_k))_{j,k=1}^m d^m x = \sum_{m=0}^{∞} \frac{(−1)^m}{m!} \int_{(t,∞)^m} \prod_{i=1}^m e^{−x_i} d^m x = F_G(t).    (4.5)
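The identity (4.5) can also be seen in simulation. The sketch below (ours; the helper name is hypothetical) samples the Poisson process with intensity e^{−x} restricted to (a, ∞) and compares the empirical last particle distribution with F_G:

```python
import numpy as np

rng = np.random.default_rng(1)

def last_particle(a=-2.0):
    # On (a, oo) the process has Po(e^{-a}) points, each distributed as
    # a + Exp(1) (normalized density e^{-(x-a)}); the maximum is the last
    # particle, up to the rare event (prob. exp(-e^{-a})) of no points at all.
    k = rng.poisson(np.exp(-a))
    return a if k == 0 else (a + rng.exponential(size=k)).max()

samples = np.array([last_particle() for _ in range(20_000)])
print(np.mean(samples <= 0.0), np.exp(-1.0))  # empirical vs F_G(0) = e^{-1}
```

The empirical distribution function of the last particle matches F_G(t) = exp(−e^{−t}) pointwise, up to Monte Carlo error and the negligible truncation at a.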


4.4 Scaling limits of eigenvalue point processes

In the language of point processes, the P̃_n-distributed eigenvalues of the GUE constitute a (finite) determinantal point process ξ^{H_n} = {ξ_j^{H_n}}_{j=1}^n on R with correlation kernel K_n^H, the realizations or point configurations of which are simply the unordered sets of eigenvalues. (Strictly speaking, the eigenvalues need not be simple for every realization, but the event that there are eigenvalues with multiplicity > 1 has probability 0.) The concept of a point process provides a useful tool for studying the asymptotic local properties of random matrix spectra.

Let ξ^n = {ξ_j^n}_{j=1}^n be the sequence of eigenvalue point processes (on R, if the eigenvalues are real) of some random matrix ensemble. Instead of considering the limiting behaviour of a single, appropriately rescaled, eigenvalue, as in Theorem 3.3, one can rescale the whole eigenvalue point process by defining

ξ̃^n = \Big\{ \frac{ξ_j^n − c_n}{a_n} \Big\}_{j=1}^n,    (4.6)

for suitable choices of the scaling parameters a_n and c_n, and try to prove convergence to a limiting (infinite) point process ξ, most naturally in the sense of weak convergence in N(R). This is known as a scaling limit of the eigenvalue point process.

Note that if ξ^n is determinantal with correlation kernel K, ξ̃^n will be determinantal too, with correlation kernel

K̃(x_1, x_2) = a_n K(c_n + a_n x_1, c_n + a_n x_2).    (4.7)

(For the Λ = R^2 case, the rescaling ζ_j^n = g_n(ζ̃_j^n) := (c_n + a_n ξ̃_j^n, b_n η̃_j^n) gives the correlation kernel K̃(z_1, z_2) = a_n b_n K(g_n(z_1), g_n(z_2)) for ζ̃^n.) If ξ̃^n is determinantal, all information about it is encoded in a single function, the correlation kernel K̃_n. In fact, weak convergence of the point process then essentially reduces to convergence of the correlation kernel to some limit K, which will be the correlation kernel of a limiting determinantal point process. For the GUE such an edge scaling limit theorem has been proven.

Theorem 4.1 (Tracy, Widom, 1994, [26]; Forrester, 1993, [8]) The GUE eigenvalue point process ξ^{H_n}, rescaled as in Theorem 3.3, that is,

ξ̃^{H_n} = \Big\{ 2n^{2/3} \Big( \frac{ξ_j^{H_n}}{\sqrt{2n}} − 1 \Big) \Big\}_{j=1}^n,

converges weakly in N(R) to a determinantal point process, ξ^{Ai}, on R, the Airy point process, with correlation kernel K_A.
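At the kernel level, Theorem 4.1 says that the rescaling (4.7) of K_n^H with c_n = √(2n) and a_n = 1/(√2 n^{1/6}) converges to K_A, and this is visible numerically. The sketch below (ours; it assumes scipy is available for the Airy function) evaluates the edge-rescaled Hermite kernel via (2.11) and compares it with (3.9):

```python
import numpy as np
from scipy.special import airy

def hermite_functions(x, n):
    # Weighted orthonormal Hermite functions psi_k(x) = h_k(x) e^{-x^2/2},
    # k = 0..n-1, by the forward three-term recurrence (stable here, since
    # the growing solution is the one being tracked).
    psi = [np.pi ** -0.25 * np.exp(-x * x / 2)]
    if n > 1:
        psi.append(np.sqrt(2.0) * x * psi[0])
    for k in range(1, n - 1):
        psi.append(np.sqrt(2.0 / (k + 1)) * x * psi[k]
                   - np.sqrt(k / (k + 1.0)) * psi[k - 1])
    return psi

def hermite_kernel(x, y, n):
    # Christoffel-Darboux form (2.11) of K_n^H.
    px, py = hermite_functions(x, n + 1), hermite_functions(y, n + 1)
    return np.sqrt(n / 2.0) * (px[n] * py[n - 1] - px[n - 1] * py[n]) / (x - y)

def airy_kernel(u, v):
    # Airy kernel (3.9).
    au, aup, _, _ = airy(u)
    av, avp, _, _ = airy(v)
    return (au * avp - aup * av) / (u - v)

n = 400
cn, an = np.sqrt(2 * n), 1 / (np.sqrt(2) * n ** (1 / 6))
edge = an * hermite_kernel(cn + an * 0.0, cn + an * 0.5, n)
print(edge, airy_kernel(0.0, 0.5))  # close already for n = 400
```

The rescaled kernel agrees with K_A to a few decimal places at moderate n, which is the pointwise shadow of the weak convergence in Theorem 4.1.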

Again, a corresponding scaling limit for a point process of n iid random variables around their maximum is given for comparison. Let {z_i}_{i=1}^∞ be a set of iid N(0, 1/2) random variables and define the point process

ξ^n = \Big\{ 2 \log n \Big( (\log n)^{−1/2} z_i − 1 + \frac{\log(4π \log n)}{4 \log n} \Big) \Big\}_{i=1}^n.    (4.8)

Then ξ^n converges weakly to a Poisson process ξ_P on R with intensity e^{−x}.

5 Non-Hermitian random matrices and the ellipse ensemble

The GUE was defined as a probability measure on the space H_n of Hermitian matrices. Now, if H_1, H_2 are two independent GUEs, consider the (non-Hermitian) random matrix A = (H_1 + iH_2)/\sqrt{2}. It is easy to verify that the entries of A are iid centered complex Gaussians of variance 1/4 with no symmetry conditions whatsoever imposed. This classical so-called Ginibre ensemble on the full space M_n ≅ R^{2n^2} of complex n × n matrices was introduced in [10].

5.1 Introduction of the ellipse ensemble

The observation that the Hermitian and anti-Hermitian parts of a Ginibre matrix are given by independent copies of the GUE suggests the introduction of a family of random matrices

A_τ = \sqrt{\frac{1 + τ}{2}} H_1 + i \sqrt{\frac{1 − τ}{2}} H_2,

for τ ∈ [0, 1], so that the parameter τ determines the extent to which A_τ fails to be Hermitian. Explicitly, if τ ∈ [0, 1) this model can be defined as the probability measure

dP_n^τ(A) = C_n^τ \exp\Big( −\frac{2}{1 − τ^2} \operatorname{Tr}\big( AA† − τ \operatorname{Re} A^2 \big) \Big) dA    (5.1)

(23)

on $M_n$, and the case $\tau = 1$ coincides with the GUE. The random matrix ensemble defined by (5.1) was introduced in [9] and can be analyzed in its own right in much the same way as the GUE. One can explicitly calculate the induced measure on the $n$ unlabeled, complex eigenvalues,

$$d\tilde P_n^\tau(z_1, \ldots, z_n) = \frac{1}{Z_n^\tau} |\Delta_n(z)|^2 \exp\left( -\frac{2}{1-\tau^2} \sum_{j=1}^n \left( |z_j|^2 - \tau \operatorname{Re} z_j^2 \right) \right) d^n z, \tag{5.2}$$

which defines a determinantal point process $\zeta^{\tau n} = \{\zeta_j^{\tau n}\}_{j=1}^n$ on $\mathbb{R}^2$ ($\cong \mathbb{C}$), with correlation kernel

$$K_n^\tau((x_1, y_1), (x_2, y_2)) = \frac{2}{\sqrt{\pi(1-\tau^2)}} \sum_{k=0}^{n-1} \tau^k\, h_k\!\left( \frac{z_1}{\sqrt{\tau}} \right) h_k\!\left( \frac{\bar z_2}{\sqrt{\tau}} \right) \exp\left( -\frac{x_1^2 + x_2^2}{1+\tau} - \frac{y_1^2 + y_2^2}{1-\tau} \right), \tag{5.3}$$

where $z_j = x_j + iy_j$.

5.2 Limit theorems for the ellipse ensemble

As in the case of the GUE, the absolute value of the eigenvalues will typically be of order $\sqrt{n}$. It will therefore be convenient to define the rescaled eigenvalue point process

$$\tilde\zeta^{\tau n} = \left\{ \tilde\zeta_j^{\tau n} \right\}_{j=1}^n := \left\{ \sqrt{\frac{2}{n}}\, \zeta_j^{\tau n} \right\}_{j=1}^n.$$

A counterpart to Wigner's semi-circle law for the asymptotic global eigenvalue density in this model was derived by Ginibre [10] for the case $\tau = 0$, and for general $\tau$ in [9].

Theorem 5.1 (Fyodorov, Sommers, Khoruzhenko, 2000, [9]). Let $\tilde\zeta^{\tau n}$ be the rescaled eigenvalue point process of the ellipse ensemble for $\tau \in [0, 1)$. Then for every bounded continuous real function $\varphi$ on $\mathbb{R}^2$,

$$\lim_{n\to\infty} E_n^\tau\left[ \frac{1}{n} \sum_{j=1}^n \varphi(\tilde\zeta_j^{\tau n}) \right] = \int_{\mathbb{R}^2} \varphi(z)\, d\mu_\tau(z), \tag{5.4}$$

where

$$d\mu_\tau(z) = \frac{1}{\pi(1-\tau^2)}\, \chi_{E_\tau}\, dx\, dy, \quad\text{and}\quad E_\tau = \left\{ (x, y) : \frac{x^2}{(1+\tau)^2} + \frac{y^2}{(1-\tau)^2} \leq 1 \right\}.$$
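The elliptic law of Theorem 5.1 is also easy to see by simulation. The sketch below is my addition; it builds $A_\tau$ from two independent GUE samples, with the GUE normalized so that the rescaled spectrum fills $[-2, 2]$ as $\tau \to 1$, and checks that the rescaled eigenvalues fall inside the ellipse $E_\tau$ with semi-axes $1+\tau$ and $1-\tau$, with the defining quadratic form having empirical mean near $1/2$, its value under the uniform law on the ellipse.

```python
import numpy as np

def gue(n, rng):
    # GUE normalized so eigenvalue density ~ exp(-sum lambda_j^2);
    # rescaled eigenvalues sqrt(2/n) * lambda_j fill [-2, 2].
    m = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    return (m + m.conj().T) / 2

rng = np.random.default_rng(2)
n, tau = 400, 0.5
a_tau = np.sqrt((1 + tau) / 2) * gue(n, rng) + 1j * np.sqrt((1 - tau) / 2) * gue(n, rng)
z = np.sqrt(2 / n) * np.linalg.eigvals(a_tau)   # rescaled eigenvalues
# Quadratic form defining E_tau; at most 1 inside the ellipse
form = (z.real / (1 + tau))**2 + (z.imag / (1 - tau))**2
```

Apart from edge fluctuations, all $n$ rescaled eigenvalues satisfy $\mathrm{form} \le 1$, and their average value of the form is close to $1/2$.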

There is no natural order on the complex eigenvalues of the ellipse ensemble, so several possible notions of extreme eigenvalues may be considered when formulating statements corresponding to Theorem 3.3. For the pure Ginibre ensemble, $\tau = 0$, the spectral radius, that is, the maximal absolute value of the properly rescaled eigenvalues, is a natural object.

Theorem 5.2 (Rider, 2003, [21]). Let $\tilde\zeta^{0n}$ be the rescaled eigenvalue point process of the Ginibre ensemble. Then

$$\lim_{n\to\infty} P_n^0\left[ 2\sqrt{n\log n}\left( \max_{1\leq j\leq n}\left\{ |\tilde\zeta_j^{0n}| \right\} - c_n \right) \leq t \right] = F_G(t), \tag{5.5}$$

where

$$c_n = 1 + \frac{1}{2}\sqrt{\frac{\log n}{n}} - \frac{1}{4\sqrt{n\log n}}\left( 2\log\log n + \log(2\pi) \right). \tag{5.6}$$

Note that the shift parameter $c_n$ as well as the limiting distribution are more reminiscent of the extreme value theorem for iid random variables than of the characteristic largest eigenvalue behaviour.

The asymptotic spectral properties at the edge of the general ellipse ensemble are not very well known. As Theorems 3.3 and 5.2 indicate, the edge behaviours of the GUE ($\tau = 1$) and the Ginibre ensemble ($\tau = 0$) are completely different, even though they both belong to the one-parameter family of ellipse ensembles $P_n^\tau$. In [27] it is shown heuristically that for $\tau(n) = 1 - c/n$, the edge eigenvalue point process essentially behaves as in the GUE case. The object of the second paper of this thesis is to prove edge scaling limits, both for the largest (in a suitable sense) eigenvalue and for the eigenvalue point process of the ellipse ensemble, for a general choice of $\tau = \tau(n)$.

6 Dyson's Brownian motion and the log Coulomb gas

The GUE eigenvalue measure also has a natural interpretation in terms of statistical mechanics. By (2.6), the joint density $f_n$ has the form

$$f_n(x_1, \ldots, x_n) = \frac{1}{Z_n} \exp\left( -\beta\left( \sum_{j\neq k} u(x_k - x_j) + \sum_{j=1}^n v(x_j) \right) \right), \tag{6.1}$$

which is the joint density of the equilibrium distribution (Gibbs measure) of a system of $n$ particles on the real line with an interaction potential $u$, constrained by an external potential $v$, at temperature $1/\beta$. Choosing units so that $\beta = 2$, the potentials $u(x) = -\log|x|/2$ and $v(x) = x^2/2$ given by the GUE measure correspond to the natural electrostatic repulsion potential in one dimension and a quadratic potential well respectively. This observation immediately suggests two generalizations of the GUE model. Since the eigenvalue measure corresponds to the equilibrium state of a particle system at a particular temperature, it is natural to study the corresponding dynamical particle system for a general temperature $1/\beta$.

6.1 Stochastic differential equation and matrix interpretation

Let $\xi_t^n = \{\xi_t^{n,j}\}_{j=1}^n$ be the dynamical system of $n$ particles on the real line, and let

$$V_n(x) = -\sum_{j\neq k} u(x_k - x_j) - \sum_{j=1}^n v(x_j)$$

denote the potential of the whole system. The dynamics of this so-called log Coulomb gas is governed by the system of stochastic differential equations

$$d\xi_t^{n,j} = \frac{2}{\sqrt{\beta}}\, dB_t^{n,j} - \frac{\partial V_n(\xi_t^n)}{\partial \xi_t^{n,j}}\, dt = \frac{2}{\sqrt{\beta}}\, dB_t^{n,j} - \xi_t^{n,j}\, dt + 2\sum_{k\neq j} \frac{dt}{\xi_t^{n,j} - \xi_t^{n,k}}, \tag{6.2}$$

for $j = 1, \ldots, n$, where $\{B_t^{n,j}\}_{j=1}^n$ is a set of independent standard Brownian motions. Again, define the rescaled particles $\tilde\xi_t^{n,j} = \xi_t^{n,j}/\sqrt{2n}$. Given a random counting measure (with $n$ points almost surely) on $\mathbb{R}$ specifying the initial particle distribution $\{\tilde\xi_0^{n,j}\}_{j=1}^n$, (6.2) defines a probability measure-valued process, the empirical measure process

$$X_t^n := \frac{1}{n} \sum_{j=1}^n \delta_{\tilde\xi_t^{n,j}}. \tag{6.3}$$

Returning again to the random matrix interpretation for the choice $\beta = 2$, it can be shown that Equations (6.2) are satisfied by the set of eigenvalues of a Hermitian matrix-valued Ornstein-Uhlenbeck process $A_t$; that is, $A_t = (A_t^{jk})_{j,k=1}^n$ is Hermitian and satisfies the stochastic differential equations

$$dA_t = -A_t\, dt + \frac{1}{\sqrt{2}}\left( dB_t + dB_t^\dagger \right), \tag{6.4}$$

where $B_t$ is an $n \times n$ matrix whose entries are independent, standard, complex Brownian motions. In fact, (6.2) for the cases $\beta = 1$ and $\beta = 4$ can also be interpreted as the eigenvalue dynamics of matrix-valued Ornstein-Uhlenbeck processes, but on the spaces $S_n$ of $n \times n$ real symmetric matrices ($\beta = 1$) and $Q_n$ of $n \times n$ self-dual real quaternion matrices ($\beta = 4$) respectively. Just as the Gibbs measure in the case $\beta = 2$ coincides with the GUE eigenvalue measure, the Gibbs measures for $\beta = 1$ and $\beta = 4$ are the eigenvalue measures of the Gaussian orthogonal ensemble (GOE) on $S_n$ and the Gaussian symplectic ensemble (GSE) on $Q_n$ respectively, which are two other classical invariant random matrix ensembles. This connection was first discovered by Dyson [7], so the process defined by (6.2) for $\beta = 1, 2, 4$ (and, by analogy, for any $\beta > 0$) is also known as Dyson's Brownian motion.
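A direct Euler-Maruyama discretization of the matrix equation (6.4) gives a cheap way to watch the empirical spectral measure settle into the semi-circle law. This sketch is my addition; it takes "standard complex Brownian motion" to mean $E|dB_{jk}|^2 = dt$ (a convention, not stated in the text) and rescales the eigenvalues by $\sqrt{2n}$ as above:

```python
import numpy as np

rng = np.random.default_rng(4)
n, dt, steps = 300, 0.01, 500           # integrate (6.4) up to t = 5

a = np.zeros((n, n), dtype=complex)     # start from the zero matrix
for _ in range(steps):
    # Complex Brownian increment with E|dB_jk|^2 = dt (assumed convention)
    db = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) * np.sqrt(dt / 2)
    # Euler step for dA = -A dt + (dB + dB†)/sqrt(2); Hermiticity is preserved
    a = a - a * dt + (db + db.conj().T) / np.sqrt(2)

lam = np.linalg.eigvalsh(a) / np.sqrt(2 * n)   # rescaled eigenvalues
m2, m4 = float(np.mean(lam**2)), float(np.mean(lam**4))
# Semi-circle on [-1, 1]: m2 = 1/4, m4 = 1/8, so m4/m2^2 = 2 (vs. 3 for a Gaussian)
```

By $t = 5$ the Ornstein-Uhlenbeck relaxation is essentially complete, and the moment ratio $m_4/m_2^2$ distinguishes the semi-circle limit from a Gaussian profile.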

6.2 Global density and fluctuations

If $X_0^n$ is chosen as the empirical measure of the (rescaled) equilibrium distribution (6.1), $X_t^n$ will be stationary and converge weakly in $M(\mathbb{R})$ to the Wigner semi-circle law as $n$ tends to infinity, by the strong version of Theorem 3.1. However, a corresponding result for every $t \geq 0$ holds for an arbitrary asymptotic initial distribution of particles $X_0$.

Theorem 6.1 (Rogers and Shi, 1993, [22]; Cépa and Lépingle, 1997, [4]). Suppose that a sequence of random probability measures $(X_0^n)_{n=1}^\infty$ converges weakly in $M(\mathbb{R})$ to a deterministic probability measure $X_0 \in M(\mathbb{R})$. Then $X_t^n$ converges weakly in $C([0,\infty), M(\mathbb{R}))$ as $n \to \infty$ to a deterministic measure-valued function $X_t \in C([0,\infty), M(\mathbb{R}))$, depending only on $X_0$ and such that the family $\{X_t\}_{t\geq 0}$ of probability measures converges weakly to the Wigner semi-circle law, $\mu$, as $t \to \infty$.

The existence of $X_t$ allows one to study the rate of convergence of $X_t^n$. In [25], the equilibrium fluctuations for the model corresponding to (6.2) on the circle were analyzed, but in contrast to the large body of literature on central limit theorems such as (3.2) for various matrix models, relatively little has been known in the dynamical, general $\beta$ case. However, a central limit theorem for $X_t^n$, expressed in terms of distribution-valued processes, was proven by Israelsson in [12]. (The technical definition of the spaces $H_k(\mathbb{R}) \subset C^k(\mathbb{R}) \cap L^2(\mathbb{R})$, $\mathcal{S}(\mathbb{R}) = \bigcap_{k=1}^\infty H_k(\mathbb{R})$, of test functions considered is omitted here.)

Theorem 6.2 (Israelsson, 2001, [12]). Let $Y_t^n = n(X_t^n - X_t)$ be the (signed) measure-valued process of scaled fluctuations of $X_t^n$. Suppose that the random signed measure $Y_0^n$ converges weakly in $\mathcal{S}'(\mathbb{R})$ to a deterministic $Y_0 \in \mathcal{S}'(\mathbb{R})$ as $n \to \infty$ and that there is a constant $C$ such that for every $n$ and $z = a + bi$, $b \neq 0$, the inequality

$$E\left| \hat X_0^n(z) - \hat X_0(z) \right|^2 \leq \frac{C}{n^2 b^2}$$

holds. Then the sequence $(Y_t^n)_{n=1}^\infty$ of distribution-valued processes converges weakly in $C([0,T], H_6'(\mathbb{R}))$ to a Gaussian $C([0,T], H_6'(\mathbb{R}))$-valued process, for any fixed $T > 0$.

In other words, for any test functions $\varphi_k \in H_6(\mathbb{R})$ and non-negative numbers $t_k$, $k = 1, \ldots, m$, the random vector

$$\left( \sum_{j=1}^n \varphi_k(\tilde\xi_{t_k}^{n,j}) - n \int_{\mathbb{R}} \varphi_k(s)\, dX_{t_k}(s) \right)_{k=1}^m \tag{6.5}$$

converges in distribution to an $m$-dimensional normal random vector as $n$ tends to infinity.

Note the correspondence with Theorem 3.2, and that an analogue of an essential part of that result, the explicit expression for the variance functional, is missing in Theorem 6.2. The first paper of this thesis is devoted to proving a corresponding (co-)variance formula for this more general central limit theorem.

(28)

7 Overview of paper I

The model considered in this paper is the log Coulomb gas in a quadratic external potential (Dyson's Brownian motion) for general $\beta$, defined by the stochastic differential equations

$$d\lambda_t^i = \frac{\sqrt{2}\,\sigma}{\sqrt{n\beta}}\, dB_t^i - \lambda_t^i\, dt + \frac{2\sigma^2}{n} \sum_{j\neq i} \frac{dt}{\lambda_t^i - \lambda_t^j}, \quad\text{for } i = 1, \ldots, n, \tag{7.1}$$

where $B_t^i$ are independent standard Brownian motions. By Theorem 6.1, the empirical measure process

$$X_t^n = \frac{1}{n} \sum_{i=1}^n \delta_{\lambda_t^i}$$

converges weakly in $C([0,\infty), M(\mathbb{R}))$ as $n \to \infty$ to a deterministic probability measure-valued function $X_t$ such that the probability measures $\{X_t\}_{t\geq 0}$ converge weakly as $t \to \infty$ to the semi-circle law

$$d\mu(x) = \frac{1}{2\pi\sigma^2} \sqrt{4\sigma^2 - x^2}\, \chi_{\{|x| < 2\sigma\}}\, dx;$$

the parameter $\sigma$ in (7.1) thus fixes the equilibrium variance. For an arbitrary choice of initial asymptotic particle distribution $X_0$ and under appropriate conditions on the initial fluctuations, the fluctuation process $Y_t^n = n(X_t^n - X_t)$ converges weakly to a Gaussian process $Y_t$ by Theorem 6.2, and Israelsson [12] characterizes it uniquely by deriving a partial differential equation satisfied by the Fourier transform of its Stieltjes transform, $F(\hat Y_t)$ (which is a deterministic function of $z$ and $t$).

The main result of the paper is an explicit expression for the mean and covariance of the finite dimensional distributions of $\hat Y_t$, obtained by solving this PDE using the method of characteristics. To state the theorem, some notation is needed. Let

$$h_t(z) = ze^t + \sigma^2(e^t - e^{-t})\, \hat X_t(z) \tag{7.2}$$

and

$$g_t(w) = e^{-t}w - \sigma^2(e^t - e^{-t})\, \hat X_0(w); \tag{7.3}$$

by Theorem 6.1 they are both analytic functions in $\mathbb{C} \setminus \mathbb{R}$. It is shown that $g_t \circ h_t = \mathrm{id}$ and, defining $h_{t_1}^{t_2} = g_{t_2} \circ h_{t_1}$ for $t_1 \geq t_2 \geq 0$, that $h_{t_1} = h_{t_2} \circ h_{t_1}^{t_2}$. Also define the generalized Schwarzian derivative, denoted $Sv$, as the function of two complex variables defined by

$$\frac{1}{6}(Sv)(z_1, z_2) = \frac{\partial^2}{\partial z_1 \partial z_2} \log\left( \frac{v(z_1) - v(z_2)}{z_1 - z_2} \right) = \frac{v'(z_1)v'(z_2)}{(v(z_1) - v(z_2))^2} - \frac{1}{(z_1 - z_2)^2} \tag{7.4}$$

for $z_1 \neq z_2$, and by continuity for $z_1 = z_2$. In fact, $(Sv)(z, z)$ coincides with the ordinary Schwarzian derivative

$$(Sv)(z) = \frac{v'''(z)}{v'(z)} - \frac{3}{2}\left( \frac{v''(z)}{v'(z)} \right)^2$$

of a univalent function $v$.

Theorem 7.1. Suppose that $Y_0^n = n(X_0^n - X_0)$ satisfies the conditions of Theorem 6.2, so that the sequence $(Y_t^n)_{n=1}^\infty$ converges weakly to a Gaussian process $Y_t$. Let $0 \leq t_k \leq t_{k-1} \leq \ldots \leq t_1$ and $z = (z_1, \ldots, z_k) \in (\mathbb{C} \setminus \mathbb{R})^k$. Then the normal random vector $U = (U_1, \ldots, U_k)$, where

$$U_j = \left\langle Y_{t_j}, \frac{1}{\cdot - z_j} \right\rangle,$$

has mean

$$\mu_j = E[U_j] = \frac{1}{2}\left( \frac{2}{\beta} - 1 \right) \frac{h_{t_j}''(z_j)}{h_{t_j}'(z_j)} + \left\langle Y_0, \frac{h_{t_j}'(z_j)}{\cdot - h_{t_j}(z_j)} \right\rangle \tag{7.5}$$

and covariance matrix

$$\Lambda_{lj} = \Lambda_{jl} = \operatorname{Cov}(U_j, U_l) = \frac{2}{\beta} \frac{\partial^2}{\partial z_j \partial z_l} \log\left( \frac{h_{t_j}(z_j) - h_{t_l}(z_l)}{h_{t_j}^{t_l}(z_j) - z_l} \right) = \frac{1}{3\beta} \left( h_{t_j}^{t_l} \right)'(z_j)\, (Sh_{t_l})\!\left( h_{t_j}^{t_l}(z_j), z_l \right), \quad\text{if } l \geq j. \tag{7.6}$$

In particular,

$$\operatorname{Var}(U_j) = \frac{1}{\beta}\left( \frac{1}{3}\, \frac{h_{t_j}'''(z_j)}{h_{t_j}'(z_j)} - \frac{1}{2}\left( \frac{h_{t_j}''(z_j)}{h_{t_j}'(z_j)} \right)^2 \right) = \frac{1}{3\beta}\, (Sh_{t_j})(z_j). \tag{7.7}$$

Using Cauchy’s integral formula, an integral representation formula of the covariance functional for more general analytic test functions is derived.

Theorem 7.2. Suppose that $F_1$ and $F_2$ are analytic and bounded in a strip $\Omega_\delta = \{z : |\operatorname{Im}(z)| < \delta\}$ for some $\delta > 0$. Let $t_1 \geq t_2 \geq 0$, and define the random variables $Z_1 = \langle Y_{t_1}, F_1 \rangle$ and $Z_2 = \langle Y_{t_2}, F_2 \rangle$. Then

$$\operatorname{Cov}(Z_1, Z_2) = -\frac{1}{24\pi^2\beta} \oint_{\Gamma_1} \oint_{\Gamma_2} (Sg_{t_2})(w_1, w_2) \left( F_1(g_{t_1}(w_1)) - F_2(g_{t_2}(w_2)) \right)^2 dw_2\, dw_1, \tag{7.8}$$

where $\Gamma_i = h_{t_i}(\gamma)$, $\gamma = \gamma_- \cup \gamma_+$, and the oriented lines $\gamma_-$ and $\gamma_+$ in $\Omega_\delta$ are given by the parameterizations $\mathbb{R} \ni s \mapsto s - i\delta'$ and $\mathbb{R} \ni s \mapsto -s + i\delta'$ respectively, for any positive $\delta' < \delta$. For $Z_1 = Z_2$ this reduces further to

$$\operatorname{Var}(\langle Y_{t_1}, F_1 \rangle) = \frac{1}{4\pi^2\beta} \oint_{\Gamma_1} \oint_{\Gamma_1} \left( \frac{F_1(g_{t_1}(w_1)) - F_1(g_{t_1}(w_2))}{w_1 - w_2} \right)^2 dw_2\, dw_1. \tag{7.9}$$

A significant feature of Theorem 7.2 is that, apart from the contours of integration, which in many cases can be deformed, there is no reference to the functions $h_{t_j}$ defined in terms of the Stieltjes transform $\hat X_t$ of the empirical measure at time $t$, but only to $g_t$, which is explicitly defined in terms of the initial conditions.

8 Overview of paper II

This paper deals with determinantal point processes $Z = \{(x_j, y_j)\}$ on $\mathbb{R}^2$; in particular, scaling limit theorems are proven for the eigenvalue point process at the rightmost edge of the spectrum of the ellipse ensemble.

Let $Z^A$ be a point process on $\mathbb{R}^2$ with correlation kernel

$$M^A(\zeta_1, \zeta_2) = \frac{e^{-\frac{1}{2}(\eta_1^2 + \eta_2^2)}}{\sqrt{\pi}}\, K_A(\xi_1, \xi_2),$$

where $\zeta_j = (\xi_j, \eta_j)$ and $K_A$ is the Airy kernel, and let $Z^P$ be a Poisson process on $\mathbb{R}^2$ with intensity $e^{-\xi - \eta^2}$, considered as a determinantal point process with correlation kernel

$$M^P(\zeta_1, \zeta_2) = \delta_{\zeta_1\zeta_2}\, \frac{e^{-\xi_1 - \eta_1^2}}{\sqrt{\pi}}.$$

These processes can be interpreted as the Airy point process and the Poisson process respectively in the first variable, with each particle subject to an iid centered Gaussian displacement in the second variable, so $Z^A$ and $Z^P$ both have last particles almost surely, with distribution functions $F_{TW}(t)$ and $F_G(t)$ respectively. The first result is the construction of a family of determinantal point processes interpolating between $Z^A$ and $Z^P$.

Theorem 8.1 (Interpolating process). For each $\sigma \in [0, \infty)$ there exists a determinantal point process $Z^\sigma = \{(x_j, y_j)\}$ on $\mathbb{R}^2$ with correlation kernel

$$M^\sigma(\zeta_1, \zeta_2) = \frac{1}{4\pi^{5/2}} \int_{\gamma_1} \int_{\gamma_2} \frac{e^{-\frac{1}{2}(\sigma v - \eta_2)^2 + \frac{i}{3}v^3 + i\xi_2 v - \frac{1}{2}(\sigma u + \eta_1)^2 + \frac{i}{3}u^3 + i\xi_1 u}}{i(u + v)}\, du\, dv, \tag{8.1}$$

where $\zeta_j = (\xi_j, \eta_j)$ and $\gamma_j$ is the contour $t \mapsto \gamma_j(t) = t + i\delta_j$, independently of the choice of $\delta_j > 0$.

Define the rescaled point process

$$\tilde Z^\sigma = \left\{ \left( \frac{x_j - c_\sigma}{a_\sigma}, \frac{y_j}{b_\sigma} \right) \right\}, \quad\text{where}\quad a_\sigma = \frac{\sigma}{\sqrt{6\log\sigma}}, \quad b_\sigma = \frac{\sigma^{3/2}}{(6\log\sigma)^{1/4}},$$

and

$$c_\sigma = a_\sigma\left( 3\log\sigma - \frac{5}{4}\log(6\log\sigma) - \log(2\pi) \right).$$

The processes $Z^\sigma$, appropriately rescaled, interpolate between $Z^A$ and $Z^P$ in the limits $\sigma \to 0$ and $\sigma \to \infty$. For any fixed $\sigma$, $Z^\sigma$ has a last particle almost surely, with distribution function

$$F^\sigma(t) = P^\sigma\left[ |Z^\sigma \cap ((t,\infty) \times \mathbb{R})| = 0 \right] = \det(I - M^\sigma)_{L^2((t,\infty)\times\mathbb{R})} = \sum_{r=0}^\infty \frac{(-1)^r}{r!} \int_{((t,\infty)\times\mathbb{R})^r} \det\left( M^\sigma(\zeta_j, \zeta_k) \right)_{j,k=1}^r d^r\xi\, d^r\eta, \tag{8.2}$$

and furthermore $F^\sigma(c_\sigma + a_\sigma t) \to F_G(t)$ as $\sigma \to \infty$.

Now consider the ellipse ensemble for a given sequence $\{\tau_n\}_{n=1}^\infty \subseteq [0, 1)$, i.e. the probability measure $P_n^{\tau_n}$ on the space $M_n$ of complex $n \times n$ matrices defined by

$$dP_n^{\tau_n}(A) = \left( \frac{n}{\pi\sqrt{1 - \tau_n^2}} \right)^{n^2} \exp\left( -\frac{n}{1 - \tau_n^2} \operatorname{Tr}\left( AA^\dagger - \tau_n \operatorname{Re} A^2 \right) \right) dA, \tag{8.3}$$

where

$$dA = \prod_{j=1}^n \prod_{k=1}^n d(\operatorname{Re} A_{jk})\, d(\operatorname{Im} A_{jk}).$$

(Note that the normalization differs from that of Section 4.3.) Let $Z_n^{\tau_n} = \{(x_j, y_j)\}_{j=1}^n$ be the determinantal eigenvalue point process on $\mathbb{R}^2$ induced by $P_n^{\tau_n}$, with correlation kernel

$$K_n^{\tau_n}((\xi_1, \eta_1), (\xi_2, \eta_2)) = \frac{n}{\sqrt{\pi(1 - \tau_n^2)}} \sum_{k=0}^{n-1} \tau_n^k\, h_k\!\left( \sqrt{\frac{n}{2\tau_n}}\, \zeta_1 \right) h_k\!\left( \sqrt{\frac{n}{2\tau_n}}\, \bar\zeta_2 \right) e^{-\frac{n}{2}\left( \frac{\xi_1^2 + \xi_2^2}{1+\tau_n} + \frac{\eta_1^2 + \eta_2^2}{1-\tau_n} \right)}. \tag{8.4}$$

Theorem 8.2. For the choices of scaling parameters $\tilde a_n$, $\tilde b_n$ and $\tilde c_n$ specified below, define the rescaled edge eigenvalue point process

$$\tilde Z_n^{\tau_n} = \left\{ \left( \frac{x_j - \tilde c_n}{\tilde a_n}, \frac{y_j}{\tilde b_n} \right) \right\}_{j=1}^n.$$

Let

$$F_n^{\tau_n}(t) = P_n^{\tau_n}\left[ \left| \tilde Z_n^{\tau_n} \cap ((t,\infty) \times \mathbb{R}) \right| = 0 \right] = P_n^{\tau_n}\left[ \frac{\max_{1\leq j\leq n}\{x_j\} - \tilde c_n}{\tilde a_n} \leq t \right]$$

be the last particle distribution of $\tilde Z_n^{\tau_n}$. Put $\sigma_n = n^{1/6}\sqrt{1 - \tau_n}$.

(i) Suppose $\sigma_n \to \infty$ as $n \to \infty$. Choose

$$\tilde a_n = \hat\tau_n^{1/2}\, \frac{\sigma_n n^{-2/3}}{\sqrt{6\log\sigma_n}}, \quad \tilde b_n = \hat\tau_n^{-1/4}\, \frac{\sigma_n^{5/2} n^{-2/3}}{(6\log\sigma_n)^{1/4}}, \quad\text{and}\quad \tilde c_n = 2\hat\tau_n + \tilde a_n\left( 3\log\sigma_n - \frac{5}{4}\log(6\log\sigma_n) - \log(2\pi\hat\tau_n^{3/4}) \right),$$

where $\hat\tau_n := (1 + \tau_n)/2$. Then $\tilde Z_n^{\tau_n}$ converges weakly to $Z^P$ and $F_n^{\tau_n}(t)$ converges to $F_G(t)$ as $n \to \infty$.

(ii) Suppose $\sigma_n \to \sigma \in [0, \infty)$ as $n \to \infty$. Choose $\tilde a_n = n^{-2/3}$, $\tilde b_n = \sigma_n n^{-2/3}$ and $\tilde c_n = 1 + \tau_n$. Then $\tilde Z_n^{\tau_n}$ converges weakly to $Z^\sigma$ and $F_n^{\tau_n}(t)$ converges to $F^\sigma(t)$ as $n \to \infty$. In particular, if $\sigma = 0$, $\tilde Z_n^{\tau_n}$ converges weakly to $Z^A$ and $F_n^{\tau_n}(t)$ converges to $F_{TW}(t)$.

The interpretation of this result is that, in case (i), the imaginary parts of the eigenvalues near the rightmost edge of the spectrum are much greater than the spacings between their real parts. The eigenvalues are therefore not close and do not interact, which accounts for the Poisson process behaviour. On the other hand, when $\sigma_n \to 0$, the imaginary parts of the eigenvalues are negligible and the edge point process is essentially a one-dimensional Airy point process (as in the GUE case), but with independent Gaussian fluctuations in the imaginary direction.

However, if $\sigma_n \to \sigma > 0$, the imaginary parts and the spacings between the real parts are of the same magnitude, so the eigenvalues interact non-trivially and the resulting point process $Z^\sigma$ is essentially two-dimensional; this is reflected in the fact that the correlation kernel $M^\sigma$ no longer factorizes.


Bibliography

[1] G. Anderson, O. Zeitouni, A CLT for a band matrix model, Probab. Theory Related Fields 134, no. 2 (2006), 283-338.

[2] J. Baik, P. Deift, K. Johansson, On the distribution of the length of the longest increasing subsequence of random permutations, J. Amer. Math. Soc. 12, no. 4 (1999), 1119-1178.

[3] Z. Bai, J. Yao, On the convergence of the spectral empirical process of Wigner matrices, Bernoulli 11, no. 6 (2005), 1059-1092.

[4] E. Cépa, D. Lépingle, Diffusing particles with electrostatic repulsion, Probab. Theory Related Fields 107, no. 4 (1997), 429-449.

[5] D. J. Daley, D. Vere-Jones, An introduction to the theory of point processes, Springer-Verlag, New York, 1988.

[6] I. Dumitriu, A. Edelman, Global spectrum fluctuations for the β-Hermite and β-Laguerre ensembles via matrix models, J. Math. Phys. 47, no. 6 (2006).

[7] F. Dyson, A Brownian-Motion Model for the Eigenvalues of a Random Matrix, J. Math. Phys. 3 (1962), 1191-1198.

[8] P. Forrester, The spectrum edge of random matrix ensembles, Nuclear Phys. B 402, no. 3 (1993), 709-728.

[9] Y. V. Fyodorov, H. J. Sommers, B. Khoruzhenko, Universality in the random matrix spectra in the regime of weak non-Hermiticity, Ann. Inst. Henri Poincaré A 68 (2000), 449-488.

[10] J. Ginibre, Statistical ensembles of complex, quaternion, and real ma-trices, J. Math. Phys. 6, no. 3 (1964), 440-449.


[11] T. Guhr, A. Müller-Groeling, H. Weidenmüller, Random-matrix the-ories in quantum physics: common concepts, Phys. Rep. 299, no. 4-6 (1998), 189-425

[12] S. Israelsson, Asymptotic fluctuations of a particle system with singular interaction, Stochastic Process. Appl. 93, no. 1 (2001), 25-56.

[13] K. Johansson, Discrete orthogonal polynomial ensembles and the Plancherel measure, Ann. of Math. (2) 153, no. 1 (2001), 259-296.

[14] K. Johansson, Discrete polynuclear growth and determinantal processes, Comm. Math. Phys. 242, no. 1-3 (2003), 11-148.

[15] K. Johansson, On fluctuations of eigenvalues of random Hermitian ma-trices, Duke Math. J. 91 (1998), 151-204.

[16] K. Johansson, Random matrices and determinantal processes, in Les Houches Summer School on Mathematical Statistical Physics, Les Houches, Session LXXXIII, 2005, Elsevier, 2006.

[17] O. Kallenberg, Foundations of modern probability, 2nd ed., Springer-Verlag, New York, 2001.

[18] M. Mehta, Random matrices, 2nd ed., Academic Press, New York, 1991.

[19] L. Pastur, Limiting laws of linear eigenvalue statistics for Hermitian matrix models, J. Math. Phys. 47, no. 10 (2006).

[20] S. Resnick, Extreme values, regular variation, and point processes, Springer-Verlag, New York, 1987.

[21] B. Rider, A limit theorem at the edge of a non-Hermitian random matrix ensemble, J. Phys. A: Math. Gen. 36 (2003), 3401-3409.

[22] L. Rogers, Z. Shi, Interacting Brownian particles and the Wigner law, Probab. Theory Related Fields 95, no. 4 (1993), 555-570.

[23] Y. Sinai, A. Soshnikov, Central limit theorem for traces of large random symmetric matrices with independent matrix elements, Bol. Soc. Brasil. Mat. (N.S.) 29, no. 1 (1998), 1-24.

[24] A. Soshnikov, Determinantal random point fields, Russian Math. Surveys 55, no. 5 (2000), 923-975.


[25] H. Spohn, Dyson’s model of interacting Brownian motions at arbitrary coupling strength, Markov Process. Related Fields 4 (1998), no. 4, 649-661.

[26] C. Tracy, H. Widom, Level spacing distributions and the Airy kernel, Comm. Math. Phys. 159, no. 1 (1994), 151-174.

[27] A. M. García-García, S. M. Nishigaki, J. J. M. Verbaarschot, Critical statistics for non-Hermitian matrices, Phys. Rev. E (3) 66, no. 1 (2002).
