U.U.D.M. Project Report 2021:6

Degree project in mathematics (Examensarbete i matematik), 30 credits

Supervisor: Volodymyr Mazorchuk

Examiner: Martin Herschend

April 2021

Department of Mathematics

Uppsala University

Symmetric Functions and Kronecker Coefficients


Abstract. Kronecker coefficients are defined as the expansion coefficients in

S^λ ⊗ S^µ = ∑_ν g^ν_{λµ} S^ν,

where the S^λ are Specht modules of the symmetric group S_n and λ, µ, ν ⊢ n. A fully combinatorial description of Kronecker coefficients has yet to be found. On the other hand, the space of symmetric functions, which enjoys an algebraic structure similar to that of the characters of S_n, has a well-established theoretical foundation. The approach of studying Kronecker coefficients through symmetric functions has proven fruitful. The aim of this thesis is to give a brief account of the theory of symmetric functions, to illustrate its interplay with the representation theory of symmetric groups, and to demonstrate its use.

Chapter 1 gives a brief introduction to the algebraic structure of the ring of symmetric functions and its various bases, except for the best-known one, the Schur functions, which are intentionally left for the next chapter. In Chapter 2, Schur functions are discussed in detail, with the purpose of making the parallel between the ring of symmetric functions and the ring of class functions of S_n obvious.


CHAPTER 1

Basic Theory of Symmetric Functions

1.1. Monomial symmetric functions and the ring of symmetric functions

Notations and structures in this chapter basically follow [EGG], [SAG] and [MAC]. For particular discussions drawn from other sources, references will be given in place. In this article, x_1, x_2, ... and y_1, y_2, ... are used to denote commutative variables, while u_1, u_2, ... are used to denote non-commutative ones.

We start with Z[x_1, ..., x_n], the ring of polynomials with integer coefficients in finitely many indeterminates. Instead of x_1, ..., x_n, we use X_n as an abbreviation for this set of n variables. For π ∈ S_n, we define an action on Z[X_n] by letting π permute the subscripts of monomials,

π(x_{i_1}^{k_1} x_{i_2}^{k_2} ... x_{i_l}^{k_l}) = x_{π(i_1)}^{k_1} x_{π(i_2)}^{k_2} ... x_{π(i_l)}^{k_l},

and extending linearly.

Example 1.1.1. Let σ = (123) ∈ S_3 and f = x_1 x_2^3 − 2 x_2^2 x_3. Then σ(f) = x_{σ(1)} x_{σ(2)}^3 − 2 x_{σ(2)}^2 x_{σ(3)} = x_2 x_3^3 − 2 x_3^2 x_1. Obviously, σ(f) ≠ f.

With some straightforward calculations, the following properties follow from the definition.

Proposition 1.1.2. Let f, g ∈ Z[X_n], let π, σ ∈ S_n, and let c be a constant. Then

1) π(cf) = c π(f),
2) π(f + g) = π(f) + π(g),
3) π(fg) = π(f) π(g),
4) (πσ)(f) = π(σ(f)).
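The action just defined is easy to experiment with. The following sketch (assuming sympy is available; the helper `act` is ours, not from the thesis) replays Example 1.1.1 and checks property 4) of Proposition 1.1.2.

```python
from sympy import symbols, expand

x1, x2, x3 = symbols('x1 x2 x3')
xs = {1: x1, 2: x2, 3: x3}

def act(perm, f):
    """Apply a permutation, given as a dict i -> perm(i), to the subscripts of f."""
    return expand(f.subs({xs[i]: xs[perm[i]] for i in perm}, simultaneous=True))

sigma = {1: 2, 2: 3, 3: 1}                   # the 3-cycle (123)
f = x1*x2**3 - 2*x2**2*x3

# Example 1.1.1: sigma(f) = x2*x3**3 - 2*x3**2*x1, which differs from f
assert act(sigma, f) == expand(x2*x3**3 - 2*x3**2*x1)
assert act(sigma, f) != f

# Property 4) of Proposition 1.1.2: (pi sigma)(f) = pi(sigma(f))
pi = {1: 2, 2: 1, 3: 3}                      # the transposition (12)
pi_sigma = {i: pi[sigma[i]] for i in sigma}  # (pi sigma)(i) = pi(sigma(i))
assert expand(act(pi_sigma, f) - act(pi, act(sigma, f))) == 0
```

The `simultaneous=True` flag is essential: substituting x_1 → x_2, x_2 → x_3, x_3 → x_1 one at a time would let later substitutions overwrite earlier ones.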

Two kinds of polynomials deserve our attention at this stage. The first kind are those which are invariant under the action, namely {f ∈ Z[X_n] | π(f) = f for all π ∈ S_n}. These can be obtained by taking any element f ∈ Z[X_n] and symmetrizing over S_n,

∑_{σ ∈ S_n} σ(f),

which is obviously invariant under the action of any permutation. The second kind are those which are invariant up to sign under the action of S_n. This is natural in that every element of the symmetric group comes with a sign. Similarly, these polynomials can be obtained by antisymmetrizing,

∑_{σ ∈ S_n} sgn(σ) σ(f),

where sgn(σ) is the sign of σ.

We will dig into the first kind now and the second kind will come into play later.

Definition 1.1.3. A polynomial f ∈ Z[X_n] is symmetric if π(f) = f for all π ∈ S_n. Denote by Λ_n the set of all symmetric polynomials in Z[X_n], and by Λ_n^k the set of homogeneous symmetric polynomials of degree k together with the zero polynomial.

Observe that Proposition 1.1.2 imposes an obvious Z-module structure on Λ_n^k and Λ_n under the usual definitions of addition and scalar multiplication. Furthermore, by 3) of Proposition 1.1.2, it can readily be seen that Λ_n is closed under multiplication. Thus Λ_n forms a subring of Z[X_n],

Λ_n = Z[X_n]^{S_n}.

Also, Λ_n has a graded ring structure,

Λ_n = ⊕_{k≥0} Λ_n^k.

Before proceeding, we introduce some notation. An integer sequence is an n-tuple α = (α_1, ..., α_n) ∈ Z^n. A composition is an integer sequence with nonnegative parts. A partition λ = (λ_1, ..., λ_n) is a composition with all parts arranged in weakly decreasing order,

λ_1 ≥ λ_2 ≥ ... ≥ λ_n.

We say λ is a partition of k, denoted λ ⊢ k, if

∑_{1≤i≤n} λ_i = k.

Equivalently, one says the size of the partition λ is k, denoted |λ| = k.

Note that notations above can easily be extended to have infinite parts, as long as there are only finitely many nonzero parts.

The length of a partition λ, denoted l(λ), is defined to be the number of nonzero parts of λ, i.e. the minimal index l such that λ_i = 0 for all i > l. The width of a partition λ, denoted w(λ), is defined to equal λ_1. The conjugate of a partition λ ⊢ k is another partition of k, denoted λ′, where

λ′_i = |{j | λ_j ≥ i}|.

For a composition α of length l, write x^α = x_1^{α_1} x_2^{α_2} ... x_l^{α_l}.

The diagram or shape of a partition λ is an array of left-justified boxes arranged in such a way that the i-th row contains exactly λ_i boxes. Note that the diagram of λ′ is the transpose of the diagram of λ, i.e. the diagram obtained by reflection with respect to the main diagonal. Obviously, we have (λ′)′ = λ.
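As a small illustration (not part of the thesis), the conjugate can be computed directly from the formula λ′_i = |{j | λ_j ≥ i}|:

```python
def conjugate(lam):
    """Conjugate (transpose) of a partition, given as a weakly decreasing tuple."""
    if not lam:
        return ()
    return tuple(sum(1 for part in lam if part >= i) for i in range(1, lam[0] + 1))

lam = (4, 3, 2)
print(conjugate(lam))                         # (3, 3, 2, 1)
assert conjugate(conjugate(lam)) == lam       # transposing twice recovers lambda
assert sum(conjugate(lam)) == sum(lam)        # conjugation preserves the size
```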


For two partitions µ and λ such that µ_i ≤ λ_i for every index i, we say µ ⊆ λ. In this case, the skew diagram or skew shape λ/µ is the diagram obtained from λ with the boxes of µ deleted.

A tableau T is a diagram filled with one letter in each box. In most of this article, the alphabet of letters will simply be N_{>0}. We denote the entries of a tableau T in the same fashion as for a matrix, namely

T_{i,j} = the letter in box (i, j),

where (i, j) means the box in the i-th row and j-th column. In the case of the ordinary alphabet N_{>0}, the content of a tableau T is the sequence cont(T) = (c_1, ..., c_m) where

c_k = |{(i, j) | T_{i,j} = k}|,

namely c_k is the number of k's in T. Note that the content of a tableau is always a composition. A semistandard tableau is a tableau whose entries strictly increase from top to bottom in each column and weakly increase in each row. A standard tableau is a semistandard tableau with content (1^n), where n is the size of the tableau. The sets of semistandard tableaux and standard tableaux are denoted by SSYT and SYT respectively, and SSYT(λ/µ) and SYT(λ/µ) are used for the corresponding subsets of shape λ/µ.

Figure 1.1.1. A skew tableau T ∈ SSYT((4,3,2)/(3,1)) with content (1,2,1). [Tableau omitted: the boxes of µ are left empty and the remaining boxes are filled.]

In Figure 1.1.1, one can say T_{2,3} = 2, or that cell (2,3) of T has entry 2 or is filled with 2.

Definition 1.1.4. Let λ be a partition. Define the monomial symmetric polynomial m_λ(X_n) as

m_λ := ∑_{f ∈ S_n(x^λ)} f,

where S_n(x^λ) = {f | f = σ(x^λ), σ ∈ S_n}. Equivalently, m_λ is the sum of all distinct images of x^λ under S_n,

m_λ = ∑ x_{i_1}^{λ_1} x_{i_2}^{λ_2} ... x_{i_l}^{λ_l}.

If l(λ) > n, we set m_λ(X_n) = 0.

Example 1.1.5. We compute m_(3,1)(X_2) and m_(3,1)(X_3). Let λ = (3,1); then x^λ = x_1^3 x_2. Since S_2 = {e, (12)}, the set S_2(x^λ) contains only two monomials, x_1^3 x_2 and x_2^3 x_1. Similarly, S_3 = {e, (12), (13), (23), (123), (132)}, and as a result we have

m_λ(X_2) = x_1^3 x_2 + x_2^3 x_1,
m_λ(X_3) = x_1^3 x_2 + x_1^3 x_3 + x_2^3 x_1 + x_2^3 x_3 + x_3^3 x_1 + x_3^3 x_2.
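Definition 1.1.4 can be made concrete in a few lines (a sketch assuming sympy; the helper `m` is illustrative, not from the thesis): m_λ(X_n) is the sum over the distinct rearrangements of the exponent vector of x^λ.

```python
from itertools import permutations
from math import prod
from sympy import symbols, Add

def m(lam, n):
    """Monomial symmetric polynomial m_lambda(x_1, ..., x_n)."""
    if len(lam) > n:
        return 0
    xs = symbols(f'x1:{n + 1}')
    exps = tuple(lam) + (0,) * (n - len(lam))
    # distinct rearrangements of the exponents <-> distinct images of x^lambda
    return Add(*{prod(x**e for x, e in zip(xs, p)) for p in set(permutations(exps))})

print(m((3, 1), 2))              # the two-term sum of Example 1.1.5
print(len(m((3, 1), 3).args))    # 6 summands for n = 3
```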

Proposition 1.1.6. For any partition λ, m_λ(X_n) is symmetric.

Proof. If l(λ) > n, then m_λ(X_n) = 0 by definition and it is obviously symmetric, so it is safe to assume l(λ) ≤ n. It follows from the definition that every monomial in m_λ(X_n) has coefficient 1, since m_λ is defined as a sum over distinct images under S_n. So it suffices to show two facts: that the image of any monomial summand of m_λ(X_n) is again a monomial summand of m_λ(X_n), and that every monomial summand is the image of another monomial summand under some σ ∈ S_n.

Since S_n contains the identity element e, x^λ is a monomial summand of m_λ(X_n). Now consider a monomial b in m_λ(X_n). According to the definition there exists π ∈ S_n such that b = π(x^λ). This makes b an image, so the second assertion is proved. Then for any σ ∈ S_n, σ(b) = σ(π(x^λ)) = (σπ)(x^λ) is also a monomial in m_λ(X_n), as σ(b) is the image of x^λ under σπ ∈ S_n. This justifies the first fact. □

Having justified the name monomial symmetric polynomial, we proceed to show that these polynomials actually form a basis of Λ_n^k, the space of homogeneous symmetric polynomials of degree k.

Proposition 1.1.7. For positive integers n and k with n ≥ k, {m_λ | λ ⊢ k} forms a basis of Λ_n^k.

Proof. First, we show that {m_λ | λ ⊢ k} spans Λ_n^k. Pick an element f ∈ Λ_n^k and do induction on the number of terms of f. If f = 0 the case is trivial, so assume f ≠ 0. For any monomial summand of f, say c x_1^{α_1} x_2^{α_2} ... x_l^{α_l}, we have l ≤ k and ∑_{1≤i≤l} α_i = k; in general α = (α_1, ..., α_l) is just a nonnegative integer sequence. Write c x^α = c x_1^{α_1} x_2^{α_2} ... x_l^{α_l}. It is obvious that x^α appears in exactly one m_λ, namely when λ is the unique rearrangement of the parts of α in weakly decreasing order. Now since f is symmetric, every monomial summand of m_λ also appears in f with coefficient c. Therefore the whole of c m_λ appears in f. Then f − c m_λ ∈ Λ_n^k and has fewer terms than f, and the induction hypothesis completes this part of the proof.

Next, we show that {m_λ | λ ⊢ k} is linearly independent. Let ∑_{λ⊢k} c_λ m_λ(X_n) = 0. As discussed above, every monomial x^α showing up on the left-hand side of the equation can appear in exactly one m_λ. Put another way, for this term to vanish, the only possibility is c_λ = 0. □

Note that Proposition 1.1.7 also asserts that dim Λ_n^k = p(k), the number of partitions of k. More importantly, the dimension of Λ_n^k is independent of n, the number of variables, provided that n ≥ k.
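The partition numbers p(k) are easy to tabulate; the following standard dynamic-programming snippet (an illustration, not from the thesis) computes them.

```python
def partition_count(k):
    """Number p(k) of partitions of k."""
    # table[m] = number of partitions of m using the parts considered so far
    table = [1] + [0] * k
    for part in range(1, k + 1):
        for total in range(part, k + 1):
            table[total] += table[total - part]
    return table[k]

print([partition_count(k) for k in range(1, 8)])   # [1, 2, 3, 5, 7, 11, 15]
```

So, for instance, dim Λ_n^4 = p(4) = 5 for every n ≥ 4.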

Driven by this observation, we take a closer look at the algebraic structure of Λ_n^k by studying how a product of two monomial symmetric polynomials m_λ m_µ is expressed with respect to the basis m_ν, where λ ⊢ p, µ ⊢ k − p and ν ⊢ k. We do this through examples.

Example 1.1.8. Consider the product m_(1,1)(X_n) m_(3,1)(X_n) and express it as a Z-linear combination of the m_ν(X_n) with ν ⊢ 6. First,

m_(1,1)(X_2) m_(3,1)(X_2) = x_1 x_2 (x_1^3 x_2 + x_1 x_2^3) = x_1^4 x_2^2 + x_1^2 x_2^4.

Obviously this is the same as m_(4,2)(X_2), so for n = 2 we have

m_(1,1)(X_2) m_(3,1)(X_2) = m_(4,2)(X_2).

Next, consider n = 3:

m_(1,1)(X_3) m_(3,1)(X_3)
= (x_1 x_2 + x_1 x_3 + x_2 x_3)(x_1^3 x_2 + x_1 x_2^3 + x_1^3 x_3 + x_1 x_3^3 + x_2^3 x_3 + x_2 x_3^3)
= x_1^4 x_2^2 + x_1^2 x_2^4 + x_1^4 x_3^2 + x_1^2 x_3^4 + x_2^4 x_3^2 + x_2^2 x_3^4
+ 2 x_1^4 x_2 x_3 + 2 x_1 x_2 x_3^4 + 2 x_1 x_2^4 x_3
+ x_1 x_2^2 x_3^3 + x_1^2 x_2^3 x_3 + x_1 x_2^3 x_3^2 + x_1^2 x_2 x_3^3 + x_1^3 x_2^2 x_3 + x_1^3 x_2 x_3^2.

In the last step, the first row (six terms) makes up m_(4,2)(X_3), the second row (three terms) makes up 2 m_(4,1,1)(X_3), and the last row (six terms) makes up m_(3,2,1)(X_3). So for n = 3 we have

m_(1,1)(X_3) m_(3,1)(X_3) = m_(4,2)(X_3) + 2 m_(4,1,1)(X_3) + m_(3,2,1)(X_3).

Results for n ≥ 4 are calculated in the same way and thus are listed without details:

m_(1,1)(X_4) m_(3,1)(X_4) = m_(4,2)(X_4) + 2 m_(4,1,1)(X_4) + m_(3,2,1)(X_4) + 3 m_(3,1,1,1)(X_4),
m_(1,1)(X_5) m_(3,1)(X_5) = m_(4,2)(X_5) + 2 m_(4,1,1)(X_5) + m_(3,2,1)(X_5) + 3 m_(3,1,1,1)(X_5),
m_(1,1)(X_6) m_(3,1)(X_6) = m_(4,2)(X_6) + 2 m_(4,1,1)(X_6) + m_(3,2,1)(X_6) + 3 m_(3,1,1,1)(X_6).
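These computations can be automated. The sketch below (assuming sympy; both helpers are ours, not from the thesis) expands m_(1,1) m_(3,1) in n variables and reads off the coefficients in the basis m_ν, confirming that they stabilize from n = 4 onward.

```python
from itertools import permutations
from math import prod
from sympy import symbols, Add, expand, Poly

def m(lam, xs):
    """Monomial symmetric polynomial in the variables xs."""
    exps = tuple(lam) + (0,) * (len(xs) - len(lam))
    return Add(*{prod(x**e for x, e in zip(xs, p)) for p in set(permutations(exps))})

def decompose(f, xs):
    """Coefficients of a symmetric polynomial f in the basis {m_nu}."""
    coeffs = {}
    for mono, c in Poly(f, *xs).terms():
        nu = tuple(sorted((e for e in mono if e), reverse=True))
        coeffs.setdefault(nu, c)   # all monomials in one orbit share a coefficient
    return coeffs

for n in (3, 4, 5):
    xs = symbols(f'x1:{n + 1}')
    print(n, decompose(expand(m((1, 1), xs) * m((3, 1), xs)), xs))
# For n = 4 and n = 5 the coefficients agree:
# {(4,2): 1, (4,1,1): 2, (3,2,1): 1, (3,1,1,1): 3}
```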

As suggested by the above calculation, the algebraic structure of Λ_n^k evolves with the number of variables while n is small, and stops evolving at some particular point. In the example above, this point is n = 4. Here we give another example, with λ = (2,1) and µ = (2,1,1,1), where the expansion coefficients become stable at n = 6. Both examples are summarized in tables.


Table 2. m_(2,1)(X_n) m_(2,1,1,1)(X_n)

n = 1, 2, 3: 0
n = 4: m_(4,2,1,1)(X_4) + 2 m_(3,3,1,1)(X_4) + 2 m_(3,2,2,1)(X_4)
n = 5: m_(4,2,1,1)(X_5) + 2 m_(3,3,1,1)(X_5) + 2 m_(3,2,2,1)(X_5) + 4 m_(4,1,1,1,1)(X_5) + 4 m_(3,2,1,1,1)(X_5) + 6 m_(2,2,2,1,1)(X_5)
n = 6: m_(4,2,1,1)(X_6) + 2 m_(3,3,1,1)(X_6) + 2 m_(3,2,2,1)(X_6) + 4 m_(4,1,1,1,1)(X_6) + 4 m_(3,2,1,1,1)(X_6) + 6 m_(2,2,2,1,1)(X_6) + 8 m_(2,2,1,1,1,1)(X_6)
n = 7: m_(4,2,1,1)(X_7) + 2 m_(3,3,1,1)(X_7) + 2 m_(3,2,2,1)(X_7) + 4 m_(4,1,1,1,1)(X_7) + 4 m_(3,2,1,1,1)(X_7) + 6 m_(2,2,2,1,1)(X_7) + 8 m_(2,2,1,1,1,1)(X_7)

The two examples above suggest an important fact in the theory of symmetric functions: the algebraic structure of Λ_n^k does not depend on the number of variables if there are enough variables. More precisely, in this expansion-coefficient problem, the number of variables n is enough if it is no less than the sum of the lengths of the two partitions λ and µ, i.e. when n ≥ l(λ) + l(µ). This agrees with our observations above: in the first example l(λ) = 2, l(µ) = 2 and n = 4, while in the second l(λ) = 2, l(µ) = 4 and n = 6.

It is not hard to understand this phenomenon from the fact that l(λ) represents the number of variables needed in every monomial summand of m_λ(X_n). Indeed, if n < l(λ) then there are not enough variables for a monomial in x_1, x_2, ..., x_{l(λ)} to appear in m_λ(X_n); thus m_λ(X_n) is not yet "complete" until n reaches l(λ). Note that although the expansion coefficients of m_λ(X_n) m_µ(X_n) remain the same as n grows beyond l(λ) + l(µ), the corresponding monomial symmetric polynomials, e.g. m_(4,2)(X_4) and m_(4,2)(X_5) in the first example, are in fact different:

m_(4,2)(X_4) = ∑_{1≤i<j≤4} (x_i^4 x_j^2 + x_i^2 x_j^4),
m_(4,2)(X_5) = ∑_{1≤i<j≤5} (x_i^4 x_j^2 + x_i^2 x_j^4).

The other part of the fact is that the algebraic structure of Λ_n^k remains the same for different n ≥ k. We formulate this as follows.

Proposition 1.1.9. Let m ≥ n ≥ k. Then Λ_m^k is isomorphic to Λ_n^k.

Proof. Consider the map

ρ : Z[X_m] → Z[X_n],   x_i ↦ x_i for 1 ≤ i ≤ n,   x_i ↦ 0 for n + 1 ≤ i ≤ m.

Denote by ρ_{m,n} the map obtained by restricting ρ to Λ_m. Obviously

ρ_{m,n} : Λ_m → Λ_n

is a homomorphism of rings of symmetric polynomials. Similarly, further restricting ρ_{m,n} to Λ_m^k, we get a Z-linear map, denoted by ρ_{m,n}^k : Λ_m^k → Λ_n^k.

Two facts are worth noting here. The first is what ρ_{m,n}^k does to m_λ(X_m): every monomial summand of m_λ(X_m) that does not contain any x_i with n + 1 ≤ i ≤ m is sent to itself, which is also a monomial summand of m_λ(X_n), while every monomial summand of m_λ(X_m) that contains some x_i with n + 1 ≤ i ≤ m is sent to 0. Thus the image of m_λ(X_m) under ρ_{m,n}^k is exactly m_λ(X_n). The second is that, despite the difference in variables, Λ_m^k and Λ_n^k share the same dimension,

dim Λ_m^k = p(k) = dim Λ_n^k,

since m ≥ n ≥ k. Therefore ρ_{m,n}^k sends a basis to a basis and thus is bijective. So ρ_{m,n}^k is an isomorphism Λ_m^k → Λ_n^k. □

The discussion above clarifies why the number of variables is immaterial in the theory of symmetric functions as long as it is large enough. It also provides motivation to work with symmetric functions in infinitely many variables. Therefore, in the rest of this article, we will always assume that the number of variables is sufficient to describe the algebraic structure, unless otherwise stated.

1.2. Elementary symmetric functions

In the last section, we saw that Λ^k, the space of homogeneous symmetric functions of degree k, has {m_λ | λ ⊢ k} as a basis. In this section we focus on introducing other bases of Λ^k and the transition matrices between different bases. Results from linear algebra require that any other basis have the same cardinality as {m_λ | λ ⊢ k}, namely p(k), the number of partitions of k. So it is reasonable to expect that the bases to be introduced are all indexed by partitions of k. We start with the symmetric functions whose terms have their exponents distributed most "evenly" among the variables, or, in the language of formal power series, those f ∈ Λ^k whose exponent vectors are vertices of the unit cube, namely sequences containing only 0's and exactly k 1's.

Definition 1.2.1. The k-th elementary symmetric function is

e_k = ∑_{i_1 < ... < i_k} x_{i_1} ... x_{i_k}.

We use e_k or e_k(X) to denote the k-th elementary symmetric function in infinitely many variables x_1, x_2, ..., and e_k(X_n) or e_k(x_1, ..., x_n) for the k-th elementary symmetric polynomial in the n variables x_1, ..., x_n. By convention, set e_0 = 1 and e_k(X_n) = 0 for k > n.

In the rest of this article we will stick with the definitions for symmetric functions, as the parallel definitions for symmetric polynomials are completely analogous.

Example 1.2.2. e_2 = x_1 x_2 + x_1 x_3 + ... + x_2 x_3 + ... = ∑_{i<j} x_i x_j.

Elementary symmetric polynomials appear naturally in several different contexts. One way to obtain them is to expand the monic polynomial in y with roots x_1, ..., x_n,

f(y) = (y − x_1) ... (y − x_n) = ∑_{0≤k≤n} (−1)^k e_k(X_n) y^{n−k}.

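This expansion is quick to verify by machine (a sketch assuming sympy; the helper `e` is ours) for n = 3:

```python
from itertools import combinations
from math import prod
from sympy import symbols, expand, Poly

n = 3
y = symbols('y')
xs = symbols(f'x1:{n + 1}')

def e(k):
    """Elementary symmetric polynomial e_k(x_1, ..., x_n)."""
    return sum(prod(c) for c in combinations(xs, k))

f = expand(prod(y - x for x in xs))
coeffs = Poly(f, y).all_coeffs()          # [1, -e_1, e_2, -e_3]
for k in range(n + 1):
    assert expand(coeffs[k] - (-1)**k * e(k)) == 0
```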

Remark 1.2.3. Denote S_∞ = ⋃_{n≥1} S_n. Recall the definition of the monomial symmetric function of λ,

m_λ = ∑_{x_{i_1}^{λ_1} x_{i_2}^{λ_2} ... x_{i_l}^{λ_l} ∈ S_∞(x^λ)} x_{i_1}^{λ_1} x_{i_2}^{λ_2} ... x_{i_l}^{λ_l},

where the sum is over all distinct monomial images of x^λ = x_1^{λ_1} x_2^{λ_2} ... x_l^{λ_l} under S_∞. For λ = (1^l), we have

m_(1^l) = ∑ x_{i_1} x_{i_2} ... x_{i_l} = ∑_{i_1 < ... < i_l} x_{i_1} x_{i_2} ... x_{i_l} = e_l.

In particular, it can be deduced from the equation above that e_k ∈ Λ^k.

As discussed at the beginning of this section, a basis of Λ^k should be indexed by partitions of k. Thus the next definition comes up naturally.

Definition 1.2.4. Let λ ⊢ k and l = l(λ). The elementary symmetric function of λ is

e_λ = e_{λ_1} e_{λ_2} ... e_{λ_l}.

It follows directly from this definition that e_λ ∈ Λ^k for λ ⊢ k, since permutations act as ring homomorphisms.

Now on one hand we have {m_λ | λ ⊢ k}, a basis of Λ^k; on the other hand another set of elements of Λ^k, {e_λ | λ ⊢ k}, has just been defined. A natural question to ask is what the coefficients are when e_ν is expressed as a sum of m_µ's, where ν, µ ⊢ k. Following the notation of [MAC], denote these coefficients by M(e,m)_{ν,µ},

e_ν = ∑_{µ⊢k} M(e,m)_{ν,µ} m_µ,

where e, m in M(e,m)_{ν,µ} should be viewed as a pair of indices indicating the two bases between which the transition happens. In this manner, the M(e,m)_{ν,µ} form a matrix indexed by partitions of k. Once an order is defined on the set of all partitions of k, the structure of this matrix will become clearer.

Example 1.2.5. Consider the case k = 4, ν = (3,1). We have

e_ν = e_3 e_1 = (x_1 x_2 x_3 + x_1 x_2 x_4 + ...)(x_1 + x_2 + ...).

From the expression above one can easily see that m_(4) = ∑ x_i^4, m_(3,1) = ∑ x_i^3 x_j and m_(2,2) = ∑ x_i^2 x_j^2 do not occur in e_ν, because the product is only able to produce monomials with at most one variable squared. Consider

m_(2,1^2) = ∑_{i_2 < i_3, i_1 ∉ {i_2, i_3}} x_{i_1}^2 x_{i_2} x_{i_3},

where the sum is over the distinct monomial images of x_1^2 x_2 x_3 under permutations π ∈ S_n for some n. More concretely,

m_(2,1^2) = x_1^2 x_2 x_3 + x_1^2 x_2 x_4 + ... + x_2^2 x_3 x_4 + x_2^2 x_3 x_5 + ... .

Since m_µ occurs in e_ν as a whole entity, to determine the multiplicity of m_(2,1^2) in e_ν it suffices to determine the multiplicity in e_ν of any one of its summands. Pick x_1^2 x_2 x_3. There is only one way to obtain it from the product (x_1 x_2 x_3 + x_1 x_2 x_4 + ...)(x_1 + x_2 + ...), namely x_1 x_2 x_3 from the first parenthesis and x_1 from the second. So the multiplicity of m_(2,1^2) in e_ν is 1. Similarly, for

m_(1^4) = ∑_{i_1 < i_2 < i_3 < i_4} x_{i_1} x_{i_2} x_{i_3} x_{i_4} = x_1 x_2 x_3 x_4 + x_1 x_2 x_3 x_5 + ... ,

pick x_1 x_2 x_3 x_4. There are four different ways to form this monomial: x_1 x_2 x_3 times x_4, x_1 x_2 x_4 times x_3, x_1 x_3 x_4 times x_2, and x_2 x_3 x_4 times x_1. Hence the coefficient (multiplicity) of m_(1^4) in e_ν is 4. Putting this together, we arrive at the expression of e_ν as a sum of m_µ's,

e_(3,1) = m_(2,1^2) + 4 m_(1^4).

Observe from the example above that only some of the partitions have nonzero coefficients. This is in fact a general feature of the coefficients M(e,m)_{ν,µ}. The phenomenon is best described using the lexicographic order on partitions.

Definition 1.2.6. Let λ, µ ⊢ k. Then λ < µ in lexicographic order if there exists an index i such that

λ_j = µ_j for j < i, and λ_i < µ_i.

Lexicographic order is defined in analogy to the familiar natural order used on decimal numbers. Note that lexicographic order is a total order on partitions.

Example 1.2.7. In the case k = 4, all partitions arranged increasingly in lexicographic order are

(1^4) < (2, 1^2) < (2^2) < (3, 1) < (4).

Taking a second look at Example 1.2.5 with the help of the lexicographic order, it can be seen that the m_µ with nonzero coefficients in e_ν are exactly those with µ ≤ ν′ in lexicographic order. Furthermore, when µ = ν′ the coefficient is 1. These observations are shown to be general in the next proposition.

Proposition 1.2.8. Let |ν| = |µ|. Then
1) if µ > ν′ then M(e,m)_{ν,µ} = 0;
2) M(e,m)_{ν,ν′} = 1.


Proof. Recall that M(e,m)_{ν,µ} is equal to the number of ways of constructing a monomial summand of m_µ from the product

e_ν = e_{ν_1} e_{ν_2} ... e_{ν_l} = (∑ x_{i_1} ... x_{i_{ν_1}}) (∑ x_{i_1} ... x_{i_{ν_2}}) ... (∑ x_{i_1} ... x_{i_{ν_l}}),

and we are free to pick any monomial summand of m_µ. For simplicity, pick x_1^{µ_1} x_2^{µ_2} ... x_m^{µ_m}, so that M(e,m)_{ν,µ} is equal to the number of ways of constructing x_1^{µ_1} x_2^{µ_2} ... x_m^{µ_m} from e_ν. A construction is just a choice of one term from the sum in each factor of e_{ν_1} e_{ν_2} ... e_{ν_l}.

1) Suppose µ > ν′, and let k be the first index at which µ and ν′ differ, so that µ_i = ν′_i for i < k and µ_k > ν′_k. (If k = 1, then µ_1 > ν′_1 = l is already impossible, since each of the l factors can contribute any given variable at most once.) Now µ_1 = ν′_1 = |{j | ν_j ≥ 1}| = l implies that, in order to supply the required number of x_1's, the term chosen from every one of the factors e_{ν_1}, e_{ν_2}, ..., e_{ν_l} must contain x_1. That is to say, x_1^{µ_1} x_2^{µ_2} ... x_m^{µ_m} can only be obtained from e_ν through terms of the form

e_ν = e_{ν_1} e_{ν_2} ... e_{ν_l} = (∑ x_1 x_{i_2} ... x_{i_{ν_1}}) (∑ x_1 x_{i_2} ... x_{i_{ν_2}}) ... (∑ x_1 x_{i_2} ... x_{i_{ν_l}}).

Arguing similarly for x_2, ..., x_{k−1} using µ_2 = ν′_2, ..., µ_{k−1} = ν′_{k−1}, we conclude that the term chosen from each factor e_{ν_j} must contain x_1, x_2, ..., up to x_{min(ν_j, k−1)}. A problem arises when we proceed to x_k: among the l factors e_{ν_1} e_{ν_2} ... e_{ν_l}, only ν′_k = |{j | ν_j ≥ k}| have monomial summands containing at least k variables. That is to say, only ν′_k of the factors are able to contribute x_k while simultaneously contributing x_1 ... x_{k−1}. Therefore µ_k > ν′_k implies there is no way to produce the number of x_k's needed in x_1^{µ_1} x_2^{µ_2} ... x_m^{µ_m}. So M(e,m)_{ν,µ} = 0.

2) follows by the same argument, since µ_i = ν′_i for all i implies that there is only one way to complete each step, and thus only one way to construct x_1^{µ_1} x_2^{µ_2} ... x_m^{µ_m} from e_ν. So M(e,m)_{ν,ν′} = 1. □

Example 1.2.9. As an illustration, the full matrices M(e,m) for k = 4 and k = 5 are listed here.

Figure 1.2.1. M(e,m)_{λ,µ} for |λ| = |µ| = 4

λ\µ        1^4  (2,1^2)  (2^2)  (3,1)  (4)
(4)          1       0      0      0    0
(3,1)        4       1      0      0    0
(2^2)        6       2      1      0    0
(2,1^2)     12       5      2      1    0
(1^4)       24      12      6      4    1

Figure 1.2.2. M(e,m)_{λ,µ} for |λ| = |µ| = 5

λ\µ        1^5  (2,1^3)  (2^2,1)  (3,1^2)  (3,2)  (4,1)  (5)
(5)          1       0        0        0      0      0    0
(4,1)        5       1        0        0      0      0    0
(3,2)       10       3        1        0      0      0    0
(3,1^2)     20       7        2        1      0      0    0
(2^2,1)     30      12        5        2      1      0    0
(2,1^3)     60      27       12        7      3      1    0
(1^5)      120      60       30       20     10      5    1
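These matrices can be recomputed mechanically (a sketch assuming sympy; all helpers are ours, not from the thesis): expand e_λ in k variables and read off the coefficient of x_1^{µ_1} x_2^{µ_2} ... for each µ.

```python
from itertools import combinations
from math import prod
from sympy import symbols, expand, Poly

k = 4
xs = symbols(f'x1:{k + 1}')            # k variables suffice since |mu| = k

def partitions(n, maxpart=None):
    """All partitions of n as weakly decreasing tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
    for first in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def e(j):
    return sum(prod(c) for c in combinations(xs, j))

def M(lam, mu):
    """M(e,m)_{lam,mu}: coefficient of x1^mu1 * x2^mu2 * ... in e_lam."""
    terms = dict(Poly(expand(prod(e(j) for j in lam)), *xs).terms())
    return terms.get(tuple(mu) + (0,) * (k - len(mu)), 0)

parts = sorted(partitions(k))          # increasing lexicographic order
for lam in reversed(parts):            # rows listed decreasingly, as in the figure
    print(lam, [M(lam, mu) for mu in parts])

# Symmetry of Proposition 1.2.11: M(e,m)_{lam,mu} = M(e,m)_{mu,lam}
assert all(M(a, b) == M(b, a) for a in parts for b in parts)
```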

Corollary 1.2.10. The set of elementary symmetric functions {e_λ | λ ⊢ k} forms a basis of Λ^k.

Proof. Equipped with the lexicographic order, we can arrange {M(e,m)_{ν,µ} | ν, µ ⊢ k} as the entries of a matrix: fix the indices left to right and top to bottom, increasing with respect to lexicographic order, and denote this matrix by M(e,m). Let A be the matrix defined by the entries A_{ν,µ} = M(e,m)_{ν′,µ}. Thus A is the transition matrix expressing e_{ν′} as a linear combination of the m_µ, i.e.

e_{ν′} = ∑_{µ⊢k} M(e,m)_{ν′,µ} m_µ = ∑_{µ⊢k} A_{ν,µ} m_µ.

Since (ν′)′ = ν, Proposition 1.2.8 asserts that A is lower triangular with unit diagonal entries:

ν < µ ⟺ (ν′)′ < µ ⟹ M(e,m)_{ν′,µ} = 0 ⟺ A_{ν,µ} = 0, and A_{ν,ν} = M(e,m)_{ν′,(ν′)′} = 1.

From linear algebra we know that det(A) = 1, and hence A, and equivalently M(e,m), is invertible. Therefore each m_µ can be expressed as a linear combination of the e_λ's. It follows that {e_λ | λ ⊢ k} spans Λ^k, as {m_µ | µ ⊢ k} forms a basis. Combining this with the observation that |{e_λ | λ ⊢ k}| = p(k) = |{m_µ | µ ⊢ k}| completes the proof. □

We can see from the two matrices of Example 1.2.9 that the matrix with entries A_{ν,µ} = M(e,m)_{ν′,µ} enjoys a symmetry with respect to the anti-diagonal (when we conjugate back the row indices and consider M(e,m)_{ν,µ} itself, it becomes a symmetry with respect to the diagonal). This turns out to be a general result.

Proposition 1.2.11. M(e,m)_{ν,µ} = M(e,m)_{µ,ν}.

Proof. We first prove the claim that M(e,m)_{ν,µ} = |K_{µ,ν}|, where

K_{µ,ν} = {A ∈ {0,1}^{k×k} | ∑_{1≤j≤k} A_{i,j} = µ_i and ∑_{1≤i≤k} A_{i,j} = ν_j for all 1 ≤ i, j ≤ k}

is the set of 0-1 matrices with row sums µ_i and column sums ν_j.

As discussed in the proof of Proposition 1.2.8, M(e,m)_{ν,µ} equals the number of ways to construct the monomial x_1^{µ_1} x_2^{µ_2} ... x_{l(µ)}^{µ_{l(µ)}} from the product

e_ν = e_{ν_1} e_{ν_2} ... e_{ν_{l(ν)}}
= (∑_{a_1<...<a_{ν_1}} x_{a_1} ... x_{a_{ν_1}}) (∑_{b_1<...<b_{ν_2}} x_{b_1} ... x_{b_{ν_2}}) ... (∑_{r_1<...<r_{ν_{l(ν)}}} x_{r_1} ... x_{r_{ν_{l(ν)}}})
= (x_1 ... x_{ν_1} + ...)(x_1 ... x_{ν_2} + ...) ... (x_1 ... x_{ν_{l(ν)}} + ...).

First we show that every such construction yields a unique element of K_{µ,ν}. In each such construction of x_1^{µ_1} x_2^{µ_2} ... x_{l(µ)}^{µ_{l(µ)}}, there are exactly µ_1 "sources" of x_1, since each factor can contribute at most one x_1. That is to say, there are µ_1 factors in the product above that contribute x_1, say those in positions j_1, j_2, ..., j_{µ_1}. Putting 1 into the entries of the first row indicated by these indices and 0 elsewhere gives the first row of the desired matrix. The same procedure forms the other rows. It is not hard to check that the matrix so constructed satisfies the restrictions.

Conversely, given an element of K_{µ,ν}, we can use it to locate the monomial chosen in each factor of e_ν = e_{ν_1} e_{ν_2} ... e_{ν_{l(ν)}}: say in the first column the ν_1 ones are located at rows i_1 < i_2 < ... < i_{ν_1}; then we choose the monomial x_{i_1} x_{i_2} ... x_{i_{ν_1}} from e_{ν_1}. Proceeding in the same way, the row and column sum restrictions of K_{µ,ν} ensure that this always yields a unique way to construct x_1^{µ_1} x_2^{µ_2} ... x_{l(µ)}^{µ_{l(µ)}} from e_ν.

With the claim proved, the proposition follows from the fact that the transpose map is a bijection between K_{µ,ν} and K_{ν,µ}. □

Definition 1.2.12. Let a_i ∈ Λ for i ≥ 1. The family {a_i}_{i≥1} is algebraically independent if every polynomial p(y_1, ..., y_n) such that p(a_1, ..., a_n) = 0 is identically 0.

As an illustration, consider m_(1), m_(1^2) and m_(2). We have

m_(1)^2 = e_1^2 = e_(1,1) = 2 m_(1^2) + m_(2).

So for the polynomial q(y_1, y_2, y_3) = y_1^2 − 2 y_2 − y_3, the triple (m_(1), m_(1^2), m_(2)) is a zero:

q(m_(1), m_(1^2), m_(2)) = m_(1)^2 − 2 m_(1^2) − m_(2) = 0.

Obviously q is not identically 0, so m_(1), m_(1^2) and m_(2) are not algebraically independent.
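This relation is easy to verify directly in three variables (a quick sympy check, included for illustration):

```python
from sympy import symbols, expand

x1, x2, x3 = symbols('x1 x2 x3')
m1  = x1 + x2 + x3              # m_(1) = e_1
m11 = x1*x2 + x1*x3 + x2*x3     # m_(1,1) = e_2
m2  = x1**2 + x2**2 + x3**2     # m_(2)

# q(y1, y2, y3) = y1^2 - 2*y2 - y3 vanishes at (m_(1), m_(1^2), m_(2))
assert expand(m1**2 - 2*m11 - m2) == 0
```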

Theorem 1.2.13. Λ = Z[e_1, e_2, ...], and {e_r}_{r≥1} is algebraically independent over Z.

Proof. Since {m_λ | λ ⊢ k} is a Z-basis of Λ^k, the set {m_λ | λ ⊢ k, k ≥ 1} forms a Z-basis of Λ = ⊕_{k≥1} Λ^k. So {e_λ} forms another Z-basis, as each m_λ can be expressed as a Z-linear combination of the e_µ by Corollary 1.2.10. Therefore every element of Λ can be expressed as a polynomial in the e_λ. Note that, given algebraic independence, this expression is unique.

To prove the algebraic independence, let f(y_1, ..., y_n) be a polynomial with f(e_1, ..., e_n) = 0. It suffices to show that f = 0. Note that each monomial summand of f(e_1, ..., e_n) is, up to an integer coefficient, an elementary symmetric function e_µ for µ a partition of some positive integer c, and that c = |µ| = deg(e_µ) for µ ⊢ c. Collecting terms with the same c and rearranging f(e_1, ..., e_n), we have

0 = f(e_1, ..., e_n) = ... + (a_{c,1} e_µ + a_{c,2} e_ν + ...) + ... ,

which implies that each parenthesis is 0, since different parentheses collect terms of different degrees. Then by Corollary 1.2.10, the e_µ's within one parenthesis are linearly independent. Hence all coefficients are 0, and f(y_1, ..., y_n) = 0. □

Remark 1.2.14. When the number of variables is finite, say x_1, ..., x_n, Theorem 1.2.13 reduces to Λ_n = Z[e_1, ..., e_n] together with the algebraic independence of e_1, ..., e_n, which is the "fundamental theorem of symmetric functions".

Remark 1.2.15. Theorem 1.2.13 introduces a new perspective on Λ. As every element of Λ can be uniquely written as a polynomial in e_1, e_2, ..., we can think of Λ as the set of all polynomials in the formal variables e_1, e_2, .... This point of view makes it possible to substitute values directly for the variables e_1, e_2, ..., instead of for the "bottom layer" variables x_1, x_2, .... We will see later that this perspective provides some insights.

1.3. Generating functions and more bases of Λ

In this section we start with a brief introduction to generating functions. After that, more bases of Λ, as well as the connections between them, will be discussed. As a justification of this detour, we shall prove some results about Λ with the help of generating functions, which greatly simplifies the proofs. For the part on generating functions, we basically follow [SAG]§4.1.

Definition 1.3.1. Let (a_n)_{n≥0} = a_0, a_1, ... be a sequence of integers. Then the generating function of (a_n) is the power series

f(x) = ∑_{n≥0} a_n x^n.

If there is a sequence of sets (S_n)_{n≥0} = S_0, S_1, ... such that a_n = |S_n| for all n ≥ 0, i.e. a_n enumerates the set S_n, then f(x) is also said to be the generating function of (S_n).

Example 1.3.2. Consider the sequence of binomial coefficients a_i = C(n, i), 0 ≤ i ≤ n. Its generating function is

f(x) = (1 + x)^n = ∑_{0≤i≤n} C(n, i) x^i.

f(x) is also the generating function of the i-element subsets of {1, ..., n}. Let us examine this example in more detail.

First, given the sequence a_i = C(n, i), take S to be the set of all subsets of {1, ..., n},

S = {∅, {1}, ..., {n}, {1, 2}, ..., {1, ..., n}}.

Assign each element of S its cardinality as its "parameter"; for example, {1, 2, 3, 4} ∈ S has parameter 4. Note that the set S together with the parameters has the property that

the number of elements of S whose parameter equals k is a_k.

Next, observe that any subset T of {1, ..., n} can be described as

T = (1 ∈ T OR 1 ∉ T) AND (2 ∈ T OR 2 ∉ T) AND ... AND (n ∈ T OR n ∉ T).

Finally, rewrite the generating function f(x) = (1 + x)^n as

f(x) = (x^0 + x^1) · (x^0 + x^1) · ... · (x^0 + x^1),

where the number of factors is n. The following correspondences are almost obvious: AND corresponds to multiplication ×, and OR corresponds to addition +. That 1 ∈ T and 1 ∉ T correspond to x^1 and x^0 can be understood in this way: since the exponent of x corresponds to the parameter (namely the cardinality of T), the choice 1 ∈ T causes the cardinality of T to increase by 1, just as x^1 increases the exponent of x by 1.

The preceding analysis is actually just an instance of a basic general method for finding the generating function corresponding to a given sequence (a_n)_{n≥0}:

(1) Find a set S with parameters such that the number of elements of S whose parameter equals n is a_n.
(2) Express the elements of S as a sentence made up of the logical operators OR and AND and the parameter.
(3) Translate the sentence into a generating function using +, × and x^n.

Remark 1.3.3. From the previous example, we see that what the parameters do is exactly to keep track of the exponents of the variable x, so that x^n can "collect" from S only those elements with parameter n and reflect their quantity in its coefficient. In order to keep track of more information, we simply use more variables and more parameters. For example, we can use x_1 to keep track of the number of subsets containing 1 (that will be the parameter corresponding to x_1), and similarly x_2 for the number of subsets containing 2, while the total degree of the x's records the number of elements of the subset. Thus the logical phrase (1 ∉ T OR 1 ∈ T) AND (2 ∉ T OR 2 ∈ T) is translated into (x_1^0 + x_1^1) · (x_2^0 + x_2^1). In general, we are able to derive more information from generating functions in more variables, at the cost of growing complexity.

A more rigorous approach to these steps of deriving generating functions uses a weighting function from the set S to the space of formal power series. With a weighting function defined, the set S is usually referred to as a weighted set, and the so-called weight generating function can then be associated to a weighted set S. We will not go into much detail here; extensive discussions can be found in [SAG]§4.1 and [STA 11]§3, §4.


Theorem 1.3.4. The generating function for partitions is

∑_{n≥0} p(n) x^n = 1/(1−x) · 1/(1−x^2) · 1/(1−x^3) · ... .

Proof. Let S be the set of all partitions {λ = (1m1, 2m2, ...) | m

i ≥ 0, f or all i ≥ 0} .

Let parameter n be equal to the size of partition | λ |. For any partition λ ` n, we can express λ as 10∈ λ OR 11∈ λ OR 12∈ λ OR... AND 20∈ λ OR 21∈ λ OR 22∈ λ OR... AND 30∈ λ OR 31∈ λ OR 32∈ λ OR... AND... .

At the first row above, we determine the number of parts that equals to 1. The choice of 1k contributes 1 × k = k to the parameter n(the size of partition | λ |),

thus the first parenthesis translates into

x0×1+ x1×1+ x2×1... .

Similarly the second second row describes the number of parts that equals to 2. Now since the choice of 2k contributes 2 × k = 2k to the parameter n, the second parenthesis translates into

x0×2+ x1×2+ x2×2... . Multiplying them together, we have

X n≥0 p (n) xn= x0×1+ x1×1+ x2×1... · x0×2+ x1×2+ x2×2... · ... = 1 1 − x 1 1 − x2 1 1 − x3... .  Getting back to symmetric functions, we first describe the generating function of en.
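Before doing so, note that the partition identity just proved is easy to check by machine: truncate the infinite product at a fixed degree and compare coefficients with a brute-force partition count. The sketch below is ours, not from the text, and all helper names are our own choices.

```python
def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def product_coefficients(D):
    """Coefficients of prod_{k>=1} 1/(1 - x^k), truncated at degree D."""
    coeffs = [1] + [0] * D              # the constant series 1
    for k in range(1, D + 1):           # multiply by 1/(1-x^k) = 1 + x^k + x^{2k} + ...
        for n in range(k, D + 1):
            coeffs[n] += coeffs[n - k]
    return coeffs

D = 12
assert product_coefficients(D) == [len(list(partitions(n))) for n in range(D + 1)]
print(product_coefficients(8))          # [1, 1, 2, 3, 5, 7, 11, 15, 22]
```

The inner double loop is the standard dynamic-programming way of multiplying in one factor $1/(1-x^k)$ at a time, which only ever touches coefficients up to the truncation degree.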

Proposition 1.3.5. Consider $\mathbb{Z}[x, t]$ in commuting variables $t, x_1, x_2, \dots$. The generating function of $\{e_n\}_{n\ge 0}$, denoted by $E(t)$, is
$$E(t) = \sum_{n\ge 0} e_n(x)\, t^n = \prod_{i\ge 1} (1 + x_i t).$$

Proof. Consider the set of all partitions with distinct parts,
$$S = \{\lambda = (1^{m_1}, 2^{m_2}, \dots) \mid m_i \in \{0, 1\} \text{ for all } i \ge 1\}.$$
Let t keep track of the length $l(\lambda)$ of the partition $\lambda \in S$, and let $x_i$ keep track of the number of parts in $\lambda$ that equal i. In this way, any element $\lambda$ of S can be described as
(1 ∉ T OR 1 ∈ T) AND (2 ∉ T OR 2 ∈ T) AND ... .

Here the choice i ∈ T increases $l(\lambda)$ by 1 and the number of parts that equal i by 1. So, i ∈ T is translated into $x_i t$. Therefore we have the generating function
$$\sum_{n\ge 0} e_n(x)\, t^n = \left(x_1^0 t^0 + x_1^1 t^1\right)\left(x_2^0 t^0 + x_2^1 t^1\right)\cdots = \prod_{i\ge 1}(1 + x_i t). \qquad\square$$
Note that we can also prove this equation simply by exploiting the definition of $e_n$ and comparing the coefficients of $t^n$ (which are polynomials in $x_1, x_2, \dots$) on both sides.
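That coefficient comparison can also be carried out by machine for finitely many variables. The following sketch (helper names are ours) expands $\prod_i (1 + x_i t)$ as a polynomial in t at rational sample points and compares each coefficient with $e_n$ computed straight from its definition.

```python
from fractions import Fraction
from itertools import combinations
from math import prod

xs = [Fraction(1, k) for k in range(2, 7)]    # five sample values for x_1, ..., x_5

def e(n, xs):
    """e_n evaluated directly from its definition as a sum over n-subsets."""
    return sum((prod(c) for c in combinations(xs, n)), Fraction(0))

# expand prod_i (1 + x_i t) as a polynomial in t; poly[n] is the coefficient of t^n
poly = [Fraction(1)]
for x in xs:
    # multiplying by (1 + x t) maps coefficient lists via new[n] = old[n] + x * old[n-1]
    poly = [a + x * b for a, b in zip(poly + [Fraction(0)], [Fraction(0)] + poly)]

assert poly == [e(n, xs) for n in range(len(xs) + 1)]
```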

Definition 1.3.6. The k-th power sum symmetric function is
$$p_k = \sum_{i\ge 1} x_i^k.$$
The k-th complete homogeneous symmetric function is
$$h_k = \sum_{i_1 \le i_2 \le \dots \le i_k} x_{i_1} x_{i_2} \cdots x_{i_k}.$$
By convention, set $h_0 = 1$, $h_i = 0$ for $i < 0$, and leave $p_0$ undefined.

Just as $e_k$ can be thought of as the homogeneous polynomial of degree k whose terms have their exponents most "evenly" distributed, $p_k$ can be thought of as the one whose terms have their exponents distributed in the most "clustered" way. Recall that for $e_k$ we have
$$e_k = m_{(1^k)},$$
formalizing this perspective. For $p_k$, it follows directly from the definitions that
$$p_k = m_{(k)},$$
as both sides can be obtained by symmetrizing the monomial $x_1^k$. Following this line, with $e_k$ and $p_k$ taking the two end points and the remaining $m_\lambda$, $\lambda \vdash k$, distributed in between, we see that $h_k$ is in fact the sum of the whole "spectrum" (justifying the name "complete"):
$$h_k = \sum_{\lambda \vdash k} m_\lambda.$$
Indeed, this equation can be seen from the fact that both sides are sums of the terms $x_{i_1}^{\lambda_1} x_{i_2}^{\lambda_2} \cdots x_{i_l}^{\lambda_l}$, where $\lambda = (\lambda_1, \dots, \lambda_l) \vdash k$, each with multiplicity 1.

Example 1.3.7. For k = 3,
$$p_3 = x_1^3 + x_2^3 + \cdots, \qquad e_3 = x_1 x_2 x_3 + x_1 x_2 x_4 + \cdots,$$
$$h_3 = x_1^3 + x_2^3 + \cdots + x_1^2 x_2 + x_2^2 x_1 + \cdots + x_1 x_2 x_3 + x_1 x_2 x_4 + \cdots .$$
We can see the occurrence of $e_3$, $p_3$ and the other $m_\lambda$, $\lambda \vdash 3$, in $h_3$, as expected.
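The decomposition just observed, $h_3 = m_{(3)} + m_{(2,1)} + m_{(1,1,1)}$, can be confirmed by direct evaluation at rational sample points. A sketch, with helper names of our own choosing:

```python
from fractions import Fraction
from itertools import combinations_with_replacement, permutations
from math import prod

xs = [Fraction(k, k + 1) for k in range(1, 5)]   # four sample values

def h(n, xs):
    """Complete homogeneous h_n: sum over weakly increasing index tuples."""
    return sum((prod(c) for c in combinations_with_replacement(xs, n)), Fraction(0))

def m(lam, xs):
    """Monomial symmetric function m_lambda: sum over distinct exponent permutations."""
    expo = tuple(lam) + (0,) * (len(xs) - len(lam))
    return sum((prod(x ** a for x, a in zip(xs, perm)) for perm in set(permutations(expo))),
               Fraction(0))

assert h(3, xs) == m((3,), xs) + m((2, 1), xs) + m((1, 1, 1), xs)
```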


Definition 1.3.8. Let $\lambda \vdash n$ and $l = l(\lambda)$. The power sum symmetric function of $\lambda$ is
$$p_\lambda = p_{\lambda_1} p_{\lambda_2} \cdots p_{\lambda_l},$$
and the complete symmetric function of $\lambda$ is
$$h_\lambda = h_{\lambda_1} h_{\lambda_2} \cdots h_{\lambda_l}.$$

Proposition 1.3.9. The generating function of $\{h_n\}_{n\ge 0}$, denoted by $H(t)$, is
$$(1.3.1)\qquad H(t) = \sum_{n\ge 0} h_n t^n = \prod_{i\ge 1} \frac{1}{1 - x_i t}.$$

Proof. Let S be the set of all partitions $\lambda$. Let t keep track of $l(\lambda)$ and $x_i$ of the number of parts that equal i. Any partition $\lambda$ can then be expressed as
($1^0 \in \lambda$ OR $1^1 \in \lambda$ OR $1^2 \in \lambda$ OR ...) AND ($2^0 \in \lambda$ OR $2^1 \in \lambda$ OR ...) AND ($3^0 \in \lambda$ OR $3^1 \in \lambda$ OR ...) AND ... ,
which is translated into
$$\left(1 + x_1 t + x_1^2 t^2 + \cdots\right)\left(1 + x_2 t + x_2^2 t^2 + \cdots\right)\cdots = \frac{1}{1 - x_1 t}\cdot\frac{1}{1 - x_2 t}\cdots . \qquad\square$$

Instead of the generating function for $\{p_n\}$, it is easier to derive the one for $\left\{\frac{p_n}{n}\right\}$.

Proposition 1.3.10. The generating function of $\left\{\frac{p_n}{n}\right\}_{n\ge 1}$, denoted by $P(t)$, is
$$(1.3.2)\qquad P(t) = \sum_{n\ge 1} \frac{p_n}{n}\, t^n = \ln \prod_{i\ge 1} \frac{1}{1 - x_i t}.$$

Proof. We compute directly:
$$\ln \prod_{i\ge 1} \frac{1}{1 - x_i t} = \sum_{i\ge 1} \ln \frac{1}{1 - x_i t} = \sum_{i\ge 1} \sum_{n\ge 1} \frac{(x_i t)^n}{n} = \sum_{n\ge 1} \frac{t^n}{n} \sum_{i\ge 1} x_i^n = \sum_{n\ge 1} \frac{p_n}{n}\, t^n. \qquad\square$$


Remark 1.3.11. In [MAC]§1.2, one can find a generating function for $\{p_n\}$ itself instead of $\left\{\frac{p_n}{n}\right\}$, with a slight deviation from the definition of generating functions we use here:
$$\sum_{r\ge 1} p_r t^{r-1} = \sum_{r\ge 1} \sum_{i\ge 1} x_i^r t^{r-1} = \sum_{i\ge 1} \sum_{r\ge 1} x_i^r t^{r-1} = \sum_{i\ge 1} \frac{x_i}{1 - x_i t} = \sum_{i\ge 1} \frac{d}{dt} \ln \frac{1}{1 - x_i t}.$$
The "deviation" mentioned above is that here $p_n$ appears as the coefficient of $t^{n-1}$ instead of $t^n$. In this article we will not refer to this as the generating function of $\{p_n\}$; instead, we follow the terminology of [SAG] and [EGG], using $P(t)$ to denote the generating function of $\left\{\frac{p_n}{n}\right\}$ as stated in (1.3.2). However, this observation provides a useful formula: the series above is exactly $P'(t)$.

Proposition 1.3.12. Let $z_\lambda = \prod_{i\ge 1} i^{m_i} \cdot m_i!$, where $m_i = m_i(\lambda) = |\{j \mid \lambda_j = i\}|$, i.e. the number of parts of $\lambda$ that equal i. Then
$$(1.3.3)\qquad H(t) = \sum_{\lambda} z_\lambda^{-1}\, p_\lambda\, t^{|\lambda|},$$
and
$$(1.3.4)\qquad h_n = \sum_{|\lambda| = n} z_\lambda^{-1}\, p_\lambda.$$


Proof. Since $P(t) = \ln H(t)$ by (1.3.1) and (1.3.2), we have $H'(t) = P'(t) H(t)$. Treating $P'(t)$ as a known function and $H(t)$ as unknown, the theory of ordinary differential equations gives
$$H(t) = e^{P(t)} = e^{\sum_{n\ge 1} \frac{p_n}{n} t^n} = \prod_{n\ge 1} e^{\frac{p_n}{n} t^n} = \prod_{n\ge 1} \sum_{i=0}^{\infty} \frac{(p_n t^n)^i}{n^i\, i!} = \sum_{\lambda} z_\lambda^{-1}\, p_\lambda\, t^{|\lambda|}.$$
Taking coefficients of $t^n$ on both sides leads to
$$h_n = \sum_{|\lambda| = n} z_\lambda^{-1}\, p_\lambda. \qquad\square$$
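Formula (1.3.4) lends itself to a machine check for small n: evaluate $h_n$ and $\sum_{|\lambda|=n} z_\lambda^{-1} p_\lambda$ at rational sample points and compare exactly. A sketch (helper names ours):

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import prod, factorial

xs = [Fraction(1, k) for k in range(2, 6)]

def partitions(n, m=None):
    m = n if m is None else m
    if n == 0:
        yield ()
        return
    for k in range(min(n, m), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def p(k, xs):                                    # power sum p_k
    return sum((x ** k for x in xs), Fraction(0))

def h(n, xs):                                    # complete homogeneous h_n
    return sum((prod(c) for c in combinations_with_replacement(xs, n)), Fraction(0))

def z(lam):                                      # z_lambda = prod_i i^{m_i} m_i!
    return prod(i ** lam.count(i) * factorial(lam.count(i)) for i in set(lam))

for n in range(1, 6):
    rhs = sum(Fraction(1, z(lam)) * prod(p(k, xs) for k in lam) for lam in partitions(n))
    assert h(n, xs) == rhs
```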

As an immediate result, we prove a formula relating $\{h_n\}$ and $\{p_n\}$.

Proposition 1.3.13. For $n \ge 1$, we have
$$(1.3.5)\qquad n h_n = \sum_{r=1}^{n} p_r\, h_{n-r}.$$

Proof. From the proof of the last proposition, we have
$$\frac{H'(t)}{H(t)} = P'(t), \qquad H'(t) = P'(t)\, H(t).$$
Taking coefficients of $t^{n-1}$ on both sides proves (1.3.5); remember that $p_r$ is the coefficient of $t^{r-1}$ in $P'(t)$. $\square$
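Newton's identity (1.3.5) can likewise be verified numerically for small n (helper names ours):

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import prod

xs = [Fraction(k, 7) for k in range(1, 6)]       # five sample values

def h(n, xs):
    return sum((prod(c) for c in combinations_with_replacement(xs, n)), Fraction(0))

def p(k, xs):
    return sum((x ** k for x in xs), Fraction(0))

# n h_n = sum_{r=1}^{n} p_r h_{n-r}, checked exactly over the rationals
for n in range(1, 7):
    assert n * h(n, xs) == sum(p(r, xs) * h(n - r, xs) for r in range(1, n + 1))
```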

From the expressions for $E(t)$ and $H(t)$,
$$E(t) = \prod_{i\ge 1} (1 + x_i t), \qquad H(t) = \prod_{i\ge 1} (1 - x_i t)^{-1},$$
we have
$$(1.3.6)\qquad E(t)\, H(-t) = 1.$$
The coefficient of $t^n$ on the left hand side must vanish for every $n \ge 1$, therefore
$$(1.3.7)\qquad \sum_{k=0}^{n} (-1)^k e_k\, h_{n-k} = 0 \quad \text{for } n \ge 1.$$
Taking derivatives of (1.3.6) on both sides, we have
$$E'(t)\, H(-t) - E(t)\, H'(-t) = 0, \qquad \frac{H'(-t)}{H(-t)} = \frac{E'(t)}{E(t)}.$$
Therefore we can easily get a formula for $\{e_n\}$ and $\{p_n\}$ similar to (1.3.5).

Proposition 1.3.14. For $n \ge 1$, we have
$$(1.3.8)\qquad n e_n = \sum_{r=1}^{n} (-1)^{r-1} p_r\, e_{n-r}.$$

Proof. From the proof of Proposition 1.3.13 and the equation above, we have
$$P'(-t) = \frac{H'(-t)}{H(-t)} = \frac{E'(t)}{E(t)},$$
therefore $E'(t) = P'(-t)\, E(t)$, which, by comparing coefficients of $t^{n-1}$, gives (1.3.8).

$\square$

Proposition 1.3.15. Let $\lambda$ be a partition. Then
$$(1.3.9)\qquad \det\left(h_{\lambda_i - i + j}\right) = \det\left(e_{\lambda'_i - i + j}\right).$$

Proof. We prove the stronger equation of determinants
$$\det\left(h_{\lambda_i - \mu_j - i + j}\right) = \det\left(e_{\lambda'_i - \mu'_j - i + j}\right)$$
for $\lambda$ and $\mu$ two partitions. Recall that $h_k = e_k = 0$ for $k < 0$ and $h_0 = e_0 = 1$. Consider the matrices
$$H = (h_{i-j})_{0\le i,j\le n} = \begin{pmatrix} 1 & 0 & \dots & 0 & 0 \\ h_1 & 1 & \dots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ h_{n-1} & h_{n-2} & \dots & 1 & 0 \\ h_n & h_{n-1} & \dots & h_1 & 1 \end{pmatrix}$$
and
$$E = \left((-1)^{i-j} e_{i-j}\right)_{0\le i,j\le n} = \begin{pmatrix} 1 & 0 & \dots & 0 & 0 \\ -e_1 & 1 & \dots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ (-1)^{n-1} e_{n-1} & (-1)^{n-2} e_{n-2} & \dots & 1 & 0 \\ (-1)^n e_n & (-1)^{n-1} e_{n-1} & \dots & -e_1 & 1 \end{pmatrix},$$
with $n + 1 > l(\lambda) + l(\lambda')$ and $n + 1 > l(\mu) + l(\mu')$. Note that both are lower triangular matrices with $n + 1$ rows and columns and with unital diagonal, so $\det(E) = \det(H) = 1$. By (1.3.7), we have $EH = HE = I$. Now, from the theory of matrices (complementary minors of mutually inverse matrices), it follows that, for index sets $I, J$,
$$\det\,(H)_{I,J} = (-1)^{\sum_{i\in I} i + \sum_{j\in J} j}\, \det\left((E)_{J',I'}\right) \det(E) = (-1)^{\sum_{i\in I} i + \sum_{j\in J} j}\, \det\left((E)_{J',I'}\right),$$
where $(H)_{I,J}$ denotes the submatrix of H with rows indexed by I and columns by J, and $I', J'$ are the complements of $I, J$ in $\{0, 1, \dots, n\}$.


Now let p, q be such that p is no less than the lengths of $\lambda$ and $\mu$, q is no less than the widths of $\lambda$ and $\mu$, and $p + q = n + 1$. Let
$$I = \{\lambda_i + p - i \mid 1 \le i \le p\}, \qquad I' = \{p - 1 + j - \lambda'_j \mid 1 \le j \le q\}.$$
It is easy to see that these two sets form a complementary pair of subsets of $\{0, 1, \dots, n\}$: put the shape $\lambda$ into the rectangle $q \times p$ and number the $p + q$ border segments from bottom to top with $0, 1, \dots, n$. For instance, for $\lambda = (4, 3, 1)$ in a $4 \times 6$ rectangle, the border path separates the cells belonging to $\lambda$ from those outside of it; the numbers attached to the vertical border segments are exactly those in I, and the numbers attached to the horizontal ones are those in $I'$. The same applies to
$$J = \{\mu_i + p - i \mid 1 \le i \le p\}, \qquad J' = \{p - 1 + j - \mu'_j \mid 1 \le j \le q\}.$$
Hence, the previous determinant equation further gives
$$\det\left(h_{\lambda_i - \mu_j - i + j}\right)_{1\le i,j\le p} = (-1)^{|\lambda| + |\mu|}\, \det\left((-1)^{\lambda'_i - \mu'_j - i + j}\, e_{\lambda'_i - \mu'_j - i + j}\right)_{1\le i,j\le q} = \det\left(e_{\lambda'_i - \mu'_j - i + j}\right)_{1\le i,j\le q}.$$
The desired equation is thus obtained by setting $\mu = 0$. $\square$
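Equation (1.3.9) is a polynomial identity, so it can be spot-checked by evaluating both determinants at rational sample points. The sketch below (helper names ours) does this for $\lambda = (4, 3, 1)$:

```python
from fractions import Fraction
from itertools import combinations, combinations_with_replacement, permutations
from math import prod

xs = [Fraction(1, k) for k in range(2, 8)]    # six sample values

def e(n, xs):
    if n < 0:
        return Fraction(0)
    return sum((prod(c) for c in combinations(xs, n)), Fraction(0))

def h(n, xs):
    if n < 0:
        return Fraction(0)
    return sum((prod(c) for c in combinations_with_replacement(xs, n)), Fraction(0))

def det(M):
    """Determinant by permutation expansion (fine for small matrices)."""
    n = len(M)
    sgn = lambda pi: (-1) ** sum(pi[i] > pi[j] for i in range(n) for j in range(i + 1, n))
    return sum(sgn(pi) * prod(M[i][pi[i]] for i in range(n)) for pi in permutations(range(n)))

def conjugate(lam):
    return tuple(sum(1 for part in lam if part > j) for j in range(lam[0]))

lam = (4, 3, 1)
lam_c = conjugate(lam)                        # (3, 2, 2, 1)
H = [[h(lam[i] - i + j, xs) for j in range(len(lam))] for i in range(len(lam))]
E = [[e(lam_c[i] - i + j, xs) for j in range(len(lam_c))] for i in range(len(lam_c))]
assert det(H) == det(E)
```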

Now, since $\{e_\lambda \mid \lambda \vdash n,\, n \ge 0\}$ forms a basis of $\Lambda$, one can define a linear map $\omega : \Lambda \to \Lambda$ by
$$\omega : e_\lambda \mapsto h_\lambda$$
for all $\lambda \vdash n \ge 0$. It can be seen that $\omega$ is a homomorphism:
$$\omega(e_\lambda e_\mu) = \omega(e_{\lambda\circ\mu}) = h_{\lambda\circ\mu} = h_\lambda h_\mu,$$
where $\lambda \circ \mu$ represents the unique partition of $n + m$ obtained from the union of the multisets of parts of $\lambda$ and $\mu$, given that $\lambda \vdash n$ and $\mu \vdash m$. For example,
$$\lambda = (3, 1, 1) \vdash 5, \qquad \mu = (4, 2) \vdash 6, \qquad \lambda \cup \mu = \{3, 1, 1, 4, 2\}, \qquad \lambda \circ \mu = (4, 3, 2, 1, 1) \vdash 11.$$
Furthermore, it is obvious that $\omega$ preserves degree, which implies that $\omega$ is actually a homomorphism of graded rings $\Lambda \to \Lambda$.

Proposition 1.3.16. $\omega^2$ is the identity map; equivalently, $\omega$ is an involution.

Proof. By the definition of $\omega$, in particular, we have
$$\omega(e_n) = h_n$$
for all $n \ge 0$. Apply $\omega$ to both sides of formula (1.3.7); linearity of $\omega$ and its being a homomorphism imply
$$\sum_{k=0}^{n} (-1)^k\, \omega(e_k)\, \omega(h_{n-k}) = 0, \qquad \sum_{k=0}^{n} (-1)^k\, h_k\, \omega(h_{n-k}) = 0, \qquad \sum_{k=0}^{n} (-1)^{n-k}\, h_k\, \omega(h_{n-k}) = 0, \qquad \sum_{l=0}^{n} (-1)^l\, \omega(h_l)\, h_{n-l} = 0.$$
Comparing the last equation with (1.3.7),
$$\sum_{k=0}^{n} (-1)^k e_k\, h_{n-k} = 0,$$
we have $\omega(h_n) = e_n$ by induction on n, thus $\omega^2(e_n) = e_n$ for all $n \ge 0$. $\square$

Injectivity and surjectivity of $\omega$ follow immediately, so $\omega$ is an automorphism of $\Lambda$. Therefore, the same result as Theorem 1.2.13 for $\{e_n\}_{n\ge 0}$ can be derived for $\{h_n\}_{n\ge 0}$ without effort.

Theorem 1.3.17. $\Lambda = \mathbb{Z}[h_1, h_2, \dots]$ and $\{h_n\}_{n\ge 1}$ is algebraically independent over $\mathbb{Z}$.

From equation (1.3.5) we see that $h_n \in \mathbb{Q}[p_1, \dots, p_n]$, because of the coefficient n on the left hand side, and also that $p_n \in \mathbb{Z}[h_1, \dots, h_n]$. Therefore we are led to the following result.

Proposition 1.3.18. $\mathbb{Q}[p_1, \dots, p_n] = \mathbb{Q}[h_1, \dots, h_n]$.

Proof. "⊆" follows from $p_n \in \mathbb{Z}[h_1, \dots, h_n]$ and "⊇" follows from $h_n \in \mathbb{Q}[p_1, \dots, p_n]$. $\square$

Corollary 1.3.19. $\{p_n\}_{n\ge 1}$ is algebraically independent over $\mathbb{Q}$.

Proof. It follows immediately from the above two results, as algebraic independence is always checked on finitely many elements. $\square$

It follows that the analogue of Theorem 1.3.17 for $\{p_n\}$ holds over $\mathbb{Q}$, not $\mathbb{Z}$. Denote $\Lambda_{\mathbb{Q}} = \Lambda \otimes_{\mathbb{Z}} \mathbb{Q}$, namely use the field $\mathbb{Q}$ in place of $\mathbb{Z}$ as the coefficient ring.

Theorem 1.3.20. $\Lambda_{\mathbb{Q}} = \mathbb{Q}[p_1, p_2, \dots]$ and $\{p_n\}_{n\ge 1}$ is algebraically independent over $\mathbb{Q}$.

We close this section with the image of $p_n$ under the involution $\omega$.

Proposition 1.3.21. For $n \ge 1$, we have $\omega(p_n) = (-1)^{n-1} p_n$.

Proof. One simply combines formulas (1.3.5) and (1.3.8): applying $\omega$ to (1.3.5) gives $n e_n = \sum_{r=1}^{n} \omega(p_r)\, e_{n-r}$, and comparing with (1.3.8) by induction on n yields $\omega(p_n) = (-1)^{n-1} p_n$. $\square$

CHAPTER 2

Schur Functions and Representations of Symmetric Groups

2.1. Schur functions

The Schur functions occupy a central position in the theory of symmetric functions; the fact that they admit multiple different but equivalent definitions is one manifestation of this. On the other hand, Schur functions also play a critical role in the second part of this article, namely in the problem of Kronecker coefficients. So in this section, we start with the historically first definition ([MAC]§1.3), which was discovered by Jacobi but first linked with representations of symmetric groups by Schur. We then reach the formula using raising operators (to be defined later) and, following the lines of [HAR], prove the tableau formula which is frequently used as the definition in more modern approaches, e.g. [SAG]§4. Along the way, we get a first glance of how the connection with representations of symmetric groups emerges naturally.

So far, in the previous sections, we have been focusing our attention on $\Lambda$, the space invariant under the action of permutations. Now it is high time to take a closer look at the second "naturally" arising space mentioned in §1.1: the space invariant up to sign under permutations, namely those polynomials $f(x_1, \dots, x_n)$ such that
$$\pi(f) = \operatorname{sgn}(\pi)\, f, \qquad \pi \in S_n.$$
Given $x^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}$ with $\alpha = (\alpha_1, \alpha_2, \dots, \alpha_n)$ an n-tuple of non-negative integers, one obvious way to obtain such a polynomial is to antisymmetrize $x^\alpha$:
$$a_\alpha := \sum_{\pi \in S_n} \operatorname{sgn}(\pi)\, \pi(x^\alpha).$$
It can be easily seen that
$$\pi(a_\alpha) = \operatorname{sgn}(\pi)\, a_\alpha \quad \text{for all } \pi \in S_n.$$


Note that if there are identical parts in $\alpha$, say $\alpha_i = \alpha_j$ for $i \ne j$, then $a_\alpha = 0$. Indeed,
$$a_\alpha = (i, j)\, a_\alpha = \operatorname{sgn}(i, j)\, a_\alpha = -a_\alpha.$$
Thus, without loss of generality, we may assume that all parts of $\alpha$ are distinct, i.e. $\alpha_1 > \alpha_2 > \dots > \alpha_n \ge 0$. Therefore $\alpha$ can be written as $\alpha = \lambda + \delta$, where $\lambda$ is a partition of length no more than n and $\delta = (n-1, n-2, \dots, 1, 0)$. Then, according to the definition,
$$a_\alpha = a_{\lambda+\delta} = \sum_{\pi \in S_n} \operatorname{sgn}(\pi)\, \pi\left(x^{\lambda+\delta}\right) = \sum_{\pi \in S_n} \operatorname{sgn}(\pi)\, \pi\left(x_1^{\lambda_1+n-1} x_2^{\lambda_2+n-2} \cdots x_n^{\lambda_n}\right) = \det\left(x_i^{\lambda_j+n-j}\right)_{1\le i,j\le n}.$$
In particular, when $\lambda = 0$, we have
$$a_\delta = \det\left(x_i^{n-j}\right) = \prod_{1\le i<j\le n} (x_i - x_j),$$
which is the Vandermonde determinant.

Observe that $a_\alpha = a_{\lambda+\delta}$ is divisible by $(x_i - x_j)$ in $\mathbb{Z}[x_1, \dots, x_n]$ for each $1 \le i < j \le n$: in the matrix $\left(x_i^{\lambda_j+n-j}\right)$, after adding a negative copy of the j-th row to the i-th row, each entry of the new i-th row, $x_i^{\lambda_k+n-k} - x_j^{\lambda_k+n-k}$, is divisible by $(x_i - x_j)$, so the factor $(x_i - x_j)$ can be pulled out of the determinant without altering it. This observation further implies that $a_{\lambda+\delta}$ is divisible by $a_\delta$ in $\mathbb{Z}[x_1, \dots, x_n]$.

Definition 2.1.1. Let $\delta = (n-1, \dots, 1, 0)$ and let $\lambda$ be a partition with $l(\lambda) \le n$. Then the Schur function of the partition $\lambda$ in the variables $x_1, \dots, x_n$ is
$$s_\lambda(x_1, \dots, x_n) := \frac{a_{\lambda+\delta}}{a_\delta}.$$

Proposition 2.1.2. $s_\lambda(x_1, \dots, x_n)$ is a symmetric polynomial.

Proof. If $\lambda = 0$ the claim follows directly. Let $\lambda > 0$; then
$$\pi(s_\lambda(x_1, \dots, x_n)) = \frac{\pi(a_{\lambda+\delta})}{\pi(a_\delta)} = \frac{\operatorname{sgn}(\pi)\, a_{\lambda+\delta}}{\operatorname{sgn}(\pi)\, a_\delta} = s_\lambda(x_1, \dots, x_n). \qquad\square$$

As we increase the number of variables, as long as $n \ge l(\alpha)$, we have
$$a_\alpha(x_1, \dots, x_n, 0) = a_\alpha(x_1, \dots, x_n).$$
In the notation of §1.1, we have
$$\rho_{n+1,n}(s_\lambda(x_1, \dots, x_n, x_{n+1})) = s_\lambda(x_1, \dots, x_n),$$
where $\rho_{m,n} : \Lambda_m \to \Lambda_n$ is the map from §1.1. Hence, for each partition $\lambda$, $s_\lambda(x_1, \dots, x_n)$ stabilizes once there are "sufficiently many" variables, namely when $n \ge l(\lambda)$, just as in the discussion in §1.1. Therefore we can let $s_\lambda$ denote the resulting unique element of $\Lambda$ as $n \to \infty$.

As we have seen in the last section, the elementary symmetric functions $\{e_n\}_{n\ge 0}$ and the complete symmetric functions $\{h_n\}_{n\ge 0}$ form algebraic bases of $\Lambda$. The next formulas, usually referred to as the Jacobi-Trudi identities, express $s_\lambda$ in terms of the $e_n$ and the $h_n$ separately.

Theorem 2.1.3. Let $\lambda$ be a partition, and let $n \ge l(\lambda)$ and $m \ge l(\lambda')$. Then
$$(2.1.1)\qquad s_\lambda = \det\left(h_{\lambda_i - i + j}\right)_{1\le i,j\le n},$$
and
$$(2.1.2)\qquad s_\lambda = \det\left(e_{\lambda'_i - i + j}\right)_{1\le i,j\le m}.$$

Before proving the Jacobi-Trudi identities, we first prove the following claim. Denote
$$e_r^{(k)} = e_r(x_1, \dots, x_{k-1}, x_{k+1}, \dots, x_n).$$

Claim 2.1.4. For any integer sequence $\alpha = (\alpha_1, \dots, \alpha_n) \in \mathbb{N}^n$, let
$$A_\alpha = \left(x_j^{\alpha_i}\right)_{1\le i,j\le n}, \qquad H_\alpha = \left(h_{\alpha_i - n + j}\right)_{1\le i,j\le n},$$
and
$$M = \left((-1)^{n-i}\, e_{n-i}^{(j)}\right)_{1\le i,j\le n} = \begin{pmatrix} (-1)^{n-1} e_{n-1}^{(1)} & (-1)^{n-1} e_{n-1}^{(2)} & \dots & (-1)^{n-1} e_{n-1}^{(n)} \\ (-1)^{n-2} e_{n-2}^{(1)} & (-1)^{n-2} e_{n-2}^{(2)} & \dots & (-1)^{n-2} e_{n-2}^{(n)} \\ \vdots & \vdots & & \vdots \\ -e_1^{(1)} & -e_1^{(2)} & \dots & -e_1^{(n)} \\ 1 & 1 & \dots & 1 \end{pmatrix}.$$
Then $A_\alpha = H_\alpha M$.

Proof of claim. Note that the assertion is an equation of $n \times n$ matrices. Denote the generating function of $\{e_r^{(k)}\}$ by $E^{(k)}(t)$. Then
$$E^{(k)}(t) = \sum_{r=0}^{n-1} e_r^{(k)}\, t^r = \prod_{i\ne k} (1 + x_i t) = \frac{E(t)}{1 + x_k t},$$
which can be viewed as removing the option "select $x_k$" in the construction of the elementary symmetric functions $e_r$. Hence from $H(t)\, E(-t) = 1$, one can derive
$$H(t)\, E^{(k)}(-t) = \frac{1}{1 - x_k t} = 1 + x_k t + x_k^2 t^2 + \cdots .$$
Expanding both sides and comparing the coefficients of $t^{\alpha_i}$, observing that $h_r$ and $e_r^{(k)}$ are both homogeneous of degree r, we have
$$\sum_{j=1}^{n} h_{\alpha_i - n + j} \cdot (-1)^{n-j}\, e_{n-j}^{(k)} = x_k^{\alpha_i},$$
which directly leads to $H_\alpha M = A_\alpha$. $\square$

Proof of Theorem 2.1.3. By formula (1.3.9), it suffices to prove one of the two identities; here we prove (2.1.1). Taking determinants on both sides of $A_\alpha = H_\alpha M$, we have
$$a_\alpha = \det(A_\alpha) = \det(H_\alpha)\, \det(M)$$
for any integer sequence $\alpha = (\alpha_1, \dots, \alpha_n) \in \mathbb{N}^n$. In particular, when $\alpha = \delta$, $H_\delta = (h_{j-i})_{1\le i,j\le n}$ is an upper triangular matrix with diagonal entries equal to 1, since $h_r = 0$ for $r < 0$ by definition. Therefore $\det(H_\delta) = 1$, and
$$a_\delta = \det(M).$$


For $\alpha$ with $\alpha_1 > \dots > \alpha_n \ge 0$, write $\alpha = \lambda + \delta$, where $\lambda$ is a partition of length no greater than n; we arrive at
$$s_\lambda = \frac{a_{\lambda+\delta}}{a_\delta} = \frac{a_\alpha}{a_\delta} = \frac{\det(H_\alpha)\, \det(M)}{\det(M)} = \det(H_\alpha). \qquad\square$$

Remark 2.1.5. From the Jacobi-Trudi identities and the fact that the involution $\omega$ interchanges $e_n$ with $h_n$, we have
$$\omega(s_\lambda) = \det\left(\omega(h_{\lambda_i - i + j})\right) = \det\left(e_{\lambda_i - i + j}\right) = \det\left(e_{(\lambda')'_i - i + j}\right) = s_{\lambda'}.$$
So far we have a relatively clear picture of the involution $\omega$ on the various bases of $\Lambda$:
$$(2.1.3)\qquad e_\lambda \mapsto h_\lambda, \qquad h_\lambda \mapsto e_\lambda, \qquad p_\lambda \mapsto (-1)^{|\lambda|-l(\lambda)}\, p_\lambda, \qquad s_\lambda \mapsto s_{\lambda'}.$$

Two things should be noticed about the table above: first, the third formula can be derived from Proposition 1.3.21, namely $\omega(p_n) = (-1)^{n-1} p_n$, using that $\omega$ is a ring homomorphism; second, we have not yet proved that the Schur functions form a basis of $\Lambda$, but they do, and we postpone the proof to a later point.

Remark 2.1.6. Considering the extreme partitions $(n)$ and $(1^n)$ in the Jacobi-Trudi identities, one easily obtains
$$s_{(n)} = h_n, \qquad s_{(1^n)} = e_n.$$

Now we are ready to derive the raising operator formula for Schur functions, which is often used to define Schur functions in contexts focusing more on algebraic structures.

Definition 2.1.7. Given any integer sequence $a = (a_1, \dots, a_n) \in \mathbb{Z}^n$ and any pair of integers $i, j$ with $1 \le i < j \le n$, define $R_{ij} : \mathbb{Z}^n \to \mathbb{Z}^n$ by
$$R_{ij}(a) = (a_1, \dots, a_{i-1}, a_i + 1, a_{i+1}, \dots, a_{j-1}, a_j - 1, a_{j+1}, \dots, a_n).$$
Any product $R = \prod_{i<j} R_{ij}^{r_{ij}}$ is called a raising operator.

Note that different $R_{ij}$'s commute, so the order of the factors forming a raising operator does not matter. Also notice that, when applied to partitions, a raising operator does "raise" partitions with respect to the lexicographic order: $R\lambda \ge \lambda$.

Let raising operators act on $h_\lambda$ through the subscripts, i.e.
$$R_{ij}\, h_\lambda = h_{R_{ij}\lambda}.$$
We are now able to express the Schur function as a polynomial in complete homogeneous symmetric functions using raising operators. It may be viewed as another way to express the Jacobi-Trudi identity (2.1.1).

Proposition 2.1.8. Let $\lambda$ be a partition. Then
$$(2.1.4)\qquad s_\lambda = \prod_{1\le i<j\le l(\lambda)} (1 - R_{ij})\, h_\lambda.$$

Proof. In the ring of polynomials containing formal inverses of the variables, $\mathbb{Z}\left[x_1^{\pm 1}, \dots, x_n^{\pm 1}\right]$, we have
$$\prod_{1\le i<j\le n} (1 - R_{ij})\, x^\lambda = \prod_{1\le i<j\le n} \left(1 - x_i x_j^{-1}\right) \cdot x^\lambda = x^\lambda \cdot x_1^{n-1} x_2^{n-2} \cdots x_{n-1}^1 \cdot \prod_{1\le i<j\le n} \left(x_i^{-1} - x_j^{-1}\right) = x^{\lambda+\delta}\, a_{-\delta} = \sum_{\pi \in S_n} \operatorname{sgn}(\pi)\, x^{\lambda+\delta-\pi(\delta)},$$

where R acts on $x^\lambda$ through the exponents, $R x^\lambda = x^{R\lambda}$. Define a $\mathbb{Z}$-linear map
$$\varphi : \mathbb{Z}\left[x_1^{\pm 1}, \dots, x_n^{\pm 1}\right] \to \Lambda_n, \qquad x^\alpha \mapsto h_\alpha,$$
for every integer sequence $\alpha \in \mathbb{Z}^n$. The map $\varphi$ is well-defined since $\varphi\left(\mathbb{Z}\left[x_1^{\pm 1}, \dots, x_n^{\pm 1}\right]\right) \subseteq \Lambda_n = \mathbb{Z}[h_1, \dots, h_n]$, where $h_k = 0$ for $k < 0$ and $h_0 = 1$. Let us check how $\varphi$ interacts with a raising operator $R_{ij}$:
$$\varphi\left(R_{ij}\, x^\alpha\right) = \varphi\left(x^{R_{ij}\alpha}\right) = h_{R_{ij}\alpha} = R_{ij}\, h_\alpha = R_{ij}\, \varphi(x^\alpha).$$
Hence the identity above also holds in the $h_\alpha$'s instead of the $x^\alpha$'s. Indeed,
$$\prod_{1\le i<j\le l(\lambda)} (1 - R_{ij})\, h_\lambda = \varphi\left(\prod_{1\le i<j\le l(\lambda)} (1 - R_{ij})\, x^\lambda\right) = \varphi\left(\sum_{\pi\in S_n} \operatorname{sgn}(\pi)\, x^{\lambda+\delta-\pi(\delta)}\right) = \sum_{\pi\in S_n} \operatorname{sgn}(\pi)\, h_{\lambda+\delta-\pi(\delta)} = \det\left(h_{\lambda_i-i+j}\right) = s_\lambda. \qquad\square$$

Note that the proof doesn’t exploit the condition of λ being a partition, so the formula actually holds for integer sequences.

Example 2.1.9. For $\lambda = (4, 3, 1)$, we have
$$s_{(4,3,1)} = (1 - R_{12})(1 - R_{13})(1 - R_{23})\, h_{(4,3,1)} = \left(1 - R_{12} - R_{13} - R_{23} + R_{12}R_{13} + R_{12}R_{23} + R_{13}R_{23} - R_{12}R_{13}R_{23}\right) h_{(4,3,1)}$$
$$= h_{(4,3,1)} - h_{(5,2,1)} - h_{(5,3,0)} - h_{(4,4,0)} + h_{(6,2,0)} + h_{(5,3,0)} + h_{(5,4,-1)} - h_{(6,3,-1)}$$
$$= h_4 h_3 h_1 - h_5 h_2 h_1 - h_5 h_3 - h_4 h_4 + h_6 h_2 + h_5 h_3 + 0 - 0 = \begin{vmatrix} h_4 & h_5 & h_6 \\ h_2 & h_3 & h_4 \\ 0 & 1 & h_1 \end{vmatrix}.$$
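The computation in Example 2.1.9 can be reproduced mechanically: expanding $\prod_{i<j}(1 - R_{ij})$ amounts to summing over subsets of the operators with alternating signs. A sketch (helper names ours) that checks the result against the Jacobi-Trudi determinant at rational sample points:

```python
from fractions import Fraction
from itertools import combinations, combinations_with_replacement, permutations
from math import prod

def h(n, xs):
    if n < 0:
        return Fraction(0)                  # h_k = 0 for k < 0
    return sum((prod(c) for c in combinations_with_replacement(xs, n)), Fraction(0))

def h_seq(alpha, xs):                       # h_alpha = prod_i h_{alpha_i}
    return prod(h(a, xs) for a in alpha)

def det(M):
    n = len(M)
    sgn = lambda pi: (-1) ** sum(pi[i] > pi[j] for i in range(n) for j in range(i + 1, n))
    return sum(sgn(pi) * prod(M[i][pi[i]] for i in range(n)) for pi in permutations(range(n)))

xs = [Fraction(1, 2), Fraction(2, 3), Fraction(1, 7), Fraction(3, 5)]
lam = (4, 3, 1)
pairs = [(i, j) for i in range(3) for j in range(i + 1, 3)]   # R_12, R_13, R_23 (0-based)

# expand prod_{i<j} (1 - R_ij) h_lam: each subset S contributes (-1)^{|S|} h_{R_S lam}
total = Fraction(0)
for k in range(len(pairs) + 1):
    for S in combinations(pairs, k):
        alpha = list(lam)
        for i, j in S:                      # apply each chosen raising operator once
            alpha[i] += 1
            alpha[j] -= 1
        total += (-1) ** k * h_seq(alpha, xs)

jt = det([[h(lam[i] - i + j, xs) for j in range(3)] for i in range(3)])
assert total == jt
```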

If we treat the raising operator formula (2.1.4) as the definition of Schur functions, then they are in fact well-defined not only for partitions but for all integer sequences (with only finitely many nonzero components):
$$s_\alpha := \prod_{1\le i<j\le l(\alpha)} (1 - R_{ij})\, h_\alpha,$$
with $h_\alpha$ generalized naturally. Since the proof of (2.1.4) remains valid for integer sequences, the Jacobi-Trudi identities also generalize to integer sequences:
$$s_\alpha = \det\left(h_{\alpha_i - i + j}\right)_{1\le i,j\le n}.$$


Lemma 2.1.10. Let $\lambda, \mu$ be partitions with $|\lambda| - |\mu| = p \ge 0$ and let $\alpha$ run over the compositions of p. Then we have
$$(2.1.5)\qquad \sum_{\alpha} s_{\mu+\alpha} = \sum_{\mu \xrightarrow{p} \lambda} s_\lambda$$
and
$$(2.1.6)\qquad \sum_{\alpha} s_{\lambda-\alpha} = \sum_{\mu \xrightarrow{p} \lambda} s_\mu,$$
where $\mu \xrightarrow{p} \lambda$ means that $\lambda$ is a Young diagram obtained from $\mu$ by adding p boxes in such a way that no two boxes are added in the same column. Moreover, equation (2.1.5) remains true when the sums on both sides are restricted to the compositions $\{\alpha \mid l(\alpha) \le n\}$ and partitions $\{\lambda \mid l(\lambda) \le n\}$ for any $n \ge l(\mu)$; the same applies to equation (2.1.6).

Note that the sum on the left hand side contains more summands than that on the right, as each summand of the right sum also occurs on the left. The equation states that only those forming a particular set of new partitions survive; the only possible explanation is that all those not belonging to this set cancel each other. This is in fact the idea of the proof.

Proof. Only equation (2.1.5) is proved here. Given $\mu$ as in the statement, consider the following set of integer sequences:
$$S = \{\beta \mid |\beta| = |\mu| + p, \text{ such that } \mu_i < \beta_{i+1} \text{ for some } i \ge 1\}.$$
Observe that S contains exactly those integer sequences that do not appear in the right hand sum of (2.1.5), since a violation of the condition $\mu_i \ge \beta_{i+1}$ immediately implies that two boxes were added in the same column, in the i-th and (i+1)-th rows. After this, we define a map that connects the pairs to be cancelled:
$$\iota : S \to S, \qquad \nu = (\nu_1, \dots, \nu_k, \nu_{k+1}, \dots) \mapsto (\nu_1, \dots, \nu_{k+1} - 1, \nu_k + 1, \dots).$$
It is obvious that $\iota$ is an involution. With the help of the Jacobi-Trudi identity for integer sequences, we have
$$s_\nu + s_{\iota(\nu)} = 0,$$
which follows from the alternating property of determinants:
$$\det\left(h_{\nu_i - i + j}\right) = -\det\left(h_{\iota(\nu)_i - i + j}\right),$$
since the matrix $\left(h_{\iota(\nu)_i - i + j}\right)$ is simply obtained from the matrix $\left(h_{\nu_i - i + j}\right)$ by interchanging the k-th and (k+1)-th rows. $\square$

Theorem 2.1.11 (Pieri Rule). Let $\lambda, \mu$ be partitions with $|\lambda| - |\mu| = p \ge 0$. Then
$$(2.1.7)\qquad h_p\, s_\mu = \sum_{\mu \xrightarrow{p} \lambda} s_\lambda.$$

Proof. Let $l = l(\mu)$. We have
$$h_p\, s_\mu = h_p \cdot \prod_{1\le i<j\le l} (1 - R_{ij})\, h_\mu = \prod_{1\le i<j\le l} (1 - R_{ij})\, h_{(\mu,p)} = \prod_{1\le i<j\le l+1} (1 - R_{ij}) \prod_{k=1}^{l} (1 - R_{k,l+1})^{-1}\, h_{(\mu,p)}$$
$$= \prod_{1\le i<j\le l+1} (1 - R_{ij}) \prod_{k=1}^{l} \left(1 + R_{k,l+1} + R_{k,l+1}^2 + \cdots\right) h_{(\mu,p)} = \sum_{|\alpha|=p} s_{\mu+\alpha}.$$
The last equality holds since, for any $m \ne p$, the terms $R_{k,l+1}^{m}\, h_{(\mu,p)}$ contribute nothing. Indeed, for $m < p$, the factors $(1 - R_{1,l+1})\cdots(1 - R_{l,l+1})$ contained in the first product eliminate $R_{k,l+1}^{m}\, h_{(\mu,p)}$; for $m > p$, $R_{k,l+1}^{m}\, h_{(\mu,p)} = h_{(\beta,q)} = 0$ because $q < 0$. Now, by (2.1.5),
$$h_p\, s_\mu = \sum_{|\alpha|=p} s_{\mu+\alpha} = \sum_{\mu \xrightarrow{p} \lambda} s_\lambda. \qquad\square$$
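A numerical spot check of the Pieri rule, with the expansions for $\mu = (2, 1)$ and $p = 1, 2$ written out by hand (helper names ours):

```python
from fractions import Fraction
from itertools import combinations_with_replacement, permutations
from math import prod

def h(n, xs):
    if n < 0:
        return Fraction(0)
    return sum((prod(c) for c in combinations_with_replacement(xs, n)), Fraction(0))

def det(M):
    n = len(M)
    sgn = lambda pi: (-1) ** sum(pi[i] > pi[j] for i in range(n) for j in range(i + 1, n))
    return sum(sgn(pi) * prod(M[i][pi[i]] for i in range(n)) for pi in permutations(range(n)))

def schur(lam, xs):
    """Bialternant a_{lam+delta} / a_delta."""
    n = len(xs)
    full = tuple(lam) + (0,) * (n - len(lam))
    delta = list(range(n - 1, -1, -1))
    a = lambda alpha: det([[x ** alpha[j] for j in range(n)] for x in xs])
    return a([f + d for f, d in zip(full, delta)]) / a(delta)

xs = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 5), Fraction(1, 7)]
mu = (2, 1)

# p = 1: a single box may be added at the end of any row (or as a new row)
assert h(1, xs) * schur(mu, xs) == sum(schur(l, xs) for l in [(3, 1), (2, 2), (2, 1, 1)])
# p = 2: the two added boxes may not share a column, which rules out (2, 1, 1, 1)
assert h(2, xs) * schur(mu, xs) == sum(schur(l, xs) for l in [(4, 1), (3, 2), (3, 1, 1), (2, 2, 1)])
```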

Remark 2.1.12. In the representation theory of symmetric groups, the special case p = 1 of the Pieri rule corresponds to Young's branching rule
$$S^\mu \uparrow^{S_{n+1}} \cong \bigoplus_{\mu \xrightarrow{1} \lambda} S^\lambda,$$
where $\mu \vdash n$ and $S^\mu$ is the Specht module of the symmetric group $S_n$. This correspondence will occur in the next chapter with more details and more parallels.

As promised, we now prove the tableau formula, which is frequently used as the initial definition of Schur functions, for example in [EGG], [SAG].

Theorem 2.1.13. Let $\lambda$ be a partition. Then
$$s_\lambda = \sum_{T \in SSYT(\lambda)} x^{cont(T)}.$$

Example 2.1.14. Let $\lambda = (2, 1^2)$. Then the corresponding Schur function $s_\lambda$ is obtained by summing $x^{cont(T)}$ over the semistandard Young tableaux T of shape $(2, 1, 1)$.

Proof of Theorem 2.1.13. Let Y be another sequence of commuting variables $y_1, y_2, \dots$, and let $(X_n, Y)$ denote the variables $x_1, \dots, x_n, y_1, y_2, \dots$. Consider the generating function in this sequence of variables and extract the part involving $x_n$:
$$\sum_{r=0}^{\infty} h_r(X_n, Y)\, t^r = \prod_{i=1}^{n} (1 - x_i t)^{-1} \cdot \prod_{j=1}^{\infty} (1 - y_j t)^{-1} = (1 - x_n t)^{-1} \cdot \left(\prod_{i=1}^{n-1} (1 - x_i t)^{-1} \cdot \prod_{j=1}^{\infty} (1 - y_j t)^{-1}\right).$$
Note that in the last expression the first factor is the generating function for the $h_i(x_n)$ and the second is the one for the $h_i(X_{n-1}, Y)$. Comparing the coefficients of the term $t^p$, we have
$$h_p(X_n, Y) = \sum_{i=0}^{p} h_i(x_n)\, h_{p-i}(X_{n-1}, Y).$$
For any integer sequence $\beta$, we therefore have
$$h_\beta(X_n, Y) = \sum_{\alpha\ge 0} h_\alpha(x_n)\, h_{\beta-\alpha}(X_{n-1}, Y) = \sum_{\alpha\ge 0} x_n^{|\alpha|}\, h_{\beta-\alpha}(X_{n-1}, Y),$$
where $\beta - \alpha$ is defined component-wise. Replacing $\beta$ by $R\lambda$ on both sides, where $R = \prod_{i<j} (1 - R_{ij})$ and $\lambda$ is a partition, we get
$$h_{R\lambda}(X_n, Y) = \sum_{\alpha\ge 0} x_n^{|\alpha|}\, h_{R\lambda-\alpha}(X_{n-1}, Y), \qquad R\, h_\lambda(X_n, Y) = \sum_{\alpha\ge 0} x_n^{|\alpha|}\, R\, h_{\lambda-\alpha}(X_{n-1}, Y),$$
which further leads to
$$s_\lambda(X_n, Y) = \sum_{\alpha\ge 0} x_n^{|\alpha|}\, s_{\lambda-\alpha}(X_{n-1}, Y) = \sum_{r_n=0}^{\infty} x_n^{r_n} \sum_{|\alpha|=r_n} s_{\lambda-\alpha}(X_{n-1}, Y).$$
By (2.1.6), we arrive at
$$s_\lambda(X_n, Y) = \sum_{r_n=0}^{\infty} x_n^{r_n} \sum_{\mu \xrightarrow{r_n} \lambda} s_\mu(X_{n-1}, Y),$$
which is usually referred to as the reduction formula. Note that this formula can be used recursively; another application extracts $x_{n-1}$, and so on.

After n steps, all the x-variables $x_1, \dots, x_n$ are extracted out, and we have
$$s_\lambda(X_n, Y) = \sum_{r_n=0}^{\infty} x_n^{r_n} \sum_{r_{n-1}=0}^{\infty} x_{n-1}^{r_{n-1}} \cdots \sum_{r_1=0}^{\infty} x_1^{r_1} \sum_{\eta \xrightarrow{r_1} \cdots\, \nu \xrightarrow{r_{n-1}} \mu \xrightarrow{r_n} \lambda} s_\eta(Y).$$
The equation above can be understood in two parts. First, the sum of Schur functions
$$\sum_{\eta \xrightarrow{r_1} \cdots\, \nu \xrightarrow{r_{n-1}} \mu \xrightarrow{r_n} \lambda} s_\eta(Y)$$
describes the processes of obtaining the Young diagram $\eta$ from the Young diagram $\lambda$ (reversing all arrows) by removing $r_{n-i+1}$ boxes at the i-th step, in such a way that no two (or more) boxes in the same column are removed in one and the same step. Observe that there can be more than one "path" from a given "starting" Young diagram $\lambda$ to a "destination" Young diagram $\eta$. Secondly, the n sums over the x's,
$$\sum_{r_n=0}^{\infty} x_n^{r_n} \sum_{r_{n-1}=0}^{\infty} x_{n-1}^{r_{n-1}} \cdots \sum_{r_1=0}^{\infty} x_1^{r_1},$$
help to distinguish these paths. For instance, for $\lambda = (4, 3, 1)$ and $\eta = (2, 1)$ there exist the paths (among others):
$$(2, 1) \xrightarrow{1} (4, 1) \xrightarrow{2} (4, 3, 1), \qquad (2, 1) \xrightarrow{1} (3, 2) \xrightarrow{2} (4, 3, 1), \qquad (2, 1) \xrightarrow{1} (3, 2) \xrightarrow{2} (4, 3) \xrightarrow{3} (4, 3, 1).$$

As a crucial step of the proof, we claim that, with $\lambda$ and $\eta$ fixed as starting and destination diagrams, there is a bijection between
$$\left\{\text{paths } \eta \xrightarrow{r_1} \cdots\, \nu \xrightarrow{r_{n-1}} \mu \xrightarrow{r_n} \lambda \text{ such that no two boxes in the same column are removed in one step}\right\}$$
and
$$\{T \in SSYT(\lambda/\eta) \mid \max(T_{i,j}) \le n\}.$$
The second set is the set of semistandard Young tableaux of shape $\lambda/\eta$ with entries no greater than n; denote it by $SSYT_n(\lambda/\eta)$. The bijection is realized through the algorithm that puts $n - k + 1$ in the boxes removed at the k-th step. The condition "no two boxes in the same column are removed in one step" then corresponds exactly to the strictly increasing requirement in the columns of a SSYT.

For illustration, in the instance above with $\lambda = (4, 3, 1)$ and $\eta = (2, 1)$, the third path corresponds to the following element of $SSYT_n(\lambda/\eta)$:
$$(2, 1) \xrightarrow{1} (3, 2) \xrightarrow{2} (4, 3) \xrightarrow{3} (4, 3, 1) \quad\longleftrightarrow\quad \begin{array}{cccc} \cdot & \cdot & 1 & 2 \\ \cdot & 1 & 2 & \\ 3 & & & \end{array}$$
where dots mark the cells of $\eta$.

Proceeding with the equation, we get
$$(2.1.8)\qquad s_\lambda(X_n, Y) = \sum_{r_n=0}^{\infty} x_n^{r_n} \cdots \sum_{r_1=0}^{\infty} x_1^{r_1} \sum_{\eta \xrightarrow{r_1} \cdots\, \mu \xrightarrow{r_n} \lambda} s_\eta(Y) = \sum_{\eta\subseteq\lambda} \left(\sum_{T \in SSYT_n(\lambda/\eta)} x^{cont(T)}\right) s_\eta(Y),$$
where the outer sum is taken over partitions $\eta$. For a given $\eta$, the coefficient of $s_\eta(Y)$, as a polynomial in the x's, has exactly
$$|\{T \in SSYT(\lambda/\eta) \mid \max(T_{i,j}) \le n\}|$$
terms. Since (2.1.8) holds for arbitrary n, we can rewrite it with $X = (x_1, x_2, \dots)$ instead:
$$s_\lambda(X, Y) = \sum_{\eta\subseteq\lambda} \left(\sum_{T \in SSYT(\lambda/\eta)} x^{cont(T)}\right) s_\eta(Y).$$
Note that, correspondingly, the subscript of $SSYT(\lambda/\eta)$ is removed. Now, substituting $Y = 0$, namely $0 = y_1 = y_2 = \cdots$, completes the proof of the tableau formula:
$$s_\lambda = s_\lambda(X) = \sum_{T \in SSYT(\lambda)} x^{cont(T)}. \qquad\square$$
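The tableau formula can be tested directly for small shapes: enumerate the semistandard tableaux with entries bounded by the number of variables and compare the resulting sum with the bialternant in that many variables. A sketch (helper names ours):

```python
from fractions import Fraction
from itertools import permutations
from math import prod

def ssyt(shape, maxval):
    """Yield the semistandard Young tableaux of the given shape with entries in 1..maxval."""
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    def fill(idx, T):
        if idx == len(cells):
            yield dict(T)
            return
        r, c = cells[idx]
        lo = 1
        if c > 0:
            lo = max(lo, T[(r, c - 1)])        # rows weakly increase
        if r > 0:
            lo = max(lo, T[(r - 1, c)] + 1)    # columns strictly increase
        for v in range(lo, maxval + 1):
            T[(r, c)] = v
            yield from fill(idx + 1, T)
        T.pop((r, c), None)
    yield from fill(0, {})

def det(M):
    n = len(M)
    sgn = lambda pi: (-1) ** sum(pi[i] > pi[j] for i in range(n) for j in range(i + 1, n))
    return sum(sgn(pi) * prod(M[i][pi[i]] for i in range(n)) for pi in permutations(range(n)))

def schur(lam, xs):
    """Bialternant a_{lam+delta} / a_delta."""
    n = len(xs)
    full = tuple(lam) + (0,) * (n - len(lam))
    delta = list(range(n - 1, -1, -1))
    a = lambda alpha: det([[x ** alpha[j] for j in range(n)] for x in xs])
    return a([f + d for f, d in zip(full, delta)]) / a(delta)

xs = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 5)]
for lam in [(2, 1), (3, 1), (2, 1, 1)]:
    tableau_sum = sum(
        (prod(xs[v - 1] for v in T.values()) for T in ssyt(lam, len(xs))), Fraction(0)
    )
    assert tableau_sum == schur(lam, xs)
```

Bounding the entries by the number of variables corresponds to setting $x_i = 0$ for larger i, under which both sides of the tableau formula specialize consistently.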

Definition 2.1.15. The Kostka numbers are
$$K_{\lambda,\mu} = |\{T \in SSYT(\lambda) \mid cont(T) = \mu\}|,$$
where $\lambda$ is a (skew) partition and $\mu$ is a composition.

The matrix formed by the Kostka numbers $\{K_{\lambda,\mu} \mid \lambda, \mu \text{ both partitions}\}$ is in fact the transition matrix $M(s, m)_{\lambda,\mu}$.

Proposition 2.1.16. Let $\lambda$ be a partition. Then
$$s_\lambda = \sum_{\mu} K_{\lambda,\mu}\, m_\mu,$$
where the sum is taken over partitions $\mu$.

Proof. From Theorem 2.1.13, the Schur function can be written as
$$s_\lambda = \sum_{\alpha} K_{\lambda,\alpha}\, x^\alpha,$$
where the sum is taken over all compositions $\alpha$, since the definition of the Kostka numbers exactly describes the multiplicity of $x^\alpha$ in $s_\lambda$. Then, from the symmetry of Schur functions, we first have
$$K_{\lambda,\alpha} = K_{\lambda,\tilde{\alpha}},$$
where $\tilde{\alpha}$ is the image of $\alpha$ under some permutation. That is to say, those terms $x^\alpha$ whose exponent sequences are identical up to permutation share the same coefficient $K_{\lambda,\alpha}$. Collecting these terms together, we have
$$s_\lambda = \sum_{\mu} K_{\lambda,\mu} \left(\sum_{x^\alpha \in S(x^\mu)} x^\alpha\right) = \sum_{\mu} K_{\lambda,\mu}\, m_\mu,$$
where $\mu$ is the unique representative whose exponent sequence forms a partition. $\square$

Similar to the transition matrices $M(e, m)$ between the $e_\lambda$ and $m_\mu$, we have the analogous results for Schur functions.

Proposition 2.1.17. Let $|\lambda| = |\mu|$. Then 1) if $\mu > \lambda$ then $K_{\lambda,\mu} = 0$; 2) $K_{\lambda,\lambda} = 1$.

Proof. 2) is obvious, as there is only one semistandard tableau of shape $\lambda$ with content $\lambda$, namely the one obtained by filling i into the i-th row: the number of i's equals $\lambda_i$, which also equals the number of boxes in the i-th row. This tableau is usually referred to as superstandard.

For 1), from the definition of the lexicographic order, there is an index i such that $\mu_k = \lambda_k$ for all $1 \le k < i$ and $\mu_i > \lambda_i$. For the first $i - 1$ rows, there is only one way to fill in numbers without violating semistandard-ness, namely putting k in all boxes of the k-th row. But $\mu_i > \lambda_i$ implies that there are more i's than boxes in the i-th row. Therefore there is no semistandard tableau of shape $\lambda$ with content $\mu$ such that $\mu > \lambda$. $\square$
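Kostka numbers are readily computed by brute-force enumeration of semistandard tableaux. The sketch below (helper names ours) confirms the two properties just proved, together with a few known values:

```python
def ssyt(shape, maxval):
    """Yield SSYT of the given shape, entries in 1..maxval, as dicts (row, col) -> entry."""
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    def fill(idx, T):
        if idx == len(cells):
            yield dict(T)
            return
        r, c = cells[idx]
        lo = 1
        if c > 0:
            lo = max(lo, T[(r, c - 1)])        # rows weakly increase
        if r > 0:
            lo = max(lo, T[(r - 1, c)] + 1)    # columns strictly increase
        for v in range(lo, maxval + 1):
            T[(r, c)] = v
            yield from fill(idx + 1, T)
        T.pop((r, c), None)
    yield from fill(0, {})

def kostka(lam, mu):
    """K_{lam,mu}: the number of SSYT of shape lam with content mu."""
    count = 0
    for T in ssyt(lam, len(mu)):
        content = [0] * len(mu)
        for v in T.values():
            content[v - 1] += 1
        if content == list(mu):
            count += 1
    return count

assert kostka((2, 1), (1, 1, 1)) == 2
assert kostka((3, 1), (2, 1, 1)) == 2
assert all(kostka(lam, lam) == 1 for lam in [(3,), (2, 1), (1, 1, 1)])
assert kostka((2, 1), (3,)) == 0     # mu > lam in lexicographic order forces K = 0
```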

Corollary 2.1.18. The set of Schur functions $\{s_\lambda \mid \lambda \vdash k\}$ forms a basis of $\Lambda^k$.

Proof. Analogous to the proof of Corollary 1.2.10. $\square$

2.2. Scalar product on Λ and more about Schur functions

In this section we equip $\Lambda$ with a scalar product and see how it interacts with the various bases. Then attention will be focused on skew Schur functions, which can be viewed as a natural generalization of Schur functions in the sense of the tableau formula. We will close this section with the well-known Littlewood-Richardson rule.

Proposition 2.2.1. Let $\mu, \lambda$ be partitions with $|\mu| = |\lambda|$. Then
$$h_\mu = \sum_{\lambda} K_{\lambda,\mu}\, s_\lambda.$$


Proof. Recall the Pieri rule (2.1.7),
$$h_p\, s_\nu = \sum_{\nu \xrightarrow{p} \lambda} s_\lambda,$$
where $|\lambda| - |\nu| = p \ge 0$. It is recursive in the following sense:
$$h_q h_p\, s_\nu = \sum_{\nu \xrightarrow{p} \lambda} h_q\, s_\lambda = \sum_{\nu \xrightarrow{p} \lambda} \sum_{\lambda \xrightarrow{q} \kappa} s_\kappa = \sum_{\nu \xrightarrow{p} \lambda \xrightarrow{q} \kappa} s_\kappa,$$
which is easily generalized to a composition $\alpha$:
$$h_\alpha\, s_\nu = \sum_{\nu \xrightarrow{\alpha_r} \lambda^1 \to \cdots \to \lambda^{r-1} \xrightarrow{\alpha_1} \lambda} s_\lambda,$$
where $r = l(\alpha)$ and $\nu = \lambda^0 \le \lambda^1 \le \dots \le \lambda^r = \lambda$ are ascending partitions with $|\lambda^{i+1}| - |\lambda^i| = \alpha_{r-i}$ for $0 \le i \le r - 1$. Note that the sum is taken over all paths $\nu \xrightarrow{\alpha_r} \lambda^1 \xrightarrow{\alpha_{r-1}} \cdots \xrightarrow{\alpha_2} \lambda^{r-1} \xrightarrow{\alpha_1} \lambda$ such that no two boxes are added in the same column in one step (here we follow the direction of the arrows, so the sequence is interpreted as transforming the diagram $\nu$ into the diagram $\lambda^1$ by adding $\alpha_r$ boxes, and so on, while in the proof of Theorem 2.1.13 we interpreted it backwards; the two readings are in obvious bijection). By the bijection in the proof of Theorem 2.1.13, we have
$$h_\alpha\, s_\nu = \sum_{\lambda \text{ s.t. } T \in SSYT(\lambda/\nu),\ cont(T)=\alpha} s_\lambda,$$
where the sum is taken over partitions $\lambda$. This equation says that the multiplicity of $s_\lambda$ in this sum equals the number of semistandard (skew) tableaux of shape $\lambda/\nu$ with content $\alpha$, which is simply the Kostka number $K_{\lambda/\nu,\alpha}$. Therefore we obtain
$$h_\alpha\, s_\nu = \sum_{\lambda} K_{\lambda/\nu,\alpha}\, s_\lambda.$$
Letting $\nu = 0$ and restricting $\alpha$ to be a partition $\mu$, we actually get the transition matrix $M(h, s)_{\mu,\lambda} = K_{\lambda,\mu}$:
$$h_\mu = \sum_{\lambda} K_{\lambda,\mu}\, s_\lambda. \qquad\square$$

Recall that $K = M(s, m)$ is a triangular matrix with unital diagonal entries; therefore K is invertible, and its inverse again has integer entries.

Proposition 2.2.2. Let $X = (x_1, x_2, \dots)$ and $Y = (y_1, y_2, \dots)$ be two sets of commuting variables. Then
$$(2.2.1)\qquad \sum_{\lambda} s_\lambda(X)\, s_\lambda(Y) = \sum_{\mu} m_\mu(X)\, h_\mu(Y).$$

Proof. Write $A = K^{-1}$ and $B = K^t$. Then
$$\sum_{\lambda} m_\lambda(X)\, h_\lambda(Y) = \sum_{\lambda} \left(\sum_{\mu} A_{\lambda,\mu}\, s_\mu(X)\right)\left(\sum_{\nu} B_{\lambda,\nu}\, s_\nu(Y)\right) = \sum_{\mu,\nu} \left(\sum_{\lambda} A^t_{\mu,\lambda}\, B_{\lambda,\nu}\right) s_\mu(X)\, s_\nu(Y) = \sum_{\mu,\nu} \delta_{\mu,\nu}\, s_\mu(X)\, s_\nu(Y) = \sum_{\mu} s_\mu(X)\, s_\mu(Y). \qquad\square$$

We herein state the Cauchy identity; the proof follows [MAC]§1.4.

Theorem 2.2.3. We have
$$(2.2.2)\qquad \sum_{\lambda} s_\lambda(X)\, s_\lambda(Y) = \prod_{i,j\ge 1} \frac{1}{1 - x_i y_j}.$$

Proof. From the generating function $H(t) = \prod_{i\ge 1} \frac{1}{1 - x_i t}$, we have
$$\prod_{i,j\ge 1} \frac{1}{1 - x_i y_j} = \prod_{j\ge 1} H(y_j) = \prod_{j\ge 1} \sum_{r\ge 0} h_r(X)\, y_j^r = \sum_{\alpha\ge 0} y^\alpha\, h_\alpha(X),$$
where $y^\alpha = y_1^{\alpha_1} y_2^{\alpha_2} \cdots$. Viewing the $y$'s as the coefficients of the $h_\alpha$'s, and noticing that $h_\alpha$ does not depend on the order of the parts of $\alpha$, we collect those $h_\alpha$ whose $\alpha$ are images of a unique partition $\lambda$ under some permutation and use $\lambda$ as the representative of each family of compositions:
$$\prod_{i,j\ge 1} \frac{1}{1 - x_i y_j} = \sum_{\lambda} \left(\sum_{y^\alpha \in S_\infty(y^\lambda)} y^\alpha\right) h_\lambda(X) = \sum_{\lambda} m_\lambda(Y)\, h_\lambda(X).$$
The last step follows from the definition of $m_\lambda$, since the inner sum is taken over all distinct images of $y^\lambda$ under permutations. Then (2.2.1) completes the proof. $\square$
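Since both sides of (2.2.1) are graded, the identity can be checked degree by degree with finitely many variables; partitions longer than the number of variables contribute zero to either side. A closing sketch (helper names ours):

```python
from fractions import Fraction
from itertools import combinations_with_replacement, permutations
from math import prod

def partitions(n, m=None):
    m = n if m is None else m
    if n == 0:
        yield ()
        return
    for k in range(min(n, m), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def det(M):
    n = len(M)
    sgn = lambda pi: (-1) ** sum(pi[i] > pi[j] for i in range(n) for j in range(i + 1, n))
    return sum(sgn(pi) * prod(M[i][pi[i]] for i in range(n)) for pi in permutations(range(n)))

def schur(lam, xs):
    """Bialternant a_{lam+delta} / a_delta."""
    n = len(xs)
    full = tuple(lam) + (0,) * (n - len(lam))
    delta = list(range(n - 1, -1, -1))
    a = lambda alpha: det([[x ** alpha[j] for j in range(n)] for x in xs])
    return a([f + d for f, d in zip(full, delta)]) / a(delta)

def h(n, xs):
    return sum((prod(c) for c in combinations_with_replacement(xs, n)), Fraction(0))

def m(mu, xs):
    expo = tuple(mu) + (0,) * (len(xs) - len(mu))
    return sum((prod(x ** a for x, a in zip(xs, perm)) for perm in set(permutations(expo))),
               Fraction(0))

xs = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 5)]
ys = [Fraction(1, 7), Fraction(2, 3), Fraction(1, 11)]

# degree-n piece of (2.2.1): sum over partitions of n of length at most len(xs)
for n in range(1, 5):
    parts = [lam for lam in partitions(n) if len(lam) <= len(xs)]
    lhs = sum(schur(lam, xs) * schur(lam, ys) for lam in parts)
    rhs = sum(m(mu, xs) * prod(h(k, ys) for k in mu) for mu in parts)
    assert lhs == rhs
```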

References
