
No. 1979-7 April 1979

SUBSET SELECTION BASED ON LIKELIHOOD FROM UNIFORM AND RELATED POPULATIONS

by

Jayanti Chotai

American Mathematical Society 1970 subject classification: Primary 62F07;

Secondary 62A10, 62F05.

Key words and phrases: Subset selection, likelihood ratio, order restrictions, uniform distribution.

ABSTRACT

Let π_i (i = 1, 2, ..., k), with k ≥ 2, be characterized by the uniform distribution on (a_i, b_i), where exactly one of a_i and b_i is unknown. With unequal sample sizes, suppose that we wish to select a random-size subset of the populations containing the one with the smallest value of θ_i = b_i - a_i. Rule R_1 selects π_i iff a likelihood-based k-dimensional confidence region for the unknown (θ_1, ..., θ_k) contains at least one point having θ_i as its smallest component. A second rule, R, is derived through a likelihood ratio and is equivalent to that of Barr and Rizvi (1966) when the sample sizes are equal. Numerical comparisons are made. The results apply to the larger class of densities g(z; θ_i) = M(z)Q(θ_i) iff a(θ_i) < z < b(θ_i). Extensions to the cases when both a_i and b_i are unknown and when θ_[k] is of interest are indicated.

1. INTRODUCTION

Let π_1, ..., π_k be k (≥ 2) given populations and assume that π_i (i = 1, ..., k) is characterized by a probability distribution depending on an unknown parameter θ_i. Let θ_[1] ≤ ... ≤ θ_[k] denote the ordered parameter values and let π_[1], ..., π_[k] denote the corresponding populations. We denote the k-dimensional parameter space for θ = (θ_1, ..., θ_k) by Ω. Based on independent samples from the populations, suppose that we are interested in selecting a random-size subset of the populations which hopefully contains the best population (which may be π_[1] or π_[k]).

This problem has been considered extensively in the literature; see Seal (1955) and Gupta (1965, 1977). A bibliography containing over six hundred items has been compiled by Kulldorff (1977). Most of the selection procedures which have appeared (when selecting for π_[k]) may be expressed as

R(h): Select π_i iff h(T_i) ≥ max_j T_j,

where h(x) is a suitable function and, for each i, T_i is a suitable estimate of θ_i; see Gupta and Panchapakesan (1972). By a correct selection (CS) is meant selection of a subset that contains the best population. Under the P*-approach considered in the above references, one requires

inf_{θ∈Ω} P(CS) ≥ P*,   (1.1)

where P* is a prespecified constant. Requirement (1.1) is known as the basic probability requirement or the P*-condition.

In this paper, we assume that π_i (i = 1, ..., k) is characterized by U(a_i, b_i), the uniform distribution on (a_i, b_i). Assume that for each i, one of a_i and b_i is known and that the other is unknown. Suppose that we take a random sample {Z_i1, ..., Z_in_i} of size n_i from π_i (i = 1, ..., k) and that the best population is the one with the smallest value of θ_i = b_i - a_i. Since we may consider Z_ij - a_i if a_i is known and b_i - Z_ij if b_i is known, we shall henceforth assume that π_i is characterized by U(0, θ_i) with θ_i > 0. In the derivation of the procedure of Section 2, we consider a likelihood-based confidence region for θ and select π_i iff the region contains at least one point θ having θ_i as its smallest component. Rule R, which reduces to that given by Barr and Rizvi (1966) when n_1 = ... = n_k, is derived through a likelihood ratio given in Section 3. For the case n_1 = ... = n_k, comparisons are made between R_1 and R for k = 3 in Section 4 and for k = 10 in Section 5. Section 6 contains some extensions and generalizations.

Although the uniform distribution is of interest as such, there are also other reasons why the results of this paper are of interest. Firstly, the tables and formulae given here in fact apply to a much larger class of distributions, as considered in Section 6.3. Secondly, the approach used here to derive selection procedures differs from the ones usually considered in the literature, where the "slippage configuration" plays an important part. For the normal means problem, a detailed study of the rules derived through the likelihood approach appears in Chotai (1978); its extension to cover an exponential class of distributions and other generalizations will be treated elsewhere. Thirdly, there has recently been a growing interest in formulating subset selection problems in terms of realistic loss functions rather than the P*-approach. Since Bayes procedures are often difficult to obtain explicitly, it is of interest to approximate them by simple but intuitively appealing selection procedures. From this point of view, the rules derived by the likelihood approach are natural competitors of R(h) above; see Section 4.2.

2. THE SELECTION PROCEDURE R_1

The likelihood of the total sample z = (z_11, ..., z_1n_1, ..., z_k1, ..., z_kn_k) is

L(z; θ) = Π_{i=1}^k θ_i^{-n_i}   if 0 < z_ij < θ_i

for j = 1, ..., n_i; i = 1, ..., k; and is zero otherwise. We denote the parameter space by

Ω = {(θ_1, ..., θ_k): θ_j > 0 for all j}.

Now for i = 1, ..., k, let

Ω_i = {θ ∈ Ω: θ_i = θ_[1]}.   (2.1)

In words, Ω_i is the subspace of Ω where the i:th component is the smallest. Now the maximum likelihood estimator of θ is given by θ^ = (Y_1, ..., Y_k), where Y_i denotes the maximum of the observations from π_i (i = 1, ..., k). Let c_1, 0 < c_1 < 1, be a given constant and let

Ω(c_1) = {θ ∈ Ω: L(z; θ) ≥ c_1 L(z; θ^)}.

Now consider the following selection procedure

R_1: Select π_i iff Ω(c_1) ∩ Ω_i is nonempty.

We thus include π_i in the selected subset iff a likelihood-based confidence region for the unknown θ contains at least one point having its i:th component as the smallest. This is equivalent to requiring that

sup_{θ∈Ω_i} L(z; θ) ≥ c_1 L(z; θ^).   (2.2)

Let θ* = (θ*_1, ..., θ*_k) denote the value of θ that gives the supremum in (2.2). Since θ_j ≥ θ_i for all j within Ω_i, it is easy to see that

θ*_j = Y_i if Y_j < Y_i,   θ*_j = Y_j otherwise.   (2.3)

Therefore, our rule may be expressed as

R_1: Select π_i iff Π_{j∈J_i} (Y_j/Y_i)^{n_j} ≥ c_1,

where J_i = {j: Y_j < Y_i}. It may be noted that the distribution of Y_j^{n_j} is U(0, θ_j^{n_j}) for each j.
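As an aside (not part of the original report), the rule in this final form is straightforward to apply numerically; the following plain-Python sketch computes the selected subset from the sample maxima y, the sample sizes n and the constant c_1, working on the log scale for numerical stability.

```python
import math

def select_R1(y, n, c1):
    """Rule R_1: select pi_i iff the product over J_i of (Y_j / Y_i)^{n_j}
    is at least c_1, where J_i = {j : Y_j < Y_i} and Y_i is the sample
    maximum from pi_i."""
    k = len(y)
    log_c1 = math.log(c1)
    selected = []
    for i in range(k):
        # An empty product (Y_i is the smallest maximum) gives 0 >= log c_1,
        # so that population is always selected.
        log_prod = sum(n[j] * (math.log(y[j]) - math.log(y[i]))
                       for j in range(k) if y[j] < y[i])
        if log_prod >= log_c1:
            selected.append(i)
    return selected
```

For instance, with maxima (0.2, 0.5, 0.9), equal sample sizes n_j = 5 and c_1 = 0.1, only the first population is selected, since (0.2/0.5)^5 ≈ 0.010 < 0.1.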

Given Y = (Y_1, ..., Y_k), let ψ_i(Y) take on the value one if π_i is selected and zero otherwise. Obviously R_1 is just; that is, for i = 1, ..., k, the function ψ_i is decreasing in y_i and increasing in each y_j, j ≠ i. Now for j = 1, ..., k, let p_j = P(π_j is included in the selected subset). It follows easily from Seal (1958, Theorem 4.1) that if n_1 = ... = n_k, then R_1 is monotone; that is, θ_i ≤ θ_j implies p_i ≥ p_j. We therefore have the following theorem.

Theorem 2.1 Procedure R_1 is just and scale invariant. Furthermore, R_1 is monotone if n_1 = ... = n_k.

It may be noted that, as shown in Gupta and Nagel (1971), it follows from the above theorem that P(CS), as a function of θ, attains its infimum at a point where θ_1 = ... = θ_k. Also, this infimum is independent of the common parameter value, which may therefore be set equal to unity. The following lemma simplifies our task of determining the c_1 required to satisfy the P*-condition.

Lemma 2.2 Assume that θ_1 = ... = θ_k = θ and that n_1 ≤ ... ≤ n_k. We have p_k ≤ ... ≤ p_1.

Proof We may set θ = 1. Choose i and ℓ with i < ℓ arbitrarily and keep them fixed for the rest of the proof. Let r = n_i/n_ℓ and consider the random variables Y'_1, ..., Y'_k defined by Y'_i = Y_i^r, Y'_ℓ = Y_ℓ^{1/r} and Y'_j = Y_j for the remaining j. Then the distribution of (Y'_1, ..., Y'_k) is the same as that obtained by interchanging Y_i and Y_ℓ in (Y_1, ..., Y_k). The lemma follows if we show that

P[Π_{j∈J_ℓ} (Y_j/Y_ℓ)^{n_j} ≥ c_1] ≤ P[Π_{j∈J'_ℓ} (Y'_j/Y'_ℓ)^{n'_j} ≥ c_1],   (2.4)

where J_ℓ = {j: Y_j < Y_ℓ}, J'_ℓ = {j: Y'_j < Y'_ℓ} and where (n'_1, ..., n'_k) is obtained by interchanging n_i and n_ℓ in (n_1, ..., n_k). Now, since we have assumed that n_i ≤ n_ℓ, we have r ≤ 1, so that Y'_i ≥ Y_i and Y'_ℓ ≤ Y_ℓ; hence J'_ℓ ⊆ J_ℓ, and it is easy to see that

(Y'_j/Y'_ℓ)^{n'_j} ≥ (Y_j/Y_ℓ)^{n_j}

for each j ∈ J'_ℓ. Since every factor of the two products lies between zero and one, it follows that

Π_{j∈J'_ℓ} (Y'_j/Y'_ℓ)^{n'_j} ≥ Π_{j∈J_ℓ} (Y_j/Y_ℓ)^{n_j},

and so (2.4), and consequently the lemma, follow.

The following theorem enables us to determine the required c_1-value.

Theorem 2.3 Let d_1 = -ln c_1 and n_1 ≤ ... ≤ n_k. For given P*, the value of d_1 required to satisfy the P*-condition is obtained by solving for d_1 the equation

P* = A + Σ_{m=1}^{k-1} G_m(d_1) B_m,   (2.5)

where

A = Σ_{m=0}^{k-1} (-1)^m Σ_{a∈S_m} (N_a + 1)^{-1},

B_m = Σ_{p=m}^{k-1} (p choose m) (-1)^{p-m} Σ_{a∈S_p} (N_a + 1)^{-1},

and where S_j denotes the set of all subsets of {1, 2, ..., k-1} having exactly j elements. Also, N_a = Σ_{j∈a} n_j/n_k, and G_m(·) denotes the cumulative distribution function of the standard gamma distribution with parameter m.

Proof By Theorem 2.1 and Lemma 2.2 it suffices to assume that θ_1 = ... = θ_k = 1 and n_1 ≤ ... ≤ n_k, and then calculate d_1 such that P* = p_k. But

p_k = P(Y_1 > Y_k, ..., Y_{k-1} > Y_k)
      + Σ_{m=1}^{k-1} Σ_{a∈S_m} P(Y_j < Y_k for j∈a, Y_j > Y_k for j∉a, Π_{j∈a} (Y_j/Y_k)^{n_j} ≥ c_1).

Now the random variables X_1, ..., X_k defined by X_j = -n_j ln Y_j are independent, each with the standard exponential distribution. With a_j = n_j/n_k, we obtain

p_k = E + Σ_{m=1}^{k-1} Σ_{a∈S_m} E(a),

where

E = P(X_1 < a_1 X_k, ..., X_{k-1} < a_{k-1} X_k)

and

E(a) = P(X_j > a_j X_k for j∈a, X_j < a_j X_k for j∉a, Σ_{j∈a} (X_j - a_j X_k) ≤ d_1).

Now

E = ∫_0^∞ Π_{j=1}^{k-1} (1 - e^{-a_j x}) e^{-x} dx,

which is equal to A. Also,

E(a) = ∫_0^∞ P[Σ_{j∈a} (X_j - a_j x) ≤ d_1, X_j > a_j x for j∈a] Π_{j∉a} (1 - e^{-a_j x}) e^{-x} dx.

Since the exponential distribution lacks memory, the above integral is equal to

G_m(d_1) ∫_0^∞ Π_{j∈a} e^{-a_j x} Π_{j∉a} (1 - e^{-a_j x}) e^{-x} dx.

Now expanding the second product in the integrand above, we obtain

Σ_{a∈S_m} E(a) = G_m(d_1) Σ_{p=m}^{k-1} (p choose m) (-1)^{p-m} Σ_{a∈S_p} (N_a + 1)^{-1} = G_m(d_1) B_m.

We have thus proved (2.5), which completes the proof.

When the sample sizes are all equal, (2.5) simplifies to

P* = 1/k + Σ_{m=1}^{k-1} G_m(d_1) Σ_{v=0}^{k-1-m} (k-1 choose v, m) (-1)^v (v + m + 1)^{-1},   (2.6)

where

(k-1 choose v, m) = (k-1)!/[v! m! (k-1-v-m)!].

For selected values of k and P*, Table I gives the value of d_1 satisfying (2.6).
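The entries of Table I can be reproduced numerically. The sketch below (plain Python, standard library only; not part of the original report) evaluates the right-hand side of (2.6), using for integer m the closed form G_m(d) = 1 - e^{-d} Σ_{i=0}^{m-1} d^i/i!, and, since the right-hand side is increasing in d_1, recovers d_1 by bisection.

```python
import math

def gamma_cdf(d, m):
    # Cdf of the standard gamma distribution with integer parameter m
    # (an Erlang distribution): G_m(d) = 1 - e^{-d} * sum_{i<m} d^i / i!.
    return 1.0 - math.exp(-d) * sum(d**i / math.factorial(i) for i in range(m))

def rhs_26(d, k):
    # Right-hand side of (2.6), the equal-sample-size case of Theorem 2.3.
    total = 1.0 / k
    for m in range(1, k):
        coeff = sum((-1)**v * math.factorial(k - 1)
                    / (math.factorial(v) * math.factorial(m)
                       * math.factorial(k - 1 - v - m))
                    / (v + m + 1)
                    for v in range(k - m))
        total += gamma_cdf(d, m) * coeff
    return total

def d1_for(k, p_star, lo=0.0, hi=50.0):
    # Bisection on d_1; rhs_26 is increasing in d.
    for _ in range(200):
        mid = (lo + hi) / 2
        if rhs_26(mid, k) < p_star:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, d1_for(2, 0.75) returns approximately 0.693 and d1_for(3, 0.90) approximately 2.765, in agreement with Table I.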

3. THE SELECTION PROCEDURE R

In this section we are concerned with the following selection procedure:

R: Select π_i iff sup_{θ∈Ω_i} L(z; θ) ≥ c sup_{θ∈Ω'_i} L(z; θ),

where Ω'_i = {θ ∈ Ω: θ_i = θ_[1] or θ_i = θ_[2]} and where Ω_i is given by (2.1).

TABLE I
Values of d_1 to Implement R_1 With Equal Sample Sizes

 k \ P*    0.75     0.90     0.95     0.975    0.99
  2        .693    1.609    2.302    2.983    3.901
  3       1.556    2.765    3.622    4.450    5.512
  4       2.344    3.795    4.789    5.732    6.926
  5       3.108    4.775    5.891    6.935    8.187
  6       3.860    5.725    6.945    8.071    9.457
  7       4.605    6.665    7.992    9.206   10.694
  8       5.347    7.595    9.023   10.320   11.902
  9       6.087    8.517   10.042   11.417   13.087
 10       6.825    9.433   11.051   12.502   14.255
 11       7.564   10.345   12.053   13.576   15.408
 12       8.302   11.253   13.049   14.642   16.551
 13       9.040   12.159   14.040   15.701   17.683
 14       9.779   13.063   15.026   16.754   18.808
 15      10.517   13.965   16.010   17.801   19.925
 16      11.256   14.865   16.990   18.844   21.035
 17      11.996   15.764   17.967   19.883   22.140
 18      12.735   16.663   18.942   20.918   23.240
 19      13.476   17.560   19.916   21.950   24.335
 20      14.216   18.456   20.887   22.980   25.426
 25      17.921   22.929   25.715   28.040   30.805

Intuitive justification for this approach is clear. Now the likelihood in Ω_i is maximized by θ* given by (2.3). It is also easy to see that the likelihood in Ω'_i is maximized by θ' given by

θ'_j = Y_j if (Y_j/Y_i)^{n_j} = min_{1≤r≤k} (Y_r/Y_i)^{n_r}, or if Y_j > Y_i, or if j = i,

and by θ'_j = Y_i otherwise. This leads us to express R as follows.

R: Select π_i iff min_{1≤j≤k} (Y_j/Y_i)^{n_j} ≥ c.

When all the sample sizes are equal, the procedure turns out to be the same as that proposed by Barr and Rizvi (1966), and is of the type R(h) given in Section 1.
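In this form R is also immediate to implement; the sketch below (plain Python, not part of the original report) mirrors the implementation of R_1. For equal sample sizes n the rule reduces to the Barr-Rizvi form Y_i ≤ c^{-1/n} min_j Y_j.

```python
import math

def select_R(y, n, c):
    """Rule R: select pi_i iff min over j of (Y_j / Y_i)^{n_j} >= c."""
    k = len(y)
    log_c = math.log(c)
    # Factors with Y_j >= Y_i are at least one, so only populations with
    # smaller maxima can prevent the selection of pi_i.
    return [i for i in range(k)
            if all(n[j] * (math.log(y[j]) - math.log(y[i])) >= log_c
                   for j in range(k))]
```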

By using arguments similar to those of Section 2, it can be shown that the results of Theorem 2.1 and Lemma 2.2 are also valid for R. The following theorem enables us to determine the c-value for R.

Theorem 3.1 Let d = -ln c and n_1 ≤ ... ≤ n_k. For given P*, the value of d required to satisfy the P*-condition is obtained by solving for d the equation

P* = Σ_{m=0}^{k-1} (-1)^m e^{-md} Σ_{a∈S_m} (N_a + 1)^{-1},   (3.1)

where S_m denotes the set of all subsets of {1, 2, ..., k-1} having exactly m elements and N_a = Σ_{j∈a} n_j/n_k.

Proof We may assume θ_1 = ... = θ_k = 1 and then set P* = p_k. By the transformation X_j = -n_j ln Y_j and with a_j = n_j/n_k, we have

p_k = P(X_j - a_j X_k ≤ d for j = 1, ..., k-1)
    = ∫_0^∞ Π_{j=1}^{k-1} (1 - e^{-a_j x - d}) e^{-x} dx,

which equals the right hand side of (3.1), thus proving the theorem.

For the case of equal sample sizes, (3.1) simplifies to

P* = [1 - (1 - c)^k]/ck.   (3.2)

Table II below gives the value of d = -ln c satisfying (3.2) for selected values of k and P*.
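Since the right-hand side of (3.2) is decreasing in c, the Table II entries can likewise be recovered by bisection; the following plain-Python sketch (not part of the original report) does so.

```python
import math

def p_star_32(c, k):
    # Right-hand side of (3.2): P* = [1 - (1 - c)^k] / (c * k).
    return (1.0 - (1.0 - c)**k) / (c * k)

def d_for(k, p_star):
    # p_star_32 decreases from 1 (c -> 0) to 1/k (c = 1), so bisect on c.
    lo, hi = 1e-12, 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if p_star_32(mid, k) > p_star:
            lo = mid
        else:
            hi = mid
    return -math.log((lo + hi) / 2)   # d = -ln c
```

For example, d_for(2, 0.75) gives approximately 0.6931 and d_for(10, 0.95) approximately 4.4694, matching Table II.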

TABLE II
Values of d to Implement R With Equal Sample Sizes

 k \ P*    0.75      0.90      0.95      0.975     0.99
  2        .6931    1.6094    2.3026    2.9957    3.9120
  3       1.2901    2.2674    2.9785    3.6803    4.6017
  4       1.6636    2.6612    3.3784    4.0831    5.0062
  5       1.9353    2.9430    3.6632    4.3694    5.2933
  6       2.1488    3.1627    3.8847    4.5917    5.5159
  7       2.3247    3.3426    4.0658    4.7733    5.6981
  8       2.4744    3.4951    4.2191    4.9271    5.8521
  9       2.6045    3.6274    4.3521    5.0604    5.9856
 10       2.7196    3.7442    4.4694    5.1779    6.1033
 11       2.8228    3.8488    4.5744    5.2831    6.2086
 12       2.9164    3.9435    4.6694    5.3783    6.3038
 13       3.0020    4.0299    4.7561    5.4652    6.3908
 14       3.0808    4.1095    4.8360    5.5451    6.4708
 15       3.1538    4.1833    4.9099    5.6191    6.5449
 16       3.2219    4.2519    4.9787    5.6880    6.6138
 17       3.2857    4.3162    5.0431    5.7525    6.6783
 18       3.3456    4.3765    5.1036    5.8131    6.7389
 19       3.4021    4.4335    5.1607    5.8702    6.7961
 20       3.4556    4.4873    5.2146    5.9242    6.8501
 25       3.6871    4.7202    5.4479    6.1576    7.0834
 30       3.8750    4.9089    5.6368    6.3466    7.2727
 35       4.0331    5.0676    5.7957    6.5057    7.4318
 40       4.1696    5.2045    5.9328    6.6428    7.5690
 45       4.2896    5.3250    6.0533    6.7634    7.6896
 50       4.3968    5.4324    6.1609    6.8710    7.7973

4. THE CASE OF THREE POPULATIONS AND COMMON SAMPLE SIZE

In this section, we investigate in detail the performances of R_1 and R for the case k = 3 and when a random sample of size n is taken from each population. For simplicity in notation we assume that θ_1 ≤ θ_2 ≤ θ_3. Since the rules are scale invariant, we may assume that θ_1 = δ_1^{1/n}, θ_2 = δ_2^{1/n} and θ_3 = 1 with δ_1 ≤ δ_2 ≤ 1.

In Section 4.1, we compare R_1 and R under the P*-approach. In Section 4.2, we assume a loss function. We then compare the procedures in terms of minimum expected loss for a given model when the c_1-value optimal for R_1 and the c-value optimal for R are used.

4.1 The P*-approach

The selected subset S would be one of the seven possible nonempty subsets of the three populations. We use the notation s_j, s_ij and s_123 to denote the probability that S = {π_j}, S = {π_i, π_j} and S = {π_1, π_2, π_3}, respectively. The expressions for these probabilities are derived in the Appendix (Section 7).

We begin the comparisons with the following theorem.

Theorem 4.1 For k = 3, we have P(CS|R_1) ≥ P(CS|R) for any parameter configuration and for any P*.

Proof Using the expressions given in the Appendix and the relation P(CS) = 1 - s_2 - s_3 - s_23, we obtain

3δ_2[P(CS|R_1) - 1]/δ_1² = c_1(ln c_1 + 1) - 3c_1(δ_2 + 1)/2δ_1,   (4.1)

3δ_2[P(CS|R) - 1]/δ_1² = c² - 3c(δ_2 + 1)/2δ_1.   (4.2)

Now the constants c_1 and c are obtained through

P* = 1 + c_1(ln c_1)/3 - 2c_1/3,
P* = 1 - c + c²/3,   (4.3)

which in turn imply

c² = c_1 ln c_1 - 2c_1 + 3c.   (4.4)

Substituting (4.4) into (4.2), we see by comparing (4.1) with (4.2) that the inequality P(CS|R_1) ≥ P(CS|R) is equivalent to

c_1(2δ_1 - δ_2 - 1) ≥ c(2δ_1 - δ_2 - 1),

which, since δ_1 ≤ δ_2 ≤ 1 implies 2δ_1 - δ_2 - 1 ≤ 0, is equivalent to c_1 ≤ c. It is a straightforward matter to verify c_1 ≤ c by examining (4.3), which proves the theorem.
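Theorem 4.1 is easy to check empirically by simulation (a sketch, not part of the paper; the configuration (δ_1, δ_2) = (0.3, 0.6) and the constants are illustrative). For equal sample sizes, U_i = Y_i^n is uniform on (0, δ_i), and both rules can be applied directly on that scale; c_1 ≈ 0.211 and c ≈ 0.275 calibrate both rules to P* = 0.75 via Tables I and II with k = 3.

```python
import random

def cs_rates(delta, c1, c, trials=200_000, seed=1):
    """Monte Carlo estimates of P(CS|R_1) and P(CS|R) for k = 3 and a
    common sample size; delta = (delta_1, delta_2, delta_3) with delta_1
    smallest, and u[i] = Y_i^n ~ U(0, delta_i)."""
    rng = random.Random(seed)
    cs1 = cs = 0
    for _ in range(trials):
        u = [rng.uniform(0.0, d) for d in delta]
        # Rule R_1 applied to the best population (index 0):
        prod = 1.0
        for j in (1, 2):
            if u[j] < u[0]:
                prod *= u[j] / u[0]
        cs1 += prod >= c1
        # Rule R applied to the best population:
        cs += min(u[1] / u[0], u[2] / u[0]) >= c
    return cs1 / trials, cs / trials
```

With these inputs the estimates come out near 0.91 for R_1 and 0.89 for R, illustrating P(CS|R_1) ≥ P(CS|R).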

With p_j denoting the probability of including π_j in the selected subset, we let E(|a'|) = p_2 + p_3 denote the expected number of nonbest populations selected, and P(CS) = p_1. Let Ψ = Σ_{j∈a} j/|a| denote the average rank of the selected set a of indices and E(Ψ) its expected value. For a good selection rule satisfying the P*-condition, we desire the value of P(CS) to be high and the values of E(|a'|) and E(Ψ) to be low for any given parameter configuration.

To make comparisons between R_1 and R under various configurations of the underlying parameters, the following three types of configurations of (δ_1, δ_2) will be considered (with δ < 1):

(A) δ_1 = δ, δ_2 = 1;
(B) δ_1 = δ², δ_2 = δ;
(C) δ_1 = δ_2 = δ.

Table III gives performance characteristics of the rules for each of (A), (B) and (C) with δ = j/40; for P* = 0.75 with j = 1, 4(5)39 and for P* = 0.95 with j = 1(2)9(5)19(10)39. It can be seen from the table that, in terms of E(|a'|) or E(Ψ), R_1 performs better than R for larger values of δ or P*, whereas the opposite holds for smaller values of δ. It can be observed that for (B) and (C), R_1 usually gives smaller p_3. This is also true for (A) when δ is large.

Table IV gives lower bounds to the value of δ for which R_1 performs better than R with regard to different criteria, configuration types and values of P*.

It may be remarked that if the experimenter employing the P*-approach is willing to rely on a probability model for (δ_1, δ_2), we may compare the rules by taking expectations (over the parameter space) of the criteria of importance.

TABLE III
Performance Characteristics of R_1 and R Under Configurations (A), (B) and (C)

[The numerical entries of this table are not legible in this copy.]

TABLE IV
Lower Bounds to the Value of δ for Which R_1 Performs Better Than R

configuration   criterion   P* = 0.75   0.90   0.95   0.99
(A)             E(|a'|)        .30      .14    .09    .04
                E(Ψ)           .45      .35    .29    .22
                p_3            .30      .14    .09    .04
(B)             E(|a'|)        .44      .25    .16    .05
                E(Ψ)           .46      .26    .16    .05
                p_3            .30      .14    .08    .01
(C)             E(|a'|)        .11      .01    .00    .00
                E(Ψ)           .00      .00    .00    .00
                p_3            .00      .00    .00    .00

4.2 Some Other Comparisons

Selection of a subset of k populations may be carried out by using an approach other than the P*-approach. One such example is the Bayesian approach. However, except for certain types of loss functions and priors, Bayesian procedures are complicated as regards derivation and application. For that reason, if simple procedures like R_1 and R are available that do almost as well as the Bayes procedure, they may be preferable. The problem then reduces to that of determining the constant for each of these simple rules that best approximates the Bayes procedure. However, it may be pointed out that determination of this optimal value would usually be a cumbersome task. For the normal means problem, comparisons between different rules in this respect appear in Chernoff and Yahav (1977), Chotai (1978) and Gupta and Hsu (1978).

For the present problem, it may be of interest to ask which one of R_1 and R performs better in the above sense. Generally, it is reasonable to expect the answer to depend on the sort of loss function and the prior distribution assumed.

To make a limited comparison between R_1 and R, we consider the following loss function:

L = |a'| + a · ICS(θ, a),

where |a'| denotes the number of nonbest populations selected, ICS(θ, a) equals zero if the best population is included in the selected subset and unity otherwise, and where a > 0 is a given constant. Note that for the loss function L_1 appearing in Gupta and Hsu (1978), we have L_1 = L + 1 - ICS(θ, a). Assume that (δ_1, δ_2) = (θ_1^n, θ_2^n) have the same joint distribution as (U_(1), U_(2)), where U_(1) ≤ U_(2) are the order statistics based on two independent random variables with the uniform distribution on the interval (0, 1). It may be pointed out that since the choice of this model is based on mathematical simplicity rather than on application considerations, the limitations of the present comparisons should be borne in mind.

The conditional expectation E(L) of L for given δ_1 and δ_2 is

E(L) = 2 - 2s_1 + (a - 1)s_2 + (a - 1)s_3 - s_12 - s_13 + a s_23.

For R_1, an explicit expression for E(L) for each of the cases c_1 ≤ δ_1δ_2, δ_1δ_2 ≤ c_1 ≤ δ_2 and δ_2 ≤ c_1 may be obtained through the expressions given in the Appendix. Similarly, for R we have the cases c ≤ δ_1, δ_1 ≤ c ≤ δ_2 and δ_2 ≤ c. The following expectations of E(L) with respect to (δ_1, δ_2) for R_1 and R, respectively, may be obtained by straightforward but lengthy computations:

ℓ(R_1) = 2 + [(37a - 179)/6 + (71 - 4a)(ln c_1)/3 - 7(ln c_1)² + (ln c_1)³] c_1/18,

ℓ(R) = 2 + [(5a - 31)/4 + (25 - 2a)c/9 + 4.5 ln c - 10c(ln c)/3 + c(ln c)²/3] c/3.

It turns out that the optimal value of c_1 or c is unity if a ≤ 37/29 = 1.28, in which case only one population is selected. For several values of a, Table V gives the d-values d_1 = -ln c_1 and d = -ln c, where c_1 and c are the optimal values. The table also gives the expected loss ℓ* and the value of P* = inf P(CS) attained when these optimal values are used.

TABLE V
Optimal d-Values, With Corresponding Expected Loss ℓ* and Attained P*
(within each row, the first triple refers to R_1 and the second to R)

   a      d_1      ℓ*      P*        d       ℓ*      P*
  1.28    .000    .781    .333     .000     .781    .333
  2.0     .400    .991    .464     .325     .984    .451
  3.0     .851   1.199    .594     .670    1.196    .575
  4.0    1.226   1.348    .684     .966    1.361    .668
  6.0    1.840   1.545    .797    1.493    1.599    .792
  8.0    2.344   1.668    .861    1.983    1.750    .869
 10.0    2.779   1.750    .901    2.461    1.846    .917
 15.0    3.686   1.866    .952    3.669    1.957    .975
 20.0    4.435   1.922    .975    4.937    1.989    .993
   ∞       ∞     2.0     1.0        ∞      2.0     1.0

Our study reveals that for the given model, R_1 performs better than R if a exceeds approximately 3.5.

5. THE CASE OF MANY POPULATIONS AND COMMON SAMPLE SIZE

When k is large, determination of the probabilities of selecting the various possible subsets becomes lengthy for arbitrary parameter configurations. Assuming a common sample size n, we therefore restrict our comparison between R_1 and R to the slippage configuration

θ_1 = δ^{1/n} < 1,   θ_2 = ... = θ_k = 1.

Numerical comparisons will be made for k = 10 under the P*-approach.

For this, let Y_1, ..., Y_k be independent random variables such that Y_1 has the uniform distribution on (0, δ), while each Y_j, j ≠ 1, is uniform on (0, 1). Now for R_1,

P(CS|R_1) = A + Σ_{m=1}^{k-1} (k-1 choose m) B_m,

where A = P(Y_2 > Y_1, ..., Y_k > Y_1) and

B_m = P(Π_{j=2}^{m+1} (Y_j/Y_1) ≥ c_1; Y_2 ≤ Y_1, ..., Y_{m+1} ≤ Y_1; Y_{m+2} > Y_1, ..., Y_k > Y_1).

Setting X_j = -ln Y_j, we obtain

A = P(X_2 < X_1, ..., X_k < X_1) = ∫_{-ln δ}^∞ (1 - e^{-x})^{k-1} e^{-x} δ^{-1} dx = [1 - (1 - δ)^k]/kδ.

Also, with d_1 = -ln c_1,

B_m = ∫_{-ln δ}^∞ P(Σ_{j=2}^{m+1} (X_j - x) ≤ d_1; X_2 > x, ..., X_{m+1} > x)
      · P(X_{m+2} < x, ..., X_k < x) e^{-x} δ^{-1} dx.

Since the exponential distribution lacks memory, we get

B_m = G_m(d_1) Σ_{v=0}^{k-1-m} (k-1-m choose v) (-1)^v δ^{v+m} (v + m + 1)^{-1},

where G_m(·) is the cumulative distribution function of the standard gamma distribution with parameter m.

As regards the probability p_k(R_1) of selecting each of the nonbest populations, we have

p_k(R_1) = A_1 + A_2 + Σ_{m=1}^{k-2} (k-2 choose m) C_m + Σ_{m=1}^{k-2} (k-2 choose m) D_m,

where

A_1 = P(X_1 < X_k, ..., X_{k-1} < X_k),

A_2 = P(X_1 - X_k ≤ d_1; X_1 > X_k; X_j < X_k for 2 ≤ j ≤ k-1),

C_m = P(Σ_{j=1}^{m+1} (X_j - X_k) ≤ d_1; X_j > X_k for 1 ≤ j ≤ m+1;
        X_j < X_k for m+2 ≤ j ≤ k-1),

D_m = P(Σ_{j=2}^{m+1} (X_j - X_k) ≤ d_1; X_j > X_k for 2 ≤ j ≤ m+1;
        X_1 < X_k; X_j < X_k for m+2 ≤ j ≤ k-1).

Using the property that the exponential distribution lacks memory, computing each of the above terms and collecting them, the expression for p_k(R_1) splits into the two cases δ ≤ c_1 and c_1 ≤ δ as follows:

p_k(R_1 | δ ≤ c_1) = (k-1)^{-1} + (δk(k-1))^{-1} c_1 [(1 - δ/c_1)^k - 1]
  + Σ_{m=1}^{k-2} Σ_{v=0}^{k-2-m} (k-2 choose v, m) (-1)^v (δ/c_1)^{v+m+1}
    · G_m(d_1(v+m+2)) (v+m+1)^{-1} (v+m+2)^{-m-1},

where

(k-2 choose v, m) = (k-2)!/[v! m! (k-2-v-m)!].

Also,

p_k(R_1 | c_1 ≤ δ) = (k-1)^{-1} - (δk(k-1))^{-1} c_1
  + Σ_{m=1}^{k-2} Σ_{v=0}^{k-2-m} (k-2 choose v, m) (-1)^v (δ/c_1)^{v+m+1}
    · {[G_m(d_1(v+m+2)) - G_m((d_1 + ln δ)(v+m+2))] (v+m+1)^{-1} (v+m+2)^{-m-1}
       + G_m(d_1 + ln δ)(v+m+1)^{-1} - (d_1 + ln δ)^m [m! δ(v+m+2)]^{-1}}.

As regards R,

P(CS|R) = [1 - (1 - δc)^k]/kδc,

p_k(R | δ ≤ c) = {1 - [1 - (1 - δ)^k]/kδ}[c(k-1)]^{-1},

p_k(R | c ≤ δ) = {kδ - 1 + (1 - c)^{k-1}[(k-1)c + 1 - kδ]}[δck(k-1)]^{-1}.

Using the above expressions with k = 10, we obtain Table VI, which gives these probabilities for the slippage configuration and for selected P*. The table indicates that unless δ is small, R_1 is preferable to R with respect to p_10. Also, P(CS|R_1) ≥ P(CS|R) seems to hold for all δ.
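The closed form for P(CS|R) in the slippage configuration, [1 - (1 - δc)^k]/(kδc), is easy to spot-check by simulation; the sketch below (not from the report; k = 10, δ = 0.5 and c = 0.1 are illustrative) compares a Monte Carlo estimate with the formula.

```python
import random

def p_cs_exact(k, delta, c):
    # Closed form: P(CS|R) = [1 - (1 - delta*c)^k] / (k*delta*c).
    return (1.0 - (1.0 - delta * c)**k) / (k * delta * c)

def p_cs_mc(k, delta, c, trials=200_000, seed=7):
    # Slippage configuration: Y_1 ~ U(0, delta), Y_j ~ U(0, 1) for j >= 2;
    # rule R selects pi_1 iff Y_j >= c * Y_1 for every j.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        y1 = rng.uniform(0.0, delta)
        if all(rng.random() >= c * y1 for _ in range(k - 1)):
            hits += 1
    return hits / trials
```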

TABLE VI
P(CS) and p_10 for R_1 and R Under the Slippage Configuration, k = 10
(for each j, the first line refers to R_1 and the second to R)

                 P* = 0.75         P* = 0.95
  j      δ      P(CS)   p_10      P(CS)   p_10
  1    .7500     .87     .73       .98     .94
                 .80     .74       .96     .95
  2    .5625     .93     .71       .99     .94
                 .85     .73       .97     .94
  3    .4219     .97     .69      1.00     .93
                 .88     .72       .98     .94
  5    .2373     .99     .64      1.00     .91
                 .93     .68       .99     .93
  7    .1335    1.00     .59      1.00     .89
                 .96     .60       .99     .92
  9    .0751    1.00     .54      1.00     .86
                 .98     .47      1.00     .88
 12    .0317    1.00     .46      1.00     .81
                 .99     .22      1.00     .79
 15    .0134    1.00     .39      1.00     .75
                1.00     .10      1.00     .55
 19    .0042    1.00     .29      1.00     .66
                1.00     .03      1.00     .18
 23    .0013    1.00     .20      1.00     .56
                1.00     .01      1.00     .06

6. EXTENSIONS AND GENERALIZATIONS

6.1 The Case when Both a_i and b_i are Unknown

If both the endpoints of the intervals are unknown, then the reasoning of the previous sections would yield the same rules with Y_i (i = 1, ..., k) replaced by W_i, the sample range from π_i. Theorem 2.1, Lemma 2.2 and the corresponding results for R would also hold for these rules. It may be noted that when the sample sizes are equal, the rule R for the present case reduces to that given by McDonald (1976). For the rule R_1 in the present case, determination of the constants required to satisfy the P*-condition would be difficult. In conclusion, it may be remarked that McDonald (1978) considers subset selection rules of type R based on quasi-ranges for the present problem.

6.2 Subset Selection for the Population with the Largest Parameter

If selection of the population with the largest parameter is of interest, then the approach of Section 2 used to derive R_1 leads to the following rule R':

    Select π_i iff (Y_i/Y_(k))^{n_i} > c, where Y_(1) ≤ Y_(2) ≤ ... ≤ Y_(k).

However, the approach of Section 3 leads to the unreasonable rule that (in the case of a common sample size n) selects only the population corresponding to Y_(k) if (Y_(k-1)/Y_(k))^n < c, and selects all the populations otherwise. It may be noted that the result of Theorem 2.1 can be shown to hold for rule R'. Also, it can be shown using the technique of the proof of Lemma 2.2 that if θ_1 = ... = θ_k and n_1 < ... < n_k, we have p_1 < ... < p_k. Let it also be noted that rule R' reduces to the rule given by Barr and Rizvi (1966) if the sample sizes are equal.
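As a small illustration of R' (the function name and the sample data are ours, not the paper's), a direct implementation reads:

```python
def select_largest(y_maxima, sample_sizes, c):
    """Rule R' sketch: retain population i iff (Y_i / Y_(k))^{n_i} > c,
    where Y_(k) is the largest of the k sample maxima and 0 < c < 1."""
    y_k = max(y_maxima)
    return [i + 1 for i, (y, n) in enumerate(zip(y_maxima, sample_sizes))
            if (y / y_k) ** n > c]

# Hypothetical sample maxima and sizes for k = 3 populations.
print(select_largest([0.9, 0.7, 0.95], [5, 5, 5], c=0.5))  # → [1, 3]
```

The population attaining Y_(k) is always retained, since its ratio is 1 > c; larger n_i makes the ratio test harder to pass for a given Y_i, mirroring the exponent n_i in the rule.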

6.3 Extensions to a Larger Class of Distributions

Following Barr and Rizvi (1966), we may extend the results of the previous sections to the following class of distributions given in Hogg and Craig (1956). Let the variables X_ij (j = 1, ..., n_i; i = 1, ..., k) be independent, the distribution of X_ij having density

    g(z; θ_i) = M(z)Q(θ_i)   if a(θ_i) < z < b(θ_i),
              = 0            elsewhere,

where a, b, M and Q satisfy the following restrictions: (i) M(z) is positive and continuous, (ii) a'(θ) and b'(θ) are continuous, and either (iii) a(θ) is constant, b(θ) strictly monotone (or vice versa), or (iv) a(θ) and b(θ) are both strictly monotone, with sup a(θ) = inf b(θ).

The relation

    1/Q(θ) = ∫_{a(θ)}^{b(θ)} M(z) dz

shows that 1/Q(θ) is strictly monotone, and also reveals whether it is decreasing or increasing.

As noted in Barr and Rizvi (1966), the distribution of 1/Q(V_i), where V_i is the maximum likelihood estimator of θ_i and is also complete and sufficient for θ_i, is given by the distribution of the largest item of a random sample of size n_i from the uniform distribution on (0, 1/Q(θ_i)). Therefore, we may replace the variables Y_i by 1/Q(V_i) in the rules above and proceed exactly as given there, using the same tables. However, which one of R_1, R or R' is derived depends on whether each of the functions a(θ) and b(θ) is strictly increasing (↑), strictly decreasing (↓) or constant (-). For each of the cases compatible with the given restrictions, Table VII gives the rules derived.
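The reduction via 1/Q(V_i) can be illustrated by simulation. The particular density below is our own example, not one from the paper: take M(z) = 2z, Q(θ) = θ^{-2}, a(θ) ≡ 0 and b(θ) = θ, so that g(z; θ) = 2z/θ² on (0, θ), V = max X_j is the MLE, and 1/Q(V) = V² should behave as the largest of n_i uniforms on (0, θ²):

```python
import random

def sample_density(theta, n, rng):
    # Inverse-CDF sampling from g(z; theta) = 2z/theta^2 on (0, theta):
    # F(z) = z^2 / theta^2, so z = theta * sqrt(u).
    return [theta * rng.random() ** 0.5 for _ in range(n)]

def transformed_statistic(theta, n, rng):
    v = max(sample_density(theta, n, rng))  # MLE of theta
    return v * v                            # 1/Q(V) = V^2

rng = random.Random(7)
theta, n, reps = 2.0, 5, 100_000
mean_est = sum(transformed_statistic(theta, n, rng) for _ in range(reps)) / reps
# The largest of n uniforms on (0, theta^2) has mean n * theta^2 / (n + 1).
print(mean_est, n * theta ** 2 / (n + 1))
```

Here the agreement is in fact exact in distribution, since V² = θ² · max U_j when X_j = θ√U_j; the simulation merely checks the first moment.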

TABLE VII
The Types of Rules Derived

  a(θ)   b(θ)    Selection for        Selection for
                 the smallest θ_i     the largest θ_i
   -      ↑         R_1, R                R'
   -      ↓         R'                    R_1, R
   ↑      -         R'                    R_1, R
   ↓      -         R_1, R                R'
   ↓      ↑         R_1, R                R'
   ↑      ↓         R'                    R_1, R

7. APPENDIX

We now derive expressions for the probabilities s_j = P(S = {π_j}) and s_{1j} = P(S = {π_1, π_j}) referred to in Section 4. In what follows, assume that Z_1, Z_2 and Z_3 are independent, each with uniform distribution on the interval (0, 1).

Procedure R_1

    s_1 = P(c_1 δ_2 Z_2 > δ_1 Z_1, c_1 Z_3 > δ_1 Z_1)

        = ∫_0^1 P(c_1 δ_2 Z_2 > δ_1 x, c_1 Z_3 > δ_1 x) dx

        = (1/2 - δ_2/6) δ_2 c_1 / δ_1                            if c_1 < δ_1/δ_2,

        = 1 - δ_1/(2δ_2 c_1) - δ_1/(2c_1) + δ_1^2/(3δ_2 c_1^2)   if δ_1/δ_2 < c_1.

Similarly,

    s_2 = (1/2 - δ_1/6) δ_1 c_1 / δ_2,

    s_3 = δ_1 c_1/2 - δ_1^2 c_1/(6δ_2).
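These closed forms can be verified by simulation. The sketch below uses our own reading of the Section 4 notation: δ_i = θ_i/θ_3 for three populations with θ_1 ≤ θ_2 ≤ θ_3, and rule R_1 taken as "retain π_i iff c_1 Y_i ≤ min_j Y_j" with 0 < c_1 < 1:

```python
import random

def subset_R1(y, c1):
    # Rule R_1 sketch: retain population i iff c1 * Y_i <= min_j Y_j.
    y_min = min(y)
    return {i + 1 for i, yi in enumerate(y) if c1 * yi <= y_min}

def s1_closed_form(d1, d2, c1):
    # s_1 = P(S = {pi_1}) for k = 3, with d1 = theta_1/theta_3, d2 = theta_2/theta_3.
    if c1 < d1 / d2:
        return (0.5 - d2 / 6.0) * d2 * c1 / d1
    return 1.0 - d1 / (2 * d2 * c1) - d1 / (2 * c1) + d1 ** 2 / (3 * d2 * c1 ** 2)

def s1_monte_carlo(d1, d2, c1, reps=200_000, seed=3):
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        # Y_i = theta_i * Z_i, expressed in units of theta_3.
        y = [d1 * rng.random(), d2 * rng.random(), rng.random()]
        if subset_R1(y, c1) == {1}:
            hits += 1
    return hits / reps

print(s1_closed_form(0.5, 0.8, 0.9), s1_monte_carlo(0.5, 0.8, 0.9))
```

Both branches of s_1 can be exercised by varying c_1 around δ_1/δ_2; in each case the simulated frequency matches the closed form to within Monte Carlo error.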
