
CONFIDENCE INTERVALS FOR SOME FUNCTIONS

OF SEVERAL BERNOULLI PARAMETERS

WITH RELIABILITY APPLICATIONS 1

by

Dar-Shong Hwang and Robert J. Buehler

Technical Report No. 161

November 1971

University of Minnesota

Minneapolis, Minnesota

1This research was supported by National Science Foundation Grant GP-24183.


ABSTRACT

In reliability theory one is often interested in the estimation of

products and other functions of Bernoulli parameters. Previous work has

been mainly concerned with binomial data. In the present paper other

sampling methods are considered, negative binomial in particular. It

is shown that the theory of exponential families can be used to obtain

exact confidence limits for products and quotients of Bernoulli parameters

when negative binomial observations are available from each population.

For the case of two populations the relevant distributions are related

to hypergeometric functions. To estimate more general functions such

as sums of products of Bernoulli parameters, some sampling methods are

suggested which are based on the theory of compound distributions. Another

method of obtaining exact confidence limits for products is given which

exploits the independence of the minimum and difference of independent

geometric variates. This last method is a discrete analog of a procedure

due to Lieberman and Ross for estimating sums of scale parameters of

exponential distributions.


1. Introduction.

Let π_1, π_2, ... denote Bernoulli populations with parameters p_1, p_2, .... We are concerned here with interval estimation of various

functions of the parameters, in particular, products, quotients, and

sums of products. Most previous work in this area has been concerned

with data in the form of binomial observations. The present paper shows

that other sampling methods (negative binomial, for example) lead to new and possibly advantageous ways of obtaining confidence intervals.

In some cases, sampling rules are suggested which depend upon the

particular parametric function to be estimated.

1.1 Practical applications.

Our motivation comes partly from reliability theory and partly

from biomedical applications. In reliability theory p_j may denote the reliability of a component from population π_j. Then a system made

up of one component from each of k populations has reliability

(1.1) R_1 = p_1 p_2 ⋯ p_k

if the components are in series, or reliability

(1.2) R_2 = 1 - q_1 q_2 ⋯ q_k (q_i = 1 - p_i)

if they are in parallel. Other systems whose reliabilities are more

general polynomials in the p_i are also of interest. In any case we

wish to estimate the system reliability using data pertaining to the

individual components.

Interval estimation of R_1 or R_2 (or special cases) has been considered by various writers. Buehler [2] and Harris [5] use Poisson approximations to estimate R_2. Madansky [12] and Myhre and Saunders


[14] estimate R_1 and R_2 by a chi-square approximation. Other work is reviewed by Rosenblatt [18] and by Mann [13].

Biomedical applications of the methods of Section 2 below have been

considered by Hwang [6]. These include: (i) the estimation of the

difference of two bacterial densities by the dilution method; (ii) comparison of two Yule birth processes; and (iii) estimation of

ratios and cross ratios of proportions, which arise, for example, in

the theory of effectiveness indices. We hope to consider these in a

later paper.

1.2 Nature of the difficulties.

In general we wish to find confidence limits for a given function

of k Bernoulli parameters. Of course, large sample methods will

always yield approximate solutions, but the present paper is concerned

with mathematical devices for obtaining "exact" solutions despite the

presence of k - 1 nuisance parameters.

The second difficulty arises from the discreteness of the distributions.

For any confidence level γ, it is possible to give intervals with probability of coverage exactly equal to γ only by artificial randomization (see for example [19] or Section 3.5 of [8]). We prefer the usual "conservative" solutions having coverage probability greater than or equal to γ. But such solutions may justifiably be criticized if they are, in a sense, too conservative. One quantitative measure of this would be the probability of coverage averaged over the parameter space. Hopefully this average should be only slightly greater than γ;

but for discrete distributions having all (or nearly all) of the probability

concentrated on only a few mass points, it may be closer to unity. It


will be convenient to have an expression for this undesirable state of

affairs, and we will speak in a qualitative way of situations having a

high "discreteness index" to indicate solutions which are highly "conservative."

Examples would be a binomial distribution with small n (n = 3, say), or a

Poisson distribution where prior information would indicate a mean small

compared to unity.

A third difficulty is that in reliability applications in particular,

the parameters pj may be close to unity. This fact may have undesirable

consequences for the discreteness index or for the expected sample size

with certain sampling schemes.

It is probably impossible to deal simultaneously with all these

difficulties in a completely satisfactory way, at least within the Neyman-Pearson framework. Nevertheless we hope that the methods given below

will find a number of legitimate applications.

1.3 Summary.

In Section 2 we show that inverse (i.e., negative) binomial sampling allows us to use the Lehmann-Scheffé theory of exponential families to

estimate products and quotients of Bernoulli parameters. Formulas are

given for the most general case and for a number of special cases.

Section 3 describes a new sampling method which is specifically aimed

at the estimation of certain parametric functions not covered in Section 2.

Section 4 gives a number of ways of using properties of compound Poisson

distributions to estimate sums of products of Bernoulli parameters. In

Section 5 we show that a method due to Lieberman and Ross for estimating

sums of ~cale parameters of exponential distributions can be modified

so that it applies also to problems involving products of Bernoulli parameters.


2. Distribution Theory for Estimation of Products and Quotients Using

Inverse Binomial Sampling.

In this section we exploit the theory of exponential families to

remove nuisance parameters when the probability models are negative

binomial. Other reliability applications of the same theory have been

made by Lentner and Buehler [10] for gamma models and by Harris [5] for

Poisson models.

2.1 The general case.

Let a Bernoulli population have probability p of success and for

any fixed r = 1, 2, ••• , let X denote the number of successes observed

before the r-th failure. Then X has the negative binomial distribution

(2.1) P{X = x} = (r+x-1 choose x) p^x (1-p)^r, x = 0, 1, 2, ...,

which we will abbreviate by

(2.2) X ~ NB(r, p).

If we sample independently from k + k' Bernoulli populations to

obtain random variables X_i ~ NB(r_i, p_i) for i = 1, ..., k and Y_j ~ NB(s_j, p'_j) for j = 1, ..., k', then it will be seen to be possible

to obtain confidence intervals for the function

(2.3) θ = (p_1 p_2 ⋯ p_k) / (p'_1 p'_2 ⋯ p'_{k'}).

The essential reasons are that the distribution (2.1) has the exponential

form

(2.4) g(φ) h(x) e^{φx},

where


(2.5) φ = log p, g(φ) = (1 - e^φ)^r, h(x) = (r+x-1 choose x),

and that log θ is a linear function of φ_i = log p_i and φ'_j = log p'_j. To apply the theory of Lehmann and Scheffé [9] (or see Sec. 4.4 of [8]),

we transform the random variables by

(2.6) W_1 = X_1,
      W_i = X_i - X_1 for i = 2, 3, ..., k,
      V_j = Y_j + X_1 for j = 1, 2, ..., k'.

Let U = (W_2, W_3, ..., W_k, V_1, V_2, ..., V_{k'}) and u = (w_2, w_3, ..., w_k, v_1, v_2, ..., v_{k'}). Then it is straightforward to verify that

(2.7) P{W_1 = w_1, U = u} = A(u) B(w_1, u) θ^{w_1},

where θ is defined by (2.3),

(2.8) A(u) = q_1^{r_1} ∏_{i=2}^{k} q_i^{r_i} p_i^{w_i} ∏_{j=1}^{k'} q'_j^{s_j} p'_j^{v_j},

(2.9) B(w_1, u) = (r_1+w_1-1 choose w_1) ∏_{i=2}^{k} (r_i+w_1+w_i-1 choose w_1+w_i) ∏_{j=1}^{k'} (s_j+v_j-w_1-1 choose v_j-w_1),

(2.10) q_i = 1 - p_i, q'_j = 1 - p'_j,

and the range of the variables is

(2.11) w_1 = 0, 1, 2, ...,
       w_i = -w_1, -w_1+1, -w_1+2, ... (i = 2, ..., k),
       v_j = w_1, w_1+1, w_1+2, ... (j = 1, ..., k').

From (2.7) we obtain the conditional distribution

(2.12) P{W_1 = w_1 | U = u} = B(w_1, u) θ^{w_1} / Σ_t B(t, u) θ^t,

where the possible values of w_1 are max(0, -w_2, ..., -w_k) ≤ w_1 ≤ min(v_1, ..., v_{k'}), and the values of t in the summation are the same


as the possible values of w_1. Thus by using known theory of exponential families we have obtained a conditional distribution (2.12) which depends on the k + k' parameters only through the function θ. Confidence intervals can be obtained from (2.12) in the usual way. For any confidence level γ, and any observation W_1 = w_1, U = u, an upper confidence limit θ_2(w_1, u) is defined by

(2.13) θ_2(w_1, u) = sup{θ: Σ_{t ≤ w_1} P_θ{W_1 = t | U = u} ≥ 1 - γ}.

Similarly a lower confidence limit θ_1(w_1, u) is defined by

(2.14) θ_1(w_1, u) = inf{θ: Σ_{t ≥ w_1} P_θ{W_1 = t | U = u} ≥ 1 - γ}.

Because of the discreteness of the distribution these are "conservative" intervals satisfying the inequalities

(2.15) P{θ ≤ θ_2(W_1, U) | U = u} ≥ γ for all θ, u,

(2.16) P{θ ≥ θ_1(W_1, U) | U = u} ≥ γ for all θ, u.

If randomization is used to make the probabilities equal to γ, the resulting solutions are known to be "uniformly most accurate unbiased" (see Chapter 4 of [8]).
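The computation implied by (2.13) and (2.14) is elementary when the conditional support is finite (as it is whenever k' ≥ 1) and θ lies in (0, 1), as in the product case. The following Python sketch is offered only as an illustration, not as the authors' program; the names cond_pmf and conf_limits are ours, B stands for the weight function B(·, u) of (2.9), and for the pure product case the infinite support must be truncated.

    from math import comb

    def cond_pmf(theta, ws, B):
        # conditional law (2.12): P(W1 = t | U = u) over the support ws
        wts = [B(t) * theta ** t for t in ws]
        total = sum(wts)
        return [w / total for w in wts]

    def conf_limits(w1, ws, B, gamma=0.95, grid=4000):
        # crude grid search for the sup in (2.13) and the inf in (2.14);
        # bisection, as in Section 2.5, would be faster
        lo_ok, hi_ok = [], []
        for i in range(1, grid):
            theta = i / grid
            pmf = dict(zip(ws, cond_pmf(theta, ws, B)))
            if sum(p for t, p in pmf.items() if t <= w1) >= 1 - gamma:
                hi_ok.append(theta)   # theta still admissible for theta_2
            if sum(p for t, p in pmf.items() if t >= w1) >= 1 - gamma:
                lo_ok.append(theta)   # theta admissible for theta_1
        return min(lo_ok), max(hi_ok)

    # product case (2.17) with k = 2, using the data of Example A of Section 2.5;
    # the support is truncated at 400 terms for the illustration
    B = lambda t, w=12, r1=5, r2=3: comb(r1 + t - 1, t) * comb(r2 + t + w - 1, t + w)
    print(conf_limits(45, range(400), B))  # lower limit near 0.792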

2.2 The product of k parameters.

In this section we give results which are appropriate for the special

case when θ in (2.3) is replaced by

(2.17) θ* = p_1 p_2 ⋯ p_k.

We then simply drop the variates Y_j and V_j and redefine U and u by U = (W_2, W_3, ..., W_k), u = (w_2, w_3, ..., w_k). The conditional distribution (2.12) then becomes


(2.18) P{W_1 = w_1 | U = u} = B*(w_1, u) (θ*)^{w_1} / Σ_t B*(t, u) (θ*)^t,

where

(2.19) B*(w_1, u) = (r_1+w_1-1 choose w_1) ∏_{i=2}^{k} (r_i+w_1+w_i-1 choose w_1+w_i),

and the range of w_1 (and t) is max(0, -w_2, ..., -w_k) ≤ w_1 < ∞. We now show some relationships between these expressions and hypergeometric functions.

2.3 The product of two parameters.

The usual definition of the hypergeometric function is [20]:

(2.20) F(a, b, c; z) = 1 + (a·b)/(1·c) z + (a(a+1)·b(b+1))/(1·2·c(c+1)) z² + ⋯.

Let us define

(2.21) f(a, b, c; z, n) = [Γ(c) Γ(a+n) Γ(b+n) / (Γ(a) Γ(b) Γ(c+n))] · zⁿ/n!,

where Γ(x) is the gamma function. Then it is straightforward to verify that

(2.22) F(a, b, c; z) = Σ_{n=0}^{∞} f(a, b, c; z, n).

If we further define

(2.23) F_x(a, b, c; z) = Σ_{n=0}^{x} f(a, b, c; z, n),

then F_∞(a, b, c; z) = F(a, b, c; z).

Going back to (2.18), and taking the special case k = 2, replacing θ* by

(2.24) λ = p_1 p_2,


we find

(2.25) P{X_1 = x_1 | X_2 - X_1 = w} = B'(x_1, w) λ^{x_1} / Σ_t B'(t, w) λ^t,

where

(2.26) B'(x_1, w) = (r_1+x_1-1 choose x_1)(r_2+x_1+w-1 choose x_1+w),

and where the range of x_1 (and t) is max(0, -w) ≤ x_1 < ∞.

It is now straightforward to verify that the conditional distribution can be written

(2.27) P{X_1 ≤ x | X_2 - X_1 = w} = F_x(r_1, r_2+w, 1+w; λ) / F(r_1, r_2+w, 1+w; λ) for w ≥ 0,
       P{X_1 ≤ x | X_2 - X_1 = w} = F_{x+w}(r_1-w, r_2, 1-w; λ) / F(r_1-w, r_2, 1-w; λ) for w < 0,

where x is an integer ≥ 0 if w ≥ 0, and an integer ≥ -w if w < 0. By analogy with the incomplete gamma and beta functions, we may call the distributions in (2.27) "incomplete hypergeometric functions."

If we define

(2.28) α = max(0, -w), β = max(0, w),

then (2.27) can be put in the alternative form

(2.29) P{X_1 ≤ x | X_2 - X_1 = w} = F_{x-α}(r_1+α, r_2+β, 1+|w|; λ) / F(r_1+α, r_2+β, 1+|w|; λ),
       x = α, α+1, ..., w = 0, ±1, ±2, ....

In reliability applications, λ = p_1p_2 is the reliability of a series system, and the above formulas are relevant. To use the same theory to estimate the reliability 1 - q_1q_2 of a parallel system we must redefine X_1 and X_2 to be the numbers of failures prior to the r_1-th and r_2-th successes, and the relevant conditional distribution is


(2.30) P{X_1 = x_1 | X_2 - X_1 = w} = f(r_1+α, r_2+β, 1+|w|; µ, x_1-α) / F(r_1+α, r_2+β, 1+|w|; µ),
       x_1 = α, α+1, ..., w = 0, ±1, ±2, ...,

where α and β are defined in (2.28) and where µ = q_1q_2. From (2.30) we can obtain confidence intervals for µ or for the reliability 1 - µ.

2.4 Some special cases.

Suppose r_1 = r_2 = 1, so that X_1 and X_2 are the numbers of successes before the first failure in each population. Then X_1 and X_2 have geometric distributions, and either from (2.27) or by a more direct argument we find that whether w > 0 or w < 0,

(2.31) P{X_1 = x | X_2 - X_1 = w} = (1-λ)λ^x, x = 0, 1, 2, ....

Since the last expression is the same for all w, we conclude that min(X_1, X_2) and X_2 - X_1 are independently distributed and that the unconditional distribution of min(X_1, X_2) is the geometric distribution given by (2.31). These facts have been noted previously by Ferguson [4].

In this case the upper and lower confidence limits given by (2.13) and (2.14) become

(2.32) λ_2(x) = sup{λ: Σ_{t=0}^{x} (1-λ)λ^t ≥ 1 - γ},

(2.33) λ_1(x) = inf{λ: Σ_{t=x}^{∞} (1-λ)λ^t ≥ 1 - γ},

which can be solved explicitly to give

(2.34) λ_1 = (1-γ)^{1/x}, λ_2 = γ^{1/(x+1)}.

The lower confidence limit λ_1 is of greater interest in reliability


applications, and for large x, λ_1 is asymptotically 1 + (1/x) log(1-γ). As an example, if x = 100 and 1 - γ = 0.05, then λ_1 = 0.970.
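As a quick check of the closed form (2.34) and the stated asymptote, consider the following lines (Python, purely illustrative):

    from math import log

    gamma, x = 0.95, 100
    lam1 = (1 - gamma) ** (1 / x)          # exact lower limit from (2.34)
    print(lam1, 1 + log(1 - gamma) / x)    # 0.9705... against the asymptote 0.9700...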

When r_1 = r_2 = 1, our general method of setting confidence limits amounts to using the unconditional distribution of min(X_1, X_2). In reliability applications where we have two dissimilar elements in series with system reliability λ = p_1p_2, the use of min(X_1, X_2) is equivalent to testing individual series systems (rather than components), recording the first system failure, and using that observation to estimate λ.

Next suppose r_1 is arbitrary but r_2 = 1. Then if w = x_2 - x_1 ≥ 0,

(2.35) P{X_1 = x_1 | X_2 - X_1 = w} = (r_1+x_1-1 choose x_1)(1-λ)^{r_1} λ^{x_1}, x_1 = 0, 1, 2, ...,

which is a NB(r_1, λ) distribution, and is incidentally free of w. If w < 0, then the conditional distribution is a truncated negative binomial,

(2.36) P{X_1 = x_1 | X_2 - X_1 = w} = (r_1+x_1-1 choose x_1)(1-λ)^{r_1} λ^{x_1} / Σ_{t=-w}^{∞} (r_1+t-1 choose t)(1-λ)^{r_1} λ^t,

which is not free of w. If we define the incomplete beta function as usual by

(2.37) I_x(m, n) = [Γ(m+n)/(Γ(m)Γ(n))] ∫_0^x t^{m-1}(1-t)^{n-1} dt,

then the cumulative forms of (2.35) and (2.36) are respectively (see for example [7], [15])

(2.38) P{X_1 ≤ x | X_2 - X_1 = w} = I_{1-λ}(r_1, x+1), x = 0, 1, 2, ...,

and


(2.39) P{X_1 ≤ x | X_2 - X_1 = w} = [I_{1-λ}(r_1, x+1) - I_{1-λ}(r_1, -w)] / [1 - I_{1-λ}(r_1, -w)],
       x = -w, -w+1, -w+2, ....

Equation (2.38) and other formulas useful for evaluating the negative

binomial distribution are considered in more detail in Section 5 below.

2.5 A numerical example.

Table 1 gives a numerical example of lower confidence limits,

illustrating the use of the formulas of Section 2.3.

Table 1

Example   r_1   r_2   x_1   x_2   95% lower confidence   Point estimate
                                  limit for p_1p_2       of p_1p_2
A          5     3    45    57        0.7924                0.855
B         10     6    90   114        0.8068                0.855

The point estimates for p_1p_2 were calculated from x_1x_2/[(r_1+x_1)(r_2+x_2)]. Since Example B is obtained by doubling the data values of Example A, the point estimates are the same. Example B has a confidence limit slightly closer to the point estimate, as one would expect. The confidence limits were found by iterating the calculation of the cumulative distribution (2.29) about 5 times, eventually determining the value of λ such that (2.29) takes the value 0.95. This was done on a CDC 6600 computer using a Fortran program and double precision (29 digits). Each iteration required evaluation of the infinite hypergeometric series (2.22). Individual terms were calculated recursively, and the series was arbitrarily declared to have converged as soon as an individual term a_n was less than 10^{-10}. Let


S_n = Σ_{j=0}^{n} a_j denote the cumulative sum. In Example A, with λ = 0.7924, the calculation stopped with S_201 = 16435.77 and a_201 = 8.038 × 10^{-11}, and in Example B, with λ = 0.8068, it stopped with S_357 = 2320684000 and a_357 = 8.948 × 10^{-11}. Computing time was approximately one second for each iteration.
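A reader without access to the original Fortran program can reproduce the calculation with a short routine in any language. The following Python sketch is our own reconstruction, not the original program: it builds the terms of (2.22) by the recursion f_{n+1}/f_n = (a+n)(b+n)z/((c+n)(n+1)), truncates the series at 10^{-10} as described above, and bisects on λ. The tail convention follows (2.14) (equivalently (5.9)), so the third decimal may differ slightly from Table 1 under a different convention.

    def terms(a, b, c, z, tol=1e-10, max_n=200000):
        # terms f(a, b, c; z, n) of (2.21); f_0 = 1 and
        # f_{n+1} = f_n * (a+n)(b+n) z / ((c+n)(n+1))
        t, n = 1.0, 0
        while t >= tol and n < max_n:
            yield t
            t *= (a + n) * (b + n) * z / ((c + n) * (n + 1))
            n += 1

    def cdf_ratio(a, b, c, z, x):
        # F_x(a, b, c; z) / F(a, b, c; z), as in (2.29)
        part = total = 0.0
        for n, t in enumerate(terms(a, b, c, z)):
            total += t
            if n <= x:
                part += t
        return part / total

    def lower_limit(r1, r2, x1, x2, gamma=0.95):
        # bisect for the lambda at which P(X1 <= x1 - 1 | w) equals gamma
        w = x2 - x1
        alpha, beta = max(0, -w), max(0, w)
        lo, hi = 0.0, 1.0 - 1e-9
        for _ in range(50):
            lam = (lo + hi) / 2
            if cdf_ratio(r1 + alpha, r2 + beta, 1 + abs(w), lam, x1 - alpha - 1) > gamma:
                lo = lam     # coverage still above gamma: the limit can be raised
            else:
                hi = lam
        return lam

    print(lower_limit(5, 3, 45, 57), lower_limit(10, 6, 90, 114))  # near 0.792 and 0.807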

3. A Method of Mixtures of Distributions for Estimating Other Functions.

In Section 2 we have shown how the Lehmann-Scheffé theory can be

used to estimate products and quotients of Bernoulli parameters.

Occasionally other functions may be of interest. For example, in

reliability theory, when a system consists of combinations of series and

parallel elements, then the reliability can be expressed as sums and

differences of products of Bernoulli parameters. In the present section

we will show that sometimes it is possible to tailor the sampling rule

to the function to be estimated, and thereby eliminate nuisance parameters

and obtain confidence limits.

Lemma 3.1.

Let p_1 + q_1 = p_2 + q_2 = 1, and let Bin(n, p) denote a binomial variate in the usual notation. If X ~ NB(r, q_1) and (Y | X = n) ~ Bin(n, p_2), then Y ~ NB(r, q_1p_2/(1 - q_1q_2)).

Proof:

The probability generating function (PGF) of X is

(3.1) Π_X(t) = (p_1/(1 - q_1 t))^r.

The PGF of (Y | X = 1) is

(3.2) q_2 + p_2 t.

By a theorem for the PGF of a random sum (Feller [ 3], p. 287), the PGF of

Y is


(3.3) Π_Y(t) = (p/(1 - q t))^r,

where

(3.4) p = p_1/(1 - q_1q_2), q = q_1p_2/(1 - q_1q_2),

which is the PGF of the distribution asserted in the lemma.

Since the marginal distribution of Y depends only on the parametric function p of (3.4), it is seen that we can make inferences about p by a sampling scheme which first takes a negative binomial sample from population π_1, then a (positive) binomial sample from π_2, where the size of the second sample depends on the outcome of the first sample.
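A small simulation makes the scheme concrete. The following Python sketch is illustrative only (the function names are ours): it draws X ~ NB(r, q_1) by direct Bernoulli sampling, then Y | X = n ~ Bin(n, p_2), and compares the empirical mean of Y with the mean rq/(1-q) of the NB(r, q) law asserted by Lemma 3.1.

    import random

    def sample_nb(r, p):
        # number of successes (probability p) before the r-th failure, as in (2.1)
        successes = failures = 0
        while failures < r:
            if random.random() < p:
                successes += 1
            else:
                failures += 1
        return successes

    def compound_sample(r, q1, p2):
        # the two-stage scheme: X ~ NB(r, q1), then Y | X = n ~ Bin(n, p2)
        n = sample_nb(r, q1)
        return sum(random.random() < p2 for _ in range(n))

    r, q1, p2 = 5, 0.3, 0.6
    q = q1 * p2 / (1 - q1 * (1 - p2))       # the parameter q of (3.4)
    draws = [compound_sample(r, q1, p2) for _ in range(100_000)]
    print(sum(draws) / len(draws), r * q / (1 - q))  # the two should roughly agree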

We are unable to give an example where the parameters p or q

would be of interest. However, by combining the present result with

those of Section 2, it is possible to make inferences about a wide

variety of parametric functions. For example, suppose we have a system

with two elements in parallel connected to a third in series. The

system reliability is

(3.5) θ = (1 - q_1q_2) p_3.

We can write

(3.6) θ = q_1 p_2 p_3 / q,

and since this is of the form (2.3) (with appropriate relabeling of "successes" and "failures" in π_1) we can use the method of Section 2.

In this case we will require the compound sampling described in Lemma 3.1 from populations π_1 and π_2, plus an independent negative binomial sample from each of the populations π_1, π_2, π_3.


4. Compound Poisson Methods.

In the present section we give some new methods of eliminating

nuisance parameters which apply to the estimation of many functions of

Bernoulli parameters, in particular sums of products. Unfortunately

the methods do have certain deficiencies, as we will indicate. Never­

theless we feel the techniques have mathematical interest and that

possibly ways may eventually be found to circumvent or minimize the

shortcomings. The following well-known result is the basis of the methods.

Lemma 4.1.

If (X | Y = n) ~ Bin(n, p) and Y ~ Po(λ) (Poisson with mean λ), then X ~ Po(λp).

Thus we can "compound" binomial variates, converting them to (compound)

Poisson variates. If we take a binomial sample whose size is determined

randomly by a Poisson variate of known mean λ, the result is a Poisson variate having mean λp. The purpose of converting to Poisson is to

allow later manipulations, but unfortunately the original Poisson observation

introduces unwanted variability (see Section 4.2).

4.1 Horizontal and vertical compounding.

If we independently compound Bin(n_i, p_i) variates by Po(λ_i) variates we get independent Po(λ_i p_i) variates. The Lehmann-Scheffé theory we have used in Section 2 applies also to Poisson distributions, and in particular it is possible to obtain conditional distributions depending only on the product of the Poisson parameters, in this case on ∏(λ_i p_i). Relevant formulas have been given by Harris [5]. Here ∏λ_i is known, so we can obtain confidence limits for ∏p_i. We call the method of this paragraph "horizontal compounding."


Now suppose X_2 ~ Bin(n_2, p_2), where n_2 is an observed value of X_1 ~ Bin(n_1, p_1), and n_1 is an observed value of Po(λ). Two applications of Lemma 4.1 give the marginal distribution X_2 ~ Po(λp_1p_2). This "vertical compounding" can clearly be carried to any number of steps. Since λ is known, we need only tables of the Poisson distribution to find confidence limits for ∏p_i.

To estimate sums of products of Bernoulli parameters one can appeal

to the reproductive property of the Poisson distributions, and possibly

also to the following lemma (Feller [3], p. 301, problem 3):

Lemma 4.2.

In Y Bernoulli trials, where Y ~ Po(λ), the numbers of successes

and failures are stochastically independent.

For example, θ in (3.5) can be rearranged to θ = p_3(p_1 + q_1p_2). This suggests observing Y = n from Po(λ), X_1 = x_1 from Bin(n, p_1), and X_2 = x_2 from Bin(n - x_1, p_2). Then X_1 ~ Po(λp_1) and X_2 ~ Po(λq_1p_2), and X_1 and X_2 are independent. Thus X_1 + X_2 ~ Po(λp_1 + λq_1p_2), and taking X_3 from Bin(X_1 + X_2, p_3) gives the desired unconditional distribution X_3 ~ Po(λp_3(p_1 + q_1p_2)).
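In simulation form (a Python sketch assuming numpy, not part of the original report) the scheme reads:

    import numpy as np

    rng = np.random.default_rng(1)
    lam, p1, p2, p3 = 50.0, 0.9, 0.8, 0.95

    def draw_x3():
        # one draw of X3 ~ Po(lam * p3 * (p1 + q1*p2)) by compounding
        n = rng.poisson(lam)               # Y ~ Po(lam), lam known
        x1 = rng.binomial(n, p1)           # successes from pi_1
        x2 = rng.binomial(n - x1, p2)      # failures of pi_1 retried on pi_2 (Lemma 4.2)
        return rng.binomial(x1 + x2, p3)   # compound the Poisson sum with p3

    draws = np.array([draw_x3() for _ in range(100_000)])
    theta = p3 * (p1 + (1 - p1) * p2)
    print(draws.mean() / lam, theta)       # the two should roughly agree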

It is clear that there are many ways to estimate sums of products

by horizontal and vertical compounding. For expressions involving

differences rather than sums, algebraic manipulations using the identities

p_i + q_i = 1 can be used to reverse the signs. Hwang [6] has classified

functions which can be estimated by these various techniques.

4.2 Critique.

In either horizontal or vertical compounding, the use of the variates

Y having known Poisson distribution is rightly viewed with suspicion.


Technically, Y is an ancillary statistic since its distribution does not

depend on the unknown parameters. Contrary to our suggested procedures,

either Bayesian theory or the likelihood principle would demand inferences

conditional on the observed value of Y.

The variates Y do indeed introduce undesirable variability, which

is the price we pay for elimination of nuisance parameters. We can get

some idea of the effect of the added variability by considering a

simplified case. Take Y ~ Po(λ) and (X | Y = n) ~ Bin(n, p). We may compare the conditional (given Y = n) point estimator

(4.1) p̂ = X/n

with the unconditional estimator

(4.2) p̃ = X/λ.

The simplest comparison is that of conditional and unconditional variances:

(4.3) Var(p̂ | Y = n) = pq/n, Var p̃ = p/λ.

A measure of the efficiency of p̃ relative to p̂ is the quotient (pq/n)/(p/λ) = qλ/n, in which the relevant factor is q, since λ/n is of the order of unity for large λ. We conclude that the additional variability introduced by Y is very serious when q is small, but not serious when p is small so that q is close to 1.
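The orders of magnitude are easy to tabulate; the following lines (an illustration, with λ and n chosen arbitrarily by us) print the variance ratio qλ/n of (4.3) for several p:

    lam, n = 100.0, 100                    # n is a typical Po(lam) outcome
    for p in (0.99, 0.5, 0.05):
        q = 1 - p
        print(p, (p * q / n) / (p / lam))  # variance ratio; roughly q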

On the other hand when p is small, it becomes necessary to choose λ to compromise between large samples (λ large) and a high "discreteness index" (λp small; see Section 1.2).

In any compounding scheme it would be advisable to consider questions

of efficiency and "discreteness index" at each stage.


5. Estimation of Products Using Geometric Variates.

A sequence of Bernoulli trials--that is, a Bernoulli process--gives a record of "successes" and "failures," which may conveniently be recorded as zeros and ones. Let p = P{success} = P(0), q = 1 - p = P{failure} = P(1). In a record such as 0111010001101 the information about p is traditionally summarized by "X ~ Bin(13, p) and X = 6," if it was decided in advance to stop with the thirteenth observation, or by "X ~ NB(7, p) and X = 6" if it was decided in advance to stop with the seventh failure. In the latter case we may alternatively report the observation of seven independent geometric variates: X_i is the number of successes between the (i-1)st and i-th failures. We write X_i ~ Geom(p), where

(5.1) X ~ Geom(p) means P{X = x} = (1-p)p^x for x = 0, 1, 2, ....

In the present section we exploit properties of geometric variates

to estimate products of Bernoulli parameters. Our methods are discrete

analogs of those proposed by Lieberman and Ross [11] for the estimation

of sums of exponential parameters.

5.1 A method for estimating p_1p_2 and its theoretical basis.

The following easily verified lemma follows from Ferguson's characterization of the geometric distribution [4].

Lemma 5.1.

Let X and Y be independent, X ~ Geom(p_1), Y ~ Geom(p_2), and let q_1 = 1 - p_1, q_2 = 1 - p_2,

(5.2) U = min(X, Y), V = Y - X.

Then U and V are independent, U ~ Geom(p_1p_2), and


(5.3) P{V = v} = c p_2^v for v > 0,
      P{V = v} = c p_1^{-v} for v ≤ 0,

where c = q_1q_2/(1 - p_1p_2).

Since the distribution of U depends only on p_1p_2, one or more U-values could be used to estimate p_1p_2. However, there is "left over" information in the values of V which can be used for increased efficiency.

The variate V is rather like a Geom(p_2) variate when V > 0, and -V is like Geom(p_1) when V < 0. The case V = 0 in fact presents

a difficulty (which did not arise in the Lieberman-Ross continuous model)

which we overcome by an asymmetric treatment, arbitrarily classifying

V = 0 with the negative values. From (5.3) we have:

Lemma 5.2.

(V - 1 | V > 0) ~ Geom(p_2).

Now let us suppose X_1, ..., X_m are independent Geom(p_1) variates and Y_1, ..., Y_n are independent Geom(p_2) variates. Let U_1 = min(X_1, Y_1), V_1 = Y_1 - X_1. We now use V_1 to construct an observation U_2, independent of U_1 and having the same Geom(p_1p_2) distribution. Let us define

(5.4) U_2 = min(X_2, V_1 - 1) if V_1 > 0,
      U_2 = min(-V_1, Y_2) if V_1 ≤ 0,

(5.5) V_2 = V_1 - 1 - X_2 if V_1 > 0,
      V_2 = Y_2 + V_1 if V_1 ≤ 0.

From Lemmas 5.1 and 5.2 we have:

Lemma 5.3.

(i) U_1, U_2 and V_2 are mutually independent; (ii) U_2 ~ Geom(p_1p_2); (iii) V_2 has the distribution (5.3).


At this point we have two observations U_1, U_2 useful for estimating p_1p_2, plus independent "left over" information V_2 which, like V_1, can be combined with a Geom(p_1) observation if V_2 > 0 or with a Geom(p_2) observation if V_2 ≤ 0. The procedure continues until either the X's or the Y's are exhausted. As in the Lieberman-Ross method there will then be residual information about one parameter, but not about both. Suppose the procedure yields r values U_j. These are combined without loss of information about p_1p_2 by forming the sum

(5.6) S_r = Σ_{i=1}^{r} U_i.

Well known distribution theory gives S_r ~ NB(r, p_1p_2), so that it becomes a straightforward problem to find confidence limits for p_1p_2 using the negative binomial distribution, as we show in Section 5.4.
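The construction (5.4)-(5.5) is easily mechanized. The sketch below (Python; the function name combine is ours) reproduces the first example of Section 5.3:

    def combine(xs, ys):
        # asymmetric pairing of Section 5.1: Geom(p1) xs and Geom(p2) ys
        # are turned into values U whose sum is NB(r, p1*p2)
        xs, ys = list(xs), list(ys)
        us, x, y = [], xs.pop(0), ys.pop(0)
        while True:
            us.append(min(x, y))
            v = y - x
            if v > 0:                      # left-over V-1 acts like Geom(p2) (Lemma 5.2)
                if not xs:
                    break
                x, y = xs.pop(0), v - 1
            else:                          # V <= 0: left-over -V acts like Geom(p1)
                if not ys:
                    break
                x, y = -v, ys.pop(0)
        return us

    us = combine([24, 35, 39], [16, 12, 27, 12, 43, 19])
    print(us, sum(us))                     # [16, 8, 3, 27, 5, 6, 33], S_7 = 98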

5.2 A symmetric approximation.

Some simplification is possible, and the results are more closely

analogous to those of Lieberman and Ross, if we define the "symmetric

approximation" to be that obtained by deleting the "-1" in the definitions

(5.4) and (5.5) of u2

and v2

• When q1 and q2 are small, then

V = 0 has small probability, and the "symmetric approximation" is

presumably appropriate. Let us define the following partial sums:

j

Tlj = 'E X. i=l

1. j = 1, ••• , m

j (5.7) T2j = 'E Y.

i=l 1.

j = 1, ••• , n

j s. = 'E u.

J i=l 1.

j = 1, ••• , r'


where r' is the number of U-values provided by the symmetric approximation. As in the Lieberman-Ross procedure, it can be shown that the construction is such that the sequence of S_j's agrees exactly with the first r' terms obtained by putting the T_{1j}'s and T_{2j}'s into a single ordered sequence (an example is given in Section 5.3). Therefore S_{r'} can be calculated simply by

(5.8) S_{r'} = min(T_{1m}, T_{2n}),

and r' is the number of T_{1j}'s and T_{2j}'s less than or equal to S_{r'}.

This approximation shows that when q_1 and q_2 are small there

will be a minimum of "left over" information, so that the procedure has

a high efficiency, when we have approximately the same number of Bernoulli

observations from the two populations. Of course, it is also possible

to deliberately terminate sampling in such a way as to minimize "left

over" information.

Because of the simplifications arising in the symmetric approximation,

it is tempting to think that it may actually be exact. A simple check

shows that unfortunately it is not. Let A = {X_1 = Y_1 = 0}, B = {X_1 = X_2 = 0}, C = {Y_1 = Y_2 = 0}. Then with the modified definition of U_2, {U_1 = U_2 = 0} = A ∪ B ∪ C. Since BC = ABC, P(A ∪ B ∪ C) = P(A) + P(B) + P(C) - P(AB) - P(AC) = q_1q_2 + q_1² + q_2² - q_1²q_2 - q_1q_2². When q_1 = q_2 = q = 1 - p, this gives P{U_1 = U_2 = 0} = q²(1 + 2p). But if U_1 and U_2 are independent Geom(p²) variates, P{U_1 = U_2 = 0} = (1 - p²)² = q²(1 + 2p + p²), so that the symmetric approximation is not exact.

5.3 Examples.

To illustrate a case where p_1 and p_2 are close to unity, let (X_1, X_2, X_3) = (24, 35, 39), (Y_1, ..., Y_6) = (16, 12, 27, 12, 43, 19).


U_1 = min(24, 16) = 16, V_1 = 16 - 24 = -8, and we take +8 to be a Geom(p_1) observation. U_2 = min(8, 12) = 8, V_2 = 12 - 8 = 4, and we take 4 - 1 = 3 to be a Geom(p_2) observation. Similarly we find:

(V_1, ..., V_7) = (-8, 4, -32, -5, 7, -33, 10), (U_1, ..., U_7) = (16, 8, 3, 27, 5, 6, 33), (S_1, ..., S_7) = (16, 24, 27, 54, 59, 65, 98). The "left over" information is V_7 = 10 and Y_6 = 19. The symmetric approximation would give (S_1, ..., S_7) = (16, 24, 28, 55, 59, 67, 98), where S_1 = Y_1, S_2 = X_1, S_3 = Y_1 + Y_2, S_4 = Y_1 + Y_2 + Y_3, etc. We know S_7 ~ NB(7, p_1p_2), and the value S_7 = 98 summarizes the information about p_1p_2. In effect, we have terminated with the seventh failure on the 105th Bernoulli trial. The maximum likelihood estimator of p_1p_2 based on S_7 is 98/105, and (as we show in Section 5.4) a 90 percent confidence interval for p_1p_2 is (0.890, 0.968).

With p_1 and p_2 small, we might get observations like (X_1, ..., X_7) = (0, 1, 1, 0, 0, 2, 1) and (Y_1, ..., Y_12) = (0, 0, 1, 0, 1, 1, 0, 2, 0, 0, 1, 2). Our asymmetrical solution gives U_6 = U_10 = U_17 = 1 and all other U-values equal to zero. Left over information is X_7 = 1 and V_17 = -1. The maximum likelihood estimator of p_1p_2 based on S_17 = 3 is 3/20 = 0.15. The symmetric approximation gives a substantially different sequence of U values, and in fact direct comparison of the methods is difficult because the asymmetric solution tends to use a greater proportion of Y values, so that at termination the two methods have used different sets of data. We believe the symmetric approximation will tend to overestimate p_1p_2 when p_1 and p_2 are small.

In the above sequence (X_1, ..., X_6) we have 4 successes and 6 failures, giving p̂_1 = 0.4. In (Y_1, ..., Y_12) we have 8 successes and


12 failures, giving p̂_2 = 0.4. The value p̂_1p̂_2 = 0.16 agrees well with the value 0.15 derived above. We give this comparison simply as a partial check, and not to claim any superior method of point estimation. The point of the procedure is rather the possibility of finding "exact" confidence limits for p_1p_2. In the next section we find a 90 percent confidence interval (0.044, 0.344).

5.4 Calculation of negative binomial confidence limits.

Let us denote by F_X(x; r, p) the cumulative distribution function of X ~ NB(r, p) defined in (2.1). Then using standard arguments

(5.8) P{p ≤ p̄(X)} ≥ γ and P{p̲(X) ≤ p} ≥ γ,

when p̄(x) and p̲(x) are solutions of

(5.9) F_X(x; r, p̄) = 1 - γ and F_X(x-1; r, p̲) = γ.

It is known [7], [15] that the negative binomial is related to the

incomplete beta function defined in (2.37) by

(5.10) F_X(x; r, p) = I_{1-p}(r, x+1).

Therefore we may also define p̄ and p̲ as solutions of

(5.11) I_{1-p̄}(r, x+1) = 1 - γ and I_{1-p̲}(r, x) = γ.

Thus the confidence limits could be found from tables of the incomplete beta function [17]. It is usually more convenient, however, to use the F distribution. The relationship here (see for example [16], p. 33) is

(5.12) I_x(m, n) = P{F_{2n}^{2m} ≤ nx/[m(1-x)]},

where of course F_{2n}^{2m} is the Fisher-Snedecor F variate with 2m and 2n degrees of freedom. Thus we find


(5.13) p̄ = (x+1) F_{2r}^{2x+2}(γ) / [r + (x+1) F_{2r}^{2x+2}(γ)],
       p̲ = x F_{2r}^{2x}(1-γ) / [r + x F_{2r}^{2x}(1-γ)],

where F_n^m(β) is the F-percentile defined by P{F_n^m ≤ F_n^m(β)} = β.

Since F tables usually give upper tails, we may use the standard trick of taking reciprocals to express p̲ as

(5.14) p̲ = x / [x + r F_{2x}^{2r}(γ)].

If neither the incomplete beta nor the F table is convenient, a

number of approximations are available [1], [7].

For the first example of Section 5.3, r = 7, x = 98, F_{14}^{198}(0.95) = 2.16, F_{196}^{14}(0.95) = 1.74, and we find a 90 percent confidence interval (p̲, p̄) = (0.890, 0.968). For the second example r = 17, x = 3, F_{34}^{8}(0.95) = 2.23, F_{6}^{34}(0.95) = 3.79, and (p̲, p̄) = (0.044, 0.344) is a 90 percent confidence interval.
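With modern software the percentiles need not be read from tables; a minimal Python sketch using scipy (our assumption, not part of the original report) evaluates (5.13) and (5.14) directly:

    from scipy.stats import f as F

    def nb_limits(x, r, gamma=0.95):
        # conservative limits for p from X ~ NB(r, p), per (5.13)-(5.14);
        # F.ppf(beta, m, n) is the beta-percentile with (m, n) degrees of freedom
        fu = F.ppf(gamma, 2 * x + 2, 2 * r)
        upper = (x + 1) * fu / (r + (x + 1) * fu)
        lower = x / (x + r * F.ppf(gamma, 2 * r, 2 * x))
        return lower, upper

    print(nb_limits(98, 7))    # first example: about (0.890, 0.968)
    print(nb_limits(3, 17))    # second example: about (0.044, 0.344)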

There are many ways to extend the above procedures in order to estimate the product of k parameters. If we consider the symmetric approximation and let T_{i n_i} denote the cumulative sum of all n_i geometric observations from population i, then S_{r'} = min_i(T_{i n_i}) is approximately NB(r', p_1p_2 ⋯ p_k), where r' is the number of cumulative sums T_{ij} (j = 1, ..., n_i) less than or equal to S_{r'}. This analog of an exact result of Lieberman and Ross would appear to be a somewhat questionable approximation for large k.

Perhaps the simplest exact procedure is a (k-1)-fold iteration of our two-population procedure. Let (X_{i1}, ..., X_{i n_i}) denote Geom(p_i)


observations. Utilizing observations with i = 1, 2, we may construct by the method of Section 5.1 values U_1, ..., U_r which are independent Geom(p_1p_2) variates. Next we combine these with the Geom(p_3) observations to obtain Geom(p_1p_2p_3) variates, and so on.

Other procedures can be described which involve choosing the minimum

of more than two geometric variates. It does not seem worthwhile to spell

out details here, but we will state without proof three lemmas giving

distribution theory which could be used to this end. These lemmas may

be of some interest in themselves, and could possibly be shown to yield

procedures in some way superior to the method of iteration.

Let X_1, ..., X_k be independent, X_i ~ Geom(p_i), and define

(5.15) W_0 = min(X_1, ..., X_k), W_i = X_{i+1} - X_i, i = 1, ..., k-1.

Lemma 5.4.

W_0 ~ Geom(p_1 ⋯ p_k) and W_0 is independent of (W_1, ..., W_{k-1}).

Lemma 5.5.

{W_0 = w_0} is independent of {X_1 ≤ X_2 ≤ ⋯ ≤ X_k}.

Lemma 5.6.

Given that X_1 ≤ X_2 ≤ ⋯ ≤ X_k, the variates W_0, W_1, ..., W_{k-1} are mutually independent and W_i ~ Geom(p_{i+1} ⋯ p_k) for i = 0, 1, ..., k-1.
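Lemma 5.4 at least is easy to check by simulation; the following Python lines (a sketch, not a proof) compare the empirical mean of W_0 for k = 3 with the mean of Geom(p_1p_2p_3):

    import random

    def geom(p):
        # P(X = x) = (1-p) p^x, x = 0, 1, 2, ..., as in (5.1)
        x = 0
        while random.random() < p:
            x += 1
        return x

    ps = (0.6, 0.7, 0.8)
    w0 = [min(geom(p) for p in ps) for _ in range(200_000)]
    prod = ps[0] * ps[1] * ps[2]
    print(sum(w0) / len(w0), prod / (1 - prod))  # mean of Geom(p1 p2 p3)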


REFERENCES

[1] Bartko, John J., "Approximating the Negative Binomial," Technometrics, 8 (1966), 345-50.

[2] Buehler, Robert J., "Confidence Limits for the Product of Two Binomial Parameters," Journal of the American Statistical Association, 52 (1957), 482-93.

[3] Feller, William, An Introduction to Probability Theory and Its Applications, Vol. 1, 3rd ed., New York: John Wiley and Sons, Inc., 1968.

[4] Ferguson, T. S., "A Characterization of the Geometric Distribution," American Mathematical Monthly, 72 (1965), 256-60.

[5] Harris, Bernard, "Hypothesis Testing and Confidence Intervals for Products and Quotients of Poisson Parameters with Applications to Reliability," Journal of the American Statistical Association, 66 (1971), 609-13.

[6] Hwang, Dar-Shong, "Interval Estimation of Functions of Bernoulli Parameters with Reliability and Biomedical Applications," University of Minnesota PhD Thesis and School of Statistics Technical Report No. 152, Minneapolis, 1971.

[7] Johnson, Norman L. and Kotz, Samuel, Discrete Distributions, Boston: Houghton Mifflin Co., 1969.

[8] Lehmann, E. L., Testing Statistical Hypotheses, New York: John Wiley and Sons, Inc., 1959.

[9] Lehmann, E. L. and Scheffé, Henry, "Completeness, Similar Regions, and Unbiased Estimation, Part II," Sankhyā, 15 (1955), 219-36.


[10] Lentner, M. M. and Buehler, R. J., "Some Inferences about Gamma Parameters with an Application to a Reliability Problem," Journal of the American Statistical Association, 58 (1963), 670-7.

[11] Lieberman, Gerald J. and Ross, Sheldon M., "Confidence Intervals for Independent Exponential Series Systems," Technical Report No. 130, Department of Operations Research and Department of Statistics, Stanford University, Stanford, California, 1970. (To appear in Journal of the American Statistical Association.)

[12] Madansky, Albert, "Approximate Confidence Limits for the Reliability of Series and Parallel Systems," Technometrics, 7 (1965), 495-503.

[13] Mann, Nancy R., "Computer-Aided Selection of Prior Distributions for Generating Monte Carlo Confidence Bounds on System Reliability," Naval Research Logistics Quarterly, 17 (1970), 41-54.

[14] Myhre, Janet and Saunders, Sam C., "On Confidence Limits for the Reliability of Systems," The Annals of Mathematical Statistics, 39 (1968), 1463-72.

[15] Patil, G. P., "On the Evaluation of the Negative Binomial Distribution with Examples," Technometrics, 2 (1960), 501-5.

[16] Pearson, E. S. and Hartley, H. O., Biometrika Tables for Statisticians, Volume I, Cambridge University Press, 1956.

[17] Pearson, Karl, Tables of the Incomplete Beta-Function, Cambridge University Press, 1934.

[18] Rosenblatt, Joan Raup, "Confidence Limits for the Reliability of Complex Systems," Statistical Theory of Reliability, 115-37, Marvin Zelen, Ed., University of Wisconsin Press, Madison, 1963.


[19] Stevens, W. L., "Fiducial Limits of the Parameter of a Discontinuous Distribution," Biometrika, 37 (1950), 117-29.

[20] Whittaker, E. T. and Watson, G. N., A Course of Modern Analysis, 2nd ed., Cambridge, England: Cambridge University Press, 1915.
