UNIVERSITY OF CALICUT
SCHOOL OF DISTANCE EDUCATION
Calicut University P.O., Malappuram, Kerala, India 673 635

B.Sc. Mathematics
II Semester
Complementary Course
(STATISTICS)

PROBABILITY DISTRIBUTIONS


UNIVERSITY OF CALICUT
SCHOOL OF DISTANCE EDUCATION

B.Sc. Mathematics
II Semester
Complementary Course
(STATISTICS)

PROBABILITY DISTRIBUTIONS

Prepared by: Sri. GIREESH BABU M., Department of Statistics, Government Arts & Science College, Calicut – 18

Scrutinised by: Sri. C.P. MOHAMMED (Rtd.), Poolakkandy House, Nanmanda P.O., Calicut District

Layout: Computer Section, SDE

© Reserved


CONTENTS

Chapter 1: BIVARIATE PROBABILITY DISTRIBUTIONS
Chapter 2: MATHEMATICAL EXPECTATION OF BIVARIATE RANDOM VARIABLES
Chapter 3: STANDARD DISTRIBUTIONS
Chapter 4: LAW OF LARGE NUMBERS


SYLLABUS

Module 1

Bivariate random variable: definition (discrete and continuous type), joint probability mass function and probability density function, marginal and conditional distributions, independence of random variables. (15 hours)

Module 2

Bivariate moments: definition of raw and central product moments, conditional mean and conditional variance, covariance, correlation and regression coefficients. Mean and variance of a random variable in terms of conditional mean and conditional variance. (15 hours)

Module 3

Standard distributions: Discrete type: Bernoulli, Binomial, Poisson distributions (definition, properties and applications), Geometric and Discrete Uniform (definition, mean, variance and mgf only). Continuous type: Normal (definition, properties and applications), Rectangular, Exponential, Gamma, Beta (definition, mean, variance and mgf only), Lognormal, Pareto and Cauchy distributions (definition only). (30 hours)

Module 4

Law of large numbers: Chebyshev's inequality, convergence in probability, Weak Law of Large Numbers for independent random variables, Bernoulli's Law of Large Numbers, Central Limit Theorem for independent and identically distributed random variables (Lindeberg-Levy form). (12 hours)

Books for reference:

1. V.K. Rohatgi: An Introduction to Probability Theory and Mathematical Statistics, Wiley Eastern.

2. S.C. Gupta and V.K. Kapoor: Fundamentals of Mathematical Statistics, Sultan Chand and Sons.

3. Mood A.M., Graybill F.A. and Boes D.C.: Introduction to the Theory of Statistics, McGraw Hill.

4. John E. Freund: Mathematical Statistics (Sixth Edition), Pearson Education (India), New Delhi.


Chapter 1

BIVARIATE PROBABILITY DISTRIBUTIONS

1.1 BIVARIATE RANDOM VARIABLES

1.1.1 Definition:

Let S be the sample space associated with a random experiment E. Let X = X(s) and Y = Y(s) be two functions, each assigning a real number to each outcome s ∈ S. Then (X, Y) is called a bivariate random variable or two-dimensional random variable.

If the possible values of (X, Y) are finite or countably infinite, (X, Y) is called a bivariate discrete RV. When (X, Y) is a bivariate discrete RV, its possible values may be represented as (xi, yj), i = 1, 2, ..., m, ...; j = 1, 2, ..., n, .... If (X, Y) can assume all values in a specified region R of the xy-plane, (X, Y) is called a bivariate continuous RV.

1.1.2 Joint Probability Mass Function

Let (X, Y) be a pair of discrete bivariate random variables assuming pairs of values (x1, y1), (x2, y2), ..., (xn, yn) from the real plane. Then the probability of the event {X = xi, Y = yj}, denoted f(xi, yj) or pij, is called the joint probability mass function of (X, Y).

i.e., f(xi, yj) = P(X = xi, Y = yj)

This function satisfies the properties

1. f(xi, yj) ≥ 0 for all (xi, yj)

2. ∑i ∑j f(xi, yj) = 1
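Both properties are easy to verify numerically. The following is a minimal Python sketch, assuming for illustration the joint pmf f(x, y) = (x + 2y)/18 on {1, 2} × {1, 2} (the pmf that reappears in the solved problems later); exact fractions avoid rounding error.

```python
# Tabulate a discrete joint pmf and check the two defining properties.
# Assumed example: f(x, y) = (x + 2y)/18 on (x, y) in {1, 2} x {1, 2}.
from fractions import Fraction

support = [(x, y) for x in (1, 2) for y in (1, 2)]
f = {(x, y): Fraction(x + 2 * y, 18) for (x, y) in support}

assert all(p >= 0 for p in f.values())   # property 1: non-negativity
assert sum(f.values()) == 1              # property 2: total mass is 1

# Marginal pmfs, obtained by summing out the other variable (Section 1.1.5).
f_X = {x: sum(f[(x, y)] for y in (1, 2)) for x in (1, 2)}
f_Y = {y: sum(f[(x, y)] for x in (1, 2)) for y in (1, 2)}
print(f_X)   # {1: Fraction(4, 9), 2: Fraction(5, 9)}
print(f_Y)   # {1: Fraction(7, 18), 2: Fraction(11, 18)}
```

The same tabulate-and-sum pattern works for any pmf with a finite support.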

1.1.3 Joint Probability Density Function

If (X, Y) is a two-dimensional continuous random variable such that

P{x - dx/2 ≤ X ≤ x + dx/2, y - dy/2 ≤ Y ≤ y + dy/2} = f(x, y) dx dy,

then f(x, y) is called the joint pdf of (X, Y), provided f(x, y) satisfies the following conditions.

1. f(x, y) ≥ 0 for all (x, y) ∈ R, where R is the range space.

2. ∫∫_R f(x, y) dx dy = 1.

Moreover, if D is a subspace of the range space R, P{(X, Y) ∈ D} is defined as P{(X, Y) ∈ D} = ∫∫_D f(x, y) dx dy. In particular, P{a ≤ X ≤ b, c ≤ Y ≤ d} = ∫_c^d ∫_a^b f(x, y) dx dy.
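Such rectangle probabilities are convenient to check by numerical double integration. Below is a small sketch using scipy.integrate.dblquad, taking as an assumed example the density f(x, y) = (x² + y²)/50 of Problem 4 later in this chapter; any joint pdf could be substituted.

```python
# P{a <= X <= b, c <= Y <= d} as a double integral of the joint pdf.
# Assumed example: f(x, y) = (x**2 + y**2)/50 on 0 < x < 2, 1 < y < 4.
from scipy.integrate import dblquad

f = lambda y, x: (x**2 + y**2) / 50.0   # dblquad passes the inner variable first

total, _ = dblquad(f, 0, 2, 1, 4)   # y over (1, 4), then x over (0, 2)
print(total)                        # ~1.0, confirming f is a valid pdf

p, _ = dblquad(f, 0, 1, 1, 2)       # P{0 <= X <= 1, 1 <= Y <= 2}
print(p)                            # ~0.0533
```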

1.1.4 Cumulative Distribution Function

If (X, Y) is a bivariate random variable (discrete or continuous), then F(x, y) = P{X ≤ x and Y ≤ y} is called the cdf of (X, Y).


In the discrete case, F(x, y) = ∑_{xi ≤ x} ∑_{yj ≤ y} pij

In the continuous case, F(x, y) = ∫_{-∞}^{y} ∫_{-∞}^{x} f(u, v) du dv

Properties of F(x, y)

(i). F(-∞, y) = 0 = F(x, -∞) and F(∞, ∞) = 1

(ii). P{a < X < b, Y ≤ y} = F(b, y) - F(a, y)

(iii). P{X ≤ x, c < Y < d} = F(x, d) - F(x, c)

(iv). P{a < X < b, c < Y < d} = F(b, d) - F(a, d) - F(b, c) + F(a, c)

(v). At points of continuity of f(x, y), ∂²F(x, y)/∂x∂y = f(x, y)

1.1.5 Marginal Probability Distribution

P(X = xi) = P{(X = xi and Y = y1) or (X = xi and Y = y2) or ...}

= pi1 + pi2 + ... = ∑j pij

P(X = xi) = ∑j pij is called the marginal probability function of X. It is defined for X = x1, x2, ... and denoted pi*. The collection of pairs {xi, pi*}, i = 1, 2, ..., is called the marginal probability distribution of X. Similarly, the collection of pairs {yj, p*j}, j = 1, 2, ..., is called the marginal probability distribution of Y, where p*j = ∑i pij = P(Y = yj). In the continuous case,

P(x - dx/2 ≤ X ≤ x + dx/2) = P(x - dx/2 ≤ X ≤ x + dx/2, -∞ < Y < ∞)

= ∫_{x-dx/2}^{x+dx/2} ∫_{-∞}^{∞} f(x, y) dy dx

= [∫_{-∞}^{∞} f(x, y) dy] dx, since f(x, y) may be treated as a constant in (x - dx/2, x + dx/2)

fX(x) = ∫_{-∞}^{∞} f(x, y) dy is called the marginal density of X.

Similarly, fY(y) = ∫_{-∞}^{∞} f(x, y) dx is called the marginal density of Y.

Note:

P(a ≤ X ≤ b) = P(a ≤ X ≤ b, -∞ < Y < ∞)

= ∫_a^b ∫_{-∞}^{∞} f(x, y) dy dx

= ∫_a^b fX(x) dx

Similarly, P(c ≤ Y ≤ d) = ∫_c^d fY(y) dy

1.1.6 Conditional Probability Distribution

In the discrete case,

P(X = xi / Y = yj) = P(X = xi, Y = yj)/P(Y = yj) = pij/p*j

is called the conditional probability function of X, given Y = yj. The collection of pairs {xi, pij/p*j}, i = 1, 2, ..., is called the conditional probability distribution of X, given Y = yj.

Similarly, the collection of pairs {yj, pij/pi*}, j = 1, 2, ..., is called the conditional probability distribution of Y, given X = xi.

1.1.7 Independent Random Variables

Let (X, Y) be a bivariate discrete random variable such that P{X = xi / Y = yj} = P(X = xi), i.e., pij/p*j = pi*,

i.e., pij = pi* × p*j for all i, j; then X and Y are said to be independent random variables. Similarly, if (X, Y) is a bivariate continuous random variable such that f(x, y) = fX(x) × fY(y), then X and Y are said to be independent random variables.
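The factorisation criterion pij = pi* × p*j can be tested mechanically over a finite support. A minimal sketch, assuming once more the pmf f(x, y) = (x + 2y)/18 used in the problems below:

```python
# Independence test: does the joint pmf equal the product of its marginals?
# Assumed example: f(x, y) = (x + 2y)/18 on {1, 2} x {1, 2}.
from fractions import Fraction

support = [(x, y) for x in (1, 2) for y in (1, 2)]
f = {(x, y): Fraction(x + 2 * y, 18) for (x, y) in support}
f_X = {x: sum(f[(x, y)] for y in (1, 2)) for x in (1, 2)}
f_Y = {y: sum(f[(x, y)] for x in (1, 2)) for y in (1, 2)}

independent = all(f[(x, y)] == f_X[x] * f_Y[y] for (x, y) in support)
print(independent)   # False: f(1, 1) = 1/6 while f_X(1) * f_Y(1) = 14/81
```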

1.2 SOLVED PROBLEMS

Problem 1

If X and Y are discrete rv's with joint probability function

f(x, y) = (x + 2y)/18, where (x, y) = (1, 1), (1, 2), (2, 1), (2, 2)

= 0, elsewhere.

Are the variables independent?


Solution: Given

f(x, y) = (x + 2y)/18, where (x, y) = (1, 1), (1, 2), (2, 1), (2, 2)

The marginal pmf of X is

fX(x) = ∑y f(x, y) = (x + 2)/18 + (x + 4)/18 = (x + 3)/9, x = 1, 2

The marginal pmf of Y is

fY(y) = ∑x f(x, y) = (1 + 2y)/18 + (2 + 2y)/18 = (3 + 4y)/18, y = 1, 2

Clearly f(x, y) ≠ fX(x) · fY(y); for instance f(1, 1) = 1/6 while fX(1) · fY(1) = (4/9)(7/18) = 14/81.

Therefore X and Y are not independent.

Problem 2

Given f(x/y) = ax/y², 0 < x < y < 1, and fY(y) = by⁴, 0 < y < 1, obtain a and b and also get the joint pdf.

Solution:

Since the conditional pdf f(x/y) is a pdf, we have

∫_0^y f(x/y) dx = 1

i.e., ∫_0^y (ax/y²) dx = 1

i.e., (a/y²)[x²/2]_0^y = 1, i.e., a/2 = 1

∴ a = 2

Similarly, ∫_0^1 by⁴ dy = 1

b[y⁵/5]_0^1 = 1 ⇒ b = 5

The joint pdf is

f(x, y) = fY(y) · f(x/y) = 5y⁴ · (2x/y²) = 10xy², 0 < x < y < 1

Problem 3

The joint probability density function of a two-dimensional random variable (X, Y) is given by

f(x, y) = 2, 0 < x < 1, 0 < y < x

= 0, elsewhere

(i). Find the marginal density functions of X and Y,

(ii). Find the conditional density function of Y given X = x and the conditional density of X given Y = y, and

(iii). Check for independence of X and Y.

Solution

(i). The marginal pdf's of X and Y are given by

fX(x) = ∫_0^x f(x, y) dy = 2x, 0 < x < 1

= 0, elsewhere

fY(y) = ∫_y^1 f(x, y) dx = 2(1 - y), 0 < y < 1

= 0, elsewhere

(ii). The conditional density function of Y given X = x is

fY/X(y/x) = f(x, y)/fX(x) = 2/(2x) = 1/x, 0 < y < x < 1

The conditional density function of X given Y = y is

fX/Y(x/y) = f(x, y)/fY(y) = 2/[2(1 - y)] = 1/(1 - y), 0 < y < x < 1

(iii). Since fX(x) · fY(y) = (2x) · 2(1 - y) = 4x(1 - y) ≠ f(x, y), X and Y are not independent.

Problem 4

Two random variables X and Y have the joint pdf

f(x, y) = k(x² + y²), 0 < x < 2, 1 < y < 4

= 0, elsewhere

Find k.

Solution:

Since f(x, y) is a pdf, ∫_1^4 ∫_0^2 k(x² + y²) dx dy = 1

k ∫_1^4 [x³/3 + xy²]_0^2 dy = 1

k ∫_1^4 (8/3 + 2y²) dy = 1

k [8y/3 + 2y³/3]_1^4 = 1

k [(32/3 + 128/3) - (8/3 + 2/3)] = 1

k (150/3) = 1

i.e., 50k = 1

i.e., k = 1/50

Problem 5

If X and Y are two random variables having the joint density function

f(x, y) = (1/8)(6 - x - y); 0 < x < 2, 2 < y < 4

= 0, otherwise,

find (i). P(X < 1 ∩ Y < 3), (ii). P(X + Y < 3) and (iii). P(X < 1 / Y < 3).

Solution

(i). P(X < 1 ∩ Y < 3) = ∫_2^3 ∫_0^1 (1/8)(6 - x - y) dx dy = 3/8

(ii). P(X + Y < 3) = ∫_0^1 ∫_2^{3-x} (1/8)(6 - x - y) dy dx = 5/24

(iii). P(X < 1 / Y < 3) = P(X < 1 ∩ Y < 3)/P(Y < 3) = (3/8)/(5/8) = 3/5
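The three answers can be cross-checked by numerical integration. A short sketch with scipy.integrate.dblquad, under the same density:

```python
# Numerical check of Problem 5 with f(x, y) = (6 - x - y)/8 on
# 0 < x < 2, 2 < y < 4.
from scipy.integrate import dblquad

f = lambda y, x: (6 - x - y) / 8.0

p1, _ = dblquad(f, 0, 1, 2, 3)                 # P(X < 1, Y < 3)
p2, _ = dblquad(f, 0, 1, 2, lambda x: 3 - x)   # P(X + Y < 3): y runs to 3 - x
py, _ = dblquad(f, 0, 2, 2, 3)                 # P(Y < 3)
print(p1, p2, p1 / py)                         # 0.375, 0.2083..., 0.6
```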

Problem 6

The joint distribution of X and Y is given by

f(x, y) = 4xy e^{-(x² + y²)}; x ≥ 0, y ≥ 0

Test whether X and Y are independent. For the above joint distribution, find the conditional density of X given Y = y.

Solution

The joint pdf of X and Y is

f(x, y) = 4xy e^{-(x² + y²)}; x ≥ 0, y ≥ 0

The marginal density of X is given by

fX(x) = ∫_0^∞ 4xy e^{-(x² + y²)} dy

= 4x e^{-x²} ∫_0^∞ y e^{-y²} dy

(put y² = t) = 4x e^{-x²} · (1/2) ∫_0^∞ e^{-t} dt = 2x e^{-x²}

fX(x) = 2x e^{-x²}; x ≥ 0

Similarly, the marginal pdf of Y is given by

fY(y) = ∫_0^∞ f(x, y) dx = 2y e^{-y²}; y ≥ 0

Since fXY(x, y) = fX(x) · fY(y), X and Y are independently distributed.

The conditional distribution of X for given Y is given by

f(X = x / Y = y) = f(x, y)/fY(y) = 2x e^{-x²}; x ≥ 0.


Chapter 2

MATHEMATICAL EXPECTATION OF BIVARIATE RANDOM VARIABLES

2.1 Definition:

Let X1, X2, ..., Xn be n random variables with joint pdf f(x1, x2, ..., xn), and let g(x1, x2, ..., xn) be any function of these random variables. Then,

E[g(X1, X2, ..., Xn)] = ∑_{x1} ∑_{x2} ... ∑_{xn} g(x1, x2, ..., xn) f(x1, x2, ..., xn), if the rv's are discrete

= ∫ ∫ ... ∫ g(x1, x2, ..., xn) f(x1, x2, ..., xn) dx1 dx2 ... dxn, if the rv's are continuous,

provided the sum or integral on the RHS is absolutely convergent.
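For a finite discrete support, the definition is a plain weighted sum. A minimal sketch, assuming g(x, y) = x²y and the pmf f(x, y) = (x + 2y)/18 (the setting of Problem 1 in Section 2.6 below):

```python
# E[g(X, Y)] computed directly from the discrete definition.
# Assumed example: g(x, y) = x**2 * y with f(x, y) = (x + 2y)/18.
from fractions import Fraction

f = {(x, y): Fraction(x + 2 * y, 18) for x in (1, 2) for y in (1, 2)}
g = lambda x, y: x**2 * y

Eg = sum(g(x, y) * p for (x, y), p in f.items())
print(Eg)   # 77/18
```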

2.2 Properties of Expectation

2.2.1 Property

If X is a random variable and φ(X) is any measurable function of X, then for two constants a and b, E(aφ(X) + b) = a·E(φ(X)) + b.

Proof: Let X be a discrete random variable; then

E(aφ(X) + b) = ∑x [aφ(x) + b] f(x)

= a ∑x φ(x) f(x) + b ∑x f(x)

= a·E(φ(X)) + b (since ∑x f(x) = 1)

If X is continuous, instead of summation use integration.

Remarks:

When a = 0, E(b) = b, i.e., the expectation of a constant is the constant itself.

2.2.2 Addition Theorem:

If X and Y are two random variables, then E(X + Y) = E(X) + E(Y), provided all the expectations exist.

Proof: Let X and Y be two discrete random variables with joint pmf f(x, y). Then by definition

E(X + Y) = ∑x ∑y (x + y) f(x, y)

= ∑x ∑y x f(x, y) + ∑x ∑y y f(x, y)

= ∑x x ∑y f(x, y) + ∑y y ∑x f(x, y)

= ∑x x f1(x) + ∑y y f2(y)

= E(X) + E(Y)

In the case of two continuous random variables, instead of summation use integration.

Remarks: If X1, X2, ..., Xn are any finite number of random variables, then E(X1 + X2 + ... + Xn) = E(X1) + E(X2) + ... + E(Xn).

2.2.3 Multiplication Theorem

Let X and Y be two independent random variables. Then E(XY) = E(X) · E(Y), provided all the expectations exist.

Proof: Let X and Y be any two independent discrete random variables with joint pmf f(x, y). Then

E(XY) = ∑x ∑y xy f(x, y)

= ∑x ∑y xy f1(x) f2(y),

since X and Y are independent, f(x, y) = f1(x) · f2(y).

Hence, E(XY) = ∑x x f1(x) ∑y y f2(y)

= E(X) · E(Y)

In the case of continuous random variables, use integration instead of summation.

Remarks:

The converse of the multiplication theorem need not be true. That is, for two random variables, E(XY) = E(X) · E(Y) need not imply that X and Y are independent.

Example 1:

Consider two random variables X and Y with joint probability function

f(x, y) = (1/5)(1 - |x|·|y|), where x = -1, 0, 1 and y = -1, 0, 1

= 0, elsewhere.

Here,

f1(x) = ∑y f(x, y) = (1/5)(3 - 2|x|), x = -1, 0, 1

f2(y) = ∑x f(x, y) = (1/5)(3 - 2|y|), y = -1, 0, 1

E(X) = ∑x x · (1/5)(3 - 2|x|)

= (1/5)[-1(3 - 2) + 0(3) + 1(3 - 2)] = 0

Similarly, E(Y) = 0 and E(XY) = 0.

Hence E(XY) = E(X) · E(Y). But f(x, y) ≠ f1(x) · f2(y); that is, X and Y are not independent.
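The example can be replayed numerically. The sketch below assumes the same pmf f(x, y) = (1/5)(1 - |x||y|) and confirms zero covariance alongside dependence:

```python
# Uncorrelated but dependent: Cov(X, Y) = 0 although f does not factorise.
# Assumed example: f(x, y) = (1 - |x||y|)/5 on {-1, 0, 1} x {-1, 0, 1}.
from fractions import Fraction

pts = [(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)]
f = {(x, y): Fraction(1 - abs(x) * abs(y), 5) for (x, y) in pts}
assert sum(f.values()) == 1

EX  = sum(x * p for (x, y), p in f.items())
EY  = sum(y * p for (x, y), p in f.items())
EXY = sum(x * y * p for (x, y), p in f.items())
print(EXY - EX * EY)   # 0: X and Y are uncorrelated

f_X = {x: sum(f[(x, y)] for y in (-1, 0, 1)) for x in (-1, 0, 1)}
f_Y = {y: sum(f[(x, y)] for x in (-1, 0, 1)) for y in (-1, 0, 1)}
print(any(f[(x, y)] != f_X[x] * f_Y[y] for (x, y) in pts))   # True: dependent
```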

2.2.4 Property:

If X ≥ 0, then E(X) ≥ 0.

2.2.5 Property:

If X and Y are two random variables, then [E(XY)]² ≤ E(X²) · E(Y²) (Cauchy-Schwarz inequality).

Proof:

Consider the real-valued function E(X + tY)², where t is a real number.

Since (X + tY)² ≥ 0, we have E(X + tY)² ≥ 0,

i.e., t²E(Y²) + 2tE(XY) + E(X²) ≥ 0

The LHS is a quadratic in t which is always non-negative, so the quadratic has at most one real root, and its discriminant must be less than or equal to zero. The discriminant "b² - 4ac" of the quadratic is

[2E(XY)]² - 4E(X²)E(Y²), so [2E(XY)]² - 4E(X²)E(Y²) ≤ 0,

i.e., 4[E(XY)]² ≤ 4E(X²) · E(Y²)

Hence, [E(XY)]² ≤ E(X²)E(Y²)

2.2.6 Property:

If X and Y are two random variables such that Y ≤ X, then E(Y) ≤ E(X).

Proof: Consider Y ≤ X; then Y - X ≤ 0, i.e., X - Y ≥ 0 ⇒ E(X - Y) ≥ 0

⇒ E(X - Y) = E(X + (-Y)) = E(X) + E(-Y) ≥ 0,

i.e., E(X) - E(Y) ≥ 0 ⇒ E(X) ≥ E(Y)

2.2.7 Property:

For a random variable X, |E(X)| ≤ E(|X|), provided the expectations exist.

Proof: We have X ≤ |X| ⇒ E(X) ≤ E(|X|) .............. (1)

Again, -X ≤ |X| ⇒ -E(X) ≤ E(|X|) .............. (2)

(1) and (2) ⇒ |E(X)| ≤ E(|X|)


2.2.8 Property:

If the possible values of a random variable X are 0, 1, 2, ..., then

E(X) = ∑_{x=1}^{∞} P(X ≥ x)

Proof:

∑_{x=1}^{∞} P(X ≥ x) = P(X ≥ 1) + P(X ≥ 2) + P(X ≥ 3) + ...

= [P(X=1) + P(X=2) + P(X=3) + ...] + [P(X=2) + P(X=3) + ...] + [P(X=3) + P(X=4) + ...] + ...

= P(X=1) + 2P(X=2) + 3P(X=3) + 4P(X=4) + ...

= ∑x x P(X = x) = E(X)

⇒ E(X) = ∑_{x=1}^{∞} P(X ≥ x)

2.3 Raw and Central Moments

Let X be a random variable with pmf/pdf f(x), and let A be any constant and r any non-negative integer. Then

E(X - A)^r = ∑x (x - A)^r f(x) or ∫ (x - A)^r f(x) dx,

according as X is discrete or continuous, is called the rth moment of X about A, denoted by µ'r(A), provided it exists.

When A = 0, E(X - A)^r = E(X^r), which is known as the rth raw moment of X and is denoted by µ'r.

E(X - E(X))^r is the rth central moment of X and is denoted by µr.

We have E(X) = ∑i xi pi = µ'1.

Now the variance µ2 = E(X - E(X))²

= E[X² + (E(X))² - 2XE(X)]

= E(X²) + [E(X)]² - 2E(X)E(X)

= E(X²) - [E(X)]² = µ'2 - (µ'1)²

Relation between raw and central moments:

We have the first central moment, µ1 = E(X - E(X)) = E(X) - E(X) = 0.

The second central moment, or variance,

µ2 = µ'2 - (µ'1)²

The third central moment,

µ3 = E(X - E(X))³

= E(X³ - 3X²E(X) + 3X[E(X)]² - [E(X)]³)

= E(X³) - 3E(X²)E(X) + 3E(X)[E(X)]² - [E(X)]³

= E(X³) - 3E(X²)E(X) + 3[E(X)]³ - [E(X)]³

= E(X³) - 3E(X²)E(X) + 2[E(X)]³

= µ'3 - 3µ'2µ'1 + 2(µ'1)³

The fourth central moment,

µ4 = E(X - E(X))⁴

= E(X⁴ - 4X³E(X) + 6X²[E(X)]² - 4X[E(X)]³ + [E(X)]⁴)

= E(X⁴) - 4E(X³)E(X) + 6E(X²)[E(X)]² - 4E(X)[E(X)]³ + [E(X)]⁴

= E(X⁴) - 4E(X³)E(X) + 6E(X²)[E(X)]² - 3[E(X)]⁴

= µ'4 - 4µ'3µ'1 + 6µ'2(µ'1)² - 3(µ'1)⁴

In general, the rth central moment,

µr = E(X - E(X))^r

= E(X^r - rC1 X^{r-1}E(X) + rC2 X^{r-2}[E(X)]² - rC3 X^{r-3}[E(X)]³ + ... + (-1)^r [E(X)]^r)
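These relations are easy to sanity-check on a simple distribution. A minimal sketch, assuming a fair die (an example chosen for illustration, not taken from the text):

```python
# Verify mu2 = mu'2 - (mu'1)^2 and mu3 = mu'3 - 3 mu'2 mu'1 + 2 (mu'1)^3
# for a fair six-sided die.
from fractions import Fraction

xs = range(1, 7)
p = Fraction(1, 6)

raw = lambda r: sum(x**r * p for x in xs)              # mu'_r = E(X^r)
m1, m2, m3 = raw(1), raw(2), raw(3)

central = lambda r: sum((x - m1)**r * p for x in xs)   # mu_r = E(X - E(X))^r
assert central(2) == m2 - m1**2
assert central(3) == m3 - 3 * m2 * m1 + 2 * m1**3
print(central(2))   # 35/12, the variance of a fair die
```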

2.4 Properties of variance and covariance

1). For a random variable X, V(aX) = a²V(X)

Proof:

V(aX) = E[aX - E(aX)]²

= E[a(X - E(X))]²

= a²E[X - E(X)]²

= a²V(X)

2). For two independent random variables X and Y, V(aX + bY) = a²V(X) + b²V(Y)

Proof:

V(aX + bY) = E[aX + bY - E(aX + bY)]²

= E[a(X - E(X)) + b(Y - E(Y))]²

= a²E[X - E(X)]² + b²E[Y - E(Y)]² + 2abE{[X - E(X)][Y - E(Y)]}

= a²V(X) + b²V(Y) + 2ab·Cov(X, Y)

Since X and Y are independent, Cov(X, Y) = 0; hence

V(aX + bY) = a²V(X) + b²V(Y)

Page 19: UNIVERSITY OF  · PDF fileSC Cal ROB UNI HOO icut Uni (ABIL B CO VER L OF versity P ... Probability Distributions‐Semester II Page 5 ... S.C.Gupta and V.K.Kapoor:

School of Distance Education 

Probability Distributions‐Semester II  Page 18 

3). For two random variables X and Y, Cov(X + a, Y + b) = Cov(X, Y)

Proof:

Cov(X + a, Y + b) = E[(X + a)(Y + b)] - E(X + a)E(Y + b)

= E[XY + bX + aY + ab] - [E(X)E(Y) + bE(X) + aE(Y) + ab]

= E(XY) - E(X)E(Y)

= Cov(X, Y)

4). For two random variables X and Y, Cov(aX, Y) = a·Cov(X, Y)

Proof:

Cov(aX, Y) = E[(aX)(Y)] - E(aX)E(Y)

= a·E[XY] - a·[E(X)E(Y)]

= a[E(XY) - E(X)E(Y)]

= a·Cov(X, Y)

2.5 Conditional Expectation and Variance:

Let (X, Y) be jointly distributed with pmf/pdf f(x, y). The conditional mean of X given Y = y is denoted E(X/Y = y) and is defined as

E(X/Y = y) = ∑x x f(x/y), if X and Y are discrete

= ∫ x f(x/y) dx, if X and Y are continuous.

The conditional mean of Y given X = x is denoted E(Y/X = x) and is defined as

E(Y/X = x) = ∑y y f(y/x), if X and Y are discrete

= ∫ y f(y/x) dy, if X and Y are continuous.

The conditional variance of X given Y = y is denoted V(X/Y = y) and is defined as

V(X/Y = y) = E(X²/Y = y) - [E(X/Y = y)]²

The conditional variance of Y given X = x is denoted V(Y/X = x) and is defined as

V(Y/X = x) = E(Y²/X = x) - [E(Y/X = x)]²

Theorem:

If X and Y are two independent r.v's, then M_{X+Y}(t) = MX(t) · MY(t).

Proof: By definition,

M_{X+Y}(t) = E[e^{t(X+Y)}] = E[e^{tX} e^{tY}]

= E[e^{tX}] E[e^{tY}] = MX(t) · MY(t), since X and Y are independent.

Theorem: If X1, X2, ..., Xn are n independent r.v's, then

M_{∑Xi}(t) = ∏_{i=1}^{n} M_{Xi}(t)

Proof:

By definition, M_{∑Xi}(t) = E[e^{t∑Xi}] = E[e^{tX1} e^{tX2} ... e^{tXn}]

= E[e^{tX1}] E[e^{tX2}] ... E[e^{tXn}], since the Xi's are independent

= ∏_{i=1}^{n} M_{Xi}(t),

i.e., the m.g.f. of a sum of n independent r.v's is equal to the product of their m.g.f's.

Remarks 1: For a pair of r.v's (X, Y), the covariance between X and Y (or the product moment between X and Y) is defined as

Cov(X, Y) = E{[X - E(X)][Y - E(Y)]}

Remarks 2:

Cov(X, Y) = E(XY) - E(X)E(Y)

Remarks 3: If X and Y are independent r.v's, Cov(X, Y) = 0, i.e.,

Cov(X, Y) = E(XY) - E(X)E(Y) = E(X)E(Y) - E(X)E(Y) = 0

Remarks 4: The correlation coefficient between two random variables X and Y is defined as

ρXY = Cov(X, Y)/√(V(X) · V(Y)),

where Cov(X, Y) = E(XY) - E(X)E(Y)

V(X) = E(X²) - [E(X)]²

V(Y) = E(Y²) - [E(Y)]²

2.6 SOLVED PROBLEMS

Problem 1: Let X and Y be two random variables with joint pmf

f(x, y) = (x + 2y)/18, x = 1, 2; y = 1, 2

Find E(X²Y).

Solution

E(X²Y) = ∑x ∑y x²y f(x, y)

= ∑x ∑y x²y (x + 2y)/18

= 1(3/18) + 2(5/18) + 4(4/18) + 8(6/18) = 77/18

Problem 2: For two random variables X and Y with joint pdf f(x, y) = x + y, 0 < x < 1, 0 < y < 1, find E(3XY²).

Solution:

E(3XY²) = 3 ∫_0^1 ∫_0^1 xy² f(x, y) dy dx

= 3 ∫_0^1 ∫_0^1 xy²(x + y) dy dx

= 3 ∫_0^1 ∫_0^1 (x²y² + xy³) dy dx

= 3 ∫_0^1 (x²[y³/3]_0^1 + x[y⁴/4]_0^1) dx

= 3 ∫_0^1 (x²/3 + x/4) dx

= 3[x³/9]_0^1 + 3[x²/8]_0^1 = 1/3 + 3/8 = 17/24

Problem 3: Let X and Y be two random variables with joint pmf f(x, y) = (x + 2y)/18, x = 1, 2; y = 1, 2. Find (i). the correlation between X and Y, (ii). V(X/Y = 1).

Solution:

Correlation(X, Y) = Cov(X, Y)/√(V(X) · V(Y)) = [E(XY) - E(X)E(Y)]/√(V(X) · V(Y))

E(XY) = ∑x ∑y xy f(x, y) = ∑x ∑y xy (x + 2y)/18

= 3/18 + 10/18 + 8/18 + 24/18 = 45/18 = 5/2

E(X) = ∑x x f1(x),

where f1(x) = ∑y f(x, y) = (x + 2)/18 + (x + 4)/18 = (x + 3)/9, x = 1, 2

E(X) = 1 × 4/9 + 2 × 5/9 = 14/9

E(X²) = ∑x x² f1(x) = 1 × 4/9 + 4 × 5/9 = 24/9 = 8/3

f2(y) = ∑x f(x, y) = (1 + 2y)/18 + (2 + 2y)/18 = (3 + 4y)/18, y = 1, 2

E(Y) = ∑y y f2(y) = 1 × 7/18 + 2 × 11/18 = 29/18

E(Y²) = ∑y y² f2(y) = 1 × 7/18 + 4 × 11/18 = 51/18 = 17/6

V(X) = 8/3 - (14/9)² = 20/81 and V(Y) = 17/6 - (29/18)² = 77/324

Correlation(X, Y) = [5/2 - (14/9)(29/18)]/√((20/81)(77/324))

= (-1/162)/√((20/81)(77/324)) ≈ -0.025

V(X/Y = 1) = E(X²/Y = 1) - [E(X/Y = 1)]²

f(x/Y = 1) = f(x, 1)/f2(1) = [(x + 2)/18]/(7/18) = (x + 2)/7, x = 1, 2

E(X²/Y = 1) = ∑x x² f(x/Y = 1) = 1 × 3/7 + 4 × 4/7 = 19/7

E(X/Y = 1) = ∑x x f(x/Y = 1) = 1 × 3/7 + 2 × 4/7 = 11/7

Therefore, V(X/Y = 1) = 19/7 - (11/7)² = 12/49
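A short computation, assuming the same pmf, reproduces these figures exactly:

```python
# Recompute the correlation and moments of Problem 3.
# Assumed pmf: f(x, y) = (x + 2y)/18 on {1, 2} x {1, 2}.
from fractions import Fraction
import math

f = {(x, y): Fraction(x + 2 * y, 18) for x in (1, 2) for y in (1, 2)}
E = lambda g: sum(g(x, y) * p for (x, y), p in f.items())

EX, EY, EXY = E(lambda x, y: x), E(lambda x, y: y), E(lambda x, y: x * y)
VX = E(lambda x, y: x * x) - EX**2
VY = E(lambda x, y: y * y) - EY**2
rho = (EXY - EX * EY) / math.sqrt(VX * VY)
print(rho)   # about -0.0255
```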

Problem 4

The joint pdf of two random variables (X, Y) is given by f(x, y) = 2, 0 < x < y < 1; = 0, elsewhere. Find the conditional mean and variance of X given Y = y.

Solution:

The conditional mean of X given Y = y is E(X/y) = ∫ x f(x/y) dx,

where f(x/y) = f(x, y)/fY(y).

Then fY(y) = ∫_0^y f(x, y) dx = 2[x]_0^y = 2y, 0 < y < 1

Therefore f(x/y) = 2/(2y) = 1/y, 0 < x < y < 1

E(X/y) = ∫_0^y x (1/y) dx = (1/y)[x²/2]_0^y = y/2, 0 < y < 1

Also, V(X/y) = E(X²/y) - [E(X/y)]²,

where E(X²/y) = ∫_0^y x² f(x/y) dx = (1/y)[x³/3]_0^y = y²/3

Therefore, V(X/y) = y²/3 - (y/2)²

= y²/3 - y²/4 = y²/12, 0 < y < 1

Problem 5: Two random variables X and Y have joint pdf

f(x, y) = 2 - x - y; 0 ≤ x ≤ 1, 0 ≤ y ≤ 1

= 0, otherwise.

Find (i). f1(x), f2(y), (ii). f(x/Y = y), f(y/X = x), (iii). Cov(X, Y).

Solution: (i) f1(x) = ∫_0^1 (2 - x - y) dy = 2 - x - 1/2 = 3/2 - x, 0 ≤ x ≤ 1

f2(y) = ∫_0^1 (2 - x - y) dx = 3/2 - y, 0 ≤ y ≤ 1

(ii) f(x/Y = y) = f(x, y)/f2(y) = (2 - x - y)/(3/2 - y), 0 ≤ x ≤ 1

f(y/X = x) = f(x, y)/f1(x) = (2 - x - y)/(3/2 - x), 0 ≤ y ≤ 1

(iii) E(XY) = ∫_0^1 ∫_0^1 xy(2 - x - y) dx dy

= ∫_0^1 y[x² - x³/3 - x²y/2]_0^1 dy = ∫_0^1 y(2/3 - y/2) dy

= [y²/3 - y³/6]_0^1 = 1/6

E(X) = ∫_0^1 x f1(x) dx = ∫_0^1 x(3/2 - x) dx

= [3x²/4 - x³/3]_0^1 = 5/12

E(Y) = ∫_0^1 y f2(y) dy = ∫_0^1 y(3/2 - y) dy

= [3y²/4 - y³/3]_0^1 = 5/12

Therefore, Cov(X, Y) = E(XY) - E(X)E(Y) = 1/6 - (5/12)(5/12) = 1/6 - 25/144 = -1/144

Problem 6: Show by an example that independence implies zero correlation but that the converse is not always true.

Solution: If two random variables X and Y are independent, then Cov(X, Y) = 0, which implies ρXY = 0. To disprove the converse, consider a r.v. X having the pdf

f(x) = 1/2, -1 ≤ x ≤ 1

= 0, otherwise,

and let Y = X², i.e., X and Y are not independent. Here,

E(X) = ∫_{-1}^{1} (x/2) dx = 0

E(Y) = E(X²) = ∫_{-1}^{1} (x²/2) dx = [x³/6]_{-1}^{1} = 1/3

E(XY) = E(X·X²) = E(X³) = ∫_{-1}^{1} (x³/2) dx = 0

Therefore,

Cov(X, Y) = E(XY) - E(X)E(Y) = 0 - 0 = 0, i.e., ρXY = 0.

This shows that non-correlation need not imply independence.


Chapter 3

STANDARD DISTRIBUTIONS

3.1 DISCRETE DISTRIBUTIONS

3.1.1 Degenerate Distribution

If X is a random variable with pmf

P(X = x) = 1, when x = k

= 0, otherwise,

then the random variable X is said to follow the degenerate distribution. The distribution function of this random variable is

F(x) = 0, x < k

= 1, x ≥ k

Mean and Variance: Mean of X, E(X) = ∑ x P(X = x) = k × 1 = k

E(X²) = ∑ x² P(X = x) = k² × 1 = k². In general, E(X^r) = k^r.

Then, V(X) = k² - k² = 0

Moment generating function:

MX(t) = E(e^{tX}) = ∑ e^{tx} P(X = x) = e^{tk} P(X = k) = e^{tk} × 1 = e^{tk}

3.1.2 Discrete Uniform Distribution

A random variable X is said to follow the discrete uniform distribution if its pmf is

f(x) = 1/n, x = x1, x2, ..., xn

= 0, elsewhere.

Eg: In an experiment of tossing an unbiased die, let X denote the number shown by the die. Then X follows the uniform distribution with pmf f(x) = P(X = x) = 1/6, x = 1, 2, ..., 6.

Mean and Variance:

Mean of X, E(X) = ∑ x P(X = x) = (1/n)[x1 + x2 + ... + xn] = (1/n)∑ xi

E(X²) = ∑ x² P(X = x) = (1/n)[x1² + x2² + ... + xn²] = (1/n)∑ xi²

Then, V(X) = (1/n)∑ xi² - [(1/n)∑ xi]²

3.1.3 Binomial Distribution

This distribution was discovered by James Bernoulli in 1700. Consider a random experiment with two possible outcomes, which we call success and failure. Let p be the probability of success and q = 1 - p the probability of failure; p is assumed to be fixed from trial to trial. Let X denote the number of successes in n independent trials. Then X is a random variable and may take the values 0, 1, 2, ..., n. Then

P(X = x) = P(x successes and n - x failures in n repetitions of the experiment)

There are nCx mutually exclusive ways, each with probability p^x q^{n-x}, for the happening of x successes out of n repetitions of the experiment. Hence,

P(X = x) = nCx p^x q^{n-x}, x = 0, 1, 2, ..., n

Definition: A random variable X is said to follow the binomial distribution with parameters n and p if its pmf is

f(x) = P(X = x) = nCx p^x q^{n-x}, x = 0, 1, 2, ..., n; 0 < p < 1, p + q = 1

= 0, elsewhere

Mean and variance: Mean, E(X) = ∑ x f(x)

= ∑_{x=0}^{n} x · nCx p^x q^{n-x} = ∑_{x=1}^{n} x [n!/(x!(n - x)!)] p^x q^{n-x}

= np ∑_{x=1}^{n} [(n - 1)!/((x - 1)!(n - x)!)] p^{x-1} q^{n-x}

= np(p + q)^{n-1}

= np(1)^{n-1} = np

E(X²) = ∑ x² nCx p^x q^{n-x}

= ∑ [x(x - 1) + x] nCx p^x q^{n-x}

= ∑ x(x - 1) [n!/(x!(n - x)!)] p^x q^{n-x} + ∑ x nCx p^x q^{n-x}

= n(n - 1)p² ∑_{x=2}^{n} [(n - 2)!/((x - 2)!(n - x)!)] p^{x-2} q^{n-x} + E(X)

= n(n - 1)p² (p + q)^{n-2} + np

= n(n - 1)p² + np

Therefore the variance,

V(X) = E(X²) - [E(X)]²

= [n(n - 1)p² + np] - (np)²

= n²p² - np² + np - n²p²

= np - np² = np(1 - p) = npq

E(X³) = ∑ x³ nCx p^x q^{n-x}

= ∑ [x(x - 1)(x - 2) + 3x² - 2x] nCx p^x q^{n-x}

= ∑ x(x - 1)(x - 2) [n!/(x!(n - x)!)] p^x q^{n-x} + 3∑ x² nCx p^x q^{n-x} - 2∑ x nCx p^x q^{n-x}

= n(n - 1)(n - 2)p³ ∑_{x=3}^{n} [(n - 3)!/((x - 3)!(n - x)!)] p^{x-3} q^{n-x} + 3E(X²) - 2E(X)

= n(n - 1)(n - 2)p³ (p + q)^{n-3} + 3[n(n - 1)p² + np] - 2np

= n(n - 1)(n - 2)p³ + 3n(n - 1)p² + np

Similarly,

E(X⁴) = n(n - 1)(n - 2)(n - 3)p⁴ + 6n(n - 1)(n - 2)p³ + 7n(n - 1)p² + np

Beta and Gamma coefficients:

β1 = µ3²/µ2³ = (q - p)²/(npq)

γ1 = √β1 = (q - p)/√(npq)

A binomial distribution is positively skewed, symmetric or negatively skewed according as γ1 >, =, < 0, which happens according as q >, =, < p.

β2 = µ4/µ2² = 3 + (1 - 6pq)/(npq)

A binomial distribution is leptokurtic, mesokurtic or platykurtic according as β2 >, =, < 3, which happens according as pq <, =, > 1/6.

Moment Generating Function: The mgf,

MX(t) = E(e^{tX})

= ∑ e^{tx} P(X = x)

= ∑ e^{tx} nCx p^x q^{n-x}

= ∑ nCx (pe^t)^x q^{n-x}

= (q + pe^t)^n

Additive property of the binomial distribution

If X is B(n1, p) and Y is B(n2, p) and they are independent, then their sum X + Y also follows B(n1 + n2, p).

Proof: Since X ~ B(n1, p), MX(t) = (q + pe^t)^{n1}, and since Y ~ B(n2, p), MY(t) = (q + pe^t)^{n2}. We have

M_{X+Y}(t) = MX(t) · MY(t), since X and Y are independent

= (q + pe^t)^{n1} × (q + pe^t)^{n2}

= (q + pe^t)^{n1+n2}

= mgf of B(n1 + n2, p)

Therefore, X + Y ~ B(n1 + n2, p).

If the second parameter (p) is not the same for X and Y, then X + Y will not be binomial.

Recurrence relation for central moments

If X ~ B(n, p), then

µ_{r+1} = pq[nr µ_{r-1} + dµr/dp]

Proof: We have

µr = E[X - E(X)]^r = E(X - np)^r

= ∑x (x - np)^r nCx p^x q^{n-x}

Differentiating with respect to p (noting q = 1 - p),

dµr/dp = ∑x nCx [r(x - np)^{r-1}(-n) p^x q^{n-x} + (x - np)^r (x p^{x-1} q^{n-x} - (n - x) p^x q^{n-x-1})]

= -nr ∑x (x - np)^{r-1} nCx p^x q^{n-x} + ∑x (x - np)^r nCx p^x q^{n-x} [x/p - (n - x)/q]

= -nr µ_{r-1} + (1/pq) ∑x (x - np)^{r+1} f(x), since x/p - (n - x)/q = (x - np)/pq

i.e., dµr/dp = -nr µ_{r-1} + µ_{r+1}/pq

Therefore,

µ_{r+1} = pq[nr µ_{r-1} + dµr/dp]

Using µ0 = 1 and µ1 = 0, we can determine the values of µ2, µ3, µ4, etc. by this relation.

Recurrence relation for the Binomial Distribution

B(x + 1; n, p) = [(n - x)/(x + 1)] · (p/q) · B(x; n, p)

Proof: We have

B(x; n, p) = nCx p^x q^{n-x}

B(x + 1; n, p) = nC_{x+1} p^{x+1} q^{n-(x+1)}

B(x + 1; n, p)/B(x; n, p) = [nC_{x+1}/nCx] · (p/q)

= [n!/((x + 1)!(n - x - 1)!)] × [x!(n - x)!/n!] × (p/q)

= [(n - x)/(x + 1)] · (p/q)

Therefore,

B(x + 1; n, p) = [(n - x)/(x + 1)](p/q) · B(x; n, p)

3.1.4 Bernoulli Distribution

A random variable X is said to follow the Bernoulli distribution if its pmf is given by

f(x) = p^x (1 - p)^{1-x}, x = 0, 1

= 0, elsewhere.

Here X is a discrete random variable taking only the two values 0 and 1, with corresponding probabilities 1 - p and p respectively.

The rth moment about the origin is µ'r = E(X^r) = 0^r q + 1^r p = p, r = 1, 2, ...

µ'2 = E(X²) = p;

therefore, Variance = µ2 = p - p² = p(1 - p) = pq.

3.1.5 Poisson Distribution

Poisson distribution is a discrete probability distribution. This distribution was developed by the French mathematician Simeon Denis Poisson in 1837. It is used to represent rare events, and it arises as a limiting case of the binomial distribution under certain conditions.


Definition: A discrete random variable X is defined to have a Poisson distribution if the probability mass function of X is given by

f(x) = e^{-λ} λ^x/x!, x = 0, 1, 2, ...; λ > 0

= 0, elsewhere.

Poisson distribution as a limiting case of Binomial:

The Poisson distribution is obtained as an approximation to the binomial distribution under the conditions

(i). n is very large, i.e., n → ∞

(ii). p is very small, i.e., p → 0

(iii). np = λ, a finite quantity.

Proof:

Let X ~ B(n, p); then f(x) = nCx p^x q^{n-x}, x = 0, 1, 2, ..., n; p + q = 1

= [n!/(x!(n - x)!)] p^x q^{n-x}

= [n(n - 1)(n - 2)...(n - x + 1)/x!] p^x (1 - p)^{n-x}

= [n(n - 1)...(n - x + 1)/n^x] · [(np)^x/x!] · (1 - p)^n/(1 - p)^x

Now,

lim_{n→∞} n(n - 1)(n - 2)...(n - x + 1)/n^x = lim_{n→∞} 1(1 - 1/n)(1 - 2/n)...(1 - (x - 1)/n) = 1

Also, np = λ ⇒ p = λ/n, so

lim_{n→∞} (1 - p)^x = lim_{n→∞} (1 - λ/n)^x = 1

lim_{n→∞} (1 - p)^n = lim_{n→∞} (1 - λ/n)^n = e^{-λ}

Applying the above limits, we get

f(x) = e^{-λ} λ^x/x!, x = 0, 1, 2, ...
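The approximation can be watched numerically. The sketch below, assuming λ = 2 and using scipy.stats, shows the largest pmf gap shrinking as n grows:

```python
# Binomial pmfs approach the Poisson pmf when n is large, p small, np fixed.
from scipy.stats import binom, poisson

lam = 2.0
for n in (10, 100, 1000):
    p = lam / n
    gap = max(abs(binom.pmf(x, n, p) - poisson.pmf(x, lam)) for x in range(20))
    print(n, gap)   # the maximum gap shrinks as n grows
```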

Moments of Poisson Distribution

Mean: E(X) = ∑ x f(x) = ∑_{x=0}^{∞} x e^{-λ} λ^x/x!

= λ e^{-λ} ∑_{x=1}^{∞} λ^{x-1}/(x - 1)!

= λ e^{-λ} e^{λ} = λ

Variance, V(X) = E(X²) - [E(X)]²,

where E(X²) = ∑ x² f(x)

= ∑ [x(x - 1) + x] f(x)

= ∑ x(x - 1) e^{-λ} λ^x/x! + E(X)

= λ² e^{-λ} ∑_{x=2}^{∞} λ^{x-2}/(x - 2)! + λ

= λ² e^{-λ} e^{λ} + λ = λ² + λ

Therefore, V(X) = λ² + λ - λ² = λ

Also, SD(X) = √λ

For a Poisson distribution, Mean = Variance.

µ3 = µ'3 - 3µ'2 µ'1 + 2(µ'1)³

Here µ'1 = λ; µ'2 = λ² + λ.

Now, µ'3 = E(X³) = ∑ x³ f(x)

= ∑ [x(x - 1)(x - 2) + 3x² - 2x] f(x)

= ∑ x(x - 1)(x - 2) e^{-λ} λ^x/x! + 3∑ x² f(x) - 2∑ x f(x)

= λ³ e^{-λ} ∑_{x=3}^{∞} λ^{x-3}/(x - 3)! + 3E(X²) - 2E(X)

= λ³ e^{-λ} e^{λ} + 3(λ² + λ) - 2λ

= λ³ + 3λ² + λ

Therefore, µ3 = λ³ + 3λ² + λ - 3(λ² + λ)λ + 2λ³ = λ

In a similar way we can find µ4 = 3λ² + λ.

Measures of Skewness and Kurtosis:

β1 = µ3²/µ2³ = λ²/λ³ = 1/λ

γ1 = √β1 = 1/√λ

Since λ > 0, the Poisson distribution is positively skewed.

Also,

β2 = µ4/µ2² = (3λ² + λ)/λ² = 3 + 1/λ

γ2 = β2 - 3 = 1/λ

Since λ > 0, the Poisson distribution is leptokurtic.

Moment Generating Function

MX(t) = E(e^{tX}) = ∑ e^{tx} f(x) = ∑_{x=0}^{∞} e^{tx} e^{-λ} λ^x/x!

= e^{-λ} ∑_{x=0}^{∞} (λe^t)^x/x!

= e^{-λ} e^{λe^t} = e^{λ(e^t - 1)}

Additive property of the Poisson distribution:

Let X1 and X2 be two independent Poisson random variables with parameters λ1 and λ2 respectively. Then X = X1 + X2 follows a Poisson distribution with parameter λ1 + λ2.

Proof: X1 ~ P(λ1) ⇒ MX1(t) = e^{λ1(e^t - 1)}

X2 ~ P(λ2) ⇒ MX2(t) = e^{λ2(e^t - 1)}

MX(t) = M_{X1+X2}(t) = MX1(t) · MX2(t), since X1 and X2 are independent

⇒ MX(t) = e^{λ1(e^t - 1)} e^{λ2(e^t - 1)} = e^{(λ1+λ2)(e^t - 1)}

Thus, X = X1 + X2 ~ P(λ1 + λ2)

Remarks:

In general if Xi ~ P(λi) for i = 1, 2, ..., k; and Xi’s are independent, then

X = X1+ X2 + ... + Xk ~ P(λ1+ λ2 + ... + λk)

3.1.6 Geometric Distribution

Definition: A random variable X is defined to have a geometric distribution if the pmf of X is given by

f(x) = q^x p, for x = 0, 1, 2, ...; 0 < p ≤ 1, q = 1 - p

= 0, otherwise

Moments: Mean,

E(X) = ∑ x f(x)

= ∑_{x=0}^{∞} x q^x p

= p[q + 2q² + 3q³ + ...]

= pq[1 + 2q + 3q² + ...]

= pq(1 - q)^{-2}

= pq/p² = q/p

Variance, V(X) = E(X²) - [E(X)]²

E(X²) = ∑ x² f(x)

= ∑ [x(x - 1) + x] q^x p

= ∑ x(x - 1) q^x p + ∑ x q^x p

= p[2·1 q² + 3·2 q³ + 4·3 q⁴ + ...] + E(X)

= 2pq²[1 + 3q + 6q² + ...] + q/p

= 2pq²(1 - q)^{-3} + q/p = 2q²/p² + q/p

V(X) = 2q²/p² + q/p - (q/p)²

= q²/p² + q/p

= (q/p²)(q + p) = q/p²

Moment Generating Function:

MX(t) = E(e^{tX})

= ∑_{x=0}^{∞} e^{tx} q^x p

= p ∑_{x=0}^{∞} (qe^t)^x

= p[1 + qe^t + (qe^t)² + ...]

= p(1 - qe^t)^{-1} = p/(1 - qe^t)


Lack of memory property:

If X has the geometric density with parameter p, then P[X ≥ s + t / X ≥ s] = P(X ≥ t) for s, t = 0, 1, 2, ...

Proof:

P[X ≥ s + t / X ≥ s] = P(X ≥ s + t)/P(X ≥ s)

= [∑_{x=s+t}^{∞} q^x p]/[∑_{x=s}^{∞} q^x p]

= q^{s+t}/q^s = q^t = P(X ≥ t)

Thus the geometric distribution possesses the lack of memory property.
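A direct numerical check, assuming an arbitrary p = 0.3:

```python
# Lack of memory: P(X >= s + t | X >= s) = P(X >= t) for f(x) = q**x * p.
p, q = 0.3, 0.7
P_ge = lambda t: q**t   # P(X >= t) = sum_{x >= t} q^x p = q^t

for s, t in [(2, 3), (5, 1), (4, 4)]:
    lhs = P_ge(s + t) / P_ge(s)          # conditional probability
    print(abs(lhs - P_ge(t)) < 1e-12)    # True each time
```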

3.2 SOLVED PROBLEMS

Problem 1. The mean and variance of a binomial random variable X are 12 and 6 respectively. Find (i). P(X = 0) and (ii). P(X > 1).

Solution: Let X ~ B(n, p).

Given E(X) = np = 12 and V(X) = npq = 6

npq/np = 6/12 ⇒ q = 1/2

⇒ p = 1/2, since p + q = 1

Also, n = 24.

(i). P(X = 0) = 24C0 (1/2)^0 (1/2)^24 = (1/2)^24

(ii). P(X > 1) = 1 - P(X ≤ 1) = 1 - [P(X = 0) + P(X = 1)]

= 1 - [(1/2)^24 + 24(1/2)^24] = 1 - 25(1/2)^24

Problem 2. If X ~ B(n, p), show that Cov(X/n, (n - X)/n) = -pq/n.

Solution:

Cov(X, Y) = E(XY) - E(X)E(Y)

Therefore, Cov(X/n, (n - X)/n) = E[(X/n) × ((n - X)/n)] - E(X/n)·E((n - X)/n)

= (1/n²)[E(X(n - X)) - E(X)E(n - X)]

= (1/n²)[E(nX - X²) - E(X)E(n - X)]

= (1/n²)[nE(X) - E(X²) - E(X)(n - E(X))]

= (1/n²)[n·np - (n(n - 1)p² + np) - np(n - np)]

= (1/n²)[n²p - n²p² + np² - np - n²p + n²p²]

= (1/n²)(np² - np) = -np(1 - p)/n² = -pq/n

Problem 3. If X ~ B(n, p), find the distribution of Y = n - X.

Solution: The pmf of X is

p(x) = nCx p^x q^{n-x}, x = 0, 1, 2, ..., n

Given Y = n - X, i.e., X = n - Y

⇒ f(y) = nC_{n-y} p^{n-y} q^{n-(n-y)}, y = n, n - 1, n - 2, ..., 0

= [n!/(y!(n - y)!)] q^y p^{n-y}, y = 0, 1, 2, ..., n

= nCy q^y p^{n-y}, y = 0, 1, 2, ..., n

⇒ Y ~ B(n, q)

Problem 4. If X and Y are independent Poisson variates such that P(X = 1) = P(X = 2) and P(Y = 2) = P(Y = 3), find the variance of X - 2Y.

Solution: Let X ~ P(λ1) and Y ~ P(λ2).

Given P(X = 1) = P(X = 2),

i.e., e^{-λ1} λ1/1! = e^{-λ1} λ1²/2!

i.e., 1 = λ1/2, since λ1 > 0

Therefore λ1 = 2.

Also, P(Y = 2) = P(Y = 3)

e^{-λ2} λ2²/2! = e^{-λ2} λ2³/3!

i.e., 1 = λ2/3

Therefore λ2 = 3, since λ2 > 0.

Therefore V(X) = λ1 = 2 and V(Y) = λ2 = 3. Then

V(X - 2Y) = V(X) + 4V(Y), since X and Y are independent

= 2 + 4 × 3 = 14

Problem 5.

If X and Y are independent Poisson variates, show that the conditional distribution of X given X + Y is binomial.

Solution: Given X ~ P(λ1) and Y ~ P(λ2). Since X and Y are independent, X + Y ~ P(λ1 + λ2). Therefore,

P(X = x / X + Y = n) = P(X = x, X + Y = n)/P(X + Y = n)

= P(X = x)·P(Y = n - x)/P(X + Y = n)

= [e^{-λ1} λ1^x/x!][e^{-λ2} λ2^{n-x}/(n - x)!]/[e^{-(λ1+λ2)} (λ1 + λ2)^n/n!]

= [n!/(x!(n - x)!)] [λ1/(λ1 + λ2)]^x [λ2/(λ1 + λ2)]^{n-x}

= nCx p^x q^{n-x},

where p = λ1/(λ1 + λ2) and q = 1 - p, which is a binomial distribution.


Problem 6.

Let two independent random variables X and Y have the same geometric distribution. Show that the conditional distribution of X given X + Y = n is uniform.

Solution: Given P(X = k) = P(Y = k) = q^k p, k = 0, 1, 2, ...

P(X = x / X + Y = n) = P(X = x, X + Y = n)/P(X + Y = n)

= P(X = x)·P(Y = n - x)/P(X + Y = n)

= q^x p · q^{n-x} p/P(X + Y = n) = q^n p²/P(X + Y = n)

P(X + Y = n) = P(X = 0, Y = n) + P(X = 1, Y = n - 1) + P(X = 2, Y = n - 2) + ... + P(X = n, Y = 0)

= q^0 p q^n p + q^1 p q^{n-1} p + q^2 p q^{n-2} p + ... + q^n p q^0 p, since X and Y are independent

= q^n p² + q^n p² + ... + q^n p² = (n + 1) q^n p²

Therefore,

P(X = x / X + Y = n) = q^n p²/[(n + 1) q^n p²] = 1/(n + 1), x = 0, 1, 2, ..., n,

which is a uniform distribution.

Problem 7.

For a random variable following the geometric distribution with parameter p, prove the recurrence formula P(x + 1) = q·P(x).

Solution:

The pmf of X is p(x) = q^x p, x = 0, 1, 2, ... Then

P(x + 1) = q^{x+1} p

P(x) = q^x p

⇒ P(x + 1)/P(x) = q^{x+1} p/(q^x p) = q

Hence P(x + 1) = q·P(x)


3.3 CONTINUOUS DISTRIBUTIONS

3.3.1 Normal Distribution

A continuous random variable X with pdf

f(x) = [1/(σ√(2π))] e^{-(x-µ)²/(2σ²)}, -∞ < x < ∞,

is said to follow the normal distribution with parameters µ and σ, denoted by X ~ N(µ, σ).

Mean and Variance: Let X ~ N(µ, σ). Then

E(X) = ∫_{-∞}^{∞} x [1/(σ√(2π))] e^{-(x-µ)²/(2σ²)} dx

= ∫_{-∞}^{∞} (x - µ + µ) [1/(σ√(2π))] e^{-(x-µ)²/(2σ²)} dx

= ∫_{-∞}^{∞} (x - µ) [1/(σ√(2π))] e^{-(x-µ)²/(2σ²)} dx + µ ∫_{-∞}^{∞} [1/(σ√(2π))] e^{-(x-µ)²/(2σ²)} dx

Put u = (x - µ)/σ ⇒ dx = σ du

⇒ E(X) = [σ/√(2π)] ∫_{-∞}^{∞} u e^{-u²/2} du + µ × 1

= [σ/√(2π)] × 0 + µ = µ

(since u e^{-u²/2} is an odd function of u, ∫_{-∞}^{∞} u e^{-u²/2} du = 0)

Therefore Mean = µ.

V(X) = E(X - E(X))² = E(X - µ)²

= ∫_{-∞}^{∞} (x - µ)² [1/(σ√(2π))] e^{-(x-µ)²/(2σ²)} dx

Put (x - µ)/σ = z:

= [σ²/√(2π)] ∫_{-∞}^{∞} z² e^{-z²/2} dz

= [2σ²/√(2π)] ∫_{0}^{∞} z² e^{-z²/2} dz

Put z²/2 = u:

= [2σ²/√(2π)] ∫_{0}^{∞} 2u e^{-u} du/√(2u)

= [2σ²/√π] ∫_{0}^{∞} u^{1/2} e^{-u} du

= [2σ²/√π] Γ(3/2)

= [2σ²/√π] (1/2)Γ(1/2)

= [2σ²/√π] (1/2)√π = σ²

i.e., V(X) = σ²

Therefore the Standard Deviation = √V(X) = σ.

Odd order moments about the mean:

µ_{2r+1} = E(X - µ)^{2r+1} = ∫_{-∞}^{∞} (x - µ)^{2r+1} [1/(σ√(2π))] e^{-(x-µ)²/(2σ²)} dx

By putting (x - µ)/σ = z,

= [σ^{2r+1}/√(2π)] ∫_{-∞}^{∞} z^{2r+1} e^{-z²/2} dz

= [σ^{2r+1}/√(2π)] × 0 = 0,

since the integrand is an odd function; i.e., µ_{2r+1} = 0, r = 0, 1, 2, ...

Even order central moments: µ_{2r} = 1·3·5...(2r - 1)σ^{2r}

µ_{2r} = E(X - µ)^{2r}

= ∫_{-∞}^{∞} (x - µ)^{2r} [1/(σ√(2π))] e^{-(x-µ)²/(2σ²)} dx

Put (x - µ)/σ = z:

= [σ^{2r}/√(2π)] ∫_{-∞}^{∞} z^{2r} e^{-z²/2} dz

= [2σ^{2r}/√(2π)] ∫_{0}^{∞} z^{2r} e^{-z²/2} dz

Put z²/2 = u:

= [2σ^{2r}/√(2π)] ∫_{0}^{∞} (2u)^r e^{-u} du/√(2u)

= [2^r σ^{2r}/√π] ∫_{0}^{∞} u^{r-1/2} e^{-u} du

= [2^r σ^{2r}/√π] Γ(r + 1/2)

= [2^r σ^{2r}/√π] (r - 1/2)(r - 3/2)...(3/2)(1/2)Γ(1/2)

= 2^r σ^{2r} (r - 1/2)(r - 3/2)...(1/2)

µ_{2r} = 1·3·5...(2r - 1)σ^{2r}

Recurrence relation for even order central moments

We have µ_{2r} = 1·3·5...(2r - 1)σ^{2r}

µ_{2r+2} = 1·3·5...(2r - 1)(2r + 1)σ^{2r+2}

Therefore,

µ_{2r+2}/µ_{2r} = (2r + 1)σ²,

i.e.,

µ_{2r+2} = (2r + 1)σ² µ_{2r}

This is the recurrence relation for the even order central moments of the normal distribution. Using this relationship we can find the 2nd and 4th moments: put r = 0, then µ2 = σ²; r = 1 ⇒ µ4 = 3σ⁴. Since µ3 = 0, β1 = 0 and γ1 = 0.

Also, β2 = µ4/µ2² = 3 and γ2 = 0.

Moment generating function: MX(t) = E(e^{tX})

= ∫_{-∞}^{∞} e^{tx} [1/(σ√(2π))] e^{-(x-µ)²/(2σ²)} dx

Put (x - µ)/σ = z:

= [1/√(2π)] ∫_{-∞}^{∞} e^{t(µ + σz)} e^{-z²/2} dz

= [e^{µt}/√(2π)] ∫_{-∞}^{∞} e^{-(z² - 2σtz)/2} dz

= [e^{µt + σ²t²/2}/√(2π)] ∫_{-∞}^{∞} e^{-(z - σt)²/2} dz

Put z - σt = u:

= [e^{µt + σ²t²/2}/√(2π)] ∫_{-∞}^{∞} e^{-u²/2} du

= [e^{µt + σ²t²/2}/√(2π)] × √(2π) = e^{µt + σ²t²/2}

Thus,

MX(t) = e^{µt + σ²t²/2}

Additive property: Let X1 ~ N(µ1, σ1), X2 ~ N(µ2, σ2), and let X1 and X2 be independent. Then

X1 + X2 ~ N(µ1 + µ2, √(σ1² + σ2²))

Proof:

The mgf's of X1 and X2 are, respectively,

MX1(t) = e^{µ1 t + σ1²t²/2} and MX2(t) = e^{µ2 t + σ2²t²/2}

Since X1 and X2 are independent,

M_{X1+X2}(t) = MX1(t) · MX2(t)

= e^{µ1 t + σ1²t²/2} × e^{µ2 t + σ2²t²/2}

= e^{(µ1 + µ2)t + (σ1² + σ2²)t²/2},

i.e., X1 + X2 ~ N(µ1 + µ2, √(σ1² + σ2²)).

Remarks 1

If X1, X2, ..., Xn are n independent normal variates with mean µi and variance σi², i = 1, 2, ..., n, respectively, then the variate Y = ∑ Xi is normally distributed with mean ∑ µi and variance ∑ σi².

Remarks 2

If X1, X2, ..., Xn are n independent normal variates with mean µi and variance σi², i = 1, 2, ..., n, respectively, then the variate Y = ∑ ai Xi is normally distributed with mean ∑ ai µi and variance ∑ ai² σi², where the ai's are constants.

3.3.2 Standard Normal Distribution

A normal distribution with mean µ = 0 and standard deviation σ = 1 is called the standard normal distribution. If Z is a standard normal variable, then its pdf is

f(z) = [1/√(2π)] e^{-z²/2}, -∞ < z < ∞

Moment generating function: MZ(t) = E(e^{tZ}) = e^{0·t + 1²·t²/2} = e^{t²/2}

Normal distribution as a limiting form of the binomial distribution

The binomial distribution tends to the normal distribution under the following conditions: (i). n is very large (n → ∞); (ii). neither p nor q is very small.

Proof:

Let X ~ B(n, p). Then f(x) = nCx p^x q^{n-x}, x = 0, 1, 2, ..., n. Also E(X) = np, V(X) = npq, and MX(t) = (q + pe^t)^n.

Define

Z = (X - E(X))/√V(X) = (X - np)/√(npq)

Now MZ(t) = e^{-npt/√(npq)} MX(t/√(npq))

= e^{-npt/√(npq)} (q + pe^{t/√(npq)})^n

Write h = t/√(npq). Then

log MZ(t) = -nph + n log(q + pe^h)

= -nph + n log[q + p(1 + h + h²/2! + h³/3! + ...)]

= -nph + n log[1 + p(h + h²/2! + h³/3! + ...)]

= -nph + n[p(h + h²/2! + ...) - p²(h + h²/2! + ...)²/2 + ...]

= -nph + nph + n(p - p²)h²/2 + O(n^{-1/2})

= npq h²/2 + O(n^{-1/2})

= t²/2 + O(n^{-1/2}) → t²/2 as n → ∞

Therefore MZ(t) → e^{t²/2}, which is the mgf of a standard normal variate. So Z → N(0, 1), i.e., (X - np)/√(npq) → N(0, 1) as n → ∞, and X is approximately N(np, √(npq)) when n is very large.

3.3.3 Uniform Distribution (Continuous)

A continuous random variable X is said to have a uniform distribution if its pdf is given by

f(x) = 1/(b - a), a ≤ x ≤ b

= 0, elsewhere

Properties: 1. a and b (a < b) are the two parameters of the uniform distribution on (a, b).

2. This distribution is also known as the rectangular distribution, since the curve y = f(x) describes a rectangle over the x-axis between the ordinates x = a and x = b.

3. The d.f. F(x) is given by

F(x) = 0, if -∞ < x < a

= (x - a)/(b - a), if a ≤ x ≤ b

= 1, if b < x < ∞

Moments:

Mean = E(X) = ∫_a^b x f(x) dx

= ∫_a^b x/(b - a) dx

= [x²/(2(b - a))]_a^b

= (b² - a²)/(2(b - a)) = (a + b)/2

Variance, V(X) = E(X²) - [E(X)]²

E(X²) = ∫_a^b x² f(x) dx = ∫_a^b x²/(b - a) dx

= (b³ - a³)/(3(b - a)) = (b² + ab + a²)/3

Therefore,

V(X) = (b² + ab + a²)/3 - ((a + b)/2)²

= (4b² + 4ab + 4a² - 3a² - 6ab - 3b²)/12

= (b - a)²/12

Also,

SD(X) = (b - a)/√12


Moment generating function:

MX(t) = E(e^{tX}) = ∫_a^b e^{tx}/(b - a) dx = (e^{bt} - e^{at})/[(b - a)t], t ≠ 0, and MX(0) = 1.

3.3.4 Gamma Distribution

A continuous r.v. X is said to have a gamma distribution if its probability density function is given by

f(x) = [m^p/Γ(p)] e^{-mx} x^{p-1}, x > 0

= 0, otherwise,

where m > 0, p > 0 are called the parameters of the gamma distribution.

Moments: Mean, E(X) = ∫_0^∞ x f(x) dx

= [m^p/Γ(p)] ∫_0^∞ e^{-mx} x^p dx

= [m^p/Γ(p)] · Γ(p + 1)/m^{p+1}

= p/m

Variance, V(X) = E(X²) - [E(X)]²

E(X²) = ∫_0^∞ x² f(x) dx

= [m^p/Γ(p)] ∫_0^∞ e^{-mx} x^{p+1} dx

= [m^p/Γ(p)] · Γ(p + 2)/m^{p+2}

= p(p + 1)/m² = p²/m² + p/m²

Therefore,

V(X) = p²/m² + p/m² - p²/m² = p/m²


Moment generating function:

MX(t) = E(e^{tX}) = ∫_0^∞ e^{tx} [m^p/Γ(p)] e^{-mx} x^{p-1} dx

= [m^p/Γ(p)] ∫_0^∞ e^{-(m-t)x} x^{p-1} dx

= [m^p/Γ(p)] · Γ(p)/(m - t)^p = [m/(m - t)]^p

= (1 - t/m)^{-p}, t < m

3.3.5 Exponential Distribution

Let X be a continuous r.v. with pdf

f(x) = λe^{-λx}, x > 0, λ > 0.

Then X is defined to have an exponential distribution.

Moments: Mean, E(X) = ∫_0^∞ x f(x) dx

= ∫_0^∞ x λe^{-λx} dx

= λ · Γ(2)/λ² = 1/λ

Variance, V(X) = E(X²) - [E(X)]²

E(X²) = ∫_0^∞ x² λe^{-λx} dx

= λ · Γ(3)/λ³ = 2/λ²

Therefore,

V(X) = 2/λ² - 1/λ² = 1/λ²

Moment Generating Function:

MX(t) = E(e^{tX})

= ∫_0^∞ e^{tx} λe^{-λx} dx

= λ ∫_0^∞ e^{-(λ-t)x} dx = λ[e^{-(λ-t)x}/(-(λ - t))]_0^∞

= λ/(λ - t) = (1 - t/λ)^{-1}, t < λ


3.3.6 Beta Distribution

Let X be a continuous r.v. with pdf

f(x) = [1/β(m, n)] x^{m-1}(1 - x)^{n-1}; 0 < x < 1, m > 0, n > 0.

Then X is said to have the beta distribution of the first kind, denoted β1(m, n).

Moments: Mean, E(X) = ∫_0^1 x f(x) dx

= [1/β(m, n)] ∫_0^1 x^m (1 - x)^{n-1} dx

= β(m + 1, n)/β(m, n)

= [Γ(m + 1)Γ(n)/Γ(m + n + 1)] × [Γ(m + n)/(Γ(m)Γ(n))]

= m/(m + n)

Variance = E(X²) - [E(X)]²

E(X²) = ∫_0^1 x² f(x) dx

= [1/β(m, n)] ∫_0^1 x^{m+1} (1 - x)^{n-1} dx

= β(m + 2, n)/β(m, n)

= [Γ(m + 2)Γ(n)/Γ(m + n + 2)] × [Γ(m + n)/(Γ(m)Γ(n))]

= m(m + 1)/[(m + n)(m + n + 1)]

Therefore,

V(X) = m(m + 1)/[(m + n)(m + n + 1)] - [m/(m + n)]²

= mn/[(m + n)²(m + n + 1)]


3.3.7 Lognormal Distribution

Let X be a positive random variable, and let Y = log_e X. If Y has a normal distribution, then X is said to have a lognormal distribution. The pdf of the lognormal distribution is given by

f(x) = [1/(xσ√(2π))] e^{-(log x - µ)²/(2σ²)}; 0 < x < ∞, -∞ < µ < ∞, σ > 0

Moments:

µ'r = E(X^r) = E(e^{rY})

= MY(r)

= e^{µr + σ²r²/2}

Mean = µ'1 = e^{µ + σ²/2}

Variance = µ'2 - (µ'1)² = e^{2µ + 2σ²} - e^{2µ + σ²} = e^{2µ + σ²}(e^{σ²} - 1)

3.3.8 Pareto Distribution

Let X be a continuous random variable. If the pdf of X is given by

f(x) = (α/x0)(x0/x)^{α+1}, x ≥ x0; α > 0, x0 > 0,

then X is said to follow a Pareto distribution.

Mean = E(X) = αx0/(α - 1), for α > 1

Variance = V(X) = αx0²/[(α - 1)²(α - 2)], for α > 2

The mgf of the Pareto distribution does not exist.

3.3.9 Cauchy Distribution

A continuous random variable X is said to follow the Cauchy distribution if its pdf is given by

f(x) = 1/{πβ[1 + ((x - α)/β)²]}, for -∞ < x < ∞, -∞ < α < ∞ and β > 0.

The two parameters of this distribution are α and β. If α = 0 and β = 1, then the pdf of the Cauchy distribution becomes

f(x) = 1/[π(1 + x²)], -∞ < x < ∞


Properties

1. For a Cauchy distribution the mean does not exist.
2. For a Cauchy distribution the variance does not exist.
3. The mgf of the Cauchy distribution does not exist.

3.4 SOLVED PROBLEMS

Problem 1.

If X ~ N(12, 4), find (i). P(X ≥ 20), (ii). P(0 ≤ X ≤ 12), and (iii). a such that P(X > a) = 0.24.

Solution:

We have Z = (X - µ)/σ = (X - 12)/4 ~ N(0, 1)

(i) P(X ≥ 20) = P((X - 12)/4 ≥ (20 - 12)/4) = P(Z ≥ 2)

= 0.5 - P(0 < Z < 2) = 0.5 - 0.4772 = 0.0228

(ii) P(0 ≤ X ≤ 12) = P((0 - 12)/4 ≤ Z ≤ (12 - 12)/4)

= P(-3 ≤ Z ≤ 0) = P(0 ≤ Z ≤ 3) = 0.4987

(iii) Given P(X > a) = 0.24 ⇒ P(Z > (a - 12)/4) = 0.24

Hence P(0 < Z < (a - 12)/4) = 0.5 - 0.24 = 0.26

From a standard normal table, (a - 12)/4 = 0.71 ⇒ a = 14.84
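These answers can be checked against scipy.stats.norm; the small differences reflect table rounding:

```python
# Problem 1 revisited with scipy.stats (X ~ N(12, sigma = 4)).
from scipy.stats import norm

X = norm(loc=12, scale=4)
print(X.sf(20))               # P(X >= 20) ~ 0.0228
print(X.cdf(12) - X.cdf(0))   # P(0 <= X <= 12) ~ 0.4987
print(X.isf(0.24))            # a with P(X > a) = 0.24; ~14.83 (table: 14.84)
```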

Problem 2.

Find k if P(X ≤ k) = 2P(X > k), where X ~ N(µ, σ).

Solution:

Given that P(X ≤ k) = 2P(X > k)

⇒ P(X ≤ k)/P(X > k) = 2

⇒ [P(X ≤ k) + P(X > k)]/P(X > k) = 2 + 1

⇒ 1/P(X > k) = 3

⇒ P(X > k) = 1/3 ≈ 0.333

P((X - µ)/σ > (k - µ)/σ) = 0.333,

i.e., P(Z > (k - µ)/σ) = 0.333

Hence P(0 < Z < (k - µ)/σ) = 0.5 - 0.333 = 0.167, and from the table (k - µ)/σ = 0.44.

Then k = µ + 0.44σ

Problem 3. If X is a normal random variable with mean 6 and variance 49, and if P(3X + 8 ≥ λ) = P(4X - 7 ≤ µ) and P(5X - 2 ≥ µ) = P(2X + 1 ≤ λ), find λ and µ.

Solution: Given X ~ N(6, 7).

P(3X + 8 ≥ λ) = P(4X - 7 ≤ µ)

⇒ P(X ≥ (λ - 8)/3) = P(X ≤ (µ + 7)/4) -------- (1)

P(5X - 2 ≥ µ) = P(2X + 1 ≤ λ)

⇒ P(X ≥ (µ + 2)/5) = P(X ≤ (λ - 1)/2) -------- (2)

Since X ~ N(6, 7), Z = (X - 6)/7 ~ N(0, 1).

From (1), P(Z ≥ [(λ - 8)/3 - 6]/7) = P(Z ≤ [(µ + 7)/4 - 6]/7)

⇒ P(Z ≥ (λ - 26)/21) = P(Z ≤ (µ - 17)/28)

From the standard normal curve, if P(Z ≥ a) = P(Z ≤ b), then a = -b. That is,

(λ - 26)/21 = -(µ - 17)/28

⇒ 4λ + 3µ - 155 = 0 ---------- (3)

From (2), P(Z ≥ [(µ + 2)/5 - 6]/7) = P(Z ≤ [(λ - 1)/2 - 6]/7)

⇒ P(Z ≥ (µ - 28)/35) = P(Z ≤ (λ - 13)/14)

⇒ (µ - 28)/35 = -(λ - 13)/14

⇒ 5λ + 2µ - 121 = 0 ----------- (4)

Solving (3) and (4) we get λ = 53/7 ≈ 7.571 and µ = 291/7 ≈ 41.571.

Problem 4. For a rectangular distribution

f(x) = 1/(2a), -a < x < a,

show that µ_{2r} = a^{2r}/(2r + 1).

Solution:

We have E(X) = ∫_{-a}^{a} x f(x) dx = ∫_{-a}^{a} x/(2a) dx = 0. Therefore,

µ_{2r} = E[X - E(X)]^{2r} = E[X^{2r}]

= ∫_{-a}^{a} x^{2r}/(2a) dx

= (1/2a)[x^{2r+1}/(2r + 1)]_{-a}^{a}

= 2a^{2r+1}/[2a(2r + 1)] = a^{2r}/(2r + 1)

Problem 5: If X1, X2, ..., Xn are n independent random variables, each following an exponential distribution with parameter λ, find the distribution of Y = ∑ Xi.

Solution: Given that Xi ~ exponential with parameter λ; therefore MXi(t) = (1 - t/λ)^{-1}.

Then, MY(t) = M_{∑Xi}(t) = ∏_{i=1}^{n} MXi(t)

= ∏_{i=1}^{n} (1 - t/λ)^{-1}

= (1 - t/λ)^{-n}

This is the mgf of a gamma distribution with parameters λ and n. Therefore the pdf of Y is given by

f(y) = [λ^n/Γ(n)] e^{-λy} y^{n-1}, y > 0

= 0, elsewhere.


Chapter 4

LAW OF LARGE NUMBERS

4.1 Chebyshev's Inequality:

Let X be a random variable for which the mean µ and variance σ² exist. Then for any t > 0,

P(|X - µ| ≥ tσ) ≤ 1/t² and P(|X - µ| < tσ) ≥ 1 - 1/t²

Proof: Let X be a continuous random variable. Then

σ² = E[X - E(X)]² = E[X - µ]²

= ∫_{-∞}^{∞} (x - µ)² f(x) dx

= ∫_{-∞}^{µ-tσ} (x - µ)² f(x) dx + ∫_{µ-tσ}^{µ+tσ} (x - µ)² f(x) dx + ∫_{µ+tσ}^{∞} (x - µ)² f(x) dx

Since f(x) is a pdf, which is non-negative, (x - µ)² f(x) is always non-negative; hence

σ² ≥ ∫_{-∞}^{µ-tσ} (x - µ)² f(x) dx + ∫_{µ+tσ}^{∞} (x - µ)² f(x) dx

In (-∞, µ - tσ], x ≤ µ - tσ ⇒ (x - µ)² ≥ t²σ²

In [µ + tσ, ∞), x ≥ µ + tσ ⇒ (x - µ)² ≥ t²σ²

Therefore,

σ² ≥ t²σ² [∫_{-∞}^{µ-tσ} f(x) dx + ∫_{µ+tσ}^{∞} f(x) dx]

= t²σ² [P(X ≤ µ - tσ) + P(X ≥ µ + tσ)]

= t²σ² P(|X - µ| ≥ tσ)

⇒ P(|X - µ| ≥ tσ) ≤ 1/t²

Also, -P(|X - µ| ≥ tσ) ≥ -1/t²,

i.e., 1 - P(|X - µ| ≥ tσ) ≥ 1 - 1/t²,

i.e., P(|X - µ| < tσ) ≥ 1 - 1/t²


4.2 Convergence in Probability:

Let X1, X2, ... be a sequence of random variables. The random variable Xn is said to converge in probability to a constant θ if, for any ε > 0,

P(|Xn - θ| ≥ ε) → 0 as n → ∞.

This type of convergence is also referred to as stochastic convergence or statistical convergence.

4.3 Bernoulli's law of large numbers (BLLN):

Consider a random experiment with only two possible outcomes, success and failure. Let p be the probability of success, and suppose it remains unchanged from trial to trial. Let n independent trials of the experiment be conducted, and let Xn be the number of successes in these n trials. Bernoulli's law of large numbers (BLLN) states that for any ε > 0,

P(|Xn/n - p| < ε) → 1 as n → ∞.

Proof:

We have Xn ~ B(n, p). Hence E(Xn) = np and V(Xn) = npq

⇒ E(Xn/n) = p and V(Xn/n) = pq/n

For the variable Xn/n, by Chebyshev's inequality we have

P(|Xn/n - E(Xn/n)| < tσ) > 1 - 1/t²,

i.e., P(|Xn/n - p| < t√(pq/n)) > 1 - 1/t²

Put t√(pq/n) = ε ⇒ t = ε√(n/pq).

Hence, P(|Xn/n - p| < ε) > 1 - pq/(nε²)

As n → ∞, pq/(nε²) → 0 ⇒ P(|Xn/n - p| < ε) → 1.
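A simulation, assuming p = 0.3 and ε = 0.05, shows the convergence at work:

```python
# Bernoulli's law in simulation: the success proportion X_n/n settles near p.
import numpy as np

rng = np.random.default_rng(0)
p, eps = 0.3, 0.05
for n in (100, 1000, 10000):
    props = rng.binomial(n, p, size=2000) / n    # 2000 sample proportions
    print(n, np.mean(np.abs(props - p) < eps))   # -> 1 as n grows
```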

4.4 Weak law of large numbers (WLLN): Let X1, X2, ..., Xn be a sequence of random variables with E(Xi) = µi, i = 1, 2, ..., n. Let Sn = X1 + X2 + ... + Xn, Mn = µ1 + µ2 + ... + µn and Bn = V(X1 + X2 + ... + Xn). Then

P(|Sn/n - Mn/n| ≥ ε) → 0 as n → ∞,

provided Bn/n² → 0 as n → ∞.

Proof:

For the variable Sn/n, by Chebyshev's inequality we have

P(|Sn/n - E(Sn/n)| ≥ tσ) ≤ 1/t² ..... (1)

Here E(Sn/n) = E[(X1 + X2 + ... + Xn)/n] = Mn/n

and V(Sn/n) = V(X1 + X2 + ... + Xn)/n² = Bn/n²

Put ε = t√(Bn/n²) ⇒ t² = ε²n²/Bn.

Hence (1) ⇒ P(|Sn/n - Mn/n| ≥ ε) ≤ Bn/(n²ε²)

As n → ∞, P(|Sn/n - Mn/n| ≥ ε) → 0,

provided Bn/n² → 0 as n → ∞.

4.5 Central limit theorem (CLT)

CLT states that the sum of a very large number of random variables is approximately normally distributed with mean equal to sum of means of the variables and variance equal to sum of the variances of the variables provided the random variables satisfy certain very general assumptions.

4.6 Lindeberg-Levy form of the CLT:

Let X1, X2, ..., Xn be a sequence of independent and identically distributed random variables with E(Xi) = µ and V(Xi) = σ², i = 1, 2, ..., n, where we assume 0 < σ² < ∞. Letting Sn = X1 + X2 + ... + Xn, the normalised random variable

Z = (Sn - nµ)/(σ√n) → N(0, 1) as n → ∞

Proof:

Given E(Xi) = µ, V(Xi) = σ², i = 1, 2, 3, ..., n, and Sn = X1 + X2 + ... + Xn = ∑ Xi. Assume that MXi(t) exists for i = 1, 2, ..., n.

Now,

MZ(t) = M_{(Sn - nµ)/(σ√n)}(t)

= E[e^{t(Sn - nµ)/(σ√n)}]

= E[∏_{i=1}^{n} e^{t(Xi - µ)/(σ√n)}]

= ∏_{i=1}^{n} M_{Xi - µ}(t/(σ√n)), since the Xi's are independent.

Since E(Xi - µ) = 0 and E(Xi - µ)² = σ²,

M_{Xi - µ}(t/(σ√n)) = 1 + t²/(2n) + O(n^{-3/2}),

where O(n^{-3/2}) denotes terms in n^{-3/2} and higher powers of 1/√n. Therefore,

log MZ(t) = n log[1 + t²/(2n) + O(n^{-3/2})]

= n[t²/(2n) + O(n^{-3/2}) - (1/2)(t²/(2n) + O(n^{-3/2}))² + ...]

= t²/2 + O(n^{-1/2})

Since O(n^{-1/2}) → 0 as n → ∞,

MZ(t) → e^{t²/2} as n → ∞.

This is the mgf of a standard normal variable, i.e., Z → N(0, 1) as n → ∞.
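A simulation sketch, assuming Uniform(0, 1) summands (µ = 1/2, σ² = 1/12):

```python
# Lindeberg-Levy CLT in simulation: standardised sums look standard normal.
import numpy as np

rng = np.random.default_rng(1)
n, reps = 1000, 5000
mu, sigma = 0.5, np.sqrt(1 / 12)

S = rng.random((reps, n)).sum(axis=1)     # 5000 sums of 1000 uniforms
Z = (S - n * mu) / (sigma * np.sqrt(n))   # the normalised variable
print(Z.mean(), Z.var())                  # ~0 and ~1
print(np.mean(Z <= 1.96))                 # ~0.975 = Phi(1.96)
```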


4.7 SOLVED PROBLEMS

Problem 1: For a geometric distribution with f(x) = 1/2^x, x = 1, 2, ..., use Chebyshev's inequality to prove that P(|X - 2| ≥ 2) ≤ 1/2.

Solution: We have

E(X) = ∑ x (1/2^x) = 1/2 + 2 × 1/4 + 3 × 1/8 + ...

= (1/2)[1 + 2(1/2) + 3(1/2)² + ...]

= (1/2)(1 - 1/2)^{-2} = 2

E(X²) = ∑ x² (1/2^x) = 1/2 + 4 × 1/4 + 9 × 1/8 + ... = 6

Hence

V(X) = 6 - 4 = 2, so σ = √2

By Chebyshev's inequality,

P(|X - µ| ≥ tσ) ≤ 1/t²

Put tσ = 2, i.e., t = 2/√2 = √2; we get

P(|X - 2| ≥ 2) ≤ 1/t² = 1/2

Problem 2

Find the least value of the probability P(1 ≤ X ≤ 7), where X is a random variable with E(X) = 4 and V(X) = 4.

Solution:

By Chebyshev's inequality,

P(|X - µ| < tσ) ≥ 1 - 1/t²,

i.e., P(|X - 4| < 2t) ≥ 1 - 1/t²

But we have to find the least value of

P(1 ≤ X ≤ 7) = P(1 - 4 ≤ X - 4 ≤ 7 - 4)

= P(-3 ≤ X - 4 ≤ 3) = P(|X - 4| ≤ 3)

Put 2t = 3; then t = 3/2 ⇒ 1/t² = 4/9

Therefore

P(1 ≤ X ≤ 7) = P(|X - 4| ≤ 3) ≥ 1 - 4/9 = 5/9

Thus the least value of the probability is 5/9.

Problem 3

If X ~ B(100, 0.5), use Chebyshev's inequality to obtain a lower bound for P(|X - 50| < 7.5).

Solution:

Given X ~ B(100, 0.5). Then mean µ = np = 100 × 0.5 = 50 and σ² = npq = 100 × 0.5 × 0.5 = 25, so σ = 5.

By Chebyshev's inequality,

P(|X - µ| < tσ) ≥ 1 - 1/t²,

i.e., P(|X - 50| < 5t) ≥ 1 - 1/t²

Put 5t = 7.5; then t = 1.5, and we get

P(|X - 50| < 5 × 1.5) ≥ 1 - 1/(1.5)²

⇒ P(|X - 50| < 7.5) ≥ 0.56,

i.e., the lower bound is 0.56.
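For comparison, the exact probability can be computed with scipy.stats; the Chebyshev bound is valid but conservative:

```python
# Exact P(|X - 50| < 7.5) = P(43 <= X <= 57) for X ~ B(100, 0.5).
from scipy.stats import binom

X = binom(100, 0.5)
exact = X.cdf(57) - X.cdf(42)
print(exact)   # ~0.866, comfortably above the Chebyshev bound 0.56
```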