
Discrete Random Variables

Randomness

• The word random effectively means unpredictable.

• In engineering practice we may treat some signals as random to simplify the analysis even though they may not actually be random.

Random Variable Defined

A random variable X is the assignment of numerical values to the outcomes of experiments.

Random Variables

(Figure: examples of assignments of numbers to the outcomes of experiments.)

Discrete-Value vs Continuous-Value Random Variables

• A discrete-value (DV) random variable has a set of distinct values separated by values that cannot occur.

• A random variable associated with the outcomes of coin flips, card draws, dice tosses, etc. would be a DV random variable.

• A continuous-value (CV) random variable may take on any value in a continuum of values, which may be finite or infinite in size.

Probability Mass Functions

The probability mass function (pmf) for a discrete random variable $X$ is

$$P_X(x) = P[X = x].$$

A DV random variable $X$ is a Bernoulli random variable if it takes on only the two values 0 and 1 and its pmf is

$$P_X(x) = \begin{cases} 1-p, & x = 0 \\ p, & x = 1 \\ 0, & \text{otherwise} \end{cases}$$

and $0 < p < 1$.
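As a quick illustration, here is a minimal Python sketch of the Bernoulli pmf (the function name `bernoulli_pmf` is illustrative, not from the slides):

```python
def bernoulli_pmf(x, p):
    """P_X(0) = 1 - p, P_X(1) = p, zero for every other x."""
    if x == 0:
        return 1 - p
    if x == 1:
        return p
    return 0.0

# The two non-zero values sum to 1, as any pmf must.
p = 0.3
print(bernoulli_pmf(0, p), bernoulli_pmf(1, p))   # 0.7 0.3
```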

(Figure: example of a Bernoulli pmf.)

If we perform $n$ trials of an experiment whose outcome is Bernoulli distributed and if $X$ represents the total number of 1's that occur in those $n$ trials, then $X$ is said to be a Binomial random variable and its pmf is

$$P_X(x) = \begin{cases} \binom{n}{x} p^x (1-p)^{n-x}, & x \in \{0,1,2,\dots,n\} \\ 0, & \text{otherwise.} \end{cases}$$
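A minimal Python sketch of the Binomial pmf, using `math.comb` for the binomial coefficient (the function name is illustrative):

```python
from math import comb

def binomial_pmf(x, n, p):
    """P_X(x) = C(n, x) p**x (1-p)**(n-x) for x in {0, 1, ..., n}."""
    if 0 <= x <= n:
        return comb(n, x) * p**x * (1 - p)**(n - x)
    return 0.0

# The pmf values sum to 1 over x = 0, ..., n (binomial theorem).
n, p = 10, 0.3
print(sum(binomial_pmf(x, n, p) for x in range(n + 1)))   # ~1.0
```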

(Figure: Binomial pmf.)

If we perform Bernoulli trials until a 1 (success) occurs and the probability of a 1 on any single trial is $p$, the probability that the first success will occur on the $k$th trial is $p(1-p)^{k-1}$. A DV random variable $X$ is said to be a Geometric random variable if its pmf is

$$P_X(x) = \begin{cases} p(1-p)^{x-1}, & x \in \{1,2,3,\dots\} \\ 0, & \text{otherwise.} \end{cases}$$
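A similar sketch for the Geometric pmf; the partial sums of the pmf approach 1, reflecting that a success eventually occurs (the function name is illustrative):

```python
def geometric_pmf(x, p):
    """P_X(x) = p (1-p)**(x-1) for x = 1, 2, 3, ..."""
    return p * (1 - p)**(x - 1) if x >= 1 else 0.0

# Partial sums of the pmf approach 1: a success eventually occurs.
p = 0.25
print(sum(geometric_pmf(x, p) for x in range(1, 100)))   # ~1.0
```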

(Figure: Geometric pmf.)

If we perform Bernoulli trials until the $r$th 1 occurs and the probability of a 1 on any single trial is $p$, the probability that the $r$th success will occur on the $k$th trial is

$$P[r\text{th success on } k\text{th trial}] = \binom{k-1}{r-1} p^r (1-p)^{k-r}.$$

A DV random variable $Y$ is said to be a negative-Binomial or Pascal random variable with parameters $r$ and $p$ if its pmf is

$$P_Y(y) = \begin{cases} \binom{y-1}{r-1} p^r (1-p)^{y-r}, & y \in \{r, r+1, \dots\} \\ 0, & \text{otherwise.} \end{cases}$$
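A sketch of the Pascal pmf; note that with $r = 1$ it reduces to the Geometric pmf (the function name is illustrative):

```python
from math import comb

def pascal_pmf(y, r, p):
    """P_Y(y) = C(y-1, r-1) p**r (1-p)**(y-r) for y = r, r+1, ..."""
    if y >= r:
        return comb(y - 1, r - 1) * p**r * (1 - p)**(y - r)
    return 0.0

# With r = 1, the Pascal pmf reduces to the Geometric pmf.
print(pascal_pmf(4, 1, 0.25))          # 0.25 * 0.75**3
print(0.25 * 0.75**3)                  # same value
```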

(Figure: Negative Binomial (Pascal) pmf.)

Suppose we randomly place $n$ points in the time interval $0 \le t < T$ with each point being equally likely to fall anywhere in that range. The probability that $k$ of them fall inside an interval of length $\Delta t < T$ inside that range is

$$P[k \text{ inside } \Delta t] = \binom{n}{k} p^k (1-p)^{n-k} = \frac{n!}{k!(n-k)!} p^k (1-p)^{n-k}$$

where $p = \Delta t / T$ is the probability that any single point falls within $\Delta t$. Further, suppose that as $n \to \infty$, $n/T = \lambda$, a constant. If $\lambda$ is constant and $n \to \infty$, that implies that $T \to \infty$ and $p \to 0$. Then $\lambda$ is the average number of points per unit time, over all time.

(Figure: events occurring at random times.)

It can be shown that

$$P[k \text{ inside } \Delta t] = \frac{\alpha^k}{k!} \lim_{n\to\infty} \left(1 - \frac{\alpha}{n}\right)^n = \frac{\alpha^k}{k!}\, e^{-\alpha}$$

where $\alpha = \lambda \, \Delta t$. A DV random variable $X$ is a Poisson random variable with parameter $\alpha$ if its pmf is

$$P_X(x) = \begin{cases} \dfrac{\alpha^x}{x!}\, e^{-\alpha}, & x \in \{0,1,2,\dots\} \\ 0, & \text{otherwise.} \end{cases}$$
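The limiting argument can be checked numerically: holding $np = \alpha$ fixed while $n$ grows, the Binomial pmf approaches the Poisson pmf. A minimal sketch (names are illustrative):

```python
from math import comb, exp, factorial

def poisson_pmf(x, alpha):
    """P_X(x) = alpha**x / x! * e**(-alpha) for x = 0, 1, 2, ..."""
    return alpha**x / factorial(x) * exp(-alpha) if x >= 0 else 0.0

# Hold n*p = alpha fixed; the Binomial pmf converges to the Poisson pmf.
alpha, k = 2.0, 3
for n in (10, 100, 10_000):
    p = alpha / n
    binomial = comb(n, k) * p**k * (1 - p)**(n - k)
    print(n, binomial, poisson_pmf(k, alpha))
```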


Cumulative Distribution Functions

The cumulative distribution function (CDF) is defined by

$$F_X(x) = P[X \le x].$$

For example, the CDF for tossing a single die is

$$F_X(x) = (1/6)\,[\,u(x-1) + u(x-2) + u(x-3) + u(x-4) + u(x-5) + u(x-6)\,]$$

where

$$u(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0. \end{cases}$$
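A minimal sketch of this CDF as a running sum of the die's pmf (names are illustrative); note the staircase behavior between integer values:

```python
pmf = {x: 1/6 for x in range(1, 7)}   # fair die

def cdf(x):
    """F_X(x) = P[X <= x]: a staircase with a 1/6 step at each face."""
    return sum(p for value, p in pmf.items() if value <= x)

print(cdf(0.5), cdf(3), cdf(3.5), cdf(6))   # 0.0 0.5 0.5 1.0
```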

Functions of a Random Variable

Consider a transformation from a DV random variable $X$ to another DV random variable $Y$ through $Y = g(X)$. If the function $g$ is invertible, then $X = g^{-1}(Y)$ and the pmf for $Y$ is

$$P_Y(y) = P_X\!\left(g^{-1}(y)\right)$$

where $P_X(x)$ is the pmf for $X$.

If the function $g$ is not invertible, the pmf of $Y$ can be found by finding the probability of each value of $Y$. Each value of $X$ with non-zero probability causes a non-zero probability for the corresponding value of $Y$. So, for the $i$th value of $Y$,

$$P[Y = y_i] = P[X = x_{i,1}] + P[X = x_{i,2}] + \cdots + P[X = x_{i,n}] = \sum_{k=1}^{n} P[X = x_{i,k}].$$

(Figure: an example of a non-invertible function.)
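A minimal sketch of this sum for a non-invertible $g$, here $g(x) = x^2$, where $x$ and $-x$ map to the same $y$ (the pmf values are made up for illustration):

```python
# pmf of Y = g(X) for the non-invertible g(x) = x**2:
# x and -x map to the same y, so their probabilities add.
pmf_X = {-2: 0.1, -1: 0.2, 0: 0.4, 1: 0.2, 2: 0.1}   # made-up pmf

pmf_Y = {}
for x, p in pmf_X.items():
    y = x**2
    pmf_Y[y] = pmf_Y.get(y, 0.0) + p

print(pmf_Y)   # {4: 0.2, 1: 0.4, 0: 0.4}
```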

Expectation and Moments

Imagine an experiment with $M$ possible distinct outcomes performed $N$ times. The average of those $N$ outcomes is

$$\bar{X} = \frac{1}{N} \sum_{i=1}^{M} n_i x_i$$

where $x_i$ is the $i$th distinct value of $X$ and $n_i$ is the number of times that value occurred. Then

$$\bar{X} = \frac{1}{N} \sum_{i=1}^{M} n_i x_i = \sum_{i=1}^{M} \frac{n_i}{N} x_i = \sum_{i=1}^{M} r_i x_i$$

where $r_i = n_i / N$ is the relative frequency of $x_i$. The expected value of $X$ is

$$E[X] = \lim_{N\to\infty} \sum_{i=1}^{M} \frac{n_i}{N} x_i = \lim_{N\to\infty} \sum_{i=1}^{M} r_i x_i = \sum_{i=1}^{M} P[X = x_i]\, x_i.$$
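This limit can be illustrated by simulation: the relative-frequency average of fair-die rolls approaches $E[X] = 3.5$ as $N$ grows. A minimal sketch (assuming a fair die):

```python
import random

# Relative-frequency average of fair-die rolls vs. E[X].
random.seed(1)                             # reproducible runs
for N in (100, 10_000, 1_000_000):
    rolls = [random.randint(1, 6) for _ in range(N)]
    print(N, sum(rolls) / N)               # approaches 3.5 as N grows

print(sum(x * (1/6) for x in range(1, 7))) # E[X] = 3.5 exactly
```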

Three common measures used in statistics to indicate an "average" of a random variable are the mean, the mode, and the median. The mean is the sum of the values divided by the number of values,

$$\bar{X} = \frac{1}{N} \sum_{i=1}^{M} n_i x_i.$$

The mode is the value that occurs most often,

$$P_X(x_{\text{mode}}) \ge P_X(x) \text{ for all } x.$$

The median is the value for which an equal number of values fall above and below,

$$P[X > x_{\text{median}}] = P[X < x_{\text{median}}].$$
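For a concrete comparison of the three measures, Python's standard statistics module computes all of them (the sample data are made up):

```python
from statistics import mean, median, mode

# Hypothetical sample: mean = 2.875, median = 3.0 (middle of the
# sorted data), mode = 3 (the most frequent value).
data = [1, 2, 2, 3, 3, 3, 4, 5]
print(mean(data), median(data), mode(data))   # 2.875 3.0 3
```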

The first moment of a random variable is its expected value,

$$E[X] = \sum_{i=1}^{M} x_i\, P[X = x_i].$$

The second moment of a random variable is its mean-squared value (which is the mean of its square, not the square of its mean),

$$E[X^2] = \sum_{i=1}^{M} x_i^2\, P[X = x_i].$$

The name "moment" comes from the fact that it is mathematically the same as a moment in classical mechanics.

The $n$th moment of a random variable is defined by

$$E[X^n] = \sum_{i=1}^{M} x_i^n\, P[X = x_i].$$

The expected value of a function $g$ of a random variable is

$$E[g(X)] = \sum_{i=1}^{M} g(x_i)\, P[X = x_i].$$
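A minimal sketch of these two sums for a fair die (the function names are illustrative); note that $E[g(X)]$ with $g(x) = x^2$ reproduces the second moment:

```python
def moment(pmf, n):
    """nth moment: E[X**n] = sum of x_i**n * P[X = x_i]."""
    return sum(x**n * p for x, p in pmf.items())

def expect_g(pmf, g):
    """E[g(X)] = sum of g(x_i) * P[X = x_i]."""
    return sum(g(x) * p for x, p in pmf.items())

die = {x: 1/6 for x in range(1, 7)}        # fair-die pmf
print(moment(die, 1))                      # first moment: 3.5
print(moment(die, 2))                      # second moment: ~15.167
print(expect_g(die, lambda x: x**2))       # same as the second moment
```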

A central moment of a random variable is the moment of that random variable after its expected value is subtracted,

$$E\left[(X - E[X])^n\right] = \sum_{i=1}^{M} (x_i - E[X])^n\, P[X = x_i].$$

The first central moment is always zero. The second central moment (for real-valued random variables) is the variance,

$$\sigma_X^2 = E\left[(X - E[X])^2\right] = \sum_{i=1}^{M} (x_i - E[X])^2\, P[X = x_i].$$

The variance of $X$ can also be written as $\text{Var}[X]$. The positive square root of the variance is the standard deviation.

Properties of expectation:

$$E[a] = a, \qquad E[aX] = a\,E[X], \qquad E\left[\sum_n X_n\right] = \sum_n E[X_n]$$

where $a$ is a constant. These properties can be used to prove the handy relationship

$$\sigma_X^2 = E[X^2] - E^2[X].$$

The variance of a random variable is the mean of its square minus the square of its mean. Another handy relation is

$$\text{Var}[aX + b] = a^2\, \text{Var}[X].$$
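Both relations can be verified numerically for a fair die. A minimal sketch (names are illustrative):

```python
def var(pmf):
    """Second central moment: E[(X - E[X])**2]."""
    m = sum(x * p for x, p in pmf.items())
    return sum((x - m)**2 * p for x, p in pmf.items())

die = {x: 1/6 for x in range(1, 7)}

# sigma_X^2 = E[X^2] - E^2[X]
m1 = sum(x * p for x, p in die.items())
m2 = sum(x**2 * p for x, p in die.items())
print(var(die), m2 - m1**2)            # both ~2.9167

# Var[aX + b] = a^2 Var[X]: the shift b drops out, the scale a enters squared.
a, b = 3, 10
scaled = {a * x + b: p for x, p in die.items()}
print(var(scaled), a**2 * var(die))    # both 26.25
```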

Conditional Probability Mass Functions

The concept of conditional probability can be extended to a conditional probability mass function defined by

$$P_{X|A}(x) = \begin{cases} \dfrac{P_X(x)}{P[A]}, & x \in A \\ 0, & \text{otherwise} \end{cases}$$

where $A$ is the condition that affects the probability of $X$. Similarly, the conditional expected value of $X$ is

$$E[X \mid A] = \sum_{x} x\, P_{X|A}(x)$$

and the conditional cumulative distribution function for $X$ is

$$F_{X|A}(x) = P[X \le x \mid A].$$
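A minimal sketch of the conditional pmf and conditional expected value for a fair die with $A = \{X \le 3\}$ (the example and names are my own):

```python
# Conditional pmf of a fair die given A = {X <= 3}.
pmf = {x: 1/6 for x in range(1, 7)}
A = {1, 2, 3}
P_A = sum(pmf[x] for x in A)                      # P[A] = 1/2

# P_{X|A}(x) = P_X(x) / P[A] for x in A, zero otherwise.
pmf_given_A = {x: (pmf[x] / P_A if x in A else 0.0) for x in pmf}
print(pmf_given_A)                                # 1/3 on {1, 2, 3}

# Conditional expected value E[X | A].
print(sum(x * p for x, p in pmf_given_A.items())) # 2.0
```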

Conditional Probability

Let $A$ be $A = \{X \le a\}$ where $a$ is a constant. Then

$$F_{X|A}(x) = P[X \le x \mid X \le a] = \frac{P[(X \le x) \cap (X \le a)]}{P[X \le a]}.$$

If $a \le x$ then $P[(X \le x) \cap (X \le a)] = P[X \le a]$ and

$$F_{X|A}(x) = P[X \le x \mid X \le a] = \frac{P[X \le a]}{P[X \le a]} = 1.$$

If $a \ge x$ then $P[(X \le x) \cap (X \le a)] = P[X \le x]$ and

$$F_{X|A}(x) = P[X \le x \mid X \le a] = \frac{P[X \le x]}{P[X \le a]} = \frac{F_X(x)}{F_X(a)}.$$
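A minimal sketch confirming both cases for a fair die with $a = 4$ (the example and names are my own):

```python
# F_{X|A}(x) for A = {X <= a} on a fair die, with a = 4.
pmf = {x: 1/6 for x in range(1, 7)}
a = 4

def F(x):
    """Unconditional CDF F_X(x) = P[X <= x]."""
    return sum(p for v, p in pmf.items() if v <= x)

def F_given_A(x):
    """F_{X|A}(x): equals 1 for x >= a, else F_X(x) / F_X(a)."""
    return 1.0 if x >= a else F(x) / F(a)

print(F_given_A(2), F(2) / F(4))   # both 0.5
print(F_given_A(5))                # 1.0
```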