Problems in Detection and Estimation Theory

Joseph A. O’Sullivan
Electronic Systems and Signals Research Laboratory

Department of Electrical and Systems Engineering
Washington University in St. Louis

St. Louis, MO 63130
[email protected]

May 4, 2006

Introduction

In this document, problems in detection and estimation theory are collected. These problems are primarily written by Professor Joseph A. O’Sullivan. Most have been written for examinations in ESE 524 or its predecessor EE 552A at Washington University in St. Louis, and are thereby copyrighted. Some come from qualifying examinations and others are simply problems from homework assignments in one of these classes. Use of these problems should include a citation to this document.

In order to give some organization to these problems, they are grouped into roughly six categories:

1. basic detection theory;

2. basic estimation theory;

3. detection theory;

4. estimation theory;

5. expectation-maximization;

6. recursive detection and estimation.

The separation into these categories is rather rough. Basic detection and estimation theory deal with finite dimensional observations and test knowledge of introductory, fundamental ideas. Detection and estimation theory problems are more advanced, touching on random processes, joint detection and estimation, and other important extensions of the basic theory. The use of the expectation-maximization algorithm has played an important role in research at Washington University since the early 1980’s, motivating inclusion of problems that test its fundamental understanding. Recursive estimation theory is primarily based on the Kalman filter. The recursive computation of a loglikelihood function leads to results in recursive detection.

The problems are separated by theoretical areas rather than applications based on the view that theory is more fundamental. Many applications touched on here are explored in significantly more depth elsewhere.

1 Basic Detection Theory

1.1 Analytically Computable ROC

Suppose that under hypothesis H1, the random variable X has probability density function

p_X(x) = \frac{3}{2} x^2, \quad -1 \le x \le 1. \qquad (1)


Under hypothesis H0, the random variable X is uniformly distributed on [−1, 1].

a. Use the Neyman-Pearson lemma to determine the decision rule to maximize the probability of detection subject to the constraint that the false alarm probability is less than or equal to 0.1. Find the resulting probability of detection.

b. Plot the receiver operating characteristic for this problem. Make your plot as good as possible.
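A minimal Matlab sketch for the plot in part b, assuming the test reduces to comparing |X| to a threshold eta between 0 and 1 (the likelihood ratio 3x^2 is monotone in |x|), so that PF = 1 − eta and PD = 1 − eta^3:

eta = linspace(0, 1, 200);    % thresholds on |X|
PF = 1 - eta;                 % P(|X| > eta | H0), X uniform on [-1, 1]
PD = 1 - eta.^3;              % P(|X| > eta | H1), density (3/2)x^2
plot(PF, PD); xlabel('P_F'); ylabel('P_D');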

1.2 Correlation Test of Two Gaussian Random Variables

Suppose that X1 and X2 are jointly distributed Gaussian random variables. There are two hypotheses for their joint distribution. Under either hypothesis they are both zero mean. Under hypothesis H1, they are independent with variances 20/9 and 5, respectively. Under hypothesis H2,

E\left( \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \begin{bmatrix} X_1 & X_2 \end{bmatrix} \right) = \begin{bmatrix} 4 & 4 \\ 4 & 9 \end{bmatrix} \qquad (2)

Determine the optimal Neyman-Pearson test. Sketch the form of the corresponding decision region.
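A standard background fact that may help here (not part of the original problem statement): for two zero-mean Gaussian hypotheses with covariance matrices K_1 and K_2, the log-likelihood ratio for H2 versus H1 is the quadratic form

\ln \Lambda(\mathbf{x}) = \frac{1}{2} \mathbf{x}^T \left( K_1^{-1} - K_2^{-1} \right) \mathbf{x} + \frac{1}{2} \ln \frac{\det K_1}{\det K_2},

so the decision region is a level set of a quadratic in (x1, x2).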

1.3 Discrete-Time Exponentially Decaying Signal in AWGN

Suppose that two data models are

Hypothesis 1: R(n) = \left(\frac{1}{4}\right)^n + W(n) \qquad (3)

Hypothesis 2: R(n) = c\left(\frac{1}{3}\right)^n + W(n), \qquad (4)

where under either hypothesis W(n) is a sequence of independent and identically distributed (i.i.d.) Gaussian random variables with zero mean and variance σ²; under each hypothesis, the noise W(n) is independent of the signal. The variable c and the variance σ² are known.

Assume that measurements are available for n = 0, 1, . . . , N − 1.
a. Find the loglikelihood ratio test.
b. What single quantity parameterizes performance?
c. What is the limiting signal to noise ratio as N goes to infinity?
d. What is the value of the variable c that minimizes performance?

1.4 Two Zero Mean Gaussian

Two random variables (R1, R2) are measured. Assume that under Hypothesis 0, the two random variables are independent Gaussian random variables with mean zero and variance 2. Under Hypothesis 1, the two random variables are independent Gaussian random variables with mean zero and variance 3.

a. Find the decision rule that maximizes the probability of detection subject to a constraint on the probability of false alarm, PF ≤ α.

b. Derive an equation for the probability of detection as a function of α.

1.5 Exponential Random Variables in Queuing

In queuing systems, packets or messages are processed by blocks in the system. These processing blocks are often called queues. A common model for a queue is that the time it takes to process a message is an exponential random variable. There may be an additional model for the times at which messages enter the queue, a common model of which is a Poisson process.


Recall that if X is exponentially distributed with mean μ, then

p_X(x) = \frac{1}{\mu} \exp\left( -\frac{x}{\mu} \right), \quad x \ge 0. \qquad (5)

Suppose that one queue is being monitored. A message enters at time t = 0 and exits at time t = T. Under hypothesis H1, T is an exponentially distributed random variable with mean μ1; under H0, T is exponentially distributed with mean μ0. Assume that μ1 > μ0.
a. Prove that the likelihood ratio test is equivalent to comparing T to a threshold γ.
b. For an optimum Bayes test, find γ as a function of the costs and the a priori probabilities.
c. Now assume that a Neyman-Pearson test is used. Find γ as a function of the bound on the false alarm probability PF, where PF = P(say H1 | H0 is true).
d. Plot the ROC for this problem for μ0 = 1 and μ1 = 5.
e. Now consider N independent and identically distributed measurements of T denoted T1, T2, . . . , TN. Show that the likelihood ratio test may be reduced to comparing

l(\mathbf{T}) = \frac{1}{N} \sum_{i=1}^{N} T_i \qquad (6)

to a threshold. Find the probability density function for l(T) under each hypothesis.
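A standard fact that may help with the last step of part e (background, not part of the problem statement): a sum of N i.i.d. exponential random variables with mean μ is Erlang distributed, so the sample mean l(T) has density

p_{l(\mathbf{T})}(x) = \frac{(N/\mu)^N\, x^{N-1}}{(N-1)!}\, e^{-Nx/\mu}, \qquad x \ge 0,

with μ = μ1 under H1 and μ = μ0 under H0.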

1.6 Gaussian Variance Test

In many problems in radar, the reflectivity is a complex Gaussian random variable. Sequential measurements of a given target that fluctuates rapidly may yield independent realizations of these random variables. It may then be of interest to decide between two models for the variance.

Assume that N independent measurements are made, with resulting i.i.d. random variables Ri, i =1, 2, . . . , N . The models are

H_1 : R_i \text{ is } N(0, \sigma_1^2), \quad i = 1, 2, \ldots, N, \qquad (7)

H_0 : R_i \text{ is } N(0, \sigma_0^2), \quad i = 1, 2, \ldots, N, \qquad (8)

where σ_1 > σ_0.
a. Find the likelihood ratio test.
b. Show that the likelihood ratio test may be simplified to comparing the sufficient statistic

l(\mathbf{R}) = \frac{1}{N} \sum_{i=1}^{N} R_i^2 \qquad (9)

to a threshold.
c. Find an expression for the probability of false alarm, PF, and the probability of miss, PM.
d. Plot the ROC for σ_0² = 1, σ_1² = 2, and N = 2.
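A minimal Matlab sketch for part d, assuming the Statistics Toolbox function chi2cdf is available; under variance σ², the sum of squares divided by σ² is chi-square with N degrees of freedom:

N = 2; s0 = 1; s1 = 2;          % the variances sigma_0^2 and sigma_1^2
gam = linspace(0, 40, 500);     % thresholds on sum(R.^2)
PF = 1 - chi2cdf(gam/s0, N);    % P(sum(R.^2) > gam | H0)
PD = 1 - chi2cdf(gam/s1, N);    % P(sum(R.^2) > gam | H1)
plot(PF, PD); xlabel('P_F'); ylabel('P_D');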

1.7 Binary Observations: Test for Bias in Coin Flips

Suppose that there are only two possible outcomes of an experiment; call the outcomes heads and tails. The problem here is to decide whether the process used to generate the outcomes is fair. The hypotheses are

H_1 : P(R_i = \text{heads}) = p, \quad i = 1, 2, \ldots, N \qquad (10)

H_0 : P(R_i = \text{heads}) = 0.5, \quad i = 1, 2, \ldots, N. \qquad (11)

Under each hypothesis, the random variables Ri are i.i.d.
a. Determine the optimal likelihood ratio test. Show that the number of heads is a sufficient statistic.


b. Note that the sufficient statistic does not depend on p, but the threshold does. For a finite number N, if only nonrandomized tests are considered, then the ROC has N + 1 points on it. For N = 10 and p = 0.7, plot the ROC for this problem. You may want to do this using a computer because you will need the cumulative distribution function for a binomial (see the sketch below).
c. Now consider a randomized test. In a randomized test, for each value of the sufficient statistic, the decision is random. Hypothesis H1 is chosen with probability φ(l) and H0 is chosen with probability 1 − φ(l). Consider the Neyman-Pearson criterion with probability of false alarm PF = α. Show that the optimal randomized strategy is a probabilistic mixture of two ordinary likelihood ratio tests; the first likelihood ratio test achieves the next greater probability of false alarm than α, while the second achieves the next lower probability of false alarm than α. Find φ as a function of α. Finally, note that the resulting ROC results from the original ROC after connecting the achievable points with straight lines.
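A minimal Matlab sketch for part b, assuming the Statistics Toolbox function binocdf; with k the number of heads, each integer threshold eta in the test "decide H1 if k ≥ eta" contributes one ROC point:

N = 10; p = 0.7;
eta = 0:N+1;                         % integer thresholds on the number of heads
PF = 1 - binocdf(eta - 1, N, 0.5);   % P(k >= eta | H0)
PD = 1 - binocdf(eta - 1, N, p);     % P(k >= eta | H1)
plot(PF, PD, 'o-');                  % connecting lines preview the randomized tests of part c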

1.8 Likelihood Ratio as a Random Variable

The problem follows Problem 2.2.13 from H. L. Van Trees, Vol. 1, closely.
The likelihood ratio Λ(R) is a random variable

\Lambda(R) = \frac{p(R|H_1)}{p(R|H_0)}. \qquad (12)

Prove the following properties of the random variable Λ.
a. E(Λ^n | H_1) = E(Λ^{n+1} | H_0).
b. E(Λ | H_0) = 1.
c. E(Λ | H_1) − E(Λ | H_0) = var(Λ | H_0).

1.9 Matlab Problem

Write Matlab subroutines that allow you to determine detection performance experimentally. The case studied here is the signal in additive Gaussian noise problem, for which the experimental performance should be close to optimal.

Let Wk be i.i.d. N(0, σ²), independent of the signal. In the general case, there can be a signal under either hypothesis,

H_0 : R_k = s_{0k} + W_k, \quad k = 1, 2, \ldots, K, \qquad (13)

H_1 : R_k = s_{1k} + W_k, \quad k = 1, 2, \ldots, K. \qquad (14)

For this problem, we look at the special case of deciding whether there is one or there are two exponentials in noise, as a function of the noise level and the two exponentials.

H_0 : R_k = \exp(-k/4) + W_k, \quad k = 1, 2, \ldots, K, \qquad (15)

H_1 : R_k = \exp(-k/4) + \exp(-k/(4T)) + W_k, \quad k = 1, 2, \ldots, K. \qquad (16)

Write Matlab routines to generate the data under either hypothesis. Hint:

kk = 1:K;                   % sample indices
w = sigma * randn(1, K);    % noise with variance sigma^2
r0 = exp(-kk/4) + w;        % data under H0; note the minus sign, matching (15)

Write a separate Matlab function to compute the optimal test statistic.
Write Matlab routines to generate N independent vectors, and compute the resulting N test statistics, under each hypothesis.
Write a Matlab routine to compute an empirical receiver operating characteristic from these data. Given any value of the threshold, the fraction of test statistic values greater than the threshold, given the null


hypothesis H0, is the empirical probability of false alarm. Similarly, given any value of the threshold, the fraction of test statistic values greater than the threshold, given hypothesis H1, is the empirical probability of detection. A sketch of this computation appears below.
a. Plot the empirical receiver operating characteristic for fixed σ² and K as T varies (T > 1).
b. Plot the empirical receiver operating characteristic for fixed T and K as σ² varies.
c. Plot the empirical receiver operating characteristic for fixed σ² and T as K varies.
Evaluate and summarize your results in a concise manner.
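A minimal Matlab sketch of the empirical ROC computation just described, assuming vectors t0 and t1 hold the N test-statistic values computed under H0 and H1, respectively:

thr = sort([t0(:); t1(:)]);              % candidate thresholds taken from the data
PF = arrayfun(@(g) mean(t0 > g), thr);   % empirical probability of false alarm
PD = arrayfun(@(g) mean(t1 > g), thr);   % empirical probability of detection
plot(PF, PD); xlabel('P_F'); ylabel('P_D');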

1.10 Test for Poisson Means

An experiment is performed to determine whether one or two radioactive substances are in a beaker. A sensor is set up near the beaker to measure the radioactive events. Assuming the measurements are made over a time interval which is short compared to the half-lives of the radioactive substances, the rate of decay is constant. The number of events for each substance is a Poisson counting process. Thus, we have two hypotheses:

H_0 : m(T) = n_1(T), \text{ where } P\{n_1(T) = n\} = \frac{(\lambda_1 T)^n}{n!} e^{-\lambda_1 T} \qquad (17)

H_1 : m(T) = n_1(T) + n_2(T), \text{ where } P\{n_2(T) = n\} = \frac{(\lambda_2 T)^n}{n!} e^{-\lambda_2 T} \qquad (18)

Here, T is the length of time over which the measurements are made. Assume the cost of a correct decision is 0 (C00 = 0, and C11 = 0).

a. For fixed T, P0, P1, C10 and C01, determine the optimum Bayes test.
b. Let λ1T = 1 and λ2T = 3. Plot the receiver operating characteristic. Note that the plot consists of a discrete set of points. Locate the point on your curve where P0C10/P1C01 is approximately equal to 4.
c. How does the ROC curve vary as T is increased? State your answer in general, but to quantify your answer, examine the specific case where the observation interval is doubled (T is doubled from its value in part b). In particular, you will want to plot the ROC for this new observation interval.
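A minimal Matlab sketch for part b, assuming the Statistics Toolbox function poisscdf; under H1 the count m(T) is Poisson with mean λ1T + λ2T = 4, since a sum of independent Poisson random variables is Poisson:

k = 0:15;                        % integer thresholds: decide H1 if m(T) >= k
PF = 1 - poisscdf(k - 1, 1);     % P(m(T) >= k | H0), mean 1
PD = 1 - poisscdf(k - 1, 4);     % P(m(T) >= k | H1), mean 4
plot(PF, PD, 'o');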

1.11 Basic Detection Theory

Suppose that under each of two hypotheses the three random variables X1, X2, and X3 are independent Poisson random variables. Under hypothesis H0, they all have mean 2. Under hypothesis H1, they have means 1, 2, and 3, respectively. Assume that the prior probabilities on the two hypotheses are each 0.5.

a. Find the minimum probability of error decision rule.
b. Draw a picture of the decision region.

1.12 Cauchy Distributions

Suppose that under hypothesis H1, the random variable X has a Cauchy probability density function centered at 1,

p_X(x) = \frac{1}{\pi \left[ 1 + (x-1)^2 \right]}. \qquad (19)

Under hypothesis H0, the random variable X has a Cauchy probability density function centered at 0,

p_X(x) = \frac{1}{\pi \left( 1 + x^2 \right)}. \qquad (20)

a. Given a single measurement, find the likelihood ratio test.
b. For a single measurement, sketch the receiver operating characteristic.
In your calculations, you may need the following indefinite integral:

\int \frac{1}{1 + x^2}\, dx = \tan^{-1}(x) + \text{constant}. \qquad (21)


1.13 Maxwell versus Rayleigh Distribution

Problem motivation. In some problems of a particle moving in space, it is not clear whether the motion is random in three dimensions, random but restricted to a two-dimensional surface, or somewhere in between (fractal motion of smoke, for example). In order to decide between these hypotheses, the positions of particles are measured and candidate density functions can be compared. While this can be turned into a dimensionality estimation problem, here we treat the simpler case of deciding between the two extreme cases of random motion in two or three dimensions.

It is assumed that only the distance of the particles relative to an origin is measurable, not the positions in three dimensions. In three dimensions, the distance at any time is Maxwell distributed. In two dimensions, the distance is Rayleigh distributed. The particles are assumed to be independent and identically distributed under either hypothesis.

Suppose that under hypothesis H1, each random variable Xi has a Maxwell probability density function

p_X(x) = \frac{\sqrt{2/\pi}\, x^2}{\sigma^3} \exp\left( -\frac{x^2}{2\sigma^2} \right), \quad x \ge 0. \qquad (22)

Under hypothesis H0, each random variable Xi has a Rayleigh probability density function

p_X(x) = \frac{x}{\sigma^2} \exp\left( -\frac{x^2}{2\sigma^2} \right), \quad x \ge 0. \qquad (23)

a. Given N independent and identically distributed measurements, determine the optimal Bayes test.
b. Determine the probability of false alarm for N = 1 measurement.
c. Determine the threshold used in the Neyman-Pearson test for N = 1.

1.14 3-ary Detection Theory

Suppose that there are three hypotheses, H1, H2, and H3. The prior probabilities of the hypotheses are each 1/3. There are three observed random variables, X1, X2, and X3. Under each hypothesis, the random variables are independent and equal

\begin{bmatrix} X_1 \\ X_2 \\ X_3 \end{bmatrix} = \begin{bmatrix} S_1 \\ S_2 \\ S_3 \end{bmatrix} + \begin{bmatrix} W_1 \\ W_2 \\ W_3 \end{bmatrix}. \qquad (24)

The random variables W1, W2, and W3 are independent noise random variables. They are Poisson distributed with means equal to 1. The noise random variables Wi are all independent of the signal random variables Sk. Under each hypothesis, the signal random variables S1, S2, and S3 are independent Poisson random variables. The means of the signal random variables depend on the hypotheses:

H_1 : E\left\{ \begin{bmatrix} S_1 \\ S_2 \\ S_3 \end{bmatrix} \right\} = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \qquad H_2 : E\left\{ \begin{bmatrix} S_1 \\ S_2 \\ S_3 \end{bmatrix} \right\} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} \qquad (25)

H_3 : E\left\{ \begin{bmatrix} S_1 \\ S_2 \\ S_3 \end{bmatrix} \right\} = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}. \qquad (26)

a. Find the optimal decision rule.
b. Find an expression for the probability of error.


1.15 N Pairs of Gaussian Random Variables in AWGN

Consider the hypothesis testing problem:

H0 : r = n (27)

H1 : r = s + n (28)

where n is a 2N × 1 real valued Gaussian random vector which is 0 mean with covariance matrix σ_n² I (n is N(0, σ_n² I)); the signal vector s is also a 2N × 1 real valued Gaussian random vector, N(0, K_s). The covariance matrix K_s is a block diagonal matrix with N blocks of size 2 × 2 each:

K_s = \begin{bmatrix}
\sigma_s^2 & \rho\sigma_s^2 & 0 & 0 & \cdots & 0 & 0 \\
\rho\sigma_s^2 & \sigma_s^2 & 0 & 0 & \cdots & 0 & 0 \\
0 & 0 & \sigma_s^2 & \rho\sigma_s^2 & \cdots & 0 & 0 \\
0 & 0 & \rho\sigma_s^2 & \sigma_s^2 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & 0 & \cdots & \sigma_s^2 & \rho\sigma_s^2 \\
0 & 0 & 0 & 0 & \cdots & \rho\sigma_s^2 & \sigma_s^2
\end{bmatrix} \qquad (29)

a. Determine the optimum Bayes test as a function of the costs and the a priori probabilities (assume the cost of correct decisions is 0). Determine a sufficient statistic for the problem. Simplify the expression for the sufficient statistic as much as possible. (You may want to use the fact that the inverse of a block diagonal matrix is also block diagonal, with the inverses of the blocks from the original matrix along its diagonal.)

b. Determine an equation the threshold must satisfy for the Neyman-Pearson criterion. Is this threshold easy to compute?

c. Plot the ROC for the particular case of N = 2, ρ = 0.25, and σ_n²/σ_s² = 0.25. Hint: This is probably most easily done after an appropriate rotation of the received data. It may also be done without a rotation. Remember, r is a 2N × 1 vector, so it is 4 × 1 for this part.

1.16 Information Rate Functions: Gaussian Densities with Different Means

Define the two hypotheses:

H1 : Ri = mi + Wi, i = 1, 2, . . . , n

H0 : Ri = Wi, i = 1, 2, . . . , n

where the random variables Wi are i.i.d. N(0, σ²). The means mi under hypothesis H1 are known.
a. Derive the log-likelihood ratio test for this problem. Show that when written in vector notation, the loglikelihood ratio equals

l(\mathbf{r}) = \frac{1}{\sigma^2} \left( \mathbf{r} - \frac{1}{2}\mathbf{m} \right)^T \mathbf{m}, \qquad (30)

where m is the n × 1 vector of the means mi.
b. Show that the performance is completely determined by a signal-to-noise ratio

d^2 = \sum_{i=1}^{n} \frac{m_i^2}{2\sigma^2}. \qquad (31)

c. Compute the moment generating function Φ0(s) for the log-likelihood ratio given hypothesis H0.
d. Compute the information rate function in two ways. First, directly from the log-moment generating function φ0(s) = ln Φ0(s), compute

I_0(\gamma) = \max_s \left[ s\gamma - \phi_0(s) \right]. \qquad (32)


Second, use the relative entropy formula and the tilted density function. Recall that the tilted density function is defined by

p(\mathbf{r} : s) = \frac{p_R(\mathbf{r}|H_0)\, e^{s\, l(\mathbf{r})}}{\Phi_0(s)}. \qquad (33)

The second formula for I0(γ) is then

I0(γ) = D(p(r : s)||pR(r|H0)). (34)

Show that the two expressions are equal. This requires substituting into the relative entropy formula the value of s such that the desired mean of l(R) is achieved (the desired mean is the threshold γ).
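As a check on part c (a sketch implied by the model above, not stated in the original): under H0, l(R) is Gaussian with mean −d² and variance 2d², so the Gaussian moment generating function gives

\phi_0(s) = \ln \Phi_0(s) = -s d^2 + s^2 d^2 = d^2\, s(s-1),

and the maximization in (32) is then an elementary quadratic problem.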

1.17 Information Rate Functions: Gaussian Densities with Different Variances

Consider the hypothesis testing problem with hypotheses:

H_1 : R_i \text{ is } N(0, \sigma_{1i}^2), \quad i = 1, 2, \ldots, n

H_0 : R_i \text{ is } N(0, \sigma_{0i}^2), \quad i = 1, 2, \ldots, n,

and under either hypothesis, the random variables Ri are independent.
a. Derive the log-likelihood ratio test for this problem. Show that the test statistic is

l(\mathbf{r}) = \sum_{k=1}^{n} \left[ -\frac{1}{2} \ln \frac{\sigma_{1k}^2}{\sigma_{0k}^2} - \frac{r_k^2}{2} \left( \frac{1}{\sigma_{1k}^2} - \frac{1}{\sigma_{0k}^2} \right) \right]. \qquad (35)

b. Find the tilted probability density function for this problem. Show that this tilted density function corresponds to independent Gaussian random variables with zero mean and variances σ_{sk}², where

\frac{1}{\sigma_{sk}^2} = \frac{1-s}{\sigma_{0k}^2} + \frac{s}{\sigma_{1k}^2}. \qquad (36)

c. Find the log-moment generating function φ(s). Recall (as in the last problem) that Φ(s) is the normalizing factor for the tilted density function and φ(s) = ln Φ(s).
d. For this problem, it may be difficult to find s explicitly as a function of the threshold γ. In order to circumvent this difficulty, the standard approach is to represent the curves parametrically as a function of s. Find γ as a function of s by using the property that the mean of the log-likelihood function using the tilted density equals γ.
e. Using the representation of γ from part d, the information rate function may be found as a function of s using

I0(γ(s)) = sγ(s) − φ(s), (37)

and the result from part c.

1.18 Information Rate Functions: Poisson Distribution Functions

Consider the hypothesis testing problem with hypotheses:

H_1 : R_i \text{ is Poisson with mean } \lambda_{1i}, \quad i = 1, 2, \ldots, n

H_0 : R_i \text{ is Poisson with mean } \lambda_{0i}, \quad i = 1, 2, \ldots, n,


and under either hypothesis, the random variables Ri are independent.
a. Derive the log-likelihood ratio test for this problem. Show that the test statistic is

l(\mathbf{r}) = \sum_{k=1}^{n} \left[ r_k \ln \frac{\lambda_{1k}}{\lambda_{0k}} - \lambda_{1k} + \lambda_{0k} \right]. \qquad (38)

b. Find the tilted probability density function for this problem. Show that this tilted density function corresponds to independent Poisson random variables with means λ_{sk}, where

ln[λsk] = (1 − s) ln[λ0k] + s ln[λ1k]. (39)

c-e. Repeat parts c, d, and e from the last problem here. This is needed again since there is no straightforward expression for s in terms of γ.

1.19 Information Rate Functions: Exponential Densities with Different Means

Consider the hypothesis testing problem with hypotheses:

H_0 : R_i \sim p_0(r) = \frac{1}{\lambda_0} e^{-r/\lambda_0}, \quad r \ge 0, \quad i = 1, 2, \ldots, n

H_1 : R_i \sim p_1(r) = \frac{1}{\lambda_1} e^{-r/\lambda_1}, \quad r \ge 0, \quad i = 1, 2, \ldots, n,

and under either hypothesis, the random variables Ri are independent.
a. Derive the log-likelihood ratio test for this problem and denote the loglikelihood ratio by l(r).
b. Find the log-moment generating function for the loglikelihood ratio, φ0(s). Recall that Φ0(s) is the moment generating function for the loglikelihood ratio (and is the normalizing factor for the tilted density function) and φ0(s) = ln Φ0(s). Let l(r) be the loglikelihood function derived in part a; then

\Phi_0(s) = E\left[ e^{s\, l(\mathbf{R})} \,\middle|\, H_0 \right]. \qquad (40)

c. Consider a threshold in the loglikelihood ratio test that gets smaller with n. In particular, consider the test that compares l(r) to a threshold γ/n. Find, as a function of γ, an upper bound on the probability of false alarm

P_F \le e^{-n I(\gamma)}. \qquad (41)

In particular, I(γ) is the information rate function. In fact, for all problems like this,

I_0(\gamma) = \lim_{n \to \infty} -\frac{1}{n} \ln P(l(\mathbf{R}) > \gamma/n \,|\, H_0). \qquad (42)

d. Find the tilted probability density function for this problem. Show that this tilted density function corresponds to independent exponentially distributed random variables with mean λs, where

\frac{1}{\lambda_s} = \frac{1-s}{\lambda_0} + \frac{s}{\lambda_1}. \qquad (43)

e. Let s correspond to γ as in the computation of the information rate function in part c. Show that the relative entropy between the tilted density function using s and the density function under hypothesis H0 equals I(γ). That is,

D(ps||p0) = I(γ(s)). (44)


2 Basic Estimation Theory

2.1 Basic Minimum Mean-Square Estimation Theory

Suppose that X, Y, and Z are jointly distributed Gaussian random variables. To establish notation, suppose that

E [X Y Z] = [μx μy μz] , (45)

and that

E\left( \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \begin{bmatrix} X & Y & Z \end{bmatrix} \right) = \begin{bmatrix} R_{xx} & R_{xy} & R_{xz} \\ R_{xy} & R_{yy} & R_{yz} \\ R_{xz} & R_{yz} & R_{zz} \end{bmatrix} \qquad (46)

a. Show that X and Y − E[Y|X] are independent Gaussian random variables.
b. Show that X, Y − E[Y|X], and Z − E[Z|(X, Y − E[Y|X])] are independent Gaussian random variables.
Hint. This problem is easy.

2.2 Three Jointly Gaussian Random Variables

Suppose there are three random variables X, Y, and Z. X is Gaussian distributed with zero mean and variance 4. Given X = x, the pair of random variables [Y, Z]^T is jointly Gaussian with mean [x/2 x/6]^T, and covariance matrix

K = \begin{bmatrix} 2 & 2/3 \\ 2/3 & 17/9 \end{bmatrix}. \qquad (47)

a. In terms of a realization of the pair of random variables [Y, Z] = [y, z], find the conditional mean of X: E[X|Y = y, Z = z].
b. Comment on the special form of the result in part a.
c. What is the variance of X given [Y, Z] = [y, z]?

2.3 Estimation of Variance and Correlation of Pairs of Gaussian Random Variables

Suppose a 2N × 1 real valued Gaussian random vector s is observed, where s is N(0, K_s), and where K_s is given by the expression in problem 1.15. The two parameters in this covariance matrix are σ_s² and ρ.
a. Find the maximum likelihood estimates for σ_s² and ρ.
b. Find the Cramer-Rao lower bound for the variance in estimating σ_s².

2.4 Uniform Plus Laplacian

Suppose that the random variable S is uniformly distributed between -3 and +3, denoted U[−3, +3]. The data R is a noisy measurement of S,

R = S + N, (48)

where N is independent of S and is Laplacian distributed with probability density function

p_N(n) = \frac{1}{2} e^{-|n|}. \qquad (49)

a. Find the minimum mean-square error estimate of S given R. Show your work.
b. Find the maximum a posteriori estimate of S given R.


2.5 Poisson Mean Estimation

Suppose that Xi, i = 1, 2, . . . , N are i.i.d. Poisson distributed random variables with mean λ.
a. Find the maximum likelihood estimate for λ.
b. Find the Cramer-Rao lower bound on the variance of any unbiased estimator.
c. Compute the bias of the maximum likelihood estimator.
d. Compute the variance of the maximum likelihood estimator. Compare this result to the Cramer-Rao lower bound from part b, and comment on the result.
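For reference, the standard results to check answers against (a sketch, assuming the usual i.i.d. Poisson likelihood):

\ln p(\mathbf{X}|\lambda) = \sum_{i=1}^{N} \left( X_i \ln \lambda - \lambda - \ln X_i! \right), \qquad \hat{\lambda}_{ML} = \frac{1}{N} \sum_{i=1}^{N} X_i, \qquad \mathrm{var}(\hat{\lambda}) \ge \frac{\lambda}{N}.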

2.6 Covariance Function Structure

Suppose that X1, X2, . . . , Xn is a set of zero mean Gaussian random variables. The covariance between any two random variables is

E\{X_i X_j\} = 0.9^{|i-j|}. \qquad (50)

a. Derive an expression for E{X2|X1}.
b. Derive an expression for E{X3|X2, X1}.
c. Find the conditional mean squared error

E\left\{ \left( X_3 - E\{X_3|X_2, X_1\} \right)^2 \right\}. \qquad (51)

d. Can you generalize this to a conclusion about the form of E{Xn|X1, X2, . . . , Xn−1}? Given previous values, what is a minimal sufficient statistic for estimating Xn?

Comment: Note that any covariance function of the form α^{|i−j|} has the property discussed in this problem. Relate this covariance function to the first-order autoregressive model

Xn = αXn−1 + Wn, (52)

where Wn is a sequence of independent and identically distributed random variables, each with zero mean and variance

\sigma^2 = 1 - \alpha^2. \qquad (53)

2.7 Independent and Identically Distributed Pairs of Gaussian Random Variables

Suppose that (X1, Y1), (X2, Y2), . . . , (Xn, Yn) are pairwise independent and identically distributed jointly Gaussian random variables. The mean of Xi is 3. The mean of Yi is 5. The variance of Xi is 17. The variance of Yi is 11. The covariance between Xi and Yi is 7.

a. Find the expression for E[Xi|Yi] in terms of Yi.
b. Find the expression for

E\left[ \frac{1}{n} \sum_{i=1}^{n} X_i \,\middle|\, Y_1, Y_2, \ldots, Y_n \right]. \qquad (54)

Given the random variables {Y1, Y2, . . . , Yn}, what is the sufficient statistic for the mean \frac{1}{n}\sum_{i=1}^{n} X_i?

c. Now define two new random variables

T = \frac{1}{n} \sum_{i=1}^{n} X_i, \qquad S = \frac{1}{n} \sum_{i=1}^{n} Y_i. \qquad (55)

Find the joint distribution for the random variables T and S.
d. Find the expression for E[T|S] in terms of S. Compare this to your answer in part b. Comment on this result.


2.8 Basic Estimation Theory

Suppose that

R_1 = 5\cos\theta + W_1, \qquad (56)
R_2 = 5\sin\theta + W_2, \qquad (57)

where θ is a deterministic parameter to be estimated and W1 and W2 are independent and identically distributed Gaussian random variables, with zero mean and variance 3.

a. Find the maximum likelihood estimate for θ.
b. Find the Cramer-Rao lower bound for the estimate of θ. Does this bound depend on the true value of θ? Comment on this. The variable θ has units of radians.
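A sketch of the reduction for part a (a standard computation under the model above, offered as a check): maximizing the likelihood is equivalent to minimizing (R1 − 5 cos θ)² + (R2 − 5 sin θ)², that is, maximizing R1 cos θ + R2 sin θ, which gives

\hat{\theta}_{ML} = \tan^{-1}\left( \frac{R_2}{R_1} \right),

interpreted as the four-quadrant arctangent, atan2(R2, R1).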

2.9 Compound Model: Simple Discrete Case

Suppose that an experiment has three possible outcomes, denoted 1, 2, and 3. Each run of the experiment consists of two parts. In the first part, a biased coin is flipped, and heads occurs with unknown probability p. If a head occurs, then the three outcomes have probabilities [1/2 1/3 1/6]. Otherwise, the three outcomes have probabilities [1/6 1/3 1/2]. In the second part of the experiment, one of the three outcomes is drawn from the distribution determined in the first part.

This experiment is run n independent times.
a. Find the probabilities for outcomes 1, 2, and 3 in any run of the experiment in terms of the probability p. You must do this part correctly to solve this problem.
b. Find the probability distribution for the n runs of the experiment.
c. From the distribution you determined in part b, determine a sufficient statistic for this problem.
d. Find the maximum likelihood estimate for p.
e. Is the estimate that you found in part d biased?

2.10 Cramer-Rao Bound for Gamma Density: problem 12, p. 83, Alfred O. Hero, Statistical Methods for Signal Processing, University of Michigan, unpublished notes, January 2003.

Let X1, X2, . . . , Xn be i.i.d. drawn from the Gamma density

p(x|\theta) = \frac{1}{\Gamma(\theta)} x^{\theta-1} e^{-x}, \quad x \ge 0, \qquad (58)

where θ is an unknown nonnegative parameter and Γ(θ) is the Gamma function. Note that Γ(θ) is the normalizing constant,

\Gamma(\theta) = \int_0^{\infty} x^{\theta-1} e^{-x}\, dx. \qquad (59)

The Gamma function satisfies a recurrence formula Γ(θ + 1) = θΓ(θ).
a. Find the Cramer-Rao lower bound on unbiased estimators of θ using X1, X2, . . . , Xn. You may leave your answer in terms of the first and second derivatives of the Gamma function.

3 Detection Theory

3.1 M-ary Detection and Chernoff Bounds

There are many different ways to explore the performance of M-ary detection problems.
Suppose that there are M random vectors Sm, m = 1, 2, . . . , M, each vector having N components (dimension N × 1). These vectors are independent and identically distributed (i.i.d.). Furthermore, the


components of the vectors are i.i.d. Gaussian with zero mean and variance σ_s². Thus, the probability density function for a vector Sm is

p(\mathbf{s}_m) = \prod_{k=1}^{N} \frac{1}{\sqrt{2\pi\sigma_s^2}}\, e^{-\frac{s_{mk}^2}{2\sigma_s^2}}. \qquad (60)

The model for the measured data in terms of the signal Sm is

R = Sm + W, (61)

where W is a noise vector with i.i.d. zero mean, Gaussian distributed components whose variance is σ_w², and W is independent of Sm.
a. Let’s first assume that the signal vectors are not known at the receiver. Suppose that given R it is desired to estimate the random vector Sm. Find the minimum mean-squared error estimate; denote this estimator by \hat{S}(r). Find the mean-squared error of the estimate.

b. Compute the average mean-square error for our MMSE estimate \hat{S}(r),

\xi_{\mathrm{ave}}^2 = \frac{1}{N} E\left\{ (S_m - \hat{S}(R))'(S_m - \hat{S}(R)) \right\}. \qquad (62)

Show that for any ε,

P\left( \left| \frac{1}{N} (S_m - \hat{S}(R))'(S_m - \hat{S}(R)) - \xi_{\mathrm{ave}}^2 \right| > \epsilon \right) \qquad (63)

goes to zero exponentially fast as N gets large. HINT: Use a Chernoff bound; a reminder of the bound appears at the end of this problem.
c. Assume that we have a channel N uses of which may be modeled by (61), when the transmitted signal is sm. Assume a receiver knows all of the possible transmitted signals S = {sm, m = 1, 2, . . . , M}, and that the receiver structure is of the following form. First, the receiver computes \hat{s}(r). Second, the receiver looks through the vectors in S and finds all m such that sm satisfies

\left| \frac{1}{N} (s_m - \hat{s}(r))'(s_m - \hat{s}(r)) - \xi_{\mathrm{ave}}^2 \right| < \epsilon. \qquad (64)

If there is only one, then it is decided to be the transmitted signal. If there is more than one, the receiver randomly chooses any signal. To evaluate the probability of error, we do the following. Suppose that sm was the signal sent and let sl be any other signal. Show that the probability

P\left( \left| \frac{1}{N} (s_l - \hat{s}(r))'(s_l - \hat{s}(r)) - \xi_{\mathrm{ave}}^2 \right| < \epsilon \right) \qquad (65)

goes to zero exponentially fast as N gets large; find the exponent. HINT: Use a Chernoff bound.
d. To finish the performance analysis, note that the probability that s_{l1} satisfies (64) is independent of the probability that s_{l2} satisfies (64), for l1 ≠ l2. Use the union bound to find an expression for the probability that no l other than l = m satisfies (64). Then, find an exponential bound on the following expression in terms of K = log₂ M

P(\text{error}|m) < P(s_m \text{ does not satisfy (64)}) + \left( 1 - P(\text{all } l \ne m \text{ do not satisfy (64)}) \right) \qquad (66)

If you get this done, then you have an exponential rate on the error in terms of K, ε, σ_s², and σ_w². Note that Shannon derived the capacity of this channel to be

C = \frac{1}{2} \log\left( 1 + \frac{\sigma_s^2}{\sigma_w^2} \right). \qquad (67)

Is C related to your exponent at all? It would relate to the largest possible K/N at which exponential error rates begin. Do not spend a lot of time trying to relate this K/N to C if it does not drop out, since it probably will not. However, you surely made a mistake if your largest rate is greater than C.
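The hints above refer to the Chernoff bound; as a reminder (standard background, not part of the problem statement), for any random variable X and any threshold a,

P(X \ge a) \le \min_{s \ge 0} e^{-sa}\, E\left[ e^{sX} \right],

and when X is a sum of N i.i.d. terms, the right-hand side decays exponentially in N whenever the optimized exponent is negative.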


3.2 Symmetric 3-ary Detection with Sinusoids

Suppose that three hypotheses are equally likely. The hypotheses are H1, H2, and H3 with corresponding data models

H_1 : r(t) = a\cos(100t) + w(t), \quad 0 \le t \le \pi \qquad (68)
H_2 : r(t) = a\cos(100t + 2\pi/3) + w(t), \quad 0 \le t \le \pi \qquad (69)
H_3 : r(t) = a\cos(100t - 2\pi/3) + w(t), \quad 0 \le t \le \pi, \qquad (70)

where w(t) is white Gaussian noise with mean zero and intensity N0/2,

E[w(t)w(\tau)] = \frac{N_0}{2} \delta(t - \tau). \qquad (71)

a. Assume that a > 0 is known and fixed. Determine the optimal receiver for the minimum probability of error decision rule. Draw a block diagram of this receiver. Does your receiver depend on a? Comment on the receiver design.

b. Derive an expression for the probability of error as a function of a. Simplify the expression if possible.

3.3 Detection for Two Measured Gaussian Random Processes

Suppose that two random processes, r1(t) and r2(t), are measured. Under hypothesis H1, there is a signal present in both, while under hypothesis H0 (the null hypothesis), there is no signal present. More specifically, under H1,

r_1(t) = X e^{-\alpha t} \cos(\omega t) + w_1(t), \quad 0 \le t \le T \qquad (72)
r_2(t) = Y e^{-\alpha t} \sin(\omega t) + w_2(t), \quad 0 \le t \le T, \qquad (73)

where

• X and Y are independent zero mean Gaussian random variables with variances σ²

• α and ω are known

• w1(t) and w2(t) are independent realizations of white Gaussian noise, both with intensity N0/2

• w1(t) and w2(t) are independent of X and Y .

Under H0,

r_1(t) = w_1(t), \quad 0 \le t \le T \qquad (74)
r_2(t) = w_2(t), \quad 0 \le t \le T, \qquad (75)

where w1(t) and w2(t) are as above.
a. Find the decision rule that maximizes the probability of detection given an upper bound on the probability of false alarm.
b. Find expressions for the probability of false alarm and the probability of detection using the optimal decision rule. In order to simplify this, you may need to assume that the two exponentially damped sinusoids are orthogonal over the interval of length T.
c. Analyze the expressions obtained in part b. How does the performance depend on α, ω, T, σ², and N0/2?


3.4 Deterministic Signals in WGN: Comparison of On-Off, Orthogonal, and Antipodal Signaling

In this problem, you will compare the performance of three detection problems and determine which of the three performs the best. The following assumptions are the same for each problem:

• The prior probabilities are equal: P1 = P2 = 0.5.

• The performance is measured by the probability of error.

• The models are known signal plus additive white Gaussian noise.

• As shown below, for each problem the average signal energy (averaged over the two hypotheses) is E.

• The signals are given in terms of s1(t) and s2(t) plotted in the figure below.

• The noise w(t) is white Gaussian noise with intensity N0/2, and is independent of the signal under each hypothesis.

Problem 1:

H_1 : r(t) = \sqrt{2E}\, s_1(t) + w(t), \quad 0 \le t \le 4,
H_2 : r(t) = w(t), \quad 0 \le t \le 4.

Problem 2:

H_1 : r(t) = \sqrt{E}\, s_1(t) + w(t), \quad 0 \le t \le 4,
H_2 : r(t) = \sqrt{E}\, s_2(t) + w(t), \quad 0 \le t \le 4.

Problem 3:

H_1 : r(t) = \sqrt{E}\, s_1(t) + w(t), \quad 0 \le t \le 4,
H_2 : r(t) = -\sqrt{E}\, s_1(t) + w(t), \quad 0 \le t \le 4.

Determine the optimal performance for each of the three problems. Compare these performances.

Figure 1: Signals s1(t) and s2(t).


3.5 Random Signal Amplitude: Comparison of Antipodal and Orthogonal Signaling

In this problem, you will compare the performance of two problems, only this time, the signals are random. Let A be a Gaussian random variable with mean 0 and variance σ². The random variable A is independent of the noise w(t). The two signals s1(t) and s2(t) are the same as in detection problem 3.4, as shown in Figure 1. As before, the hypotheses are equally likely, the goal is to minimize the probability of error, and the independent additive white Gaussian noise w(t) has intensity N0/2.

Problem A:

H_1 : r(t) = A\sqrt{E}\, s_1(t) + w(t), \quad 0 \le t \le 4,
H_2 : r(t) = A\sqrt{E}\, s_2(t) + w(t), \quad 0 \le t \le 4.

Problem B:

H_1 : r(t) = A\sqrt{E}\, s_1(t) + w(t), \quad 0 \le t \le 4,
H_2 : r(t) = -A\sqrt{E}\, s_1(t) + w(t), \quad 0 \le t \le 4.

a. Find the probability of error for the minimum probability of error decision rule for Problem B.
b. Find an expression (given as an integral, but without solving the integral) for the probability of error for Problem A. Show that the performance is determined by the ratio SNR = 2Eσ²/N0.
c. Which of these two problems has better performance and why?
d. This part is difficult, so you may want to save it for last. Solve the integral for the minimum probability of error for Problem A. Show that the performance is determined by the fraction of an ellipse that is in a specific quadrant. Show that this fraction equals

\tan^{-1} \frac{1}{\sqrt{SNR + 1}}, \qquad (76)

and therefore that the probability of error is

P(\text{error}) = \frac{2}{\pi} \tan^{-1} \frac{1}{\sqrt{SNR + 1}}. \qquad (77)

3.6 Degenerate Detection Problem

Consider the binary hypothesis testing problem over the interval −1 ≤ t ≤ 1,

H_0 : r(t) = n(t), \quad -1 \le t \le 1, \qquad (78)
H_1 : r(t) = s(t) + n(t), \quad -1 \le t \le 1, \qquad (79)

where s(t) is a known (real-valued) signal and n(t) is a (real-valued) zero mean Gaussian random process with covariance function

E[n(t)n(u)] = 1 + tu, \quad -1 \le t \le 1, \; -1 \le u \le 1, \qquad (80)
= K_n(t, u). \qquad (81)

The noise n(t) is assumed to be independent of the signal s(t).
a. Find the eigenfunctions and eigenvalues for Kn(t, u) over the interval −1 ≤ t ≤ 1.
b. Suppose that s(t) = 2 − 3t, for −1 ≤ t ≤ 1. Find the likelihood ratio test. What is the signal to noise ratio?
c. Note that there are only a finite number of nonzero eigenvalues. Comment on the implications for the performance if the signal s(t) is not in the subspace of signal space spanned by the corresponding eigenfunctions. That is, what can be said about the performance if s(t) is not a linear combination of the eigenfunctions corresponding to the nonzero eigenvalues?


3.7 Random Phase in Sinusoid

In a binary hypothesis testing problem, the two hypotheses are

H_0 : r(t) = w(t), \quad 0 \le t \le T, \qquad (82)

H_1 : r(t) = \sqrt{2E/T} \cos(\omega_c t + \theta) + w(t), \quad 0 \le t \le T. \qquad (83)

Here, w(t) is WGN with intensity N0/2, ωc is a known frequency (a multiple of 2π/T), and θ is a random variable with probability density function pθ(Θ).
a. Determine the likelihood ratio. Interpret the result as an expected value of the likelihood if θ is known. Show that a sufficient statistic consists of the pair (Lc, Ls) where

L_c = \int_0^T r(t) \sqrt{2/T} \cos(\omega_c t)\, dt \quad \text{and} \quad L_s = \int_0^T r(t) \sqrt{2/T} \sin(\omega_c t)\, dt. \qquad (84)

b. Assume that θ is uniformly distributed between 0 and 2π. Derive the optimum test

L_c^2 + L_s^2 > \gamma. \qquad (85)

Hints: For this distribution, show that the likelihood reduces to

I_0\!\left( \frac{2\sqrt{E}\sqrt{L_c^2 + L_s^2}}{N_0} \right) e^{-E/N_0} \qquad (86)

where I0(·) is a modified Bessel function of the first kind defined by

I_0(x) = \frac{1}{\pi} \int_0^{\pi} e^{x\cos\theta}\, d\theta = \frac{1}{2\pi} \int_0^{2\pi} e^{x\cos(\theta + \phi)}\, d\theta \qquad (87)

for all φ. Then use the fact that I0(·) is a monotonic function (so it is invertible) to get the result. A plot of I0 is given on page 340 of the text. Incidentally, you will want to read some of the nearby pages to solve this problem (including pg. 344).
c. Find the probabilities of false alarm and detection. Show pertinent details.
d. Read the section in the book for pθ given by (364), pg. 338.

3.8 Detection of Signal in Simple Colored Noise

In this problem, we look at detection in the presence of colored noise. Under hypothesis H0,

r(t) = n(t), 0 ≤ t ≤ 3, (88)

while under hypothesis H1,

r(t) = s(t) + n(t), \quad 0 \le t \le 3, \qquad (89)

where the noise n(t) is independent of the signal s(t). Suppose that n(t), 0 ≤ t ≤ 3 is a Gaussian random process with covariance function

Kn(t, τ ) = δ(t − τ ) + 2s1(t)s1(τ ) + 3s2(t)s2(τ ) + 5s3(t)s3(τ ) + 7s4(t)s4(τ ), 0 ≤ t ≤ 3, 0 ≤ τ ≤ 3,

where

s_1(t) = \sqrt{2/3} \cos(2\pi t/3), \quad 0 \le t \le 3,
s_2(t) = \sqrt{2/3} \sin(2\pi t/3), \quad 0 \le t \le 3,
s_3(t) = \sqrt{2/3} \cos(4\pi t/3), \quad 0 \le t \le 3,
s_4(t) = \sqrt{2/3} \sin(4\pi t/3), \quad 0 \le t \le 3.


These signals that parameterize the covariance function for the noise are shown in the top plots in Figure 2. The signal s(t) is shown in the lower plot in Figure 2 and is equal to

s(t) = 11 cos(4πt/3 + π/3), 0 ≤ t ≤ 3. (90)

a. Derive the optimal Bayes decision rule, assuming prior probabilities P0 and P1, and costs of false alarm and miss CF and CM, respectively.
b. What are the sufficient statistics? Find the joint probability density function for the sufficient statistics.
c. Derive an expression for the probability of detection for a fixed threshold.

Figure 2: Upper Plot: Signals s1(t), s2(t), s3(t), and s4(t). Lower Plot: Signal s(t) for hypothesis H1.

4 Estimation Theory

4.1 Curve Fitting

In this problem you will solve a curve fitting problem. Assume that it is known that x(t) has the parameterized form

x(t) = at^2 + bt + c. \qquad (91)

The data available is the function r(t) where

r(t) = x(t) + n(t), \quad -T/2 < t < T/2, \qquad (92)

where n(t) is a Gaussian random process with covariance function Kn(t, u).
a. In this first part of the problem, assume that the parameters a, b, and c are nonrandom. Find the maximum likelihood estimates for these parameters. This solution should be in a general form involving integrals. Note that this solution involves the inverse of a matrix. In order to give an example of when this matrix is invertible, assume that Kn(t, u) = (N0/2)δ(t − u), and find the inverse of this matrix.
b. Now assume that the parameters are independent, zero mean, Gaussian random variables with variances σ_a², σ_b², and σ_c². Find the equation for the general MAP estimates. For the specific case of each of these variances being equal to 1 and n(t) being white noise with intensity N0/2, find the inverse of the matrix needed for the solution.


Figure 3: Signals s1(t), s2(t), and s3(t).

4.2 Estimation of Linear Combination of Signals

Suppose that

r(t) = as1(t) + bs2(t) + cs3(t) + w(t), 0 ≤ t ≤ 6, (93)

where

• w(t) is white Gaussian noise, independent of the signals, with intensity N0/2,

• the signals s1(t), s2(t), and s3(t) are shown in Figure 3, and

• the scale factors a, b, and c are independent and identically distributed Gaussian random variables with zero mean and variance 9.

a. Find the maximum a posteriori (MAP) estimates for a, b, and c given r(t), 0 ≤ t ≤ 6.
b. Find the Fisher information for estimating a, b, and c given r(t), 0 ≤ t ≤ 6 (take into account that they are random variables).
c. Do the MAP estimates achieve the Cramer-Rao lower bound consistent with the Fisher information from part b? Comment.

4.3 Time-Scale Factor Estimation

4.3.1 Background Motivation

In ultrasonic imaging, the substance through which the sound waves pass determines the speed of sound. Local variations in the speed of sound, if detectable, may be used to infer properties of the medium. On a grand scale, whales are able to communicate in the ocean over long distances because variations in the speed of sound with depth create effective waveguides near the surface, thereby enabling sound to propagate over long distances. On a smaller scale, variations in the speed of sound in human tissue may confound inferences about other structures. The speed of sound in a uniform medium causes an ultrasound signal to be scaled in time. Thus, an estimate of a time-scale factor, as discussed below, may be used to derive an estimate of the speed of sound.

A second example in which time-scale estimation plays a role is in estimating the speed of an emitter or a reflector whose velocity is such that the usual narrowband assumption does not hold. This is the case if


the velocity is significant relative to the speed of propagation. This is also the case if the bandwidth of the signal is comparable to its largest frequency, so the notion of a carrier frequency does not make sense.

4.3.2 Problem Statement

Suppose that a real-valued signal with an unknown time-scale factor is observed in white Gaussian noise:

r(t) = \sqrt{aE}\, s(at) + w(t), \quad -T/2 \le t \le T/2, \qquad (94)

where w(t) is white Gaussian noise with intensity N0/2, w(t) is independent of the signal and the unknown time-scale factor a, s(t) is a signal of unit energy, and E is the energy of the transmitted signal. This problem relates to finding the maximum likelihood estimate for the time-scale factor a subject to the constraint that a > 0.

Assume that T is large enough so that, whatever value of a is considered,

\int_{-T/2}^{T/2} |\sqrt{a}\, s(at)|^2\, dt = \int_{-aT/2}^{aT/2} |s(\tau)|^2\, d\tau = \int_{-T/2}^{T/2} |s(t)|^2\, dt = \int_{-\infty}^{\infty} |s(t)|^2\, dt = 1. \qquad (95)

Essentially, this assumption allows us to make some of the integrals that are relevant for the problem go over an infinite time interval. Do not use this assumption in part a below.

a. Find the log-likelihood functional for estimating a.
b. Derive an equation that the maximum likelihood estimate for a must satisfy. What role does the constraint a > 0 play?
c. For a general estimation problem, state in words what the Cramer-Rao bound is.
d. Assume that the function s(t) is twice continuously differentiable. Find the Cramer-Rao lower bound for any estimate of a. Hint: For a > 0 and α > 0,

\int_{-\infty}^{\infty} \sqrt{a}\, s(at)\, \sqrt{\alpha}\, s(\alpha t)\, dt = \int_{-\infty}^{\infty} \sqrt{\frac{\alpha}{a}}\, s(\tau)\, s\!\left( \frac{\alpha}{a} \tau \right) d\tau = C\!\left( \frac{\alpha}{a} \right). \qquad (96)

The function C is a time-scale correlation function. This correlation between two time-scaled signals depends only on the ratio of the time scales. This correlation is symmetric in a and α, so C(ρ) = C(1/ρ). Note that the maximum of C is obtained at C(1) = 1. C(ρ) is differentiable at ρ = 1 if s(t) is differentiable. If, in addition to being differentiable, ts²(t) goes to zero as t gets large, then dC/dρ(1) = 0. Derive the Cramer-Rao lower bound in terms of C(ρ).

4.4 Finding a Needle in a Haystack

In order to understand the difficulty of finding a needle in a haystack, one first must understand the statistics of the haystack. In this problem, you will derive a model for a haystack and then derive an algorithm for estimating how many pieces of hay are in the stack.

Suppose that a piece of hay is measured using hay units, abbreviated hu. On average a piece of hay has length 1 hu and width 0.02 hu. The standard deviation of each of these dimensions is 20%. Assume a fill


factor of f = 50% of a volume; that is, when the hay is stacked, on average 50% of the volume is occupied by hay.

A stack of hay is typically much higher in the middle than around the edges. For simplicity, assume the stack is approximately circularly symmetric.

a. Write down a reasonable model for the shape of a haystack. Give the model in hay units (hu). Assume there is an overall scale factor that determines the size, A, given in hu. Thus, the volume of the haystack scales as A³ (doubling A increases the volume by a factor of 8).

b. Write down a reasonable model for N pieces of hay stacked up. That is, assume a distribution on hay shapes consistent with the statistics above. Determine the distribution on the volume occupied by N pieces of hay.

c. For your model of the shape of a haystack, derive a reasonable estimator of the number of pieces of hay. Justify your model based on an optimality criterion and your statistical models above.

d. Evaluate the performance of your estimator as a function of A.
e. As A gets very large, what is the form of your estimator?
f. Comment on the difficulty of finding a needle in a haystack. You may assume that the length of the needle is much smaller than 1 hu.

4.5 Two Dimensional Random Walk, Observation at Fixed Time

Suppose that a random walk takes place in two dimensions. At time t = 0, a particle starts at (x, y) = (0, 0). The random walk begins at t = 0. At any time t > 0, the probability density function for the location of the particle is

p(x, y; t, D) = \frac{1}{2\pi Dt} \exp\left( -\frac{1}{2Dt} \left[ x^2 + y^2 \right] \right), \qquad (97)

where D is the diffusion constant. If x and y have units of length, and t has units of time, then D has units of length squared per unit time.

Parts a and b

Suppose that the random walk is started at time t = 0 and then at some fixed time t = T, the position is measured exactly. Let this be repeated N independent times, yielding data {(Xi, Yi), i = 1, 2, . . . , N}.

a. Find the maximum likelihood estimate for the diffusion constant D from these N measurements.
b. Find the Cramer-Rao lower bound on estimating D. Is the maximum-likelihood estimator efficient?

Parts c, d, e, and f

Now let us consider a more practical situation. The device making the measurement of position is of finite size. Thus, for

x^2 + y^2 \le r, \qquad (98)

the measurement is perfect (where r is the radius of the device). For

x^2 + y^2 \ge r, \qquad (99)

there is no measurement; that is, no particle is detected.
Suppose that this experiment is run N independent times. Out of those N times, only M, where M ≤ N, runs yield particle measurements.
c. Find the probability for any value of M. That is, find P(M = m), for each 0 ≤ m ≤ N.
d. Find the conditional distribution on the particles measured given M. In order to fix notation, the particles that are measured are relabeled from 1 to M. The set of measurements, given M, is {(Xi, Yi), i = 1, 2, . . . , M}.
e. Find the maximum likelihood estimate for D.
f. Find the Cramer-Rao bound for estimating D given these data. Compare this bound to the original bound. Under what conditions is the maximum likelihood estimate for D approximately efficient?


4.6 Gaussian Estimation from Covariance Functions

Two jointly Gaussian random processes a(t) and r(t) have zero mean and are stationary. The covariance functions are

E[r(t)r(\tau)] = K_{rr}(t - \tau) = 7e^{-3|t-\tau|} + 6\delta(t - \tau) \qquad (100)
E[a(t)a(\tau)] = K_{aa}(t - \tau) = 7e^{-3|t-\tau|} \qquad (101)
E[a(t)r(\tau)] = K_{ar}(t - \tau) = 7e^{-3|t-\tau|} \qquad (102)

a. Find the probability density function for a(1) given all values of r(t) for −∞ < t < ∞.
b. Interpret the optimal minimum mean square error (MMSE) estimator for a(t) given r(u) for −∞ < u < ∞ in terms of power spectra.
c. Now consider a measurement over a finite time interval, r(u), 0 ≤ u ≤ 2. Find the form of the optimal MMSE estimator for a(t) for 0 ≤ t ≤ 2. Do not work through all of the details on this part; just convince me that you could if you had enough time.

4.7 Gaussian Estimation: Covariance Functions and Dimension One Kalman Filter

In this problem, we use the same data model as in problem 4.6. Two jointly Gaussian random processes a(t) and r(t) have zero mean and are stationary. The covariance functions are

E[r(t)r(\tau)] = K_{rr}(t - \tau) = 7e^{-3|t-\tau|} + 6\delta(t - \tau) \qquad (104)
E[a(t)a(\tau)] = K_{aa}(t - \tau) = 7e^{-3|t-\tau|} \qquad (105)
E[a(t)r(\tau)] = K_{ar}(t - \tau) = 7e^{-3|t-\tau|} \qquad (106)

Suppose that a causal estimator is desired. Design an optimal MMSE estimator of the form

\frac{d\hat{a}}{dt} = -\lambda \hat{a}(t) + g\, r(t). \qquad (108)

Note that this estimator model has two parameters, λ and g.
HINT: Consider the state space system

\frac{da}{dt} = -3a(t) + u(t) \qquad (109)

r(t) = a(t) + w(t), \qquad (110)

where u(t) and w(t) are appropriately chosen white noise processes. Do you know an optimal causal MMSE estimator for a(t)?
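Background that may help with the hint (standard scalar Kalman-Bucy facts, not part of the problem statement): for dx/dt = −λ₀x + u and r = x + w, with E[u(t)u(τ)] = qδ(t − τ) and E[w(t)w(τ)] = Rδ(t − τ), the steady-state error variance P solves the algebraic Riccati equation, which then fixes the gain and the estimator pole:

0 = -2\lambda_0 P + q - \frac{P^2}{R}, \qquad g = \frac{P}{R}, \qquad \lambda = \lambda_0 + g.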

4.8 Linear Estimation Theory: Estimation of Colored Noise in AWGN

Suppose that n(t), −∞ < t < ∞, is a stationary Gaussian random process with covariance function

E[n(t)n(t - \tau)] = \delta(\tau) + \frac{5}{4} e^{-2|\tau|} = K_n(\tau). (111)


a. Assume that n(t) = n_c(t) + w(t), where w(t) and n_c(t) are independent stationary Gaussian random processes, w(t) is white Gaussian noise, and n_c(t) has finite mean energy over any finite interval. Find the covariance functions for w(t) and n_c(t). Denote the covariance function for n_c(t) by K_c(τ).

b. Find an equation for the optimal estimate of n_c(t) given n(u), −∞ < u ≤ t, in terms of K_n and K_c. Be as specific as you can; that is, make sure that the equations define the unique solution to the problem. Make sure that you account for the causality; that is, the estimate of n_c(t) at time t depends only on current and previous values of n(t).

c. Examine the equations from part b carefully and argue that the unique solution for the estimator is a linear, causal, time-invariant filter. Find the Fourier transform of the impulse response of this filter and find the impulse response.
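For parts b and c, the spectra that enter the Wiener-Hopf solution factor nicely (a sketch to verify, with ω = 2πf):

S_c(\omega) = \frac{5}{\omega^2 + 4}, \qquad S_n(\omega) = 1 + \frac{5}{\omega^2 + 4} = \frac{\omega^2 + 9}{\omega^2 + 4} = \frac{j\omega + 3}{j\omega + 2} \cdot \frac{-j\omega + 3}{-j\omega + 2},

and the causal estimator is assembled from this factorization in the usual Wiener-Hopf way.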

4.9 Point Process Parameter Estimation

Many radioactive decay problems can be modeled as Poisson processes with intensity (or rate) functions that decay exponentially over time. That is, all radioactive decay events are independent and each such event decreases the total amount of the material. In this problem, you will estimate both the total amount of the material and the decay rate using a simplified, two-parameter model.

Assume that 0 < X_1 < X_2 < X_3 < . . . < X_n < . . . is a set of points (a realization) drawn from a Poisson process with intensity function

\lambda(t) = ac\, e^{-ct}, \quad t \ge 0. (112)

Thus the two parameters are a and c. Denote such a realization by X. We know that the number of points in any interval [T_0, T_1) is Poisson distributed with mean μ equal to

\mu = \int_{T_0}^{T_1} \lambda(t)\, dt. (113)

The numbers of points in two nonoverlapping intervals are independent.

The derivation of the loglikelihood function for a Poisson process is technically involved. A simplified derivation starts by placing intervals of width ε around each point X_i. The probability of getting one point in the interval around X_i is

\varepsilon\lambda(X_i) e^{-\varepsilon\lambda(X_i)}, (114)

which for small ε is close to ελ(X_i). The probability of getting two or more points in an ε interval is negligible for small ε. The probability of getting no points between X_{i−1} and X_i is

e^{-\int_{X_{i-1}}^{X_i} \lambda(t)\, dt}. (115)

The likelihood function for the first N points is then proportional to (taking X0 = 0)

L(X) = \prod_{i=1}^{N} e^{-\int_{X_{i-1}}^{X_i} \lambda(t)\, dt}\, \lambda(X_i) (116)
     = e^{-\int_0^{X_N} \lambda(t)\, dt} \prod_{i=1}^{N} \lambda(X_i), (117)

and the loglikelihood function is the natural logarithm of L(X).

a. What is the probability distribution on the total number of points? Argue that the total number is finite with probability one. For a finite number of points, the likelihood function is modified by multiplying by the term

e^{-\int_{X_N}^{\infty} \lambda(t)\, dt} (118)


that corresponds to getting zero points after the last point X_N. Thus the likelihood function for N total points becomes

L(X) = e^{-\int_0^{\infty} \lambda(t)\, dt} \prod_{i=1}^{N} \lambda(X_i). (119)

b. Argue that a corresponds to the total amount of material. Find the maximum likelihood estimate for a.

c. The parameter c determines the rate of decay. Find the maximum likelihood estimate for c. Argue that 1/c_{ML} is the maximum likelihood estimate of the time constant of the decay.
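Since \int_0^\infty \lambda(t)\, dt = a, the total number of points is Poisson with mean a, and given N the points are i.i.d. with density ce^{-ct}. The Matlab sketch below (requires the Statistics Toolbox; the values of a and c are illustrative) generates such a realization for testing whatever estimates you derive; a_hat and c_hat are candidate answers, to be checked against your own derivation:

% Sketch: simulate the decaying-rate Poisson process.
a = 50; c = 2;                  % illustrative true parameters
N = poissrnd(a);                % total number of points ~ Poisson(a)
X = sort(exprnd(1/c, N, 1));    % given N, points are i.i.d. with rate c
a_hat = N;                      % candidate ML estimate of a
c_hat = N/sum(X);               % candidate ML estimate of c
fprintf('a_hat = %d, c_hat = %.3f\n', a_hat, c_hat);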

4.10 Variance Estimation in AWGN

Suppose that the random variable A is known to be Gaussian distributed with mean 0, but it has unknown variance σ². The problem here is to find the maximum likelihood estimate for σ² given a realization of a random process r(t), where

r(t) = A\sqrt{E}\, s(t) + w(t), \quad 0 \le t \le T. (120)

In this equation, s(t) is a known function that has unit energy in the interval [0, T], the energy E is known, and the additive white Gaussian noise w(t) is independent of A and the signal and has intensity N_0/2.

a. Find the maximum likelihood estimate for σ².

b. Find the Cramer-Rao lower bound for estimating σ². State what the Cramer-Rao lower bound signifies.

c. Find the bias of the maximum-likelihood estimator (if you cannot find it in closed form, it suffices to give an integral).

d. Is the variance of the maximum-likelihood estimator greater than the Cramer-Rao lower bound, equal to it, or less than it? Explain your answer.
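A reduction that helps in all four parts (a sketch to verify): projecting the data onto s(t) gives the scalar statistic

R = \int_0^T r(t) s(t)\, dt = A\sqrt{E} + \int_0^T w(t) s(t)\, dt \sim \mathcal{N}\!\left( 0,\; \sigma^2 E + N_0/2 \right),

so the problem reduces to estimating the variance of a single zero-mean Gaussian observation.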

5 Expectation-Maximization Algorithm

5.1 Exponentially Distributed Random Variables

Suppose that the random variable R is a sum of two exponentially distributed random variables, X and Y ,

R = X + Y, (121)

where p_X(x) = α exp(−αx), x ≥ 0, and p_Y(y) = β exp(−βy), y ≥ 0. The value of β is known, but α is not known. The goal of the problem is to derive an algorithm to estimate α.

a. Write down the loglikelihood function for the data R (this is the incomplete data loglikelihood function). Find a first order necessary condition for α to be a maximum likelihood estimate. Is this equation easy to solve for α?

b. Define the complete data to be the pair (X, R), and write down the complete data loglikelihood function.

c. Determine the conditional probability density function on X given R, as a function of a nominal value \tilde{\alpha}. Denote this probability density function (pdf) p(x|r, \tilde{\alpha}).

d. Using the pdf p(x|r, \tilde{\alpha}), determine the conditional mean of X given R and \tilde{\alpha}.

e. Determine the function Q(\alpha|\tilde{\alpha}), the expected value of the complete data loglikelihood function given the incomplete data and \tilde{\alpha}.

f. Derive the expectation-maximization algorithm for estimating α given R.
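If your derivation comes out the way this sketch suggests (assuming α ≠ β throughout), the conditional density in part c is proportional to e^{-(\tilde{\alpha} - \beta)x} on [0, r], the E-step in part d is the mean of that truncated exponential, and the M-step inverts it. A minimal Matlab sketch of the resulting iteration (r, β, and the initial α are illustrative values):

% EM sketch for estimating alpha with beta known; verify the E-step
% formula against your own part d. Assumes alpha ~= beta throughout.
r = 2.5; beta = 1.0;
alpha = 0.5;                        % illustrative initial guess
for m = 1:50
    g  = alpha - beta;
    Ex = 1/g - r/(exp(g*r) - 1);    % E-step: E[X | R = r, alpha]
    alpha = 1/Ex;                   % M-step: maximize Q over alpha
end
fprintf('alpha estimate: %.4f\n', alpha);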


5.2 Parameterized Signals in White Gaussian Noise

Suppose that an observed, continuous-time signal consists of a sum of signals of a given parameterized form, plus noise. The problem is to estimate the parameters in each of the signals using the expectation-maximization algorithm.

To be more specific, assume that the real-valued signal is

r(t) = \sum_{k=1}^{N} s(t; \theta_k) + w(t), \quad 0 \le t \le T, (122)

where s(t; θ) is a signal of a given type, and θ_k, k = 1, 2, . . . , N are the parameters; w(t) is white Gaussian noise with intensity N_0/2. Example signals (with Matlab code fragments) include:

1. Sinusoidal signals: θ′ = (f, phase, amplitudes) and s=amplitudes'*cos(2*pi*f*t+phase*ones(size(t)));

2. Exponential signals: θ′ = (exponents, amplitudes) and s=amplitudes'*exp(exponents*t);

3. Exponentially decaying sinusoids: θ′ = (exponents, f, phase, amplitudes) and s=amplitudes'*(exp(exponents*t).*cos(2*pi*f*t+phase*ones(size(t))));

Define the vector of all parameters to be estimated by

Θ = (θ1, θ2, . . . , θN)′. (123)

a. Derive the loglikelihood ratio functional for Θ. This is the incomplete-data loglikelihood ratio functional, where we refer to r(t) as the incomplete data. The ratio is obtained as in class relative to a null hypothesis of white noise only.

b. For the expectation-maximization algorithm, define the complete-data signals

rk(t) = s(t; θk) + wk(t), 0 ≤ t ≤ T, k = 1, 2, . . . , N, (124)

where w_k(t) is white Gaussian noise with intensity \sigma_k^2, where

\sum_{k=1}^{N} \sigma_k^2 = N_0/2. (125)

Using these complete-data signals, we have

r(t) = \sum_{k=1}^{N} r_k(t). (126)

The standard choice for the intensities assigned to the components is \sigma_k^2 = N_0/(2N), but other choices may yield better convergence. Derive the complete-data log-likelihood ratio functional; notice that it is written as a sum of log-likelihood ratio functionals for the r_k (which are again relative to the noise only case). Denote the log-likelihood ratio functional for r_k given θ_k by l(r_k|θ_k). Note that this log-likelihood ratio functional is linear in r_k.

c. Compute the expected value of the complete-data log-likelihood function given the incomplete data and a previous estimate for the parameters; denote this by Q. Suppose the previous estimate of the parameters is Θ^{(m)}, where m denotes the iteration number in the EM algorithm. So Q(Θ|Θ^{(m)}) is a function of Θ and of the previous estimate. Note that Q can be decomposed as a sum

Q(\Theta|\Theta^{(m)}) = \sum_{k=1}^{N} Q_k(\theta_k|\Theta^{(m)}). (127)


d. Conclude from the derivation in parts b and c that only the conditional mean of r_k given r and \Theta^{(m)} is needed to find Q_k. Explicitly compute this expected value,

\hat{r}_k(t) = E\{ r_k(t) \,|\, r, \Theta^{(m)} \}. (128)
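As a check on part d: since the r_k are jointly Gaussian given the parameters and sum to r, standard Gaussian conditioning should give (a sketch in the notation above)

\hat{r}_k(t) = s(t; \theta_k^{(m)}) + \frac{\sigma_k^2}{N_0/2}\left[ r(t) - \sum_{l=1}^{N} s(t; \theta_l^{(m)}) \right],

that is, each component keeps its current signal estimate plus its share of the residual.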

e. The maximization step of the EM algorithm consists of maximizing Q(Θ|Θ^{(m)}) over Θ. The EM algorithm effectively decouples the problem at every iteration into a set of independent maximization problems. Show that to maximize Q over Θ it suffices to maximize each Q_k over θ_k. Derive the necessary equations for θ_k^{(m+1)} by taking the gradient of Q_k(θ_k|Θ^{(m)}) with respect to θ_k. Write the result in terms of \hat{r}_k. Note that this equation depends on θ_k through two terms, s(t; θ_k) and the gradient of s(t; θ_k) with respect to θ_k.

f. In this part, you will develop Matlab code for the general problem described above and apply it to two of the cases listed above. From the derivation of the EM algorithm, it is clear that there are two critical components. The first is the maximization of Q_k and the second is the conditional expected value of r_k. For the maximization step, we only need to analyze a simpler problem. The simpler problem has data

ρ(t) = s(t; θ) + w(t), 0 ≤ t ≤ T, (129)

where s is of the same form as above, and w(t) is white Gaussian noise with intensity σ². Write a Matlab program to simulate (129), taking into account the following guidelines. Note that the code developed earlier in the semester should help significantly here.

f.i. To simulate the continuous-time case using discrete-time data, some notion of sampling must be used. We know from class that white Gaussian noise cannot be sampled in the usual sense. The discrete-time processing must be accomplished so that the resulting implementations converge to the solution of the continuous-time problem in a mean-square sense as the sampling interval converges to zero. To accomplish this, we model the data as being an integral of ρ(t) over a small time interval,

\rho_i = \int_{i\Delta}^{(i+1)\Delta} \rho(t)\, dt (130)
       = s(i\Delta; \theta)\Delta + w_i + g_i, (131)

where Δ is the sampling interval, and the term g_i at the end is small and can be ignored (it is order Δ²). Show that the discrete-time noise terms w_i are i.i.d. zero mean Gaussian random variables with variance σ²Δ. In order to avoid having both the signal part (which is multiplied by the small factor Δ) and the noise going to zero, it is equivalent to assume that the measured data are η_i = ρ_i/Δ, so

ηi = s(iΔ; θ) + ni, (132)

where n_i are i.i.d. zero mean Gaussian with variance σ²/Δ. Given the signal, σ², and Δ, the following Matlab code fragment implements this model.

size_s = size(signal);
noise = randn(size_s);
variance = sigma2/Delta;
noise = noise*sqrt(variance);
eta = signal + noise;

Note that the noise is multiplied by σ and divided by the square root of Δ. Using this concept, the simulation is flexible in the number of samples. Only in the limit as Δ goes to zero does this truly approximate the continuous-time signal.

f.ii. In this part, you will write Matlab code to find the maximum likelihood estimates for parameters for one sinusoid in noise. Assume that

s(t; θ) = a cos(2πf0t + φ), (133)

and that the three parameters (a, f_0, φ) are to be estimated. For f_0 much larger than 2π/T, show that

\int_0^T \cos^2(2\pi f_0 t + \phi)\, dt (134)


is approximately equal to T/2. For this situation, show that the maximum likelihood estimates are found by the following algorithm:

Step 1: Compute the Fourier transform of the data

R(f) = \int_0^T \rho(t) e^{-j2\pi f t}\, dt. (135)

Step 2: Find the maximum over all f of |R(f)|; set f_0 equal to that frequency.

Step 3: Find a > 0 and φ so that

a cos φ + ja sin φ = R(f0). (136)

Implement this algorithm. Perform some experiments that demonstrate your algorithm working. Derive the Fisher Information Matrix for estimating the three parameters (a, f_0, φ).
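A minimal Matlab sketch of Steps 1 through 3 (all parameter values and the sampling grid are illustrative; the 2/T scaling of R(f_0) recovers the complex amplitude under the approximation above, which you should confirm against your own derivation):

% Sketch: FFT-based ML estimates for one sinusoid in noise.
T = 1; Delta = 1e-3; t = (0:Delta:T-Delta)';
a0 = 1.5; f0 = 40; phi0 = 0.7; sigma2 = 0.5;    % illustrative values
eta = a0*cos(2*pi*f0*t + phi0) + sqrt(sigma2/Delta)*randn(size(t));
R = Delta*fft(eta);                  % Riemann-sum approximation to (135)
f = (0:numel(t)-1)'/T;               % DFT frequency grid
[~, idx] = max(abs(R(1:floor(end/2))));  % Step 2: positive frequencies
z = (2/T)*R(idx);                    % approx a*exp(j*phi) when f0*T >> 1
fprintf('f0 = %.1f, a = %.2f, phi = %.2f\n', f(idx), abs(z), angle(z));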

f.iii. Implement Matlab code for estimating two parameters for one decaying exponential. That is, assume that the signal is

s(t; \theta) = A e^{-\alpha t}. (137)

Write Matlab code for estimating (A, α). Perform some experiments that demonstrate that your algorithm is working. Derive the Fisher Information Matrix for estimating the two parameters (A, α).

f.iv. Implement general Matlab code to compute the estimate of r_k(t) given r(t) and previous guesses for the parameters.

f.v. To demonstrate that the code in f.iv. works, implement it on a sum of two sinusoids and, separately, on a sum of two decaying exponentials. Show that it works.

f.vi. Implement the full-blown EM algorithm for both a sum of two sinusoids and a sum of two decaying exponentials. Run the algorithm many times for one set of parameters and compute the sample variance of the estimates. Compare the sample variances to the entries in the inverse of the Fisher Information Matrices computed above. Run your algorithm for selected choices of signal-to-noise ratio and parameters. Briefly comment on your conclusions.

5.3 Gamma Density Parameter Estimation

In this problem, you will solve a maximum likelihood estimation problem in two ways. First, you will solve it directly, obtaining a closed form solution. Given a closed form solution, the derivation of an iterative algorithm for this problem is not fundamental. However, in the second part of this problem you will derive an expectation-maximization (EM) algorithm for it. This algorithm may be extended to more complicated scenarios where it would be useful.

Assume that λ is a random variable drawn from a gamma density function

p(\lambda|\theta) = \frac{\theta^M}{\Gamma(M)} \lambda^{M-1} e^{-\lambda\theta}, \quad \lambda \ge 0. (138)

Here θ is an unknown nonnegative parameter and Γ(M) is the Gamma function. Note that Γ(M) is the normalizing constant,

\Gamma(M) = \int_0^{\infty} x^{M-1} e^{-x}\, dx. (139)

The Gamma function satisfies the recurrence formula Γ(M + 1) = MΓ(M). For M an integer, Γ(M) = (M − 1)!. Note that M is known. This implies that the mean of this gamma density function is E[λ|θ] = M/θ.

The random variable λ is not directly observable. Given the random variable λ, the observations

X1, X2, . . . , XN , are i.i.d. with probability density functions

p_{X_i|\lambda}(x_i|\lambda) = \lambda e^{-\lambda x_i}, \quad x_i \ge 0. (140)

Note that there is one random variable λ and there are N random variables Xi that are i.i.d. given λ.


a. Find the joint probability density function for X_1, X_2, . . . , X_N conditioned on θ.

b. Directly from the probability density function from part a, find the maximum likelihood estimate of θ. Note that this estimate does not depend on λ.

c. To start the derivation of the EM algorithm, write down the complete data loglikelihood function, keeping only terms that depend on θ. Denote this function l_{cd}(λ|θ).

d. Compute the expected value of the complete data loglikelihood function given the observations X_1, X_2, . . . , X_N, and the previous estimate for θ denoted θ^{(k)}. Denote this function by Q(θ|θ^{(k)}). Hint: This step requires a little thought. The posterior density function on λ given the measurements is in a familiar form.

e. Maximize the Q function over θ to obtain θ^{(k+1)}. Write down the resulting recursion with θ^{(k+1)} as a function of θ^{(k)}.

f. Verify that the maximum likelihood estimate derived in part b is a fixed point of the iterations derived in part e.
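For orientation, a sketch to verify: the posterior in part d is again gamma, λ given X_1, . . . , X_N and θ^{(k)} is Gamma(M + N, θ^{(k)} + \sum_i x_i), so the E- and M-steps should combine into

\theta^{(k+1)} = \frac{M\left( \theta^{(k)} + \sum_{i=1}^{N} x_i \right)}{M + N},

whose fixed point \theta = M \sum_i x_i / N can be compared with the direct estimate from part b.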

5.4 Mixture of Beta Distributions

The probability density function for the beta distribution is of the form

f(x : a, b) = \frac{\Gamma(a + b)}{\Gamma(a)\Gamma(b)} x^{a-1} (1 - x)^{b-1}, \quad 0 \le x \le 1, (141)

where Γ(t) is the Gamma function

\Gamma(t) = \int_0^{\infty} x^{t-1} e^{-x}\, dx. (142)

a. Suppose that N independent and identically distributed (i.i.d.) realizations X_i are drawn from the probability density function f(x : a, b). Find equations for the maximum likelihood estimates of the parameters a and b in terms of the observations. DO NOT SOLVE the equations, but represent the solution in terms of the Gamma function and the derivative of the Gamma function, Γ′(t).

b. For the estimation problem in part a, what are the sufficient statistics?

c. Now assume that the true distribution is a mixture of two beta distributions

f(x : \pi, a, b, \alpha, \beta) = \pi \frac{\Gamma(a + b)}{\Gamma(a)\Gamma(b)} x^{a-1} (1 - x)^{b-1} + (1 - \pi) \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)} x^{\alpha-1} (1 - x)^{\beta-1}, \quad 0 \le x \le 1. (143)

The generation of a realization from this probability density function can be viewed as a two-step process. In the first step, the first density is selected with probability π and the second is selected with probability 1 − π. In the second step, a realization of the appropriate density function is selected. Define the complete data for a realization as the pairs (j_i, X_i), i = 1, 2, . . . , N, where j_i ∈ {0, 1} indicates which density is selected in the first step, and X_i is the realization from that density. For this complete data, write down the complete data loglikelihood function.

d. Find the expected value of the complete data loglikelihood function given the incomplete data {X_1, X_2, . . . , X_N}; call this function Q.

e. Maximize Q over the variables π, a, b, α, β. Write down the equations that the maximum likelihood estimates satisfy in terms of the functions Γ(t) and Γ′(t).

f. If the maximum likelihood estimator defined in part a is given by (a_{ML}, b_{ML}) = h(x_1, x_2, . . . , x_N), express the result of the maximization step in part e in terms of h.
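For parts d and e, the E-step has the usual mixture form (a sketch; w_i is my notation for the posterior membership probability of the first component):

w_i = \frac{\pi^{(m)} f(x_i : a^{(m)}, b^{(m)})}{\pi^{(m)} f(x_i : a^{(m)}, b^{(m)}) + (1 - \pi^{(m)}) f(x_i : \alpha^{(m)}, \beta^{(m)})}, \qquad \pi^{(m+1)} = \frac{1}{N} \sum_{i=1}^{N} w_i,

with the remaining parameters updated by weighted versions of the part a equations.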

5.5 Mixture Density Estimation

In this problem, you will explore some classical issues in density estimation. These are the same types of issues which arise when one tries to estimate a continuous function from discrete data. The difficulty arises because the estimates tend to be concentrated when in fact some prior knowledge leads you to believe


that the functions are not “peaked.” To be specific, suppose that you observe N independent identically distributed random variables, and you are supposed to guess what the density was which gave rise to this data. One description of a maximum likelihood solution is

\frac{1}{N} \sum_{k=1}^{N} \delta(x - X_k), (144)

where the X_k are the observations. If you have some prior knowledge that the original density was in fact smooth, you would immediately reject this solution as infeasible. One approach is to represent your set of possible solutions as a sum of smooth functions (or equivalently as the result of a convolution with a smooth function).

Suppose our set of admissible density functions is

p(x) = \frac{1}{N} \sum_{k=1}^{N} f(x - m_k), (145)

where f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-x^2/(2\sigma^2)}. This set of functions makes sense in that it is the result of a concentrated density being smoothed with a Gaussian. The problem of interest is then, given M independent observations of the random variable x, to find your best estimate for N and for the m_k, k = 1, 2, . . . , N.

a. Assume that M = 2 and that N = 2. Thus there are two random variables observed, X_1 and X_2, and we wish to estimate m_1 and m_2. Use the method of maximum likelihood to find two equations which the maximum likelihood estimates must satisfy. These equations are difficult to solve. They may be reduced to determining one parameter, however, by noting the symmetry involved in the equations. In particular, if we let m_1 = X_1 + Δ and m_2 = X_2 − Δ, then the equations to be solved reduce to one (still difficult) equation in Δ. You don't need to solve these equations. Is Δ = 0 a valid solution? Comment. Is Δ = (X_2 − X_1)/2 a valid solution? Comment. If X_2 > X_1, can Δ ever be negative? Comment.

As a side comment, it is worth noting that one way of finding the solution is to center a Gaussian on each of the observed data points, sum them, then look for the peaks of the sum.

b. In this part of the problem, you will find an EM algorithm for solving for the maximum likelihood solution. A model for the data must be determined which can be put in the usual form of a complete data space and an incomplete data space. In the general problem, one models the observed random variable as having come from one of N equally likely experiments, the kth of which has probability density function f(x − m_k). The complete data consists of pairs (X_i, n_i), where n_i specifies which of the N p.d.f.'s X_i came from. The mapping from the complete data to the incomplete data just selects the X_i.

b.1. Assume M = N = 2. Determine the complete data loglikelihood. This step is crucial as it determines the function we are going to maximize. Define the indicator functions

I_k(n) = \begin{cases} 1 & \text{if } k = n \\ 0 & \text{otherwise} \end{cases} (146)

(A hint for finding the complete data loglikelihood is that the log-density contribution of X_i when n_i = k is I_k(n_i) ln[f(x − m_k)]; since the X_i's are independent, the complete data loglikelihood is the sum of terms like this.)

b.2. Find the expected value of the complete data loglikelihood given X_1, X_2, m_1^r, m_2^r. This involves finding the expected value of the indicator functions I_k(n_i). Be careful with these. (I use r to indicate the iteration number here.)

b.3. Maximize the result of the last step to get the updates for m_1^{r+1} and m_2^{r+1}.

b.4. Pick a good initial value for your estimates. Justify why this selection is good.

c. This part is almost trivial. Let M = 2 and N = 1. There is only one parameter to determine here, the mean of the Gaussian density. Find the maximum likelihood estimate for this mean.

d. In this part, you will set up a detection problem to determine how many Gaussians should be included in the sum. Suppose M = 2. Under hypothesis H1, there is one Gaussian in the density (N = 1). Under


hypothesis H2 there are two Gaussians in the density (N = 2). Determine the likelihood ratio test for this problem. Assume the prior probabilities on the hypotheses are equal and that the costs of errors are equal. Since there are unwanted parameters in the ratio (namely the means of the Gaussians), and these parameters are nonrandom, substitute for them their appropriate estimates. Usually in hypothesis testing problems the condition that the threshold is exactly equal to the likelihood ratio is not important. Is it important here? For the case of the ratio equalling the threshold, choose H1 (the choice with fewer parameters).

e. Let X_1 = 1 and X_2 = 2. First, suppose σ² = 0.04. Find the maximum likelihood estimates for m_1 and m_2 (this is the N = 2 case). Now, let σ² get larger. At what point does the hypothesis test in d yield the decision that N = 1? What are the maximum likelihood estimates for m_1 and m_2 at this point? If you cannot determine the point exactly, find a few values nearby.

f. Write and test a Matlab routine to run the EM algorithm derived in part b. Show the performance of the algorithm by running it many times for one choice of the means. Compute the means and covariances of the estimates. Run for different choices of means. Note that it is sufficient to write the code assuming that σ² = 1 and to scale the means by 1/σ.
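A minimal sketch of the part b iteration for M = N = 2, assuming the b.3 update turns out to be the responsibility-weighted mean (verify against your own derivation; the data and initial means are illustrative, σ² = 1, and the implicit expansion used below requires Matlab R2016b or later):

% EM sketch for two unit-variance Gaussian components.
X = [1; 2]; m = [0.9; 2.1];       % observations and initial means
for it = 1:100
    d = X - m.';                  % 2x2 matrix of X_i - m_k
    g = exp(-d.^2/2);             % unnormalized responsibilities
    g = g ./ sum(g, 2);           % normalize across components k
    m = (g.'*X) ./ sum(g, 1).';   % update each mean m_k
end
disp(m.');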

6 Recursive Detection and Estimation

6.1 Background and Understanding of Autoregressive Models

Suppose that R_1, R_2, . . . is a stationary sequence of Gaussian random variables with zero mean. The covariance function is determined by an autoregressive model which the random variables satisfy. The autoregressive model is an mth order Markov model, meaning that the probability density function of R_n given R_{n−1}, R_{n−2}, . . . , R_1 equals the probability density function of R_n given R_{n−1}, R_{n−2}, . . . , R_{n−m}.

More specifically, suppose that

R_n = -a_1 R_{n-1} - a_2 R_{n-2} - \cdots - a_m R_{n-m} + W_n, (147)

where W_n are independent and identically distributed Gaussian random variables with zero mean and variance σ². Let the covariance function for the random process be C_k, so

C_k = E\{ R_n R_{n-k} \}. (148)

Comment: In order for this equation to model a stationary random process and to be viewed as a generative model for the data, the corresponding discrete time system must be stable. That is, if one were to compute the transfer function in the Z-transform domain, then all of the poles of the transfer function must be inside of the unit disk in the complex plane. These poles are obviously the roots of the characteristic equation with coefficients a_j.

a. Using the autoregressive model in equation (147), show that the covariance function satisfies the equations

C_0 + a_1 C_1 + a_2 C_2 + \cdots + a_m C_m = \sigma^2 (149)
C_k + a_1 C_{k-1} + a_2 C_{k-2} + \cdots + a_m C_{k-m} = 0, (150)

where the second equation holds for all k > 0. Hint: Multiply both sides of (147) by a value of the random sequence and take expected values. Use the symmetry property of covariance functions for the first equality.

b. Derive a recursive structure for computing the logarithm of the probability density function of R_1, R_2, . . . , R_n. More specifically, let

v_n = \ln p(r_1, r_2, \ldots, r_n). (151)

Derive an expression for v_n in terms of v_{n−1} and an update. Focus on the case where n > m.
Hint: This is a key part of the problem, so make sure you do it correctly. It obviously relates to the Markov property expressed through the autoregressive model in (147).


c. Consider the special case of m = 1. Suppose that C_0 = 1. Find a relationship between a_1 and σ² (essentially you must solve (150) in this special case).
Comment: Note that the stability requirement implies that |a_1| < 1.
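As a quick check on part c (a sketch): with m = 1 and C_0 = 1, equation (150) at k = 1 gives C_1 = -a_1, and substituting into (149) gives

\sigma^2 = C_0 + a_1 C_1 = 1 - a_1^2,

which is positive exactly when |a_1| < 1, consistent with the stability comment above.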

6.2 Recursive Detection for Autoregressive Models

Suppose that one has to decide whether data arise from an autoregressive model or from white noise. In this problem, the loglikelihood ratio is computed recursively.

Under hypothesis H1, the data arise from the autoregressive model (147). Under hypothesis H0, the data R_n are i.i.d. Gaussian with zero mean and variance C_0. That is, under either hypothesis the marginal distribution on any sample R_n is the same. The only difference between the two models is in the covariance structure.

a. Find the loglikelihood ratio for n samples. Call this loglikelihood ratio l_n. Derive a recursive expression for l_n in terms of l_{n−1} and an update. Focus on the case n > m.

b. Consider the special case of m = 1. Write down the recursive structure for this case.

c. The performance increases as n grows. This can be quantified in various ways. One way is to compute the information rate functions for each n. In this problem, you will compute a special case.

Consider again m = 1. Find the log-moment generating function for the difference between l_n and l_{n−1} conditioned on each hypothesis, and conditioned on previous measurements; call these two log-moment generating functions m_0(s) and m_1(s):

m_1(s) = \ln E\left\{ e^{s(l_n - l_{n-1})} \,\middle|\, H_1, r_1, r_2, \ldots, r_{n-1} \right\}. (152)

Compute and plot the information rate functions I_0(x) and I_1(x) for these two log-moment generating functions.
Comment: These two functions quantify the increase in information for detection provided by the new measurement.
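For part b with m = 1, one consistent form of the update (a sketch to verify) uses p_1(r_n | r_{n-1}) Gaussian with mean -a_1 r_{n-1} and variance σ², and p_0(r_n) Gaussian with mean 0 and variance C_0:

l_n - l_{n-1} = \frac{1}{2} \ln\frac{C_0}{\sigma^2} - \frac{(r_n + a_1 r_{n-1})^2}{2\sigma^2} + \frac{r_n^2}{2 C_0}.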

6.3 Recursive Estimation for Autoregressive Models

In this problem, you will estimate the parameters in an autoregressive model given observations of the data R_n, R_{n−1}, . . . , R_1.

a. First, assume that the maximum likelihood estimate for the parameters given data R_{n−1}, R_{n−2}, . . . , R_1 satisfies

B_{n-1} a_{n-1} = d_{n-1}, (153)

where the vector a_{n−1} is the maximum likelihood estimate of the parameter vector

a = [a_1\; a_2\; \ldots\; a_m]^T. (154)

Find the update equations for B_n and d_n. These may be obtained by writing down the likelihood equation using the recursive update for the log-likelihood function, and taking the derivative with respect to the parameter vector.

b. The computation for a_n may also be written in recursive form. This is accomplished using the matrix inversion lemma. The matrix inversion lemma states that a rank one update to a matrix yields a rank one update to its inverse. More specifically, if A is an m × m symmetric, invertible matrix and f is an m × 1 vector, then

(A + f f^T)^{-1} = A^{-1} - \frac{A^{-1} f f^T A^{-1}}{1 + f^T A^{-1} f}. (155)

Use this equation to derive an equation for the estimate a_n in terms of a_{n−1}. Hint: The final form should look like

a_n = a_{n-1} + g_n\left[ r_n + a_{n-1}^T (r_{n-1}\; r_{n-2}\; \ldots\; r_{n-m})^T \right], (156)

where an auxiliary equation defines the vector g_n in terms of B_{n-1}^{-1} and the appropriate definition of f.
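A compact Matlab sketch of the rank-one bookkeeping (variable names are mine; the exact definition of f and the sign conventions must come from your derivation):

% Recursive update sketch using the matrix inversion lemma (155).
% Binv is B_{n-1}^{-1}; f is the regressor vector from your derivation,
% e.g. f = [r_{n-1}; ...; r_{n-m}]; a is the current estimate a_{n-1}.
function [a, Binv] = ar_update(a, Binv, f, rn)
    Bf   = Binv*f;
    Binv = Binv - (Bf*Bf')/(1 + f'*Bf);  % (B + f f^T)^{-1} via (155)
    g    = Binv*f;                       % gain vector g_n
    a    = a + g*(rn + a'*f);            % update of the form (156)
end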


6.4 Recursive Detection: Order 1 Versus Order 2 Autoregressive Model

A decision must be made between two models for a sequence of Gaussian distributed random variables. Each model is an autoregressive model. The first model is autoregressive of order one, while the second is autoregressive of order two. There are two goals here as outlined below. First, the optimal test statistic for a Neyman-Pearson test must be computed for a fixed number N of consecutive samples of a realization. Second, an efficient update of this test statistic to the case with N + 1 samples must be derived.

Consider the following two hypotheses. Under H1, the model for the measurements is

Yi = 0.75Yi−1 + Wi, (157)

where W_i are independent and identically distributed Gaussian random variables with zero mean and variance equal to 7/4 = 1.75; W_i are independent of Y_0 for all i; and Y_0 is Gaussian distributed with zero mean and variance 4.

Under H2, the model for the measurements is

Yi = 0.75Yi−1 + 0.2Yi−2 + Wi, (158)

where W_i are independent and identically distributed Gaussian random variables with zero mean and variance equal to 1.75; W_i are independent of Y_0 for all i; and Y_0 is Gaussian distributed with zero mean and variance 4. Also, Y_1 = 0.75 Y_0 + W_1, where W_1 is a Gaussian random variable with zero mean and variance 1.75.

a. Given Y_0, Y_1, . . . , Y_N, find the optimal test statistic for a Neyman-Pearson test. Simplify the expression as much as possible. Interpret your answer.

b. Denote the test statistic computed in part a by l_N. The optimal test statistic for N + 1 measurements is l_{N+1}. Find an efficient update rule for computing l_{N+1} from l_N.

6.5 Sequential Estimation Problem

Suppose that X and Y are independent Gaussian random variables with means 0 and variances 3 and 5, respectively. Define the random variables

R_1 = 5X + 3Y + W_1 (159)
R_2 = 3X + Y + W_2 (160)
R_3 = X - Y + W_3, (161)

where W_1, W_2, and W_3 are identically distributed Gaussian random variables with zero mean and variance 1. The random variables X, Y, W_1, W_2, and W_3 are all independent.

a. Find

\begin{bmatrix} \hat{X}(1) \\ \hat{Y}(1) \end{bmatrix} = E\left\{ \begin{bmatrix} X \\ Y \end{bmatrix} \,\middle|\, R_1 = r_1 \right\}. (162)

b. Derive an expression for

\begin{bmatrix} \hat{X}(2) \\ \hat{Y}(2) \end{bmatrix} = E\left\{ \begin{bmatrix} X \\ Y \end{bmatrix} \,\middle|\, R_1 = r_1, R_2 = r_2 \right\} (163)

that only depends on \hat{X}(1), \hat{Y}(1), and r_2.

c. Derive an expression for

\begin{bmatrix} \hat{X}(3) \\ \hat{Y}(3) \end{bmatrix} = E\left\{ \begin{bmatrix} X \\ Y \end{bmatrix} \,\middle|\, R_1 = r_1, R_2 = r_2, R_3 = r_3 \right\} (164)

that only depends on \hat{X}(2), \hat{Y}(2), and r_3.
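A sequential Matlab sketch of parts a through c (the observed values r_1, r_2, r_3 are illustrative; each step is a scalar-measurement update of the conditional mean and covariance, which you should reconcile with your own expressions):

% Process R1, R2, R3 one at a time; unit measurement noise variance.
P  = diag([3, 5]);           % prior covariance of [X; Y]
xy = [0; 0];                 % prior mean
H  = [5 3; 3 1; 1 -1];       % measurement rows for R1, R2, R3
r  = [2; 1; 0];              % illustrative observed values r1, r2, r3
for k = 1:3
    h  = H(k, :);
    K  = P*h'/(h*P*h' + 1);  % gain for the kth scalar measurement
    xy = xy + K*(r(k) - h*xy);
    P  = P - K*h*P;
end
disp(xy');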


6.6 One-Dimensional Kalman Filtering Problem

Assume that

\dot{x}(t) = u(t) (165)
r(t) = a x(t) + w(t) (166)

where u(t) is white Gaussian noise with intensity Q and w(t) is white Gaussian noise with intensity N_0/2. Assume that the observation of r(t) is available from 0 to t. Assume that x(0) is N(0, Λ). What is the variance of the estimate for x(T)? What is the steady state variance in the estimate for x(t) as t gets large? Are there any similarities between this problem for 0 < t < T and problem 4.1? Notice that if a series of solutions to problem 3 are pieced together properly, then the result could closely approximate the solution of this problem. Comment on this further.
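A sketch of the variance computation to check against (with the reconstructed model \dot{x} = u): the error variance P(t) satisfies the scalar Riccati equation

\dot{P}(t) = Q - \frac{a^2 P(t)^2}{N_0/2}, \qquad P(0) = \Lambda,

so the steady-state variance should be P_\infty = \sqrt{Q N_0 / 2}\,/\,|a|.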


Appendix

Potentially useful information:

\int_0^{\infty} e^{-\alpha t} e^{-j2\pi f t}\, dt = \frac{1}{\alpha + j2\pi f}, \quad \mathrm{Re}\{\alpha\} > 0, (167)

\int_{-\infty}^{0} e^{\alpha t} e^{-j2\pi f t}\, dt = \frac{1}{\alpha - j2\pi f}, \quad \mathrm{Re}\{\alpha\} > 0. (168)

\sum_{k=0}^{\infty} \alpha^k = \frac{1}{1 - \alpha}, \quad |\alpha| < 1, (169)

\sum_{k=0}^{\infty} k \alpha^{k-1} = \frac{1}{(1 - \alpha)^2}, \quad |\alpha| < 1. (170)

A Poisson distribution for a random variable X with mean λ > 0 has probabilities

P(X = k) = \frac{\lambda^k}{k!} e^{-\lambda}, \quad k = 0, 1, 2, \ldots (171)
