
EE 278 Statistical Signal Processing, December 8, 2017, Handout #26

EE278 Final Exam Solutions

This is a take home exam. The exam is open notes and open any electronic reading device, provided they are used solely for reading material already stored on them and not for any other form of communication or information retrieval. Calculators are permitted though not needed. Begin each problem on a new page.

You are required to work on the exam on your own: NO collaboration and NO consulting anybody except the course staff. You are bound by the Stanford Honor Code in this regard. Please sign the honor code (provided on the next page) and submit it with your exam. Please scan it if you submit the exam online. We will not grade your exam without the signed honor code.

Please do not share the question sheet with anyone else for the next 7 days.

The exam is due back to us BEFORE 2 p.m. Friday, December 8th. Please turn in your exam to Kara Marquez at Packard 267 (for written/printed portions) and/or upload it on Canvas (for online portions). If you have any comments or problems with submission please e-mail Pin Pin ([email protected]). We are not responsible for any illegibility due to the quality of electronic scanning. Please ensure the scans are of high enough quality.

Good luck and have fun!

The Stanford University Honor Code

1. The Honor Code is an undertaking of the students, individually and collectively:

(a) that they will not give or receive aid in examinations; that they will not give or receive unpermitted aid in class work, in the preparation of reports, or in any other work that is to be used by the instructor as the basis of grading;

(b) that they will do their share and take an active part in seeing to it that others as well as themselves uphold the spirit and letter of the Honor Code.

2. The faculty on its part manifests its confidence in the honor of its students by refraining from proctoring examinations and from taking unusual and unreasonable precautions to prevent the forms of dishonesty mentioned above. The faculty will also avoid, as far as practicable, academic procedures that create temptations to violate the Honor Code.

3. While the faculty alone has the right and obligation to set academic requirements, the students and faculty will work together to establish optimal conditions for honorable academic work.

I acknowledge and accept the Honor Code.

(Signed)


1. Continuous Time Random Process (15 points). Define the random process X(t) = At + B, t ≥ 0, where A and B are independent N(0, 1) random variables.

a. (2 points) Sketch possible sample functions of X(t).

b. (2 points) Find the first order probability density function of X(t).

c. (3 points) Is X(t) a WSS process?

d. (4 points) Find P{X(t) ≥ 0, for all t ≥ 0}.

e. (4 points) Does X(t)/t converge in distribution? If so, what is the limit?

Solution

a. Sample functions: See Figure 1.

Figure 1: Sample functions of X(t).

b. The first order pdf is the pdf of X(t), viewed as a function of t. Since At and B are independent, the pdf of X(t) is Gaussian with mean 0 and variance t² + 1.

c. E(X(t)) = 0. The mean does not depend on time. Now we look at the autocorrelation function:

\[
\begin{aligned}
R_X(t_1, t_2) &= E(X(t_1)X(t_2)) \\
&= E\big((At_1 + B)(At_2 + B)\big) \\
&= t_1 t_2\, E(A^2) + (t_1 + t_2)\, E(A)\, E(B) + E(B^2) \\
&= t_1 t_2 + 1.
\end{aligned}
\]

Since the autocorrelation function does not depend only on t_1 − t_2, the process is not WSS.


d. Consider

\[
P\{X(t) \ge 0 \text{ for all } t \ge 0\} = P\{At + B \ge 0 \text{ for all } t \ge 0\} = P\{A \ge 0 \text{ and } B \ge 0\} = \frac{1}{4}.
\]

e.
\[
\lim_{t\to\infty} \frac{X(t)}{t} = \lim_{t\to\infty} \Big(A + \frac{B}{t}\Big) = A.
\]
Therefore, X(t)/t converges in distribution to N(0, 1).
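
As a quick numerical sanity check (an illustrative sketch, not part of the original solution; it assumes Python with numpy), the snippet below simulates X(t) = At + B and compares the sample variance with t² + 1, the fraction of paths with A ≥ 0 and B ≥ 0 with 1/4, and the variance of X(t)/t with 1 for large t.

# Monte Carlo check for Problem 1 (sketch; assumes numpy is available).
import numpy as np

rng = np.random.default_rng(0)
num_paths = 100_000
A = rng.standard_normal(num_paths)   # slopes
B = rng.standard_normal(num_paths)   # intercepts

for t in [0.5, 1.0, 2.0, 5.0]:
    X_t = A * t + B
    print(f"t={t}: sample var of X(t) = {X_t.var():.3f}, "
          f"theory t^2+1 = {t*t + 1:.3f}")

# X(t)/t = A + B/t -> A, so its distribution approaches N(0,1).
t_large = 1e4
ratio = (A * t_large + B) / t_large
print("var of X(t)/t at t=1e4:", round(ratio.var(), 4))   # ~1

# P{X(t) >= 0 for all t >= 0} = P{A >= 0, B >= 0} = 1/4.
print("fraction of paths with A>=0 and B>=0:",
      np.mean((A >= 0) & (B >= 0)))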

2. Benevolent uncle (30 points). A student tries to earn quick money to buy a Tesla whose price is $N by visiting Las Vegas over Thanksgiving break. He brings $k with him, 0 < k < N, and makes the following gamble with a casino. He tosses a fair coin repeatedly; if it comes up heads the casino pays him $1, but if it comes up tails he pays the casino $1. He plays this game repeatedly until

(1) he runs out of money, or
(2) he wins enough to buy the Tesla.

a. (8 points) Derive the probability that the student wins enough money to buy the Tesla. Give your answer in terms of k and N.

Now suppose the student has a benevolent uncle who covers all his losses. Specifically, his uncle gives him $1 every time he loses all of his fortune; so, whenever his money reaches $0, his uncle gives him $1 for the next coin toss.

b. (7 points) Let J represent the amount of money his uncle pays the student until he wins the Tesla. Find P(J = j) given that the student started with $k of his own money. Give your answers in terms of j, k and N.

c. (8 points) If k=10,000 and N=100,000, what is E(J)?

d. (7 points) This time suppose the uncle gives the student $2 every time he hits $0. With k=10,000 and N=100,000, what is E(J)?

Solution

a. Let A be the event that the student runs out of money, and B be the event that the first coin toss comes up heads. Let P_k denote probabilities when the starting money is $k. From the law of total probability,

\[
P_k(A) = P_k(A \mid B)\, P_k(B) + P_k(A \mid B^c)\, P_k(B^c). \qquad (1)
\]

Note that if the first toss is a head, the student's money increases to $(k + 1) and the rest of the game is the same as a new game starting with $(k + 1). Therefore, P_k(A | B) = P_{k+1}(A) and similarly P_k(A | B^c) = P_{k−1}(A). Writing p_k for P_k(A), (1) becomes

\[
p_k = \tfrac{1}{2}\,(p_{k+1} + p_{k-1}) \quad \text{if } 0 < k < N,
\]
which is a linear difference equation subject to the boundary conditions p_0 = 1 and p_N = 0. Notice that p_k − p_{k−1} = (1/2)(p_k − p_{k−2}) = p_{k−1} − p_{k−2}, so all consecutive differences are equal. Letting b_k = p_k − p_{k−1}, we obtain
\[
p_k = b_1 + p_{k-1} = 2b_1 + p_{k-2} = \dots = k b_1 + p_0.
\]


From the boundary conditions, we can compute b_1 = (p_N − p_0)/N = −1/N. The probability that the student runs out of money is
\[
P_k(A) = p_k = k b_1 + p_0 = -\frac{k}{N} + 1.
\]
Therefore, the probability that the student wins is 1 − P_k(A) = k/N.

b. First consider when j = 0. This means that the student wins without his uncle's help. Therefore, P(J = 0) = k/N.

Now consider the case where j > 0. In this scenario, the student must first lose his $k. Then his uncle gives him $1, which is equivalent to him starting the game over with initial money $1. In order for his uncle to pay $j in total, the student must go broke another j − 1 times before winning. Therefore, for j > 0,
\[
P(J = j) = \Big(1 - \frac{k}{N}\Big)\Big(1 - \frac{1}{N}\Big)^{j-1}\frac{1}{N}.
\]

c.

\[
\begin{aligned}
E(J) &= 0 \cdot P(J = 0) + \sum_{j=1}^{\infty} j\, P(J = j) \\
&= \sum_{j=1}^{\infty} j \Big(1 - \frac{k}{N}\Big)\Big(1 - \frac{1}{N}\Big)^{j-1}\frac{1}{N} \\
&= \Big(1 - \frac{k}{N}\Big)\frac{1}{N} \sum_{j=1}^{\infty} j \Big(1 - \frac{1}{N}\Big)^{j-1} \\
&= \Big(1 - \frac{k}{N}\Big)\frac{1}{N} \cdot \frac{1}{\big(1 - (1 - \tfrac{1}{N})\big)^2} \\
&= \Big(1 - \frac{k}{N}\Big) N = N - k = \$90{,}000.
\end{aligned}
\]

The uncle is expected to pay the difference between the Tesla's price and the student's starting amount, $90,000, to help his nephew buy the Tesla.
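
A Monte Carlo sketch of the uncle scenario (illustrative, not part of the original solution): simulating k = 10,000 and N = 100,000 directly is slow, so it uses the scaled-down test values k = 3 and N = 10, for which the formulas predict P(J = 0) = k/N = 0.3 (the part a win probability) and E(J) = N − k = 7.

# Monte Carlo sketch for the benevolent-uncle game (assumed scaled-down
# values k=3, N=10 so the simulation runs quickly; theory gives E(J) = N - k).
import numpy as np

def simulate_J(k, N, rng):
    """Play until the student can buy the Tesla; return the uncle's total payout."""
    money, paid = k, 0
    while money < N:
        if money == 0:            # uncle covers the loss with $1
            money, paid = 1, paid + 1
        money += 1 if rng.random() < 0.5 else -1
    return paid

rng = np.random.default_rng(1)
k, N, trials = 3, 10, 20_000
payouts = np.array([simulate_J(k, N, rng) for _ in range(trials)])
print("P(J = 0) ~", payouts.mean(where=None) if False else np.mean(payouts == 0), " theory:", k / N)
print("E(J)     ~", payouts.mean(), " theory:", N - k)

Changing the uncle's payment to $2 per bankruptcy (part d) and rerunning the same experiment gives an empirical mean that again matches N − k, as derived next.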

d. Similarly, we can compute the pmf of J to be
\[
p_J(j) =
\begin{cases}
\dfrac{k}{N} & \text{for } j = 0,\\[4pt]
\Big(1 - \dfrac{k}{N}\Big)\Big(1 - \dfrac{2}{N}\Big)^{j/2 - 1}\dfrac{2}{N} & \text{for } j \text{ even and } j > 0,\\[4pt]
0 & \text{otherwise.}
\end{cases}
\]
Then we can find the expected value of J:
\[
\begin{aligned}
E(J) &= 0 \cdot P(J = 0) + \sum_{\substack{j=2 \\ j \text{ even}}}^{\infty} j\, P(J = j) \\
&= \sum_{\substack{j=2 \\ j \text{ even}}}^{\infty} j \Big(1 - \frac{k}{N}\Big)\Big(1 - \frac{2}{N}\Big)^{j/2 - 1}\frac{2}{N} \\
&= \sum_{i=1}^{\infty} 2i \Big(1 - \frac{k}{N}\Big)\Big(1 - \frac{2}{N}\Big)^{i-1}\frac{2}{N} \\
&= 2\Big(1 - \frac{k}{N}\Big)\frac{2}{N} \cdot \frac{1}{\big(1 - (1 - \tfrac{2}{N})\big)^2} \\
&= \Big(1 - \frac{k}{N}\Big) N = N - k = \$90{,}000.
\end{aligned}
\]
Even if the benevolent uncle pays $2 every time the student goes bankrupt, he is still expected to pay $90,000 to cover for his nephew.

3. Bounds (20 points). Let {X_i, i = 1, 2, ...} be IID r.v.s with P(X_i = 1) = P(X_i = −1) = 1/2. Let S_n = Σ_{i=1}^n X_i. Take n = 10^6 and evaluate your bounds explicitly in each part below.

a. (5 points) Use Chebyshev’s inequality to determine a bound on P (Sn ≥ n/2).

b. (5 points) Use the Central Limit Theorem to determine a bound on P(S_n ≥ n/2). You can use tables to look up Q(·) values.

c. (10 points) Observing that P(S_n ≥ n/2) = P(e^{θS_n} ≥ e^{θn/2}) for θ > 0, use Markov's inequality to get as tight a bound on P(S_n ≥ n/2) as possible by choosing θ optimally.

Solution

a.

\[
\begin{aligned}
P\Big(S_n \ge \frac{n}{2}\Big) &= \frac{1}{2}\, P\Big(|S_n| \ge \frac{n}{2}\Big) && (1)\\
&= \frac{1}{2}\, P\Big(|S_n| \ge \frac{\sqrt{n}}{2}\,\sigma_{S_n}\Big) && (2)\\
&= \frac{1}{2}\, P\Big(|S_n - E(S_n)| \ge \frac{\sqrt{n}}{2}\,\sigma_{S_n}\Big) && (3)\\
&\le \frac{1}{2} \cdot \frac{1}{(\sqrt{n}/2)^2} = \frac{2}{n} = 2 \times 10^{-6}. && (4)
\end{aligned}
\]

(1) follows because the distribution of S_n is symmetric about 0. (2) follows because Var(S_n) = n, so σ_{S_n} = √n. (3) follows because E(S_n) = 0. (4) is Chebyshev's inequality.

b. Using the Central Limit Theorem, we know that (1/√n) Σ_{i=1}^n (X_i − E(X))/σ_X = (1/√n) S_n converges to N(0, 1) in distribution. We have used σ_X = 1 and E(X) = 0. Therefore,
\[
P\Big(S_n \ge \frac{n}{2}\Big) = P\Big(\frac{1}{\sqrt{n}} S_n \ge \frac{\sqrt{n}}{2}\Big) \approx Q\Big(\frac{\sqrt{n}}{2}\Big) = Q(500).
\]

c. From Markov's inequality we obtain P(e^{θS_n} ≥ e^{θn/2}) ≤ E(e^{θS_n})/e^{θn/2}. Then, we calculate
\[
E\big(e^{\theta S_n}\big) = E\big(e^{\theta \sum_{i=1}^{n} X_i}\big) = E\Big(\prod_{i=1}^{n} e^{\theta X_i}\Big) = \prod_{i=1}^{n} E\big(e^{\theta X_i}\big) = \prod_{i=1}^{n} \frac{e^{\theta} + e^{-\theta}}{2} = \Big(\frac{e^{\theta} + e^{-\theta}}{2}\Big)^{n}.
\]
Substituting this into the inequality, we have
\[
P\big(e^{\theta S_n} \ge e^{\theta n/2}\big) \le \frac{\big(\frac{e^{\theta} + e^{-\theta}}{2}\big)^{n}}{e^{\theta n/2}} = \Big(\frac{e^{\theta/2} + e^{-3\theta/2}}{2}\Big)^{n}.
\]
To find the tightest bound, we differentiate the right-hand side with respect to θ and set the derivative equal to 0:
\[
\frac{d}{d\theta}\Big(\frac{e^{\theta/2} + e^{-3\theta/2}}{2}\Big)^{n} = 2^{-n}\, n\, \big(e^{\theta/2} + e^{-3\theta/2}\big)^{n-1}\Big(\frac{1}{2}e^{\theta/2} - \frac{3}{2}e^{-3\theta/2}\Big) = 0.
\]
Solving this equation, we obtain θ = (1/2) ln 3. Substituting this value of θ back, we obtain the bound
\[
P\Big(S_n \ge \frac{n}{2}\Big) = P\big(e^{\theta S_n} \ge e^{\theta n/2}\big) \le \Big(\frac{e^{\frac{1}{4}\ln 3} + e^{-\frac{3}{4}\ln 3}}{2}\Big)^{10^6}.
\]

4. Random Walk and Wiener Process (20 points). Let {X_i, i = 1, 2, ...} be IID r.v.s with P(X_i = 1) = P(X_i = −1) = 1/2, and let S_n = Σ_{i=1}^n X_i. Let T_n = e^{S_n}.

a. (5 points) Is {T_n, n = 1, 2, ...} an IID process? a Markov process? a stationary process? Give reasons in each case.

b. (5 points) Determine the mean and autocorrelation functions of {T_n, n = 1, 2, ...}.

Recall that the Wiener process is defined as
\[
W(t) = \int_0^t X(\tau)\, d\tau,
\]
where X(t) is a continuous-time white Gaussian noise process (which is physically unrealizable, but the integral makes the resemblance between T_n and W(t) apparent). W(t) has the following properties:

(1) W(0) = 0.
(2) W(t) has independent increments.
(3) W(t_2) − W(t_1) ∼ N(0, t_2 − t_1) for all t_2 > t_1 ≥ 0.
(4) W(t) is continuous in t, and E(W(t)) = 0 and R_W(t_1, t_2) = min{t_1, t_2}.

c. (10 points) Find the mean and autocorrelation function of Y(t) = e^{W(t)}.

Solution

a. T_n is not an IID process. To see this, let us look at T_1 and T_2. We find that
\[
T_1 =
\begin{cases}
e & \text{with probability } \tfrac{1}{2},\\
e^{-1} & \text{with probability } \tfrac{1}{2},
\end{cases}
\qquad \text{whereas} \qquad
T_2 =
\begin{cases}
e^{2} & \text{with probability } \tfrac{1}{4},\\
1 & \text{with probability } \tfrac{1}{2},\\
e^{-2} & \text{with probability } \tfrac{1}{4}.
\end{cases}
\]
Since the first-order pmfs of T_1 and T_2 are different, the process T_n is not IID.

T_n is a Markov process. Since T_{n+1} = T_n e^{X_{n+1}} and X_{n+1} is independent of (T_1, ..., T_n),
\[
P(T_{n+1} = t_{n+1} \mid T_n = t_n, \dots, T_1 = t_1) = P\big(T_n e^{X_{n+1}} = t_{n+1} \mid T_n = t_n, \dots, T_1 = t_1\big) = P\big(e^{X_{n+1}} = t_{n+1}/t_n\big) = P(T_{n+1} = t_{n+1} \mid T_n = t_n).
\]
From our previous argument that T_1 and T_2 have different first-order pmfs, we conclude that the process is not strict-sense stationary. Also, E(T_1) ≠ E(T_2), so the process is not wide-sense stationary either.

b.
\[
E(T_n) = E\big(e^{\sum_{i=1}^{n} X_i}\big) = E\Big(\prod_{i=1}^{n} e^{X_i}\Big) = \prod_{i=1}^{n} E\big(e^{X_i}\big) = \Big(\frac{e + e^{-1}}{2}\Big)^{n}.
\]
Assuming n_1 < n_2,
\[
\begin{aligned}
R_T(n_1, n_2) &= E(T_{n_1} T_{n_2}) = E\big(e^{\sum_{i=1}^{n_1} X_i}\, e^{\sum_{j=1}^{n_2} X_j}\big) = E\Big(\prod_{i=1}^{n_1} e^{2X_i} \prod_{j=n_1+1}^{n_2} e^{X_j}\Big) \\
&= \prod_{i=1}^{n_1} E\big(e^{2X_i}\big) \prod_{j=n_1+1}^{n_2} E\big(e^{X_j}\big) = \Big(\frac{e^{2} + e^{-2}}{2}\Big)^{n_1} \Big(\frac{e + e^{-1}}{2}\Big)^{n_2 - n_1}.
\end{aligned}
\]
In general,
\[
R_T(n_1, n_2) = \Big(\frac{e^{2} + e^{-2}}{2}\Big)^{\min\{n_1, n_2\}} \Big(\frac{e + e^{-1}}{2}\Big)^{|n_2 - n_1|}.
\]

c.
\[
E(Y(t)) = \int_{-\infty}^{\infty} \frac{e^{u}}{\sqrt{2\pi t}}\, e^{-u^2/(2t)}\, du = e^{t/2}.
\]
To find the autocorrelation function, note that for t_1 < t_2, W(t_1) + W(t_2) = 2W(t_1) + (W(t_2) − W(t_1)) is the sum of independent N(0, 4t_1) and N(0, t_2 − t_1) random variables, hence distributed N(0, 3t_1 + t_2). This implies
\[
R_Y(t_1, t_2) = E(Y(t_1)Y(t_2)) = E\big(e^{W(t_1) + W(t_2)}\big) = e^{\frac{1}{2}(3t_1 + t_2)}.
\]
In general, R_Y(t_1, t_2) = e^{\frac{1}{2}(3\min\{t_1, t_2\} + \max\{t_1, t_2\})}.
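
A simulation sketch (not part of the original solution) that checks the part b formulas for E(T_n) and R_T(n_1, n_2) at a few arbitrarily chosen indices:

# Simulation check for Problem 4(b): T_n = exp(S_n) for a fair +/-1 random walk.
import numpy as np

rng = np.random.default_rng(2)
trials, n_max = 200_000, 8
X = rng.choice([-1, 1], size=(trials, n_max))
S = np.cumsum(X, axis=1)            # S[:, n-1] = S_n
T = np.exp(S)

c1 = (np.e + np.exp(-1)) / 2        # E(e^{X_i})
c2 = (np.e**2 + np.exp(-2)) / 2     # E(e^{2 X_i})

n = 5
print("E(T_5):   sim", T[:, n - 1].mean(), " theory", c1**n)

n1, n2 = 3, 7
print("R_T(3,7): sim", (T[:, n1 - 1] * T[:, n2 - 1]).mean(),
      " theory", c2**n1 * c1**(n2 - n1))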

5. Sensor network (20 points). Consider a network with N ∼ Geom(p) nodes, for 0 < p < 1, arranged on a ring with each node communicating only to its neighbor to the right. They all wish to have the same estimate of the number of nodes in the network N. Consider the following protocol: Each sensor chooses a number independently and according to an exponential distribution with parameter 1. Denote these numbers as X_1, X_2, ..., X_N. Thus, for each n ≥ 1, (X_1, X_2, ..., X_N) | {N = n} are i.i.d. Exp(1). Each node passes its number to its neighbor, who then takes the minimum of the number it has and the number it receives and passes it to the next neighbor. The process continues until all nodes have the number Z = min{X_1, X_2, ..., X_N}.

a. (10 points) Find the minimum MSE estimate of N given Z (this is the estimate of N that each sensor will have). Your answer should be in terms only of Z and p.
Hint: The following series sums may simplify your answers:
\[
\sum_{n=1}^{\infty} n x^{n-1} = \frac{1}{(1-x)^2} \quad \text{and} \quad \sum_{n=1}^{\infty} n^2 x^{n-1} = \frac{1+x}{(1-x)^3}, \quad \text{for } 0 < x < 1.
\]

b. (10 points) Let p = 1− e−1 and find the MAP decoder for N given Z.

Solution

a. First, note that Z | {N = n} ∼ Exp(n). Then we can find
\[
p_{N\mid Z}(n \mid z) = \frac{f_{Z\mid N}(z \mid n)\, p_N(n)}{f_Z(z)} = \frac{n e^{-nz}\, p (1-p)^{n-1}}{\sum_{m=1}^{\infty} m e^{-mz}\, p (1-p)^{m-1}}.
\]
Let the denominator be α = Σ_{m=1}^∞ m e^{−mz} p(1−p)^{m−1}. We can compute
\[
\alpha = \sum_{m=1}^{\infty} m e^{-mz}\, p (1-p)^{m-1} = p e^{-z} \sum_{m=1}^{\infty} m \Big(\frac{1-p}{e^{z}}\Big)^{m-1} = \frac{p e^{-z}}{\big(1 - \frac{1-p}{e^{z}}\big)^2}.
\]
Now, we find
\[
\begin{aligned}
E(N \mid Z = z) &= \frac{1}{\alpha} \sum_{n=1}^{\infty} n^2 e^{-nz}\, p (1-p)^{n-1} = \frac{1}{\alpha}\, p e^{-z} \sum_{n=1}^{\infty} n^2 \Big(\frac{1-p}{e^{z}}\Big)^{n-1} \\
&= \frac{1}{\alpha}\, p e^{-z}\, \frac{1 + \frac{1-p}{e^{z}}}{\big(1 - \frac{1-p}{e^{z}}\big)^3} = \frac{\big(1 - \frac{1-p}{e^{z}}\big)^2}{p e^{-z}} \cdot \frac{p e^{-z}}{\big(1 - \frac{1-p}{e^{z}}\big)^3}\Big(1 + \frac{1-p}{e^{z}}\Big) = \frac{e^{z} - p + 1}{e^{z} + p - 1}.
\end{aligned}
\]
Therefore, the MMSE estimate is E(N | Z) = (e^Z − p + 1)/(e^Z + p − 1).

b. Using the results from part a, the a posteriori probability is
\[
p_{N\mid Z}(n \mid z) = \frac{f_{Z\mid N}(z \mid n)\, p_N(n)}{\dfrac{p e^{-z}}{\big(1 - \frac{1-p}{e^{z}}\big)^2}} = n \Big(1 - \frac{1-p}{e^{z}}\Big)^2 \big(e^{-z}(1-p)\big)^{n-1}.
\]
To obtain the MAP decoder, we need to find the n that maximizes the posterior probability, i.e.,
\[
\hat{N} = \arg\max_{n}\; n \Big(1 - \frac{1-p}{e^{z}}\Big)^2 \big(e^{-z}(1-p)\big)^{n-1} = \arg\max_{n}\; n \big(e^{-z}(1-p)\big)^{n-1}.
\]
With p = 1 − e^{−1}, we have e^{−z}(1 − p) = e^{−(z+1)}. Consider the continuous function f(t) = t e^{−(z+1)(t−1)}. We can find the t at which f(t) is maximized by taking its derivative with respect to t and setting it equal to 0. Solving the equation, we obtain t = 1/(z+1). Note that since Z ≥ 0, 0 < 1/(z+1) ≤ 1 and hence 0 < t ≤ 1, so f is decreasing on [1, ∞). Extending the argument to the discrete case, we conclude that N̂ = 1 for all Z.
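
A quick numerical check of the MMSE formula and the MAP conclusion (an illustrative sketch, not part of the original solution; the values of p and z below are arbitrary test points, and the infinite sums are truncated):

# Numeric check for Problem 5 (sketch; truncated sums stand in for the infinite series).
import math

def posterior_mean_numeric(z, p, n_max=10_000):
    # E(N | Z=z) computed directly from the (truncated, unnormalized) posterior
    weights = [n * math.exp(-n * z) * p * (1 - p) ** (n - 1)
               for n in range(1, n_max + 1)]
    total = sum(weights)
    return sum(n * w for n, w in zip(range(1, n_max + 1), weights)) / total

def posterior_mean_formula(z, p):
    return (math.exp(z) - p + 1) / (math.exp(z) + p - 1)

for p, z in [(0.3, 0.1), (0.6, 1.0), (1 - math.e**-1, 0.5)]:
    print(p, z, posterior_mean_numeric(z, p), posterior_mean_formula(z, p))

# MAP check for p = 1 - e^{-1}: the posterior is proportional to n*(e^{-(z+1)})^{n-1},
# which should peak at n = 1.
p, z = 1 - math.e**-1, 0.5
post = [n * (math.exp(-z) * (1 - p)) ** (n - 1) for n in range(1, 50)]
print("MAP estimate:", 1 + post.index(max(post)))   # expect 1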

6. Noisy pixels (15 points). A pixel signal X ∼ U[−k, k] is digitized to obtain
\[
\tilde{X} = i + \tfrac{1}{2}, \quad \text{if } i < X \le i + 1, \quad i = -k, -k+1, \dots, k-2, k-1.
\]

To improve the visual appearance, the digitized value X̃ is dithered by adding an independent noise Z with mean E(Z) = 0 and variance Var(Z) = N to obtain Y = X̃ + Z.


Find the best linear MSE estimate of X given Y. Your answer should be in terms only of k, N, and Y.

Solution

The best linear estimate is of the form
\[
\hat{X} = \frac{\operatorname{Cov}(X, Y)}{\sigma_Y^2}\,(Y - E(Y)) + E(X).
\]
So, we need to find the means, variances and covariance in terms of k. Consider
\[
E(X) = 0, \qquad E(Y) = E(\tilde{X} + Z) = E(\tilde{X}) + E(Z) = 0.
\]
\[
\begin{aligned}
E(XY) &= E\big(X(\tilde{X} + Z)\big) = E(X\tilde{X}) + E(XZ) = E(X\tilde{X}) \\
&= \sum_{i=-k+1}^{k} P\{X \in (i-1, i]\}\, E\big(X\tilde{X} \mid X \in (i-1, i]\big) \qquad \text{(total expectation)} \\
&= \sum_{i=-k+1}^{k} \frac{1}{2k}\, E\big(X\tilde{X} \mid X \in (i-1, i]\big) \\
&= \sum_{i=-k+1}^{k} \frac{1}{2k}\Big(i - 1 + \frac{1}{2}\Big)\, E\big(X \mid X \in (i-1, i]\big) \\
&= \frac{1}{2k} \sum_{i=-k+1}^{k} \Big(i - \frac{1}{2}\Big)^2 = 2 \cdot \frac{1}{2k} \sum_{i=1}^{k} \Big(i - \frac{1}{2}\Big)^2 \\
&= \frac{1}{k}\Big(\frac{k(k+1)(2k+1)}{6} - 2 \cdot \frac{k(k+1)}{4} + \frac{k}{4}\Big) = \frac{1}{12}\,(4k^2 - 1).
\end{aligned}
\]
\[
\begin{aligned}
E(Y^2) &= E(\tilde{X}^2) + E(Z^2) = \sum_{i=-k+1}^{k} \frac{1}{2k}\, E\big(\tilde{X}^2 \mid X \in (i-1, i]\big) + N \\
&= \frac{1}{k} \sum_{i=1}^{k} \Big(i - \frac{1}{2}\Big)^2 + N = \frac{1}{12}\,(4k^2 - 1) + N.
\end{aligned}
\]
Thus
\[
\hat{X} = \frac{4k^2 - 1}{(4k^2 - 1) + 12N}\, Y.
\]


7. Dynamical system (15 points). Consider the system defined below:
\[
X_{n+1} = \alpha X_n + (1 - \alpha) W_n, \qquad W_n \sim N(0, 1),
\]
and the observation satisfies
\[
Y_n = X_n + Z_n, \qquad Z_n \sim N(0, 1),
\]
where X_0 ∼ N(0, P); X_0, W_0, W_1, W_2, ..., Z_1, ... are mutually independent; and 0 < α < 1.

a. (2 points) What value of P makes {X_n, n = 0, 1, ...} a stationary process? Fix this value of P for all subsequent questions.

b. (4 points) Compute the autocorrelation functions of {X_n} and {Y_n}. Sketch them and comment on how they depend on α, and give some intuition on why.

c. (4 points) Compute the power spectral densities of {X_n} and {Y_n}. Sketch them and comment on how they depend on the parameter α, and give some intuition on why.

d. (5 points) Write down the Kalman filter equations for the system. Note that these are the filtering equations, not the predictor equations. Compute the steady-state gain of the filter and the steady-state mean square error of the filter. Sketch them as a function of α.

Solution

a. To make X_n stationary, we need E(X_n²) to be independent of n. Note that
\[
E(X_1^2) = \alpha^2 E(X_0^2) + (1 - \alpha)^2 E(W_0^2) = \alpha^2 P + (1 - \alpha)^2.
\]
Setting E(X_0²) = E(X_1²) and solving P = α²P + (1 − α)², we obtain P = (1 − α)/(1 + α).

b. We know that X_{n+m} = α^m X_n + f(W_n, ..., W_{n+m−1}) for m ≥ 0, where the second term is independent of X_n and has zero mean. Since X_n is stationary,
\[
R_X(m) = E(X_n X_{n+m}) = \alpha^{|m|}\, E(X_n^2) = \alpha^{|m|}\, \frac{1 - \alpha}{1 + \alpha}.
\]
For {Y_n}, we know that
\[
R_Y(n_1, n_2) = E(Y_{n_1} Y_{n_2}) = E\big((X_{n_1} + Z_{n_1})(X_{n_2} + Z_{n_2})\big) = \alpha^{|n_1 - n_2|}\, \frac{1 - \alpha}{1 + \alpha} + \delta(n_1 - n_2).
\]
We can also see that Y_n is stationary, so we can write R_Y(m) = α^{|m|} (1 − α)/(1 + α) + δ(m).

See Figure 2a and Figure 2b for the plots.


Figure 2: Autocorrelation functions of X_n and Y_n: (a) R_X(m); (b) R_Y(m).


Figure 3: Power spectral densities of X_n and Y_n: (a) S_X(f); (b) S_Y(f).

c. The power spectral densities are calculated as below:
\[
\begin{aligned}
S_X(f) &= \sum_{m=-\infty}^{\infty} R_X(m)\, e^{-j2\pi f m} = \Big(\frac{1-\alpha}{1+\alpha}\Big)\Big(2\,\mathrm{Re}\Big(\frac{1}{1 - \alpha e^{-j2\pi f}}\Big) - 1\Big) \\
&= \Big(\frac{1-\alpha}{1+\alpha}\Big)\Big(\frac{1 - \alpha^2}{1 - 2\alpha\cos(2\pi f) + \alpha^2}\Big) = \frac{(1-\alpha)^2}{1 - 2\alpha\cos(2\pi f) + \alpha^2},
\end{aligned}
\]
\[
S_Y(f) = \sum_{m=-\infty}^{\infty} R_Y(m)\, e^{-j2\pi f m} = \frac{(1-\alpha)^2}{1 - 2\alpha\cos(2\pi f) + \alpha^2} + 1.
\]
See Figure 3a and Figure 3b for the plots.

d. The filtering equations of the Kalman filter are as follows:
\[
\hat{X}_{i+1\mid i+1} = \alpha\,(1 - k_i)\,\hat{X}_{i\mid i} + k_i Y_{i+1},
\]
where
\[
k_i = \frac{\alpha^2 \sigma^2_{i\mid i} + (1-\alpha)^2}{\alpha^2 \sigma^2_{i\mid i} + (1-\alpha)^2 + 1}
\quad \text{and} \quad
\sigma^2_{i+1\mid i+1} = (1 - k_i)\big(\alpha^2 \sigma^2_{i\mid i} + (1-\alpha)^2\big).
\]
Since X_n is stationary, the gain k_i and the filtering error variance σ²_{i|i} converge to steady-state values. Letting the steady-state gain and mean square error be k and σ², respectively, we can solve for them; both equal
\[
k = \sigma^2 = \frac{1-\alpha}{1 + \sqrt{1 + \alpha^2}}.
\]
See Figure 4 for the plot.

Figure 4: Steady-state gain and mean square error.
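
Iterating the filtering recursion numerically confirms the steady-state value (an illustrative sketch, not part of the original solution; the recursion is started from σ²_{0|0} = P, which is an assumed initialization):

# Scalar Kalman filter recursion for Problem 7(d) (sketch).
import math

def steady_state(alpha, iters=200):
    sigma2 = (1 - alpha) / (1 + alpha)       # assumed start: stationary Var(X_n)
    q = (1 - alpha) ** 2                     # process noise variance
    for _ in range(iters):
        m = alpha**2 * sigma2 + q            # prediction error variance
        k = m / (m + 1)                      # gain (observation noise variance = 1)
        sigma2 = (1 - k) * m                 # filtering error variance
    return k, sigma2

for alpha in [0.1, 0.5, 0.9]:
    k, sigma2 = steady_state(alpha)
    closed_form = (1 - alpha) / (1 + math.sqrt(1 + alpha**2))
    print(f"alpha={alpha}: gain={k:.4f}, mse={sigma2:.4f}, formula={closed_form:.4f}")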

8. Race to the bottom (20 points). An online gaming platform conducts a race and N participants have registered. Let participant i's finishing time be given by the continuous time random variable U_i with density f_i(x) for x > 0. The only thing that is known about f_i(x) is that f_i(0) = 1 for all i. Suppose all the U_i's are independent. The race is very popular, hence N is very large.

The race organizer pays the winner a prize proportional to their winning time; specifically, the winner gets a prize of $W_N = N min{U_1, U_2, ..., U_N}.

a. (8 points) What is E(W_N) as N → ∞?

b. (12 points) Does WN converge in distribution? If so, what is the limiting distribution?

Solution

b. (We derive the limiting distribution first, since the answer to part a uses the same bounds.)
\[
\begin{aligned}
\lim_{N\to\infty} P(W_N \le w) &= \lim_{N\to\infty} \Big(1 - P\big(N \min_i U_i > w\big)\Big) \\
&= \lim_{N\to\infty} \Big(1 - P\Big(\min_i U_i > \frac{w}{N}\Big)\Big) \\
&= \lim_{N\to\infty} \Big(1 - \prod_{i=1}^{N} P\Big(U_i > \frac{w}{N}\Big)\Big) \\
&= \lim_{N\to\infty} \Big(1 - \prod_{i=1}^{N}\Big(1 - F_i\Big(\frac{w}{N}\Big)\Big)\Big),
\end{aligned}
\]
where F_i(u) is the cdf of U_i.

To get some intuition about what lim_{N→∞} ∏_{i=1}^N (1 − F_i(w/N)) converges to, first consider the case when U_i ∼ Exp(1) for all i. Here F_i(w/N) = 1 − e^{−w/N}, so we have
\[
\lim_{N\to\infty} \prod_{i=1}^{N}\Big(1 - F_i\Big(\frac{w}{N}\Big)\Big) = \lim_{N\to\infty} \prod_{i=1}^{N} e^{-w/N} = e^{-w}.
\]
Therefore lim_{N→∞} [1 − ∏_{i=1}^N (1 − F_i(w/N))] = 1 − e^{−w}, meaning W_N converges to an exponential distribution with rate 1.

Second, consider the case when U_i ∼ U[0, 1] for all i. Then F_i(w/N) = w/N, and therefore
\[
\lim_{N\to\infty} \prod_{i=1}^{N}\Big(1 - F_i\Big(\frac{w}{N}\Big)\Big) = \lim_{N\to\infty} \Big(1 - \frac{w}{N}\Big)^{N} = e^{-w}.
\]
We can see that in this case too W_N converges to Exp(1).

In general, when f_i(0) = 1 and either (i) the U_i's are IID, or (ii) there exist δ > 0 and M such that |f_i'(x)| < M for all x ∈ (−δ, δ) and all i, we will show that for all w ≥ 0,
\[
\lim_{N\to\infty} \prod_{i=1}^{N}\Big(1 - F_i\Big(\frac{w}{N}\Big)\Big) = e^{-w}.
\]
Note that (i) implies (ii), so let us assume (ii) holds.

The Taylor series approximation
\[
F_i\Big(\frac{w}{N}\Big) \approx F_i(0) + f_i(0)\,\frac{w}{N} = \frac{w}{N} \qquad \text{(because } F_i(0) = 0 \text{ and } f_i(0) = 1\text{)}
\]
and the corresponding error bound
\[
\Big|F_i\Big(\frac{w}{N}\Big) - \frac{w}{N}\Big| < \big|f_i'(\delta_{i,N})\big|\,\frac{w^2}{2N^2}, \qquad \text{for some } \delta_{i,N} \in \Big(0, \frac{w}{N}\Big),
\]
yield
\[
\Big|1 - F_i\Big(\frac{w}{N}\Big) - \Big(1 - \frac{w}{N}\Big)\Big| < M\,\frac{w^2}{2N^2}.
\]
Rewriting the above as
\[
1 - \frac{w}{N} - M\frac{w^2}{2N^2} < 1 - F_i\Big(\frac{w}{N}\Big) < 1 - \frac{w}{N} + M\frac{w^2}{2N^2}, \qquad \forall\, i = 1, 2, \dots, N,
\]
gives
\[
\Big(1 - \frac{w}{N} - M\frac{w^2}{2N^2}\Big)^{N} < \prod_{i=1}^{N}\Big(1 - F_i\Big(\frac{w}{N}\Big)\Big) < \Big(1 - \frac{w}{N} + M\frac{w^2}{2N^2}\Big)^{N}.
\]
Therefore, by a sandwich argument,
\[
\lim_{N\to\infty} P(W_N \le w) = \lim_{N\to\infty}\Big(1 - \prod_{i=1}^{N}\Big(1 - F_i\Big(\frac{w}{N}\Big)\Big)\Big) = 1 - e^{-w}.
\]
That is, W_N converges in distribution to an Exp(1) random variable.


a. Since P(W_N > w) = ∏_{i=1}^N (1 − F_i(w/N)) and E(W_N) = ∫_0^∞ P(W_N > w) dw, the sandwich
\[
\int_0^{\infty}\Big(1 - \frac{w}{N} - M\frac{w^2}{2N^2}\Big)^{N} dw
< \int_0^{\infty} \prod_{i=1}^{N}\Big(1 - F_i\Big(\frac{w}{N}\Big)\Big)\, dw
< \int_0^{\infty}\Big(1 - \frac{w}{N} + M\frac{w^2}{2N^2}\Big)^{N} dw
\]
shows that E(W_N) → 1.

Note. If neither of the conditions (i) or (ii) holds, it is possible to construct examples where W_N goes to 0. We've given partial credit for this answer, depending on how rigorously it's been argued.
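
A simulation sketch of both parts (illustrative, not part of the original solution; the field is split between Exp(1) and U[0,1] finishing times, an arbitrary choice of densities satisfying f_i(0) = 1):

# Simulation check for Problem 8: W_N = N * min(U_1, ..., U_N) should be ~ Exp(1).
import numpy as np

rng = np.random.default_rng(4)
N, races = 1_000, 20_000

# half the runners have Exp(1) times, half have U[0,1] times
exp_times = rng.exponential(1.0, size=(races, N // 2))
uni_times = rng.uniform(0.0, 1.0, size=(races, N // 2))
W = N * np.minimum(exp_times.min(axis=1), uni_times.min(axis=1))

print("E(W_N) ~", W.mean(), " (theory: 1)")
# Compare a few quantiles with the Exp(1) limit: P(W <= w) = 1 - e^{-w}
for w in [0.5, 1.0, 2.0]:
    print(f"P(W_N <= {w}) ~ {np.mean(W <= w):.3f}  vs  {1 - np.exp(-w):.3f}")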
