Estimation and Detection
Deterministic Signals (Ch. 4) & Random Signals (Ch. 5)

Dr. ir. Richard C. Hendriks – 7/12/2018

Previous Lecture: Neyman-Pearson Theorem

Problem statement

Assume a data set $\mathbf{x} = [x[0], x[1], \ldots, x[N-1]]^T$ is available. The detection problem is defined as follows:

$$H_0 : T(\mathbf{x}) < \gamma$$
$$H_1 : T(\mathbf{x}) > \gamma$$

where $T$ is the decision function and $\gamma$ is the detection threshold. Our goal is to design $T$ so as to maximize $P_D$ subject to $P_{FA} \le \alpha$.

Neyman-Pearson Theorem

To maximize $P_D$ for a given $P_{FA} = \alpha$, decide $H_1$ if

$$L(\mathbf{x}) = \frac{p(\mathbf{x}; H_1)}{p(\mathbf{x}; H_0)} > \gamma$$

where the threshold $\gamma$ is found from

$$P_{FA} = \int_{\{\mathbf{x} : L(\mathbf{x}) > \gamma\}} p(\mathbf{x}; H_0)\, d\mathbf{x} = \alpha$$

The function $L(\mathbf{x})$ is called the likelihood ratio, and the entire test is called the likelihood ratio test (LRT).

Neyman-Pearson Theorem – Example (1)

Let $p(x; H_1)$ and $p(x; H_0)$ be given by the Rayleigh pdfs

$$p(x; H_1) = \begin{cases} \frac{x}{\sigma_1^2} \exp\left(-\frac{x^2}{2\sigma_1^2}\right) & x \ge 0 \\ 0 & x < 0 \end{cases}$$

and

$$p(x; H_0) = \begin{cases} \frac{x}{\sigma_0^2} \exp\left(-\frac{x^2}{2\sigma_0^2}\right) & x \ge 0 \\ 0 & x < 0 \end{cases}$$

with $\sigma_1^2 > \sigma_0^2$. The likelihood ratio test becomes

$$L(x) = \frac{p(x; H_1)}{p(x; H_0)} = \frac{\sigma_0^2}{\sigma_1^2} \exp\left[-\frac{1}{2}\left(\frac{x^2}{\sigma_1^2} - \frac{x^2}{\sigma_0^2}\right)\right] > \gamma.$$

Neyman-Pearson Theorem – Example (2)

Test statistic:

$$T(x) = x^2 > \gamma'$$

$$P_{FA} = P(x^2 > \gamma'; H_0) = P(x > \sqrt{\gamma'}; H_0) = \int_{\sqrt{\gamma'}}^{\infty} \frac{x}{\sigma_0^2} \exp\left(-\frac{x^2}{2\sigma_0^2}\right) dx = \exp\left(-\frac{\gamma'}{2\sigma_0^2}\right)$$

with $\gamma' = -2\sigma_0^2 \ln(P_{FA})$.

The detection probability:

$$P_D = \int_{\sqrt{\gamma'}}^{\infty} \frac{x}{\sigma_1^2} \exp\left(-\frac{x^2}{2\sigma_1^2}\right) dx = \exp\left(-\frac{\gamma'}{2\sigma_1^2}\right).$$
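As a numerical sanity check of these closed-form expressions, the following minimal sketch draws Rayleigh samples under both hypotheses and compares the empirical rates against the formulas. The values $\sigma_0 = 1$, $\sigma_1 = 2$ and $P_{FA} = 0.1$ are illustrative choices, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma0, sigma1 = 1.0, 2.0          # so sigma_0^2 = 1 and sigma_1^2 = 4 (illustrative)
pfa_target = 0.1

# Threshold on T(x) = x^2 from gamma' = -2 sigma_0^2 ln(PFA)
gamma_p = -2 * sigma0**2 * np.log(pfa_target)

# numpy's rayleigh(scale) has pdf (x/scale^2) exp(-x^2 / (2 scale^2))
x0 = rng.rayleigh(scale=sigma0, size=1_000_000)   # samples under H0
x1 = rng.rayleigh(scale=sigma1, size=1_000_000)   # samples under H1

print("empirical PFA :", np.mean(x0**2 > gamma_p))            # ~0.1
print("empirical PD  :", np.mean(x1**2 > gamma_p))
print("closed-form PD:", np.exp(-gamma_p / (2 * sigma1**2)))  # exp(-gamma'/(2 sigma_1^2))
```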

Minimum Probability of Error

Assume the prior probabilities of $H_0$ and $H_1$ are known and represented by $P(H_0)$ and $P(H_1)$, respectively. The probability of error, $P_e$, is then defined as

$$P_e = P(H_1)P(H_0|H_1) + P(H_0)P(H_1|H_0) = P(H_1)P_M + P(H_0)P_{FA}$$

Our goal is to design a detector that minimizes $P_e$. It can be shown that the following detector is optimal in this case:

$$\frac{p(\mathbf{x}|H_1)}{p(\mathbf{x}|H_0)} > \frac{P(H_0)}{P(H_1)} = \gamma$$

In case $P(H_0) = P(H_1)$, the detector is called the maximum likelihood detector.

Remember: Neyman-Pearson – Example

Consider the following signal detection problem

$$H_0 : x[n] = w[n] \quad n = 0, 1, \ldots, N-1$$
$$H_1 : x[n] = s[n] + w[n] \quad n = 0, 1, \ldots, N-1$$

where the signal is $s[n] = A$ for $A > 0$ and $w[n]$ is WGN with variance $\sigma^2$. The NP detector decides $H_1$ if

$$\frac{\frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} (x[n]-A)^2\right]}{\frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} x^2[n]\right]} > \gamma$$

Taking the logarithm of both sides and simplifying results in

$$\frac{1}{N} \sum_{n=0}^{N-1} x[n] > \frac{\sigma^2}{NA} \ln\gamma + \frac{A}{2} = \gamma'$$

The NP detector compares the sample mean $\bar{x} = \frac{1}{N}\sum_{n=0}^{N-1} x[n]$ to a threshold $\gamma'$. To determine the detection performance, we first note that the test statistic $T(\mathbf{x}) = \bar{x}$ is Gaussian under each hypothesis, with distribution

$$T(\mathbf{x}) \sim \begin{cases} \mathcal{N}\left(0, \frac{\sigma^2}{N}\right) & \text{under } H_0 \\ \mathcal{N}\left(A, \frac{\sigma^2}{N}\right) & \text{under } H_1 \end{cases}$$

We then have

$$P_{FA} = \Pr(T(\mathbf{x}) > \gamma'; H_0) = Q\left(\frac{\gamma'}{\sqrt{\sigma^2/N}}\right) \;\Rightarrow\; \gamma' = \sqrt{\frac{\sigma^2}{N}}\, Q^{-1}(P_{FA})$$

and

$$P_D = \Pr(T(\mathbf{x}) > \gamma'; H_1) = Q\left(\frac{\gamma' - A}{\sqrt{\sigma^2/N}}\right)$$

$P_D$ and $P_{FA}$ are related to each other according to the following equation:

$$P_D = Q\left(Q^{-1}(P_{FA}) - \sqrt{\frac{NA^2}{\sigma^2}}\right)$$

where $NA^2/\sigma^2$ is the signal energy-to-noise ratio.
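A minimal numerical sketch of these expressions follows; the values of $N$, $A$, $\sigma^2$ and $P_{FA}$ below are illustrative assumptions, not from the slides.

```python
import numpy as np
from scipy.stats import norm

N, A, sigma2 = 10, 0.5, 1.0        # illustrative values
pfa = 0.01

# Q(x) = norm.sf(x) and Q^{-1}(p) = norm.isf(p)
gamma_p = np.sqrt(sigma2 / N) * norm.isf(pfa)            # threshold on the sample mean
pd = norm.sf(norm.isf(pfa) - np.sqrt(N * A**2 / sigma2)) # closed-form PD

print(f"threshold gamma' = {gamma_p:.3f}, PD = {pd:.3f}")

# Monte Carlo verification of PD under H1
rng = np.random.default_rng(1)
x = A + np.sqrt(sigma2) * rng.standard_normal((200_000, N))
print("empirical PD:", np.mean(x.mean(axis=1) > gamma_p))
```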

Minimum Probability of Error – Example: DC in WGN

Consider the following signal detection problem

$$H_0 : x[n] = w[n] \quad n = 0, 1, \ldots, N-1$$
$$H_1 : x[n] = s[n] + w[n] \quad n = 0, 1, \ldots, N-1$$

where the signal is $s[n] = A$ for $A > 0$ and $w[n]$ is WGN with variance $\sigma^2$. The minimum probability of error detector decides $H_1$ if

$$\frac{p(\mathbf{x}|H_1)}{p(\mathbf{x}|H_0)} > \frac{P(H_0)}{P(H_1)} = 1 \quad \text{(assuming } P(H_0) = P(H_1) = 0.5\text{)},$$

leading to

$$\frac{\frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} (x[n]-A)^2\right]}{\frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} x^2[n]\right]} > 1$$

Taking the logarithm of both sides and simplifying results in

$$\frac{1}{N} \sum_{n=0}^{N-1} x[n] > \frac{A}{2}$$

This is the same test statistic as for the NP detector, but with a different threshold and performance.

Minimum Probability of Error – Example: DC in WGN

$P_e$ is then given by

$$P_e = \frac{1}{2}\left[P(H_0|H_1) + P(H_1|H_0)\right]$$
$$= \frac{1}{2}\left[\Pr\left(\frac{1}{N}\sum_{n=0}^{N-1} x[n] < A/2 \,\Big|\, H_1\right) + \Pr\left(\frac{1}{N}\sum_{n=0}^{N-1} x[n] > A/2 \,\Big|\, H_0\right)\right]$$
$$= \frac{1}{2}\left[\left(1 - Q\left(\frac{A/2 - A}{\sqrt{\sigma^2/N}}\right)\right) + Q\left(\frac{A/2}{\sqrt{\sigma^2/N}}\right)\right]$$
$$= Q\left(\sqrt{\frac{NA^2}{4\sigma^2}}\right)$$
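A quick numerical check of this closed form, counting both error types in a Monte Carlo run; the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

N, A, sigma2 = 10, 0.5, 1.0        # illustrative values
pe_theory = norm.sf(np.sqrt(N * A**2 / (4 * sigma2)))  # Q(sqrt(NA^2 / (4 sigma^2)))

rng = np.random.default_rng(2)
M = 200_000
w = np.sqrt(sigma2) * rng.standard_normal((M, N))
err_h0 = np.mean(w.mean(axis=1) > A / 2)            # deciding H1 while H0 is true
err_h1 = np.mean((A + w).mean(axis=1) < A / 2)      # deciding H0 while H1 is true
print("Pe theory   :", pe_theory)
print("Pe empirical:", 0.5 * (err_h0 + err_h1))     # equal priors
```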

Minimum Probability of Error – MAP detector

Starting from

$$\frac{p(\mathbf{x}|H_1)}{p(\mathbf{x}|H_0)} > \frac{P(H_0)}{P(H_1)} = \gamma$$

and using Bayes' rule

$$p(H_i|\mathbf{x}) = \frac{p(\mathbf{x}|H_i)\,p(H_i)}{p(\mathbf{x})}$$

we arrive at

$$p(H_1|\mathbf{x}) > p(H_0|\mathbf{x}).$$

This is called the MAP detector, which, if $P(H_1) = P(H_0)$, reduces again to the ML detector.

Bayes Risk

A generalisation of the minimum $P_e$ criterion is one where costs are assigned to each type of error.

Let $C_{ij}$ be the cost if we decide $H_i$ while $H_j$ is true. Minimizing the expected cost, we get

$$R = E[C] = \sum_{i=0}^{1} \sum_{j=0}^{1} C_{ij}\, P(H_i|H_j)\, P(H_j)$$

If $C_{10} > C_{00}$ and $C_{01} > C_{11}$, the detector that minimises the Bayes risk decides $H_1$ when

$$\frac{p(\mathbf{x}|H_1)}{p(\mathbf{x}|H_0)} > \frac{C_{10} - C_{00}}{C_{01} - C_{11}} \cdot \frac{P(H_0)}{P(H_1)} = \gamma.$$

Previous Lecture

$$H_0 : T(\mathbf{x}) < \gamma$$
$$H_1 : T(\mathbf{x}) > \gamma$$

Using detection theory, rules can be derived on how to choose $\gamma$ and how to find $T(\mathbf{x})$.

• Neyman-Pearson theorem: $L(\mathbf{x}) = \frac{p(\mathbf{x};H_1)}{p(\mathbf{x};H_0)} > \gamma$, where $\gamma$ is found from $P_{FA} = \int_{\{\mathbf{x}: L(\mathbf{x})>\gamma\}} p(\mathbf{x};H_0)\,d\mathbf{x} = \alpha$

• Minimum probability of error: $\frac{p(\mathbf{x}|H_1)}{p(\mathbf{x}|H_0)} > \frac{P(H_0)}{P(H_1)} = \gamma$

• Bayesian detector: $\frac{p(\mathbf{x}|H_1)}{p(\mathbf{x}|H_0)} > \frac{C_{10}-C_{00}}{C_{01}-C_{11}} \cdot \frac{P(H_0)}{P(H_1)} = \gamma$

All three use a similar test statistic, but a different threshold.

Today and next Lecture

• Detection of deterministic signals: detecting a known signal in noise using the NP criterion.

• Detection of realisations of random processes: detecting the realisation of a random process with known covariance structure.

Deterministic Signals

Binary detection problem:

$$H_0 : x[n] = w[n]$$
$$H_1 : x[n] = s[n] + w[n]$$

Assumptions:

• $s[n]$ is deterministic and known.
• $w[n]$ is white Gaussian noise with variance $\sigma^2$.

Deterministic Signals

Notice that the presence of $s[n]$ implies a change in the mean of the observed signal. The optimal detector will test whether there is a change in the mean of the test statistic.

The NP detector decides $H_1$ if the likelihood ratio exceeds a threshold,

$$L(\mathbf{x}) = \frac{p(\mathbf{x}; H_1)}{p(\mathbf{x}; H_0)} > \gamma$$

where $\mathbf{x} = [x[0], x[1], \ldots, x[N-1]]^T$. Since

$$p(\mathbf{x}; H_1) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} (x[n]-s[n])^2\right]$$

$$p(\mathbf{x}; H_0) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} x^2[n]\right]$$

we have

$$L(\mathbf{x}) = \exp\left[-\frac{1}{2\sigma^2} \left(\sum_{n=0}^{N-1} (x[n]-s[n])^2 - \sum_{n=0}^{N-1} x^2[n]\right)\right] > \gamma.$$

Deterministic Signals

Taking the logarithm of both sides does not change the inequality, so we have

$$l(\mathbf{x}) = \ln L(\mathbf{x}) = -\frac{1}{2\sigma^2} \left(\sum_{n=0}^{N-1} (x[n]-s[n])^2 - \sum_{n=0}^{N-1} x^2[n]\right) > \ln\gamma$$

We decide $H_1$ if

$$\frac{1}{\sigma^2} \sum_{n=0}^{N-1} x[n]s[n] - \frac{1}{2\sigma^2} \sum_{n=0}^{N-1} s^2[n] > \ln\gamma$$

Since $s[n]$ is known, we may incorporate the energy term into the threshold to yield

$$T(\mathbf{x}) = \sum_{n=0}^{N-1} x[n]s[n] > \gamma'$$

where $\gamma' = \sigma^2 \ln\gamma + \frac{1}{2}\sum_{n=0}^{N-1} s^2[n]$.

How to interpret this? $T(\mathbf{x})$ is the correlation between the observed signal $x[n]$ and $s[n]$.

Deterministic Signals - Interpretation Example: DC Level in WGN

Assume that $s[n] = A$ and $A > 0$. Thus $T(\mathbf{x}) = \sum_{n=0}^{N-1} A\,x[n] > \gamma'$.

Recall this example from last week: dividing the left- and right-hand sides by $NA$, we have

$$T'(\mathbf{x}) = \frac{1}{NA}\, T(\mathbf{x}) = \frac{1}{N} \sum_{n=0}^{N-1} x[n] = \bar{x} > \gamma''$$

where $\gamma'' = \gamma'/(NA)$. Again, this is a correlation between the observed signal $x[n]$ and $s[n]$.

Deterministic Signals - Interpretation

Interpretation 1: The resulting $T(\mathbf{x}) = \sum_{n=0}^{N-1} x[n]s[n]$ is a correlator. The received data is correlated with a replica of the signal.

Interpretation 2: The resulting $T(\mathbf{x}) = \sum_{n=0}^{N-1} x[n]s[n]$ is a matched filter.

Fig. 4.1 from Kay-II.

Deterministic Signals – Matched Filter Interpretation

Fig. 4.1 from Kay-II.

• Let $x[n]$ be the input to an FIR filter with impulse response $h[n]$.
• Output: $y[n] = \sum_{k=0}^{n} h[n-k]\,x[k]$
• Select as impulse response $h[n] = s[N-1-n]$ for $n = 0, 1, \ldots, N-1$

Then

$$y[n] = \sum_{k=0}^{n} s[N-1-(n-k)]\,x[k].$$

The output at $n = N-1$ is $y[N-1] = \sum_{k=0}^{N-1} s[k]x[k] = T(\mathbf{x})$.

Deterministic Signals – Matched Filter Interpretation

Fig. 4.1 from Kay-II.

Matched filter:

$$y[n] = \sum_{k=0}^{n} s[N-1-(n-k)]\,x[k], \qquad y[N-1] = \sum_{k=0}^{N-1} s[k]x[k] = T(\mathbf{x})$$

• This implementation of the NP detector is known as the matched filter.
• The matched filter impulse response is obtained by flipping $s[n]$ about $n = 0$ and shifting it to the right by $N-1$ samples.
• The best detection performance is obtained at $n = N-1$, where $y[n] = T(\mathbf{x})$; a sketch of this implementation follows below.
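The equivalence of the matched-filter output at $n = N-1$ and the correlator statistic can be checked with a short sketch; the signal below is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 8
s = rng.standard_normal(N)          # known deterministic signal (illustrative)
x = s + rng.standard_normal(N)      # one observation under H1

h = s[::-1]                         # h[n] = s[N-1-n]: flipped replica of s
y = np.convolve(x, h)               # FIR filtering of x with h

# The matched-filter output at n = N-1 equals the correlator statistic
print(y[N - 1], np.dot(x, s))       # identical up to rounding
```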

Matched Filter - SNR Maximizer

Matched filter:

$$y[n] = \sum_{k=0}^{n} s[N-1-(n-k)]\,x[k], \qquad y[N-1] = \sum_{k=0}^{N-1} s[k]x[k] = T(\mathbf{x})$$

• For deterministic signals, the matched filter is the optimal implementation of the NP detector!
• To optimize the detection probability $P_D$, we have seen that we should increase the deflection coefficient. Generally this means increasing the SNR.
• The matched filter maximises the SNR at the output of an FIR filter.

Matched Filter - SNR Maximizer

Output SNR:

$$\eta = \frac{E^2(y[N-1]; H_1)}{\mathrm{var}(y[N-1]; H_1)}$$

Let $\mathbf{s} = [s[0], \ldots, s[N-1]]^T$, $\mathbf{w} = [w[0], \ldots, w[N-1]]^T$ and $\mathbf{h} = [h[N-1], \ldots, h[0]]^T$. Then

$$\eta = \frac{(\mathbf{h}^T\mathbf{s})^2}{E[(\mathbf{h}^T\mathbf{w})^2]} = \frac{(\mathbf{h}^T\mathbf{s})^2}{\mathbf{h}^T E(\mathbf{w}\mathbf{w}^T)\mathbf{h}} = \frac{(\mathbf{h}^T\mathbf{s})^2}{\mathbf{h}^T \sigma^2\mathbf{I}\,\mathbf{h}} = \frac{1}{\sigma^2}\, \frac{(\mathbf{h}^T\mathbf{s})^2}{\mathbf{h}^T\mathbf{h}}$$

Cauchy-Schwarz inequality: $(\mathbf{h}^T\mathbf{s})^2 \le (\mathbf{h}^T\mathbf{h})(\mathbf{s}^T\mathbf{s})$, with equality if and only if $\mathbf{h} = c\,\mathbf{s}$. Therefore

$$\eta \le \frac{1}{\sigma^2}\, \mathbf{s}^T\mathbf{s} = \frac{\mathcal{E}}{\sigma^2}$$

Taking $c = 1$, the maximum SNR is obtained if

$$h[N-1-n] = s[n], \quad n = 0, 1, \ldots, N-1$$
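A small numerical check of the Cauchy-Schwarz bound, assuming illustrative values of $N$ and $\sigma^2$: the output SNR attains $\mathcal{E}/\sigma^2$ for $\mathbf{h} = \mathbf{s}$ and is smaller for any other choice of $\mathbf{h}$.

```python
import numpy as np

rng = np.random.default_rng(4)
N, sigma2 = 16, 0.5                  # illustrative values
s = rng.standard_normal(N)

def eta(h):
    # Output SNR (h^T s)^2 / (sigma^2 h^T h), as on the slide
    return (h @ s) ** 2 / (sigma2 * (h @ h))

print("eta for h = s    :", eta(s))                        # equals s^T s / sigma^2
print("bound E/sigma^2  :", (s @ s) / sigma2)
print("eta for random h :", eta(rng.standard_normal(N)))   # strictly smaller
```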

Performance of the Matched Filter

What is $P_D$ for a given $P_{FA}$? $H_1$ is decided when $T(\mathbf{x}) = \sum_{k=0}^{N-1} s[k]x[k] > \gamma'$.

$x[n]$ is Gaussian $\Rightarrow$ $T(\mathbf{x})$ is also Gaussian. Therefore,

$$E(T; H_0) = E\left(\sum_{n=0}^{N-1} w[n]s[n]\right) = 0$$

$$E(T; H_1) = E\left(\sum_{n=0}^{N-1} (s[n]+w[n])\,s[n]\right) = \mathcal{E}$$

$$\mathrm{var}(T; H_0) = \mathrm{var}\left(\sum_{n=0}^{N-1} w[n]s[n]\right) = \sigma^2\mathcal{E}$$

$$\mathrm{var}(T; H_1) = \mathrm{var}\left(\sum_{n=0}^{N-1} (s[n]+w[n])\,s[n]\right) = \sigma^2\mathcal{E}$$

where $\mathcal{E} = \sum_{n=0}^{N-1} s^2[n]$ is the signal energy.

Performance of the Matched Filter

$$T \sim \begin{cases} \mathcal{N}(0, \sigma^2\mathcal{E}) & \text{under } H_0 \\ \mathcal{N}(\mathcal{E}, \sigma^2\mathcal{E}) & \text{under } H_1 \end{cases}$$

The larger $\mathcal{E}$, the further the pdfs move away from each other, and the better the performance will be.

Figure: pdfs of the matched filter statistic (Fig. 4.4 Kay-II).

Performance of the Matched Filter

The probability of false alarm and the probability of detection are then as follows:

$$P_{FA} = \Pr(T > \gamma'; H_0) = Q\left(\frac{\gamma'}{\sqrt{\sigma^2\mathcal{E}}}\right)$$

$$P_D = \Pr(T > \gamma'; H_1) = Q\left(\frac{\gamma' - \mathcal{E}}{\sqrt{\sigma^2\mathcal{E}}}\right)$$

Deriving $\gamma' = \sqrt{\sigma^2\mathcal{E}}\, Q^{-1}(P_{FA})$ and substituting into $P_D$, we have

$$P_D = Q\left(Q^{-1}(P_{FA}) - \sqrt{\frac{\mathcal{E}}{\sigma^2}}\right)$$

where $\mathcal{E}/\sigma^2$ is the energy-to-noise ratio.

Performance of the Matched Filter

Fig. 4.5 from Kay-II.

$$P_D = Q\left(Q^{-1}(P_{FA}) - \sqrt{\frac{\mathcal{E}}{\sigma^2}}\right)$$

where $\mathcal{E}/\sigma^2$ is the energy-to-noise ratio. To increase $P_D$: increase $P_{FA}$, and/or increase the SNR $\mathcal{E}/\sigma^2$.

Remember the example from the previous lecture, DC in WGN, with

$$P_D = Q\left(Q^{-1}(P_{FA}) - \sqrt{\frac{NA^2}{\sigma^2}}\right)$$

In that example $s[n] = A$ and $\mathcal{E} = NA^2$. The shape of the signal does not influence the detection performance for white noise; only the total energy $\mathcal{E}$ does, in our example thus $N$ and the amplitude $A$.

Correlated Noise

What about coloured noise? Let $\mathbf{w} \sim \mathcal{N}(\mathbf{0}, \mathbf{C})$. Then

$$p(\mathbf{x}; H_1) = \frac{1}{(2\pi)^{N/2} \det^{1/2}(\mathbf{C})} \exp\left[-\frac{1}{2} (\mathbf{x}-\mathbf{s})^T \mathbf{C}^{-1} (\mathbf{x}-\mathbf{s})\right]$$

$$p(\mathbf{x}; H_0) = \frac{1}{(2\pi)^{N/2} \det^{1/2}(\mathbf{C})} \exp\left[-\frac{1}{2}\, \mathbf{x}^T \mathbf{C}^{-1} \mathbf{x}\right].$$

The NP detector decides $H_1$ if the likelihood ratio exceeds a threshold: $L(\mathbf{x}) = \frac{p(\mathbf{x};H_1)}{p(\mathbf{x};H_0)} > \gamma$.

$$\ln L(\mathbf{x}) = \mathbf{x}^T\mathbf{C}^{-1}\mathbf{s} - \frac{1}{2}\,\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s} > \ln\gamma$$

$$\mathbf{x}^T\mathbf{C}^{-1}\mathbf{s} > \ln\gamma + \frac{1}{2}\,\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s} = \gamma'$$

For WGN ($\mathbf{C} = \sigma^2\mathbf{I}$) we obtain the special case we already know:

$$\frac{\mathbf{x}^T\mathbf{s}}{\sigma^2} > \gamma' \;\Rightarrow\; \mathbf{x}^T\mathbf{s} = \sum_{n=0}^{N-1} x[n]s[n] > \sigma^2\gamma'$$

Deterministic Signals (WGN) - Interpretation

Binary detection problem with $\mathbf{w} \sim \mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I})$ and deterministic $\mathbf{s}$:

$$H_0 : x[n] = w[n]$$
$$H_1 : x[n] = s[n] + w[n]$$

Interpretation 1: The resulting $T(\mathbf{x}) = \sum_{n=0}^{N-1} x[n]s[n]$ is a correlator. The received data is correlated with a replica of the signal.

Interpretation 2: The resulting $T(\mathbf{x}) = \sum_{n=0}^{N-1} x[n]s[n]$ is a matched filter.

Fig. 4.1 from Kay-II.

Deterministic Signals – Summary

Fig. 4.7 Kay-II.

Binary detection problem with $\mathbf{w} \sim \mathcal{N}(\mathbf{0}, \mathbf{C})$ and deterministic $\mathbf{s}$:

$$H_0 : x[n] = w[n]$$
$$H_1 : x[n] = s[n] + w[n]$$

$$T(\mathbf{x}) = \mathbf{x}^T\mathbf{C}^{-1}\mathbf{s}$$

Notice that if $\mathbf{C}$ is positive definite, $\mathbf{C}^{-1}$ can be written as $\mathbf{C}^{-1} = \mathbf{D}^T\mathbf{D}$, leading to

$$T(\mathbf{x}) = \mathbf{x}^T\mathbf{D}^T\mathbf{D}\mathbf{s} = (\mathbf{D}\mathbf{x})^T(\mathbf{D}\mathbf{s})$$

so the detector first prewhitens the data and then correlates with the prewhitened signal, as sketched below.
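One way to realise $\mathbf{C}^{-1} = \mathbf{D}^T\mathbf{D}$ in practice is via the Cholesky factor $\mathbf{C} = \mathbf{L}\mathbf{L}^T$, taking $\mathbf{D} = \mathbf{L}^{-1}$ (one valid choice of $\mathbf{D}$, not the only one). A minimal sketch with an illustrative covariance:

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(5)
N, rho = 4, 0.8
# Example covariance with entries rho^|i-j| (illustrative choice)
C = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
s = rng.standard_normal(N)                        # known signal (illustrative)
x = s + rng.multivariate_normal(np.zeros(N), C)   # observation under H1

# C = L L^T with L lower triangular; D = L^{-1} gives C^{-1} = D^T D
L = cholesky(C, lower=True)
Dx = solve_triangular(L, x, lower=True)   # prewhitened data  D x
Ds = solve_triangular(L, s, lower=True)   # prewhitened signal D s

print(Dx @ Ds)                    # prewhitened correlator (D x)^T (D s)
print(x @ np.linalg.solve(C, s))  # direct x^T C^{-1} s -- same value
```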

Performance of the Matched Filter – Colored Noise

For white Gaussian noise, $P_D$ does not depend on the signal shape, only on the energy $\mathbf{s}^T\mathbf{s}$:

$$P_D = Q\left(Q^{-1}(P_{FA}) - \sqrt{\frac{\mathbf{s}^T\mathbf{s}}{\sigma^2}}\right)$$

What is $P_D$ for a given $P_{FA}$ in colored noise?

Performance of the Matched Filter – Colored Noise

$H_1$ is decided when $T(\mathbf{x}) = \mathbf{x}^T\mathbf{C}^{-1}\mathbf{s} > \gamma'$. $\mathbf{x}$ is Gaussian $\Rightarrow$ $T(\mathbf{x})$ is also Gaussian. Therefore,

$$E(T; H_0) = E\left(\mathbf{w}^T\mathbf{C}^{-1}\mathbf{s}\right) = 0$$

$$E(T; H_1) = E\left((\mathbf{s}+\mathbf{w})^T\mathbf{C}^{-1}\mathbf{s}\right) = \mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}$$

$$\mathrm{var}(T; H_0) = E\left[(\mathbf{w}^T\mathbf{C}^{-1}\mathbf{s})^2\right] - \left(E[\mathbf{w}^T\mathbf{C}^{-1}\mathbf{s}]\right)^2 = \mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}$$

$$\mathrm{var}(T; H_1) = E\left[(\mathbf{x}^T\mathbf{C}^{-1}\mathbf{s})^2\right] - \left(E[\mathbf{x}^T\mathbf{C}^{-1}\mathbf{s}]\right)^2 = \mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}.$$

$$P_{FA} = Q\left(\frac{\gamma'}{\sqrt{\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}}}\right) \;\Rightarrow\; \gamma' = Q^{-1}(P_{FA})\,\sqrt{\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}}$$

$$P_D = Q\left(\frac{\gamma' - \mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}}{\sqrt{\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}}}\right) = Q\left(Q^{-1}(P_{FA}) - \sqrt{\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}}\right),$$

with $\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}$ the "SNR" of the "whitened" signal.
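A short numerical sketch of this performance expression; the covariance, signal and $P_{FA}$ below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rho = 0.8
C = np.array([[1.0, rho], [rho, 1.0]])       # example noise covariance
s = np.array([1.0, -1.0])                    # example signal
pfa = 0.01

d2 = s @ np.linalg.solve(C, s)               # deflection s^T C^{-1} s
pd = norm.sf(norm.isf(pfa) - np.sqrt(d2))    # Q(Q^{-1}(PFA) - sqrt(s^T C^{-1} s))
print(f"s^T C^-1 s = {d2:.3f}, PD = {pd:.3f}")
```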

Colored Noise: Optimal Detection Signal

Notice: for white Gaussian noise, $P_D$ does not depend on the signal shape, only on the energy $\mathbf{s}^T\mathbf{s}$:

$$P_D = Q\left(Q^{-1}(P_{FA}) - \sqrt{\frac{\mathbf{s}^T\mathbf{s}}{\sigma^2}}\right)$$

For colored Gaussian noise, $P_D$ DOES depend on the shape of $\mathbf{s}$ compared to the statistics of the noise:

$$P_D = Q\left(\frac{\gamma' - \mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}}{\sqrt{\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}}}\right) = Q\left(Q^{-1}(P_{FA}) - \sqrt{\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}}\right).$$

What is the optimal $\mathbf{s}$ for $P_D$?

Colored Noise: Optimal Detection Signal

1. Constrain the total energy to be $\mathbf{s}^T\mathbf{s} = \mathcal{E}$.
2. Optimize for the shape of $\mathbf{s}$:

$$\max_{\mathbf{s}} \; \mathbf{s}^T\mathbf{C}^{-1}\mathbf{s} \quad \text{s.t.} \quad \mathbf{s}^T\mathbf{s} = \mathcal{E}$$

$$L(\mathbf{s}) = \mathbf{s}^T\mathbf{C}^{-1}\mathbf{s} + \lambda(\mathcal{E} - \mathbf{s}^T\mathbf{s})$$

Use $\frac{\partial\, \mathbf{b}^T\mathbf{x}}{\partial \mathbf{x}} = \mathbf{b}$ and $\frac{\partial\, \mathbf{x}^T\mathbf{A}\mathbf{x}}{\partial \mathbf{x}} = 2\mathbf{A}\mathbf{x}$:

$$\frac{\partial L(\mathbf{s})}{\partial \mathbf{s}} = 2\mathbf{C}^{-1}\mathbf{s} - 2\lambda\mathbf{s} = 0 \;\rightarrow\; \mathbf{C}^{-1}\mathbf{s} = \lambda\mathbf{s}$$

$\mathbf{s}$ is thus an eigenvector of $\mathbf{C}^{-1}$ with eigenvalue $\lambda$.

To maximize $\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}$, we should choose the eigenvector $\mathbf{s}$ that corresponds to the maximum eigenvalue $\lambda$ of $\mathbf{C}^{-1}$ (or, equivalently, the minimum eigenvalue of $\mathbf{C}$).

Optimal Detection Signal - Example

Let $0 < \rho < 1$ be the correlation coefficient between consecutive noise samples, and let

$$\mathbf{C} = \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}$$

1. Eigenvalues: $(\lambda - 1 - \rho)(\lambda - 1 + \rho) = 0 \rightarrow \lambda_1 = 1+\rho$ and $\lambda_2 = 1-\rho$, with corresponding eigenvectors following from $(\mathbf{C} - \lambda_i\mathbf{I})\mathbf{v}_i = \mathbf{0}$: $\mathbf{v}_1 = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix}$.

2. The minimum eigenvalue of $\mathbf{C}$ is thus $\lambda_2 = 1-\rho$.

3. Therefore, $\mathbf{s} = \sqrt{\mathcal{E}}\,\mathbf{v}_2 = \sqrt{\mathcal{E}} \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix}$ with

$$T(\mathbf{x}) = \mathbf{x}^T\mathbf{C}^{-1}\mathbf{s} = \mathbf{x}^T\mathbf{C}^{-1}\sqrt{\mathcal{E}}\,\mathbf{v}_2 = \sqrt{\mathcal{E}}\,\frac{1}{\lambda_2}\,\mathbf{x}^T\mathbf{v}_2 = \frac{\sqrt{\mathcal{E}/2}}{1-\rho}\,(x[0] - x[1]).$$

This detector keeps the target signal (as $s[0] = -s[1]$) and reduces the noise (acting as a beamformer). A numerical check follows below.
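The hand computation above can be verified with an eigendecomposition; the values of $\rho$, $\mathcal{E}$ and the observation are illustrative, and note that numerical eigenvectors are only defined up to sign.

```python
import numpy as np

rho, E = 0.8, 1.0                          # illustrative values
C = np.array([[1.0, rho], [rho, 1.0]])

lam, V = np.linalg.eigh(C)                 # eigenvalues in ascending order
v_min = V[:, 0]                            # eigenvector of the smallest eigenvalue
v_min = v_min * np.sign(v_min[0])          # fix sign: v_min ~ [1/sqrt(2), -1/sqrt(2)]
print(lam)                                 # [1 - rho, 1 + rho]

s = np.sqrt(E) * v_min                     # optimal detection signal
x = np.array([0.3, -0.1])                  # arbitrary observation
T_direct = x @ np.linalg.solve(C, s)       # x^T C^{-1} s
T_closed = np.sqrt(E / 2) / (1 - rho) * (x[0] - x[1])
print(T_direct, T_closed)                  # identical up to rounding
```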

Random Signals

Assumptions:

• Target signal $s[n]$: random, zero-mean Gaussian random process with known covariance.
• Noise $w[n]$: WGN with known $\sigma^2$, independent of $s[n]$.

Binary detection problem:

$$H_0 : x[n] = w[n]$$
$$H_1 : x[n] = s[n] + w[n]$$

The NP detector decides $H_1$ if

$$L(\mathbf{x}) = \frac{p(\mathbf{x}; H_1)}{p(\mathbf{x}; H_0)} > \gamma$$

Random Signals - Example

$$H_0 : \mathbf{x} \sim \mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I})$$
$$H_1 : \mathbf{x} \sim \mathcal{N}(\mathbf{0}, (\sigma_s^2 + \sigma^2)\mathbf{I})$$

Thus, we have

$$L(\mathbf{x}) = \frac{\frac{1}{(2\pi(\sigma_s^2+\sigma^2))^{N/2}} \exp\left[-\frac{1}{2(\sigma_s^2+\sigma^2)} \sum_{n=0}^{N-1} x^2[n]\right]}{\frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} x^2[n]\right]}.$$

Calculating the log-likelihood ratio (LLR), we have

$$l(\mathbf{x}) = \frac{N}{2} \ln\left(\frac{\sigma^2}{\sigma_s^2+\sigma^2}\right) - \frac{1}{2}\left(\frac{1}{\sigma_s^2+\sigma^2} - \frac{1}{\sigma^2}\right) \sum_{n=0}^{N-1} x^2[n]$$

$$= \frac{N}{2} \ln\left(\frac{\sigma^2}{\sigma_s^2+\sigma^2}\right) + \frac{1}{2}\, \frac{\sigma_s^2}{\sigma^2(\sigma_s^2+\sigma^2)} \sum_{n=0}^{N-1} x^2[n]$$

Notice: scalar Wiener filter!

Random Signals - Example

Hence, we decide $H_1$ if

$$T(\mathbf{x}) = \sum_{n=0}^{N-1} x^2[n] > \gamma'$$

The NP detector computes the energy in the received data and compares it to a threshold. $T(\mathbf{x})$ under $H_0$ and $H_1$ is distributed as follows:

$$H_0 : \frac{T(\mathbf{x})}{\sigma^2} \sim \chi_N^2$$

$$H_1 : \frac{T(\mathbf{x})}{\sigma_s^2+\sigma^2} \sim \chi_N^2$$

($S$ is $\chi_N^2$ distributed if $S = \sum_{i=1}^{N} x_i^2$ and $x_i \sim \mathcal{N}(0,1)$.)

Random Signals - Example

Therefore, we have

$$P_{FA} = \Pr\{T(\mathbf{x}) > \gamma'; H_0\} = \Pr\left\{\frac{T(\mathbf{x})}{\sigma^2} > \frac{\gamma'}{\sigma^2};\, H_0\right\} = Q_{\chi_N^2}\left(\frac{\gamma'}{\sigma^2}\right)$$

and

$$P_D = \Pr\{T(\mathbf{x}) > \gamma'; H_1\} = \Pr\left\{\frac{T(\mathbf{x})}{\sigma_s^2+\sigma^2} > \frac{\gamma'}{\sigma_s^2+\sigma^2};\, H_1\right\} = Q_{\chi_N^2}\left(\frac{\gamma'}{\sigma_s^2+\sigma^2}\right)$$

where $Q_{\chi_N^2}$ denotes the right-tail probability of the $\chi_N^2$ distribution.

Random Signals - Example

Energy detector performance for $N = 25$ (Fig. 5.1 Kay-II).

$$P_D = Q_{\chi_N^2}\left(\frac{\gamma'}{\sigma_s^2+\sigma^2}\right) = Q_{\chi_N^2}\left(\frac{\gamma'/\sigma^2}{\sigma_s^2/\sigma^2 + 1}\right).$$

$P_D$ thus increases with the SNR $\sigma_s^2/\sigma^2$. A numerical sketch follows below.
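A minimal sketch of these expressions using the $\chi^2$ tail function, with $N = 25$ as in Fig. 5.1; the $\sigma$ values and $P_{FA}$ are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

N, sigma2, sigma_s2 = 25, 1.0, 0.5      # N as in Fig. 5.1; sigma values illustrative
pfa = 0.01

# Q_chi2 is the right-tail probability: chi2.sf; its inverse is chi2.isf
gamma_p = sigma2 * chi2.isf(pfa, df=N)              # from PFA = Q_chi2(gamma'/sigma^2)
pd = chi2.sf(gamma_p / (sigma_s2 + sigma2), df=N)   # PD = Q_chi2(gamma'/(sigma_s^2+sigma^2))
print(f"gamma' = {gamma_p:.2f}, PD = {pd:.3f}")

# Empirical check under H1
rng = np.random.default_rng(6)
x = np.sqrt(sigma_s2 + sigma2) * rng.standard_normal((100_000, N))
print("empirical PD:", np.mean(np.sum(x**2, axis=1) > gamma_p))
```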

Random Signals - Generalization

$$H_0 : \mathbf{x} \sim \mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I})$$
$$H_1 : \mathbf{x} \sim \mathcal{N}(\mathbf{0}, \mathbf{C}_s + \sigma^2\mathbf{I})$$

Thus, we have

$$L(\mathbf{x}) = \frac{\frac{1}{(2\pi)^{N/2} \det^{1/2}(\mathbf{C}_s+\sigma^2\mathbf{I})} \exp\left[-\frac{1}{2}\,\mathbf{x}^T(\mathbf{C}_s+\sigma^2\mathbf{I})^{-1}\mathbf{x}\right]}{\frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[-\frac{1}{2\sigma^2}\,\mathbf{x}^T\mathbf{x}\right]}.$$

Calculating the log-likelihood ratio (LLR), we decide $H_1$ if

$$T(\mathbf{x}) = \sigma^2\,\mathbf{x}^T\left[\frac{1}{\sigma^2}\mathbf{I} - (\mathbf{C}_s+\sigma^2\mathbf{I})^{-1}\right]\mathbf{x} > 2\sigma^2\gamma'$$

Random Signals - Generalization

$$T(\mathbf{x}) = \sigma^2\,\mathbf{x}^T\left[\frac{1}{\sigma^2}\mathbf{I} - (\mathbf{C}_s+\sigma^2\mathbf{I})^{-1}\right]\mathbf{x} > 2\sigma^2\gamma'$$

Using the matrix inversion lemma

$$(\mathbf{A}+\mathbf{B}\mathbf{C}\mathbf{D})^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{B}\left(\mathbf{D}\mathbf{A}^{-1}\mathbf{B}+\mathbf{C}^{-1}\right)^{-1}\mathbf{D}\mathbf{A}^{-1}$$

we can write $(\mathbf{C}_s+\sigma^2\mathbf{I})^{-1}$ as

$$\frac{1}{\sigma^2}\mathbf{I} - \frac{1}{\sigma^4}\left(\mathbf{C}_s^{-1} + \frac{1}{\sigma^2}\mathbf{I}\right)^{-1}$$

such that

$$T(\mathbf{x}) = \mathbf{x}^T\left[\frac{1}{\sigma^2}\left(\mathbf{C}_s^{-1} + \frac{1}{\sigma^2}\mathbf{I}\right)^{-1}\right]\mathbf{x} > 2\sigma^2\gamma'$$

Random Signals - Generalization

$$T(\mathbf{x}) = \mathbf{x}^T\left[\frac{1}{\sigma^2}\left(\mathbf{C}_s^{-1} + \frac{1}{\sigma^2}\mathbf{I}\right)^{-1}\right]\mathbf{x} > 2\sigma^2\gamma'$$

Now set $\hat{\mathbf{s}}$ equal to

$$\hat{\mathbf{s}} = \left[\frac{1}{\sigma^2}\left(\mathbf{C}_s^{-1} + \frac{1}{\sigma^2}\mathbf{I}\right)^{-1}\right]\mathbf{x}$$

which can be rewritten as

$$\hat{\mathbf{s}} = \frac{1}{\sigma^2}\left[\frac{1}{\sigma^2}\left(\mathbf{C}_s+\sigma^2\mathbf{I}\right)\mathbf{C}_s^{-1}\right]^{-1}\mathbf{x} = \mathbf{C}_s\left(\mathbf{C}_s+\sigma^2\mathbf{I}\right)^{-1}\mathbf{x}$$

Hence, we decide $H_1$ if

$$T(\mathbf{x}) = \mathbf{x}^T\hat{\mathbf{s}} > \gamma''$$

Recognize this as the LMMSE filter! ($\hat{\boldsymbol{\theta}} = \mathbf{C}_{\theta x}\mathbf{C}_{xx}^{-1}\mathbf{x}$)

Random Signals – Estimator-Correlator

Hence, we decide $H_1$ if

$$T(\mathbf{x}) = \mathbf{x}^T\hat{\mathbf{s}} > \gamma''$$

with

$$\hat{\mathbf{s}} = \mathbf{C}_s\left(\mathbf{C}_s+\sigma^2\mathbf{I}\right)^{-1}\mathbf{x}$$

The detector first forms the LMMSE estimate $\hat{\mathbf{s}}$ of the signal and then correlates the data with this estimate.

Fig. 5.2 Kay-II.
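A minimal sketch of the estimator-correlator; the covariance, $\sigma^2$ and data are illustrative, and the threshold $\gamma''$ is left out since only the statistic is demonstrated.

```python
import numpy as np

rng = np.random.default_rng(7)
N, sigma2, rho = 2, 0.5, 0.8
# Example signal covariance with entries rho^|i-j| (illustrative choice)
Cs = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))

# One realisation under H1: s ~ N(0, Cs), w ~ N(0, sigma^2 I)
s = rng.multivariate_normal(np.zeros(N), Cs)
x = s + np.sqrt(sigma2) * rng.standard_normal(N)

s_hat = Cs @ np.linalg.solve(Cs + sigma2 * np.eye(N), x)  # LMMSE estimate of s
T = x @ s_hat                                             # estimator-correlator statistic
print("s_hat =", s_hat, " T(x) =", T)
```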

Random (Correlated) Signal – Example (1)

Let $0 < \rho < 1$ be the correlation coefficient between consecutive signal samples $s[n]$, and let

$$\mathbf{C}_s = \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}$$

• Eigenvalues: $(\lambda - 1 - \rho)(\lambda - 1 + \rho) = 0 \rightarrow \lambda_1 = 1+\rho$ and $\lambda_2 = 1-\rho$, with corresponding eigenvectors following from $(\mathbf{C}_s - \lambda_i\mathbf{I})\mathbf{v}_i = \mathbf{0}$: $\mathbf{v}_1 = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix}$.

Now use the fact that $\mathbf{C}_s = \mathbf{V}\boldsymbol{\Lambda}_s\mathbf{V}^T$ (the eigenvalue decomposition):

$$\hat{\mathbf{s}} = \frac{1}{\sigma^2}\left[\frac{1}{\sigma^2}\left(\mathbf{C}_s+\sigma^2\mathbf{I}\right)\mathbf{C}_s^{-1}\right]^{-1}\mathbf{x} = \left[\left(\mathbf{C}_s+\sigma^2\mathbf{I}\right)\mathbf{C}_s^{-1}\right]^{-1}\mathbf{x}$$
$$= \left[\left(\mathbf{V}\boldsymbol{\Lambda}_s\mathbf{V}^T+\sigma^2\mathbf{I}\right)\left(\mathbf{V}\boldsymbol{\Lambda}_s\mathbf{V}^T\right)^{-1}\right]^{-1}\mathbf{x}$$
$$= \mathbf{V}\left(\mathbf{I}+\sigma^2\boldsymbol{\Lambda}_s^{-1}\right)^{-1}\mathbf{V}^T\mathbf{x}$$

Random (Correlated) Signal – Example (2)

So the total test statistic becomes:

$$T(\mathbf{x}) = \mathbf{x}^T\mathbf{V}\left(\mathbf{I}+\sigma^2\boldsymbol{\Lambda}_s^{-1}\right)^{-1}\mathbf{V}^T\mathbf{x}$$

If we set $\mathbf{y} = \mathbf{V}^T\mathbf{x}$, then $\mathbf{y}$ is a decorrelated version of $\mathbf{x}$ (hence $E[\mathbf{y}\mathbf{y}^T] = \boldsymbol{\Lambda}_s + \sigma^2\mathbf{I}$ is diagonal under $H_1$) and we get

$$T(\mathbf{x}) = \mathbf{y}^T\left(\mathbf{I}+\sigma^2\boldsymbol{\Lambda}_s^{-1}\right)^{-1}\mathbf{y} = \sum_{n=0}^{N-1} y^2[n]\, \frac{\lambda_{s,n}}{\lambda_{s,n}+\sigma^2}$$

Hence, in the transformed space we have obtained a weighted energy detector; a numerical sketch is given below.
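The equivalence between the estimator-correlator form and the weighted energy detector in the eigen-domain can be checked numerically; the values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
sigma2, rho = 0.5, 0.8                     # illustrative values
Cs = np.array([[1.0, rho], [rho, 1.0]])
x = rng.standard_normal(2)                 # arbitrary data vector

# Estimator-correlator form: T(x) = x^T Cs (Cs + sigma^2 I)^{-1} x
T_ec = x @ (Cs @ np.linalg.solve(Cs + sigma2 * np.eye(2), x))

# Weighted energy detector in the eigen-domain
lam, V = np.linalg.eigh(Cs)                # Cs = V diag(lam) V^T
y = V.T @ x                                # decorrelated data
T_wed = np.sum(y**2 * lam / (lam + sigma2))

print(T_ec, T_wed)                         # identical up to rounding
```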

Random (Correlated) Signal – Example (3)

Figures: Fig. 5.2 and Fig. 5.3 Kay-II.

Random (Correlated) Signal – Example (4)

• Eigenvalues: $(\lambda - 1 - \rho)(\lambda - 1 + \rho) = 0 \rightarrow \lambda_1 = 1+\rho$ and $\lambda_2 = 1-\rho$, with corresponding eigenvectors following from $(\mathbf{C}_s - \lambda_i\mathbf{I})\mathbf{v}_i = \mathbf{0}$: $\mathbf{v}_1 = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix}$.

• $\mathbf{y} = \mathbf{V}^T\mathbf{x}$

• $T(\mathbf{x}) = \mathbf{y}^T\left(\mathbf{I}+\sigma^2\boldsymbol{\Lambda}_s^{-1}\right)^{-1}\mathbf{y} = \sum_{n=0}^{N-1} y^2[n]\, \frac{\lambda_{s,n}}{\lambda_{s,n}+\sigma^2}$

Fig. 5.4 Kay-II.

Summary (1)

Deterministic signals:

$$T(\mathbf{x}) = \mathbf{x}^T\mathbf{C}^{-1}\mathbf{s}$$

Notice that if $\mathbf{C}$ is positive definite, $\mathbf{C}^{-1}$ can be written as $\mathbf{C}^{-1} = \mathbf{D}^T\mathbf{D}$, leading to

$$T(\mathbf{x}) = \mathbf{x}^T\mathbf{D}^T\mathbf{D}\mathbf{s}$$

Fig. 4.7 Kay-II.

Summary (2)

For random signals, we decide $H_1$ if

$$T(\mathbf{x}) = \mathbf{x}^T\hat{\mathbf{s}} > \gamma''$$

with

$$\hat{\mathbf{s}} = \frac{1}{\sigma^2}\left[\frac{1}{\sigma^2}\left(\mathbf{C}_s+\sigma^2\mathbf{I}\right)\mathbf{C}_s^{-1}\right]^{-1}\mathbf{x} = \mathbf{C}_s\left(\mathbf{C}_s+\sigma^2\mathbf{I}\right)^{-1}\mathbf{x}$$

Fig. 5.2 Kay-II.

Random Signals – Exercise

Hence, we decide $H_1$ if

$$T(\mathbf{x}) = \mathbf{x}^T\hat{\mathbf{s}} > \gamma''$$

with

$$\hat{\mathbf{s}} = \frac{1}{\sigma^2}\left[\frac{1}{\sigma^2}\left(\mathbf{C}_s+\sigma^2\mathbf{I}\right)\mathbf{C}_s^{-1}\right]^{-1}\mathbf{x} = \mathbf{C}_s\left(\mathbf{C}_s+\sigma^2\mathbf{I}\right)^{-1}\mathbf{x}$$

Fig. 5.2 Kay-II.

Problem Ch. 5 Kay-II.