Analysis of the FXLMS algorithm with norm-constant time-varying primary path

Norihiro Ishibushi∗, Yoshinobu Kajikawa†, and Seiji Miyoshi†

∗ Graduate School of Science and Engineering, Kansai University, Japan.
† Faculty of Engineering Science, Kansai University, Japan.
E-mail: [email protected]

Abstract—We analyze the behaviors of active noise control with a time-varying primary path using a statistical-mechanical method. The principal assumption used in the analysis is that the impulse responses of the primary path and adaptive filter are sufficiently long. We analyze a novel model in which the reference signal is not necessarily white and the primary path is time-varying while its norm is kept constant in the mean sense. We show the existence of macroscopic steady states and the optimal step size.

I. INTRODUCTION

Active noise control (ANC), which has been practically realized owing to the progress of digital signal processing, is implemented using an adaptive filter [1], [2]. The least-mean-square (LMS) algorithm is the most commonly used algorithm for adaptive filters [3], [4], [5]. When the LMS algorithm is applied to ANC, the secondary path must be estimated beforehand, and the inputs used for adaptation must first pass through the estimated secondary path. This procedure is called the Filtered-X LMS (FXLMS) algorithm.

Various methods have been proposed for theoretically analyzing the LMS algorithm. The principal one is to use the independence assumption [6]. The FXLMS algorithm has also been analyzed on the basis of the independence assumption [7], [8]. Under this assumption, the input vectors of the tapped-delay line are assumed to be generated independently at each time step. However, the actual input vector components are merely shifted to the next position; each input vector is therefore strongly related to the previous one, and the vectors are not independent. For this reason, analytical results based on the independence assumption cannot precisely and generally explain experimental results [4], [7]. In addition, analyses assuming a small step size [9], [10] or a periodic reference signal [11] have been reported.

In our previous paper reported at APSIPA ASC 2014 [12], we analyzed the behaviors of the FXLMS algorithm when the reference signal is white and the primary path is naively time-varying by applying a statistical-mechanical method. In that naive model, the norm of the primary path and the mean square error (MSE) diverged and the optimal step size disappeared, indicating that the model was rather unrealistic.

In this paper, we propose an analytically solvable model in which the primary path is time-varying and the norm of its coefficient vector is kept constant in the mean sense. The model is analyzed by applying a statistical-mechanical method [13], [14], [15], [16]. In addition, the model is generalized to a nonwhite reference signal. The analysis gives meaningful results.

II. ANALYTICAL MODEL OF FXLMS ALGORITHM

Figure 1 shows a block diagram of the ANC system considered in this paper.

Fig. 1. Block diagram of the ANC system.

The primary path P is represented by an N-tap FIR filter. Its coefficient vector is p(n) = [p_1(n), p_2(n), ..., p_N(n)]^⊤. Each coefficient p_i(0) is independently generated from a distribution with a mean of zero and a variance of unity. Here, ⊤ denotes transposition and n denotes the time step. The primary path is time-varying, that is,

p(n+1) = a^{1/N} p(n) + √(1 − a^{2/N}) w(n),  (0 ≤ a ≤ 1),   (1)

where w(n) is an N-dimensional vector. Each coefficient w_i(n) is independently generated from a distribution with a mean of zero and a variance of unity at every time step. a is a parameter that controls the rate of time variation of the primary path. Here, a = 1 corresponds to a time-invariant primary path. Note that Eq. (1) means that the norm of the coefficient vector of the primary path is kept constant in the mean sense although the primary path itself is time-varying.
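As an illustration only (not part of the paper's analysis), the following Python sketch simulates the update (1) for arbitrarily chosen N and a and checks that ⟨||p(n)||²⟩ stays close to N, i.e., that the norm is constant in the mean sense.

```python
import numpy as np

# Illustrative check of Eq. (1): the squared norm of p(n) stays close to N on average.
# N, a and the number of steps are arbitrary illustrative choices.
rng = np.random.default_rng(0)
N, a, n_steps = 256, 0.9, 2000

p = rng.standard_normal(N)              # p_i(0): zero mean, unit variance
norms = []
for n in range(n_steps):
    w = rng.standard_normal(N)          # w_i(n): zero mean, unit variance
    p = a**(1 / N) * p + np.sqrt(1 - a**(2 / N)) * w
    norms.append(p @ p)

print(np.mean(norms) / N)               # close to 1: the norm is constant in the mean sense
```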

The adaptive filter H is also an N-tap FIR filter. Its coefficient vector is h(n) = [h_1(n), h_2(n), ..., h_N(n)]^⊤. The initial value h_i(0) of each coefficient is zero. The reference signal x(n) is drawn from a distribution with

⟨x(n)⟩ = 0,  ⟨x(n)x(n−k)⟩ = r_k/N,   (2)

where ⟨·⟩ denotes expectation. The reference signal is shifted through the tapped delay line in the FIR filter. Therefore, the tap input vector is x(n) = [x(n), x(n−1), ..., x(n−N+1)]^⊤. The output of the primary path P is d(n) = p^⊤(n)x(n). On the other hand, the output of the adaptive filter H is u(n) = h^⊤(n)x(n).
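As a side note, a reference signal satisfying (2) can be generated in several ways; the first-order moving-average construction below is our illustrative choice (it reproduces the nonwhite correlations r_0 = 1, r_{±1} = 0.5 used later in Sec. IV), not a construction specified in the paper.

```python
import numpy as np

# Sketch: one way (our choice, not from the paper) to generate a reference signal with
# <x(n)> = 0 and <x(n)x(n-k)> = r_k / N for the nonwhite example used later in Sec. IV
# (r_0 = 1, r_{+-1} = 0.5, r_k = 0 otherwise): a first-order moving average of white noise.
rng = np.random.default_rng(1)
N, n_samples = 200, 200_000

v = rng.standard_normal(n_samples + 1)        # white driving noise, unit variance
x = np.sqrt(0.5 / N) * (v[1:] + v[:-1])       # x(n) = c (v(n) + v(n-1)) with c^2 = 0.5/N

print(np.mean(x * x) * N)                     # estimate of r_0, approximately 1.0
print(np.mean(x[1:] * x[:-1]) * N)            # estimate of r_1, approximately 0.5
```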

The secondary path C is modeled by a K-tap FIR filter. Its coefficient vector is c = [c_1, c_2, ..., c_K]^⊤ and is time-invariant. The output y(n) of the secondary path is

y(n) = Σ_{k=1}^{K} c_k u(n−k+1).   (3)

An error signal e(n) is generated by adding an independent background noise ξ(n) to the difference between d(n) and y(n). That is,

e(n) = d(n) − y(n) + ξ(n).   (4)

Here, the mean and variance of ξ(n) are zero and σ²_ξ, respectively.

The LMS algorithm is used to update the adaptive filter. Here, the coefficient vector c of the secondary path is unknown in general. Therefore, the estimated secondary path Ĉ, which has been estimated in advance using some other method, is used to update the adaptive filter. This procedure is called the FXLMS algorithm. When the estimated secondary path Ĉ is a K-tap FIR filter and its coefficient vector is c̃ = [c̃_1, c̃_2, ..., c̃_K]^⊤, the update obtained by applying the FXLMS algorithm is

h(n+1) = h(n) + μ e(n) Σ_{k=1}^{K} c̃_k x(n−k+1),   (5)

where µ is the step-size parameter.
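For concreteness, the following Python sketch runs Eqs. (1)–(5) in a single loop. The parameter values and the assumption of a perfectly estimated secondary path (c̃ = c) are illustrative choices of ours, not conditions taken from the paper.

```python
import numpy as np

# Minimal FXLMS sketch following Eqs. (1)-(5).  N, K, a, mu and the noise level are
# illustrative choices, and the estimated secondary path is set equal to the true one
# (c_tilde = c); neither is a statement from the paper.
rng = np.random.default_rng(2)
N, K, a, mu, sigma_xi = 200, 2, 0.95, 0.1, 0.0
c = np.ones(K)                # secondary path C
c_tilde = c.copy()            # estimated secondary path

p = rng.standard_normal(N)            # primary path P
h = np.zeros(N)                       # adaptive filter H, h_i(0) = 0
x_buf = np.zeros(N + K - 1)           # x(n), x(n-1), ... (extra taps for the delays)
u_buf = np.zeros(K)                   # u(n), u(n-1), ..., u(n-K+1)
sq_err = []

for n in range(20 * N):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal() / np.sqrt(N)    # white reference, variance 1/N
    x_vec = x_buf[:N]                                # tap input vector x(n)

    d = p @ x_vec                                    # primary-path output
    u_buf = np.roll(u_buf, 1)
    u_buf[0] = h @ x_vec                             # adaptive-filter output u(n)
    y = c @ u_buf                                    # secondary-path output, Eq. (3)
    e = d - y + sigma_xi * rng.standard_normal()     # error signal, Eq. (4)

    # FXLMS update, Eq. (5): h <- h + mu e(n) sum_k c_tilde_k x(n-k+1)
    fx = sum(c_tilde[k] * x_buf[k:k + N] for k in range(K))
    h = h + mu * e * fx

    # Norm-constant time-varying primary path, Eq. (1)
    p = a**(1 / N) * p + np.sqrt(1 - a**(2 / N)) * rng.standard_normal(N)

    sq_err.append(e**2)

print(np.mean(sq_err[-N:]))    # empirical MSE near the end of the run
```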

III. THEORY

The mean square error (MSE) is

⟨e²(n)⟩ = ⟨(d(n) − y(n) + ξ(n))²⟩   (6)

        = ⟨d²(n)⟩ − 2 Σ_{k=1}^{K} c_k ⟨d(n)u(n−k+1)⟩ + Σ_{k=1}^{K} Σ_{k'=1}^{K} c_k c_{k'} ⟨u(n−k+1)u(n−k'+1)⟩ + σ²_ξ.   (7)

Equation (7) includes products of d and u and products of u and u, including cases where their time steps are different. To calculate these products, we introduce the N-dimensional vectors k_j(n) = [k_{j,1}(n), k_{j,2}(n), ..., k_{j,N}(n)]^⊤, whose elements are k_{j,i}(n) = h_{i+j}(n). That is, k_j(n) is the j-shifted vector of the coefficient vector h(n) of the adaptive filter. Note that k_0(n) = h(n).

In the following, the limit N → ∞ is considered. Note that this long-impulse-response assumption, or long-filter assumption, is reasonable considering actual acoustic systems. When the shift number j is O(1), we obtain

h^⊤(n)x(n) ≃ k_j^⊤(n) x(n−j).   (8)

Equation (8) is based on the fact that the shift of the tap input vector is canceled by the shift of the elements of the adaptive filter. Here, the effect of the edge of the adaptive filter can be ignored since both h(n) and k_j(n) are N-dimensional, i.e., infinitely long, vectors. Equation (8) implies that the gap j in the time direction can be replaced by the subscript of vector k.
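A quick numerical check of (8), with a random vector standing in for an adapted filter, illustrates that the edge effect decays as N grows; this check is ours and not part of the paper's analysis.

```python
import numpy as np

# Quick numerical check of Eq. (8): shifting the tap-input vector by j is cancelled
# by shifting the filter coefficients.  The random h below is only a stand-in for an
# adapted filter; the check is illustrative.
rng = np.random.default_rng(3)
j = 3
for N in (100, 1000, 10000):
    h = rng.standard_normal(N)                          # filter coefficients, O(1) each
    x_long = rng.standard_normal(N + j) / np.sqrt(N)    # reference samples, variance 1/N
    x_n = x_long[:N]                                    # tap vector x(n) = [x(n), ..., x(n-N+1)]
    x_nj = x_long[j:j + N]                              # tap vector x(n-j)
    k_j = np.concatenate([h[j:], np.zeros(j)])          # k_{j,i} = h_{i+j}, zero-padded edge
    lhs, rhs = h @ x_n, k_j @ x_nj
    print(N, abs(lhs - rhs) / abs(lhs))                 # relative edge effect shrinks with N
```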

In addition, we introduce macroscopic variables defined by

R_j = (1/N) Σ_{i=1}^{N} p_i(n) k_{j,i}(n),   (9)

Q_j = (1/N) Σ_{i=1}^{N} h_i(n) k_{j,i}(n).   (10)

Equations (9) and (10) indicate that R_j and Q_j are the cross-correlation between p(n) and h(n) and the autocorrelation of h(n), respectively. Then, we obtain

⟨d(n−j)u(n)⟩ = Σ_{i=−M}^{M} R_i r_{i−j},   (11)

⟨u(n−j)u(n)⟩ = Σ_{i=−M}^{M} Q_i r_{i−j},   (12)

⟨d(n−j)d(n)⟩ = r_j.   (13)

We have omitted the time steps of the macroscopic variables since they do not change by O(1) in O(1) time updates in the model treated in this paper, as described later. We can express the MSE (7) in terms of R_j and Q_j as

⟨e²(n)⟩ = Σ_{k=1}^{K} c_k Σ_{i=−M}^{M} ( Σ_{k'=1}^{K} c_{k'} Q_i r_{i−k+k'} − 2 R_i r_{i+k−1} ) + r_0 + σ²_ξ.   (14)

This formula shows that the MSE is a function of the macroscopic variables R_i and Q_i. Therefore, we derive differential equations that describe the dynamical behaviors of these variables in the following.
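Before proceeding, the following sketch shows how (9), (10), and (14) fit together numerically for a given pair p, h; the particular p, h, correlation function r_k, c, and M used here are stand-ins of ours, not quantities taken from the paper.

```python
import numpy as np

# Sketch: evaluate the macroscopic variables of Eqs. (9)-(10) for a given p and h and
# plug them into the MSE formula (14).  The p, h, correlation function r_k, c and M
# below are illustrative stand-ins.
rng = np.random.default_rng(4)
N, M, K, sigma2_xi = 2000, 20, 2, 0.0
c = np.ones(K)
p = rng.standard_normal(N)
h = 0.5 * p + 0.1 * rng.standard_normal(N)     # a partially adapted filter (stand-in)

def shifted(v, j):
    """j-shifted vector k_j with k_{j,i} = v_{i+j}, zero-padded at the edges."""
    out = np.zeros_like(v)
    if j >= 0:
        out[:len(v) - j] = v[j:]
    else:
        out[-j:] = v[:len(v) + j]
    return out

R = {j: p @ shifted(h, j) / N for j in range(-M, M + 1)}   # Eq. (9)
Q = {j: h @ shifted(h, j) / N for j in range(M + 1)}       # Eq. (10)
r = lambda k: 1.0 if k == 0 else 0.0                       # white reference: r_k = delta_{k,0}

# MSE via Eq. (14); Q_{|i|} is used for negative i since Q_j is even in j.
mse = r(0) + sigma2_xi
for k in range(1, K + 1):
    for i in range(-M, M + 1):
        inner = sum(c[kp - 1] * Q[abs(i)] * r(i - k + kp) for kp in range(1, K + 1))
        mse += c[k - 1] * (inner - 2.0 * R[i] * r(i + k - 1))
print(mse)
```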

We first derive a differential equation for R_j. When the coefficient vector h(n) of the adaptive filter is updated, the j-shifted vector k_j(n) is also changed. This change can be described as

k_j(n+1) = k_j(n) + μ e(n) Σ_{k=1}^{K} c̃_k x(n−k+1−j).   (15)

Note that the time step of the tap input vector x is shifted by j compared with that in (5). Multiplying both sides of (1) by (15), we obtain

N R_j(n+1) = a^{1/N} N R_j(n) + a^{1/N} μ e(n) Σ_{k=1}^{K} c̃_k d(n−k+1−j)
             + √(1 − a^{2/N}) w^⊤(n) ( k_j(n) + μ e(n) Σ_{k=1}^{K} c̃_k x(n−k+1−j) ).   (16)

In (16), the l.h.s. and the first term on the r.h.s. are O(N) and the other terms are O(1). This means that the coefficient vector h(n) of the adaptive filter should be updated O(N) times to change R_j by O(1). Therefore, we introduce the continuous time t, which is the time step n normalized by the tap length N, and use it to represent the adaptive process. If the adaptive filter is updated Ndt times in an infinitely small time dt, we can obtain Ndt equations as follows:

N R_j(n+1) = a^{1/N} N R_j(n) + a^{1/N} μ e(n) Σ_{k=1}^{K} c̃_k d(n−k+1−j),   (17)

a^{−1/N} N R_j(n+2) = N R_j(n+1) + μ e(n+1) Σ_{k=1}^{K} c̃_k d(n−k+2−j),   (18)

⋮

a^{−(Ndt−1)/N} N R_j(n+Ndt) = a^{−(Ndt−2)/N} N R_j(n+Ndt−1)
    + a^{−(Ndt−2)/N} μ e(n+Ndt−1) Σ_{k=1}^{K} c̃_k d(n−k+Ndt−j).   (19)

Here, terms including w are omitted in (17)–(19) since their expectations are zero. Note that each equation has been multiplied by a power of a. Summing all these equations, we obtain

N(R_j + dR_j) = a^{dt} N R_j + Σ_{i=1}^{Ndt} a^{i/N} μ ⟨ e(m) Σ_{k=1}^{K} c̃_k d(m−k+1−j) ⟩.   (20)

Here, we can represent the effect of the probabilistic variables by their means since the updates are executed Ndt times, that is, many times, to change R_j by dR_j. m is an auxiliary time step that is introduced to represent the difference in time steps. We see that ⟨e(m) Σ_{k=1}^{K} c̃_k d(m−k+1−j)⟩ in (20) includes many products of d and u from (3) and (4). Since we can represent this expectation using the macroscopic variables, we obtain differential equations that describe the dynamical behaviors of R_j in a deterministic form as

dR_j/dt = (ln a) R_j + μ Σ_{k'=1}^{K} c̃_{k'} ( r_{−k'+1−j} − Σ_{k=1}^{K} Σ_{i=−M}^{M} c_k r_{i+k−k'−j} R_i ),   (21)

where δ denotes the Kronecker delta (it appears in (22) and the definitions below). Here, we also used l'Hôpital's rule to obtain the (ln a)R_j term.
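For reference, a brief sketch of the two steps behind (21) (our unpacking, using the evenness r_{−k} = r_k implied by (2)): expanding the expectation in (20) with (4), (11), and (13) gives

⟨ e(m) Σ_{k'=1}^{K} c̃_{k'} d(m−k'+1−j) ⟩ = Σ_{k'=1}^{K} c̃_{k'} ( r_{−k'+1−j} − Σ_{k=1}^{K} Σ_{i=−M}^{M} c_k r_{i+k−k'−j} R_i ),

and for the first term of (20), l'Hôpital's rule gives

lim_{dt→0} (a^{dt} − 1)/dt = lim_{dt→0} a^{dt} ln a = ln a,

which is the coefficient of R_j in (21).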

Next, multiplying (5) by (15) and proceeding in the same manner as for the derivation of the above differential equation for R_j, we can derive a differential equation for Q_j, which is given by

dQ_j/dt = μ Σ_{i=−M}^{M} Σ_{k'=1}^{K} c̃_{k'} [ (r_{i−γ} + r_{i−ε}) R_i − Σ_{k=1}^{K} c_k ( r_{i−(k−k'+j)} + r_{i−(k−k'−j)} ) Q_{|i|}
    − μ { sgn(γ) Σ_{k''=1}^{|γ|} Σ_{k''''=1}^{K} c̃_{k''''} r_{ζ−k''''} ( r_α + σ²_ξ δ_{α,0}
        − Σ_{k=1}^{K} c_k ( ( r_{i−(1−k+α)} + r_{i−(1−k−α)} ) R_i − Σ_{k'''=1}^{K} c_{k'''} r_{i−(α−k+k''')} Q_{|i|} ) ) }
    − μ { sgn(ε) Σ_{k''=1}^{|ε|} Σ_{k''''=1}^{K} c̃_{k''''} r_{η−k''''} ( r_β + σ²_ξ δ_{β,0}
        − Σ_{k=1}^{K} c_k ( ( r_{i−(1−k+β)} + r_{i−(1−k−β)} ) R_i − Σ_{k'''=1}^{K} c_{k'''} r_{i−(β−k+k''')} Q_{|i|} ) ) } ]
    + μ² Σ_{i=−M}^{M} { r_0 + Σ_{k=1}^{K} c_k [ Σ_{k'=1}^{K} c_{k'} r_{i−k'+k} Q_{|i|} − 2 r_{i−k+1} R_i ] + σ²_ξ } Σ_{k''=1}^{K} Σ_{k'''=1}^{K} c̃_{k''} c̃_{k'''} r_{k''−k'''+j},   (22)

where

α ≡ Θ(γ)γ − k'',  β ≡ Θ(ε)ε − k'',
γ ≡ j + 1 − k',  ε ≡ −j + 1 − k',
ζ ≡ −Θ(−γ)γ − k'' + 1,  η ≡ −Θ(−ε)ε − k'' + 1.

Here, Θ(·) and sgn(·) denote the step function and the sign function, respectively.

Considering the 2M+1 vectors k_j, j = −M, ..., M, and assuming R_j = Q_j = 0 for |j| > M, (21) and (22) form a system of first-order ordinary differential equations with 3M+2 variables, that is,

dz/dt = G z + b,   (23)

where z = [Q_0, ..., Q_M, R_{−M}, ..., R_0, ..., R_M]^⊤. The matrix G and the vector b are determined by (21) and (22). Using the fact that z at t = 0 is a zero vector as the initial condition, we can analytically solve (23) to obtain

z(t) = ( e^{Gt} − I ) G^{−1} b.   (24)
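As a numerical illustration of (23) and (24), the sketch below evaluates the closed-form solution and the steady state for a small stable stand-in system; G and b here are random placeholders, not the matrices determined by (21) and (22).

```python
import numpy as np
from scipy.linalg import expm

# Numerical illustration of (23)-(24): solve dz/dt = G z + b with z(0) = 0.
# G and b are small random stand-ins (G made stable); they are NOT the matrices
# determined by (21) and (22) -- only the form of the solution is illustrated.
rng = np.random.default_rng(5)
n = 5
G = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable stand-in system matrix
b = rng.standard_normal(n)

def z_of_t(t):
    """Closed-form solution z(t) = (e^{Gt} - I) G^{-1} b for z(0) = 0, Eq. (24)."""
    return (expm(G * t) - np.eye(n)) @ np.linalg.solve(G, b)

z_steady = -np.linalg.solve(G, b)              # setting dz/dt = 0 gives the steady state
print(np.allclose(z_of_t(200.0), z_steady))    # True: z(t) converges to the steady state
```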

IV. RESULTS AND DISCUSSION

We confirm the validity of the theory obtained in the previous section by comparison with simulation results. Figure 2 shows the learning curves obtained theoretically and by simulation. The conditions are μ = 0.1, r_k = δ_{k,0} (that is, the reference signal is white), σ²_ξ = 0, K = 2, and c_1 = c_2 = c̃_1 = c̃_2 = 1. In the theoretical calculation, the results are obtained by substituting R_j and Q_j, obtained by solving (23), into (14) with M = 100. In the computer simulations, N = 200 and ensemble means over 10^4 trials are plotted. Figure 2 shows that the theoretical results agree with the simulation results. The primary path does not have the time-varying property when a = 1. When a is relatively small, the MSE increases and approaches a steady value after reaching its minimum. These phenomena are different from those reported in [12].

Fig. 2. Learning curves (MSE versus t = n/N) obtained theoretically and by simulation for a = 0.5, 0.7, 0.9, 0.95, and 1.0.

Figure 3 shows the relationship between the step size μ and the steady-state MSE. The steady-state MSE can be calculated by letting the l.h.s. of (23) be a zero vector. Our previous naive time-varying model [12] was also analytically solvable but rather unrealistic since the MSE diverged and there were no steady states. In contrast, the time-varying model analyzed in this paper has steady states. Therefore, this novel model based on Eq. (1) is interesting and its analysis gives meaningful results.
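Concretely, setting the l.h.s. of (23) to zero and assuming G is invertible gives

z_∞ = −G^{−1} b,

and substituting the corresponding R_j and Q_j into (14) yields the theoretical steady-state MSE plotted in Fig. 3.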

The conditions are a = 0.99, 0.95, 0.9, 0.8, and 0.6, and σ²_ξ = 0. The correlation functions of the reference signal are r_k = δ_{k,0} (white), and r_0 = 1, r_{±1} = 0.5, r_{±k} = 0 for k ≥ 2 (nonwhite). In the theoretical calculation, M = 50. In the computer simulations, N = 10^3 and the ensemble means from t = 90 to t = 100 over 10^3 trials are plotted. The theoretical results agree with the simulation results reasonably well. As the step size μ increases, the noise misadjustment increases gradually and the MSE increases. The upper limit of μ for the convergence of the MSE is μ = 0.5 for the white reference signal and μ = 0.3317 for this nonwhite reference signal. A closer look at Fig. 3 shows that, for small step sizes, the MSE increases as μ decreases. This is due to the lag misadjustment for small μ. We see that an optimal step size μ exists owing to the trade-off between the noise misadjustment and the lag misadjustment. As a decreases, the optimal value of μ decreases. When a = 0.99, 0.95, 0.9, 0.8, and 0.6, the optimum step sizes are μ_opt = 0.427, 0.383, 0.356, 0.319, and 0.256 for the white reference signal and μ_opt = 0.242, 0.218, 0.203, 0.184, and 0.150 for this nonwhite reference signal, respectively. Note that the optimum step sizes do not disappear. These phenomena are different from those reported in [12], in which there was no optimal step size since the MSE diverged.

Fig. 3. Relationship between step size μ and steady-state MSE obtained theoretically and by simulation (white and nonwhite reference signals; theory for a = 0.6, 0.8, 0.9, 0.95, and 0.99).

V. CONCLUSION

We have analyzed the behaviors of the FXLMS algorithm using a statistical-mechanics approach. In particular, we have analyzed the case where the reference signal is not necessarily white and the primary path is time-varying while its norm is kept constant in the mean sense. The obtained theory quantitatively agrees with the results of computer simulations. We have shown the existence of macroscopic steady states and that the optimal step size does not disappear in this model. These results are in contrast to those of our previous naive model reported at APSIPA ASC 2014.

ACKNOWLEDGMENTS

This work was partially supported by JSPS KAKENHI Grant Numbers 24360152 and 15K00256.

REFERENCES

[1] S. M. Kuo and D. R. Morgan, "Active noise control: a tutorial review," Proc. IEEE, vol. 87, no. 6, pp. 943–973, June 1999.
[2] Y. Kajikawa, W. S. Gan, and S. M. Kuo, "Recent advances on active noise control: open issues and innovative applications," APSIPA Trans. Signal and Information Processing, vol. 1, e3, 2012.
[3] B. Widrow and M. E. Hoff, Jr., "Adaptive switching circuits," IRE WESCON Conv. Rec., Pt. 4, pp. 96–104, 1960.
[4] S. Haykin, Adaptive Filter Theory, Fourth ed., Prentice Hall, New Jersey, 2002.
[5] A. H. Sayed, Fundamentals of Adaptive Filtering, Wiley, Hoboken, NJ, 2003.
[6] B. Widrow, J. M. McCool, M. G. Larimore, and C. R. Johnson, Jr., "Stationary and nonstationary learning characteristics of the LMS adaptive filter," Proc. IEEE, vol. 64, no. 8, pp. 1151–1162, Aug. 1976.
[7] E. Bjarnason, "Analysis of the filtered-X LMS algorithm," IEEE Trans. Speech Audio Process., vol. 3, no. 6, pp. 504–514, Nov. 1995.
[8] I. T. Ardekani and W. Abdulla, "Theoretical convergence analysis of FxLMS algorithm," Signal Process., vol. 90, pp. 3046–3055, 2010.
[9] H. J. Butterweck, "The independence assumption: a dispensable tool in adaptive filter theory," Signal Process., vol. 57, pp. 305–310, 1997.
[10] O. J. Tobias, J. C. M. Bermudez, and N. J. Bershad, "Mean weight behavior of the Filtered-X LMS algorithm," IEEE Trans. Signal Process., vol. 48, no. 4, pp. 1061–1075, Apr. 2000.
[11] Y. Hinamoto and H. Sakai, "Analysis of the Filtered-X LMS algorithm and a related new algorithm for active control of multitonal noise," IEEE Trans. Speech Audio Process., vol. 14, no. 1, pp. 123–130, Jan. 2006.
[12] N. Egawa, Y. Kajikawa, and S. Miyoshi, "Statistical-mechanical analysis of the FXLMS algorithm with time-varying primary path," Proc. APSIPA ASC 2014, 1088 (5 pages), Dec. 2014.
[13] H. Nishimori, Statistical Physics of Spin Glasses and Information Processing: An Introduction, Oxford University Press, New York, 2001.
[14] A. Engel and C. V. Broeck, Statistical Mechanics of Learning, Cambridge University Press, Cambridge, 2001.
[15] S. Miyoshi and Y. Kajikawa, "Statistical-mechanics approach to the Filtered-X LMS algorithm," Electron. Lett., vol. 47, no. 17, pp. 997–999, Aug. 2011.
[16] S. Miyoshi and Y. Kajikawa, "Statistical-mechanical analysis of the FXLMS algorithm with actual primary path," Proc. ICASSP 2015, pp. 3502–3506, Apr. 2015.
