Array Lattice Filters


The interpretations we provided for the coefficients $\{\kappa_M(i), \kappa_M^f(i), \kappa_M^b(i)\}$ in Sec. 42.1, in terms of solutions to first-order least-squares problems, can be used to motivate yet another lattice implementation in array form. We discussed array methods and their advantages in some detail in Chapter 33. We show here that such array methods can also be developed for order-recursive problems.

Thus, recall that in Sec. 42.1 we introduced the angle-normalized estimation errors and the corresponding angle-normalized error vectors $\{\bar{f}_{M,i},\ \bar{b}_{M,i},\ \bar{b}_{M,i-1},\ \bar{r}_{M,i}\}$.

We then argued that the reflection coefficients $\{\kappa_M(i), \kappa_M^f(i), \kappa_M^b(i)\}$ can be interpreted as the solutions to three simple (regularized) projection problems, namely:

$\kappa_M(i)$ projects $\bar{r}_{M,i}$ onto $\bar{b}_{M,i}$

$\kappa_M^f(i)$ projects $\bar{f}_{M,i}$ onto $\bar{b}_{M,i-1}$

$\kappa_M^b(i)$ projects $\bar{b}_{M,i-1}$ onto $\bar{f}_{M,i}$

That is, each of these reflection coefficients solves the problem of projecting one angle-normalized error vector onto another. More specifically, each of them solves a regularized least-squares problem of the form

$$\min_{\kappa}\ \Big[\,\lambda^{i+1}\,\eta^{\star}\,|\kappa|^2 \;+\; \big\| x - y\,\kappa \big\|^2\,\Big] \qquad (43.1)$$

with $(x, y)$ given by $(\bar{r}_{M,i}, \bar{b}_{M,i})$, $(\bar{f}_{M,i}, \bar{b}_{M,i-1})$, or $(\bar{b}_{M,i-1}, \bar{f}_{M,i})$, respectively, and where the regularization parameter $\eta^{\star}$ is either $\hat{\eta} = \eta\lambda^{M+2}$ or $\check{\eta} = \eta\lambda^{2}$.

The above interpretations were used in Sec. 42.1 to show that the reflection coefficients $\{\kappa_M(i), \kappa_M^f(i), \kappa_M^b(i)\}$ can be time-updated by resorting to the RLS algorithm in each case.



Now, in Chapter 33 we argued that least-squares solutions can also be updated in array form, e.g., by using the QR algorithm of Sec. 35.2. The QR method can therefore be used here to develop array methods for updating the reflection coefficients themselves.
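Before proceeding, it helps to have a concrete picture of the elementary operation that all of the array recursions below rely on: a $2\times2$ rotation that annihilates one entry of a row while keeping the leading entry positive. The following minimal NumPy sketch is our own illustration (not code from the book), and it assumes real-valued data; the function name `givens_2x2` is ours.

```python
import numpy as np

def givens_2x2(alpha, beta):
    """Return a 2x2 orthogonal matrix T such that [alpha, beta] @ T = [rho, 0]
    with rho >= 0.  Real data is assumed here; for complex data a unitary
    rotation with conjugated sine terms would be used instead."""
    rho = np.hypot(alpha, beta)          # sqrt(alpha**2 + beta**2)
    if rho == 0.0:
        return np.eye(2)
    c, s = alpha / rho, beta / rho
    return np.array([[c, -s],
                     [s,  c]])

# quick check: the rotation annihilates the second entry and leaves a
# nonnegative leading entry, as required of the rotation matrices below
row = np.array([3.0, 4.0])
print(row @ givens_2x2(*row))            # -> [5., ~0.] (zero up to rounding)
```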

43.1 ORDER-UPDATE OF OUTPUT ESTIMATION ERRORS

We start with the reflection coefficient $\kappa_M(i)$. Comparing the cost function for $\kappa_M(i)$ in (43.1) with the one that appears in the statement of the QR method in Alg. 35.2, we see that we can make the following identifications: the reference sequence $d(j)$ of Alg. 35.2 is identified with $\bar{r}_M(j)$, the regressor $u_j$ with $\bar{b}_M(j)$, the unknown $w$ with $\kappa$, and the regularization parameter with the corresponding one from (43.1).

If we now write down the QR equations of Alg. 35.2 for these new variables, we arrive at the following statement. Define the normalized reflection coefficient $\bar{q}_M(i)$. Then start with $\zeta_M^{b\,1/2}(-1)$ equal to the square root of the regularization parameter and $\bar{q}_M(-1) = 0$, and repeat for $i \geq 0$. At each iteration, find a $2\times2$ unitary matrix $\Theta_{M,i}$ that generates the zero entry in the post-array shown below, along with a leading positive entry in the first row and a positive entry $s$. The entries in the post-array would then correspond to the time-updated quantities $\{\zeta_M^{b\,1/2}(i), \bar{q}_M(i)\}$ together with two scalars $\{s, z\}$,

where, as was the case with Alg. 35.2, the scalar quantities $\{s, z\}$ can be determined from two identities: the first follows by equating the inner products of the second and third lines of the pre- and post-arrays, while the second follows from equating the norms of their last lines. It is easy to see that the first identity leads to

$$s\,z^* = \bar{r}_M(i) - \bar{b}_M(i)\,\kappa_M(i)$$


whereas the second identity leads to $s = \gamma_{M+1}^{1/2}(i)/\gamma_M^{1/2}(i)$ and, correspondingly, $z = \bar{r}_{M+1}(i)$. The array algorithm then becomes (43.4).

If we further multiply the last rows on both sides of (43.4) by $\gamma_M^{1/2}(i)$, we arrive at the array equation (43.5). This step tells us how to order-update the angle-normalized variable $\bar{r}_M(i)$. If desired, the reflection coefficient $\kappa_M(i)$ can be determined from the equality (43.6).
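To make the mechanics of this order-update concrete, here is a minimal sketch of one rotation step for the joint-process section, reusing the `givens_2x2` helper from the earlier sketch. The arrangement of the pre-array, the variable names (`joint_section_step`, `qbar`, and so on), and the convention by which the reflection coefficient is recovered at the end are our own illustrative assumptions for real-valued scalar data; they mirror the roles played by (43.4)-(43.6) but are not copied from the book's arrays.

```python
import numpy as np

def joint_section_step(zeta_sqrt, qbar, b_bar, r_bar, gamma_sqrt, lam=0.99):
    """One rotation of the joint-process (filtering) section, real data assumed.

    zeta_sqrt  : square root of the weighted energy of the angle-normalized
                 backward errors up to time i-1
    qbar       : normalized reflection coefficient at time i-1
    b_bar      : angle-normalized backward error  b_bar_M(i)
    r_bar      : angle-normalized output error    r_bar_M(i)
    gamma_sqrt : conversion factor gamma_M^{1/2}(i)
    lam        : forgetting factor lambda
    Returns the time-updated (zeta_sqrt, qbar) and the order-updated
    (r_bar_{M+1}(i), gamma_{M+1}^{1/2}(i)).
    """
    sl = np.sqrt(lam)
    pre = np.array([[sl * zeta_sqrt, b_bar],
                    [sl * qbar,      r_bar],
                    [0.0,            gamma_sqrt]])
    # rotate so that the (1,2) entry of the post-array is annihilated while
    # the leading entry stays nonnegative
    post = pre @ givens_2x2(pre[0, 0], pre[0, 1])
    zeta_sqrt_new, qbar_new = post[0, 0], post[1, 0]
    r_bar_next, gamma_sqrt_next = post[1, 1], post[2, 1]
    return zeta_sqrt_new, qbar_new, r_bar_next, gamma_sqrt_next
```

In this arrangement the reflection coefficient itself never has to be formed inside the recursion; if it is needed, it can be read off afterwards from the normalized quantities (here as `qbar / zeta_sqrt`), which is the role played by relations such as (43.6).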

We now derive array methods for order-updating the angle-normalized variables $\{\bar{f}_M(i), \bar{b}_M(i)\}$ by applying similar arguments to the other cost functions in (43.1).

43.2 ORDER-UPDATE OF BACKWARD ESTIMATION ERRORS

Consider the reflection coefficient $\kappa_M^b(i)$. Comparing its cost function from (43.1) with the one that appears in the statement of the QR method in Alg. 35.2, we see that we can make the analogous identifications, with $\bar{b}_{M,i-1}$ now playing the role of the reference vector and $\bar{f}_{M,i}$ that of the data vector.

If we now write down the QR equations of Alg. 35.2 for these new variables, we arrive at the following statement. Define the normalized reflection coefficient as in (43.7). Then start with the corresponding energy variable initialized at the square root of its regularization parameter and the normalized reflection coefficient initialized at zero, and repeat for $i \geq 0$. At each iteration, find a $2\times2$ unitary matrix $\Theta_{M,i}^{b}$ that generates the zero entry in the post-array below, along with a positive leading entry in the first row and a positive $s$. The entries in the post-array then correspond to the time-updated quantities and to scalars $\{s, z\}$ that are determined exactly as before; the resulting arrays tell us how to order-update the angle-normalized backward error $\bar{b}_M(i)$.


43.3 ORDER-UPDATE OF FORWARD ESTIMATION ERRORS

Consider finally the reflection coefficient $\kappa_M^f(i)$, for which $\bar{f}_{M,i}$ plays the role of the reference vector and $\bar{b}_{M,i-1}$ that of the data vector. If we now write down the QR equations of Alg. 35.2 for these new variables, we arrive at the following statement. Define the normalized reflection coefficient as in (43.12).

Then start with the corresponding energy variable initialized at the square root of its regularization parameter and the normalized reflection coefficient initialized at zero, and repeat for $i \geq 0$. At each iteration, find a $2\times2$ unitary matrix $\Theta_{M,i}^{f}$ that generates the zero entry in the post-array below, along with a leading positive entry in the first row and a positive $s$. The entries in the post-array would then correspond to (43.13),

where, as was the case with Alg. 35.2, the scalar quantities $\{s, z\}$ can be determined from two identities: the first follows by equating the inner products of the second and third lines of the pre- and post-arrays, while the second follows from equating the norms of their last lines. It is easy to see that the first identity leads to

$$s\,z^* = \bar{f}_M(i) - \bar{b}_M(i-1)\,\kappa_M^f(i)$$

whereas the second identity gives $s = \gamma_{M+1}^{1/2}(i)/\gamma_M^{1/2}(i)$ and, consequently, $z = \bar{f}_{M+1}(i)$. In this way, the array algorithm (43.13) becomes (43.14).

If we further multiply the last rows on both sides of (43.14) by $\gamma_M^{1/2}(i)$, we arrive at an array equation that tells us how to order-update the angle-normalized variable $\bar{f}_M(i)$. If desired, the reflection coefficient $\kappa_M^f(i)$ can be determined from the equality (43.16).

43.4 SIGNIFICANCE OF DATA STRUCTURE

As we already know, when the successive regressors have shift structure, the quantities associated with the delayed data that are needed by the above order-recursions coincide with one-step-delayed versions of quantities that have already been computed, i.e., substitutions of the form

$$b_M(i) \rightarrow b_M(i-1), \qquad \bar{b}_M(i) \rightarrow \bar{b}_M(i-1), \qquad \gamma_M(i) \rightarrow \gamma_M(i-1), \qquad \zeta_M^{b}(i) \rightarrow \zeta_M^{b}(i-1)$$

apply,

and we are led to the array-based lattice algorithm, also known as the QRD-based lattice filter (see Fig. 43.1); the qualification QRD-based is used to indicate that the array recursions correspond to QR decompositions of the corresponding pre-arrays (recall the third remark following the statement of Alg. 35.1).

For comparison purposes, Table 43.1 lists the estimated computational cost per iteration for the various lattice filters derived in this chapter, assuming real data. The costs are given in terms of the number of multiplications, additions, divisions, and square roots that are needed for each iteration. It is seen that lattice filters generally require $O(20M)$ operations per iteration.

FIGURE 43.1 The QRD-based lattice filter.


Algorithm 43.1 (Array lattice filter). Consider again the same setting of Alg. 42.2. For each $i \geq 0$, the $M$-th order a posteriori estimation error, $r_M(i) = d(i) - u_{M,i}\,w_{M,i}$, that results from the solution of the regularized least-squares problem

$$\min_{w_M}\ \Big[\,\lambda^{i+1}\eta\,\|w_M\|^2 + \sum_{j=0}^{i}\lambda^{i-j}\,|d(j) - u_{M,j}\,w_M|^2\,\Big]$$

can be computed as follows:

1. Initialization. From $m = 0$ to $m = M - 1$, set the energy variables $\zeta_m^{b\,1/2}(-1)$ and $\zeta_m^{f\,1/2}(-1)$ equal to the square roots of their regularization parameters, and set

$$\bar{q}_m(-1) = 0, \qquad \bar{q}_m^{\,f}(-1) = 0, \qquad \bar{q}_m^{\,b}(-1) = 0, \qquad \bar{b}_m(-1) = 0$$

2. For $i \geq 0$, repeat:

Set $\gamma_0^{1/2}(i) = 1$, $\bar{b}_0(i) = \bar{f}_0(i) = u(i)$, and $\bar{r}_0(i) = d(i)$.

For $m = 0$ to $m = M - 1$, apply $2\times2$ unitary rotations $\Theta_{m,i}^{f}$, $\Theta_{m,i}^{b}$, and $\Theta_{m,i}$, with positive $(2,2)$ entries, in order to annihilate the $(1,2)$ entries of the post-arrays below, and set $r_{m+1}(i) = \bar{r}_{m+1}(i)\,\gamma_{m+1}^{1/2}(i)$.
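As a quick numerical illustration of what the rotations in Alg. 43.1 accomplish, the following sketch (our own; it reuses the `givens_2x2` and `joint_section_step` helpers from the earlier sketches and assumes real data and a single stage with synthetic error sequences) iterates the rotation-based recursion over time and checks that the reflection coefficient it implicitly carries coincides with the direct solution of the corresponding exponentially weighted, regularized least-squares projection:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, eta = 0.95, 1e-2                    # forgetting factor and regularization
N = 200
b = rng.standard_normal(N)               # stand-ins for the angle-normalized
r = 0.7 * b + 0.1 * rng.standard_normal(N)   # error sequences of one stage

# rotation-based (array) recursion; initialization analogous to step 1 of Alg. 43.1
zeta_sqrt, qbar = np.sqrt(eta), 0.0
for i in range(N):
    zeta_sqrt, qbar, _, _ = joint_section_step(zeta_sqrt, qbar, b[i], r[i],
                                               gamma_sqrt=1.0, lam=lam)
kappa_array = qbar / zeta_sqrt

# direct regularized least-squares projection of r onto b with the same weighting
w = lam ** np.arange(N - 1, -1, -1)      # weights lambda^(N-1-j)
kappa_direct = np.sum(w * b * r) / (lam ** N * eta + np.sum(w * b ** 2))

print(kappa_array, kappa_direct)         # the two agree up to rounding errors
```

The agreement is exact up to rounding, which is just a restatement of the fact that the rotations implement, in square-root form, the RLS solutions of the projection problems (43.1).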

TABLE 43.1 Estimated computational cost per iteration for various lattice filters.