
    VLSI Implementation of LMS Adaptive Filter

    By

    Ravikumar.C

    (Reg. No. 15803044)

    A PROJECT REPORT

    Submitted to the department of

    ELECTRONICS AND COMMUNICATION

in the FACULTY OF ENGINEERING & TECHNOLOGY

in partial fulfillment of the requirement for the award of the degree

of MASTER OF TECHNOLOGY

    IN

    VLSI DESIGN

S.R.M ENGINEERING COLLEGE

S.R.M INSTITUTE OF SCIENCE AND TECHNOLOGY

    (DEEMED UNIVERSITY)

    MAY 2005


    BONAFIDE CERTIFICATE

Certified that this project report titled VLSI Implementation of LMS Adaptive Filter is the bonafide work of Mr. Ravikumar, who carried out the research under my supervision. Certified further that, to the best of my knowledge, the work reported herein does not form part of any other project report or dissertation on the basis of which a degree or award was conferred on an earlier occasion on this or any other candidate.

    Signature of the Guide Signature of HOD

    [Mr.T.VIGNESWARAN] [Dr.S. JAYASHRI]

    ABSTRACT

In speech processing, noise cancellation and echo cancellation are very important. In this project I am going to design an adaptive filter, which is applicable in the above-mentioned applications. Adaptive signal processing is a relatively new area of DSP in which applications are increasing rapidly. Adaptive signal processing evolved from the techniques developed to enable the adaptive control of time-varying systems.

Since there is no dedicated IC for an adaptive filter, I am going to design the filter using VHDL code. For designing the adaptive filter I am going to use the LMS algorithm. The LMS algorithm is the most popular for real-time adaptive system implementations. It is employed in many areas such as modelling, control, beamforming and equalization.

    ACKNOWLEDGEMENT

    It is my great pleasure to thank our chairman, Thiru T.R.Pachamuthu, and our

    director Dr.T.P.Ganesan.


    I take this opportunity to thank our beloved principal Prof. R.Venkatramanai.

    I am indebted to our Head of the Department Prof. S. Jayashree who had been

    a source of inspiration for all my activities.

    I wish to convey my deepest sense of gratitude to my guide

    Mr.T.Vigneswaran, M.E., for his valuable guidance.

    I also thank our department faculty members for their great support in all my

    activities.

    Signature of the Student

    (C.RAVIKUMAR)

    TABLE OF CONTENTS

    CHAPTER TITLE PAGE

    NO NO

ABSTRACT iii

LIST OF FIGURES vii

LIST OF ABBREVIATIONS viii

    1. INTRODUCTION 1

    2. DIGITAL FILTERS 3

    3. STRUCTURE OF DIGITAL FILTERS 5

3.1 ADVANTAGES OF DIGITAL FILTERS OVER ANALOG FILTERS 6


12.2 FLOATING POINT MULTIPLIER 45

12.3 FLOATING POINT ADDER 48

    13. REFERENCES 51

    LIST OF FIGURES

    CHAPTER TITLE PAGE

    NO NO

    5.1 Transversal Structure 11

    5.2 Symmetric Transversal Structure 12

    5.3 Lattice Structure 14

8.1 Architecture of adaptive filter 20

    8.2 General Form Adaptive Filters 21

    10.1 A strong narrowband interference N(f) in a wideband signal S(f) 30

    10.2 Block diagram for the adaptive filter problem 31

10.3 a. Identification 33

10.3 b. Inverse modeling 33

10.3 c. Prediction 34

10.3 d. Interference cancellation 34

10.4 Example cross section of an error-performance surface for a two-tap filter 37


    CHAPTER 1

    INTRODUCTION

    In recent years, a growing field of research in adaptive systems has resulted in a

    variety of applications such as communications, radar, sonar, seismology, mechanical

    design, navigation systems and biomedical electronics. Adaptive noise canceling has

    been applied to areas such as speech communications, electrocardiograph, and seismic

    signal processing. Adaptive noise canceling is in fact applicable to a wide variety of

signal enhancement situations, because noise characteristics are often not stationary in real-world situations.

    An adaptive filter is a system whose structure is adjustable in such a way

    that its performance improves in accordance with its environment. A simple example of

    an adaptive system is the automatic gain control (AGC) used in radio and television

    receivers. The function of this circuit is to adjust the sensitivity of the receiver inversely

with the average incoming signal strength. The receiver is thus able to adapt to a wide range of input levels and to produce a much narrower range of output intensities. These

    systems usually have many of the following characteristics:

1. They can automatically adapt (self-optimize) in the face of a changing (nonstationary) environment and changing system requirements.

2. They can be trained to perform specific filtering and decision-making tasks. Synthesis of systems having these capabilities can be accomplished automatically through training. In a sense, adaptive systems can be programmed by a training process.


    3. Because of the above, adaptive systems do not require the elaborate

    synthesis procedures usually needed for nonadaptive systems. Instead, they

    tend to be self designing.

    4. They can extrapolate a model of behavior to deal with new situations after

    having been trained on a finite and often small number of training signals or

    patterns.

5. To a limited extent they can repair themselves; that is, they can adapt around certain kinds of internal defects.

6. They can usually be described as nonlinear systems with time-varying parameters.

    CHAPTER 2

    DIGITAL FILTERS

Digital filters are a very important part of digital signal processing. In fact, their

extraordinary performance is one of the key reasons that digital signal processing has

    become so popular. Digital filters can be used for:

    1. Separation of signals that have been combined.

2. Restoration of signals that have been distorted in some way.

Analog filters can be used for these same tasks; however, digital filters can achieve far

    superior results. Signal separation is needed when a signal has been distorted with

    interference, noise or other signals. For example, imagine a device for measuring the


electrical activity of a baby's heart while still in the womb. The breathing and the

    heartbeat of the mother will still corrupt the raw signal. A filter might be used to

    separate these signals so that they can be individually analyzed.

    Signal restoration is used when a signal has been distorted in some way. For

example, an audio recording made with poor equipment may be filtered in order to

better represent the sound as it actually occurred. Another example is the

    blurring of an image acquired with an improperly focused lens, or a shaky camera.

    A digital filter is a numerical procedure or an algorithm that transforms a given

sequence of numbers into a second sequence that has some desirable properties, such as less noise or distortion. Alternatively, it can be defined as a digital machine that performs the filtering process by the numerical evaluation of a linear difference equation in real time under program control.

    In Radar applications, digital filters are used to improve the detection of

    airplanes. In speech processing, digital filters have been employed to reduce the

    redundancy in the speech signal so as to allow more efficient transmission and for

    speech recognition.

    Input sequence to a digital filter can be generated in several ways. One common

    method is to sample a continuous time signal at a set of equally spaced time intervals. If

    the continuous time signal is denoted by X(t), then the values of the discrete time

    sequence are denoted as,

X(nTs) = X(t); t = nTs ------- (2.1)

    Where Ts is the sampling period.

    The implementation of a digital filter depends on the application. In education

    and research, a digital filter is typically implemented as a program on a general-purpose

    computer. The types of computers, which can be used, vary widely from personal


    computers through the larger minicomputers to the large time-shared mainframes. In

    commercial instrumentation and in industrial applications, the digital filter program is

commonly implemented with a microcomputer that may also be used for control and

monitoring purposes. For high speed or large volume applications, such as use

    in automobiles for controlling engine operation, the digital filter may consist of special

    purpose integrated circuit chips. These circuits perform the computation and storage

    functions required for the digital filter operation.

    CHAPTER 3

    STRUCTURE OF DIGITAL FILTERS

    A digital filter consists of three simple elements: adders, multipliers and delays.

The adder and multiplier are conceptually simple components that are readily

    implemented in the arithmetic logic unit of the computer. Delays are components that

allow access to future and past values in the sequence. Delays come in two basic flavors: positive and negative. A memory register that stores the current value of a sequence for one sample interval, thus making it available for future calculations, implements a positive delay, or simple delay. A negative delay, or advance delay, is

    used to look ahead to the next value in the sequence. Advance delays are typically used

in applications, such as image processing, in which the entire data sequence to be processed is available at the start of processing, so that the advance delay serves to access the next data sample in the sequence. The availability of the advance delay will

simplify the analysis of digital filters. A digital filter design involves selecting and interconnecting a finite number of these elements and determining the multiplier coefficient values.
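As a sketch of how these three elements combine, the following illustrative Python fragment (a behavioural sketch, not part of this project's VHDL design; names are invented for illustration) builds a 2-tap filter from one adder, two multipliers and one positive delay:

```python
def two_tap_filter(x, c0, c1):
    """y[n] = c0*x[n] + c1*x[n-1], built from two multipliers, an adder
    and one positive delay (a register holding the previous sample)."""
    delay = 0.0                  # memory register: previous input sample
    y = []
    for sample in x:
        y.append(c0 * sample + c1 * delay)   # multipliers feed the adder
        delay = sample                       # positive delay: store value for next step
    return y

# An impulse input exposes the coefficients in the output.
assert two_tap_filter([1.0, 0.0, 0.0], 0.5, 0.5) == [0.5, 0.5, 0.0]
```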


3.1 ADVANTAGES OF DIGITAL FILTERS OVER ANALOG FILTERS

1. Digital filters are programmable and their design parameters can be easily modified and implemented through a computer.

2. Temperature will not affect digital filters, whereas the passive elements of analog filters are sensitive to temperature.

3. Digital filters can handle low frequency signals, unlike analog filters.

4. Digital filters are more flexible and are vastly superior in the level of performance compared to analog filters.

    CHAPTER 4

    DIGITAL FILTER DESIGN

The design technique will be to force the digital filter to behave very much like some reference analog filter. The analog filter is analyzed by standard techniques to

    obtain analog representation in terms of the differential equation. The equation can be

    solved easily by analytic means for the unknown voltage, when the input voltage is a

    sinusoidal function of time, or an exponential function of time, or a step function of

time. Digital filters allow digital signal processors to work in real time. When the input signal is a more complicated function of time, though, it is still possible to obtain an appropriate solution by numerical methods. The numerical methods provide the basics

    of digital filtering.


    Digital filters use a digital processor to perform numerical calculations on

    sampled values of the signal. The processor in this case may be specialized with a

    digital signal processor chip. They are highly reliable and predictable since they work

with the binary states of zero and one. They are very flexible. The most basic step is the selection of the filter type based on parameters such as linear phase, efficiency and stability. Then the filter specifications need to be specified along with the number of coefficients involved. The realization structure involved is also analyzed.

    4.1 TYPES OF DIGITAL FILTERS

Digital filters are classified as either recursive or nonrecursive. Depending on whether the filter is recursive or nonrecursive, the digital filter can be categorized as a Finite or Infinite Impulse Response filter.

A nonrecursive filter that has a finite-duration impulse response is known as a Finite Impulse Response filter, or FIR. A recursive filter with an infinite-duration impulse response is known as an Infinite Impulse Response filter, or IIR.

    4.1.1 FIR FILTER

A Finite Impulse Response (FIR) filter is one in which the impulse response h(n) is limited to a finite number of samples defined over the range (0, N-1).

For an N-tap FIR filter with coefficients h(k), the output is described by:

y(n) = h(0)x(n) + h(1)x(n-1) + h(2)x(n-2) + . . . + h(N-1)x(n-N+1) ------- (4.1)

The filter's Z transform is,

H(z) = h(0) + h(1)z^-1 + h(2)z^-2 + . . . + h(N-1)z^-(N-1) ------- (4.2)
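Equation (4.1) can be checked with a short sketch. The following illustrative Python fragment (the project itself targets VHDL; the coefficient values are invented for illustration) computes the N-tap sum directly, taking samples before n = 0 as zero:

```python
def fir(h, x):
    """Direct form of equation (4.1): y(n) = sum_k h(k) x(n-k), k = 0..N-1,
    with samples before n = 0 taken as zero."""
    N = len(h)
    return [sum(h[k] * (x[n - k] if n - k >= 0 else 0.0) for k in range(N))
            for n in range(len(x))]

h = [1.0, 0.5, 0.25]                      # illustrative coefficients
# An impulse input reproduces the coefficients h(k) in the output.
assert fir(h, [1.0, 0.0, 0.0, 0.0]) == [1.0, 0.5, 0.25, 0.0]
```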


    CHAPTER 5

    FILTER STRUCTURE

    Several types of filter structures can be implemented for the adaptive filters such

as IIR or FIR. The FIR filter has only adjustable zeroes, and hence it is free of the stability problems associated with adaptive IIR filters. IIR filters have adjustable poles as well as zeroes. An adaptive IIR filter, with poles as well as zeroes, can offer the same filter characteristics as the FIR filter but with lower filter complexity. An adaptive FIR

    filter can be realized using transversal and lattice structures.

5.1 TRANSVERSAL STRUCTURE

    The most common implementation of the adaptive filter is the transversal

    structure (tapped delay line). Figure 5.1 shows the structure of a transversal FIR filter

    with N tap weights (adjustable during the adaptation process).

    The filter output signal y(n) is given as,

y(n) = W^T(n) U(n) = Σ w_i(n) u(n-i), i = 0, 1, . . ., N-1 ------- (5.1)

where

U(n) = [u(n), u(n-1), . . ., u(n-N+1)]^T is the input vector,

W(n) = [w0(n), w1(n), . . ., wN-1(n)]^T is the weight vector.


[Figure: tapped delay line; the input u(n) and its delayed values u(n-1), . . ., u(n-N+1) are weighted by w0(n), . . ., wN-1(n) and summed to give y(n)]

Figure 5.1 Transversal Structure

    5.2 SYMMETRIC TRANSVERSAL STRUCTURE


    A transversal filter with symmetric impulse response (weight values) about the

    center weight has a linear phase response. The characteristic of linear phase response in

    filter is sometimes desirable because it allows a system to reject or shape energy bands

    of the spectrum and still maintain the basic pulse integrity with a constant filter group

    delay. Image and digital communication are examples where this characteristic is

    desirable. The adaptive symmetric transversal structure is shown in the figure 5.2.

    Figure 5.2 Symmetric Transversal Structure

    An FIR filter with time domain symmetry, such as

  • 8/13/2019 VLSI Implementa

    18/51

    18

w0(n) = wN-1(n), w1(n) = wN-2(n), . . . ------- (5.2)

    has a linear phase response in the frequency domain. Consequently, the number of

weights is reduced by half in a transversal structure, as shown in the figure 5.2, with

even N tap weights.

    The tap-input vector becomes

U(n) = [u(n) + u(n-N+1), u(n-1) + u(n-N+2), . . ., u(n-N/2+1) + u(n-N/2)]^T ------- (5.3)

    As a result, output y(n) becomes,

y(n) = Σ w_i(n) [u(n-i) + u(n-N+i+1)], i = 0, 1, . . ., N/2-1 ------- (5.4)
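A minimal sketch of this fold, with illustrative values (not from the report): pre-adding the tap inputs gives the same output as the full symmetric sum while using only N/2 multiplications.

```python
def direct(w, u):
    """Full N-multiplication sum over symmetric weights."""
    return sum(wi * ui for wi, ui in zip(w, u))

def folded(w, u):
    """N/2 multiplications on pre-added inputs u(n-i) + u(n-N+1+i)."""
    N = len(w)
    return sum(w[i] * (u[i] + u[N - 1 - i]) for i in range(N // 2))

w = [0.2, 0.7, 0.7, 0.2]    # symmetric weights: w0 = w3, w1 = w2 (illustrative)
u = [1.0, 2.0, 3.0, 4.0]    # u[i] stands for the delayed sample u(n-i)
# Both forms produce the same output sample y(n).
assert abs(direct(w, u) - folded(w, u)) < 1e-12
```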

    5.3 LATTICE STRUCTURE

    The lattice structure offers several advantages over the transversal structure:

1. The lattice structure has good numerical round-off characteristics that make it less sensitive than the transversal structure to round-off errors and parameter variations.

2. The lattice structure orthogonalises the input signal stage-by-stage, which leads to faster convergence and efficient tracking capabilities when used in an adaptive environment.

3. The various stages are decoupled from each other, so it is relatively easy to increase the prediction order if required.

4. The lattice structure is order recursive, which allows adding or deleting stages from the lattice without affecting the existing stages.

5. The lattice filter (predictor) can be interpreted as wave propagation in a stratified medium. This can represent an acoustical tube model of the human vocal tract, which is extremely useful in Digital Signal Processing.

The lattice filter has a modular structure with cascaded identical stages (stage 1 through stage m, with forward signals f0(n), . . ., fm(n) and backward signals b0(n), . . ., bm(n)), as shown in figure 5.3.

    Figure 5.3 Lattice Structure

    CHAPTER 6

    SAMPLING THEORY

    Sampling has a great importance in DSP. The transformation of a signal from

    digital to analog and from analog to digital is vital in Digital signal processing. High

    sampling rate provides a good frequency response. Sampling theory defines how a

    variant quantity can be captured in an instant of time. A signal can be sampled at a set

    of points spaced at equal intervals of time. Sampling is the process in which the

    continuous time signal is converted into discrete time signal.

    Method of sampling is based on:


1. Establishing the rate of sampling.

2. Changing an analog signal to a digital one.

3. Changing the digital number to an analog signal.

Nyquist theory, also known as the sampling theorem, states that in order to obtain all information in a signal of one-sided bandwidth B, it must be sampled at a rate greater than 2B.
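The sampling theorem can be illustrated with a small sketch (the rates and frequencies are illustrative, not from the report): a tone above half the sampling rate is indistinguishable, after sampling, from a lower-frequency alias.

```python
import math

def sample(f, fs, count):
    """Sample a unit sinusoid of frequency f at rate fs."""
    return [math.sin(2.0 * math.pi * f * n / fs) for n in range(count)]

fs = 8.0                              # illustrative sampling rate
low = sample(1.0, fs, 8)              # 1 Hz tone: fs > 2B, fully captured
high = sample(1.0 + fs, fs, 8)        # 9 Hz tone: under-sampled
# The sampled sequences are identical -- the 9 Hz tone aliases onto 1 Hz.
assert all(abs(a - b) < 1e-9 for a, b in zip(low, high))
```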

    CHAPTER 7

HISTORY OF ADAPTIVE FILTERS

Until the mid-1960s, telephone channel equalizers were fixed equalizers that

    caused fixed performance degradation or they were manually adjustable equalizers that

    were cumbersome to adjust.

    In 1965, Lucky introduced the Zero-Forcing algorithm for automatic adjustment

    of the equalizer weights. This algorithm minimizes a certain distortion, which has the

    effect of forcing the intersymbol interference to zero. This breakthrough by Lucky

inspired other researchers to investigate different aspects of the adaptive equalization

problem, leading to new improved solutions.

    The gain in popularity of adaptive signal processing is primarily due to

    advances in the digital technology that have increased computing capabilities and

    therefore broadened the scope of Digital Signal Processing as a whole.


    If the signal is deterministic with known spectrum and this spectrum does not

    overlap with that of the noise, then the signal can be recovered by the conventional

    filtering techniques. However, this situation is very rare. Instead, we are often faced

    with the problem of estimating an unknown random signal in the presence of noise.

    This is usually accomplished so as to minimize the error in the estimation according to

    a certain criterion. This leads to the area of adaptive filtering.

    For example, when a telephone call is initiated, the transfer function of the

    telephone channel is unknown, and the connection may involve signal reflections that

    produce undesirable echoes. This problem can be corrected with a linear filter once the

    transfer function is measured. The measurement can be made, however, only after the

    system is running. Therefore, the correction filter must be designed by the system itself,

    and in a reasonably short time. This can be done by using adaptive filters.

    8.1 OPERATION

    For designing the adaptive filter, we have used Least Mean Square (LMS)

algorithm. The LMS adaptive algorithm has found many applications in noise cancellation, line enhancing, etc. To design our filter, we used the conventional Look-ahead technique, which uses the serial LMS algorithm. There is also another technique, called the Relaxed Look-ahead technique, which uses the delayed LMS algorithm. The Relaxed Look-ahead technique does not maintain the input-output mapping and does not result in a final unique architecture. Convergence characteristics would also differ for different combinations. These disadvantages are overcome in the conventional Look-ahead technique.


We designed a fourth order adaptive filter. Designing the adaptive filter does not require any other frequency response information or specification. To define the self-learning process, the filter uses the adaptive algorithm to reduce the error between the output signal y(k) and the desired signal d(k). [Initially the weight values are initialized to zero].

When the LMS performance criterion for e(k) has achieved its minimum value through the iterations of the adaptive algorithm, the adaptive filter has finished its weight-update process and its coefficients have converged to a solution. Now the output from the adaptive filter closely matches the desired signal d(k).

When you change the input data characteristics, sometimes called the filter environment, the filter adapts by generating a new set of coefficients for the new data. Notice that when e(k) goes to zero and remains there, we achieve perfect adaptation, but this is not likely in the real world [2].
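The adaptation described above can be sketched in software. The following illustrative Python fragment (a behavioural sketch, not the project's VHDL; the "unknown" system and step size are invented for illustration) runs a fourth order LMS filter with zero-initialised weights until its coefficients converge:

```python
import random

random.seed(0)
target = [0.4, -0.3, 0.2, 0.1]   # hypothetical unknown system to be matched
w = [0.0, 0.0, 0.0, 0.0]         # weight values initialized to zero
mu = 0.05                        # adaptation constant (illustrative)

u = [0.0, 0.0, 0.0, 0.0]         # tapped delay line u(k), u(k-1), u(k-2), u(k-3)
for k in range(2000):
    x = random.uniform(-1.0, 1.0)
    u = [x] + u[:-1]
    d = sum(t * ui for t, ui in zip(target, u))   # desired signal d(k)
    y = sum(c * ui for c, ui in zip(w, u))        # filter output y(k)
    e = d - y                                     # error e(k)
    w = [c + mu * e * ui for c, ui in zip(w, u)]  # LMS weight update

# After adaptation the error is near zero and the weights match the system.
assert all(abs(a - b) < 1e-2 for a, b in zip(w, target))
```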


[Figure: the input u(n) feeds a filter block (F-BLOCK) with weights W1(n), . . ., WN(n) producing y(n); the error e(n) between the desired signal d(n) and y(n), scaled by the step size m, drives the weight-update block (WUD BLOCK)]

Figure 8.1 Architecture of the adaptive filter


    8.2 GENERAL FORM

An adaptive filter is a filter whose coefficients are updated by an

adaptive algorithm to optimize the filter's response to a desired performance criterion.

    Figure 8.2 General Form Adaptive Filters

In general, adaptive filters consist of two distinct parts: a filter, whose structure is designed to perform a desired processing function; and an adaptive algorithm, for adjusting the coefficients of that filter to improve its performance, as illustrated in figure 8.2.

    The incoming signal, u(n), is weighted in a digital filter to produce an output

    y(n). The adaptive algorithm adjusts the weights in the filter to minimize the error, e(n),

    between the filter output, y(n) and the desired response of the filter, d(n). Because of

    their robust performance in the unknown and time-variant environment, adaptive filters

    have been widely used from telecommunications to control applications.



    An adaptive filter can eliminate phase errors that arise from distortion in the

propagating medium or from the signal processing system. Suppose that the distortion of a test impulse u(t) = δ(t), which has a uniform amplitude as a function of frequency, when it passes through a transmission medium, is H(ω), where

H(ω) = A(ω) exp[-jφ(ω)] ------- (8.1)

and A(ω) is a real function.

If a compensating matched filter is constructed with a response H*(ω), the resultant output, when a signal U(ω) is passed through the medium, will be

Y(ω) = U(ω) H(ω) H*(ω) = A²(ω) U(ω) ------- (8.2)

    All phase errors due to transmission medium are removed by this process, and

    the output of a test impulse, after passing through the distortion medium and the

    matched filter, has no phase variation with frequency. The transform of this output will

    be y(t), and the combined response of the transmission system and the equalizer will be

    the autocorrelation function of h(t) convolved with u(t).
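A small numerical sketch of equations (8.1) and (8.2), with an assumed, illustrative A(ω) and φ(ω): multiplying by H(ω) and then by H*(ω) leaves a purely real A²(ω)U(ω), i.e. the medium's phase distortion cancels.

```python
import cmath

def H(w):
    """Assumed medium response A(w) exp(-j phi(w)) -- A and phi are illustrative."""
    A = 1.0 / (1.0 + 0.1 * w)
    phi = 0.3 * w * w
    return A * cmath.exp(-1j * phi)

for w in [0.5, 1.0, 2.0]:
    U = 1.0 + 0.0j                       # spectrum of the test impulse delta(t)
    Y = U * H(w) * H(w).conjugate()      # medium followed by matched filter
    assert abs(Y.imag) < 1e-12           # phase variation removed
    assert abs(Y - abs(H(w)) ** 2) < 1e-12   # Y = A^2(w) U(w)
```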

    By removing the phase errors an output at t=0 corresponds to the peak of the

    autocorrelation function of h(t) and can be obtained from an impulse input

[u(t) = δ(t)]. If H(ω)H*(ω) is constant in amplitude over a finite frequency range Δω, so that H(ω) = Π[(ω-ω0) / Δω], the output with an impulse input will be a sinc function of time. If there are major peaks in the frequency spectrum, the time response will exhibit severe ringing, but this is avoided provided that the input spectrum is reasonably smooth (ideally, a Gaussian function).


The time domain response corresponding to H(ω)H*(ω) should be reasonably

    compact with no severe ringing. Suppose that a reference impulse is sent through a

distorting medium. If the distorted pulse h(t) is stored in the storage correlator, phase distortion caused by the medium can be removed. Now any other signal u(t), sent along the same path into the correlator, will be correlated with the stored reference H*(ω), or h(-t), and will have its phase distortion removed.

    8.3 CHARACTERISTICS

    An adaptive system is one that is designed primarily for the purpose of adaptive

    control and adaptive signal processing. Such a system usually has some or all of the

    following characteristics:

1. Firstly, they can automatically adapt in the face of changing environments and changing system requirements.

2. Secondly, they can be trained to perform specific filtering and decision-making tasks, i.e., they can be programmed by a training process. Because of this, adaptive systems do not require the elaborate synthesis procedures usually needed for non-adaptive systems. Instead, they tend to be self-designing.

    3. Thirdly, they can extrapolate a model of behaviour to deal with new

    situations after having been trained on a finite small number of training

    signals or patterns.


    4. Fourthly, to a limited extent they can repair themselves, i.e., adapt around

    certain kinds of internal defects.

    5. Finally, they are more complex and difficult to analyze than non-adaptive

    systems, but they offer the possibility of substantially increased system

    performance when input signal characteristics are unknown or time varying.

By an adaptive filter, we mean a self-designing device that has the following

    characteristics:

1. It contains a set of adjustable filter coefficients.

2. The coefficients are updated in accordance with an algorithm.

3. The algorithm operates with arbitrary initial conditions; each time new

    samples are received for the input signal and the desired response,

appropriate corrections are made to the previous values of the filter coefficients.

4. The adaptation is continued until the operating point of the filter on the error-performance surface moves close enough to the minimum point.

    CHAPTER 9

    ALGORITHMS

    Two types of adaptive algorithms are discussed in this section:

    1. Recursive least square algorithm (RLS)


2. Least mean square algorithm (LMS)

The RLS algorithm provides faster convergence but has more computational complexity. The LMS algorithm is based on a gradient-type search for tracking time-varying signal characteristics.

    9.1 RLS ALGORITHM

In the Recursive Least Square algorithm, the order of operations is:

1. Compute the filter output

2. Find the error signal

3. Compute the Kalman gain vector

4. Update the inverse of the correlation matrix

5. Update the weights
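These five steps can be sketched as follows for a 2-tap filter (a behavioural sketch with illustrative data, an assumed forgetting factor of 1, and a hypothetical system to identify; not from the report):

```python
import random

def rls_step(w, P, u, d, lam=1.0):
    """One iteration of the five RLS steps for an N-tap filter."""
    N = len(u)
    y = sum(wi * ui for wi, ui in zip(w, u))                  # 1. filter output
    e = d - y                                                 # 2. error signal
    Pu = [sum(P[i][j] * u[j] for j in range(N)) for i in range(N)]
    denom = lam + sum(ui * pi for ui, pi in zip(u, Pu))
    k = [pi / denom for pi in Pu]                             # 3. Kalman gain vector
    uTP = [sum(u[i] * P[i][j] for i in range(N)) for j in range(N)]
    P = [[(P[i][j] - k[i] * uTP[j]) / lam for j in range(N)]  # 4. inverse correlation
         for i in range(N)]                                   #    matrix update
    w = [wi + ki * e for wi, ki in zip(w, k)]                 # 5. weight update
    return w, P

random.seed(1)
w, P = [0.0, 0.0], [[100.0, 0.0], [0.0, 100.0]]   # P initialised to a large diagonal
target = [0.5, -0.25]                             # hypothetical system to identify
prev = 0.0
for n in range(50):
    x = random.uniform(-1.0, 1.0)
    u = [x, prev]                                 # current and one-delayed input
    d = sum(t * ui for t, ui in zip(target, u))   # desired response
    w, P = rls_step(w, P, u, d)
    prev = x

# RLS converges quickly: after 50 samples the weights match the system.
assert all(abs(a - b) < 1e-2 for a, b in zip(w, target))
```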

    9.2 LMS ALGORITHM

The LMS algorithm's simplicity is its major advantage, and hence it is widely used in many applications. The LMS algorithm is described as,

W(n+1) = W(n) + m*e(n)*U(n) ------- (9.1)

e(n) = d(n) - W^T(n)*U(n) ------- (9.2)


    where

W(n) = [w1(n), w2(n), . . ., wN(n)]^T

U(n) = [u(n), u(n-1), . . ., u(n-N+1)]^T

m is the adaptation constant,

d(n) is the desired output,

e(n) is the error obtained during every iteration.

As we specify m smaller, the correction to the filter weights gets smaller for each sample and the LMS error falls more slowly. A larger m changes the weights more each step, and hence the error falls more rapidly, but the resulting error does not approach the ideal solution as closely.
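This trade-off can be sketched with a one-tap example (the target weight, step sizes and input are illustrative, not from the report):

```python
def lms_1tap(m, steps):
    """Adapt a single weight toward a hypothetical target with constant input."""
    w, target = 0.0, 0.8
    for _ in range(steps):
        e = target - w           # e(n) = d(n) - W(n) U(n), with U(n) = 1
        w += m * e               # W(n+1) = W(n) + m e(n) U(n)
    return abs(target - w)       # remaining error after adaptation

slow = lms_1tap(0.05, 20)        # small m: error falls slowly
fast = lms_1tap(0.5, 20)         # larger m: error falls rapidly
assert fast < slow
```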

    9.2.1 MSE CRITERION

The adaptation algorithm uses the error signal,

e(n) = d(n) - y(n) ------- (9.3)

    where

    d(n) is the desired signal

    y(n) is the filter output

The input vector U(n) and the error e(n) are used to update the adaptive filter coefficients according to a criterion. The criterion employed in this section is the Mean Square Error (MSE) ξ:

ξ = E[e²(n)] ------- (9.4)

    we know that,


    approximate solutions. In this algorithm, the next weight vector W(n+1) is increased by

    a change proportional to the negative gradient of mean-square error performance.

W(n+1) = W(n) - m∇(n) ------- (9.10)

where m is the adaptation step size that controls the stability and convergence rate. For the LMS algorithm, the gradient at the nth iteration, ∇(n), is estimated by taking the squared error e²(n) as an estimate of the MSE in equation (9.4). Thus the expression for the gradient estimate can be simplified to,

∇(n) = d[e²(n)] / dW(n) ------- (9.11)

= -2e(n) U(n) ------- (9.12)

    Substitution of this instantaneous gradient estimate into (9.10) yields the

Widrow-Hoff LMS algorithm,

W(n+1) = W(n) + 2m e(n) U(n) ------- (9.13)

where 2m in the above equation is replaced by m in practical implementations.

m should be selected such that 0 < m < 1/λmax, where λmax is the largest eigenvalue of the input autocorrelation matrix.


    CHAPTER 10

    ADAPTIVE FILTER OVERVIEW

    Adaptive filters learn the statistics of their operating environment and

    continually adjust their parameters accordingly. This chapter presents the theory of the

    algorithms needed to train the filters.

    10.1 INTRODUCTION

    In practice, signals of interest often become contaminated by noise or other

    signals occupying the same band of frequency. When the signal of interest and the

noise reside in separate frequency bands, conventional linear filters are able to extract the desired signal. However, when there is spectral overlap between the signal and noise, or the signal's or interfering signals' statistics change with time, fixed-coefficient filters are inappropriate.


    Figure 10.2 Block diagram for the adaptive filter problem.

    The discrete adaptive filter accepts an input u(n) and produces an output y(n) by

    a convolution with the filter's weights, w(k). A desired reference signal, d(n), is

    compared to the output to obtain an estimation error e(n). This error signal is used to

    incrementally adjust the filter's weights for the next time instant. Several algorithms

    exist for the weight adjustment, such as the Least-Mean-Square (LMS) and the

    Recursive Least-Squares (RLS) algorithms. The choice of training algorithm depends

    upon the required convergence time and the computational complexity available,

    as well as the statistics of the operating environment.
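One time instant of this loop can be sketched as follows (the function name and two-tap example are illustrative only):

```python
import numpy as np

def filter_step(w, buf, u_n, d_n):
    """One time instant of Figure 10.2: shift the delay line, filter, form the error."""
    buf = np.roll(buf, 1)
    buf[0] = u_n                   # delay line now holds [u(n), u(n-1), ..., u(n-M+1)]
    y_n = np.dot(w, buf)           # y(n) = sum_k w(k) u(n-k), the convolution output
    e_n = d_n - y_n                # estimation error passed to the weight-update rule
    return buf, y_n, e_n
```

Feeding samples through a fixed two-tap filter w = [1.0, 0.5] gives y(n) = u(n) + 0.5 u(n-1); a training algorithm would then use e(n) to adjust w before the next instant.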

    10.3 APPLICATIONS

    Because of their ability to perform well in unknown environments and track

    statistical time-variations, adaptive filters have been employed in a wide range of fields.

    However, there are essentially four basic classes of applications for adaptive filters:

    identification, inverse modeling, prediction, and interference cancellation, with the

    main difference between them being the manner in which the desired response is

    extracted. These are presented in Figure 10.3a, b, c, and d, respectively. The

    adjustable parameters that depend upon the application at hand are the number

    of filter taps, the choice of FIR or IIR structure, the choice of training algorithm, and the

    learning rate. Beyond these, the underlying architecture required for realization is

    independent of the application. Therefore, this thesis focuses on one particular

    application, namely noise cancellation, as it is the most likely to require an embedded

    VLSI implementation. Adaptive noise cancellation is sometimes necessary in

    communication systems such as handheld radios and satellite systems that are contained

    on a single silicon chip, where real-time processing is required. Doing this efficiently is

    important, because adaptive equalizers are a major component of receivers in modern

    communication systems and can account for up to 90% of the total gate count.


    Figure 10.3a Identification

    Figure 10.3b Inverse modeling


    Figure 10.3c Prediction

    Figure 10.3d Interference cancellation


    Also, let p represent the cross-correlation vector between the tap inputs and the

    desired response d(n):

    p = E[u(n)d*(n)], ------- (10.8)

    which expanded is:

    p = [p(0), p(-1), ..., p(1-M)]T ------- (10.9)

    Since the lags in the definition of p are either zero or negative, the Wiener-Hopf

    equation may be written in compact matrix form:

    RwO = p, ------- (10.10)

    where wO stands for the M-by-1 optimum tap-weight vector for the transversal

    filter. That is, the optimum filter's coefficients are:

    wO = [wO,0, wO,1, ..., wO,M-1]T ------- (10.11)

    This produces the optimum output in terms of the mean-square error; however, if

    the signal's statistics change with time, then the Wiener-Hopf equation must be

    recalculated. This would require calculating two matrices, inverting one of them and

    then multiplying them together. This computation cannot feasibly be performed in real

    time, so other algorithms that approximate the Wiener filter must be used.
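The Wiener solution can be checked numerically: estimate R and p from white-noise data passed through a hypothetical 3-tap system h, then solve RwO = p (all names and values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 3
h = np.array([0.8, -0.4, 0.2])               # hypothetical unknown system
u = rng.standard_normal(5000)                # white tap inputs
d = np.convolve(u, h)[:len(u)]               # desired response d(n)

# Tap-input vectors U(n) = [u(n), u(n-1), ..., u(n-M+1)]^T
U = np.array([[u[n - k] if n >= k else 0.0 for k in range(M)]
              for n in range(len(u))])
R = U.T @ U / len(u)                         # estimate of the autocorrelation matrix
p = U.T @ d / len(u)                         # estimate of the cross-correlation vector
w_opt = np.linalg.solve(R, p)                # solve R w_O = p
```

Because the input is white, R is close to the identity and w_opt recovers h; when the statistics drift, this solve must be repeated, which is exactly the recomputation burden noted above.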


    10.4.2 METHOD OF STEEPEST DESCENT

    With the error-performance surface defined previously, one can use the method

    of steepest descent to converge to the optimal filter weights for a given problem. Since

    the gradient of a surface (or hypersurface) points in the direction of maximum increase,

    the direction opposite the gradient (-∇) points towards the minimum point of

    the surface. One can adaptively reach the minimum by updating the weights at each

    time step using the equation

    wn+1 = wn - μ∇(n), ------- (10.12)

    where the constant μ is the step size parameter. The step size parameter

    determines how fast the algorithm converges to the optimal weights. A necessary and

    sufficient condition for the convergence or stability of the steepest-descent algorithm is

    for μ to satisfy

    0 < μ < 1/λmax, ------- (10.13)

    where λmax is the largest eigenvalue of the autocorrelation matrix R of the tap inputs.
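A minimal numeric sketch of the descent, using an arbitrary 2-by-2 R and cross-correlation vector p for which the exact gradient is available in closed form:

```python
import numpy as np

# Hypothetical quadratic MSE surface, minimized at w_O = R^{-1} p
R = np.array([[2.0, 0.5],
              [0.5, 1.0]])       # tap-input autocorrelation matrix
p = np.array([1.0, 0.3])         # cross-correlation vector
lam_max = np.linalg.eigvalsh(R).max()
mu = 0.5 / lam_max               # well inside the stability bound on the step size

w = np.zeros(2)
for _ in range(200):
    grad = 2 * (R @ w - p)       # exact gradient of the MSE surface
    w = w - mu * grad            # steepest-descent update (10.12)
```

Each weight-error mode decays by a factor (1 - 2μλi) per step, which is why the bound on μ involves the largest eigenvalue of R.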


    10.4.3 LEAST-MEAN-SQUARE ALGORITHM

    The least-mean-square (LMS) algorithm is similar to the method of steepest

    descent in that it adapts the weights by iteratively approaching the MSE minimum.

    Widrow and Hoff invented this technique in 1960 for use in training neural networks.

    The key difference is that instead of calculating the gradient at every time step, the LMS

    algorithm uses a rough approximation to the gradient. The error at the output of the

    filter can be expressed as

    en = dn - wnT un, ------- (10.14)

    which is simply the desired output minus the actual filter output. Using this

    definition for the error, an approximation of the gradient is found by replacing the MSE

    with the instantaneous squared error, giving the estimate ∇(n) = -2en un.

    Substituting this expression for the gradient into the weight-update equation

    from the method of steepest descent gives

    wn+1 = wn + 2μen un, ------- (10.15)

    which is the Widrow-Hoff LMS algorithm. As with the steepest-descent

    algorithm, it can be shown to converge for values of μ less than the reciprocal of λmax,

    but λmax may be time-varying, and to avoid computing it another criterion can be used.

    This is

    μ < 2/(M·Smax), ------- (10.16)

    where M is the number of filter taps and Smax is the maximum value of the

    power spectral density of the tap inputs u. The relatively good performance of the LMS

    algorithm given its simplicity has caused it to be the most widely implemented in

    practice. For an N-tap filter, the number of operations has been reduced to 2N

    multiplications and N additions per coefficient update. This is suitable for real-time

    applications, and is the reason for the popularity of the LMS algorithm [4].
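As a sketch of the noise-cancellation configuration this thesis targets, the loop below applies the practical μe(n)u(n) update; the signal, noise path, filter length, and step size are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 4000
t = np.arange(N)
s = np.sin(2 * np.pi * 0.03 * t)             # hypothetical signal of interest
v = rng.standard_normal(N)                   # noise source, also measured as the reference input
noise_path = np.array([0.6, 0.3, -0.1])      # assumed path from noise source to primary sensor
d = s + np.convolve(v, noise_path)[:N]       # primary input: signal plus filtered noise

M, mu = 8, 0.005
w = np.zeros(M)
buf = np.zeros(M)
e = np.zeros(N)
for n in range(N):
    buf = np.roll(buf, 1)
    buf[0] = v[n]
    y = w @ buf                # N multiplications for the filter output
    e[n] = d[n] - y            # the error converges towards the clean signal s(n)
    w += mu * e[n] * buf       # N more multiplications and N additions for the update
```

After convergence w approximates the noise path, so subtracting y(n) from the primary input leaves the error e(n) close to the clean signal.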

    CHAPTER 11

    CONCLUSIONS

    This report presents the development of an algorithm, architecture, and

    implementation for speech processing using FPGAs. The VHDL code developed is

    RTL compliant and synthesizes with the Xilinx tools. An adaptive filter is a filter that

    varies in time, adapting its coefficients according to some reference, here by means of

    the LMS algorithm. We are often faced with the problem of estimating an unknown

    random signal in the presence of noise. This is usually accomplished so as to minimize

    the estimation error according to a certain criterion, which leads to the area of adaptive

    filtering.


    CHAPTER 12

    SYNTHESIS REPORT

    12.1 LMS REPORT

    =============================================================
    Final Report
    =============================================================

    Final Results
    RTL Top Level Output File Name : lmsvhd.ngr
    Top Level Output File Name     : lmsvhd
    Output Format                  : NGC
    Optimization Goal              : Speed
    Keep Hierarchy                 : NO

    Design Statistics
    # IOs : 1

    Cell Usage:

    =============================================================
    Device utilization summary:
    ---------------------------
    Selected Device : 2s15tq144-5

    TIMING REPORT

    NOTE: THESE TIMING NUMBERS ARE ONLY A SYNTHESIS ESTIMATE.
    FOR ACCURATE TIMING INFORMATION PLEASE REFER TO THE TRACE REPORT
    GENERATED AFTER PLACE-and-ROUTE.


    Clock Information:
    ------------------
    No clock signals found in this design

    Timing Summary:
    ---------------
    Speed Grade : -5

    Minimum period: No path found
    Minimum input arrival time before clock: No path found
    Maximum output required time after clock: No path found
    Maximum combinational path delay: No path found

    Timing Detail:
    --------------
    All values displayed in nanoseconds (ns)

    CPU: 56.64 / 57.50 s | Elapsed: 56.00 / 57.00 s

    Total memory usage is 115620 kilobytes

    12.2 FLOATING POINT ADDER

    =============================================================
    Final Report
    =============================================================

    Final Results
    RTL Top Level Output File Name : floatadd.ngr
    Top Level Output File Name     : floatadd
    Output Format                  : NGC
    Optimization Goal              : Speed
    Keep Hierarchy                 : NO

    Design Statistics
    # IOs : 96

    Macro Statistics:
    # Multiplexers : 46
    #   2-to-1 multiplexer : 46
    # Logic shifters : 46
    #   23-bit shifter logical left : 23
    #   24-bit shifter logical right : 23
    # Adders/Subtractors : 49
    #   24-bit adder carry out : 1
    #   26-bit subtractor : 1
    #   8-bit adder : 1
    #   8-bit subtractor : 46
    # Comparators : 29
    #   23-bit comparator equal : 1
    #   23-bit comparator less : 1
    #   8-bit comparator equal : 1
    #   8-bit comparator greater : 25
    #   8-bit comparator less : 1

    Cell Usage:
    # BELS : 2995
    #   GND : 1
    #   LUT1 : 8
    #   LUT2 : 85
    #   LUT3 : 652
    #   LUT4 : 1356
    #   MUXCY : 431
    #   MUXF5 : 41
    #   VCC : 1
    #   XORCY : 420
    # IO Buffers : 96
    #   IBUF : 64
    #   OBUF : 32

    =============================================================
    Device utilization summary:
    ---------------------------
    Selected Device : 2s15tq144-5

    Number of Slices       : 1305 out of 192   679% (*)
    Number of 4 input LUTs : 2101 out of 384   547% (*)
    Number of bonded IOBs  :   96 out of  90   106% (*)

    CPU: 80.20 / 81.20 s | Elapsed: 80.00 / 81.00 s


    12.3 FLOATING POINT MULTIPLIER

    =============================================================
    Final Report
    =============================================================

    Final Results
    RTL Top Level Output File Name : floatmult.ngr
    Top Level Output File Name     : floatmult
    Output Format                  : NGC
    Optimization Goal              : Speed
    Keep Hierarchy                 : NO

    Design Statistics
    # IOs : 96

    Macro Statistics:
    # Multiplexers : 23
    #   2-to-1 multiplexer : 23
    # Logic shifters : 23
    #   23-bit shifter logical left : 23
    # Adders/Subtractors : 26
    #   8-bit adder : 2
    #   8-bit subtractor : 24
    # Multipliers : 1
    #   24x24-bit multiplier : 1
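The macro statistics reflect the usual single-precision datapath: the 8-bit adders/subtractors operate on biased exponents and the 24x24-bit multiplier on significands with the hidden bit restored. A Python sketch of that decomposition, ignoring zeros, subnormals, overflow, and rounding (which the synthesized design must handle):

```python
import struct

def decompose(x):
    """Split an IEEE-754 single into sign, 8-bit biased exponent, 24-bit significand."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31
    exp = (bits >> 23) & 0xFF
    mant = (bits & 0x7FFFFF) | (1 << 23)     # restore the hidden leading 1
    return sign, exp, mant

def fp_mult_sketch(a, b):
    """Sign XOR, exponent add minus bias, 24x24 significand multiply, normalize."""
    sa, ea, ma = decompose(a)
    sb, eb, mb = decompose(b)
    s = sa ^ sb
    e = ea + eb - 127                        # the 8-bit adder/subtractor work
    m = ma * mb                              # 48-bit product of the 24x24 multiplier
    if m & (1 << 47):                        # product in [2, 4): renormalize
        m >>= 24
        e += 1
    else:                                    # product in [1, 2)
        m >>= 23
    bits = (s << 31) | ((e & 0xFF) << 23) | (m & 0x7FFFFF)
    return struct.unpack('>f', struct.pack('>I', bits))[0]
```

The 23-bit left shifters in the report suggest the design also normalizes denormalized operands, which this sketch leaves out.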


    Cell Usage:
    # BELS : 3490
    #   GND : 1
    #   LUT1 : 55
    #   LUT2 : 322
    #   LUT3 : 464
    #   LUT4 : 712
    #   MULT_AND : 276
    #   MUXCY : 775
    #   MUXF5 : 117
    #   VCC : 1
    #   XORCY : 767
    # IO Buffers : 96
    #   IBUF : 64
    #   OBUF : 32

    Device utilization summary:
    ---------------------------
    Selected Device : 2s15tq144-5

    Number of Slices       :  867 out of 192   451% (*)
    Number of 4 input LUTs : 1553 out of 384   404% (*)
    Number of bonded IOBs  :   96 out of  90   106% (*)

    =============================================================
    CPU: 35.66 / 36.51 s | Elapsed: 35.00 / 36.00 s

    Total memory usage is 79780 kilobytes


    Figure 12.2 Synthesized FLOATMULT architecture

    CHAPTER 13

    REFERENCES

    1. Simon Haykin, Adaptive Filter Theory, Fourth Edition, Pearson Education Asia,

    2002.

    2. K. Parhi, VLSI Digital Signal Processing Systems: Design and Implementation,

    John Wiley and Sons, 1999.

    3. S. Palnitkar, Verilog HDL: A Guide to Digital Design and Synthesis, Prentice Hall.

    4. Nabeel Shirazi, Al Walters, and Peter Athanas, "Quantitative Analysis of Floating

    Point Arithmetic on FPGA Based Custom Computing Machines."