
    MULTIRATE ADAPTIVE FILTERING

    A PROJECT REPORT

    Submitted by

    NEERAJ LAD 080110111020

    PRITESH PRAJAPATI 080110111044

    KRUPESH SHAH 080110111048

    In fulfillment for the award of the degree

    of

    BACHELOR OF ENGINEERING

    in

    ELECTRONICS AND COMMUNICATION

G. H. Patel College of Engineering and Technology, Vallabh Vidyanagar

Gujarat Technological University, Ahmedabad

DECEMBER 2011


    G. H. Patel College of Engineering and Technology

    Department of Electronics and Communication

    2011

    CERTIFICATE

    Date:

This is to certify that the dissertation entitled MULTIRATE ADAPTIVE FILTERING has been carried out by NEERAJ LAD 080110111020, PRITESH PRAJAPATI 080110111044, and KRUPESH SHAH 080110111048 under my guidance in fulfillment of the degree of Bachelor of Engineering in Electronics and Communication (7th Semester / 8th Semester) of Gujarat Technological University, Ahmedabad, during the academic year 2011-12.

    Guide:

    Mr. Mayank Ardeshana

    Assistant Professor

    G. H. Patel College of Engineering and Technology

    Vallabh Vidyanagar

    Head of the Department


    ACKNOWLEDGEMENT

We take immense pleasure in thanking Dr. Chintan Modi, Head of the EC Department, GCET, for having permitted us to carry out this project work.

We take this opportunity to express our regards and sincere thanks to our advisor and guide, Assistant Prof. Mayank Ardeshana, without whose support this project would not have been possible. His constant encouragement and moral support gave us the motivation to carry out the project successfully.

We are also indebted to Mr. Darshak Bhatt for his valuable and timely guidance.

    The discussions with him helped a lot in developing an in-depth understanding of the topics

    involved.

Finally, yet importantly, we would like to express our heartfelt thanks to our beloved parents for their blessings and to our friends and classmates for their help and wishes for the successful accomplishment of this part of the project.

    ABSTRACT


LIST OF FIGURES

Figure 2.1 Block diagram of a digital signal processing system
Figure 2.2 Block diagram of convolution
Figure 2.3 Relationship between impulse response and frequency response
Figure 2.4 Ideal lowpass filter response
Figure 2.5 MATLAB implementation of lowpass filter response
Figure 2.6 Ideal highpass filter response
Figure 2.7 MATLAB implementation of highpass filter response
Figure 2.8 Ideal bandpass filter response
Figure 2.9 MATLAB implementation of bandpass filter response
Figure 2.10 Ideal bandstop filter characteristic
Figure 2.11 MATLAB implementation of bandstop filter response
Figure 3.1 Basic block diagram of adaptive filter
Figure 3.2 Block diagram of system identification
Figure 3.3 Block diagram of linear prediction
Figure 3.4 Block diagram of noise cancellation
Figure 4.1 Least mean square filter using adaptive algorithm
Figure 4.2 Block diagram of transversal filter
Figure 4.3 Signal flow graph representation of LMS
Figure 4.4 Summary of LMS algorithm using flow chart


TABLE OF CONTENTS

Acknowledgement
Abstract
List of Figures
Table of Contents

Chapter 1: Introduction
1.1 Motivation
1.2 Project specification

Chapter 2: Theoretical Background
2.1 Fundamentals of signals and systems processing
2.2 The filtering problem
2.3 Implementation of filters

Chapter 3: Adaptive Filters
3.1 Introduction to adaptive filters
3.2 Adaptive filters using block diagram
3.3 Approaches to the development of linear adaptive filtering algorithms
3.4 How to choose an adaptive filter
3.5 Applications of adaptive filters

Chapter 4: LMS Algorithm
4.1 Introduction to LMS algorithm
4.2 Problem formulation
4.3 Overview of the least-mean-square algorithm
4.4 Least-mean-square adaptation algorithm
4.5 Summary of the LMS algorithm

Future implementation
Conclusion
References


    domain adaptive filtering. Time domain adaptive filtering will not be replaced entirely as

    it is far more efficient for systems with short impulse responses.

    Chapter 2

    Theoretical Background

    2.1. Fundamentals of signals and systems processing

    2.1.1. Signals and Systems

    A signal is a description of how one parameter varies with another parameter. A

    system is any process that produces an output signal in response to an input signal.

Signal processing is the set of operations performed on signals. The general block diagram of such a process is shown below. [1]

Fig. 2.1 Block diagram of a digital signal processing system

    2.1.2. Linearity

    A system is called linear if it has two mathematical properties: homogeneity

    and additivity. If you can show that a system has both properties, then you have

    proven that the system is linear. Likewise, if you can show that a system doesn't have

    one or both properties, you have proven that it isn't linear. A third property, shift

    invariance, is not a strict requirement for linearity, but it is a mandatory property for

most DSP techniques. When you see the term linear system used in DSP, you should assume it includes shift invariance unless you have reason to believe otherwise. These three properties form the mathematics of how linear system theory is defined and used. [1]

    2.1.3. Convolution

    Convolution is a formal mathematical operation, just as multiplication,

    addition, and integration. Addition takes two numbers and produces a third number,

    while convolution takes two signals and produces a third signal. Convolution is used

in the mathematics of many fields, such as probability and statistics. In linear systems, it is used to describe the relationship between three signals of interest: the input signal, the impulse response, and the output signal.

    An input signal, x[n], enters a linear system with an impulse response, h[n],

    resulting in an output signal, y[n]. In equation form: x[n] * h[n] = y[n]. Expressed in

    words, the input signal convolved with the impulse response is equal to the output

signal. Just as addition is represented by the plus, +, and multiplication by the cross, ×, convolution is represented by the star, *.

The equation that represents the convolution is:

y[n] = x[n] * h[n] = Σk x[k] h[n − k]

Fig. 2.2 Block diagram of convolution


The system output y[n] is the convolution of the input signal x[n] with the system impulse response h[n]. [1]
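As a minimal illustration (the signal values here are arbitrary, chosen only for the demo), the convolution sum can be evaluated in MATLAB with the built-in conv function:

    % Illustrative signals only; any x[n] and h[n] will do.
    x = [1 2 3 4];        % input signal x[n]
    h = [1 0.5 0.25];     % impulse response h[n]
    y = conv(x, h);       % output y[n] = x[n] * h[n]
    disp(y)               % prints 1  2.5  4.25  6  2.75  1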

    2.1.4. Correlation

    Correlation is a mathematical operation that is very similar to convolution.

    Just as with convolution, correlation uses two signals to produce a third signal. This

    third signal is called the cross-correlation of the two input signals. If a signal is

    correlated with itself, the resulting signal is instead called the autocorrelation.

    Correlation is the optimal technique for detecting a known waveform in

    random noise. That is, the peak is higher above the noise using correlation than can be

    produced by any other linear system. Using correlation to detect a known waveform is

    frequently called matched filtering. [1]
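As a short sketch of this idea (xcorr from the Signal Processing Toolbox is assumed, and the signals are arbitrary), cross-correlation can be used to locate a known waveform inside a longer signal:

    % Locate a known waveform a inside a longer signal b.
    a = [1 2 3];                  % known waveform
    b = [0 1 2 3 0 0];            % signal containing a at lag 1
    [r, lags] = xcorr(b, a);      % cross-correlation of b with a
    [~, i] = max(r);              % peak of the cross-correlation
    disp(lags(i))                 % prints 1, the lag where a occurs

Correlating a signal with itself in the same way gives its autocorrelation.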

    2.1.5. The Discrete Fourier Transform (DFT)

    Fourier analysis is a family of mathematical techniques, all based on

    decomposing signals into sinusoids. The Discrete Fourier Transform is the family

    member used with digitized signals. Before we get started on the DFT, let's look for a

    moment at the Fourier transform (FT) and explain why we are not talking about it

instead. The Fourier transform of a continuous-time signal x(t) may be defined as:

X(f) = ∫ x(t) e^(−j2πft) dt,  with the integral taken over all t from −∞ to ∞.

The DFT, on the other hand, replaces the infinite integral with a finite sum:

X[k] = Σ x[n] e^(−j2πkn/N),  with the sum taken over n = 0, 1, ..., N − 1.
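As a sketch of this sum (the test signal is an arbitrary assumption), the DFT can be computed directly in MATLAB and checked against the built-in fft:

    % Direct evaluation of the DFT sum, checked against fft.
    x = [1 2 3 4 0 -1 -2 -3];     % arbitrary test signal x[n]
    N = length(x);
    X = zeros(1, N);
    for k = 0:N-1
        for n = 0:N-1
            X(k+1) = X(k+1) + x(n+1) * exp(-1j*2*pi*k*n/N);
        end
    end
    disp(max(abs(X - fft(x))))    % agrees with fft up to round-off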


    A signal can be either continuous or discrete, and it can be either periodic or

    aperiodic. The combination of these two features generates the four categories:

    1) Aperiodic-Continuous

    This includes, for example, decaying exponentials and the Gaussian curve.

    These signals extend to both positive and negative infinity without repeating in a

    periodic pattern. The Fourier Transform for this type of signal is simply called the

    Fourier Transform. [1]

    2) Periodic-Continuous

Here the examples include: sine waves, square waves, and any waveform that repeats itself in a regular pattern from negative to positive infinity. This version of the Fourier transform is called the Fourier series. [1]

    3) Aperiodic-Discrete

    These signals are only defined at discrete points between positive and negative

    infinity, and do not repeat themselves in a periodic fashion. This type of Fourier

    transform is called the Discrete Time Fourier Transform. [1]

    4) Periodic-Discrete

    These are discrete signals that repeat themselves in a periodic fashion from

    negative to positive infinity. This class of Fourier Transform is sometimes called the

    Discrete Fourier Series, but is most often called the Discrete Fourier Transform. [1]

    2.1.6. Frequency Response of Systems

    Systems are analyzed in the time domain by using convolution. A similar

    analysis can be done in the frequency domain. Using the Fourier transform, every

    input signal can be represented as a group of cosine waves, each with a specified

    amplitude and phase shift. Likewise, the DFT can be used to represent every output

    signal in a similar form. This means that any linear system can be completely

    described by how it changes the amplitude and phase of cosine waves passing through

it. This information is called the system's frequency response. Since both the impulse response and the frequency response contain complete information about the system,


    there must be a one-to-one correspondence between the two. Given one, you can

    calculate the other. The relationship between the impulse response and the frequency

    response is one of the foundations of signal processing: A system's frequency

    response is the Fourier Transform of its impulse response. [1]

    Keeping with standard DSP notation, impulse responses use lower case

    variables, while the corresponding frequency responses are upper case. Since h[ ] is

    the common symbol for the impulse response, H[ ] is used for the frequency response.

    Systems are described in the time domain by convolution, that is: x[n] * h[n] = y[n].

    In the frequency domain, the input spectrum is multiplied by the frequency response,

resulting in the output spectrum. As an equation: X[f] × H[f] = Y[f]. That is,

    convolution in the time domain corresponds to multiplication in the frequency

    domain. And convolution in the frequency domain corresponds to multiplication in

the time domain. The next figure illustrates the relationship between the impulse response and the frequency response:

Fig. 2.3 Relationship between impulse response and frequency response

2.1.7. Inverse DFT

The inverse DFT (the IDFT) is given by:

x[n] = (1/N) Σ X[k] e^(j2πkn/N),  with the sum taken over k = 0, 1, ..., N − 1.
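A quick MATLAB check of this reconstruction property (on an arbitrary random signal) is:

    % The IDFT of the DFT reconstructs the original signal.
    x  = randn(1, 16);            % arbitrary real signal
    X  = fft(x);                  % DFT
    xr = ifft(X);                 % IDFT
    disp(max(abs(x - xr)))        % zero up to round-off error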


    In summary, the DFT is proportional to the set of coefficients of projection

    onto the sinusoidal basis set, and the IDFT is the reconstruction of the original signal

    as a superposition of its sinusoidal projections. This basic architecture extends to all

    linear orthogonal transforms, including wavelets, Fourier transforms, Fourier series,

    the discrete-time Fourier transform (DTFT), and certain short-time Fourier transforms

    (STFT).

    2.1.8. The Fast Fourier Transform (FFT)

    The Fast Fourier Transform is another method for calculating the DFT. While

it produces the same result as the other approaches, it is vastly more efficient, often reducing the computation time by a factor of hundreds. This is the same improvement as

    flying in a jet aircraft versus walking! The FFT requires a few dozen lines of code,

    and it is one of the most complicated algorithms in DSP. [1]

    2.1.8.1. How the FFT works

    By making use of periodicities in the sines that are multiplied to do the

    transforms, the FFT greatly reduces the amount of calculation required. Here's a little

    overview. Functionally, the FFT decomposes the set of data to be transformed into a

    series of smaller data sets to be transformed. Then, it decomposes those smaller sets

into even smaller sets. At each stage of processing, the results of the previous stage are combined in a special way. Finally, it calculates the DFT of each small data set. For

    example, an FFT of size 32 is broken into 2 FFT's of size 16, which are broken into 4

    FFT's of size 8, which are broken into 8 FFT's of size 4, which are broken into 16

    FFT's of size 2. Calculating a DFT of size 2 is trivial.

    Here is a slightly more rigorous explanation: It turns out that it is possible to

    take the DFT of the first N/2 points and combine them in a special way with the DFT

    of the second N/2 points to produce a single N-point DFT. Each of these N/2-point

    DFTs can be calculated using smaller DFTs in the same way. One (radix-2) FFT

    begins, therefore, by calculating N/2 2-point DFTs. These are combined to form N/4

    4-point DFTs. The next stage produces N/8 8-point DFTs, and so on, until a single

    N-point DFT is produced. [1]
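The following MATLAB function (a hypothetical teaching sketch, not an optimized implementation) mirrors this radix-2 decimation-in-time recursion; it assumes the input is a row vector whose length is a power of two:

    function X = myfft(x)
    % MYFFT Recursive radix-2 decimation-in-time FFT (teaching sketch).
    % x must be a row vector whose length is a power of two. Save as myfft.m.
        N = length(x);
        if N == 1
            X = x;                           % a 1-point DFT is the sample itself
        else
            Xe = myfft(x(1:2:N));            % DFT of the even-indexed samples
            Xo = myfft(x(2:2:N));            % DFT of the odd-indexed samples
            W  = exp(-2j*pi*(0:N/2-1)/N);    % twiddle factors
            X  = [Xe + W.*Xo, Xe - W.*Xo];   % butterfly combination
        end
    end

Calling myfft(x) on such a vector matches fft(x) up to round-off.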


    2.1.8.2. Efficiency of the FFT

The DFT takes N^2 operations for N points. Since at any stage the computation required to combine smaller DFTs into larger DFTs is proportional to N, and there are log2(N) stages (for radix 2), the total computation is proportional to N log2(N). Therefore, the ratio between a DFT computation and an FFT computation for the same N is proportional to N / log2(N). In cases where N is small this ratio is not very significant, but when N becomes large, this ratio gets very large. (Every time you double N, the numerator doubles, but the denominator only increases by 1.) For example, for N = 1024 the DFT takes on the order of 1,000,000 operations while the FFT takes on the order of 10,000, a speedup of roughly 100 times.

    2.1.8.3. Some Terminology used in FFT

    The radix is the size of FFT decomposition. For single-radix FFT's, the

    transform size must be a power of the radix. FFT's can be decomposed using DFT's of

    even and odd points, which is called a Decimation-In-Time (DIT) FFT, or they can be

    decomposed using a first-half / second-half approach, which is called a Decimation-

    In-Frequency (DIF) FFT. Generally, the user does not need to worry which type is

    being used.

    2.1.8.4. Implementation of FFT

Except as a learning exercise, you generally will never have to implement the FFT yourself. Many good FFT implementations are available in C, FORTRAN, and other languages, and microprocessor manufacturers generally provide free optimized FFT implementations in their processors' assembly code. Therefore, it is not so important to understand how the FFT really works as it is to understand how to use it. [1]

    2.2 The filtering problem

Filtering is a process of changing a signal's spectral content. The change is usually the reduction of certain frequencies in the signal while allowing the other frequencies to pass.

    A fundamental aspect of signal processing is filtering. Filtering involves the

    manipulation of the spectrum of a signal by passing or blocking certain portions of the

    spectrum, depending on the frequency of those portions. Filters are designed according to

    what kind of manipulation of the signal is required for a particular application. Digital filters


    are implemented using three fundamental building blocks: an adder, a multiplier, and a delay

    element.

    The design process of a digital filter is long and tedious if done by hand. With the aid

    of computer programs performing filter design algorithms, designing and optimizing filters

can be done relatively quickly. We will use MATLAB, a mathematical software package, to design, manipulate, and analyze digital filters. The design options in MATLAB allow the user to write filter-design code that calls built-in functions.

    The term filter is often used to describe a device in the form of a piece of physical

    hardware or software that is applied to a set of noisy data in order to extract information

    about a prescribed quantity of interest. The noise may arise from a variety of sources. For

example, the data may have been derived by means of noisy sensors or may represent a

    useful signal component that has been corrupted by transmission through a communication

    channel. In any event, we may use a filter to perform three basic information-processing

    tasks: [2]

    1. Filtering is an operation that involves the extraction of information about a quantity

    of interest at time t by using data measured up to and including time t.

    2. Smoothing is different from filtering, in that information about the quantity of

    interest need not be available at time t, and data measured later than time t can be used in

    obtaining this information. This means that in the case of smoothing there is a delay in

    producing the result of interest. Since in the smoothing process we are able to use data

    obtained not only up to time t but also data obtained after time t, we would expect smoothing

    to be more accurate in some sense than filtering.

3. Prediction is the forecasting side of information processing. The aim here is to derive information about what the quantity of interest will be like at some time t + τ in the future, for some τ > 0, by using data measured up to and including time t.

    We may classify filters into linear and nonlinear. A filter is said to be linear if the

    filtered, smoothed, or predicted quantity at the output of the device is a linear function of the

    observations applied to the filter input. Otherwise, the filter is nonlinear.


    Filters used for direct filtering can be either Fixed or Adaptive. [3]

    1. Fixed filters - The design of fixed filters requires a priori knowledge of

    both the signal and the noise, i.e. if we know the signal and noise

beforehand, we can design a filter that passes frequencies contained in the signal and rejects the frequency band occupied by the noise. Filters can be divided into several types: lowpass, highpass, bandpass, and bandstop.

    2. Adaptive filters - Adaptive filters, on the other hand, have the ability to

    adjust their impulse response to filter out the correlated signal in the input.

    They require little or no a priori knowledge of the signal and noise

    characteristics. (If the signal is narrowband and noise broadband, which is

    usually the case, or vice versa, no a priori information is needed; otherwise

    they require a signal (desired response) that is correlated in some sense to

    the signal to be estimated.) Moreover adaptive filters have the capability of

    adaptively tracking the signal under non-stationary conditions.

    2.3 Implementation of filters

2.3.1 Lowpass filter:

The ideal lowpass filter response is shown, along with the practical filter, which has a cutoff frequency of 5 kHz and uses a sampling frequency of 30 kHz.
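A minimal MATLAB sketch of such a design (assuming the Signal Processing Toolbox functions fir1 and freqz; the filter order of 64 is an arbitrary choice) is:

    fs = 30000;                 % sampling frequency: 30 kHz
    fc = 5000;                  % cutoff frequency: 5 kHz
    b  = fir1(64, fc/(fs/2));   % 64th-order lowpass FIR, normalized cutoff
    freqz(b, 1, 1024, fs);      % plot magnitude (dB) and phase (degrees)

Passing 'high' as a third argument to fir1 gives the corresponding highpass design, and a two-element vector of normalized cutoffs gives the bandpass case (or the bandstop case with 'stop').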


Fig. 2.4 Ideal lowpass filter response

Fig. 2.5 MATLAB implementation of lowpass filter response (x axis: frequency in Hz; y axes: magnitude in dB and phase in degrees; cutoff frequency 5 kHz)

2.3.2 Highpass filter:


The ideal highpass filter response is shown, along with the practical filter, which has a cutoff frequency of 5 kHz and uses a sampling frequency of 30 kHz.

Fig. 2.6 Ideal highpass filter response

Fig. 2.7 MATLAB implementation of highpass filter response (x axis: frequency in Hz; y axes: magnitude in dB and phase in degrees; cutoff frequency 5 kHz)

2.3.3 Bandpass filter:


The ideal bandpass filter response with its allowable bandwidth is shown, along with the practical filter, which has a lower cutoff frequency of 1200 Hz and an upper cutoff frequency of 1800 Hz and uses a sampling frequency of 8 kHz.

Fig. 2.8 Ideal bandpass filter response

Fig. 2.9 MATLAB implementation of bandpass filter response (x axis: frequency in Hz; cutoff frequencies 1200 Hz and 1800 Hz)


    2.3.4 Bandstop filter:

The ideal bandstop filter response is shown, along with the practical filter, which has a lower cutoff frequency of 1200 Hz and an upper cutoff frequency of 1800 Hz and uses a sampling frequency of 8 kHz.

Fig. 2.10 Ideal bandstop filter characteristic

Fig. 2.11 MATLAB implementation of bandstop filter response (x axis: frequency in Hz; y axes: magnitude in dB and phase in degrees)


    Chapter 3

    Adaptive filters

    3.1 Introduction

In normal filters, a straightforward approach is used in which the "estimate and plug" procedure is implemented. This is a two-stage process whereby the filter first "estimates" the

    statistical parameters of the relevant signals and then "plugs" the results so obtained into a

    nonrecursive formula for computing the filter parameters. For real-time operation, this

    procedure has the disadvantage of requiring excessively elaborate and costly hardware. A

    more efficient method is to use an adaptive filter.

    By such a device we mean one that is self-designing in that the adaptive filter relies

    for its operation on a recursive algorithm, which makes it possible for the filter to perform

    satisfactorily in an environment where complete knowledge of the relevant signal

    characteristics is not available. The algorithm starts from some predetermined set of initial

    conditions, representing whatever we know about the environment. Yet, in a stationary

    environment, we find that after successive iterations of the algorithm it converges to the

    optimum Wiener solution in some statistical sense. In a nonstationary environment, the


    algorithm offers a tracking capability, in that it can track time variations in the statistics of the

    input data, provided that the variations are sufficiently slow.

    As a direct consequence of the application of a recursive algorithm whereby the

    parameters of an adaptive filter are updated from iteration to iteration, the parameters become

    data dependent. This, therefore, means that an adaptive filter is in reality a nonlinear device,

    in the sense that it does not obey the principle of superposition. Notwithstanding this

    property, adaptive filters are commonly classified as linear or nonlinear. An adaptive filter is

    said to be linear if the estimate of a quantity of interest is computed adaptively (at the output

    of the filter) as a linear combination of the available set of observations applied to the filter

    input. Otherwise, the adaptive filter is said to be nonlinear.

    A wide variety of recursive algorithms has been developed in the literature for the

    operation of linear adaptive filters. In the final analysis, the choice of one algorithm over

    another is determined by one or more of the following factors:

    Rate of convergence: This is defined as the number of iterations required for the

    algorithm, in response to stationary inputs, to converge "close enough" to the

    optimum Wiener solution in the mean-square sense. A fast rate of convergence allows

    the algorithm to adapt rapidly to a stationary environment of unknown statistics.

    Misadjustment: For an algorithm of interest, this parameter provides a quantitative

    measure of the amount by which the final value of the mean-squared error, averaged

    over an ensemble of adaptive filters, deviates from the minimum mean-squared error

    that is produced by the Wiener filter.

    Tracking: When an adaptive filtering algorithm operates in a nonstationary

    environment, the algorithm is required to track statistical variations in the

    environment. The tracking performance of the algorithm, however, is influenced by

two contradictory features: (a) the rate of convergence and (b) the steady-state fluctuation due to algorithm noise.

    Robustness: For an adaptive filter to be robust, small disturbances (i.e., disturbances

    with small energy) can only result in small estimation errors. The disturbances may

    arise from a variety of factors, internal or external to the filter.

    Computational requirements. Here the issues of concern include (a) the number of

    operations (i.e., multiplications, divisions, and additions/subtractions) required to

make one complete iteration of the algorithm, (b) the size of the memory locations required to store the data and the program, and (c) the investment required to program the algorithm on a computer.

    Structure: This refers to the structure of information flow in the algorithm,

determining the manner in which it is implemented in hardware form. For example, an algorithm whose structure exhibits high modularity, parallelism, or concurrency is

    well suited for implementation using very large-scale integration (VLSI).

    Numerical properties. When an algorithm is implemented numerically, inaccuracies

    are produced due to quantization errors. The quantization errors are due to analog-to-

    digital conversion of the input data and digital representation of internal calculations.

    Ordinarily, it is the latter source of quantization errors that poses a serious design

    problem. In particular, there are two basic issues of concern: numerical stability and

    numerical accuracy. Numerical stability is an inherent characteristic of an adaptive

    filtering algorithm. Numerical accuracy, on the other hand, is determined by the

    number of bits (i.e., binary digits) used in the numerical representation of data

    samples and filter coefficients. An adaptive filtering algorithm is said to be

    numerically robust when it is insensitive to variations in the wordlength used in its

    digital implementation. [2]

    3.2 Adaptive filters using block diagram

    An adaptive filter is a computational device that attempts to model the relationship

    between two signals in real time in an iterative manner. Adaptive filters are often realized

    either as a set of program instructions running on an arithmetical processing device such as a

    microprocessor or DSP chip, or as a set of logic operations implemented in a field-

    programmable gate array (FPGA) or in a semicustom or custom VLSI integrated circuit.

    However, ignoring any errors introduced by numerical precision effects in these

    implementations, the fundamental operation of an adaptive filter can be characterized

    independently of the specific physical realization that it takes. For this reason, we shall focus

    on the mathematical forms of adaptive filters as opposed to their specific realizations in

software or hardware. Descriptions of adaptive filters as implemented on DSP chips and on dedicated integrated circuits can be found in the literature.

    An adaptive filter is defined by four aspects:

    1. The signals being processed by the filter.


    2. The structure that defines how the output signal of the filter is computed

    from its input signal.

3. The parameters within this structure that can be iteratively changed to alter the filter's input-output relationship.

    4. The adaptive algorithm that describes how the parameters are adjusted from

    one time instant to the next.

    By choosing a particular adaptive filter structure, one specifies the number and type of

    parameters that can be adjusted. The adaptive algorithm used to update the parameter values

    of the system can take on a myriad of forms and is often derived as a form of optimization

    procedure that minimizes an error criterion that is useful for the task at hand.

    In this section, we present the general adaptive filtering problem and introduce the

    mathematical notation for representing the form and operation of the adaptive filter. We then

    discuss several different structures that have been proven to be useful in practical

    applications. We provide an overview of the many and varied applications in which adaptive

    filters have been successfully used. Finally, we give a simple derivation of the least-mean-

    square (LMS) algorithm, which is perhaps the most popular method for adjusting the

coefficients of an adaptive filter, and we discuss some of this algorithm's properties.

    As for the mathematical notation used throughout this section, all quantities are

    assumed to be real-valued. Scalar and vector quantities shall be indicated by lowercase (e.g.,

    x) and uppercase-bold (e.g., X) letters, respectively. We represent scalar and vector

    sequences or signals as x(n) and X(n), respectively, where n denotes the discrete time or

discrete spatial index, depending on the application. [2]


    Figure shows a block diagram in which a sample from a digital input signal x(n) is fed

    into a device, called an adaptive filter, that computes a corresponding output signal sample

    y(n) at time n. For the moment, the structure of the adaptive filter is not important, except for

    the fact that it contains adjustable parameters whose values affect how y(n) is computed. The

    output signal is compared to a second signal d(n), called the desired response signal, by

subtracting the two samples at time n. This difference signal, given by e(n) = d(n) − y(n), is known as the error signal. The error signal is fed into a procedure which alters or adapts the parameters of the filter from time n to time (n + 1) in a well-defined manner. This process of

    adaptation is represented by the oblique arrow that pierces the adaptive filter block in the

    figure. As the time index n is incremented, it is hoped that the output of the adaptive filter

    becomes a better and better match to the desired response signal through this adaptation

    process, such that the magnitude of e(n) decreases over time. In this context, what is meant

    by better is specified by the form of the adaptive algorithm used to adjust the parameters of

    the adaptive filter.

    In the adaptive filtering task, adaptation refers to the method by which the parameters

    of the system are changed from time index n to time index (n + 1). The number and types of

    parameters within this system depend on the computational structure chosen for the system.

    [2]

Fig. 3.1 Basic block diagram of adaptive filter

3.3 Approaches to the development of linear adaptive filtering algorithms


    3.4 How to Choose an Adaptive Filter

Given the wide variety of adaptive filters available, a choice has to be made for the application of interest. The filter used should be based on the type of project; computational cost, performance, and robustness are the issues to be considered while designing adaptive filters.

    The use of computer simulation provides a good first step in undertaking a detailed

    investigation of these issues. We begin by using the LMS algorithm as an adaptive filtering

    tool for the project. The LMS algorithm is relatively simple to implement. Yet it is powerful

    enough to evaluate the practical benefits that may result from the application of adaptivity to

    the problem at hand. Moreover, it provides a practical frame of reference for assessing any

    further improvement that may be attained through the use of more sophisticated adaptive

    filtering algorithms.

    Practical applications of adaptive filtering are very diverse, with each application

    having peculiarities of its own. The solution for one application may not be suitable for

    another. Nevertheless, to be successful we have to develop a physical understanding of the

    environment in which the filter has to operate and thereby relate to the realities of the

    application of interest.

    3.5 Applications of adaptive filters

    1) System Identification

This is used to identify the response of a system. The adaptive filter tries to

    determine the best linear model that describes the input-output relationship of the unknown

    system.


Fig. 3.2 Block diagram of system identification

Let d̂(n) represent the output of the unknown system with x(n) as its input, and let η(n) represent measurement noise. Then the desired response signal in this model is d(n) = d̂(n) + η(n). Here, the task of the adaptive filter is to accurately represent the signal d̂(n) at its output. If y(n) = d̂(n), then the adaptive filter has accurately modeled or identified the portion of the unknown system that is driven by x(n).

    Since the model typically chosen for the adaptive filter is a linear filter, the

    practical goal of the adaptive filter is to determine the best linear model that describes the

    input-output relationship of the unknown system. Such a procedure makes the most sense

    when the unknown system is also a linear model of the same structure as the adaptive filter,

as it is possible that y(n) = d̂(n) for some set of adaptive filter parameters.
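As a sketch of this task (the "unknown" system, noise level, filter length, and step size below are arbitrary assumptions, and the weight update is the LMS rule described in Chapter 4):

    % System identification with an adaptive FIR filter (LMS update).
    rng(0);
    h  = [0.8 -0.4 0.2];                       % "unknown" system (assumed)
    N  = 5000;  M = 3;  mu = 0.01;             % samples, taps, step size
    x  = randn(N, 1);                          % input signal x(n)
    d  = filter(h, 1, x) + 0.01*randn(N, 1);   % noisy desired response d(n)
    w  = zeros(M, 1);                          % adaptive filter weights
    for n = M:N
        u = x(n:-1:n-M+1);        % current tap-input vector
        y = w' * u;               % filter output y(n)
        e = d(n) - y;             % error e(n) = d(n) - y(n)
        w = w + mu * u * e;       % LMS weight update
    end
    disp(w')                      % converges close to h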

    The system identification task is at the heart of numerous adaptive filtering

    applications. We list some of its applications here.

    Channel Identification

    In communication systems, useful information is transmitted from one point to

    another across a medium such as an electrical wire, an optical fiber, or a wireless radio link.

    Nonidealities of the transmission medium or channel distort the fidelity of the transmitted

    signals, making the deciphering of the received information difficult. In such cases, an

adaptive filter can be used to model the effects of the channel's intersymbol interference (ISI) for purposes of deciphering

    the received information in an optimal manner. In this problem scenario, the transmitter sends

    to the receiver a sample sequence x(n) that is known to both the transmitter and receiver. The



receiver then attempts to model the received signal d(n) using an adaptive filter whose input is the known transmitted sequence x(n). After a suitable period of adaptation, the parameters of the adaptive filter in W(n) are fixed and then used in a procedure to decode future signals transmitted across the channel.

    Plant Identification

    In many control tasks, knowledge of the transfer function of a linear plant is

    required by the physical controller so that a suitable control signal can be calculated and

    applied. In such cases, we can characterize the transfer function of the plant by exciting it

    with a known signal x(n) and then attempting to match the output of the plant d(n) with a

    linear adaptive filter. After a suitable period of adaptation, the system has been adequately

    modeled, and the resulting adaptive filter coefficients in W(n) can be used in a control

    scheme to enable the overall closed-loop system to behave in the desired manner. [2]

    2) Linear Prediction

In this system, the input signal x(n) is derived from the desired response signal as x(n) = d(n − Δ), where Δ is an integer value of delay. In effect, the input signal serves as the desired response signal, and for this reason it is always available. In such cases, the linear adaptive filter attempts to predict future values of the input signal using past samples, giving rise to the name linear prediction for this task.

If an estimate of the signal x(n + Δ) at time n is desired, a copy of the adaptive filter whose input is the current sample x(n) can be employed to compute this quantity. However, linear prediction has a number of uses besides the obvious application of forecasting future events. [2]

Fig. 3.3 Block diagram of linear prediction


    3) Noise cancellation

A signal of interest is linearly mixed with other extraneous noises that introduce unacceptable errors in the measurements. Noise cancellation is thus used to cancel this interference from a primary signal.

Adaptive noise cancelling has been used for several applications. One of the first was a medical application that enabled the electroencephalogram (EEG) of the fetal heartbeat of an unborn child to be cleanly extracted from the much stronger interfering EEG of the maternal heartbeat signal. [2]

Fig. 3.4 Block diagram of noise cancellation
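A sketch of this setup (the sinusoidal "signal of interest," the noise path, and the step size are assumptions made for the demo; the weight update is again the LMS rule of Chapter 4):

    % Adaptive noise cancellation sketch: primary = signal + noise,
    % reference = correlated noise; the error output approximates the signal.
    rng(0);
    N  = 5000;  M = 8;  mu = 0.005;
    s  = sin(2*pi*0.01*(1:N)');            % signal of interest (assumed)
    v  = randn(N, 1);                      % reference noise source
    primary = s + filter([0.6 0.3], 1, v); % noise reaches primary via a path
    w  = zeros(M, 1);
    e  = zeros(N, 1);
    for n = M:N
        u = v(n:-1:n-M+1);                 % reference noise taps
        y = w' * u;                        % estimate of the noise in primary
        e(n) = primary(n) - y;             % error = cleaned signal estimate
        w = w + mu * u * e(n);             % LMS update
    end
    % e approximates s once the filter has converged.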

4) Echo cancellation

    The term echo cancellation is used in telephony to describe the process of

    removing echo from a voice communication in order to improve voice quality on

    a telephone call. In addition to improving subjective quality, this process increases the

    capacity achieved through silence suppression by preventing echo from traveling

    across a network.

    Echo cancellation involves first recognizing the originally transmitted signal that

    re-appears, with some delay, in the transmitted or received signal. Once the echo is

    recognized, it can be removed by 'subtracting' it from the transmitted or received

signal. This technique is generally implemented using a digital signal processor (DSP), but can also be implemented in software. Echo cancellation is done using either echo suppressors or echo cancellers, or in some cases both. [2]



    Chapter 4

    LMS algorithm

    4.1 Introduction to LMS algorithm

Least mean squares (LMS) algorithms are a class of adaptive filters used to mimic a desired filter by finding the filter coefficients that produce the least mean square of the error signal (the difference between the desired and the actual signal). It was invented in 1960 by Stanford University professor Bernard Widrow and his first Ph.D. student, Ted Hoff.

    The LMS algorithm is an important member of the family of stochastic gradient

    algorithms. The term "stochastic gradient" is intended to distinguish the LMS algorithm from

the method of steepest descent, which uses a deterministic gradient in a recursive computation of the Wiener filter for stochastic inputs. A significant feature of the LMS algorithm is its

    simplicity.

    Moreover, it does not require measurements of the pertinent correlation functions, nor

    does it require matrix inversion. Indeed, it is the simplicity of the LMS algorithm that has

    made it the standard against which other adaptive filtering algorithms are benchmarked. [6]

    4.2 Problem formulation

Fig. 4.1 Least mean square filter using adaptive algorithm

Most linear adaptive filtering problems can be formulated using the block diagram above. That is, an unknown system h(n) is to be identified, and the adaptive filter attempts to adapt the filter ĥ(n) to make it as close as possible to h(n), while using only the observable signals x(n), d(n), and e(n); the signals y(n), v(n), and h(n) are not directly observable. Its solution is closely related to the Wiener filter.

Definition of symbols:

d(n) = y(n) + ν(n), where ν(n) denotes additive observation noise.

Idea

The idea behind LMS filters is to use steepest descent to find the filter weights which minimize a cost function. [6]

    4.3 Overview of the least-mean-square algorithm

    The least-mean-square (LMS) algorithm is a linear adaptive filtering algorithm that

    consists of two basic processes:

    1. A filtering process, which involves (a) computing the output of a transversal filter

    produced by a set of tap inputs, and (b) generating an estimation error by comparing this

    output to a desired response.

    2. An adaptive process, which involves the automatic adjustment of the tap weights

    of the filter in accordance with the estimation error.

    Thus, the combination of these two processes working together constitutes a feedback

    loop around the LMS algorithm, as illustrated in the block diagram of Figure. First, we have a

    transversal filter, around which the LMS algorithm is built; this component is responsible for

    performing the filtering process. Second, we have a mechanism for performing the adaptive

control process on the tap weights of the transversal filter, hence the designation "adaptive weight-control mechanism" in Figure (a). Details of the transversal filter component are presented in Figure (b). The tap inputs u(n), u(n − 1), ..., u(n − M + 1) form the elements of the M-by-1 tap-input vector u(n), where M − 1 is the number of delay elements; these tap inputs span a multidimensional space. Correspondingly, the tap weights w0(n), w1(n), ..., wM−1(n) form the elements of the M-by-1 tap-weight vector W(n). The

    value computed for the tap-weight vector W(n) using the LMS algorithm represents an


    estimate whose expected value approaches the Wiener solution W0 (for a wide-sense

    stationary environment) as the number of iterations n approaches infinity.

    During the filtering process the desired response d(n) is supplied for processing,

alongside the tap-input vector u(n). Given this input, the transversal filter produces an output y(n) that is used as an estimate of the desired response d(n). Accordingly, we may define an estimation error e(n) as the difference between the desired response and the actual filter

output, as indicated in the output end of Figure (b). The estimation error e(n) and the tap-input vector u(n) are applied to the control mechanism, and the feedback loop around the tap weights is thereby closed.

Fig. 4.2 Block diagram of transversal filter

A scalar version of the inner product of the estimation error e(n) and the tap input u(n − k) is computed for k = 0, 1, 2, ..., M − 2, M − 1. The result so obtained defines the correction δwk(n) applied to the tap weight wk(n) at iteration n + 1. The scaling factor used in this computation is denoted by μ and is called the step-size parameter.

The LMS algorithm uses the product u(n − k)e*(n) as an estimate of element k in the gradient vector ∇J(n) that characterizes the method of steepest descent. In other words, the expectation operator is omitted from all the paths. Accordingly, the recursive computation of each tap weight in the LMS algorithm suffers from gradient noise.

    Earlier we stated that the LMS algorithm involves feedback in its operation, which

    therefore raises the related issue of stability. In this context, a meaningful criterion is to

    require that

J(n) → J(∞) as n → ∞,

where J(n) is the mean-squared error produced by the LMS algorithm at time n, and its final value J(∞) is a constant. An algorithm that satisfies this requirement is said to be convergent in the mean square. For the LMS algorithm to satisfy this criterion, the step-size parameter μ has to satisfy a certain condition related to the eigenstructure of the correlation matrix of the tap inputs.

The difference between the final value J(∞) and the minimum value Jmin attained by the Wiener solution is called the excess mean-squared error Jex(∞). This difference represents the price paid for using the adaptive (stochastic) mechanism to control the tap weights in the LMS algorithm in place of a deterministic approach as in the method of steepest descent. The ratio of Jex(∞) to Jmin is called the misadjustment, which is a measure of how far the steady-state solution computed by the LMS algorithm is away from the Wiener solution. It is important to realize, however, that the misadjustment is under the designer's control. In particular, the feedback loop acting around the tap weights behaves like a low-pass filter whose "average" time constant is inversely proportional to the step-size parameter μ. Hence, by assigning a small value to μ, the adaptive process is made to progress slowly, and the effects of gradient noise on the tap weights are largely filtered out. This, in turn, has the effect of reducing the misadjustment.


Therefore, the LMS algorithm is simple in implementation, yet capable of delivering high performance by adapting to its external environment. To do so, however, we have to pay particular attention to the choice of a suitable value for the step-size parameter μ. [6]

    4.4 Least-mean-square adaptation algorithm

If it were possible to make exact measurements of the gradient vector ∇J(n) at each iteration n, and if the step-size parameter μ is suitably chosen, then the tap-weight vector computed by using the steepest-descent algorithm would indeed converge to the optimum Wiener solution. In reality, however, exact measurements of the gradient vector are not possible, since this would require prior knowledge of both the correlation matrix R of the tap inputs and the cross-correlation vector p between the tap inputs and the desired response. Consequently, the gradient vector must be estimated from the available data.

To develop an estimate of the gradient vector ∇J(n), the most obvious strategy is to substitute estimates of the correlation matrix R and the cross-correlation vector p in the formula:

∇J(n) = −2p + 2R W(n)

The simplest choice of estimators for R and p is to use instantaneous estimates that are based on sample values of the tap-input vector and desired response, as defined by, respectively,

R̂(n) = u(n)u^H(n)

and

p̂(n) = u(n)d*(n).

Correspondingly, the instantaneous estimate of the gradient vector is

∇Ĵ(n) = −2u(n)d*(n) + 2u(n)u^H(n)Ŵ(n)

Generally speaking, this estimate is biased because the tap-weight estimate vector Ŵ(n) is a random vector that depends on the tap-input vector u(n). Note that the estimate ∇Ĵ(n) may also be viewed as the gradient operator applied to the instantaneous squared error |e(n)|².

Substituting the estimate of the above equation for the gradient vector ∇J(n) in the steepest-descent algorithm, we get a new recursive relation for updating the tap-weight vector:

Ŵ(n + 1) = Ŵ(n) + μ u(n)[d*(n) − u^H(n)Ŵ(n)]


Here we have used a hat over the symbol for the tap-weight vector to distinguish it from the value obtained by using the steepest-descent algorithm. Equivalently, we may write the result in the form of three basic relations as follows:

1. Filter output: y(n) = Ŵ^H(n) u(n)

2. Estimation error: e(n) = d(n) − y(n)

3. Tap-weight adaptation: Ŵ(n + 1) = Ŵ(n) + μ u(n) e*(n)

Equations (1) and (2) define the estimation error e(n), the computation of which is based on the current estimate of the tap-weight vector, Ŵ(n). Note also that the second term, μu(n)e*(n), on the right-hand side of Eq. (3) represents the correction that is applied to the current estimate of the tap-weight vector, Ŵ(n). The iterative procedure is started with an initial guess Ŵ(0).
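For real-valued signals (where the conjugates drop out), the three relations translate directly into a few lines of MATLAB; the tap-input vector u, desired sample d, weight vector w, and step size mu are assumed to be given:

    % One LMS iteration for real-valued data (u, d, w, mu assumed given).
    y = w' * u;               % (1) filter output
    e = d - y;                % (2) estimation error
    w = w + mu * u * e;       % (3) tap-weight adaptation

Repeating these three lines over the data record, as in the system identification sketch of Chapter 3, constitutes the complete algorithm.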

The algorithm described by Equations (1) to (3) is the complex form of the adaptive least-mean-square (LMS) algorithm. At each iteration or time update, it requires knowledge of the most recent values u(n), d(n), and Ŵ(n). The LMS algorithm is a member of the family of stochastic gradient algorithms. In particular, when the LMS algorithm operates on stochastic inputs, the allowed set of directions along which we "step" from one iteration cycle to the next is quite random and cannot therefore be thought of as being true gradient directions.


Fig. 4.3 Signal flow graph representation of LMS algorithm

The above figure shows a signal-flow graph representation of the LMS algorithm in the

    form of a feedback model. This model bears a close resemblance to the feedback model of

    the steepest-descent algorithm. The signal-flow graph of above figure clearly illustrates the

    simplicity of the LMS algorithm. In particular, we see from this figure that the LMS

    algorithm requires only 2M + 1 complex multiplications and 2M complex additions per

    iteration, where M is the number of tap weights used in the adaptive transversal filter. In

    other words, the computational complexity of the LMS algorithm is O(M).

The instantaneous estimates of R and p have relatively large variances. At first sight,

    it may therefore seem that the LMS algorithm is incapable of good performance since the

    algorithm uses these instantaneous estimates. However, we must remember that the LMS

    algorithm is recursive in nature, with the result that the algorithm itself effectively averages

    these estimates, in some sense, during the course of adaptation. [6]



    FUTURE IMPLEMENTATION

We are going to implement the adaptive filters in MATLAB, first processing sound and then moving on to images.

Then we are going to implement them on hardware (the TI DSK6713 DSP kit).

We will try to apply this adaptive filter to a real-time application.

    CONCLUSION

Through this project we learnt various concepts of digital signal processing. We also learnt in detail about adaptive filters and their applications. Adaptive filters are filters that adapt themselves according to the situation at hand.

Adaptive filters use the LMS algorithm for error minimization. The LMS algorithm is simple to understand and easy to implement.

Thus we have satisfactorily completed the first part of our project.


    REFERENCES

Books:

1. John G. Proakis and Dimitris G. Manolakis, Digital Signal Processing: Principles, Algorithms, and Applications, 3rd ed., Prentice Hall, Upper Saddle River, NJ, 1996.

2. Vijay K. Madisetti and Douglas B. Williams (eds.), "Introduction to Adaptive Filters," in The Digital Signal Processing Handbook, CRC Press, Boca Raton, 1999.

3. Alan V. Oppenheim, Ronald W. Schafer, and John R. Buck, Discrete-Time Signal Processing, 2nd ed., Pearson Education, 2003.

4. IEEE Transactions on Signal Processing, Vol. 39, No. 10, 1991.

5. E. Ifeachor and B. Jervis, Digital Signal Processing: A Practical Approach, 2nd ed., Prentice Hall, 1998.

6. Simon Haykin, Adaptive Filter Theory, 3rd ed.

Papers:

1. John J. Shynk, "Frequency-Domain and Multirate Adaptive Filtering," IEEE Signal Processing Magazine.

2. Christian Feldbauer, Franz Pernkopf, and Erhard Rank, "Adaptive Filters: A Tutorial," Signal Processing and Speech Communication Laboratory, Inffeldgasse 16c.