Applied Adaptive Signal Processing Report
A MATLAB Simulation Software for Key
Adaptive Algorithms and Applications
Project 2
Written by
Group 18
Main Uddin-Al-Hasan, 8901011836
M.Sc. in Electrical Engineering with emphasis on Signal Processing
Blekinge Institute of Technology, Karlskrona, Sweden
Abstract
Adaptive signal processing algorithms are very useful in Active Noise Cancellation (ANC), Adaptive Line Enhancement (ALE) and System Identification (SI). Therefore, a MATLAB software package was developed for the simulation of thirty adaptive signal processing algorithms pre-implemented in MATLAB, based on the Least-Mean-Square (LMS), Recursive-Least-Square (RLS), Affine Projection (AP), Frequency-Domain (FD) and Lattice (L) families; however, only the most common variants of the LMS-based algorithms are studied theoretically in this project. The developed software reduces simulation time by assembling all of the mentioned adaptive algorithms into one software interface.
The LMS-based algorithms are the main subject of the project, of which LMS, NLMS and LLMS are studied with emphasis. These algorithms are examined with different step sizes and filter orders. The benefit of stochastic LMS algorithms compared to least-squares adaptive algorithms is also studied, as are the learning curves (LC) of the adaptive algorithms in relation to their step size and filter order. The learning-curve parameters (convergence, local convergence, global convergence and Steady State Error (SSE)) showed exactly the adaptive learning behaviour predicted by adaptive filter theory. The behaviour of the learning curve and the graphical presentation of the LC and its different parameters are studied, as are the criteria for assessing adaptive algorithm performance.
The developed MATLAB software is written programmatically and has GUI features such as a popup menu, algorithm parameter input, signal data input, loaded-data display, and display of the filtered signal and learning curve. The software can store processed data at run-time; the data can later be re-plotted in a new figure window and played back to check the audio quality of the filtered signals. The implemented algorithms can be tested with default parameters, and a slider control is included to update algorithm parameters easily.
Acknowledgement
I would like to thank all the scientists and professors, especially Simon Haykin, B. Farhang-Boroujeny, John G. Proakis, Dimitris G. Manolakis and Monson H. Hayes, whose books explain the complex concepts of adaptive signal processing in an accessible way. Moreover, I would like to thank my supervisor Irina Gertsovich at BTH for her precise information and supervision, which helped me to complete the project. Furthermore, I would like to thank my family for their continuous support and for the inspiration to complete my education.
Contents

Abstract
Acknowledgement
List of Figures
List of Acronyms
Chapter 1  Introduction
    1.1 Project Scope
    1.2 Problem Formulation and Project Outline
Chapter 2  Research Methodology and Requirement Analysis
    2.1 Functional Requirements
    2.2 Non-functional Requirements
Chapter 3  Adaptive Signal Processing Filters and Applications
    3.1 Structure of Adaptive Filter
        3.1.1 Spatial Structure or Block Diagram
        3.1.2 Functional Structure
    3.2 Adaptive Filter Performance
        3.2.1 Learning Curve
        3.2.2 Convergence Speed
        3.2.3 Steady State Error (SSE)
    3.3 Adaptive Filter Groups
    3.4 Application Classes
    3.5 Difference between MSE and LSE
Chapter 4  Literature Review
Chapter 5  Least-Mean-Square Adaptive Filters and Applications
    5.2 Least-Mean-Square (LMS) Adaptive Filters
        5.2.1 Some Common Variants of LMS Algorithm
    5.3 Implemented Adaptive Filter Applications
        5.3.1 Adaptive Noise Cancellation (ANC)
        5.3.2 Adaptive Line Enhancement (ALE) or FIR Linear Prediction
        5.3.3 System Identification or Modelling (SI)
Chapter 6  MATLAB and Development Tools
    6.1 MATLAB GUI Design Methodology
        6.1.1 Compact Data Representation
        6.1.2 Aesthetical Data Representation
        6.1.3 GUI Development using "GUIDE"
        6.1.4 Programmatic GUI Development
    6.2 Structural GUI Design Tools
        6.2.1 Nested Panels
    6.3 Used Functions
Chapter 7  Algorithm and Software Development
    7.1 Graphical User Interface (GUI) Structure and Elements
        7.1.1 Main GUI Window or Figure
        7.1.2 Nested Panelling
        7.1.3 Popup Menu or Listing
        7.1.4 Slider Control
        7.1.5 Application and Parameter Data Input
        7.1.6 Data Storage and Retrieval
        7.1.7 Data Display Axes
        7.1.8 A Block of the Main Plotter Function
        7.1.9 An Instance of Functions for Applications
        7.1.10 Display Results in a New Figure
        7.1.11 Data Representation, Listening to Data and Default Parameter Values
    7.2 Software Execution Flow
Chapter 8  Results of Adaptive Algorithms
    8.1 Active Noise Cancellation (ANC)
    8.2 Adaptive Line Enhancement (ALE)
    8.3 System Identification (SI)
Chapter 9  Comparative Performance and Data Analysis
    9.1 Comparative Performance
        9.1.1 Adaptive Noise Cancellation (ANC)
        9.1.2 Adaptive Line Enhancement (ALE)
        9.1.3 System Identification (SI)
Chapter 10  Summary and Conclusions
    10.1 Future Work
References
List of Figures
Figure 1: Original output from the filter
Figure 2: Desired output from the filter
Figure 3: Adaptive control using adaptive filter
Figure 4: Signal approximation using adaptive filter
Figure 5: An N-tap transversal adaptive filter [3]
Figure 6: Adaptive Filter Functional Components
Figure 7: Convergence Speed and SSE
Figure 8: Local Convergence and Global Convergence
Figure 9: Learning Curve
Figure 10: An error signal with associated LC
Figure 11: System Identification with NLMS when step size µ = 0.1, order n = 20 and beta β = 1
Figure 12: System Identification with NLMS when step size µ = 0.01, order n = 20 and beta β = 1
Figure 13: ANC with filter order 30
Figure 14: ANC with filter order 80
Figure 15: Influence of the step size µ on convergence towards ξ̂_min [Google Search]
Figure 16: Adaptive Noise Cancellation
Figure 17: Adaptive Line Enhancement
Figure 18: System Identification using Adaptive Filter
Figure 19: Developed GUI without data
Figure 20: Main GUI window with some data
Figure 21: Internal GUI Blocks
Figure 22: Popup menu execution flow
Figure 23: Real-time slider control
Figure 24: Application data input consistency
Figure 25: Representation and Listening to Data
Figure 26: Software Execution Flow
Figure 27: ANC with LMS when µ = 0.01 and order 30
Figure 28: ANC with LMS when µ = 0.001 and order 30
Figure 29: ANC with NLMS when µ = 0.01 and order 30
Figure 30: ANC with NLMS when µ = 0.001 and order 30
Figure 31: ANC with LLMS when µ = 0.01, order 30 and leakage 0.8
Figure 32: ANC with LLMS when µ = 0.001, order 30 and leakage 0.8
Figure 33: ANC with ADJLMS when µ = 0.001, order 30
Figure 34: ANC with ADJLMS when µ = 0.00001, order 30
Figure 35: ANC with BLMS when µ = 0.01, order 30
Figure 36: ANC with BLMS when µ = 0.001, order 30
Figure 37: ANC with BLMSFFT when µ = 0.01, order 30
Figure 38: ANC with BLMSFFT when µ = 0.001, order 30
Figure 39: ANC with DLMS when µ = 0.01, order 30, delay = 11
Figure 40: ANC with DLMS when µ = 0.001, order 30, delay = 11
Figure 41: ANC with Filtered-x LMS when µ = 0.01, order 30
Figure 42: ANC with Filtered-x LMS when µ = 0.001, order 30
Figure 43: ANC with Sign-Data LMS when µ = 0.01, order 30
Figure 44: ANC with Sign-Data LMS when µ = 0.001, order 30
Figure 45: ANC with Sign-Error LMS when µ = 0.01, order 30
Figure 46: ANC with Sign-Error LMS when µ = 0.001, order 30
Figure 47: ANC with Sign-Sign LMS when µ = 0.01, order 30
Figure 48: ANC with Sign-Sign LMS when µ = 0.001, order 30
Figure 49: ALE with LMS when µ = 0.01, order 30
Figure 50: ALE with LMS when µ = 0.001, order 30
Figure 51: ALE with LMS when µ = 0.01, order 30
Figure 52: ALE with LLMS when µ = 0.001, order 30
Figure 53: ALE with ADJLMS when µ = 0.001, order 30
Figure 54: ALE with ADJLMS when µ = 0.0001, order 30
Figure 55: ALE with BLMS when µ = 0.001, order 30
Figure 56: ALE with BLMS when µ = 0.0001, order 30
Figure 57: ALE with BLMSFFT when µ = 0.001, order 30
Figure 58: ALE with BLMSFFT when µ = 0.0001, order 30
Figure 59: ALE with DLMS when µ = 0.001, order 30
Figure 60: ALE with DLMS when µ = 0.0001, order 30
Figure 61: ALE with Filtered-x LMS when µ = 0.0001, order 30
Figure 62: ALE with Filtered-x LMS when µ = 0.001, order 30
Figure 63: ALE with Sign-Data when µ = 0.001, order 30
Figure 64: ALE with Sign-Data when µ = 0.0001, order 30
Figure 65: ALE with Sign-Error when µ = 0.0001, order 30
Figure 66: ALE with Sign-Error when µ = 0.001, order 30
Figure 67: ALE with Sign-Sign when µ = 0.001, order 30
Figure 68: ALE with Sign-Sign when µ = 0.0001, order 30
Figure 69: SI with LMS when µ = 0.001, order 30
Figure 70: SI with LMS when µ = 0.0001, order 30
Figure 71: SI with NLMS when µ = 0.01, order 30, beta 1
Figure 72: SI with NLMS when µ = 0.1, order 30, beta 1
Figure 73: SI with NLMS when µ = 0.01, order 30, leakage 1
Figure 74: SI with NLMS when µ = 0.001, order 30, leakage 1
Figure 75: SI with ADJLMS when µ = 0.00001, order 30, leakage 1
Figure 76: SI with ADJLMS when µ = 0.0001, order 30, leakage 1
Figure 77: SI with BLMS when µ = 0.001, order 30
Figure 78: SI with BLMS when µ = 0.0001, order 30
Figure 79: SI with BLMSFFT when µ = 0.001, order 30
Figure 80: SI with BLMSFFT when µ = 0.0001, order 30
Figure 81: SI with DLMS when µ = 0.001, order 30, Delay 20
Figure 82: SI with DLMS when µ = 0.0001, order 30, Delay 20
Figure 83: SI with Filtered-x LMS when µ = 0.001, order 30
Figure 84: SI with Filtered-x LMS when µ = 0.0001, order 30
Figure 85: SI with Sign-Data when µ = 0.001, order 30
Figure 86: SI with Sign-Data when µ = 0.0001, order 30
Figure 87: SI with Sign-Error when µ = 0.001, order 30
Figure 88: SI with Sign-Error when µ = 0.01, order 30
Figure 89: SI with Sign-Sign when µ = 0.0001, order 30
Figure 90: SI with Sign-Sign when µ = 0.00002, order 30
Figure 91: Comparative Learning Curves (LMS, NLMS, LLMS, BLMS, BLMSFFT, DLMS, SD, SE)
Figure 92: Learning Curves ADJLMS
Figure 93: Learning Curves Filtered-x LMS
Figure 94: Learning Curves SS
Figure 95: Comparative Learning Curves (LMS, NLMS, LLMS, BLMS, BLMSFFT, DLMS, SD, SE)
Figure 96: Learning Curve ADJLMS
Figure 97: Learning Curve Filtered-x LMS
Figure 98: Learning Curve SS
Figure 99: Comparative Learning Curves (LMS, NLMS, LLMS, BLMS, BLMSFFT, DLMS, SD, SE)
Figure 100: Learning Curve ADJLMS
Figure 101: Learning Curve Filtered-x LMS
Figure 102: Learning Curve SS
List of Acronyms
ADJLMS Adjoint Least Mean Square
BLMS Block Least Mean Square
BLMSFFT Block Least Mean Square FFT
CS Convergence Speed
DLMS Delayed Least Mean Square
DSP Digital Signal Processing
FILTXLMS Filtered-x Least Mean Square
FD Frequency Domain
GUI Graphical User Interface
LC Learning Curve
LMS Least Mean Square
LLMS Leaky Least Mean Square
NLMS Normalized Least Mean Square
SD Sign-Data
SE Sign-Error
SS Sign-Sign
SSE Steady State Error
Chapter 1
Introduction
The goal of an adaptive filter is to maintain or derive desired output signal characteristics from an FIR or IIR filter. This goal is achieved via a feedback loop that feeds a measure of the undesired signal characteristics (the error) back to the filter under consideration; the filter then updates its kernel with the newly computed coefficients to generate or maintain the desired output signal characteristics. The calculation of the new coefficients from the error signal, which is to be minimized, is driven by an adapting algorithm. The error is defined as the deviation of the output signal from the desired signal characteristics: if d(n) is the desired signal, y(n) is the output signal and e(n) is the error signal, then the following relations hold.
y(n) = Σ_{i=0}^{N−1} w_i(n) x(n − i)

where y(n) is the output signal sequence and d(n) is the desired signal sequence. The error signal is then

e(n) = d(n) − y(n)

i.e. e(n) is the difference between the desired signal sequence d(n) and the output signal sequence y(n).
Source: [3] (Page 139 – 188)
We can see from the above definitions that e(n) is the signal sequence that needs to be minimized, and an adaptive filter's ability to do this is what separates it from other types of filters.
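As a concrete illustration of these relations, the following minimal MATLAB sketch computes the filter output as a convolution of the input with the coefficient vector and forms the error against a desired signal; all signal and coefficient values here are illustrative, not taken from the report.

```matlab
% Minimal sketch of y(n) = sum_i w_i x(n-i) and e(n) = d(n) - y(n).
x = randn(1000,1);                  % input signal x(n) (illustrative)
w = [0.5; 0.3; 0.2];                % filter coefficients w_i (illustrative)
y = filter(w, 1, x);                % y(n) = w(1)x(n) + w(2)x(n-1) + w(3)x(n-2)
d = randn(1000,1);                  % desired signal d(n) (illustrative)
e = d - y;                          % error signal e(n)
```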
In figure 1, an output signal is given. Instead of this output, we want the output to be exactly the signal given in figure 2. To derive the desired signal from the system, we first have to measure the error signal by finding the mathematical correlation between samples of the output signal and the desired signal. In short, from a high-level point of view, this error signal is measured by subtracting the first signal (the output) from the latter (the desired signal). The error signal is then optimally minimized by updating the operating filter's coefficients through a live feedback loop.
Figure 1: Original output from the filter
Figure 2: Desired output from the filter
The use of adaptive filters can be divided into two major groups: firstly, to continuously keep the output signal of a running filter unchanged; secondly, to approximate a desired signal from the output signal of a filter. Both approaches use the same fundamental adaptive filter structure but vary in orientation and application. In figure 3, we can see how adaptive control is implemented using an adaptive filter and how the necessary error signal is computed. In figure 4, we can see how a desired signal is approximated using an adaptive filter and how the necessary error signal is computed. Figures 3 and 4 look similar in terms of their execution sequence and the operating FIR or IIR filter. However, looking carefully, there is still a difference in the orientation of the associated error signal computation.
[Flowchart: the input signal sequences pass through the FIR or IIR filter; if the output deviates from the desired characteristics, the deviation (error signal) is calculated, its power is reduced in the MSE sense, and new coefficients are calculated and sent to the filter to maintain the desired output signal throughput; if not, the loop iterates.]
Figure 3: Adaptive control using adaptive filter
[Flowchart: the output signal of the FIR or IIR filter is compared with the desired signal; if the output does not yet approximate the desired signal within the required level of accuracy, the deviation (error signal) is calculated, its power is reduced in the MSE sense, and new coefficients are calculated and sent to the filter to approximate the desired signal; otherwise the loop iterates.]
Figure 4: Signal approximation using adaptive filter
1.1 Project Scope
The requirements of the project are to study and understand the adaptive filter structure and the LMS-based adaptive filters (mainly LMS, NLMS and LLMS), and subsequently to develop a user-friendly MATLAB software package that facilitates the simulation of these algorithms. The following statement summarizes the project scope and goal:
"Development of a professional MATLAB software package that offers a concise work environment for the simulation of key adaptive signal processing algorithms and applications in real time, and that can be used in real life."
1.2 Problem formulation and Project Outline
The development problems that arose and were solved during the project can be summarized as the following development questions:
1. How does an adaptive filter work, and what is the functional role of the sub-systems or sub-blocks within it?
2. How are new coefficients calculated, and which mathematical framework is used to calculate them?
3. Which adapting algorithms are used, and how many of them are pre-implemented in MATLAB?
4. How are adaptive filters applied to ANC, ALE and SI, and how are these applications pre-implemented in MATLAB?
5. What types of software exist that offer a concise work environment for the simulation of adaptive algorithms and applications?
6. How does one develop a MATLAB App and a standalone MATLAB software package?
7. Which methodology is best for developing a GUI in MATLAB? What are the advantages and disadvantages of each methodology?
8. How are data loaded and stored during run-time in a MATLAB App?
9. How should GUI blocks be organized to obtain a user-friendly, compact but coherent GUI?
10. What are the implementation alternatives for MATLAB GUI development, and which method best suits the project's needs?
11. How can the aesthetic properties of the software be preserved without compromising the functional requirements?
12. How can the different components of the software be integrated into a single module?
Chapter 2 covers the requirement analysis and research methodology. Chapter 3 dissects and discusses adaptive signal processing filters. Chapter 4 reviews relevant existing work by others in terms of what has been done and what is lacking. Chapter 5 discusses popular LMS-based adaptive signal processing filters and applications. Chapter 6 discusses the different MATLAB GUI design methodologies and development tools. Chapter 7 discusses the algorithm and software development. Chapter 8 presents the results obtained from the different adaptive algorithms. Chapter 9 discusses the comparative performance of the different adaptive algorithms and the data analysis. Chapter 10 gives the project summary and probable future work.
Chapter 2
Research Methodology and Requirement
Analysis
Every type of software development requires a thorough requirement analysis. Requirements can be divided into two parts: functional requirements and non-functional requirements. The functional requirements form the core of the development, and all of them must be met in order to develop working software. Non-functional requirements, on the other hand, are also important but not mandatory for a working product. However, some non-functional requirements are so important that without them the software may become unusable and not user friendly.
2.1 Functional requirements
1. MATLAB implementation of Adaptive Algorithms
2. MATLAB implementation of Adaptive Applications
3. Comparative performance analysis of Adaptive Algorithms
4. Graphical User Interface (GUI)
5. Data Loading and Data Writing
6. Run-time Data Storage
7. Data Processing and Display
2.2 Non-functional requirements
1. User friendliness
2. Speed and Reliability
3. Compact data representation
4. Aesthetical data representation
Chapter 3
Adaptive Signal Processing Filters and
Applications
An adaptive filter can be understood literally as a filter that is able to take feedback and, based on that feedback, adapt in order to produce or maintain the desired signal output. An adaptive filter has different parameters that provide the flexibility needed to reach optimal performance. The selection of these parameters directly influences the calculation of the filter coefficients. That is to say, we reduce the error by optimizing a consistently designed performance function. This performance function can be designed either in a statistical framework or in a deterministic framework. In the statistical framework the performance function is the mean-square value of the error signal; in the deterministic framework the frequent choice is a weighted sum of the squared error signal.
3.1 Structure of Adaptive Filter
Adaptive filters can be structurally realized in two main ways: spatially and functionally. The spatial structure concerns the organization of the filter components without restricting the filter's desired functional output. The functional structure, on the other hand, concerns the functional role of the sub-systems of each adaptive filter.
3.1.1 Spatial Structure or Block Diagram
The most commonly used structures are the direct form, cascade form, parallel form and lattice. The transversal layout of adaptive filters is the most common; however, the lattice layout is also used when its advantages outweigh those of the transversal layout.
Figure 5: An N-tap transversal adaptive filter [3]
3.1.2 Functional structure
An adaptive filter can be dissected into the following major parts based on functional role, each of which plays a major role in producing a working adaptive filter.
[Block diagram: the input signal x(n) enters the FIR/IIR filter, which produces the output signal y(n); the error signal e(n) is formed from y(n) and the desired signal d(n), and the adaptive control algorithm sends updated coefficients back to the filter through the feedback loop.]
Figure 6: Adaptive Filter Functional Components
3.1.2.1 Input Signal
The input signal is the data feeder of the adaptive filter. This is the primary signal that needs to be updated, maintained at a constant level, or made to approximate desired signal characteristics. If the input signal needs to be maintained at a constant level, then whenever it deviates from the desired level we can detect this deviation (error) and subsequently minimize it, maintaining a constant desired signal throughput. In the other case, we have an output signal from a filter that needs to take on the characteristics of a desired signal. Here we find the difference between the output signal and the desired signal, and this difference is the error. Subsequently, we calculate new adaptive filter coefficients to reduce this error, and these coefficients are used to update the filter output.
3.1.2.2 FIR or IIR Filter
The FIR or IIR filter is the main worker of the adaptive filter. Initially, the filter produces an output signal from the instantaneous input given to it. After receiving the feedback (i.e. filter coefficients calculated to reduce the power of the error signal), it updates its output so that it approximates the desired signal or reduces the deviation from it.
3.1.2.3 Output Signal
The output signal is the initial or updated output of the FIR/IIR filter. It can be realized in two categories: the coarse output signal and the fine output signal. The coarse output signal is the instantaneous output of the FIR/IIR filter, i.e. the output still deviating from the desired condition. The fine output signal is obtained when the coarse output signal approximates the desired signal. That is to say, the fine output signal is the end product of the coarse output signal once the error has been removed from it.
3.1.2.4 Desired Signal
The desired signal is the final expected signal from the adaptive filter. The approximated desired signal is obtained when the adaptive filter converges. We say "approximated" because an adaptive filter converges 100% if and only if the error signal reduces to zero. In reality this is not always the case: even after the adaptive filter converges, an SSE still exists, and in that case we say that we have approximated the desired signal. Moreover, the desired signal can be realized in two categories: the external-reference desired signal and the maintained desired signal. The external-reference desired signal is a provided signal taken as reference to calculate the error; through error removal the adaptive filter approximates that signal. The maintained desired signal, on the other hand, is the instantaneous output of the FIR/IIR filter, kept in a stable state through error removal whenever it deviates from stability.
3.1.2.5 Error Signal
The error signal is the difference between the output signal and the desired signal. That is to say, the error signal is the signal component that the adaptive filter optimally removes as it converges, thereby arriving at the desired condition.
3.1.2.6 Adaptive Control Algorithm
The adaptive control algorithm is the algorithm the adaptive filter uses to iteratively calculate the new coefficients that optimally reduce the power of the error signal. The choice of adaptive control algorithm depends on the data class, memory resources, computational time, energy requirements and overall cost. The MSE-based (LMS-type) and LSE approaches are two commonly used ways of calculating the updated coefficients.
3.1.2.7 Feedback loop
The feedback loop is a conceptual realization indicating that the recalculated coefficients derived from the error signal are fed back into the FIR/IIR filter to produce the desired output. Although conceptual, it is of particular importance, as it is what turns a general FIR/IIR filter into an adaptive filter.
3.2 Adaptive Filter Performance
The performance of an adaptive filter can be evaluated using the Learning Curve (LC), Convergence Speed (CS) and Steady State Error (SSE). In the following figure, the CS and SSE are shown. We can see that the error power of the error signal drops quickly after the initialization of the adaptive filter, and this phenomenon is also reflected in the associated learning curve. Besides, we can also see that even though the filter converged very quickly, an SSE still exists in the produced output. Whether this SSE is acceptable depends on the requirements of the application domain.
Figure 7: Convergence Speed and SSE
The goal of designing an adaptive filter is to minimize the error signal power; hence, when provided with the right parameters, the adaptive filter ought to converge. The question, however, is how fast or slowly it converges. This convergence speed can be classified as very fast, fast, above average, average, below average, slow, very slow, and so on.
Figure 8: Local Convergence and Global Convergence
Convergence can be realized in two categories: local convergence and global convergence. In figure 8, the error signal power starts to converge, then suddenly rises again, repeats this a few times, and finally converges. The convergence before each sudden rise of error power is a local convergence, and the final convergence is the global convergence.
However, adaptive filter performance is a relative indicator and varies depending on the application and the desired filter output. For example, minimal SSE could be the only indicator of filter performance and output quality; alternatively, CS could be the only indicator. Moreover, there are cases where a weighted measure of both CS and SSE is the indicator of filter performance and output quality. We can summarize the adaptive filter performance criteria as follows:
Fast convergence is important, optimally low SSE is not important
Fast convergence is important, optimally low SSE is important
Fast convergence is not important, optimally low SSE is important
Fast convergence is not important, standard SSE is important
Standard convergence is enough, optimally low SSE is important
Standard convergence is enough, standard SSE is enough
Because of such criteria, different adaptive filters and different algorithm parameters are chosen, each offering a different level of solution. Through a trial-and-error process, the best adaptive filter with the best parameters is chosen for a given data scenario.
3.2.1 Learning Curve
The learning curve is literally a curve generated by plotting the time-varying error power of the adaptive filter. Over a number of iterations the error power approaches zero, and plotting this decreasing error power in the time domain creates a curve with a gradually descending gradient. This curve provides quick information on the performance of the LMS adaptive filter under consideration.
Figure 9: Learning Curve
In figure 9, we can see a gradually descending curve that approaches zero. On the left the error power is high, but with increasing iterations of the adaptive algorithm the error power approaches zero.
Figure 10: An error signal with associated LC
In figure 10, the first plot is a gradually converging error signal and the second plot is the associated LC. From the first plot we can see that the error signal converges quickly, and this phenomenon is also reflected in the LC. This correspondence occurs because the same filter coefficients produced the data used to create both plots. In other words, the LC is just a different representation of how the error signal converges, and it is visually more convenient for judging how the adaptive filter is performing.
3.2.2 Convergence Speed
Convergence means gradually minimizing the power of the error signal and arriving at the point that produces the desired signal. Convergence speed (CS) literally means how fast an adaptive algorithm converges, i.e. reduces the error signal power. A slower CS means the adaptive filter takes a long time to minimize the error power; a faster CS means it takes a short time. Adaptive filters iteratively calculate new coefficients to minimize the error power, and the CS varies substantially with different algorithm parameters.
Moreover, the step size greatly influences the CS of adaptive filters. A smaller step size decreases the CS, which means the adaptive filter takes more time to converge than with a larger step size. This phenomenon can be seen clearly in the figures below: the convergence is fast when µ = 0.1 is used, but when µ = 0.01 is used the convergence slows down, which is also reflected in the LC.
Figure 11: System Identification with NLMS when step size µ = 0.1, order n = 20 and beta β = 1
Figure 12: System Identification with NLMS when step size µ = 0.01, order n = 20 and beta β = 1
The higher the filter order, the lower the convergence speed. However, this filter-order-versus-convergence-speed behaviour holds only up to a certain threshold, and this threshold varies for different data classes. We found the right filter order through a trial-and-error process and observed that a higher filter order does not always produce better filter performance than a lower one. Therefore, if we can achieve the desired adaptive filter performance with a lower filter order, that always gives the benefit of less computational time and lower overall cost. Hence, the empirically derived filter order is the value that ensures the best filter performance for a specific data case at the lowest cost. This phenomenon is demonstrated in figures 13 and 14: even though a higher filter order is used, figure 14 contains more error power than figure 13. In this ANC case that is acceptable and even wanted, as the error signal is the desired speech signal with less noise. But the phenomenon also exists for other applications where low error signal power is always desired, and there a decreasing performance with increasing order is never received positively.
Figure 13: ANC with filter order 30
Figure 14: ANC with filter order 80
3.2.3 Steady State Error (SSE)
In many cases, the error signal power never converges to zero even after the adaptive filter converges (i.e. the filter coefficients reach stability and no longer change significantly in value). This persistent error is called the SSE. In many applications this error is not significant, while in others it can be important. The threshold of SSE acceptability therefore varies with the application, which makes the SSE a relative performance indicator.
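Given the error vector e produced by the learning-curve sketch in Section 3.2.1, the SSE can be quantified, for instance, as the average error power over the tail of the run; treating the last 20% of samples as the steady state is an assumption made here purely for illustration.

```matlab
% Estimate the steady-state error power from the tail of an error signal e,
% assuming (for illustration) the filter has converged by the last 20%.
tail = round(0.8*numel(e)) : numel(e);
SSE  = mean(e(tail).^2)              % estimated steady-state error power
```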
3.3 Adaptive Filter Groups
A substantial number of adaptive filters are available, varying in learning difficulty, applications and application data class. However, the common goal of all of these adaptive algorithms is to adapt a coarse signal into a fine signal or to maintain a desired signal output. To accomplish this task, the adaptive algorithms offer different levels of flexibility for the corresponding problem scenarios. Some of them are grouped [MATLAB] as follows.
Least-Mean-Square (LMS) Based: LMS, NLMS, LLMS, ADJLMS, BLMS,
BLMSFFT, DLMS, Filt-XLMS, SD, SE, SS
Recursive-Least-Square (RLS) Based: RLS, QRDRLS, HRLS, HSWRLS, SWRLS,
FTF, SWFTF
Affine Projection (AP) Based: AP, APRU, BAP
Frequency Domain (FD) Based: FDAF, PBFDAF, PBUFDAF, TDAFDCT, TDAFDFT,
UFDAF
Lattice (L) Based: GAL, LSL, QRDLSL
3.4 Application Classes
Adaptive filters are mostly used to process an input signal and, using the updated coefficients calculated from the error signal, approximate a desired signal or maintain a signal in its original state. Based on this similarity, the applications of adaptive filters can be grouped into four categories [3]: modelling, inverse modelling, linear prediction and interference cancellation. Some applications in each category can be summarized as follows.
Modelling: System Identification (SI) etc.
Inverse Modelling: Channel Equalization, Magnetic Recording etc.
Linear Prediction: Auto regressive spectral analysis, Adaptive Line Enhancement
(ALE), Speech Coding etc.
Interference cancellation: Echo cancellation in telephone lines, Acoustic Echo
Cancellation, Active Noise Control (ANC), Beamforming etc.
3.5 Difference between MSE and LSE
Mean-Square-Error (MSE) and Least-Square-Error (LSE) may sound similar, but they are not the same. MSE is an approach that follows a statistical framework, whereas LSE follows a deterministic framework. If we define a cost or performance function J, then MSE and LSE can be realized as follows.
Total squared error (LSE): J = Σ_{n=0}^{N−1} e²(n)
Mean squared error (MSE): J = E{|e(n)|²}
Both MSE and LSE have their own advantages and disadvantages. The choice between them depends on the filtering problem and the associated computational cost. MSE deals with the mean value: we define a statistical sample of convenient size and then calculate the mean value over this sample. Clearly, this results in processing fewer samples, and thus lower cost, while still preserving the processed signal's characteristics to a satisfactory level. The differences between LSE and MSE can be summarized as follows.
Property               MSE                                     LSE
Framework              Stochastic (i.e. statistical)           Deterministic
Weighting criteria     Sample mean                             Total signal
Computational cost     Lower                                   Higher
Memory requirements    Lower                                   Higher
Matrix operations      No                                      Yes
Accuracy               Lower than LSE, but robust enough       Optimal
                       in many cases
Performance            Robust, standard or poor                Robust
                       (input-data dependent)
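The two cost definitions can be contrasted directly in MATLAB; the gradually decaying error sequence below is illustrative.

```matlab
% Contrast of the deterministic (LSE) and statistical (MSE) costs
% for an illustrative, decaying error sequence e(n).
e   = randn(1000,1) .* exp(-(1:1000)'/200);   % illustrative error signal
LSE = sum(e.^2)                               % total squared error
MSE = mean(e.^2)                              % sample estimate of E{|e(n)|^2}
```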
Chapter 4
Literature Review
Adaptive filters are very popular among scientists and engineers, and thus a rich body of literature is available for study. This literature can be broadly classified by orientation into categories such as general reference books, specialized reference books, general articles and project-result-based articles. It is impossible to study all of these references because of their sheer size and complexity, so an in-depth literature review is impractical. Instead, we studied different parts of several books and skimmed the chapters necessary for this project. The literature is therefore reviewed from a high-level point of view and according to its orientation.
The book Adaptive Filter Theory [1] by Simon Haykin is one of the best books covering the most important concepts of adaptive filters in a single volume. The book progresses in a foundation-to-generalization manner. That is to say, for example, one must first understand the method of steepest descent and Wiener filters, as well as the difference between the stochastic (i.e. statistical) and deterministic approaches, in order to understand the MSE- and LSE-based adaptive control algorithms. Accordingly, the book begins with a basic introduction, then discusses stochastic processes and models and the method of steepest descent, and then treats LMS. The whole book follows a convenient and pedagogically friendly progression that is very useful for students and other readers.
The book Adaptive Filters: Theory and Applications [3] by B. Farhang-Boroujeny is another book written in a very legible and understandable way. It focuses mainly on LMS-based algorithms but also discusses other adaptive filtering issues. Moreover, its introduction is very useful, providing a lot of information in a short scope. The book Statistical Digital Signal Processing and Modeling [2] by Monson H. Hayes is also a good book for studying adaptive signal processing: it first discusses the fundamental concepts necessary to understand adaptive filtering and ends with a dedicated chapter on adaptive filters. Furthermore, the books [4, 5, 6, 7, 8, 9, 10, 11, 12, 13] are also good resources for studying adaptive filters; some focus on adaptive filtering fundamentals while others focus on specific applications. The journal articles [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28] discuss specific applications of particular adaptive filters. All of these papers clearly depict the reliability, scalability and overall performance of adaptive filters from various perspectives, and the usefulness of the various adaptive filter parameters is clearly understandable from their discussions.
Chapter 5
Least-Mean-Square Adaptive Filters and
Applications
In this project we studied the LMS, NLMS and LLMS adaptive filters and also produced results using the other LMS-based adaptive filters (ADJLMS, BLMS, BLMSFFT, DLMS, Filtered-x LMS, Sign-Data, Sign-Error and Sign-Sign). Since a good number of adaptive filters are already implemented in MATLAB, we included those filters in the developed software as well and generated results from some of them in order to understand the LMS algorithms comparatively. The results from these algorithms are given in the appendices.
5.2 Least-Mean-Square (LMS) Adaptive Filters
Least-Mean-Square (LMS) adaptive filters reduce the error power of the signal in a mean-square sense and are therefore literally called LMS adaptive filters. Moreover, in short, when the input and desired signals are stationary, the LMS adaptive filter simply becomes a practical implementation of the optimal Wiener filter in the MSE sense; put differently, we obtain the optimal Wiener filter when the cost function is the MSE. Another important foundation of the LMS filter is the steepest-descent algorithm. Steepest descent is not an adaptive filter by itself, but it is the basis for calculating the updated coefficients when the signal statistics are known, and it thus serves as a fundamental basis of the LMS adaptive filter. The steepest-descent algorithm is given below.
1. Initialize the filter coefficients with a start value, w(0).
2. Determine the gradient ∇ξ(n), which points in the direction in which the cost function increases maximally: ∇ξ(n) = −2p + 2Rw(n).
3. Adjust the updated coefficient vector w(n+1) in the direction opposite to the gradient, with the adjustment weighted down by the step size µ: w(n+1) = w(n) + (1/2) µ [−∇ξ(n)].
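A minimal MATLAB sketch of this recursion, assuming the signal statistics are known, is given below; R (the input autocorrelation matrix), p (the cross-correlation vector) and the step size are illustrative values, not taken from the report.

```matlab
% Steepest descent with known statistics R and p (illustrative values).
R  = toeplitz([1 0.5 0.25 0.125]);   % input autocorrelation matrix
p  = [0.6; 0.3; 0.15; 0.075];        % cross-correlation vector
mu = 0.05;                           % step size
w  = zeros(4,1);                     % w(0): initial coefficient vector
for n = 1:200
    grad = -2*p + 2*R*w;             % gradient of the cost function
    w    = w + 0.5*mu*(-grad);       % w(n+1) = w(n) + (1/2) mu [-grad]
end
disp([w, R\p])                       % w approaches the Wiener solution R\p
```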
The LMS algorithm is the stochastic realization of the steepest-descent algorithm. That is to say, the LMS algorithm updates the signal statistics continuously, while the steepest-descent algorithm works in a deterministic way. In short, LMS is one of the stochastic gradient methods and steepest descent is one of the deterministic gradient methods. The steepest-descent algorithm uses the deterministic cost function ξ = E[e²(n)], while the LMS algorithm uses the stochastic, coarsely estimated cost function ξ̂ = e²(n). This coarse estimate of the cost function results in faster processing and correspondingly lower computational overhead, while still ensuring the ability to track the signal characteristics. The error signal reduction of the general LMS adaptive filter is thus based on the following relationships.
w(n+1) = w(n) − µ ∇e²(n)

where w(n) = [w₀(n), w₁(n), …, w_{N−1}(n)]ᵀ, µ is the step-size parameter of the algorithm and ∇ is the gradient operator. The gradient is

∇e²(n) = −2 e(n) x(n),  where x(n) = [x(n), x(n−1), …, x(n−N+1)]ᵀ

Substituting the latter into the first equation gives

w(n+1) = w(n) − µ {−2 e(n) x(n)}

and hence the LMS recursion

w(n+1) = w(n) + 2µ e(n) x(n)
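The recursion can be sketched directly in MATLAB as follows; the input data and the "unknown system" generating the desired signal are illustrative, not taken from the report.

```matlab
% Minimal sketch of the LMS recursion w(n+1) = w(n) + 2*mu*e(n)*x(n).
N  = 30;  mu = 0.01;                  % filter order and step size
M  = 2000;                            % number of samples
x  = randn(M,1);                      % input signal x(n)
d  = filter([0.8 0.4 0.2], 1, x);     % desired signal (illustrative system)
w  = zeros(N,1);                      % initial coefficient vector w(0)
e  = zeros(M,1);
for n = N:M
    xn   = x(n:-1:n-N+1);             % x(n) = [x(n) x(n-1) ... x(n-N+1)]'
    y    = w' * xn;                   % output y(n) = w'(n) x(n)
    e(n) = d(n) - y;                  % error e(n) = d(n) - y(n)
    w    = w + 2*mu*e(n)*xn;          % LMS coefficient update
end
```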
The step size has a major influence on the convergence behaviour towards ξ̂_min. In the figure below we can see that a smaller step size gives a smoother but slower convergence towards ξ̂_min, while a larger step size converges faster at the price of larger oscillations around the minimum.
Figure 15: Influence of the step size µ on convergence towards ξ̂_min [Google Search]
The basic components of the LMS algorithm can be written as follows in terms of input, output
and functional form.
Input:
- Initial filter coefficient vector, w(n)
- Input signal vector, x(n)
- Desired output vector, d(n)

Output:
- Filter output, y(n)
- Updated coefficient vector, w(n+1)

Functional form:
- Input-output relation: y(n) = wᵀ(n) x(n)
- Error relation: e(n) = d(n) − y(n)
- Coefficient update relation: w(n+1) = w(n) + 2µ e(n) x(n), where 2µ e(n) x(n) is the correction term
The basic reason for the popularity of the LMS adaptive filter is its computational simplicity. The computational overhead per iteration can be summarized as 2N + 1 multiplications and 2N + 1 additions:
- Calculating the output y(n): N multiplications
- Obtaining (2µ) · e(n): 1 multiplication
- The scalar-by-vector multiplication 2µe(n) · x(n): N multiplications
5.2.1 Some Common Variants of LMS Algorithm
In practice, three common LMS algorithm variants are the standard LMS (SLMS), the normalized LMS (NLMS), also called the time-varying step-size LMS, and the leaky LMS (LLMS). All three variants have almost the same design structure, with some differences in the update equation. The standard LMS algorithm has the following update equation.
Standard LMS (SLMS)
w(n+1) = w(n) + µ e(n) u(n)
where w(n+1) is the corrected coefficient vector, µ is the step size of the algorithm, e(n) is the error signal and u(n) is the input vector of the filter.
The basic difference between the standard and normalized LMS algorithms lies in the characteristics of their step size: the NLMS step size is time-varying, in contrast to SLMS. The NLMS has the following update equation.
Normalized LMS (NLMS)
w(n+1) = w(n) + µ e(n) u(n) / ‖u(n)‖²

We can rewrite the above equation as

w(n+1) = w(n) + (µ / ‖u(n)‖²) e(n) u(n)

Therefore we get w(n+1) = w(n) + µ(n) e(n) u(n), where µ(n) = µ / ‖u(n)‖².
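As a sketch, the NLMS update can be obtained by replacing the update line in the LMS loop of Section 5.2 as follows; the eps term is a safeguard against division by zero added here, not part of the update equation above.

```matlab
% Drop-in NLMS replacement for the LMS update line (xn is the input vector):
w = w + (mu / (xn'*xn + eps)) * e(n) * xn;   % time-varying step size mu(n)
```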
The LLMS has a similar update equation except that it includes a leaky factor. The leaky factor lies in the range (0, 0.1) and has a direct relation with the steady-state error (SSE): if the leaky factor is increased the SSE increases, and if it is decreased the SSE decreases. The LLMS has the following cost function and update equation.
Leaky LMS (LLMS)
J(n) = e²(n) + α ∑ wk²(n), with the sum taken over k = 0, 1, …, N − 1

w(n + 1) = (1 − μα) w(n) + μ e(n) x(n)
We can see that the cost function includes both the error signal and the filter coefficients,
along with the leakage factor α; therefore, LLMS is able to reduce the coefficient overflow
problem. In the update equation, if α = 0, the update reduces to the standard LMS update.
The LMS algorithm is often implemented in digital signal processors. Since DSPs often have
limited computational resources, the computational overhead of LMS is crucially important in
DSP implementations. Therefore, computationally simpler versions of the standard LMS algorithm
exist, namely Sign-Error LMS, Sign-Data LMS and Sign-Sign LMS, which require fewer
multiplication operations than standard LMS. The simplification from standard LMS to the sign
LMS variants is done using the following sign function.
sgn(x) = 1 for x > 0; 0 for x = 0; −1 for x < 0

Sign-Error LMS: w(n + 1) = w(n) + μ · sgn(e(n)) · x(n)
Sign-Data LMS: w(n + 1) = w(n) + μ · e(n) · sgn(x(n))
Sign-Sign LMS: w(n + 1) = w(n) + μ · sgn(e(n)) · sgn(x(n))
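In the lms_sketch loop, these variants simply replace the update line as follows (MATLAB's
sign() implements sgn, including sign(0) = 0):

        w = w + mu * sign(e(n)) * xbuf;        % Sign-Error LMS
        w = w + mu * e(n) * sign(xbuf);        % Sign-Data LMS
        w = w + mu * sign(e(n)) * sign(xbuf);  % Sign-Sign LMS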
We can see from the above equations that the convergence speed of the Sign-LMS algorithms is
slower than that of standard LMS, and the SSE of Sign-LMS is larger than that of standard LMS.
Therefore, Sign-LMS algorithms are useful where computational resources matter more than
performance. In ANC, we often have a large input signal vector and at the same time require
real-time processing from the adaptive filter. In this case, BLMSFFT can be used, which incurs
less computational overhead through fewer multiplications than standard LMS. In BLMSFFT, the
input signal is first transformed into the frequency domain and the filter coefficients are
also updated in the frequency domain. In the standard LMS filter, the coefficients are updated
sample by sample, which is better for performance but increases the computational overhead and
takes more time. In the BLMSFFT adaptive filter, the block size and the filter length are the
same, and the coefficients are updated block by block; a time-domain sketch of this block
update is given below.
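The following sketch shows the time-domain form of the block update for clarity (the FFT
variant evaluates the same convolutions in the frequency domain; x, d, mu and N are as in
lms_sketch, and the block length equals the filter length N, as stated above):

% Block LMS sketch: coefficients stay frozen within each block of N samples
% and are updated once per block with the accumulated gradient.
w = zeros(N,1); xbuf = zeros(N,1); L = length(x); e = zeros(L,1);
for b = 0:floor(L/N)-1
    g = zeros(N,1);                  % gradient accumulated over the block
    for m = 1:N
        n = b*N + m;
        xbuf = [x(n); xbuf(1:N-1)];
        e(n) = d(n) - w' * xbuf;
        g = g + e(n) * xbuf;
    end
    w = w + (2*mu/N) * g;            % one coefficient update per block
end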
5.3 Implemented Adaptive Filter Applications
We discussed the applications of adaptive filters earlier. In this project, we have
implemented the following applications.
5.3.1 Adaptive Noise Cancellation (ANC)
In adaptive noise cancellation, we have a measured signal that contains primary noise from the
same signal source. In addition, we have a reference noise available that is (knowingly or
unknowingly) correlated with the primary noise contained in the measured signal. The reason
for using the reference noise is that we want to adaptively estimate how much undesired noise
is contained within the primary measured signal. Because of the adaptive reference noise, the
necessary noise reduction can be estimated through real-time experiment to ensure the best
quality of the desired signal.
If x(n) is the primary measurement signal, which contains both the desired signal s(n) and
the noise v(n) from the same signal source, then

x(n) = s(n) + v(n)

If we have a reference noise g(n) which is correlated with the noise v(n), then

e(n) = {s(n) + v(n)} − g(n)
e(n) ≈ s(n)
In the following figure, a reference noise is subtracted from the measured signal to obtain
the error signal, and this error signal is the approximation of the desired signal.
[Block diagram: the correlated noise g(n) feeds an FIR filter whose coefficients are updated
by the adaptive control algorithm in a feedback loop; the filter output y(n) is subtracted
from the measurement signal x(n) = s(n) + v(n) to give the desired error signal
e(n) = x(n) − y(n) ≈ s(n).]
Figure 16: Adaptive Noise Cancellation
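A minimal end-to-end sketch of this setup, reusing lms_sketch from Section 5.2 (the signals
here are synthetic placeholders, not the project's speech data, and the correlating filter
coefficients are arbitrary):

n = (0:3999)';
s = sin(2*pi*0.01*n);                 % desired signal s(n)
v = 0.5*randn(size(n));               % primary noise v(n)
x = s + v;                            % measured signal x(n) = s(n) + v(n)
g = filter([1 0.5 0.2], 1, v);        % reference noise correlated with v(n)
[y, e] = lms_sketch(g, x, 0.01, 16);  % the filter estimates v(n) from g(n)
% e = x - y approximates s(n): the error output is the recovered signal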
5.3.2 Adaptive Line Enhancement (ALE) or FIR Linear Prediction
Adaptive line enhancement is used when a narrowband desired signal is mixed with wideband
undesired noise and, at the same time, we have no knowledge about the wideband noise. In this
scenario, we delay the received signal slightly, but by enough to de-correlate the wideband
noise, and then use an FIR linear predictor to estimate the desired narrowband signal. We then
subtract this estimated narrowband signal from the primary signal to obtain the estimation
error, and reducing this error yields the enhanced desired narrowband signal. Therefore, the
quality of the enhanced narrowband signal depends on the performance of the FIR linear
predictor.
From a received signal v(n), where wideband noise w(n) masks the desired narrowband signal
x(n), we want to enhance the narrowband desired signal x(n). Then

v(n) = x(n) + w(n)

x̂(n) = ∑ h(k) v(n − D − k), with the sum taken over k = 0, 1, …, M − 1

e(n) = v(n) − x̂(n) = ŵ(n)

To get the optimal FIR linear predictor coefficients,

∑ h(k) rvv(l − k) = rvv(l + D), k = 0, 1, …, M − 1; l = 0, 1, …, M − 1

The expected value of the right-hand side of the above equation is the statistical
autocorrelation of the narrowband signal x(n), which can be seen as follows:

rvv(l + D) = ∑ v(n) v(n − l − D), summed over n = 0, …, N
= ∑ [w(n) + x(n)] [w(n − l − D) + x(n − l − D)]
= rww(l + D) + rxx(l + D) + rwx(l + D) + rxw(l + D)
= 0 + rxx(l + D) + 0 + 0 (assumed: the noise is de-correlated at lag l + D and uncorrelated with x(n))
= rxx(l + D) = γxx(l + D)
In the following figure, the primary signal is delayed to de-correlate the wideband noise and
then fed into a linear FIR predictor to best estimate the narrowband desired signal x(n); this
estimate is used to form the estimated wideband error. Subsequently, the error is reduced and
the enhanced narrowband desired signal x(n) is obtained.
[Block diagram: the received signal v(n) = x(n) + w(n), in which wideband noise w(n) masks the
narrowband x(n), passes through a decorrelation delay to give v(n − D), which feeds the FIR
filter; the filter output is the estimated narrowband (enhanced narrowband output), the
estimated wideband error signal e(n) drives the adaptive control algorithm, and the updated
coefficients close the feedback loop.]
Figure 17: Adaptive Line Enhancement
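A minimal sketch of the ALE setup with lms_sketch (synthetic placeholder signals; the delay D
and the filter order are illustrative):

n  = (0:3999)';
xn = sin(2*pi*0.05*n);                    % narrowband desired signal x(n)
wn = randn(size(n));                      % wideband noise w(n)
v  = xn + wn;                             % received signal v(n) = x(n) + w(n)
D  = 10;                                  % decorrelation delay
vd = [zeros(D,1); v(1:end-D)];            % delayed input v(n - D)
[xhat, e] = lms_sketch(vd, v, 0.001, 32); % xhat approximates the narrowband x(n)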
5.3.3 System Identification or Modelling (SI)
System identification is the modelling, or extraction, of the impulse response of an unknown
system by replicating a similar impulse response in an adjacent FIR filter. The input signal
sequence x(n) is fed into both the unknown system and the adjacent FIR filter. The output
sequence y(n) of the FIR filter is subtracted from the unknown system's output sequence d(n),
and the error sequence e(n) is obtained. The FIR filter coefficients are then adjusted so as
to minimize this error. The optimally minimized coefficients replicate, or approximate, the
impulse response of the unknown system. Thus, the unknown system's impulse response is
modelled without any prior knowledge using an adaptive FIR filter.
To model an unknown system with an FIR filter having M adjustable coefficients:

FIR filter with M coefficients: y(n) = ∑ h(k) x(n − k), summed over k = 0, 1, …, M − 1

Unknown system's output: d(n)

Error sequence: e(n) = d(n) − y(n)

Now, to get the minimized or optimized coefficients h(k) with N + 1 observations, we minimize

ξM = ∑ [d(n) − ∑ h(k) x(n − k)]², with the outer sum over n = 0, …, N and the inner sum over k = 0, …, M − 1

which leads to the normal equations

∑ h(k) rxx(l − k) = rdx(l), k = 0, …, M − 1; l = 0, 1, …, M − 1

where rxx(l) is the autocorrelation of the sequence x(n) and rdx(l) is the cross-correlation
of the system output with the input sequence.
In the figure, we can clearly see that the input signal is provided to both the FIR filter and
the unknown system. The FIR filter is initialized with some best-guess coefficients. Then,
from the error signal, we can measure the deviation of the current coefficients from the
desired coefficients and calculate the new, corrected coefficients.
[Block diagram: the input signal x(n) drives both the unknown time-variant system, whose
output is the desired signal d(n), and the FIR/IIR filter, whose output is y(n); the error
signal e(n) feeds the adaptive control algorithm, and the updated coefficients close the
feedback loop.]
Figure 18: System Identification using Adaptive Filter
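A minimal SI sketch with lms_sketch (the unknown system here is a placeholder generated with
fir1, which the project's own SI function also uses):

h_true = fir1(15, 0.4);                          % unknown system: order-15 FIR
x = randn(4000, 1);                              % white excitation input
d = filter(h_true, 1, x) + 0.01*randn(4000, 1);  % noisy system output
[y, e, w] = lms_sketch(x, d, 0.005, 16);         % w converges towards h_true(:)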
Chapter 6
MATLAB and Development Tools
6.1 MATLAB GUI Design Methodology
MATLAB is resource-rich and offers several alternatives for developing software. For example,
to develop a GUI in MATLAB we can either use the GUI preform GUIDE or write the GUI
programmatically. Moreover, for run-time data storage we can use either the guidata() function
or the setappdata()/getappdata() functions. Furthermore, for function management we can use
either a multiple-function or a nested-function approach. In addition, for the GUI structural
blocks we can use either a single-panel or a nested-panels approach. Each of these
alternatives has its own trade-offs and needs to be chosen according to the needs of the
software. Some of these alternatives are discussed in more detail in the following sections.
6.1.1 Compact data representation
The goal of compact data representation is to make optimal use of the space available within a
data display and to reuse the same space to display multiple data sets. In MATLAB, this can
easily be accomplished using the “Visible” property. When the “Visible” property is “on”, the
corresponding GUI elements are visible, and vice versa. Therefore, a set of GUI elements can
be switched between invisible and visible within an execution instance, and this flexibility
can be used to place multiple GUI elements at the same spatial coordinates and make each
visible only when needed, as the sketch below illustrates.
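A minimal sketch of this reuse pattern (assuming a parent figure handle such as the myHandle
created in Chapter 7; the panel names are illustrative):

% Two panels occupy the same coordinates; only one is visible at a time,
% so the same screen space is reused for different data.
panelA = uipanel(myHandle,'Position',[.0 .0 .3 .7],'Visible','on');
panelB = uipanel(myHandle,'Position',[.0 .0 .3 .7],'Visible','off');
% Later, e.g. inside a callback, swap which panel the user sees:
set(panelA,'Visible','off');
set(panelB,'Visible','on');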
6.1.2 Aesthetical data representation
The overall aesthetics of a software workspace are as important for concentrating on work as
the aesthetics of a physical workspace. These aesthetic matters always influence humans,
because the mind is drawn to beauty. Therefore, the most used data should be placed at the
focal point of convenient eye focus. Data should be represented with pleasant but eye-friendly
colors. Moreover, in a GUI, data should be spread in a coherent manner so that there is little
congestion in visibility even with more data. All of these aesthetic aspects were attended to
in the developed software.
6.1.3 GUI Development using “GUIDE”
In MATLAB, “GUIDE” is a GUI development form which is pre-developed. It allows it’s user
to place GUI elements in the GUI using drag and drop method. Besides, it also allows user to
extend the functionality of GUI elements using further programming. However, there are both
advantages and disadvantages using this approach and these are discussed as follows:
6.1.3.1 Advantages:
Less time-consuming
Best for prototyping
Best for short-term use
Best for simpler GUI
Easy solution for novice computing professionals or engineers
6.1.3.2 Disadvantages:
Does not offer a full understanding of GUI construction
There are cases where fixing GUI errors can take more time than in a programmatic implementation
Requires keeping track of two files, i.e. “.m” and “.fig”, for every GUI
GUIDE-generated code is messy and large
Small changes in the GUI cause substantial reordering of the corresponding GUI code, so it is
not worthwhile to track the code in a source code control system (e.g. CVS)
6.1.4 Programmatic GUI Development
In MATLAB, a GUI can be developed programmatically. This approach has huge advantages
but as well contains some drawbacks. However, the advantages overcome its drawbacks and
therefore, we have used we have the developed the GUI in these project programmatically. The
advantages and disadvantages are discussed as follows.
6.1.4.1 Advantages:
Faster overall if implemented with good experience and expertise
Best for applications that will be used long-term
Best for applications that will evolve with more complexity in the future
Allows the use of nested functions
Hand-coding the GUI results in lucid, simpler and easy-to-follow code
Easy deployment; for example, it is easier to upgrade and update the GUI when there are fewer files and less code
Best solution for competent or advanced computing professionals, engineers, scientists and researchers
The GUI layout can be controlled programmatically, which makes proper adaptability to various screen sizes possible
GUI-related code can be reused
Easy to keep track of changes made to earlier versions of the code through a source code control system (e.g. CVS)
6.1.4.2 Disadvantages
Longer learning curve
Has to start from scratch
Takes more time to create a simple GUI compared to GUIDE
6.2 Structural GUI Design Tools
The structure of GUI depends on the extent and type of GUI elements are used to construct it.
We can formulate the GUI structure in two categories, namely, “skin” structure and “code”
structure. For skin structure, two notions are important in the development of GUI, these are:
1. GUI elements 2. How these elements are placed within GUI. We have used “nested panels”
in this project that has shaped both “skin” and “code” structure of the GUI. Moreover, we have
also used “nested functions” in this project that has mostly shaped the “code” structure. Both
“nested panels” and “nested functions” have their own trade-offs and are discussed as follows.
6.2.1 Nested Panels
“Nested panels” means putting several panels within a single parent panel. A parent panel can
have several level of child panels based on the degree of nesting. In other words, we can say
that, a parent panel can have child panels and grand-child panels which in turns result in several
parent panels within a grandparent panel. There are both advantages and disadvantages of using
“nested panels” and are discussed as follows. In this project, we have used “nested panels”
because its advantages overcomes its disadvantages.
6.2.1.1 Advantages
Realignment only has impact within the child panel; GUI elements in outer panels stay intact
Offers locked GUI elements within a certain GUI area and therefore prevents accidental realignment
All components within a parent panel can easily be relocated with exactly the same alignment ratio
Facilitates modular GUI development
Facilitates re-use of code in another symmetric panel with the same alignment ratio
6.2.1.2 Disadvantages
If the parent panels need to be reorganized, the whole GUI layout needs to be re-implemented
6.2.2 Nested Functions
“Nested functions” means putting several or hundreds of child functions within a single
parent function. However, there are advantages and disadvantages for this approach and are
discussed as follows.
6.2.2.1 Advantages
It is possible to use variables that are not explicitly passed as input arguments, namely
externally scoped variables from the parent function.
A handle created in the parent function can be used for data storage purposes from within the
nested functions.
6.2.2.2 Disadvantages
When the code becomes larger, one function containing several hundred nested functions becomes
inconvenient for the programmer.
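A minimal sketch of the first advantage (illustrative names; counter is an externally scoped
variable shared between the parent function and the nested callback):

function parentGui()
    counter = 0;                       % externally scoped variable
    uicontrol('Style','pushbutton','String','Count',...
              'Callback',@increment);
    function increment(hObject, eventdata)
        counter = counter + 1;         % visible without being passed in
        disp(counter);
    end
end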
6.3 Used Functions
In MATLAB, there are cases that can only be solved using one unique function, with no
alternatives available. However, there are also cases that can be solved using several
alternative functions, and the user needs to make a choice based on need and convenience.
Main GUI window: using “figure” function.
GUI element handling: using “function handle” of each GUI element
GUI element customization: using each function’s associated “Property” and “Values”.
GUI elements: “uimenu”, “uitoolbar”, “uipushtool”, “uipanel”, “uicontrol”, “axes”,
“getappdata”, “uitable”, “uigetfile”
Run-time data storage: “guidata”, “setappdata”
Callback event execution: “Callback” and associatively directed functions
Data Loading: “dlmwrite”, “fileparts”
Learning Curve Calculation: “msesim” function is used
Chapter 7
Algorithm and Software Development
7.1 Graphical User Interface (GUI) Structure and Elements
The Graphical User Interface (GUI) is composed of several elements, such as a menubar, menus,
a toolbar, push buttons, popup menus, sliders, axes, text and edit boxes, as well as design
structures such as panels. In the previous chapter, we briefly mentioned these. All of these
elements are placed in the coordinate system of the main parent figure. In other words, the
whole MATLAB GUI is an instance of the figure function, which contains various sub-components
to accomplish the tasks of the software.
7.1.1 Main GUI Window or Figure

In MATLAB, the whole GUI is realized within a single function called “figure”. The function is
called with the desired arguments and in turn generates a blank GUI window in accordance with
the passed properties. This blank GUI window has a horizontal coordinate and a vertical
coordinate. We then place several GUI elements into this blank GUI window using these
coordinates. The “figure” function returns a handle to the figure, i.e. to the blank GUI
window. We have used this handle for placing other GUI elements into the blank parent GUI
window. In the following code, we first declare the main parent “figure” and then place the
menubar, menus and toolbar into the generated main GUI window.
myHandle = figure('Visible','off','HandleVisibility','callback',...
    'NumberTitle','off','MenuBar','None','Resize','off',...
    'Name','A MATLAB Simulation Software for Key Adaptive Algorithms and Applications, Developed By Main Uddin-Al-Hasan',...
    'units','normalized','outerposition',[0 0 1 1],'Visible','on');

myMenu1 = uimenu(myHandle,'Label','File');
addItem2 = uimenu(myMenu1,'Label','Load Data','Callback',@loadData);
addItem4 = uimenu(myMenu1,'Label','Close','Callback',@closeFigure);

myToolbar = uitoolbar(myHandle);
img1 = imread('new.png');
img11 = imresize(img1,[25,25]);
tool1 = uipushtool(myToolbar,'CData',img11,'Separator','on',...
    'TooltipString','Load Data','HandleVisibility','off',...
    'ClickedCallback',@loadData);
In figure 19, we can see the structure of the developed GUI. The main parent figure contains
all the GUI elements and panels.
Figure 19: Developed GUI without data
In figure 19, from the middle to the left there are four panels of dissimilar sizes. The top
two panels are child panels within a parent panel. The bottom two panels are individual panels
positioned in the main parent figure's coordinates. From the middle to the right, there are
four display panels, each of which is locked into another display parent panel. This parent
display panel is locked into the main parent figure's coordinates.
7.1.2 Nested Panelling
Figure 20: Main GUI window with some data
In figure 20, the bottom left panel of the main GUI window is populated with several child
panels, and each panel is populated with several GUI elements. In the following code, we first
declare four parent panels. All other GUI elements are placed into these four parent panels.
This nested panelling offers modular software development: if we want to swap the left and
right halves of the above GUI, we only need to change the four coordinate values of the
corresponding parent panels and can disregard the coordinate locations of all other GUI
elements. That is to say, when we move a parent panel, we move all the child panels within it,
and their internal layout stays unchanged.
% Creating Parent Panels
DataAndSelection = uipanel(myHandle,'BorderType','none',...
    'BackgroundColor','white','Position',[.0 .70 .5 .30]);
AlgorithmParameter = uipanel(myHandle,'BorderType','none',...
    'BackgroundColor','white','Position',[.0 .0 .3 .70]);
titleData = uicontrol(AlgorithmParameter,'Style','text',...
    'String','Algorithm Parameters','BackgroundColor',[.5 .5 1],...
    'Units','normalized','FontSize',12,'Position',[.0 .95 1 .05]);
LoadedDataDisplay = uipanel(myHandle,'BorderType','none',...
    'Position',[.3 .0 .2 .70]);
SignalDisplay = uipanel(myHandle,'BorderType','none',...
    'Position',[.5 .0 .5 1]);
In the following code, we have created two child panels. In the first child panel, we have placed
popup menus, default data load option and execution push button. In the second child panel,
we have placed GUI elements for ALE and SI application data input.
% Creating child panels for Data&Selection
AlgorithmsAndApplications = uipanel(DataAndSelection,'BorderType','line',...
    'HighlightColor',[.5 .5 1],'ShadowColor',[.5 .5 1],...
    'FontSize',12,'FontWeight','normal','Position',[.0 .0 .35 1]);
titleData = uicontrol(AlgorithmsAndApplications,'Style','text',...
    'String','Algorithms & Applications','BackgroundColor',[.5 .4 1],...
    'Units','normalized','FontSize',12,'Position',[.0 .876 1 .124]);
ApplicationData = uipanel(DataAndSelection,'Visible','off',...
    'BorderType','line','FontSize',12,'HighlightColor',[.5 .6 1],...
    'ShadowColor',[.5 .6 1],'Position',[.35 .0 .65 1]);
titleData = uicontrol(ApplicationData,'Style','text',...
    'String','Application Data','BackgroundColor',[.5 .7 1],...
    'Units','normalized','FontSize',12,'Position',[.0 .876 1 .124]);
In the following code, we have created child panels for each class of algorithms. Then, in each
child panel for each class, we have placed grand-child panels for each type of individual
algorithm.
% Creating child panels for each Algorithm Type
LMSAlgorithmParameter = uipanel(AlgorithmParameter,'Visible','off',...
    'BorderType','none','Position',[.0 .0 1 .95]);
RLSAlgorithmParameter = uipanel(AlgorithmParameter,'Visible','off',...
    'BorderType','none','Position',[.0 .0 1 .95]);
APAlgorithmParameter = uipanel(AlgorithmParameter,'Visible','off',...
    'BorderType','none','Position',[.0 .0 1 .95]);
FDAlgorithmParameter = uipanel(AlgorithmParameter,'Visible','off',...
    'BorderType','none','Position',[.0 .0 1 .95]);
LBAlgorithmParameter = uipanel(AlgorithmParameter,'Visible','off',...
    'BorderType','none','Position',[.0 .0 1 .95]);
In the following code, we create grandchild panels for each of the LMS-based algorithms inside
the LMS child panel. After that, we populate each of these panels with the corresponding
algorithm's parameters.
% Creating child panels for LMS Based Algorithms
lms = uipanel(LMSAlgorithmParameter,'Title','LMS','Position',[.0 .66 .333 .33]);
nlms = uipanel(LMSAlgorithmParameter,'Title','NLMS','Position',[.333 .66 .333 .33]);
llms = uipanel(LMSAlgorithmParameter,'Title','LLMS','Position',[.666 .66 .333 .33]);
adjlms = uipanel(LMSAlgorithmParameter,'Title','ADJLMS','Position',[.0 .33 .333 .33]);
blms = uipanel(LMSAlgorithmParameter,'Title','BLMS','Position',[.333 .33 .333 .33]);
blms_fft = uipanel(LMSAlgorithmParameter,'Title','BLMS-FFT','Position',[.666 .33 .333 .33]);
dlms = uipanel(LMSAlgorithmParameter,'Title','DLMS','Position',[.0 .0 .333 .33]);
filtxlms = uipanel(LMSAlgorithmParameter,'Title','FILT-XLMS','Position',[.333 .0 .333 .33]);
sDESlms = uipanel(LMSAlgorithmParameter,'Title','SD/SE/SS','Position',[.666 .0 .333 .33]);
In figure 21, we can see the internal blocks of the resulting GUI. The position of each block
in this figure corresponds exactly to the developed GUI.
[Block diagram: the main parent figure contains the menubar (menus, sub-menus), the toolbar
and four parent panels: “Selection, Execution and Application Data” (child panels for
selecting applications and algorithms and for entering ALE and SI data), “Algorithm
Parameters” (nine parameter child panels), “Loaded Data Display”, and “Data Display” (child
panels for the original signal, all estimated signals, all error signals and all learning
curves, each with a grandchild panel for axis customization and listening).]
Figure 21: Internal GUI Blocks
The benefit of modular GUI management is clearly visible in figure 21. For example, if we want
to swap “Child Panel 1” and “Child Panel 2”, we only need to change their “Position” property
coordinates. All the GUI elements contained within these two child panels stay unchanged.
7.1.3 Popup Menu or Listing
A menubar is a common element of modern software GUIs, and the convention is to place it at
the top of the software. However, space there is limited, and a popup menu is a good
alternative for showing a listing. Moreover, multiple popup menus can be locked into a single
place and then accessed conveniently using the “Visible” property of the GUI. We have used
this property to show several popup menus in a small space. A small block of the popup-menu
code is given below. Here, we first declare the list, then create the popup menu and assign
the list to the “String” property of the popup function. After that, we fetch the currently
selected value and the associated string from the second column of the list. This fetched
string value is later used to decide which configuration of the function is called.
popupLMSClass = {...   % LMS Based Algorithms
    '',                        '';
    'LMS FIR',                 'LMS';
    'Normalized LMS FIR',      'NLMS';
    'Leaky LMS FIR',           'LLMS';
    'Adjoint LMS FIR',         'ADJLMS';
    'Block LMS FIR',           'BLMS';
    'FFT-based Block LMS FIR', 'BLMSFFT';
    'Delayed LMS FIR',         'DLMS';
    'Filtered-x LMS FIR',      'FILTXLMS';
    'Sign-Data LMS FIR (SD)',  'SD';
    'Sign-Error LMS FIR (SE)', 'SE';
    'Sign-Sign LMS FIR (SS)',  'SS'};

selectLMSClass = uicontrol(AlgorithmsAndApplications,'Visible','off',...
    'Style','popupmenu','Units','normalized',...
    'String',popupLMSClass(:,1),'HandleVisibility','callback',...
    'Position',[.05 .44 .83 .1],'Callback',@AlgCustomizedVisibility);

whatLMSAlgorithm = popupLMSClass{get(selectLMSClass,'Value'), 2};
In total, three popup menus are visible at any execution instance, and they need to be
selected in descending order to be used correctly. That is to say, when an option is selected
from the first popup menu, the second popup menu is displayed based on that selection, and
similarly, based on the second selection, the third popup menu is displayed. The first popup
menu shows the applications, the second shows the algorithm class types and the third shows
the individual algorithms.
[Flow chart: Popup Menu 1 (Select Application): 1. Adaptive Noise Cancellation (ANC),
2. Adaptive Line Enhancement (ALE), 3. System Identification (SI). Popup Menu 2 (Select
Algorithm Group or Comparison): 1. Run & Compare Algorithms, 2. LMS Based FIR Filter,
3. RLS Based FIR Filter, 4. Affine Projection Based FIR Filter, 5. Frequency Domain Based FIR
Filter, 6. Lattice Based FIR Filter. Depending on the option chosen, one of six third-level
popup menus is shown:
Popup Menu 3(1), Run and Compare: All LMS / RLS / AP / FD / Lattice Based Algorithms, or LMS / RLS / AP / FD / Lattice Based Algorithms in Group
Popup Menu 3(2), LMS Based: LMS, NLMS, LLMS, ADJLMS, BLMS, BLMSFFT, DLMS, FILTXLMS, SD, SE, SS
Popup Menu 3(3), RLS Based: RLS, QRDRLS, HRLS, HSWRLS, SWRLS, FTF
Popup Menu 3(4), AP Based: AP, APRU, BAP
Popup Menu 3(5), FD Based: PBFDAF, PBUFDAF, TDAFDCT, TDAFDFT, UFDAF
Popup Menu 3(6), Lattice Based: GAL, LSL, QRDLSL]
Figure 22: Popup menu execution flow
In figure 22, the orderly execution of the popup menus is shown along with the content of each
popup menu. The first popup menu location has a single popup menu that shows the type of
application. The second popup menu location also has a single popup menu that shows the class
of algorithms and the comparison mode. But we have placed six popup menus in the third popup
menu location, and each of these menus is connected to the corresponding entry in the popup
menu at the second location.
7.1.4 Slider Control

We have used sliders in the developed GUI. The user input values for the variable parameters
(i.e. step size, filter order) of each algorithm can be controlled easily and conveniently
using these sliders. The sliders work in real time: when a slider position changes, the
associated value of the corresponding parameter changes, and when the parameter value is
changed, the associated slider position is updated. This auto-update is accomplished using the
“Callback” property of both the “edit” and “slider” GUI elements. When there is a change in an
“edit” box, the associated “Callback” function executes; inside it, we fetch the current
“edit” box value and use it to update the slider position. When there is a change in a
“slider”, its associated “Callback” function executes and, in a similar way, updates the
corresponding value in the “edit” box. In the following code, the first function executes when
there is a change in the corresponding “edit” box and the second when there is a change in the
corresponding “slider”. Similarly, the third and fourth functions work for the order parameter
of the algorithm.
function editLMSmu(hObject, eventdata)
    set(lmsMuSl1,'Value',str2double(get(lmsDF1,'string')));
end

function sliderLMSmu(hObject, eventdata)
    sliderValue = get(lmsMuSl1,'Value');
    set(lmsDF1,'string',sliderValue);
end

function editLMSorder(hObject, eventdata)
    set(lmsOrderSl1,'Value',str2double(get(lmsDF2,'string')));
end

function sliderLMSorder(hObject, eventdata)
    sliderValue = get(lmsOrderSl1,'Value');
    set(lmsDF2,'string',sliderValue);
end
In the following figure, we can see how the “edit” box and the “slider” interact with each
other to update the corresponding value in real time.
[Flow chart: changing the parameter value in the edit box executes the associated callback
function, which updates the slider position accordingly; changing the slider position executes
the associated callback function, which updates the parameter value accordingly.]
Figure 23: Real-time slider control
7.1.5 Application and Parameter Data Input
In the developed software, there are two types of user input, namely application data input
for ALE and SI, and variable parameter data input for each algorithm. In the following code,
we first create a text label (style “text”) for each item of data and then use an “edit” box
to insert the data.
% Data Fields for Signal 1
AmplitudeS1 = uicontrol(Signal1,'Style','text','String','Amplitude',...
    'units','normalized','Position',[.1 .80 .3 .15]);
SignalFreqS1 = uicontrol(Signal1,'Style','text','String','Frequency',...
    'units','normalized','Position',[.09 .6 .3 .15]);
SampleTimeS1 = uicontrol(Signal1,'Style','text','String','Sample Time',...
    'units','normalized','Position',[.07 .4 .3 .15]);
SamplingRateS1 = uicontrol(Signal1,'Style','text','String','Sampling Rate',...
    'units','normalized','Position',[.0 .2 .4 .15]);
PhaseS1 = uicontrol(Signal1,'Style','text','String','Phase',...
    'units','normalized','Position',[.13 .0 .3 .15]);

AmplitudeDFS1 = uicontrol(Signal1,'Style','edit','string',2,...
    'BackgroundColor','white','units','normalized','Position',[.45 .79 .4 .15]);
SignalFreqDFS1 = uicontrol(Signal1,'Style','edit','string',1200,...
    'BackgroundColor','white','units','normalized','Position',[.45 .59 .4 .15]);
SampleTimeDFS1 = uicontrol(Signal1,'Style','edit','string',3000,...
    'BackgroundColor','white','units','normalized','Position',[.45 .39 .4 .15],...
    'Callback',@updateSampleTimeForOtherSignal1);
SamplingRateDFS1 = uicontrol(Signal1,'Style','edit','string',1000,...
    'BackgroundColor','white','units','normalized','Position',[.45 .19 .4 .15]);
PhaseDFS1 = uicontrol(Signal1,'Style','edit','string',2,...
    'BackgroundColor','white','units','normalized','Position',[.45 .01 .4 .15]);
In the following code, we create text labels for both the “edit” boxes and the corresponding
sliders, then use “edit” boxes to enter the variable algorithm parameters and sliders to
conveniently increase or decrease those values.
% Data Fields for LMS
lmsT1 = uicontrol(lms,'Style','text','String','mu',...
    'units','normalized','Position',[.14 .8 .2 .15]);
lmsT2 = uicontrol(lms,'Style','text','String','order',...
    'units','normalized','Position',[.1 .59 .21 .15]);
lmsDF1 = uicontrol(lms,'Style','edit','BackgroundColor','white',...
    'units','normalized','Position',[.4 .8 .5 .15],'Callback',@editLMSmu);
lmsDF2 = uicontrol(lms,'Style','edit','BackgroundColor','white',...
    'units','normalized','Position',[.4 .59 .5 .15],'Callback',@editLMSorder);
lmsT3 = uicontrol(lms,'Style','text','String','mu',...
    'units','normalized','Position',[.14 .34 .2 .15]);
lmsT4 = uicontrol(lms,'Style','text','String','order',...
    'units','normalized','Position',[.1 .14 .21 .15]);
lmsMuSl1 = uicontrol(lms,'Style','slider','Min',0,'Max',5,...
    'SliderStep',[0.05 0.1],'units','normalized',...
    'Position',[.4 .35 .5 .15],'Callback',@sliderLMSmu);
lmsOrderSl1 = uicontrol(lms,'Style','slider','Min',0,'Max',1000,...
    'SliderStep',[.001 .005],'units','normalized',...
    'Position',[.4 .15 .5 .15],'Callback',@sliderLMSorder);
[Flow charts: when the sample time of one signal is changed, the associated callback changes
the other signal's sample time and the noise signal's sample time to the same value; if
nothing is changed, the default sample time is fetched. The same logic runs when the noise
signal's sample time is changed, keeping Signal 1 and Signal 2 in step.]
Figure 24: Application data input consistency
For the application data input for ALE and SI, the sample times of Signal 1, Signal 2 and the
additive noise must be the same in order for the computation to be correct. Therefore, we have
used the same method as in the “edit-slider” pair to maintain automatic consistency among
these data fields. For example, if we change the Signal 1 sample time, the sample times of
both Signal 2 and the noise automatically become the same as Signal 1's. The same holds for
Signal 2 and the noise: when the sample time of one of them is changed, the sample times of
the other two change as well. A sketch of such a consistency callback is given below.
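This is a sketch only: the project registers the callback as @updateSampleTimeForOtherSignal1
(see Section 7.1.5), and using NoiseColumn as the noise-length field is an assumption based on
the ALE code in Section 7.1.9.2.

function updateSampleTimeForOtherSignal1(hObject, eventdata)
    st = get(SampleTimeDFS1,'string');   % sample time entered for Signal 1
    set(SampleTimeDFS2,'string',st);     % keep Signal 2 in sync
    set(NoiseColumn,'string',st);        % keep the noise length in sync
end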
7.1.6 Data storage and retrieval

In the developed software, the data in use fall into two categories: firstly, loaded or
external data; secondly, data generated by the software after processing. The external speech
data, i.e. the loaded data, are stored via the guidata() storage function of the main GUI
handle for further processing. On the other hand, the software-generated data, such as the
estimated signal, the error signal and the learning curve, are stored in the handle of the
corresponding display axes using the setappdata() function. The software-generated data are
stored so that the processed signals can be played whenever needed after processing, or
displayed in a new figure. In the following code, we load the speech data for ANC and save it
via the guidata() function of the main figure handle.
function loadData(hObject, eventdata)
    [filename,filepath] = uigetfile('*.*','All Files','Select your Data or Files');
    [path,name,ext] = fileparts(filename);
    if(strcmp(ext,'.mat'))
        data = matfile(filename);
        dlmwrite('inputData.dat',[data.d data.x]);
        myData = load('inputData.dat');
        guidata(myHandle,myData);
        setappdata(AncData,'SignalWithNoise',data);
        updateDataTable();
    else
        myData = load(filename);
        guidata(myHandle,myData);
        updateDataTable();
    end
end
In the following code, we fetch back the loaded and stored data and display them in a table
generated by the “uitable” function. This “uitable” GUI element is placed in the third main
parent panel.
function updateDataTable(hObject, eventdata)
    % Setting uitable in Statistical and Data Analysis
    columnFormat = {'numeric', 'numeric'};
    columnEdit = [true true];
    columnWidth = {60 60};
    inputRawData = guidata(myHandle);
    colnames = {'1','2','3'};
    inputDataTable = uitable(StatisticalAndDataAnalysis,'Units','normalized',...
        'Position',[.0 .0 1 .95],'Data',inputRawData,...
        'ColumnName',colnames,'ColumnFormat',columnFormat,...
        'ColumnWidth',columnWidth,'ColumnEditable',columnEdit,...
        'ToolTipString','Loaded Signal Data');
end
In the following code, we fetch back stored software-generated data (e.g. the estimated
signal) to be played. Similarly, the error signal and the learning curve data can be fetched
and listened to or displayed, respectively.
function playEstimatedSound(hObject, eventdata)
    sEs = getappdata(EstimatedSignal,'EstimatedSignalData');
    sound(sEs);
end
7.1.7 Data display axes

We display the processed data in their corresponding axes, and each axes object is placed into
the corresponding child panel. In the following code, we create four axes for the four types
of data. We then give one example of how the “No Data Available” status is shown. This status
text is not a functional requirement of the software, but it is good for aesthetic purposes
and for correctly showing the status of the software.
% Setting axes in Panels of Signal Display
OriginalSignal = axes('Parent',DisplayPanel1,'Units','normalized',...
    'Position',[.0 .0 .9 .85]);
EstimatedSignal = axes('Parent',DisplayPanel2,'Position',[.0 .0 .9 .85]);
ErrorSignal = axes('Parent',DisplayPanel3,'Position',[.0 .0 .9 .85]);
LearningCurve = axes('Parent',DisplayPanel4,'Position',[.0 .0 .9 .85]);

noDataNotice1 = uicontrol(DisplayPanel1,'Style','text','Units','normalized',...
    'Position',[.25 .3 .4 .2],'String','No Data Available',...
    'FontWeight','bold','ForegroundColor',[.8 .8 .8],'BackgroundColor','w');

% Setting axis customization child panels in SignalDisplay
AxisCustomizationPanelOS = uipanel(DisplayPanel1,'BorderType','none',...
    'BackgroundColor',[.5 .8 .5],'Position',[.9 .0 .1 .85]);
AxisCustomizationPanelES = uipanel(DisplayPanel2,'BorderType','none',...
    'BackgroundColor',[.5 .8 .5],'Position',[.9 .0 .1 .85]);
AxisCustomizationPanelER = uipanel(DisplayPanel3,'BorderType','none',...
    'BackgroundColor',[.5 .8 .5],'Position',[.9 .0 .1 .85]);
AxisCustomizationPanelLC = uipanel(DisplayPanel4,'BorderType','none',...
    'BackgroundColor',[.5 .8 .5],'Position',[.9 .0 .1 .85]);
7.1.8 A block of main plotter function
In the developed software, the task of plotting data according to the options chosen from the
popup menus is implemented within a single function, using multiple conditional statements. At
the beginning of the function, we fetch the chosen options from the popup menus; these options
are used as the decision-making parameters for the conditional statements. When the conditions
are met, the corresponding functions are called. The called functions return the processed
data, which is then plotted in the corresponding axes. After plotting the generated data, the
data is saved using setappdata() so that it can be fetched back for listening or for showing
in a new figure. In the following code, we first fetch the chosen options from the popup menus
and then use them to make the plotting decisions.
function dataPlot(hObject, eventdata)
    % Getting which application is selected
    whatApplication = popupApplications{get(selectApplications,'Value'), 2};
    % Getting which algorithm class is selected
    whatAlgorithmType = popupAlgorithmTypes{get(selectAlgorithmTypes,'Value'), 2};
    whatRunCompare = popupRunCompare{get(selectRunCompare,'Value'), 2};
    % Getting which adaptive algorithm is selected
    whatLMSAlgorithm = popupLMSClass{get(selectLMSClass,'Value'), 2};
    whatRLSAlgorithm = popupRLSClass{get(selectRLSClass,'Value'), 2};
    whatAPAlgorithm = popupAPClass{get(selectAPClass,'Value'), 2};
    whatFDAlgorithm = popupFDClass{get(selectFDClass,'Value'), 2};
    whatLBAlgorithm = popupLBClass{get(selectLBClass,'Value'), 2};

    if (strcmp(whatApplication,'ANC') && strcmp(whatAlgorithmType,'RC') && ...
            strcmp(whatRunCompare,'LMSALL'))
        % Corresponding code for the above condition is written here.
    elseif (strcmp(whatApplication,'ANC') && strcmp(whatAlgorithmType,'LMST') && ...
            strcmp(whatLMSAlgorithm,'LMS'))
        % Erasing previously plotted data
        plot(OriginalSignal,cla); plot(EstimatedSignal,cla);
        plot(ErrorSignal,cla); plot(LearningCurve,cla);
        % Getting original, estimated and error signals and the learning curve
        [OS,ErS,EsS,lc] = applicationAnc_lms();
        % Plotting the fetched data
        set(noDataNotice1,'Visible','off');
        plot(OriginalSignal,OS,'b'); title(OriginalSignal,'Original Signal');
        set(noDataNotice2,'Visible','off');
        plot(EstimatedSignal,EsS,'g'); title(EstimatedSignal,'Estimated Signal');
        set(noDataNotice3,'Visible','off');
        plot(ErrorSignal,ErS,'r'); title(ErrorSignal,'Error Signal');
        xlabel(ErrorSignal,'iteration');
        set(noDataNotice4,'Visible','off');
        plot(LearningCurve,lc); title(LearningCurve,'Learning Curve');
        % Storing data so it can be retrieved and played later
        setappdata(OriginalSignal,'OriginalSignalData',OS);
        setappdata(EstimatedSignal,'EstimatedSignalData',EsS);
        setappdata(ErrorSignal,'ErrorSignalData',ErS);
        setappdata(LearningCurve,'LearningCurveData',lc);
        setappdata(myHandle,'CurrentAlgorithms','LMS');
    end
end
7.1.9 An instance of functions for applications

We have implemented a function for each implemented adaptive filter, in accordance with its
application. This separated implementation of functions results in more code but reduces data
congestion. Below we give three example functions, implemented using the LMS algorithm, for
the ANC, ALE and SI applications.
7.1.9.1 ANC with LMS
In the following code, we first fetch the loaded speech data and separate it into the original
signal and the noise signal. After that, we fetch the parameters for the corresponding
algorithm and use them to create the adaptive filter object. Then, we use the “filter”
function to process the input signals with the created filter. The filter function returns the
output and error signals, which are returned to the caller. In this function, we also call the
“msesim” function with the created filter and the input signals, which returns four values; we
use one of these four values to plot the learning curve of the corresponding algorithm.
% ANC with LMS
function [d,E,y,lc] = applicationAnc_lms()
    % Getting loaded data
    SignalPlusNoise = getappdata(AncData,'SignalWithNoise');
    x = SignalPlusNoise.x;
    d = SignalPlusNoise.d;
    meu = str2double(get(lmsDF1,'string'));      % Getting mu from the data field
    nthOrder = str2double(get(lmsDF2,'string')); % Getting filter order from the data field
    filt = adaptfilt.lms(nthOrder,meu);
    [y,E] = filter(filt,x,d);
    [mse,meanw,w,lc] = msesim(filt,x,d);
end
7.1.9.2 ALE with LMS
In the following code, we first fetch the “Signal 1”, “Signal 2” and “Noise” data used for
generating the corresponding signals and noise. After that, we add the two signals, “Signal 1”
and “Signal 2”, and add noise to the sum. We then fetch the step size, the filter order and
the delay. After that, we create the adaptive filter from these parameter values. Afterwards,
we use the filter function and the msesim function as in the previous function to do the same
tasks.
% Adaptive Line Enhancement (ALE) with LMS
function [d,E,y,lc] = applicationAle_lms(hObject, eventdata)
    Amplitude1 = str2double(get(AmplitudeDFS1,'string'));        % Amplitude for Signal 1
    Amplitude2 = str2double(get(AmplitudeDFS2,'string'));        % Amplitude for Signal 2
    f1 = str2double(get(SignalFreqDFS1,'string'));               % Signal 1 frequency
    f2 = str2double(get(SignalFreqDFS2,'string'));               % Signal 2 frequency
    N1 = str2double(get(SampleTimeDFS1,'string'));               % Sample time for Signal 1
    N2 = str2double(get(SampleTimeDFS2,'string'));               % Sample time for Signal 2
    n1 = 0:1:N1-1;
    n2 = 0:1:N2-1;
    sampling_rate1 = str2double(get(SamplingRateDFS1,'string')); % Sampling rate for Signal 1
    sampling_rate2 = str2double(get(SamplingRateDFS2,'string')); % Sampling rate for Signal 2
    theta1 = str2double(get(PhaseDFS1,'string'));                % Phase for Signal 1
    theta2 = str2double(get(PhaseDFS2,'string'));                % Phase for Signal 2
    sample_interval1 = 1/sampling_rate1;                         % Sampling period
    sample_interval2 = 1/sampling_rate2;                         % Sampling period

    % Generating Signal 1
    w1 = 2*pi*f1;
    s1 = Amplitude1*sin(w1.*n1*sample_interval1+theta1);
    % Generating Signal 2
    w2 = 2*pi*f2;
    s2 = Amplitude2*sin(w2.*n2*sample_interval2+theta2);

    % Generating Gaussian noise
    mGauss = str2double(get(NoiseRow,'string'));
    nGauss = str2double(get(NoiseColumn,'string'));
    pGauss = str2double(get(NoisePower,'string'));
    v = 0.15*wgn(mGauss,nGauss,pGauss);

    % Adding the two signals
    s = s1+s2;
    % Adding observation noise to the signal
    d = s + v;

    % Getting mu and filter order
    meu = str2double(get(lmsDF1,'string'));
    nthOrder = str2double(get(lmsDF2,'string'));
    % Choosing delay
    delay = str2double(get(DelayDF,'string'));
    x = [zeros(1,delay) d(1,1:N1-delay)];          % delayed input
    % X = conv_mat(x,nthOrder);
    % [A,E,y] = lms_algorithm(X,d,meu,nthOrder);   % Calling LMS algorithm

    filt = adaptfilt.lms(nthOrder,meu);
    [y,E] = filter(filt,x,d);
    [mse,meanw,w,lc] = msesim(filt,x,d);
end
7.1.9.3 SI with LMS
In the following code for SI, we follow almost the same procedure as in the previous function
for ALE. One exception is that, after adding the two signals, we reshape the sum. Another
difference from the ALE function is that we create an unknown “fir” system and then use this
unknown system, together with the previously added-and-reshaped signal and the noise, to
generate the desired signal. This desired signal is then used with the adaptive filter and the
added-and-reshaped signal to get the function output. We also use the “msesim” function, as in
the previous functions.
% System Identification (SI) with LMS
function [s,E,y,lc] = applicationSi_lms(hObject, eventdata)
    Amplitude1 = str2double(get(AmplitudeDFS1,'string'));        % Amplitude for Signal 1
    Amplitude2 = str2double(get(AmplitudeDFS2,'string'));        % Amplitude for Signal 2
    f1 = str2double(get(SignalFreqDFS1,'string'));               % Signal 1 frequency
    f2 = str2double(get(SignalFreqDFS2,'string'));               % Signal 2 frequency
    N1 = str2double(get(SampleTimeDFS1,'string'));               % Sample time for Signal 1
    N2 = str2double(get(SampleTimeDFS2,'string'));               % Sample time for Signal 2
    n1 = 0:1:N1-1;
    n2 = 0:1:N2-1;
    sampling_rate1 = str2double(get(SamplingRateDFS1,'string')); % Sampling rate for Signal 1
    sampling_rate2 = str2double(get(SamplingRateDFS2,'string')); % Sampling rate for Signal 2
    theta1 = str2double(get(PhaseDFS1,'string'));                % Phase for Signal 1
    theta2 = str2double(get(PhaseDFS2,'string'));                % Phase for Signal 2
    sample_interval1 = 1/sampling_rate1;                         % Sampling period
    sample_interval2 = 1/sampling_rate2;                         % Sampling period

    % Generating Signal 1
    w1 = 2*pi*f1;
    s1 = Amplitude1*sin(w1.*n1*sample_interval1+theta1);
    % Generating Signal 2
    w2 = 2*pi*f2;
    s2 = Amplitude2*sin(w2.*n2*sample_interval2+theta2);

    % Generating Gaussian noise
    mGauss = str2double(get(NoiseRow,'string'));
    nGauss = str2double(get(NoiseColumn,'string'));
    pGauss = str2double(get(NoisePower,'string'));
    v = 0.15*wgn(mGauss,nGauss,pGauss);

    % Adding the two signals and reshaping
    sig = s1+s2;
    s = reshape(sig,[N1,mGauss]);

    % Getting mu and filter order
    meu = str2double(get(lmsDF1,'string'));
    nthOrder = str2double(get(lmsDF2,'string'));

    unknownFIR = fir1(nthOrder-1,0.5);       % The unknown FIR system to be identified
    desired = filter(unknownFIR,1,s)+v';     % Desired signal
    filt = adaptfilt.lms(nthOrder,meu);
    [y,E] = filter(filt,s,desired);
    [mse,meanw,tracek,lc] = msesim(filt,s,desired);
end
7.1.10 Display results in a new figure

In MATLAB, the figure window offers a very rich set of functions that can be used to
manipulate data or to post-process generated data conveniently. Therefore, displaying data in
a new figure window is particularly important for working with MATLAB conveniently and
productively, and we have implemented functions that fetch the software-generated data and
show it in a new figure. In the following code, we fetch the running application type and
algorithm type for use in the title text of the new figure. After that, we fetch the generated
data for the estimated signal and then declare the new figure with appropriate and convenient
properties. The new figure is then used to plot the fetched estimated signal with the right
xlabel, ylabel, title and legend.
function showLargeGraphEstimated(hObject, eventdata)
    whatApplication = popupApplications{get(selectApplications,'Value'), 2};
    CurrentAlgorithms = getappdata(myHandle,'CurrentAlgorithms');
    EstimatedSigData = getappdata(EstimatedSignal,'EstimatedSignalData');
    newFigure = figure('Name','Estimated_Signal','NumberTitle','off',...
        'units','normalized','outerposition',[.05 .1 .9 .9]);
    newAxes = axes;
    set(newFigure,'CurrentAxes',newAxes);
    plot(newAxes,EstimatedSigData,'g');
    if(strcmp(whatApplication,'ANC'))
        xlabel(newAxes,'Samples');
    else
        xlabel(newAxes,'Sample Time');
    end
    ylabel(newAxes,'Amplitude');
    whatTitle = [whatApplication,' with ',CurrentAlgorithms,' (Estimated Signal)'];
    title(newAxes,whatTitle);
    legend(newAxes,CurrentAlgorithms);
end
7.1.11 Data representation, Listening data and Default Parameter Value
A GUI has limited space but often has to represent more data than it can normally visualize.
This problem can be mitigated by placing several GUI elements at the same coordinates of the
GUI generated by the main parent figure function and toggling their visibility, as in the
following code.
function whichAlgorithmType(hObject, eventdata)
    whatApplication = popupApplications{get(selectApplications,'Value'), 2};
    whatAlgorithmType = popupAlgorithmTypes{get(selectAlgorithmTypes,'Value'), 2};
    whatComparisonType = popupRunCompare{get(selectRunCompare,'Value'), 2};
    set(noDataNotice5,'Visible','off');
    if (strcmp(whatApplication,'ANC') && strcmp(whatAlgorithmType,'RC'))
        set(selectRunCompare,'Visible','on');
        set(selectLMSClass,'Visible','off');
        set(selectRLSClass,'Visible','off');
        set(selectAPClass,'Visible','off');
        set(selectFDClass,'Visible','off');
        set(selectLBClass,'Visible','off');
        set(selectEmptyPopup,'Visible','off');
        set(LMSAlgorithmParameter,'Visible','off');
        set(RLSAlgorithmParameter,'Visible','off');
        set(APAlgorithmParameter,'Visible','off');
        set(FDAlgorithmParameter,'Visible','off');
        set(LBAlgorithmParameter,'Visible','off');
    elseif (strcmp(whatApplication,'ANC') && strcmp(whatAlgorithmType,'LMST'))
        set(selectLMSClass,'Visible','on');
        set(selectRLSClass,'Visible','off');
        set(selectAPClass,'Visible','off');
        set(selectFDClass,'Visible','off');
        set(selectLBClass,'Visible','off');
        set(selectEmptyPopup,'Visible','off');
        set(LMSAlgorithmParameter,'Visible','on');
        set(RLSAlgorithmParameter,'Visible','off');
        set(APAlgorithmParameter,'Visible','off');
        set(FDAlgorithmParameter,'Visible','off');
        set(LBAlgorithmParameter,'Visible','off');
    end
end
[Flow chart: load data or get the application data, execute the callback functions for the
selected options, and plot the results in their corresponding axes; the original signal,
estimated signal, error signal and learning curve are each saved in the storage of their
corresponding axes. On request (“Play?” / “Show data in figure?”), the data are fetched back
and played or displayed.]
Figure 25: Representation and Listening to Data
In figure 25, the flow chart shows the detailed sequential flow of playing processed data and
displaying processed data in a new figure.
Moreover, adaptive filters require a trial-and-error approach to find the optimal algorithm
parameters. Therefore, default parameter values that are easily loadable into the software
workspace make working with adaptive filters more convenient. Accordingly, in the developed
software, we have implemented functions to load default data for the adaptive control
algorithms of the adaptive filter; a small block of code from such a function is given below.
function loadDefaultData(hObject, eventdata)
    %% LMS Based
    % LMS
    set(lmsDF1,'string',.001);
    set(lmsDF2,'string',20);
    % NLMS
    set(nlmsDF1,'string',.001);
    set(nlmsDF2,'string',20);
    set(nlmsDF3,'string',1);
end
7.2 Software Execution Flow

In figure 26, the total software execution flow is given. This flow chart summarizes all the
main functionality of the developed software discussed above.
[Flow chart: load or set data, select the application, algorithm type and individual
algorithm, and execute. The selected options are fetched and the data plotter function is
called; for a given selection instance (e.g. ANC && LMST && NLMS, ALE && RC && LMSALL, or
SI && RC && LMSGROUP), the corresponding function from the collection of per-application,
per-algorithm functions processes the data and generates the results, which are displayed in
their corresponding axes; afterwards, the results can be played or displayed in an external
figure.]
Figure 26: Software Execution Flow
Chapter 8
Results of Adaptive Algorithms
8.1 Active Noise Cancellation (ANC)
8.1.1 LMS Based
8.1.1.1 LMS
Figure 27: ANC with LMS when µ = .01 and order 30
Figure 28: ANC with LMS when µ = .001 and order 30
8.1.1.2 NLMS
Figure 29: ANC with NLMS when µ = .01 and order 30
Figure 30: ANC with NLMS when µ = .001 and order 30
8.1.1.3 LLMS
Figure 31: ANC with LLMS when µ = .01, order 30 and leakage .8
Figure 32: ANC with LLMS when µ = .001, order 30 and leakage .8
8.1.1.4 ADJLMS
Figure 33: ANC with ADJLMS when µ = .001, order 30
Figure 34: ANC with ADJLMS when µ = .00001, order 30
8.1.1.5 BLMS
Figure 35: ANC with BLMS when µ = .01, order 30
Figure 36: ANC with BLMS when µ = .001, order 30
8.1.1.6 BLMSFFT
Figure 37: ANC with BLMSFFT when µ = .01, order 30
Figure 38: ANC with BLMSFFT when µ = .001, order 30
8.1.1.7 DLMS
Figure 39: ANC with DLMS when µ = .01, order 30, delay = 11
Figure 40: ANC with DLMS when µ = .001, order 30, delay = 11
8.1.1.8 Filtered-x LMS
Figure 41: ANC with Filtered-x LMS when µ = .01, order 30
Figure 42: ANC with Filtered-x LMS when µ = .001, order 30
8.1.1.9 Sign-Data LMS
Figure 43: ANC with Sign-Data LMS when µ = .01, order 30
Figure 44: ANC with Sign-Data LMS when µ = .001, order 30
8.1.1.10 Sign-Error LMS
Figure 45: ANC with Sign-Error LMS when µ = .01, order 30
Figure 46: ANC with Sign-Error LMS when µ = .001, order 30
8.1.1.11 Sign-Sign LMS
Figure 47: ANC with Sign-Sign LMS when µ = .01, order 30
Figure 48: ANC with Sign-Sign LMS when µ = .001, order 30
8.2 Adaptive Line Enhancement (ALE)
8.2.1 LMS Based
8.2.1.1 LMS
Figure 49: ALE with LMS when µ = .01, order 30
Figure 50: ALE with LMS when µ = .001, order 30
8.2.1.2 LLMS
Figure 51: ALE with LLMS when µ = .01, order 30
Figure 52: ALE with LLMS when µ = .001, order 30
8.2.1.3 ADJLMS
Figure 53: ALE with ADJLMS when µ = .001, order 30
Figure 54: ALE with ADJLMS when µ = .0001, order 30
8.2.1.4 BLMS
Figure 55: ALE with BLMS when µ = .001, order 30
Figure 56: ALE with BLMS when µ = .0001, order 30
8.2.1.5 BLMSFFT
Figure 57: ALE with BLMSFFT when µ = .001, order 30
Figure 58: ALE with BLMSFFT when µ = .0001, order 30
8.2.1.6 DLMS
Figure 59: ALE with DLMS when µ = .001, order 30
Figure 60: ALE with DLMS when µ = .0001, order 30
8.2.1.7 Filtered-x LMS
Figure 61: ALE with Filtered-x LMS when µ = .0001, order 30
Figure 62: ALE with Filtered-x LMS when µ = .001, order 30
8.2.1.8 Sign-Data
Figure 63: ALE with Sign-Data when µ = .001, order 30
Figure 64: ALE with Sign-Data when µ = .0001, order 30
8.2.1.9 Sign-Error
Figure 65: ALE with Sign-Error when µ = .0001, order 30
Figure 66: ALE with Sign-Error when µ = .001, order 30
8.2.1.10 Sign-Sign
Figure 67: ALE with Sign-Sign when µ = .001, order 30
Figure 68: ALE with Sign-Sign when µ = .0001, order 30
8.3 System Identification (SI)
8.3.1 LMS
Figure 69: SI with LMS when µ = .001, order 30
Figure 70: SI with LMS when µ = .0001, order 30
8.3.2 NLMS
Figure 71: SI with NLMS when µ = .01, order 30, beta 1
Figure 72: SI with NLMS when µ = .1, order 30, beta 1
8.3.3 LLMS
Figure 73: SI with LLMS when µ = .01, order 30, leakage 1
Figure 74: SI with LLMS when µ = .001, order 30, leakage 1
8.3.4 ADJLMS
Figure 75: SI with ADJLMS when µ = .00001, order 30, leakage 1
Figure 76: SI with ADJLMS when µ = .0001, order 30, leakage 1
8.3.5 BLMS
Figure 77: SI with BLMS when µ = .001, order 30
Figure 78: SI with BLMS when µ = .0001, order 30
8.3.6 BLMSFFT
Figure 79: SI with BLMSFFT when µ = .001, order 30
Figure 80: SI with BLMSFFT when µ = .0001, order 30
8.3.7 DLMS
Figure 81: SI with DLMS when µ = .001, order 30, Delay 20
Figure 82: SI with DLMS when µ = .0001, order 30, Delay 20
8.3.8 Filtered-x LMS
Figure 83: SI with Filtered-x LMS when µ = .001, order 30
Figure 84: SI with Filtered-x LMS when µ = .0001, order 30
8.3.9 Sign-Data
Figure 85: SI with Sign-Data when µ = .001, order 30
Figure 86: SI with Sign-Data when µ = .0001, order 30
8.3.10 Sign-Error
Figure 87: SI with Sign-Error when µ = .001, order 30
Figure 88: SI with Sign-Error when µ = .01, order 30
8.3.11 Sign-Sign
Figure 89: SI with Sign-Sign when µ = .0001, order 30
Figure 90: SI with Sign-Sign when µ = .00002, order 30
Chapter 9
Comparative Performance and Data
Analysis
9.1 Comparative Performance
There is no unified criterion for evaluating the performance of all types of adaptive algorithms. However, criteria do exist for mathematically similar families of adaptive algorithms, and on that basis their performance can be compared. In this project we have worked on LMS-based adaptive algorithms. All LMS-based algorithms produce well-behaved learning curves because of their stochastic (i.e. statistical) adaptive filtering. The learning-curve properties of convergence, convergence speed and steady-state error are common to all variants of the LMS adaptive filter, and because of these commonalities the performance of LMS-based adaptive algorithms is comparable.
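The comparative learning curves in this chapter were produced with the developed software. As a rough illustration of how such ensemble-averaged learning curves can be generated with MATLAB's pre-implemented algorithms, a minimal sketch using the dsp.LMSFilter System object (DSP System Toolbox) is given below; the unknown system, noise level, step sizes and number of runs are illustrative assumptions, not the parameters behind Figure 91.

% Minimal sketch: ensemble-averaged learning curves for a few LMS variants.
% All parameters below are illustrative assumptions.
rng(0);
h = fir1(31, 0.5).';                 % assumed "unknown" FIR system (order 31)
N = 2000; runs = 50;                 % samples per run, independent runs
methods = {'LMS','Normalized LMS','Sign-Data LMS'};
mu = [0.01, 0.1, 0.01];              % assumed step size for each method
figure; hold on;
for k = 1:numel(methods)
    lms = dsp.LMSFilter('Length',32,'Method',methods{k},'StepSize',mu(k));
    mse = zeros(N,1);
    for r = 1:runs
        x = randn(N,1);                        % white input
        d = filter(h,1,x) + 0.01*randn(N,1);   % desired = system output + noise
        [~,e] = lms(x,d);
        mse = mse + e.^2;                      % accumulate squared error
        reset(lms);                            % restart adaptation for next run
    end
    plot(10*log10(mse/runs));                  % learning curve in dB
end
legend(methods); xlabel('Iteration n'); ylabel('MSE (dB)');

Averaging over independent runs smooths the noisy instantaneous squared error into a readable learning curve, which is how convergence speed and SSE become visually comparable across algorithms.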
9.1.1 Active Noise Cancellation (ANC)
Figure 91: Comparative Learning Curves (LMS, NLMS, LLMS, BLMS, BLMSFFT, DLMS,
SD, SE)
In the figure, the learning curves of several LMS-based algorithms are plotted together, which provides a rough estimate of their comparative performance. The plot does not give a consistent comparison for every learning-curve parameter. Instead, it offers a quick comparison of the gradient nature of the learning and a rough estimate of the SSE, obtained by mentally extending a straight line to the left from the tail of each learning curve. The learning curves of ADJLMS and Filtered-x LMS run in a different direction compared with the other adaptive algorithms. Moreover, the learning curve of SS is quite unpredictable and comes out slightly different on each run. The learning curves of these algorithms are therefore given separately below, plotted from the same data used for the comparison plot above.
Figure 92: Learning Curves ADJLMS
Figure 93: Learning Curves Filtered-xLMS
Figure 94: Learning Curves SS
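The SSE that is read off these plots by eye can also be estimated numerically by averaging the tail of the squared error. The following self-contained sketch shows the idea for a single LMS run; the test system, signal length and step size are again assumptions for illustration only.

% Minimal sketch: numerical steady-state error (SSE) estimate from the
% tail of the squared error, instead of extending a line by eye.
rng(0);
h = fir1(31, 0.5).';                        % assumed "unknown" system
x = randn(5000,1);
d = filter(h,1,x) + 0.01*randn(5000,1);     % assumed noise level
lms = dsp.LMSFilter('Length',32,'StepSize',0.01);
[~,e] = lms(x,d);
sse_dB = 10*log10(mean(e(end-999:end).^2)); % average of the last 1000 samples
fprintf('Estimated SSE: %.1f dB\n', sse_dB);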
9.1.2 Adaptive Line Enhancement (ALE)
Figure 95: Comparative Learning Curves (LMS, NLMS, LLMS, BLMS, BLMSFFT, DLMS,
SD, SE)
Figure 96: Learning Curve ADJLMS
Figure 97: Learning Curve Filt-xLMS
Figure 98: Learning Curve SS
9.1.3 System Identification (SI)
Figure 99: Comparative Learning Curves (LMS, NLMS, LLMS, BLMS, BLMSFFT, DLMS,
SD, SE)
Figure 100: Learning Curve ADJLMS
Figure 101: Learning Curve Filt-xLMS
Figure 102: Learning Curve SS
Chapter 10
Summary and Conclusions
The basic property that sets adaptive filters apart from other filters is their ability to adapt. In data processing we constantly have to deal with new data, and to deal with new data we need filters that can adjust dynamically to new situations. Adaptive filters are such dynamic filters, and this is what makes them so useful.
10.1 Future Work
Adaptive algorithms are important for ANC, ALE and SI, and finding optimal algorithm parameters requires a trial-and-error approach, which in turn demands a substantial amount of simulation. The software developed in this project is not comprehensive, in the sense that it can handle only particular types of data and particular types of parameters. The MATLAB pre-implemented adaptive algorithms are feature-rich, so a comprehensive adaptive-algorithm simulation software would create real convenience for scientists and engineers when simulating adaptive filters.