Report 2k3[1]
8/14/2019 Report 2k3[1]
CHAPTER 1
IMAGE PROCESSING
1.1 INTRODUCTION
Commonly used transforms such as the FFT and the wavelet transform have well-known limitations in edge preservation and in denoising images corrupted by noises such as Additive White Gaussian Noise (AWGN) and speckle. We have therefore used a transform named the Contourlet transform, which is better at edge preservation and denoising.
The approach in this transform starts with a discrete-domain construction and then studies its sparse expansion in the continuous domain. A discrete-domain multi-resolution and multi-directional expansion using non-separable filter banks is constructed; this results in an expansion in contour segments, hence the name. The main difference between the Contourlet transform and other transformations is that it uses a Laplacian pyramid together with directional filter banks. As a result, it not only detects edge discontinuities but also links these discontinuities into continuous contours. This is its advantage over other transformations in the decomposition process.
For denoising, a new algorithm is proposed based on the Contourlet transform. A threshold value is introduced, and the coefficients resulting from the decomposition are compared with this threshold to determine whether each image value (pixel) is corrupted. Each value is accordingly modified or preserved, and we finally conclude that the Contourlet transform is better than the existing algorithms in image processing for edge preservation and denoising, even in the presence of such noises.
1.2 DESCRIPTION OF THE PROJECT
FIG 1.1 Description of the Project
START → LOAD THE IMAGE → ADD MULTIPLICATIVE NOISE → DENOISING, which proceeds along two parallel branches:
Wavelet Decomposition → Wavelet coefficients → Denoising Algorithm → Wavelet Reconstruction
Contourlet Decomposition → Contourlet coefficients → Thresholding Algorithm → Contourlet Reconstruction
Both branches then feed COMPARATIVE RESULTS → STOP
1.3 IMAGE PROCESSING
An image may be defined as a two-dimensional function f(x, y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.
A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements and pixels. Pixel is the term most widely used to denote the elements of a digital image. An image is an array, or a matrix, of square pixels (picture elements) arranged in columns and rows.
Image processing modifies pictures to improve them (enhancement,
restoration), extract information (analysis, recognition), and change their
structure (composition, image editing). Images can be processed by optical,
photographic, and electronic means, but image processing using digital
computers is the most common method because digital methods are fast,
flexible, and precise.
An image can be synthesized from a micrograph of various cell
organelles by assigning a light intensity value to each cell organelle. The
sensor signal is digitized, and then converted to an array of numerical
values, each value representing the light intensity of a small area of the cell.
The digitized values are called picture elements, or pixels, and are stored
in computer memory as a digital image. A typical size for a digital image is
an array of 512 by 512 pixels, where each pixel has a value in the range of 0 to 255. The digital image is processed by a computer to achieve the desired result.
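The ideas above can be sketched in code. The report's experiments use MATLAB; the following is a small illustrative Python/NumPy sketch (the array size and values are just examples), showing a digital image as a matrix of pixel values in the range 0 to 255:

```python
import numpy as np

# A digital image is a 2-D array of pixels; 8-bit gray levels span 0..255.
image = np.zeros((512, 512), dtype=np.uint8)  # a typical 512-by-512 image

# f(x, y): the intensity (gray level) at spatial coordinates (x, y).
image[100, 200] = 255  # set one pixel to the maximum intensity

print(image.shape)      # (512, 512)
print(image.dtype)      # uint8
print(image[100, 200])  # 255
```

Each element of the matrix is one pixel; processing the image amounts to operating on this array of numbers.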
Image enhancement improves the quality (clarity) of images for human
viewing. Removing blurring and noise, increasing contrast, and revealing
details are examples of enhancement operations. For example, an image
might be taken of an endothelial cell, which might be of low contrast and
somewhat blurred. Reducing the noise and blurring and increasing the
contrast range could enhance the image. The original image might have
areas of very high and very low intensity, which mask details. An adaptive
enhancement algorithm reveals these details. Adaptive algorithms adjust
their operation based on the image information (pixels) being processed. In
this case the mean intensity, contrast, and sharpness (amount of blur
removal) could be adjusted based on the pixel intensity statistics in various
areas of the image.
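The adaptive idea described above can be illustrated with a toy sketch (Python/NumPy here for illustration; the function `adaptive_stretch` and its block size are hypothetical, not the report's algorithm): the contrast adjustment in each region is driven by that region's own pixel statistics.

```python
import numpy as np

def adaptive_stretch(img, block=8):
    """Toy adaptive enhancement: stretch contrast independently in each
    block, using that block's own min/max pixel statistics."""
    out = img.astype(float).copy()
    h, w = img.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            tile = out[r:r + block, c:c + block]
            lo, hi = tile.min(), tile.max()
            if hi > lo:  # avoid dividing by zero in flat regions
                out[r:r + block, c:c + block] = (tile - lo) / (hi - lo) * 255.0
    return out.astype(np.uint8)

# A low-contrast gradient (values 100..115) gains the full 0..255 range.
img = np.tile(np.arange(100, 116, dtype=np.uint8), (16, 1))
enh = adaptive_stretch(img, block=16)
print(img.min(), img.max())   # 100 115
print(enh.min(), enh.max())   # 0 255
```

Real adaptive algorithms use overlapping windows and smoother statistics, but the principle is the same: the operation adapts to the local pixel data.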
The various image formats are:
GIF Graphics Interchange Format; an 8-bit (256 colour), non-destructively compressed bitmap format, mostly used for the web. It has several sub-standards, one of which is the animated GIF.
JPEG Joint Photographic Experts Group; a very efficient (i.e. much
information per byte) destructively compressed 24 bit (16 million colours)
bitmap format. Widely used, especially for web and Internet (bandwidth-
limited).
TIFF Tagged Image File Format; the standard 24 bit publication
bitmap format. Compresses non-destructively with, for instance, Lempel-
Ziv-Welch (LZW) compression.
PS Postscript, a standard vector format. Has numerous sub-standards
and can be difficult to transport across platforms and operating systems.
PSD a dedicated Photoshop format that keeps all the information in an
image including all the layers.
BMP Windows Bitmap; 1-bit, 8-bit, and 24-bit uncompressed images.
In electrical and computer science engineering, image processing is
any form of signal processing for which the input is an image, such as
photographs or frames of video; the output of image processing can be either
an image or a set of characteristics or parameters related to the image. Most
image-processing techniques involve treating the image as a two-
dimensional signal and applying standard signal-processing techniques to it.
Image processing usually refers to digital image processing, but optical
and analog image processing are also possible.
Other image processing operations are:
Geometric transformations, such as enlargement, reduction, and rotation.
Color corrections such as brightness and contrast adjustments, quantization, or conversion to a different color space.
Digital compositing or optical compositing (combination of two or more images), used in filmmaking to make a "matte".
Interpolation and recovery of a full image from a raw image format using a Bayer filter pattern.
Image editing (e.g., to increase the quality of a digital image).
Image registration (alignment of two or more images), differencing and morphing.
Image segmentation.
Extending dynamic range by combining differently exposed images.
2-D object recognition with affine invariance.
1.4 ABOUT MATLAB
MATLAB [6] is a software package for high-performance numerical
computation and visualization. It provides an interactive environment with
hundreds of built-in functions for technical computation, graphics and
animation. Best of all, it also provides easy extensibility with its own high-
level programming language. The name MATLAB stands for MATrix
LABoratory.
MATLAB's built-in functions provide excellent tools for linear algebra computations, data analysis, signal processing, optimization, numerical solution of ordinary differential equations, and many other scientific computations. There are also numerous functions for 2-D and 3-D graphics as well as animation. For those who cannot do without their FORTRAN or C codes, MATLAB provides an external interface to run those programs from within. MATLAB's language is very easy to learn and to use.
There are also several optional Toolboxes available from the
developers of MATLAB. These toolboxes are collections of functions
written for special applications such as Symbolic computation, Image
processing, Statistics, Control system design and Neural networks etc.
The basic building block of MATLAB is the matrix. The fundamental data type is the array. Vectors, scalars, real matrices and complex matrices are all automatically handled as special cases of the basic data type. The built-in functions are optimized for vector operations; consequently, vectorized commands and code run faster in MATLAB.
1.5 NOISES IN IMAGE PROCESSING
In image processing, in fields like remote sensing and medical applications, we come across many noises such as AWGN (Additive White Gaussian Noise), speckle (multiplicative) and impulse noise, which affect valuable features and important information.
1.5.1 ADDITIVE WHITE GAUSSIAN NOISE
In communications, the additive white Gaussian noise (AWGN) channel model is one in which the information is given a single impairment: a linear addition of wideband or white noise with a constant spectral density (expressed as watts per hertz of bandwidth) and a Gaussian distribution of noise samples. The model does not account for the phenomena of fading,
frequency selectivity, interference, nonlinearity or dispersion. However, it
produces simple and tractable mathematical models which are useful for
gaining insight into the underlying behavior of a system before these other
phenomena are considered. Wideband Gaussian noise comes from many
natural sources, such as the thermal vibrations of atoms in antennas (referred
to as thermal noise or Johnson-Nyquist noise), shot noise, black body
radiation from the earth and other warm objects, and from celestial sources
such as the Sun.
The AWGN channel is a good model for many satellite and deep space
communication links. It is not a good model for most terrestrial links
because of multipath, terrain blocking, interference, etc. However for
terrestrial path modeling, AWGN is commonly used to simulate background
noise of the channel under study, in addition to multipath, terrain blocking,
interference, ground clutter and self interference that modern radio systems
encounter in terrestrial operation.
1.5.1.1 EFFECTS OF NOISE IN TIME DOMAIN
FIG 1.2 Zero-Crossings of a Noisy Cosine
In serial data communications, the AWGN mathematical model is used to model the timing error caused by random jitter (RJ). The graph shown above shows an example of timing errors associated with AWGN. The variable Δt represents the uncertainty in the zero crossing. As the amplitude of the AWGN increases, the signal-to-noise ratio decreases, resulting in increased uncertainty Δt.
When affected by AWGN, the average number of either positive-going or negative-going zero-crossings per second at the output of a narrow band pass filter whose input is a sine wave is:

zero crossings per second = f0 · sqrt( (SNR + 1 + B^2 / (12 f0^2)) / (SNR + 1) )

where
f0 = the center frequency of the filter
B = the filter bandwidth
SNR = the signal-to-noise power ratio in linear terms
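This behavior can be checked empirically: a pure sine has exactly f0 positive-going zero crossings per second, and adding noise raises the crossing rate. A small Python/NumPy simulation (illustrative sketch; white rather than band-limited noise is used for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)

fs, f0, T = 10_000, 100.0, 5.0        # sample rate (Hz), tone (Hz), duration (s)
t = np.arange(int(fs * T)) / fs
clean = np.sin(2 * np.pi * f0 * t)

def upcrossings_per_second(x, fs):
    """Average number of positive-going zero crossings per second."""
    ups = np.sum((x[:-1] < 0) & (x[1:] >= 0))
    return ups / (len(x) / fs)

rate_clean = upcrossings_per_second(clean, fs)   # ~ f0 for the pure sine
rate_noisy = upcrossings_per_second(clean + rng.normal(0.0, 0.5, t.size), fs)
print(rate_clean)               # about 100
print(rate_noisy > rate_clean)  # noise raises the crossing rate
```

As the noise amplitude grows (SNR falls), the crossing rate moves away from f0, which is the zero-crossing uncertainty described above.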
1.5.1.2 EFFECTS OF NOISE IN PHASOR DOMAIN
1.5.1.2.1 AWGN CONTRIBUTIONS IN THE PHASOR DOMAIN
In modern communication systems, band limited AWGN cannot be
ignored. When modeling band limited AWGN in the phasor domain,
statistical analysis reveals that the amplitudes of the real and imaginary
contributions are independent variables which follow the Gaussian
distribution model. When combined, the resultant phasor's magnitude is a Rayleigh-distributed random variable, while the phase is uniformly distributed from 0 to 2π.
FIG 1.3 The graph shows how band limited AWGN can affect a
coherent carrier signal
The instantaneous response of the noise vector cannot be precisely predicted; however, its time-averaged response can be statistically predicted. As shown in the graph, we can confidently predict that the noise phasor will reside inside the 1σ circle about 38% of the time, inside the 2σ circle about 86% of the time, and inside the 3σ circle about 98% of the time.
1.5.2 MULTIPLICATIVE NOISE
Multiplicative noise [5] is a type of signal-dependent noise: the variance of the noise is a function of the signal amplitude. The figure below shows a comparison between a one-dimensional (1-D) step-like signal corrupted with additive white Gaussian noise (AWGN) and one corrupted with multiplicative noise. In the case of multiplicative noise, the noise variance is higher where the amplitude of the signal is higher. In images, noise in bright regions therefore has higher variation and could wrongly be interpreted as features of the original image. Thus it is harder and more complicated to smooth the noise without degrading true image features.
FIG 1.4 Step-like signals (a) Original signal (b) Signal corrupted with
AWGN and (c) Signal corrupted with multiplicative noise
Multiplicative noise degrades the quality of the image and affects the
performance of important image processing techniques such as detection,
segmentation, and classification. Therefore an effective preprocessing filter
is desirable in these cases.
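The signal-dependence is easy to demonstrate on a step-like signal of the kind shown in FIG 1.4. In this Python/NumPy sketch (illustrative; a simple y = x·(1 + n) speckle model is assumed), the same underlying Gaussian noise produces a fluctuation ten times larger on the bright level than on the dark one:

```python
import numpy as np

rng = np.random.default_rng(2)

# Step-like signal: a dark level followed by a bright level.
signal = np.concatenate([np.full(50_000, 10.0), np.full(50_000, 100.0)])
noise = rng.normal(0.0, 0.1, signal.size)

additive = signal + noise              # AWGN: fluctuation independent of level
multiplicative = signal * (1 + noise)  # speckle-like: fluctuation scales with level

# Standard deviation of the fluctuation on each half of the step:
print(np.std(additive[:50_000]), np.std(additive[50_000:]))  # both about 0.1
print(np.std(multiplicative[:50_000]))   # about 1  (10 x 0.1)
print(np.std(multiplicative[50_000:]))   # about 10 (100 x 0.1)
```

This is exactly why multiplicative noise in bright image regions is so much harder to smooth than AWGN.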
The objectives of any filtering approach are:
To effectively suppress the noise in uniform regions.
To preserve and enhance edges and other similar image features.
To provide a visually natural appearance.
Multiplicative noise is commonly found in many real-world signal processing applications. Unlike additive noise, this kind of noise is much more difficult to remove from the corrupted signal, mainly because of its multiplicative nature. When such noise is present in a bright area of an image, it is multiplied by high intensity values, so its random variation increases, or is magnified. On the other hand, if the noise is introduced into a dark area, the change in random variation may be much less significant.
Since the noise variation greatly depends on the intensity levels of the
image pixels being corrupted, it is not easy to establish an appropriate
statistical model for the noise by simply examining the corrupted image.
1.5.3 IMPULSE NOISE
Impulse noise is a short burst of acoustic energy consisting of either a single impulse or a series of impulses. The pressure time history of a single impulse includes a rapid rise to a peak pressure, followed by a slower decay of the pressure envelope to ambient pressure, both occurring within 1 second. When the intervals between impulses are less than 500 milliseconds, the noise is considered continuous, with the exception of successive bursts of automatic weapons fire, which are considered impulse noise.
FIG 1.5 Impulse Noise: sound pressure (pascals) versus time (milliseconds) for a pistol shot at the shooter's ear
CHAPTER 2
TRANSFORMS USED IN IMAGE
PROCESSING
2.1 VARIOUS EXISTING TRANSFORMS
To extract information from a noise-corrupted image, many transforms are used, such as the FFT and the wavelet transform.
2.1.1 FOURIER TRANSFORM
In mathematics, the Fourier transform is an operation that transforms
one complex-valued function of a real variable into another. The new
function, often called the frequency domain representation of the original
function, describes which frequencies are present in the original function.
This is in a similar spirit to the way that a chord of music can be described
by notes that are being played. In effect, the Fourier transform decomposes a
function into oscillatory functions.
The Fourier transform (FT) is similar to many other operations in
mathematics which make up the subject of Fourier analysis. In this specific
case, both the domains of the original function and its frequency domain
representation are continuous and unbounded. The term Fourier transform
can refer to both the frequency domain representation of a function or to the
process/formula that "transforms" one function into the other.
The expressions are:

F(ξ) = ∫ f(x) e^(−2πixξ) dx, for every real ξ, and

f(x) = ∫ F(ξ) e^(2πixξ) dξ, for every real x,

with both integrals taken over the whole real line.
The problems with Fourier analysis arise with non-stationary signals. The frequency components in a signal can be found with Fourier techniques, but these techniques do not tell us at what times those frequency components occur. This is not a problem for stationary signals (signals whose frequency content does not change in time), because there the answer is simply "at all times". For non-stationary signals we can use the short-time Fourier transform, but it has a time-frequency resolution problem. At this point we need another transform to solve this problem.
2.1.2 FAST FOURIER TRANSFORM (FFT)
A fast Fourier transform (FFT) is an efficient algorithm to compute the
discrete Fourier transform (DFT) and its inverse. There are many distinct
FFT algorithms involving a wide range of mathematics, from simple
complex-number arithmetic to group theory and number theory. Various
types of FFT algorithms are Cooley-Tukey algorithm, Prime-factor FFT
algorithm, Bruun's FFT algorithm, Rader's FFT algorithm, and
Bluestein's FFT algorithm.
Even though the FFT has the advantage of being very fast compared with other transforms, it is also complex, so a simpler but still efficient transform is needed.
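The relationship between the DFT and the FFT can be shown in a few lines: the FFT computes exactly the same coefficients as the direct O(N²) summation, only faster. A Python/NumPy sketch for illustration (the report's own work uses MATLAB):

```python
import numpy as np

# The FFT computes the same DFT as the direct O(N^2) sum, just faster.
rng = np.random.default_rng(3)
x = rng.normal(size=64)
N = x.size

k = np.arange(N)
direct = np.array([np.sum(x * np.exp(-2j * np.pi * m * k / N)) for m in range(N)])
fast = np.fft.fft(x)

print(np.allclose(direct, fast))   # True
```

The speed comes entirely from the algorithm (e.g. Cooley-Tukey's divide-and-conquer), not from computing anything different.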
2.1.3 WAVELET TRANSFORM
Fourier transform based spectral analysis is the dominant analytical
tool for frequency domain analysis. However, Fourier transform cannot
provide any information of the spectrum changes with respect to time.
Fourier transform assumes the signal is stationary, but PD signal is always
non-stationary. To overcome this deficiency, a modified method, the short-time Fourier transform, represents the signal in both the time and frequency domains through time windowing functions. The window length determines a constant time and frequency resolution.
Thus, if a shorter time window is used in order to capture the transient behavior of a signal, we sacrifice frequency resolution. Real PD signals are non-periodic and transient in nature; such signals cannot easily be analyzed by conventional transforms. So an alternative mathematical tool, the wavelet transform, must be selected to extract the relevant time-amplitude information from a signal. Thus we go for wavelets.
A wavelet is a mathematical function used to divide a given function or continuous-time signal into different scale components. Usually one can assign a frequency range to each scale component. Each scale component can then be studied with a resolution that matches its scale. A wavelet transform is the representation of a function by wavelets. The wavelets are scaled and translated copies (known as "daughter wavelets") of a finite-length or fast-decaying oscillating waveform (known as the "mother wavelet"). Wavelet transforms have advantages over traditional Fourier transforms for representing functions that have discontinuities and sharp peaks, and for accurately deconstructing and reconstructing finite, non-periodic and/or non-stationary signals.
In formal terms, this representation is a wavelet series representation of a square-integrable function with respect to either a complete, orthonormal set of basis functions, or an overcomplete set, i.e. a frame of a vector space
(also known as a Riesz basis), for the Hilbert space of square integrable
functions.
Wavelet transforms are classified into discrete wavelet transforms
(DWTs) and continuous wavelet transforms (CWTs). Note that both DWT
and CWT are continuous-time (analog) transforms. They can be used to
represent continuous-time (analog) signals. CWTs operate over every
possible scale and translation whereas DWTs use a specific subset of scale
and translation values or representation grid.
The word wavelet is due to Morlet and Grossmann in the early 1980s.
They used the French word ondelette, meaning "small wave". Soon it was
transferred to English by translating "onde" into "wave", giving "wavelet".
The WT of a signal x is calculated by passing it through a series of filters. First the samples are passed through a low pass filter with impulse response g, resulting in a convolution of the two:

y[n] = (x * g)[n] = Σ_k x[k] g[n − k]

The signal is also simultaneously decomposed using a high pass filter h. Together, these two filters produce the wavelet (detail) and approximation coefficients.
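One level of this filter-bank computation can be sketched directly. The Python/NumPy example below uses the Haar filter pair for concreteness (an illustrative choice, not the report's filters): convolve with each filter, then downsample by two, and note that the orthonormal pair preserves signal energy across the two bands.

```python
import numpy as np

# One level of the DWT: convolve with each filter, then downsample by 2.
# Haar filters are used here for concreteness (g: low pass, h: high pass).
g = np.array([1.0, 1.0]) / np.sqrt(2)
h = np.array([1.0, -1.0]) / np.sqrt(2)

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])

approx = np.convolve(x, g)[1::2]   # low pass  -> approximation coefficients
detail = np.convolve(x, h)[1::2]   # high pass -> detail (wavelet) coefficients

# The orthonormal filter pair preserves signal energy across the two bands.
print(np.allclose(np.sum(approx**2) + np.sum(detail**2), np.sum(x**2)))  # True
```

Iterating the same step on the approximation coefficients yields the multi-level wavelet decomposition used later in this report.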
2.1.4 CONTOURLET TRANSFORM
The contourlet transform [2] is a directional transform which is capable
of capturing contour and fine details in an image. For one-dimensional
piecewise smooth signals, like scan lines of an image, wavelets have been
established as the right tool, because they provide an optimal representation
for these signals in a certain sense. In addition, the wavelet representation is
amenable to efficient algorithms; in particular it leads to fast transforms and
convenient tree data structures. These are the key reasons for the success of
wavelets in many signal processing and communication applications.
Although wavelets are good at isolating edge point discontinuities in 2-D, they do not see the smoothness along contours. In addition, separable wavelets can capture only limited directional information, an important and unique feature of multidimensional signals. These disappointing behaviors indicate that more powerful representations are needed in higher dimensions.
To see how one can improve the 2-D separable wavelet transform for
representing images with smooth contours, the following scenario is
considered. Imagine that there are two painters, one with a wavelet-style
and the other with a new style, both wishing to paint a natural scene. Both
painters apply a refinement technique to increase resolution from coarse to fine. Here, efficiency is measured by how quickly, that is, with how few brush strokes, one can faithfully reproduce the scene.
FIG 2.1 Wavelet versus contourlet: the successive refinement by the two
systems near a smooth contour, which is shown as a thick curve
separating two smooth regions
Consider the situation when a smooth contour is being painted, as
shown in Figure above. Because 2-D wavelets are constructed from tensor
products of 1-D wavelets, the wavelet- style painter is limited to using
square-shaped brush strokes along the contour, using different sizes
corresponding to the multiresolution structure of wavelets. As the resolution
becomes finer, we can clearly see the limitation of the wavelet style painter
who needs to use many fine dots to capture the contour. The new style
painter, on the other hand, explores effectively the smoothness of the
contour by making brush strokes with different elongated shapes and in a
variety of directions following the contour.
For the human visual system, it is well-known that the receptive fields
in the visual cortex are characterized as being localized, oriented, and band
pass. Furthermore, experiments in searching for the sparse components of natural images produced basis images that closely resemble the aforementioned characteristics of the visual cortex. This result supports the hypothesis that the human visual system has been tuned to capture the essential information of a natural scene using the least number of active visual cells. More importantly, this result suggests that for a computational image
representation to be efficient, it should be based on a local, directional, and
multiresolution expansion.
Inspired by the painting scenario and studies related to the human
visual system and natural image statistics, we identify a wish list for new
image representations:
Multi-resolution: The representation should allow images to be successively approximated, from coarse to fine resolutions.
Localization: The basis elements in the representation should be localized in both the spatial and the frequency domains.
Critical sampling: For some applications (e.g., compression), the representation should form a basis, or a frame with small redundancy.
Directionality: The representation should contain basis elements oriented at a variety of directions, many more than the few directions offered by separable wavelets.
Anisotropy: To capture smooth contours in images, the representation should contain basis elements using a variety of elongated shapes with different aspect ratios.
Among these properties, the first three are successfully provided by
separable wavelets, while the last two require new constructions. Moreover,
a major challenge in capturing geometry and directionality in images comes
from the discrete nature of the data: the input is typically sampled images
defined on rectangular grids. For example, directions other than horizontal
and vertical look very different on a rectangular grid. Because of
pixelization, the notion of smooth contours on sampled images is not
obvious. For these reasons, unlike other transforms that were initially
developed in the continuous domain and then discretized for sampled data,
our approach starts with a discrete-domain construction and then studies its
convergence to an expansion in the continuous domain.
CHAPTER 3
WAVELET AND CONTOURLET
DECOMPOSITION PROCESS
3.1 WAVELET DECOMPOSITION
The Wavelet Transform is nothing but a system of filters. There are
two filters involved, one is the wavelet filter, and the other is the scaling
filter. The wavelet filter is a high pass filter, while the scaling filter is a
low pass filter.
FIG 3.1 Wavelet Transform Decomposition Process
Here the outputs from both the high pass and low pass filters are down sampled, and the output at each level results in wavelet coefficients. The disadvantage of this method is that, since the outputs are down sampled, the lower frequency band overlaps with the down sampled higher frequency band, which results in frequency scrambling.
The general wavelet denoising procedure is as follows:
Apply the wavelet transform to the noisy signal to produce the noisy wavelet coefficients, up to the level at which the PD occurrence can be properly distinguished.
Select an appropriate threshold limit at each level and a threshold method (hard or soft thresholding) to best remove the noise.
Inverse wavelet transform the thresholded wavelet coefficients to obtain the denoised signal.
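The two threshold methods named in the middle step are simple pointwise rules on the coefficients. A minimal Python/NumPy sketch (illustrative functions and values; the actual threshold limit would be chosen per level):

```python
import numpy as np

def hard_threshold(c, t):
    """Hard thresholding: keep coefficients whose magnitude reaches t,
    zero out the rest."""
    return np.where(np.abs(c) >= t, c, 0.0)

def soft_threshold(c, t):
    """Soft thresholding: shrink every coefficient toward zero by t,
    zeroing the small ones."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

coeffs = np.array([-3.0, -0.4, 0.1, 0.9, 2.5])  # e.g. one detail sub band
print(hard_threshold(coeffs, 1.0))   # small coefficients are removed
print(soft_threshold(coeffs, 1.0))   # survivors are additionally shrunk by 1.0
```

Hard thresholding preserves the magnitude of the surviving coefficients; soft thresholding shrinks them as well, which tends to give smoother reconstructions.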
3.2 CONTOURLET DECOMPOSITION
Comparing the wavelet scheme with the new scheme, the improvement
of the new scheme can be attributed to the grouping of nearby wavelet
coefficients, since they are locally correlated due to the smoothness of the
contours. Therefore, we can obtain a sparse expansion for natural images by
first applying a multiscale transform, followed by a local directional
transform to gather the nearby basis functions at the same scale into linear
structures. In essence, we first use a wavelet-like transform for edge
detection, and then a local directional transform for contour segment
detection.
With this insight, we considered a double filter bank [1] structure for
obtaining sparse expansions for typical images having smooth contours. In
this double filter bank, the Laplacian pyramid [4] is first used to capture the
point discontinuities, and then followed by a directional filter bank [1] to
link point discontinuities into linear structures. The overall result is an image
expansion using basic elements like contour segments, and thus is named
contourlets. In particular, contourlets have elongated supports at various
scales, directions, and aspect ratios. This allows contourlets to efficiently
approximate a smooth contour at multiple resolutions in much the same way
as the new scheme.
FIG 3.2 Contourlet Transform Decomposition Process
In the frequency domain, the contourlet transform [2] provides a
multiscale and directional decomposition. We would like to point out that
the decoupling of multiscale and directional decomposition stages offers a
simple and flexible transform, but at the cost of a small redundancy (up to
33%, which comes from the Laplacian pyramid). The figure above illustrates the Contourlet decomposition process, where LL (Low-Low), LH (Low-High), HL (High-Low) and HH (High-High) are frequency components of the input noisy image. The Laplacian pyramid [4] separates the low frequency component (LL) from the high frequency components using low pass filters and band pass filters respectively. The outputs of the band pass filters are processed by directional filter banks [1].
3.2.1 PYRAMID FRAMES
FIG 3.3 Laplacian pyramids (a) one level of decomposition. The outputs
are the coarse approximation a[n] and difference b[n] between the
original signal and the prediction (b) a new reconstruction scheme for
the Laplacian pyramid
One way to obtain a multiscale decomposition is to use the Laplacian
pyramid (LP) [4]. The LP decomposition at each level generates a down
sampled low pass version of the original and the difference between the
original and the prediction, resulting in a band pass image. The Laplacian
pyramid is for the multiscale decomposition [1], where H and G are called
(low pass) analysis and synthesis filters, respectively, and M is the sampling
matrix. The process can be iterated on the coarse (down sampled low pass)
signal. In multidimensional filter banks, sampling is represented by sampling
matrices; for example, down sampling x[n] by M yields xd[n] = x[Mn],
where M is an integer matrix. A drawback of the LP is the implicit over
sampling. However, in contrast to the critically sampled wavelet scheme, the
LP has the distinguishing feature that each pyramid level generates only one
band pass image (even for multidimensional cases), and this image does not
have scrambled frequencies. This frequency scrambling happens in the
wavelet filter bank when a high pass channel, after down sampling, is folded
back into the low frequency band, and thus its spectrum is reflected. In the
LP, this effect is avoided by down sampling the low pass channel only.
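The LP mechanics just described (down sampled low pass approximation, prediction, difference, exact reconstruction) can be sketched in a few lines. This Python/NumPy example uses a simple average-and-repeat pair as a toy stand-in for the analysis/synthesis filters H and G, so it only illustrates the structure, not a real pyramid filter:

```python
import numpy as np

def lp_one_level(x):
    """One LP level on a 1-D signal: a toy average-and-downsample low pass
    (stand-in for the filters H and G), plus the prediction residual."""
    coarse = (x[0::2] + x[1::2]) / 2.0   # low pass, downsampled by 2: a[n]
    prediction = np.repeat(coarse, 2)    # upsample / synthesize the prediction
    bandpass = x - prediction            # difference b[n]
    return coarse, bandpass

x = np.array([2.0, 4.0, 6.0, 6.0, 3.0, 1.0])
a, b = lp_one_level(x)

# Perfect reconstruction: the prediction from a[n] plus the residual b[n].
print(np.allclose(np.repeat(a, 2) + b, x))   # True
```

Only the low pass branch is downsampled, so b[n] keeps the full sampling grid and its spectrum is never folded, which is exactly why the LP avoids the frequency scrambling of the critically sampled wavelet filter bank.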
3.2.2 ITERATED DIRECTIONAL FILTER BANKS
FIG 3.4 2-D Directional Filter Banks
Consider a 2-D directional filter bank (DFB) [3] that can be maximally decimated while achieving perfect reconstruction. The DFB is efficiently implemented via an l-level binary tree decomposition that leads to 2^l sub bands with wedge-shaped frequency partitioning. The original construction of the DFB involves modulating the input image and using quincunx filter banks [3] with diamond-shaped filters. To obtain the desired frequency partition, a complicated tree-expanding rule has to be followed for the finer directional sub bands.
[Block diagram: input image → Laplacian pyramid → directional filter bank (quincunx filter banks and shearing operators) → contourlet transformed image]
FIG 3.5 Directional filter bank: frequency partitioning where l = 3, giving 2^3 = 8 real wedge-shaped frequency bands. Subbands 0-3 correspond to the mostly horizontal directions, while subbands 4-7 correspond to the mostly vertical directions
Consider a new construction for the DFB which avoids modulating the input image and uses a simpler rule for expanding the decomposition tree. This simplified DFB is intuitively constructed from two building blocks. The first building block is a two-channel quincunx filter bank with fan filters that divides the 2-D spectrum into two directions: horizontal and vertical.
FIG 3.6 Two-dimensional spectrum partition using quincunx filter
banks with fan filters. The black regions represent the ideal frequency
supports of each filter. Q is the quincunx sampling matrix
The second building block of the DFB is a shearing operator, which amounts to just a reordering of image samples. The figure below shows an application of a shearing operator in which a 45° edge becomes a vertical edge.
FIG 3.7 Example of shearing operation that is used like a rotation
operation for the DFB decomposition. (a) The Cameraman image (b)
the Cameraman image after shearing operation
By adding a shearing operator and its inverse (unshearing) before and
after a two-channel filter bank, respectively, we obtain a
different directional frequency partition while maintaining perfect
reconstruction. Thus, the key in the DFB is to use an appropriate
combination of shearing operators together with the two-direction partition of
quincunx filter banks at each node in a binary tree-structured filter bank, to
obtain the desired 2-D spectrum division.
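A minimal sketch of the shearing idea, assuming circular row shifts (one common way to realize the reordering): a 45° diagonal becomes a vertical column, and the operation is exactly invertible since no sample values change.

```python
import numpy as np

def shear(img):
    """Shearing as pure sample reordering: row i is circularly shifted by i,
    so a 45-degree diagonal edge becomes a vertical one."""
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        out[i] = np.roll(img[i], -i)   # shift row i left by i samples
    return out

def unshear(img):
    """Inverse reordering: shift each row back to its original position."""
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        out[i] = np.roll(img[i], i)
    return out

img = np.eye(8)                        # a 45-degree diagonal edge
sheared = shear(img)
print(np.array_equal(sheared[:, 0], np.ones(8)))   # True: now a vertical edge
print(np.array_equal(unshear(sheared), img))        # True: perfectly invertible
```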
Using multirate identities, it is instructive to view an l-level tree-
structured DFB equivalently as a 2^l parallel-channel filter bank with
equivalent filters and overall sampling matrices. These equivalent
(directional) synthesis filters are denoted as D_k^(l), 0 ≤ k < 2^l, which
correspond to the sub bands. The corresponding overall sampling matrices
were shown to have the following diagonal forms:

S_k^(l) = diag(2^(l-1), 2) for 0 ≤ k < 2^(l-1), and
S_k^(l) = diag(2, 2^(l-1)) for 2^(l-1) ≤ k < 2^l,

which means sampling is separable. The two sets correspond to the
mostly horizontal and mostly vertical sets of directions, respectively.
3.2.3 MULTISCALE AND DIRECTIONAL DECOMPOSITION
THE DISCRETE CONTOURLET TRANSFORM
The Laplacian pyramid and the directional filter bank can be combined
into a double filter bank structure [3]. Since
the directional filter bank (DFB) was designed to capture the high frequency
content (representing directionality) of the input image, the low frequency
content is poorly handled. In fact, with the frequency partition, low frequencies
would leak into several directional sub bands, hence the DFB alone does not
provide a sparse representation for images. This fact provides another reason
to combine the DFB with a multiscale decomposition, where low
frequencies of the input image are removed before applying the DFB.
FIG 3.8 The contourlet filter bank: first, a multiscale decomposition
into octave bands by a Laplacian pyramid is computed, and then a
directional filter bank is applied to each band pass channel
This shows a multiscale and directional decomposition using a
combination of a Laplacian pyramid (LP) and a directional filter bank
(DFB). Band pass images from the LP are fed into a DFB so that directional
information can be captured. The scheme can be iterated on the coarse
image. The combined result is a double iterated filter bank structure, named
contourlet filter bank, which decomposes images into directional sub bands
at multiple scales.
Specifically, let a_0[n] be the input image. The output after the LP stage
is J band pass images b_j[n], j = 1, 2, . . . , J (in the fine-to-coarse order) and a
low pass image a_J[n]. That means, the j-th level of the LP decomposes the
image a_(j-1)[n] into a coarser image a_j[n] and a detail image b_j[n]. Each
band pass image b_j[n] is further decomposed by an l_j-level DFB into 2^(l_j)
band pass directional images c_(j,k)^(l_j)[n], k = 0, 1, . . . , 2^(l_j) − 1.
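The bookkeeping above can be sketched as follows. This is only structural (subband counts and sizes under the diagonal sampling matrices), not an actual transform; the image size and DFB depths chosen are illustrative assumptions.

```python
# Structural sketch of the contourlet decomposition (not a full transform):
# an n x n input goes through J LP levels; the band pass image at each
# level is split by an l_j-level DFB into 2**l_j directional bands.
def contourlet_layout(n, dfb_levels):
    bands, size = [], n
    for lj in dfb_levels:                 # fine-to-coarse LP levels j = 1..J
        bands.append({"scale": size, "directions": 2 ** lj})
        size //= 2                        # the coarse image is iterated on
    bands.append({"scale": size, "directions": 1})   # final low pass a_J
    return bands

for b in contourlet_layout(512, dfb_levels=[5, 4, 3]):
    print(b)
```

For a 512 × 512 image this lists 32, 16 and 8 directional bands at the three scales, plus one 64 × 64 low pass image.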
3.2.4 THEOREM
In a contourlet filter bank, the following hold:
1) If the LP and the DFB use perfect-reconstruction filters, then the
discrete contourlet transform achieves perfect reconstruction, which means it
provides a frame operator.
2) If the LP and the DFB use orthogonal filters, then the discrete
contourlet transform provides a tight frame with frame bounds equal to 1.
3) The discrete contourlet transform has a redundancy ratio that is less
than 4/3.
4) Suppose an l_j-level DFB is applied at the pyramidal level j of the
LP; then the basis images of the discrete contourlet transform (i.e. the
equivalent filters of the contourlet filter bank) have an essential support size
of width C·2^j and length C·2^(j+l_j−2).
5) Using FIR filters, the computational complexity of the discrete
contourlet transform is O (N) for N-pixel images.
3.2.5 PROOF
1) This is obvious as the discrete contourlet transform [2] is a
composition of perfect-reconstruction blocks.
2) With orthogonal filters, the LP [4] is a tight frame with frame
bounds equal to 1, which means it preserves the l_2-norm,
or ||a_0||^2 = Σ_(j=1)^J ||b_j||^2 + ||a_J||^2. Similarly, with orthogonal filters the DFB
is an orthogonal transform, which means ||b_j||^2 = Σ_(k=0)^(2^(l_j)−1) ||c_(j,k)^(l_j)||^2.
Combining these two stages, the discrete contourlet transform satisfies the
norm-preserving or tight frame condition.
3) Since the DFB is critically sampled, the redundancy of the discrete
contourlet transform is equal to the redundancy of the LP, which is
1 + Σ_(j=1)^J (1/4)^j < 4/3.
4) Using multirate identities, the LP band pass channel corresponding
to the pyramidal level j is approximately equivalent to filtering by a filter of
size about C_1·2^j × C_1·2^j, followed by down sampling by 2^(j−1) in each
dimension. For the DFB, it is noted that after l_j levels (l_j ≥ 2) of tree-
structured decomposition, the equivalent directional filters have support of
width about C_2·2 and length about C_2·2^(l_j−1). Combining these two stages, again
using multirate identities, into equivalent contourlet filter bank channels, we
see that contourlet basis images have support of width about C·2^j and length
about C·2^(j+l_j−2).
5) Let L_p and L_d be the number of taps of the pyramidal and
directional filters used in the LP and the DFB, respectively (without loss of
generality we can suppose that the low pass, high pass, analysis and synthesis
filters have the same length). With a polyphase implementation, the LP filter
bank requires L_p/2 + 1 operations per input sample. Since the LP is iterated
on coarse images whose sizes shrink by a factor of 4 at each level, for an
N-pixel image the complexity of the LP stage in the contourlet filter bank is
less than (4/3)·N·(L_p/2 + 1) = O(N).

For the DFB, its building-block two-channel filter banks require L_d
operations per input sample. With an l-level full binary tree decomposition, the
complexity of the DFB multiplies by l. This holds because the initial
decomposition block in the DFB is followed by two blocks at half rate, four
blocks at quarter rate and so on. Thus, the complexity of the DFB stage for an
N-pixel image is less than (4/3)·N·L_d·max_j(l_j) = O(N).
Thus we obtain the desired result. Since the multiscale and directional
decomposition stages are decoupled in the discrete contourlet transform, we
can have a different number of directions at different scales, thus offering a
flexible multiscale and directional expansion. Moreover, the full binary tree
decomposition of the DFB in the contourlet transform can be generalized to
arbitrary tree structures, similar to the wavelet packets generalization of the
wavelet transform. The result is a family of directional multiresolution
expansions, which we call contourlet packets. The examples of possible
frequency decompositions by the contourlet transform and contourlet packets
are shown. In particular, contourlet packets allow finer angular resolution
decomposition at any scale or direction, at the cost of spatial resolution. In
addition, from Theorem 1, it is noted that by altering the depth of the DFB
decomposition tree at different scales (and even at different orientations in a
contourlet packets transform), a rich set of contourlets with a variety of support
sizes and aspect ratios is obtained. This flexibility allows the contourlet
transform and the contourlet packets to fit smooth contours of various
curvatures well.
FIG 3.9 Examples of possible frequency decomposition by the
contourlet transform and contourlet packets
CHAPTER 4
CONTOURLET DENOISING
4.1 CONTOURLET DENOISING
A common approach to image denoising is to convert the noisy image
into a transform domain, such as the wavelet or Contourlet domain, and
then compare the transform coefficients with a fixed threshold. The
algorithm here instead performs a hypothesis test to determine whether each
pixel is corrupted or not, and thus does not depend on a fixed threshold.
As the first step, the noisy image is transformed into the contourlet
domain by the contourlet decomposition [5]. Then, for each coefficient, the
variance of the associated Gaussian distribution is estimated. Subsequently,
hypothesis tests are performed using the variance of the noisy image. The
coefficients obtained are compared with a given threshold to determine
whether each is corrupted or not. If it is, it is determined
to fall into a smooth region and may be processed for noise suppression. If it
is not, it should stand for an image feature pixel and be preserved.
Afterwards, the processed contourlets are utilized to reconstruct the
image, which is the final denoised output.
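The decision step can be sketched in Python as a local-variance test on a transform subband. The window size and the variance test itself are illustrative assumptions here; the report's implementation runs in MATLAB on contourlet coefficients from pdfbdec.

```python
import numpy as np

def classify_and_shrink(coeffs, threshold, win=3):
    """For each coefficient, estimate the local variance over a win x win
    neighbourhood. Coefficients whose local variance falls below the
    threshold are treated as noise in a smooth region and suppressed;
    the rest are kept as feature (edge) coefficients."""
    pad = win // 2
    padded = np.pad(coeffs, pad, mode="reflect")
    out = np.zeros_like(coeffs)
    rows, cols = coeffs.shape
    for i in range(rows):
        for j in range(cols):
            block = padded[i:i + win, j:j + win]
            if block.var() >= threshold:     # hypothesis: feature pixel
                out[i, j] = coeffs[i, j]     # preserve
            # else: smooth region, coefficient suppressed (left at zero)
    return out

sub = np.full((6, 6), 0.1)       # small noise-like values in a flat region
sub[:, 3] = 10.0                 # a strong vertical feature
den = classify_and_shrink(sub, threshold=1.0)
print(den[0, 0], den[0, 3])      # 0.0 10.0: flat region suppressed, edge kept
```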
The general block of the denoising part can be represented as,
FIG 4.1 Denoising Process Using Contourlet Transformation
[Block diagram: noisy image → Contourlet decomposition → thresholding → noise suppression / feature preservation → Contourlet reconstruction → denoised image]
4.2 ALGORITHM DESCRIPTION
1. The process starts with the noisy image.
2. This image is converted into the CT domain using the decomposition
process. In Wavelet, we determine the coefficients using a scaling and a
wavelet filter. But in Contourlet, we construct a discrete-domain
multiresolution and multidirection expansion using non-separable filter banks, in the
same way wavelets are obtained from filter banks. This construction
results in a flexible multiresolution, local and directional image expansion
using contour segments, hence the name Contourlet transform.
3. Thus, from the decomposition process, the coefficients are determined.
4. Then, for each noisy image pixel, the variance is estimated.
5. The resultant values are compared with a threshold value to
determine whether the pixel is corrupted or not.
6. If the pixel is corrupted, it is suppressed or modified. If it is not
corrupted, the pixel is preserved.
7. Then all the resultant coefficients are reconstructed, which results in
the denoised image.
8. After reconstruction, the SNR and IEF values are calculated for
this algorithm.
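The SNR and IEF of step 8 can be computed as below (in Python, for illustration). The IEF definition used here, error energy before versus after denoising, is the standard one and is assumed to match the report's.

```python
import numpy as np

def snr_db(clean, test):
    """Signal-to-noise ratio of `test` against the clean reference, in dB."""
    err = clean.astype(float) - test.astype(float)
    return 10 * np.log10(np.sum(clean.astype(float) ** 2) / np.sum(err ** 2))

def ief(clean, noisy, denoised):
    """Image Enhancement Factor: noise energy before / after denoising.
    IEF > 1 means the denoiser reduced the error energy."""
    clean = clean.astype(float)
    before = np.sum((noisy.astype(float) - clean) ** 2)
    after = np.sum((denoised.astype(float) - clean) ** 2)
    return before / after

rng = np.random.default_rng(1)
clean = rng.uniform(50, 200, size=(64, 64))
noisy = clean + rng.normal(0, 10, size=clean.shape)
halfway = (noisy + clean) / 2          # a stand-in "denoised" image
print(snr_db(clean, halfway) > snr_db(clean, noisy))   # True
print(round(ief(clean, noisy, halfway)))               # 4: error energy quartered
```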
4.3 THRESHOLDING
Generally, for denoising, the coefficients of the noisy image are compared
with a threshold value. These threshold values are obtained either by trial
and error, or by considering some standard method. Since human eyes
are very sensitive to the intensity of neighboring pixel values, in image denoising
techniques the variance between the considered pixel and the neighboring pixel
values must be small. By choosing the threshold values depending on the
variance, the noise level in the corrupted image decreases further.
In this algorithm, a threshold value is set based upon the variance of the
corrupted image. Based upon the results from various variance levels (nvar),
the threshold is fixed. The intensity of the noise being added to the image (th)
and the standard deviation of the noiseless image (sigma) are also the deciding
factors in fixing the threshold values.
Based upon the various results obtained, we deduce that the threshold
values must be fixed depending upon high and low noise levels.
In Speckle noise, the default variance level is 0.04, so considering Speckle
noise variance (nvar) above 0.05 as a high noise level and below 0.05 as a low
noise level, we introduced two separate threshold values.
Now to reconstruct the image, the coefficients above the threshold
values are retained for Contourlet reconstruction and the remaining noisy
coefficients are suppressed. The retained coefficients are reconstructed to
obtain the denoised image. This algorithm is simpler and more effective
compared to other algorithms.
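The report does not state the exact threshold formula, so the sketch below only mirrors the two-regime selection described above; the scaling constants k_low and k_high are placeholders, since the report fixes its values empirically.

```python
def pick_threshold(nvar, sigma, k_low=1.5, k_high=3.0):
    """Two-regime threshold selection: speckle variance above 0.05 counts
    as high noise. k_low and k_high are placeholder constants; the
    threshold scales with sigma, the std dev of the noiseless image."""
    k = k_high if nvar > 0.05 else k_low
    return k * sigma

print(pick_threshold(0.03, sigma=2.0))   # low-noise regime: 3.0
print(pick_threshold(0.10, sigma=2.0))   # high-noise regime: 6.0
```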
CHAPTER 5
RESULTS AND DISCUSSION
5.1 RESULTS AND DISCUSSION
5.1.1 RESULTS FOR MULTIPLICATIVE (SPECKLE) NOISE
SNR (dB) and IEF for wavelet denoising versus the proposed algorithm using the Contourlet transform, at four speckle noise levels (variances).

NOISE LEVEL = 0.03
Images     | SNR Wavelet | SNR Contourlet | IEF Wavelet | IEF Contourlet
LENA       | 9.44        | 11.77          | 2.01        | 3.45
PEPPER     | 10.73       | 11.36          | 2.10        | 2.43
SATELLITE  | 8.15        | 10.22          | 1.83        | 2.95
MEDICAL    | 13.36       | 15.73          | 3.02        | 5.25
BARBARA    | 10.04       | 9.81           | 1.65        | 1.57

NOISE LEVEL = 0.04
LENA       | 7.94        | 10.95          | 1.87        | 3.75
PEPPER     | 9.21        | 10.68          | 1.97        | 2.75
SATELLITE  | 6.65        | 9.34           | 1.69        | 3.16
MEDICAL    | 11.55       | 14.81          | 2.66        | 5.62
BARBARA    | 8.84        | 9.39           | 1.66        | 1.88

NOISE LEVEL = 0.06
LENA       | 5.75        | 10.01          | 1.66        | 4.44
PEPPER     | 7.02        | 9.65           | 1.75        | 3.19
SATELLITE  | 4.47        | 8.40           | 1.52        | 3.76
MEDICAL    | 8.84        | 14.56          | 2.13        | 8.03
BARBARA    | 6.99        | 8.67           | 1.58        | 2.33

NOISE LEVEL = 0.1
LENA       | 3.15        | 8.20           | 1.47        | 4.75
PEPPER     | 4.44        | 8.56           | 1.55        | 3.97
SATELLITE  | 1.95        | 6.48           | 1.37        | 3.88
MEDICAL    | 5.65        | 12.00          | 1.71        | 7.36
BARBARA    | 4.56        | 7.66           | 1.45        | 2.96
TABLE 5.1 Comparison between Wavelet and Contourlet Transforms
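The noise levels in the table are speckle variances. A sketch of how such noise can be generated, assuming the common convention (as in MATLAB's imnoise) J = I + n·I, where n is zero-mean uniform noise with the requested variance:

```python
import numpy as np

def add_speckle(img, nvar, rng=None):
    """Multiplicative (speckle) noise: J = I + n*I with n zero-mean
    uniform noise of variance nvar (uniform on [-a, a] has variance a^2/3)."""
    rng = rng or np.random.default_rng()
    half_width = np.sqrt(3 * nvar)
    n = rng.uniform(-half_width, half_width, size=img.shape)
    return img + n * img

rng = np.random.default_rng(0)
img = np.full((256, 256), 100.0)
noisy = add_speckle(img, nvar=0.04, rng=rng)
# the sample variance of n should be close to the requested 0.04
print(np.var((noisy - img) / img))
```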
5.1.2 QUALITATIVE RESULTS
FIG 5.1 Results of Lena image for lower speckle noise level (0.03)
The input image selected here is the Lena image. When the speckle
noise level is 0.03, the results obtained are as follows: the SNR
of the corrupted image is 6.41dB, while for wavelet denoising the SNR and
IEF obtained are 9.44dB and 2.01 respectively. For the proposed
algorithm, the SNR and IEF obtained are 11.74dB and 3.42 respectively,
which shows that the new algorithm is comparatively more efficient.

If the SNR (Signal to Noise Ratio) value is high, it means that more
information is extracted from the noisy image, and from the
results obtained for this corrupted image, the SNR is higher for the new
algorithm using Contourlet transformation, and hence it extracts more
information.
If the IEF (Image Enhancement Factor) value is high, it means that
more edges are preserved and the information in those edge regions can also
be extracted; here too the new algorithm outperforms the wavelet.

We can also infer that the image obtained after the Contourlet
transformation is better than the Wavelet result from a visual perspective as well.
FIG 5.2 Results of Lena image for speckle noise level 0.04
The image to be denoised is the Lena image with added speckle noise
of level 0.04; the SNR of the noisy image is 5.21dB. The SNR
and IEF values for wavelet denoising are 7.91dB and 1.86 respectively, and
for the proposed algorithm 10.09dB and 3.79 respectively,
showing that the Contourlet-based algorithm is better than the wavelet
algorithm.
FIG 5.3 Results of Lena image for speckle noise level 0.06
The image to be denoised is the Lena image with added speckle noise
of level 0.06; the SNR of the noisy image is 3.53dB. The SNR
and IEF values for wavelet denoising are 5.73dB and 1.66 respectively, and
for the proposed algorithm 9.99dB and 4.44 respectively,
showing that the Contourlet-based algorithm is better than the wavelet
algorithm.
FIG 5.4 Results of Lena image for speckle with high noise level 0.1
The image to be denoised is the Lena image with added speckle noise
of level 0.1; the SNR of the noisy image is 1.46dB. The SNR and
IEF values for wavelet denoising are 3.15dB and 1.47 respectively, and for
the proposed algorithm 8.08dB and 4.62 respectively, showing
that the Contourlet-based algorithm is better than the wavelet algorithm.
Visually too, the Contourlet-denoised image is far better than the
wavelet result.
FIG 5.5 Results of Satellite image for speckle with noise level 0.03
Here we take the Satellite image as the test image, in which the contours
should be clearer for gathering information, and the qualitative results
are equally important. Here also the proposed Contourlet-based
algorithm performs better: its SNR and IEF are 10.32dB and 3.00
respectively, while for wavelet the SNR and IEF are 8.22dB and 1.85
respectively.
FIG 5.7 Results of Satellite image for speckle with noise level 0.06
Here we take the Satellite image as the test image, in which the contours
should be clearer for gathering information, and the qualitative results
are equally important. Here also the proposed Contourlet-based
algorithm performs better: its SNR and IEF are 8.40dB and 3.75
respectively, while for wavelet the SNR and IEF are 4.50dB and 1.52
respectively.
FIG 5.8 Results of Satellite image for speckle with high noise level 0.1
Here we take the Satellite image as the test image, in which the contours
should be clearer for gathering information, and the qualitative results
are equally important. Here also the proposed Contourlet-based
algorithm performs better: its SNR and IEF are 6.42dB and 3.84
respectively, while for wavelet the SNR and IEF are 1.94dB and 1.36
respectively.

It is thus evident from the results that the proposed Contourlet-based
algorithm is well suited for satellite image applications.
FIG 5.9 Results of Pepper image for speckle with noise level 0.06
Here we take the Pepper image as the test image, which contains many
edges that should be preserved after denoising. Here also the
proposed Contourlet-based algorithm performs better: its SNR and
IEF are 9.62dB and 3.17 respectively, while for wavelet the SNR and IEF are
7.06dB and 1.76 respectively. Since the IEF is higher for the proposed
algorithm, we deduce that it can effectively replace the wavelet
algorithm.
FIG 5.10 Results of Barbara image for speckle with high noise level 0.1
Here we take the Barbara image as the test image, which is one of the
standard images used in image processing. Considering the results obtained,
the proposed Contourlet-based algorithm performs better, with SNR and IEF of
7.71dB and 3.00 respectively, while for wavelet the SNR and IEF are 4.56dB
and 1.45 respectively.
FIG 5.11 Results of Medical image for speckle with low noise level 0.03
Here we take a Medical image as the test image; medical applications
demand high accuracy and high SNR. The proposed Contourlet-based
algorithm achieves SNR and IEF of 15.77dB and 5.26 respectively,
whereas for wavelet the SNR and IEF are 13.35dB and 3.01 respectively.

It is thus evident that the Contourlet transform will be helpful in
medical applications as well.
CHAPTER 6
MATLAB-GUI
6.1 MATLAB FUNCTIONS
1. imread - loads the image.
2. imshow - displays the image.
3. pdfbdec - performs the decomposition process.
4. pdfb2vec - converts the decomposition output into a vector of coefficients.
5. pdfbrec - performs the reconstruction process.
6.2 MATLAB GUI (GRAPHICAL USER INTERFACE)
6.2.1 INTRODUCTION
The main reason GUIs are used is that they make things simple for
the end-users of a program. Without a GUI, users would have to
work from the command-line interface, which can be difficult and
frustrating. A simple GUI that adds two numbers and displays the
answer in a designated text field is discussed below.
FIG 6.1 Simple GUI that adds 2 numbers
6.2.2 INITIALIZING GUIDE (GUI CREATOR)
The command guide is typed in the command window.
FIG 6.2 The command guide is typed in the command window
The first option, Blank GUI, is selected.
FIG 6.3 The first option, Blank GUI, is selected
The following screen will be shown.
FIG 6.4 The GUI Editor Screen
Before adding components blindly, it is good to have a rough idea of
how you want the graphical part of the GUI to look, so that it will be
easier to lay it out. The finished GUI will look like the following:
FIG 6.5 Sample GUI model
CONCLUSION
Hence, the new proposed algorithm based on the Contourlet
transformation is found to be more efficient than the wavelet algorithm in
image denoising, particularly for the removal of speckle noise. This
conclusion is based on test images such as Lena, Barbara and Peppers,
along with satellite and medical images, corrupted with speckle noise (a
multiplicative noise) at various noise levels: 0.03, 0.04, 0.06 and 0.1.
The obtained qualitative and quantitative results show that the proposed
algorithm outperforms the wavelet in terms of SNR, IEF and visual quality.
The algorithms are implemented using MATLAB 7.5 R2007b.
FUTURE WORK
It has been theoretically studied and shown that the new algorithm using
the Contourlet transformation is better for denoising and other image
processing operations. It can also be implemented in hardware; a
feasible option is a VLSI implementation with large memory.
REFERENCES
1. W. Y. Chan, N. F. Law and W. C. Siu, "Multiscale feature analysis using directional filter bank."
2. M. N. Do and M. Vetterli, "Contourlets: a directional multiresolution image representation," in Proc. IEEE Int. Conf. on Image Processing, 2002.
3. M. N. Do and M. Vetterli, "Pyramidal directional filter banks and curvelets," in Proc. IEEE Int. Conf. on Image Processing, Oct. 2001.
4. P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Trans. Commun., vol. 31, no. 4, pp. 532-540, April 1983.
5. Zhiling Long and Nicolas H. Younan, "Denoising of images with multiplicative noise corruption."
6. www.mathworks.com