Lossless Image Compression Report
8/7/2019 LosslessImg Cmprsn Report
ABSTRACT
Image compression is the application of data compression to digital images: a technique by which image information can be represented by a smaller number of bits. The objective is to reduce the redundancy of the image data so that the data can be stored or transmitted in an efficient form. Image compression can be lossy or lossless. Lossless compression is often preferred for artificial images such as technical drawings, icons, or comics, because lossy compression methods, especially when used at low bit rates, introduce compression artifacts. Lossless compression methods may also be preferred for high-value content, such as medical imagery or image scans made for archival purposes. Lossy methods are especially suitable for natural images such as photos, in applications where a minor (sometimes imperceptible) loss of fidelity is acceptable in exchange for a substantial reduction in bit rate. Techniques of both lossy and lossless compression exist. Some of the compression techniques presented here are derived from standard signal/data compression methodologies, while others were developed by exploiting the characteristics of digital images.
CONTENTS

INTRODUCTION
DIGITAL IMAGE REPRESENTATION
DIGITIZING IMAGE
REDUNDANCY VS COMPRESSION RATIO
TYPES OF REDUNDANCY
IMAGE COMPRESSION MODELS
LOSSLESS IMAGE COMPRESSION ALGORITHMS
APPLICATIONS
CONCLUSION
REFERENCES
INTRODUCTION
An enormous amount of data is produced when a 2-D light intensity function is sampled and quantized to create a digital image. In fact, the amount of data produced may be so great that it results in impractical storage, processing, and communication requirements. For instance, more than 25 GB of data are required to represent the Encyclopedia Britannica in digital form. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The underlying basis of the reduction process is the removal of redundant data. From a mathematical point of view, this amounts to transforming a 2-D pixel array into
a statistically uncorrelated data set. The transformation is
applied prior to storage or transmission of the image. At
some later time the compressed image is decompressed
to get the original image or an approximation to it.
WHY WE NEED IMAGE COMPRESSION
We need image compression for the following reasons:
1. Computational expenses
2. Storage requirements
3. Transmission time and cost
4. Utilization of resources
TYPES OF IMAGE COMPRESSION
There are two forms of compression, lossless and lossy, and digital cameras use both.

Lossless compression. Lossless compression decompresses an image so that its quality matches the original source; nothing is lost. Although lossless compression sounds ideal, it doesn't provide much compression, and files remain quite large. For this reason, lossless compression is used mainly where detail is extremely important, as it is when planning to make large prints. Lossless compression normally provides compression ratios below 10:1.
Lossy compression. Because lossless compression isn't practical in many cases, all popular digital cameras offer lossy compression. This process degrades images to some degree, and the more they're compressed, the more degraded they become. In many situations, such as posting images on the Web or making small to medium-sized prints, the image degradation isn't obvious. Here the maximum compression ratio is a function of the reconstruction quality; lossy compression provides compression ratios above 10:1.
DIGITAL REPRESENTATION OF IMAGE
A digital image is an image f(x,y) that has been discretized in both spatial coordinates and brightness. A digital image can be considered a matrix whose row and column indices identify a point in the image and whose corresponding matrix element value identifies the gray level at that point. The elements of such a digital array are called image elements, picture elements, pixels, or pels.
DIGITIZATION OF IMAGE
First of all, a physical device called an image sensor converts the visual image into an electrical signal. A digitizer then converts the electrical signal to digital form. This involves two steps.

1. Sampling: Sampling is the process by which the image intensity formed over a patch in the continuous domain is mapped to a discrete point with integer coordinates. A continuous image function can be sampled using a discrete grid of sampling points in the plane. These sampling points are ordered in the plane, and their geometric relation is called the grid. Grids used in practice are mainly square or hexagonal.
2. Quantization: The magnitude of the sampled image is expressed as a digital value in image processing. The transition between continuous values of the image function (brightness) and their digital equivalents is called quantization. The number of quantization levels should be high enough for human perception of fine shading details in the image. The occurrence of false contours is the main problem in an image that has been quantized with insufficient brightness levels; this effect arises when the number of brightness levels is lower than that which humans can easily distinguish.
This number depends on many factors (for example, the average local brightness), but displays that avoid this effect normally provide a range of at least 100 intensity levels. The problem can be reduced by quantization into intervals of unequal length, in which the intervals corresponding to less probable brightnesses in the image are enlarged; these gray-scale transformation techniques are considered in later sections. Most digital image processing devices use quantization into k equal intervals. If b bits are used per pixel, the number of brightness levels is 2^b. Eight bits per pixel are commonly used; specialized measuring devices use 12 or more bits per pixel.
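As a minimal sketch of quantization into k equal intervals (the function name and the midpoint-reconstruction choice are illustrative assumptions, not taken from this report):

```python
def quantize(value, levels=16, max_value=255):
    """Uniformly quantize `value` in [0, max_value] into `levels` equal
    intervals and reconstruct it at the midpoint of its interval."""
    step = (max_value + 1) / levels
    index = min(int(value // step), levels - 1)
    return round(index * step + step / 2)

# With 16 levels over [0, 255], each interval is 16 gray levels wide:
print(quantize(37))   # 37 falls in [32, 48) and is reconstructed as 40
print(quantize(200))  # 200 falls in [192, 208) and is reconstructed as 200
```

Using fewer levels saves bits per pixel but risks the false contours described above.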
COMPRESSION RATIO AND REDUNDANCY
Compression ratio is the ratio between the sizes of the uncompressed and compressed images:

Compression ratio (CR) = (#bits in the original data) / (#bits in the compressed data)

Data and information are not the same. Data are the means to convey information, and various amounts of data can convey the same information. If data provide no relevant information, or simply restate what is already known, this is called data redundancy. It can be defined as
Data Redundancy (RD) = 1 - 1/CR

0 < CR < ∞
-∞ < RD < 1
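A small sketch of these two definitions (the function name and the 512×512 example image are illustrative assumptions):

```python
def compression_stats(original_bits, compressed_bits):
    """CR = original/compressed bits; RD = 1 - 1/CR."""
    cr = original_bits / compressed_bits
    rd = 1 - 1 / cr
    return cr, rd

# An 8-bit 512x512 image compressed to half its original size:
cr, rd = compression_stats(8 * 512 * 512, 4 * 512 * 512)
print(cr, rd)  # 2.0 0.5 -- half of the original data is redundant
```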
TYPES OF REDUNDANCY
Coding redundancy
Interpixel redundancy
Psychovisual redundancy
CODING REDUNDANCY:
If the gray levels of an image are coded in a way that
uses more code symbols than absolutely necessary to
represent each gray level, the resulting image is said to
contain coding redundancy.
In general, coding redundancy is present when the codes assigned to a set of events have not been selected to take full advantage of the probabilities of the events. The underlying basis for coding redundancy is that images are typically composed of objects that have a regular and somewhat predictable morphology (shape) and are sampled so that the objects being depicted are much larger than the picture elements. The natural consequence is that, in most images, certain gray levels are more probable than others (that is, the histograms of most images are not uniform).
PSYCHOVISUAL REDUNDANCY:
Psychovisual redundancy stems from the fact that the human eye does not respond with equal sensitivity to all visual information. The human visual system does not rely on quantitative analysis of individual pixel values when interpreting an image; an observer searches for distinct features and mentally combines them into recognizable groupings. In this process certain information is relatively less important than other information; such information is called psychovisually redundant. Psychovisually redundant image information can be identified and removed, a process referred to as quantization. Quantization eliminates data and therefore results in lossy data compression. Reconstruction errors introduced by quantization can be evaluated objectively or subjectively, depending on the application's needs and specifications.
[Figure: quantization effects. (a) Original image; (b) uniform quantization to 16 levels; (c) IGS quantization to 16 levels.]
INTERPIXEL REDUNDANCY:
Interpixel redundancy is defined as a failure to identify and utilize data relationships. If a pixel value can be reasonably predicted from its neighboring (or preceding/following) pixels, the image is said to contain interpixel redundancy. Interpixel redundancy depends on the resolution of the image.
The higher the (spatial) resolution of an image, the more probable it is that two neighboring pixels will depict the same object. The higher the frame rate in a video stream, the more probable it is that the corresponding pixel in the following frame will depict the same object. These types of predictions are made more difficult by the presence of noise.
IMAGE COMPRESSION MODEL
We have discussed individually three general techniques for reducing or compressing the amount of data required to represent an image. In practice, these techniques are combined to form practical image compression systems.

The source encoder removes input redundancies. The channel encoder increases the noise immunity of the source encoder's output. If the channel between the encoder and decoder is noise-free, the channel encoder and decoder are omitted.
[Block diagram: f(x,y) → Source Encoder → Channel Encoder → Channel → Channel Decoder → Source Decoder → reconstructed f(x,y). The source encoder consists of a Mapper, a Quantizer, and a Symbol Encoder.]
The symbol decoder and inverse mapper perform the inverse operations of the symbol encoder and mapper, respectively.
LOSSLESS IMAGE COMPRESSION
[Block diagram: the source decoder consists of a Symbol Decoder followed by an Inverse Mapper.]
Data reduction can be achieved by encoding the pixel values in such a way that the average number of bits b̄ used to represent each pixel is much less than b, the number of bits used to represent each pixel originally. Here we are not discarding any information; we simply represent more frequently occurring values by shorter codes.

Average number of bits used to represent each pixel (b̄): suppose the image contains L different gray levels i = 0, 1, 2, ..., L-1, and ni denotes the frequency of occurrence of pixels having gray level i.
Then the probability of occurrence of gray level i in the image is

pi = ni / (n0 + n1 + ... + nL-1)

Entropy is a measure of the amount of information: the degree of randomness in the occurrence of gray levels in an image. If li is the length of the code for gray level i,

H = - Σ pi log2(pi)        b̄ = Σ pi li        (sums over i = 0, ..., L-1)

For lossless image compression, b̄ should approximate H; i.e., H is the lower bound of b̄.
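A short sketch of the entropy computation (the function name and the tiny 8-pixel example are illustrative):

```python
import math
from collections import Counter

def entropy(pixels):
    """H = -sum(p_i * log2(p_i)) over the gray levels present."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# 8 pixels with gray-level frequencies 4, 2, 1, 1:
print(entropy([0, 0, 0, 0, 1, 1, 2, 3]))  # 1.75 bits/pixel
```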
LOSSLESS IMAGE COMPRESSION ALGORITHMS:
Lossless means the reconstructed image does not lose any information relative to the original. There is a wide range of lossless data compression techniques. The common techniques are:
Huffman coding
Arithmetic coding
Run-length coding
Predictive coding
HUFFMAN CODING
Huffman coding provides a data representation with the smallest possible number of code symbols (when coding the symbols of an information source individually). This is done by assigning fewer bits to the symbols that appear more often and more bits to the symbols that appear less often. It is most efficient when the occurrence probabilities vary widely.

Huffman code construction is done in two steps:

1) Source reduction
We create a series of source reductions by ordering the probabilities of the symbols and then recursively combining the two symbols of lowest probability to form a new symbol.

2) Code assignment
When only two symbols remain, step 1) is retraced and a code bit is assigned to each symbol at each step.

Encoding and decoding are done using these codes in a lookup-table manner.
Huffman source reduction:

Symbol   Prob.   1      2      3      4
a2       0.4     0.4    0.4    0.4    0.6
a6       0.3     0.3    0.3    0.3    0.4
a1       0.1     0.1    0.2    0.3
a4       0.1     0.1    0.1
a3       0.06    0.1
a5       0.04

Huffman code assignment:

Symbol   Prob.   Code    1           2          3         4
a2       0.4     1       0.4  1      0.4  1     0.4  1    0.6  0
a6       0.3     00      0.3  00     0.3  00    0.3  00   0.4  1
a1       0.1     011     0.1  011    0.2  010   0.3  01
a4       0.1     0100    0.1  0100   0.1  011
a3       0.06    01010   0.1  0101
a5       0.04    01011

Lavg = 0.4*1 + 0.3*2 + 0.1*3 + 0.1*4 + 0.06*5 + 0.04*5 = 2.2 bits/symbol
Entropy = 2.14 bits/symbol
Huffman coding efficiency is 0.973
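The two construction steps can be sketched with a binary heap; this is an illustrative implementation (not from the report), applied to the six-symbol source above:

```python
import heapq

def huffman_codes(probs):
    """Build a Huffman code for {symbol: probability} by repeatedly
    merging the two least probable nodes."""
    # each heap entry: (probability, tie-breaker, {symbol: partial code})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        # prefix one branch with 0 and the other with 1
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

probs = {"a2": 0.4, "a6": 0.3, "a1": 0.1, "a4": 0.1, "a3": 0.06, "a5": 0.04}
codes = huffman_codes(probs)
avg = sum(probs[s] * len(codes[s]) for s in probs)
print(codes, avg)
```

The exact bit patterns depend on how ties are broken, but any Huffman code for this source achieves the same average length of 2.2 bits/symbol.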
ARITHMETIC CODING
Provides a data representation where the entire symbol
sequence is encoded as a single arithmetic code word
(which is represented as an interval of real numbers
between 0 and 1)
Arithmetic code construction is done in three steps:
1) Subdivide the half-open interval [0,1) based on
probabilities of the source symbols
2) For each source symbol recursively
a) Narrow the interval to the sub-interval designated by
the encoded symbol.
b) Subdivide the new interval among the source symbols based on their probabilities.
3) Append an end-of-message indicator.
Encoding is done by choosing any number in the interval
to represent the data.
Decoding is done by retracing the steps in the code
construction.
[Figure: arithmetic coding procedure for encoding a five-symbol sequence; the interval narrows step by step from [0, 1) to approximately [0.0675, 0.0688).]
In the above example, three decimal digits are used to represent the five-symbol message. This translates into 0.6 decimal digits per source symbol and compares favorably with the entropy of the source, which is 0.58 digits per source symbol.
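The interval-narrowing steps can be sketched as follows. The symbol probabilities and the message a1 a2 a3 a3 a4 are assumptions chosen to reproduce the interval endpoints in the figure (they match the classic textbook illustration); the end-of-message indicator is omitted for brevity:

```python
def arithmetic_encode(message, probs):
    """Recursively narrow [low, high); any number in the final
    interval encodes the whole message."""
    # cumulative probability at the start of each symbol's sub-interval
    cum, total = {}, 0.0
    for s, p in probs.items():
        cum[s] = total
        total += p
    low, high = 0.0, 1.0
    for s in message:
        span = high - low
        high = low + span * (cum[s] + probs[s])  # uses the old `low`
        low = low + span * cum[s]
    return low, high

probs = {"a1": 0.2, "a2": 0.2, "a3": 0.4, "a4": 0.2}
low, high = arithmetic_encode(["a1", "a2", "a3", "a3", "a4"], probs)
print(low, high)  # ~0.06752 and ~0.0688; e.g. 0.068 encodes the message
```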
RUN-LENGTH CODING
Run-length encoding (RLE) is a very simple form of data compression in which runs of data (that is, sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count, rather than as the original run. This is most useful on data that contain many such runs: for example, relatively simple graphic images such as icons, line drawings, and animations.
For example, consider a screen containing plain black text on a solid white background. There will be many long runs of white pixels in the blank space, and many short runs of black pixels within the text. Let us take a hypothetical single scan line, with B representing a black pixel and W representing white:

WWWWWWWWWWWWBWWWWWWWWWWWWBBBWWWWWWWWWWWWWWWWWWWWWWWWBWWWWWWWWWWWWWW

If we apply the run-length encoding (RLE) data compression algorithm to the above hypothetical scan line, we get the following:

12WB12W3B24WB14W

Interpret this as twelve W's, one B, twelve W's, three B's, and so on. The run-length code represents the original 67 characters in only 16. Of course, the actual format used for the storage of images is generally binary rather than ASCII characters like this, but the principle remains the same.
Another example:
Suppose the first row of an image consists of 128 pixels of the same color. Instead of using 384 bytes (128 × 3), only 4 bytes are needed (3 for the color and 1 for the count). RLE works well on images with areas of flat color, not continuously blended tones.
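A minimal sketch of the scheme used in the scan-line example (the function name and the rule of omitting a count of 1 are assumptions chosen to match the text's output format):

```python
import re

def rle_encode(line):
    """Store each run as [count]char; a count of 1 is written as the
    bare character, matching the 12WB12W... example in the text."""
    encoded = []
    for m in re.finditer(r"(.)\1*", line):  # each maximal run of one char
        run = m.group(0)
        encoded.append((str(len(run)) if len(run) > 1 else "") + run[0])
    return "".join(encoded)

# The hypothetical scan line from the text (67 characters):
line = "W" * 12 + "B" + "W" * 12 + "B" * 3 + "W" * 24 + "B" + "W" * 14
print(len(line), rle_encode(line))  # 67 12WB12W3B24WB14W
```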
PREDICTIVE CODING
Predictive coding provides a data representation in which code words express source symbol deviations from predicted values (usually the values of neighboring pixels).

Predictive coding efficiently reduces interpixel redundancies.
1-D and 2-D pixels are predicted from neighboring pixels.
It works well for all images with a high degree of interpixel redundancy.
It works in the presence of noise (just not as efficiently).
Input data: 22222222222666666666666669999999999999999
Code:       20000000000400000000000003000000000000000
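The example above is first-order prediction from the left neighbour: the first value is kept and only the differences follow. A sketch (the run lengths 11, 14, and 16 are read off the example strings):

```python
def predictive_encode(pixels):
    """Keep the first pixel; store each later pixel as its deviation
    from the previous (predicting) pixel."""
    return [pixels[0]] + [pixels[i] - pixels[i - 1] for i in range(1, len(pixels))]

def predictive_decode(errors):
    out = [errors[0]]
    for e in errors[1:]:
        out.append(out[-1] + e)  # prediction + stored error
    return out

data = [2] * 11 + [6] * 14 + [9] * 16
code = predictive_encode(data)
print("".join(map(str, code)))  # a 2, then 0s; a 4 at the first 6; a 3 at the first 9
assert predictive_decode(code) == data
```

Long constant runs map to long runs of zero error, which a subsequent entropy coder compresses well.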
Predictive coding can be used in both lossless and lossy compression schemes.

[Figures: predictive encoder and decoder circuits.]
The amount of compression achieved in lossless
predictive coding is related directly to the entropy
reduction that results from mapping the input image into
the prediction error sequence.
APPLICATIONS
In numerous applications error-free compression is the
only acceptable means of data reduction. Such
applications are:
Archival of medical and business documents, where lossy compression is usually prohibited for legal reasons.
Processing of LANDSAT imagery, where both the use and the cost of collecting the data make any loss undesirable.
Digital radiography, where the loss of any information can compromise diagnostic accuracy.
Control of remotely piloted vehicles in military, space, and hazardous waste control applications.
CONCLUSION
Image compression techniques, by which image information can be represented by a smaller number of bits, are very useful for image transmission from one point to another and for image archival purposes. Image compression has been, and continues to be, crucial to the growth of multimedia computing.
In addition to medical imaging, compression is one of the few areas of image processing that has received sufficiently broad commercial appeal to warrant the adoption of widely accepted standards. In short, an ever-expanding number of applications depend on the efficient manipulation, storage, and transmission of binary, grayscale, or color images.
REFERENCES:

[1] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, Pearson Prentice Hall, 2nd Edition, pp. 431-480, 2006.
[2] B. Chanda and D. Dutta Majumdar, Digital Image Processing, PHI Publication, pp. 145-165, 2006.
[3] Kenneth R. Castleman, Digital Image Processing, Pearson Education, 1st Edition, 2007.