Council for Innovative Research
International Journal of Computers & Distributed Systems
www.cirworld.com, Volume 1, Issue 2, August 2012
A Novel Approach of Lossless Image Compression Using
Two Techniques
Tanureet Kaur, GNDEC, Ludhiana, Punjab, India
Amanpreet Singh Brar, Prof. & Head, GNDEC, Ludhiana, Punjab, India
ABSTRACT
With the advances in technology, many compression schemes now exist, each unique in itself. However, the decision to use one in a particular situation is dictated largely by the type of data being compressed: the greater the compression ratio, the more dependent the method tends to be on the characteristics of the data. Several lossless techniques have been proposed so far for image compression.
Here we study lossless image compression using two techniques, delta encoding and run-length encoding. Delta encoding is a simple compression scheme for audio, EEG, and general time-series data files. Numerous past research projects have observed that one object can be compressed relative to another by computing the differences between the two; delta-encoding systems have almost invariably required knowledge of a specific relationship between them. The idea of using delta encoding to reduce communication or storage costs is not new. Combining it with lossless run-length encoding, we propose a new algorithm for lossless image compression.
Keywords
Delta encoding, run-length encoding
1. INTRODUCTION
Data transmission and storage cost money: the more information being dealt with, the more it costs. In spite of this, most digital data are not stored in the most compact form. Rather, they are stored in whatever way makes them easiest to use, such as ASCII text from word processors, binary code that can be executed on a computer, or individual samples from a data acquisition system. Typically, these easy-to-use encoding methods require data files about twice as large as actually needed to represent the information. Data compression is the general term for the various algorithms and programs developed to address this problem. A compression program is used to convert data from an easy-to-use format to one optimized for compactness. Likewise, an uncompression program returns the information to its original form. Simple encoding techniques for data compression include run-length, delta, and Huffman encoding.
1.1 Data Compression Strategies
The methods have been classified as either lossless or lossy. A lossless technique means that the restored data file is identical to the original. This is absolutely necessary for many types of data, for example executable code, word-processing files, and tabulated numbers: you cannot afford to misplace even a single bit of this type of information. In comparison, data files that represent images and other acquired signals do not have to be kept in perfect condition for storage or transmission. All real-world measurements inherently contain a certain amount of noise. If the changes made to these signals resemble a small amount of additional noise, no harm is done. Compression techniques that allow this type of degradation are called lossy. This distinction is important because lossy techniques are much more effective at compression than lossless methods. The higher the compression ratio, the more noise added to the data [3].
1.1.1 Delta Encoding
Delta encoding can be used for data compression when the values in the original data are smooth, that is, there is typically only a small change between adjacent values. This is not the case for ASCII text and executable code; however, it is very common when the file represents a signal. Many compression schemes exist, and the right one to use in a particular situation depends largely on the type of data being compressed: the greater the compression ratio, the more dependent the method tends to be on the characteristics of the data. For example, greater compression ratios can be achieved when run-length encoding is applied to black-and-white drawings than when it is applied to colour photographs, and JPEG gives better compression ratios for photographs than a general compression method such as GZIP.
Many compression algorithms achieve very large compression ratios by "changing" the data. The changes are often cleverly made so that they are not noticeable; for example, JPEG degrades the image in ways that the human visual system is not sensitive to. Some audio compression methods approximate the original signal with consideration of the limitations of the target playback system.
A requirement for many recordings used in scientific analysis is that the data must not be degraded in any way; this is often referred to as lossless compression. In many time series derived from sampling continuous signals, the transition between samples is much smaller than the total range available for the samples. For example, an acquisition system might store each sample in 16 bits; however, since each subsequent sample usually changes slowly, the difference between two samples can be stored in fewer bits. This is the essence of delta coding: store the changes instead of the absolute values. Of course, the first sample needs to be stored in full resolution, and occasionally there might be a larger transition. To cope with this, a flag is normally used to indicate whether the next sample is a delta or an absolute value. Delta coding works especially well when the file represents a signal. For instance, Fig. 1(a) shows a segment of an audio signal, digitized to 8 bits, with each sample between -127 and 127. Figure 1(b) shows the delta-encoded version of this signal. The key feature is that the delta-encoded signal has lower amplitude than the original signal. In other words, delta encoding has increased the probability that each sample's value will be near zero, and decreased the probability that it will be far from zero. This uneven probability is just the thing that Huffman encoding needs to operate. If the original signal is not changing, or is
changing in a straight line, delta encoding will result in runs of samples having the same value.
FIGURE 1(a) and 1(b)
Example of delta encoding: Figure (a) is an audio signal digitized to 8 bits. Figure (b) shows the delta-encoded version of this signal. Delta encoding is useful for data compression if the signal being encoded varies slowly from sample to sample.
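The store-the-differences idea above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation; the function names are ours:

```python
def delta_encode(samples):
    """Store the first sample absolutely, then only sample-to-sample changes."""
    if not samples:
        return []
    out = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        out.append(cur - prev)
    return out

def delta_decode(deltas):
    """Invert delta encoding by accumulating the differences."""
    if not deltas:
        return []
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out
```

For a slowly varying signal such as [100, 101, 103, 103, 102], the encoded stream is [100, 1, 2, 0, -1]: every value after the first is near zero, which is exactly the skewed distribution Huffman coding exploits.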
1.1.2 Run-Length Encoding
Data files frequently contain the same character repeated many times in a row. For example, text files use multiple spaces to separate sentences, indent paragraphs, and format tables and charts. Digitized signals can also have runs of the same value, indicating that the signal is not changing. For instance, an image of the nighttime sky would contain long runs of the character or characters representing the black background. Likewise, digitized music might have a long run of zeros between songs. Run-length encoding is a simple method of compressing these types of files. Figure 1.2 illustrates run-length encoding for a data sequence having frequent runs of zeros. Each time a zero is encountered in the input data, two values are written to the output file. The first of these values is a zero, a flag to indicate that run-length compression is beginning. The second value is the number of zeros in the run. If the average run length is longer than two, compression will take place. On the other hand, many single zeros in the data can make the encoded file larger than the original.
Many different run-length schemes have been developed. For example, the input data can be treated as individual bytes, or as groups of bytes that represent something more elaborate, such as floating-point numbers. Run-length encoding can be used on only one of the characters (as with the zero above), on several of the characters, or on all of the characters. A good example of a generalized run-length scheme is PackBits, created for Macintosh users. Each byte (eight bits) from the input file is replaced by nine bits in the compressed file. The added ninth bit is interpreted as the sign of the number; that is, each character read from the input file is between 0 and 255, while each character written to the encoded file is between -255 and 255. To understand how this is used, consider the input file 1,2,3,4,2,2,2,2,4 and the compressed file generated by the PackBits algorithm: 1,2,3,4,2,-3,4. The compression program simply transfers each number from the input file to the compressed file, with the exception of the run 2,2,2,2. This is represented in the compressed file by the two numbers 2,-3. The first number ("2") indicates what character the run consists of. The second number ("-3") indicates the number of characters in the run, found by taking the absolute value and adding one. For instance, 4,-2 means 4, 4, 4; 21,-4 means 21, 21, 21, 21, 21; and so on.
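The nine-bit scheme just described can be sketched in Python, using plain integers in place of packed nine-bit fields; this is an illustrative sketch with names of our own choosing:

```python
def packbits_like_encode(data):
    """Runs of two or more identical bytes become the byte followed by a
    negative count: c, -(n - 1) stands for n copies of c. Single bytes
    pass through unchanged."""
    out = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1  # extend the current run
        n = j - i
        if n >= 2:
            out.extend([data[i], -(n - 1)])
        else:
            out.append(data[i])
        i = j
    return out

def packbits_like_decode(symbols):
    """Negative symbols can only be run counts, so decoding is unambiguous."""
    out = []
    i = 0
    while i < len(symbols):
        if i + 1 < len(symbols) and symbols[i + 1] < 0:
            out.extend([symbols[i]] * (-symbols[i + 1] + 1))
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return out
```

Running the encoder on the worked example from the text, [1,2,3,4,2,2,2,2,4], reproduces the compressed stream [1,2,3,4,2,-3,4].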
Original stream / Run-length encoded stream
FIGURE 1.2
Example of run-length encoding: each run of zeros is replaced by two characters in the compressed file: a zero to indicate that compression is occurring, followed by the number of zeros in the run.
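The zero-run scheme of Figure 1.2 can be sketched as follows; this is an illustrative Python sketch, with function names of our own choosing:

```python
def rle_zero_encode(data):
    """Each run of zeros becomes the pair (0, run_length);
    non-zero values pass through unchanged."""
    out = []
    i = 0
    while i < len(data):
        if data[i] == 0:
            j = i
            while j < len(data) and data[j] == 0:
                j += 1  # measure the run of zeros
            out.extend([0, j - i])
            i = j
        else:
            out.append(data[i])
            i += 1
    return out

def rle_zero_decode(data):
    """A zero is always a flag, so the following value is the run length."""
    out = []
    i = 0
    while i < len(data):
        if data[i] == 0:
            out.extend([0] * data[i + 1])
            i += 2
        else:
            out.append(data[i])
            i += 1
    return out
```

Note that an isolated zero encodes as the pair 0,1, so inputs with many single zeros grow rather than shrink, exactly as the text warns.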
An inconvenience with PackBits is that the nine bits must be reformatted into the standard eight-bit bytes used in computer storage and transmission. A useful modification to this scheme can be made when the input is restricted to ASCII text. Since the values 127 through 255 are not defined with any standardized meaning, they do not need to be stored or transmitted. This allows the eighth bit to indicate whether run-length encoding is in progress [3].
2. RELATED WORK IN THE FIELD OF LOSSLESS COMPRESSION AND DECOMPRESSION
A new lossless method of image compression and decompression using Huffman coding techniques shows that higher data redundancy helps to achieve more compression. A new compression and decompression technique based on Huffman coding and decoding for scan testing reduces test data volume and test application time. Other methods of image compression can also be employed, namely the JPEG method, delta encoding, entropy coding, etc. [1]
An Efficient Lossless ECG Compression Method Using Delta Coding and Optimal Selective Huffman Coding proposed an efficient lossless ECG compression scheme based on delta coding and optimal selective Huffman coding, and implemented the proposed algorithm on a development board. The delta coding is applied to reduce the dynamic range of the original ECG signals. The optimal selective Huffman coding is used to enhance the computational efficiency of canonical Huffman coding. Compared with the canonical Huffman coding
algorithm, the proposed algorithm gained much improvement [4].
Delta compression for fast wireless Internet download presents a theoretical framework for delta compression based on information theory and Markov models, including insights into the compression bounds. The authors also simulated and implemented a generic delta compression scheme and demonstrated its real-time and non-real-time performance by applying the scheme to binary data. Their delta compression scheme was shown to improve the real-time performance of wireless Internet download by up to 4 times [7].
Lossless Grey-scale Image Compression using Source Symbols Reduction and Huffman Coding proposed a technique that achieves a better compression ratio than Huffman coding alone. The source-symbols reduction can be applied to any source data that uses Huffman coding, enabling a better compression ratio. The experiment confirms that the proposed technique produces higher lossless compression than Huffman coding, making it suitable for compression of text, image, and video files [2].
Modified delta encoding and its applications to speech signal presents a modified delta encoding method and its applications to speech signals. In this method, the Hungarian algorithm is applied to find the minimum distance between arbitrary pairs of frames in a speech signal, and a minimum spanning tree is used to find an effective delta-encoding path. In simulation, the method is applied to the compression of sinusoidal coding. The results show that the data size after compression is 6% smaller than with a usual delta encoding whose path is not suitably permuted. In addition, the proposed method has the potential to be applied to data security for practical use, because the delta-encoding path, which can serve as the security key, is unordered and long enough [5].
3. PROPOSED ALGORITHM
Delta encoding and run-length encoding are two compression techniques. Run-length encoding (RLE) is a very simple form of data compression in which runs of data (that is, sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count, rather than as the original run. This is most useful on data that contains many such runs: for example, simple graphic images such as icons, line drawings, and animations. It is not useful with files that don't have many runs, as it could greatly increase the file size. By delta encoding the audio data, we can exploit the fact that audio data very often changes only a little between samples, so the generated delta values (the amount of change) need fewer bits to be expressed. However, you cannot store 16-bit data with, e.g., 8-bit delta values, since it is not assured that the change is always small enough to fit in 8 bits; in the worst case 16-bit delta values are required.
Here we propose an algorithm that uses both of these techniques to generate lossless image data. Delta encoding together with run-length encoding produces a lossless image, and the algorithm works on any type of image.
The proposed algorithm consists of the following basic mechanisms:
Delta and run-length encoding
Inverse of run-length and delta encoding
Parameters compared:
PSNR
MSE
RMSE
3.1.1 Delta and Run-Length Encoding
Here, first delta encoding and then run-length encoding are applied to the source image to obtain a compressed image. In the first step, the input image is converted into a delta-encoded image. This is the essence of delta coding: store the changes instead of the absolute values. Of course, the first sample needs to be stored in full resolution, and occasionally there might be a larger transition. To cope with this, a flag is normally used to indicate whether the next sample is a delta or an absolute value. Next, run-length encoding, which is again a lossless technique, is applied to the delta-encoded image to obtain a run-length encoded image. The result is the compressed image.
3.1.2 Inverse of Run-Length and Delta Encoding
Here the compressed image obtained from the previous step is converted into a decompressed, i.e. lossless, image by applying inverse run-length and delta decoding. This is how we get a lossless image with the help of two lossless techniques. This algorithm can be applied to any image, which is its biggest advantage.
3.2.1 Parameters Compared: PSNR and MSE
The PSNR block computes the peak signal-to-noise ratio, in decibels, between two images. This ratio is often used as a quality measurement between the original and a compressed image: the higher the PSNR, the better the quality of the compressed or reconstructed image.
The mean square error (MSE) and the peak signal-to-noise ratio (PSNR) are the two error metrics used to compare image compression quality. The MSE represents the cumulative squared error between the compressed and the original image, whereas the PSNR represents a measure of the peak error. The lower the value of MSE, the lower the error.
To compute the PSNR, the block first calculates the mean squared error:

MSE = (1 / (M * N)) * sum over m, n of [I1(m, n) - I2(m, n)]^2

where M and N are the number of rows and columns in the input images, respectively, and I1 and I2 are the two images being compared. The block then computes the PSNR as:

PSNR = 10 * log10(R^2 / MSE)

where R is the maximum fluctuation in the input image data type (for example, R = 255 for 8-bit unsigned images).
When both of these parameters were compared between the original image and the decompressed image in this algorithm, the PSNR was infinite and the MSE was 0.
RMSE
Root-mean-square error (RMSE) is a frequently used measure of the differences between values predicted by a model or an estimator and the values actually observed; it is a good measure of accuracy. The RMSE of an estimator with respect to the estimated parameter is defined as the square root of the mean squared error:

RMSE = sqrt(MSE)
When this parameter was compared between the original image and the decompressed image in this algorithm, the result was 0.
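The three metrics follow directly from their definitions above; a minimal Python sketch, assuming the images are given as equal-length flat lists of pixel values:

```python
import math

def mse(img1, img2):
    """Mean squared error: squared pixel differences averaged over M*N pixels."""
    return sum((a - b) ** 2 for a, b in zip(img1, img2)) / len(img1)

def psnr(img1, img2, r=255):
    """Peak signal-to-noise ratio in decibels; r is the maximum possible
    pixel value. Identical images give MSE = 0 and hence infinite PSNR."""
    e = mse(img1, img2)
    return float("inf") if e == 0 else 10 * math.log10(r ** 2 / e)

def rmse(img1, img2):
    """Root-mean-square error: the square root of the MSE."""
    return math.sqrt(mse(img1, img2))
```

A lossless round trip leaves the image unchanged, so these functions return MSE = 0, RMSE = 0, and PSNR = infinity, matching the values reported above.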
4. RESULTS AND CONCLUSION
Our terminal objective is to obtain a lossless image compression method. Delta encoding and run-length encoding are used collectively to obtain a new method that produces a lossless image. Here we have validated the results with the help of various parameters. The following figures show the input image and the resulting decompressed image.
Figure 4.1: Original Image (JPG)
Figure 4.2: Delta-encoded Image
Figure 4.3: Delta-decoded Image
Figure 4.4: Decoded Image
Let us now compare this method with a lossy image compression technique and see the difference. Drawing a comparison between this method and DCT (Discrete Cosine Transform), a lossy image compression technique, yields the following results:
Figure 4.5: DCT Image
Figure 4.6: IDCT Image
When the parameters (PSNR, MSE, RMSE) highlighted in the proposed algorithm were compared between the two decompressed images, there was a vast difference, which, however, is not visible to the naked eye. As evident from the results shown above, the performance of the proposed lossless method for image compression was far superior to that of the lossy DCT technique.
5. CONCLUSION AND FUTURE WORK
In this paper, we have presented a novel approach to lossless image compression using two techniques. The results presented are preliminary, and there is considerable scope for improving this algorithm. Based on the results presented above, we conclude that the image obtained with the proposed method gives better results than the DCT-based technique. This lossless method is used only for obtaining a lossless image: no pixels are altered, and the method is cost-effective. Its disadvantage is that it does not give a coloured image as a result. In this research, the proposed approach has been implemented on various image formats (GIF, JPG, BMP, PNG, etc.). The results in each scenario validate the proposed algorithm, and it gives optimized results.
5.1 Future Work
Further research may focus on developing new algorithms that combine both lossless and lossy techniques, so that higher compression ratios may be achieved and the end result may be a coloured image.
6. ACKNOWLEDGEMENT
I am extremely grateful to Professor Amanpreet Singh Brar, Head of the CSE Department, GNDEC, Ludhiana, for encouraging and helping me in carrying out the present work. Without his wise counsel and able guidance, it would have been extremely difficult to complete this herculean task in the given time frame.
7. REFERENCES
[1] Jagadish H. Pujar, Lohit M. Kadlaskar (2010). "A New Lossless Method of Image Compression and Decompression Using Huffman Coding Techniques." Journal of Theoretical and Applied Information Technology.
[2] C. Saravanan, R. Ponalagusamy (2010). "Lossless Grey-scale Image Compression using Source Symbols Reduction and Huffman Coding." International Journal of Image Processing (IJIP), Volume (3), Issue (5).
[3] Steven W. Smith, Ph.D. The Scientist and Engineer's Guide to Digital Signal Processing.
[4] G. C. Chang, Y. D. Lin (2010). "An Efficient Lossless ECG Compression Method Using Delta Coding and Optimal Selective Huffman Coding." IFMBE Proceedings 2010, Volume 31, Part 6, pp. 1327-1330. DOI: 10.1007/978-3-642-14515-5_338.
[5] Ma, Y.; Noda, H.; Ito, I.; Nishihara, A. (2010). "Modified Delta Encoding and Its Applications to Speech Signal." TENCON 2010 - 2010 IEEE Region 10 Conference, pp. 495-498.
[6] Gokmen, M.; Ersoy, I.; Jain, A. K. (1996). "Compression of Fingerprint Images Using Hybrid Image Model." Image Processing 1996, Proceedings, International Conference, Volume 3.
[7] Chunpeng Xiao; Bing, B.; Chang, G. K. (2005). "Delta Compression for Fast Wireless Internet Downloads." Global Telecommunications Conference, 2005, GLOBECOM '05, IEEE, Volume (1).