EE465: Introduction to Digital Image Processing 1
Data Compression: Advanced Topics
Huffman Coding Algorithm: motivation, procedure, examples
Unitary Transforms: definition, properties, applications
Recall: Variable Length Codes (VLC)
Assign a long codeword to an event with small probability.
Assign a short codeword to an event with large probability.
Self-information: I(p) = -log2(p)
It follows from the above formula that a small-probability event contains much information and is therefore worth many bits to represent. Conversely, if some event occurs frequently, it is probably a good idea to use as few bits as possible to represent it. This observation leads to the idea of varying the code lengths based on the events' probabilities.
Two Goals of VLC design
For an event x with probability p(x), the optimal code length is ⌈-log2 p(x)⌉, where ⌈x⌉ denotes the smallest integer not less than x (e.g., ⌈3.4⌉ = 4).
• achieve optimal code length (i.e., minimal redundancy)
• satisfy the uniquely decodable (prefix) condition
code redundancy: r = l - H(X) ≥ 0, where l is the average code length
Unless the probabilities of the events are all powers of 1/2, we often have r > 0.
“Big Question”
How can we simultaneously achieve the minimum-redundancy and uniquely-decodable conditions?
D. Huffman was the first to think about this problem and come up with a systematic solution.
Huffman Coding (Huffman, 1952)
Coding procedure for an N-symbol source:
Source reduction
1. List all probabilities in descending order
2. Merge the two symbols with the smallest probabilities into a new compound symbol
3. Repeat the above two steps for N-2 steps
Codeword assignment
1. Start from the smallest (fully reduced) source and work back to the original source
2. Each merging point corresponds to a node in the binary codeword tree
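The procedure above can be sketched in code. Below is a minimal Python illustration (a hypothetical helper, not the course's MATLAB demo) that performs source reduction with a heap and assigns codewords by prepending bits while unwinding the merges:

```python
import heapq
import itertools

def huffman_code(probs):
    """Huffman code for a dict {symbol: probability}.

    Source reduction: repeatedly merge the two least-probable entries.
    Codeword assignment: prepend '0'/'1' to every symbol inside each
    merged (compound) group, working back from the smallest source.
    """
    counter = itertools.count(len(probs))      # tie-breaker for equal probabilities
    heap = [(p, i, [s]) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    code = {s: "" for s in probs}
    while len(heap) > 1:
        p1, _, grp1 = heapq.heappop(heap)      # smallest probability
        p2, _, grp2 = heapq.heappop(heap)      # second smallest
        for s in grp1:
            code[s] = "0" + code[s]            # one branch of the node gets '0'
        for s in grp2:
            code[s] = "1" + code[s]            # the other branch gets '1'
        heapq.heappush(heap, (p1 + p2, next(counter), grp1 + grp2))
    return code

# The four-symbol source S, N, E, W used in the examples:
probs = {"S": 0.5, "N": 0.25, "E": 0.125, "W": 0.125}
code = huffman_code(probs)
avg_len = sum(probs[s] * len(code[s]) for s in probs)
print(code, avg_len)   # codeword lengths 1, 2, 3, 3; average length 1.75 bits
```

Since these probabilities are powers of 1/2, the average length equals the entropy exactly (r = 0).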
Example-I
Step 1: Source reduction

symbol x   p(x)
S          0.5     0.5     0.5
N          0.25    0.25    0.5
E          0.125   0.25
W          0.125

compound symbols: (EW) = 0.25, (NEW) = 0.5
Example-I (Con't)
Step 2: Codeword assignment

symbol x   p(x)    codeword
S          0.5     0
N          0.25    10
E          0.125   110
W          0.125   111

At each merging point, one branch is labeled "0" and the other "1": S vs. (NEW) at the root, N vs. (EW) at the next level, and E vs. W at the bottom.
Example-I (Con't)

An equally valid assignment swaps the labels at every node: S = 1, N = 01, E = 001, W = 000.

The codeword assignment is not unique. In fact, at each merging point (node), we can arbitrarily assign "0" and "1" to the two branches (the average code length is the same).
Example-II
Step 1: Source reduction

symbol x   p(x)
e          0.4    0.4    0.4    0.6
a          0.2    0.2    0.4
i          0.2    0.2    0.2
o          0.1    0.2
u          0.1

compound symbols: (ou) = 0.2, (iou) = 0.4, (aiou) = 0.6
Example-II (Con't)
Step 2: Codeword assignment

symbol x   p(x)   codeword
e          0.4    1
a          0.2    01
i          0.2    000
o          0.1    0010
u          0.1    0011
Example-II (Con't)

Binary codeword tree representation:
root: "1" → e, "0" → (aiou)
(aiou): "01" → a, "00" → (iou)
(iou): "000" → i, "001" → (ou)
(ou): "0010" → o, "0011" → u
Example-II (Con't)

symbol x   p(x)   codeword   length
e          0.4    1          1
a          0.2    01         2
i          0.2    000        3
o          0.1    0010       4
u          0.1    0011       4

H(X) = -Σ(i=1..5) p_i·log2(p_i) = 2.122 bps
l = Σ(i=1..5) p_i·l_i = 1×0.4 + 2×0.2 + 3×0.2 + 4×0.1 + 4×0.1 = 2.2 bps
r = l - H(X) = 0.078 bps

If we use fixed-length codes, we have to spend three bits per sample, which gives a code redundancy of 3 - 2.122 = 0.878 bps.
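The figures on this slide are easy to verify directly; here is a small Python check (assuming the Example-II probabilities and code lengths above):

```python
import math

p = {"e": 0.4, "a": 0.2, "i": 0.2, "o": 0.1, "u": 0.1}
length = {"e": 1, "a": 2, "i": 3, "o": 4, "u": 4}

H = -sum(pi * math.log2(pi) for pi in p.values())   # entropy in bits per sample
avg = sum(p[s] * length[s] for s in p)              # average Huffman code length
r = avg - H                                         # Huffman redundancy
r_fixed = 3 - H                                     # redundancy of a 3-bit fixed-length code
print(round(H, 3), round(avg, 3), round(r, 3), round(r_fixed, 3))   # 2.122 2.2 0.078 0.878
```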
Example-III
Step 1: Source reduction (table with compound symbols, as in the previous examples)
Step 2: Codeword assignment (table with compound symbols, as in the previous examples)
Summary of Huffman Coding Algorithm

• Achieves minimal redundancy subject to the constraint that the source symbols are coded one at a time
• Sorting symbols in descending order of probability is the key step in source reduction
• The codeword assignment is not unique: exchanging the "0" and "1" labels at any node of the binary codeword tree produces another solution that works equally well
• Only works for a source with a finite number of symbols (otherwise, it does not know where to start)
Data Compression: Advanced Topics
Huffman Coding Algorithm: motivation, procedure, examples
Unitary Transforms: definition, properties, applications
An Example of 1D Transform with Two Variables

y = A·x,  A = (1/√2) [1  1; 1 -1]

A is the transform matrix: it maps the point (x1, x2) = (1, 1) to (y1, y2) = (1.414, 0).
Decorrelating Property of Transform

x1 and x2 are highly correlated: p(x1, x2) ≠ p(x1)·p(x2)
y1 and y2 are less correlated: p(y1, y2) ≈ p(y1)·p(y2)

Please use the MATLAB demo program to help your understanding of why it is desirable to have less correlation for image compression.
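If MATLAB is not at hand, the effect is easy to reproduce in Python/NumPy (a sketch with synthetic data; the correlation model below is an assumption for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
# Model two highly correlated variables, e.g., neighboring pixel values:
x1 = rng.normal(0.0, 10.0, 10000)
x2 = x1 + rng.normal(0.0, 1.0, 10000)          # x2 is x1 plus small noise
X = np.vstack([x1, x2])

A = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # the 2x2 transform above
Y = A @ X

corr_x = np.corrcoef(X)[0, 1]   # close to 1: x1, x2 highly correlated
corr_y = np.corrcoef(Y)[0, 1]   # close to 0: y1, y2 nearly uncorrelated
print(corr_x, corr_y)
```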
Transform = Change of Coordinates

Intuitively speaking, a transform plays the role of facilitating source modeling: due to the decorrelating property of the transform, it is easier to model transform coefficients (Y) than pixel values (X).
An appropriate choice of transform (transform matrix A) depends on the source statistics P(X).
We will only consider the class of transforms corresponding to unitary matrices.
Unitary Matrix

Definition: a matrix A is called unitary if A^(-1) = A^(*T) (conjugate transpose).

Example: A = (1/√2) [1  1; 1 -1],  A^(-1) = (1/√2) [1  1; 1 -1] = A^T

For a real matrix A, it is unitary if A^(-1) = A^T.
Note: transpose and conjugate can be exchanged, i.e., A^(*T) = A^(T*).
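A quick numerical check of the definition (Python/NumPy sketch):

```python
import numpy as np

A = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
# Unitary means the inverse equals the conjugate transpose
is_unitary = np.allclose(np.linalg.inv(A), A.conj().T)
print(is_unitary)   # True
```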
Example 1: Discrete Fourier Transform (DFT)

DFT matrix: A_F = [a_kl], 1 ≤ k, l ≤ N, with

a_kl = (1/√N)·W_N^(kl),  W_N = e^(-j2π/N)

(W_N^(kl) lies on the unit circle of the complex plane.)

DFT: y_k = Σ(l=1..N) a_kl·x_l = (1/√N) Σ(l=1..N) x_l·W_N^(kl)
Discrete Fourier Transform (Con't)

Properties of the DFT matrix:
• symmetry: F^T = F  (Proof: a_kl = (1/√N)·e^(-j2πkl/N) = a_lk)
• unitary: F^(-1) = F^(*T) = F^*

Proof: denote P = F·F^(*T). Then

p_kl = Σ(n=1..N) a_kn·a*_ln = (1/N) Σ(n=1..N) e^(-j2π(k-l)n/N)

For k = l, each term equals 1, so p_kl = 1. For k ≠ l, with r = e^(-j2π(k-l)/N),

Σ(n=1..N) r^n = r·(1 - r^N)/(1 - r) = 0  (since r^N = 1 and r ≠ 1)

Hence P = F·F^(*T) = I (identity matrix).
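Both properties can be checked numerically (Python/NumPy sketch, N = 8):

```python
import numpy as np

N = 8
k, l = np.meshgrid(np.arange(1, N + 1), np.arange(1, N + 1), indexing="ij")
F = np.exp(-2j * np.pi * k * l / N) / np.sqrt(N)   # a_kl = W_N^(kl) / sqrt(N)

symmetric = np.allclose(F, F.T)                    # a_kl = a_lk
unitary = np.allclose(F @ F.conj().T, np.eye(N))   # F F^(*T) = I
print(symmetric, unitary)   # True True
```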
Example 2: Discrete Cosine Transform (DCT)

DCT matrix: A_C = [a_kl], 1 ≤ k, l ≤ N, with

a_kl = √(1/N),  k = 1, 1 ≤ l ≤ N
a_kl = √(2/N)·cos((2l-1)(k-1)π/(2N)),  2 ≤ k ≤ N, 1 ≤ l ≤ N

C is real, and C^(-1) = C^(*T) = C^T. You can check it using the MATLAB demo.
DCT Examples

N=2:  C = (1/√2) [1  1; 1 -1]

N=4:
C = [ 0.5000  0.5000  0.5000  0.5000
      0.6533  0.2706 -0.2706 -0.6533
      0.5000 -0.5000 -0.5000  0.5000
      0.2706 -0.6533  0.6533 -0.2706 ]

Here is a piece of MATLAB code to generate the DCT matrix by yourself:

% generate DCT matrix with size of N-by-N
function C = DCT_matrix(N)
for i = 1:N
    x = zeros(N,1);
    x(i) = 1;
    y = dct(x);
    C(:,i) = y;
end
end
Example 3: Hadamard Transform

A_1 = (1/√2) [1  1; 1 -1],  A_n = (1/√2) [A_(n-1)  A_(n-1); A_(n-1)  -A_(n-1)]

H is real and symmetric, so H^(-1) = H^(*T) = H^T = H.

Here is a piece of MATLAB code to generate the Hadamard matrix by yourself:

% generate Hadamard matrix of size N = 2^n
function H = hadamard(n)
H = [1 1; 1 -1]/sqrt(2);
i = 1;
while i < n
    H = [H H; H -H]/sqrt(2);
    i = i + 1;
end
1D Unitary Transform

When the transform matrix A is unitary, the 1D transform y = A·x is called a unitary transform.

Forward transform:  y = A·x,  y_k = Σ(l=1..N) a_kl·x_l
Inverse transform:  x = A^(-1)·y = A^(*T)·y,  x_k = Σ(l=1..N) a*_lk·y_l
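A forward/inverse round trip illustrates the definition (Python/NumPy sketch using the DCT matrix from the previous slides, at N = 4):

```python
import numpy as np

N = 4
k = np.arange(N).reshape(-1, 1)     # row index k = 0..N-1 (the slide uses 1..N)
l = np.arange(N).reshape(1, -1)
A = np.sqrt(2.0 / N) * np.cos((2 * l + 1) * k * np.pi / (2 * N))
A[0, :] = np.sqrt(1.0 / N)          # first row: constant (DC) basis vector

x = np.array([1.0, 2.0, 3.0, 4.0])
y = A @ x                  # forward transform y = A x
x_rec = A.conj().T @ y     # inverse transform x = A^(*T) y
print(np.allclose(x, x_rec))   # True
```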
Basis Vectors

The forward transform y = A·x can be written as

y = Σ(k=1..N) x_k·b_k^for,  b_k^for = [a_1k, ..., a_Nk]^T

where the b_k^for are the basis vectors corresponding to the forward transform (column vectors of the transform matrix A).

The inverse transform x = A^(*T)·y can be written as

x = Σ(k=1..N) y_k·b_k^inv,  b_k^inv = [a*_k1, ..., a*_kN]^T

where the b_k^inv are the basis vectors corresponding to the inverse transform (column vectors of A^(*T)).
From 1D to 2D

Do N 1D transforms in parallel:  Y_(N×N) = A_(N×N)·X_(N×N)

y_i = A·x_i, 1 ≤ i ≤ N, where x_i = [x_1i, ..., x_Ni]^T and y_i = [y_1i, ..., y_Ni]^T are the columns of

X = [x_1 | ... | x_i | ... | x_N],  Y = [y_1 | ... | y_i | ... | y_N]
Definition of 2D Transform

2D forward transform:  Y_(N×N) = A_(N×N)·X_(N×N)·A_(N×N)^T

A·X performs a 1D column transform; multiplying the result by A^T performs a 1D row transform.
2D Transform = Two Sequential 1D Transforms

Y = A·X·A^T

Left matrix multiplication first: Y_1 = A·X (column transform), then Y = Y_1·A^T (row transform).
Right matrix multiplication first: Y_2 = X·A^T (row transform), then Y = A·Y_2 (column transform).

Conclusion: a 2D separable transform can be decomposed into two sequential 1D transforms, and the ordering of the 1D transforms does not matter.
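The decomposition is straightforward to verify (Python/NumPy sketch with the 2×2 Hadamard matrix):

```python
import numpy as np

A = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # unitary 2x2 matrix
X = np.array([[1.0, 2.0], [3.0, 4.0]])

Y = A @ X @ A.T        # 2D transform in one step
Y1 = (A @ X) @ A.T     # columns first, then rows
Y2 = A @ (X @ A.T)     # rows first, then columns
print(np.allclose(Y, Y1), np.allclose(Y, Y2))   # True True
```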
Basis Images

Write X as a weighted sum of delta images: X = Σ(i=1..8) Σ(j=1..8) x_ij·δ_ij, where

δ_ij(k, l) = 1 if (k, l) = (i, j), and 0 otherwise

Then Y = A·X·A^T = Σ(i=1..8) Σ(j=1..8) x_ij·B_ij, where

B_ij = A·δ_ij·A^T = b_i·b_j^T,  b_i = [a_1i, ..., a_Ni]^T

The basis image B_ij can be viewed as the response of the linear system (the 2D transform) to a delta-function input δ_ij.
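The decomposition into basis images can be checked numerically (Python/NumPy sketch at size 4×4 with the Hadamard matrix, instead of the slide's 8×8):

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
A = np.kron(H2, H2)       # 4x4 Hadamard transform matrix (real, unitary)
N = 4

# B_ij = A delta_ij A^T = outer product of the i-th and j-th columns of A
B = [[np.outer(A[:, i], A[:, j]) for j in range(N)] for i in range(N)]

X = np.arange(16.0).reshape(N, N)
Y = A @ X @ A.T
Y_sum = sum(X[i, j] * B[i][j] for i in range(N) for j in range(N))
ok = np.allclose(Y, Y_sum)   # Y is the x_ij-weighted sum of basis images
print(ok)   # True
```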
Example 1: 8-by-8 Hadamard Transform

A = (1/√8)·H_8, where H_8 is the 8×8 matrix of ±1 entries generated by the Hadamard recursion; its rows/columns are the basis vectors b_1, ..., b_8.

The 64 basis images B_ij (1 ≤ i, j ≤ 8) can be arranged in an 8×8 array indexed by (i, j), with the DC basis image (all entries equal) at the corner i = j = 1.

In the MATLAB demo, you can generate these 64 basis images and display them.
Example 2: 8-by-8 DCT

A = [a_kl], 1 ≤ k, l ≤ 8, with

a_kl = √(1/8),  k = 1, 1 ≤ l ≤ 8
a_kl = (1/2)·cos((2l-1)(k-1)π/16),  2 ≤ k ≤ 8, 1 ≤ l ≤ 8

Again the 64 basis images can be arranged in an 8×8 array indexed by (i, j), with the DC basis image at the corner i = j = 1.

In the MATLAB demo, you can generate these 64 basis images and display them.
2D Unitary Transform

Suppose A is a unitary matrix, i.e., A^(-1) = A^(*T).

Forward transform:  Y_(N×N) = A·X·A^T
Inverse transform:  X_(N×N) = A^(*T)·Y·A^*

Proof (using (AB)^T = B^T·A^T):

A^(*T)·Y·A^* = A^(*T)·(A·X·A^T)·A^* = (A^(*T)·A)·X·(A^T·A^*) = I·X·I = X

since A^(*T)·A = I and A^T·A^* = (A^(*T)·A)^* = I.
Properties of Unitary Transforms

• Energy compaction: only a small fraction of the transform coefficients have large magnitude. This property is related to the decorrelating capability of unitary transforms.
• Energy conservation: a unitary transform preserves the 2-norm of input vectors. This property essentially comes from the fact that rotating the coordinates does not affect Euclidean distance.
Energy Compaction Property

How does a unitary transform compact the energy?
• Assumption: the signal is correlated; no energy compaction can be done for white noise, even with a unitary transform.
• Advanced mathematical analysis can show that the DCT basis is an approximation of the eigenvectors of an AR(1) process (a good model for correlated signals such as images).

A frequency-domain interpretation:
• Images are a mixture of smooth regions and edges.
• Most transform coefficients are small, except those around DC and those corresponding to edges (spatially high-frequency components).
Energy Compaction Example in 1D

x = [100, 98, 98, 100]^T

A = (1/2) [1  1  1  1; 1 -1  1 -1; 1  1 -1 -1; 1 -1 -1  1]  (4×4 Hadamard matrix)

y = A·x = [198, 0, 0, 2]^T

A coefficient is called significant if its magnitude is above a pre-selected threshold th. Here only the first coefficient (198) is significant; the other three are insignificant (th = 64).
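The same computation in Python/NumPy (a sketch; the Hadamard row ordering used here is the Sylvester construction):

```python
import numpy as np

# 4x4 Hadamard matrix (Sylvester ordering of the rows)
A = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]]) / 2.0
x = np.array([100.0, 98.0, 98.0, 100.0])
y = A @ x
print(y)                        # [198.   0.   0.   2.]
significant = np.abs(y) > 64    # threshold th = 64
print(int(significant.sum()))   # 1: only the first coefficient is significant
```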
Energy Compaction Example in 2D

X = [ 94  97  99 100
     100  96  97  98
      94  94 100 100
      99  98 100 100 ]

With A the 4×4 Hadamard matrix, Y = A·X·A^T has DC coefficient 391.5, and every other coefficient has magnitude below 6.

A coefficient is called significant if its magnitude is above a pre-selected threshold th. Only the DC coefficient is significant; the remaining 15 coefficients are insignificant (th = 64).
Image Example

Original cameraman image X and its DCT coefficients Y (2451 significant coefficients, th = 64). The low-frequency coefficients are concentrated at the top-left corner and the high-frequency ones toward the bottom-right. Notice the excellent energy compaction property of DCT.
Counter Example

Original noise image X and its DCT coefficients Y. No energy compaction can be achieved for white noise.
Energy Conservation Property in 1D

y = A·x with A unitary  ⇒  ||y||² = ||x||²

Proof:

||y||² = Σ(i=1..N) |y_i|² = y^(*T)·y = (A·x)^(*T)·(A·x) = x^(*T)·A^(*T)·A·x = x^(*T)·x = Σ(i=1..N) |x_i|² = ||x||²
Numerical Example

x = [3, 4]^T,  A = (1/√2) [1  1; 1 -1]

y = A·x = (1/√2) [7, -1]^T

Check: ||x||² = 3² + 4² = 25,  ||y||² = (7² + 1²)/2 = 25
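The check is immediate in Python/NumPy:

```python
import numpy as np

A = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
x = np.array([3.0, 4.0])
y = A @ x                      # (7/sqrt(2), -1/sqrt(2))
energy_x = float(np.sum(x**2))
energy_y = float(np.sum(y**2))
print(energy_x, energy_y)      # 25.0 25.0 (up to floating-point rounding)
```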
Implication of Energy Conservation

Transform coding chain:  x = [x_1, ..., x_N]^T → T → y = [y_1, ..., y_N]^T → Q → ŷ → T^(-1) → x̂

y = A·x and ŷ = A·x̂ with A unitary. By the linearity of the transform,

ŷ - y = A·(x̂ - x)  ⇒  ||ŷ - y||² = ||x̂ - x||²
Energy Conservation Property in 2D

2-norm of a matrix X: ||X||² = Σ(i=1..N) Σ(j=1..N) |x_ij|² = Σ(i=1..N) ||x_i||², where the x_i are the columns of X.

Step 1: Y = A·X with A unitary  ⇒  ||Y||² = ||X||²

Proof: Y = A·X means y_i = A·x_i, 1 ≤ i ≤ N. Using the energy conservation property in 1D, we have ||y_i||² = ||x_i||², and therefore

||Y||² = Σ(i=1..N) ||y_i||² = Σ(i=1..N) ||x_i||² = ||X||²
Energy Conservation Property in 2D (Con't)

Step 2: Y = A·X·A^T with A unitary  ⇒  ||Y||² = ||X||²

Hint: the 2D transform can be decomposed into two sequential 1D transforms, e.g., Y_1 = A·X (column transform) followed by Y = Y_1·A^T = (A·Y_1^T)^T (row transform). Use the result obtained in Step 1 and note that ||X^T||² = ||X||².
Numerical Example

A = (1/√2) [1  1; 1 -1],  X = [1  2; 3  4]

Y = A·X·A^T = [5  -1; -2  0]

Check: ||X||² = 1² + 2² + 3² + 4² = 30,  ||Y||² = 5² + 1² + 2² + 0² = 30
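The same check for the 2D case in Python/NumPy:

```python
import numpy as np

A = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[1.0, 2.0], [3.0, 4.0]])
Y = A @ X @ A.T
energy_X = float((X**2).sum())
energy_Y = float((Y**2).sum())
print(np.round(Y, 6))           # [[ 5. -1.] [-2.  0.]]
print(energy_X, energy_Y)       # 30.0 30.0 (up to floating-point rounding)
```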
Implication of Energy Conservation

Y = A·X·A^T,  X̂ = A^(*T)·Ŷ·A^*

By the linearity of the transform, Ŷ - Y = A·(X̂ - X)·A^T, so

||Ŷ - Y||² = ||X̂ - X||²

Similar to the 1D case, the quantization noise in the transform domain has the same energy as that in the spatial domain.
Why Energy Conservation?

Encoder: image X → forward transform → Y → quantization (f) → Ŷ → probability estimation + entropy coding → binary bit stream
Decoder: binary bit stream → entropy decoding → Ŷ → inverse transform → reconstructed image X̂

The quantizer and entropy coder/decoder in between act as a "super-channel" from Y to Ŷ. Since ||Ŷ - Y||² = ||X̂ - X||², controlling the quantization error in the transform domain directly controls the reconstruction error in the spatial domain.