[CSCI 6990-DC] 09: Scalar Quantization
people.cs.nctu.edu.tw/~cmliu/Courses/Compression/chap9.pdf
Quantization
C.M. Liu
Perceptual Signal Processing Lab
College of Computer Science, National Chiao-Tung University
Office: EC538 (03)5731877
http://www.csie.nctu.edu.tw/~cmliu/Courses/Compression/
Contents
Quantization Problem
Uniform Quantization
Adaptive Quantization
Nonuniform Quantization
Entropy-Coded Quantization
Vector Quantization
Rate-Distortion Function
Quantization
Definition:
The process of representing a large—possibly infinite—set of values with a smaller set.
Example:
Source: real numbers in the range [-10.0, 10.0]
Quantization: Q(x) = ⌊x + 0.5⌋ maps [-10.0, 10.0] → {-10, -9, …, -1, 0, 1, 2, …, 9, 10}
Scalar vs. vector quantization
Scalar: applied to scalars
Vector: applied to vectors
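The floor-based rule above can be sketched in a few lines of Python (the function name is mine, not from the slides):

```python
import math

def quantize_midtread(x):
    """The slide's rule Q(x) = floor(x + 0.5): round a real value in
    [-10.0, 10.0] to the nearest integer (ties go upward)."""
    return math.floor(x + 0.5)

print(quantize_midtread(3.2), quantize_midtread(-0.7), quantize_midtread(9.5))
```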
The Quantization Process
Two aspects
Encoder mapping
Map a range of values to a codeword
If the source is analog → A/D converter
Knowledge of the source can help pick more appropriate ranges
Decoder mapping
Map the codeword to a value in the range
If the output is analog → D/A converter
Knowledge of the source distribution can help pick better approximations
Quantizer = encoder + decoder
Quantization Example
Quantizer Input-Output Map
Quantization Problem Formulation
Input:
X – random variable
f_X(x) – probability density function (pdf)
Output:
{b_i}, i = 0..M – decision boundaries
{y_i}, i = 1..M – reconstruction levels
Discrete processes are often approximated by continuous distributions
E.g.: Laplacian model of pixel differences
If the source is unbounded, then the first/last decision boundaries are ±∞
Quantization Error
Mean squared quantization error
$$
\sigma_q^2 = \int_{-\infty}^{\infty} \left(x - Q(x)\right)^2 f_X(x)\,dx
           = \sum_{i=1}^{M} \int_{b_{i-1}}^{b_i} (x - y_i)^2 f_X(x)\,dx
$$

where

$$
Q(x) = y_i \quad \text{iff} \quad b_{i-1} < x \le b_i
$$
Quantization error is a.k.a. quantization noise or quantizer distortion.
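To make the MSQE definition concrete, here is a small numerical check (helper names are illustrative, not from the slides): for a 4-level uniform quantizer on a uniform source over [-1, 1], the sum of per-interval integrals should come out to Δ²/12 with Δ = 0.5.

```python
def msqe(boundaries, levels, pdf, n=20_000):
    """sigma_q^2 = sum_i integral_{b_{i-1}}^{b_i} (x - y_i)^2 f_X(x) dx,
    evaluated with a midpoint Riemann sum over each decision interval."""
    total = 0.0
    for (lo, hi), y in zip(zip(boundaries, boundaries[1:]), levels):
        dx = (hi - lo) / n
        total += sum((lo + (k + 0.5) * dx - y) ** 2 * pdf(lo + (k + 0.5) * dx)
                     for k in range(n)) * dx
    return total

# 4-level uniform quantizer for a uniform source on [-1, 1]: Delta = 0.5
b = [-1.0, -0.5, 0.0, 0.5, 1.0]
y = [-0.75, -0.25, 0.25, 0.75]
print(msqe(b, y, lambda x: 0.5))  # approaches Delta^2/12 = 0.5**2/12
```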
Quantization Problem Formulation (2)
Bit rate w/ fixed-length codewords: R = ⌈log₂ M⌉
E.g.: M = 8 → R = 3
Quantizer design problem
Given: input pdf f_X(x) and M
Find: decision boundaries {b_i} and reconstruction levels {y_i}
Such that: MSQE is minimized
Quantization Problem Formulation (3)
Bit rate w/ variable-length codewords
R depends on the boundary selection
Example:
$$
R = \sum_{i=1}^{M} l_i\, P(y_i), \qquad
P(y_i) = \int_{b_{i-1}}^{b_i} f_X(x)\,dx
$$

so

$$
R = \sum_{i=1}^{M} l_i \int_{b_{i-1}}^{b_i} f_X(x)\,dx
$$
Quantization Problem Formulation (4)
Rate-optimization formulation
Given: distortion constraint σ_q² ≤ D*
Find: {b_i}, {y_i}, binary codes
Such that: R is minimized

Distortion-optimization formulation
Given: rate constraint R ≤ R*
Find: {b_i}, {y_i}, binary codes
Such that: σ_q² is minimized
Uniform Quantizer
All intervals are the same size
i.e., boundaries are evenly spaced (step size Δ)
The outer intervals may be an exception
Reconstruction
Usually the midpoint of the interval is selected
Midrise quantizer: zero is not an output level
Midtread quantizer: zero is an output level
Midrise vs. Midtread Quantizer
Uniform Quantization of Uniform Source
Input: uniform on [-X_max, X_max]
Output: M-level uniform quantizer
$$
\Delta = \frac{2 X_{\max}}{M}
$$

$$
\sigma_q^2 = 2 \sum_{i=1}^{M/2} \int_{(i-1)\Delta}^{i\Delta}
\left(x - \frac{2i-1}{2}\Delta\right)^2 \frac{1}{2X_{\max}}\,dx
= \frac{\Delta^2}{12}
$$
Alternative MSQE Derivation
Consider the quantization error instead: q = x − Q(x)
q ∈ [-Δ/2, Δ/2]
$$
\sigma_q^2 = \frac{1}{\Delta} \int_{-\Delta/2}^{\Delta/2} q^2\,dq = \frac{\Delta^2}{12}
$$
$$
\mathrm{SNR(dB)} = 10\log_{10}\!\left(\frac{\sigma_s^2}{\sigma_q^2}\right)
= 10\log_{10}\!\left(\frac{(2X_{\max})^2/12}{\Delta^2/12}\right)
= 10\log_{10}\!\left(\frac{(2X_{\max})^2/12}{\left(\frac{2X_{\max}}{M}\right)^2\!/12}\right)
= 10\log_{10} M^2 = 20\log_{10} 2^n = 6.02\,n\ \mathrm{dB}
$$
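A quick simulation (a sketch with my own function names) confirms the 6.02n dB rule for a uniform source:

```python
import math, random

def uniform_quantize(x, xmax, n_bits):
    """Midrise uniform quantizer with M = 2**n_bits levels on [-xmax, xmax]."""
    M = 2 ** n_bits
    delta = 2 * xmax / M
    i = min(int((x + xmax) / delta), M - 1)   # interval index, clamped at x == xmax
    return -xmax + (i + 0.5) * delta          # midpoint reconstruction

random.seed(0)
xmax, n_bits = 1.0, 8
xs = [random.uniform(-xmax, xmax) for _ in range(200_000)]
noise = sum((x - uniform_quantize(x, xmax, n_bits)) ** 2 for x in xs) / len(xs)
signal = sum(x * x for x in xs) / len(xs)
snr_db = 10 * math.log10(signal / noise)
print(round(snr_db, 2))  # close to 6.02 * 8 = 48.16 dB
```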
Examples (8 → 1, 2, 3 bits/pixel)
Darkening, contouring & dithering
Uniform Quantization of Nonuniform Sources
Example nonuniform source: x ∈ [-100, 100], P(x ∈ [-1, 1]) = 0.95
Problem: design an 8-level quantizer
The naive approach (Δ = 25) leads to 95% of sample values being represented by just two numbers: -12.5 and 12.5
Max quantization error (QE) = 12.5; min QE for those samples = 11.5 (!)
Consider an alternative: step size Δ = 0.3
Max QE = 98.5, however 95% of the time QE < 0.15
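The point of this example is that step size trades worst-case error against typical error. A small simulation of a mixture source like the one above (95% of mass in [-1, 1]) illustrates it; the midrise convention and all names are mine, so exact values may differ slightly from the slide's:

```python
import random

def quantize_midrise(x, delta, M=8):
    """M-level midrise quantizer with step delta; inputs beyond the
    outermost interval are clamped to the outer levels."""
    half = M // 2
    i = int(x // delta)                 # floor -> index of the interval holding x
    i = max(-half, min(half - 1, i))    # clamp to the M covered intervals
    return (i + 0.5) * delta            # midpoint reconstruction

random.seed(1)
# 95% of the mass in [-1, 1], 5% spread over [-100, 100]
xs = [random.uniform(-1, 1) if random.random() < 0.95 else random.uniform(-100, 100)
      for _ in range(100_000)]

for delta in (25.0, 0.3):
    errs = [abs(x - quantize_midrise(x, delta)) for x in xs]
    small = sum(e < 0.15 for e in errs) / len(errs)
    print(f"delta={delta}: max|q| = {max(errs):.1f}, P(|q| < 0.15) = {small:.2f}")
```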
Optimizing MSQE
Numerically solvable for a specific pdf
Example Optimum Step Sizes
QE for 3-bit Midrise Quantizer
Overload/Granular Regions
The loading factor: f_l = (max granular value) / (std dev σ); f_l = 4 → "4σ loading"
The step selection is a tradeoff between overload noise and granular noise.
Variance Mismatch Effects
Variance Mismatch Effects (2)
Distribution Mismatch Effects
8-level quantizers, SNR
Adaptive Quantization
Idea: instead of a static scheme, adapt to the actual data:
Mean, variance, pdf
Forward adaptive (off-line)
Divide the source into blocks
Analyze block statistics
Set the quantization scheme
Requires a side channel
Backward adaptive (on-line)
Adaptation based on the quantizer output
No side channel necessary
Forward Adaptive Quantization (FAQ)
Choosing the block size
Too large:
Not enough resolution
Increased latency
Too small:
More side channel information
Assuming a mean of zero, the variance estimate is:
$$
\hat{\sigma}_q^2 = \frac{1}{N} \sum_{i=0}^{N-1} x_{n+i}^2
$$
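A minimal forward-adaptive sketch: estimate each block's variance with the formula above, then scale a fixed midrise quantizer to the block. The 4σ loading factor and all names are assumptions, not from the slides.

```python
import math, random

def block_variance(block):
    """Zero-mean variance estimate from the slide: (1/N) * sum x_i^2."""
    return sum(x * x for x in block) / len(block)

def faq_reconstruct(samples, block_size=128, n_bits=3, loading=4.0):
    """Quantize each block with a midrise quantizer scaled to cover
    +/- loading * sigma_hat of that block; sigma_hat would be sent as
    side information.  The 4-sigma loading factor is an assumption."""
    M = 2 ** n_bits
    recon = []
    for start in range(0, len(samples), block_size):
        block = samples[start:start + block_size]
        xmax = loading * math.sqrt(block_variance(block)) or 1e-9
        delta = 2 * xmax / M
        for x in block:
            i = max(0, min(M - 1, int((x + xmax) // delta)))
            recon.append(-xmax + (i + 0.5) * delta)
    return recon

# toy signal whose variance changes sharply between blocks
random.seed(2)
sig = ([random.gauss(0, 0.05) for _ in range(128)]
       + [random.gauss(0, 2.0) for _ in range(128)])
rec = faq_reconstruct(sig)
mse = sum((a - b) ** 2 for a, b in zip(sig, rec)) / len(sig)
print(mse)
```

Because the quiet block gets its own small step size, the adaptive scheme beats a single quantizer sized to the overall variance.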
Speech Quantization Example
16-bit speech → 3-bit fixed quantization
Speech Quantization Example (2)
16-bit speech → 3-bit FAQ
Block = 128 samples
8-bit variance quantization
FAQ Refinement
So far we assumed uniform pdf
Refinement
Assume a uniform pdf, but record the min/max values for each block
Example:
Sena image, 8×8 blocks, 3-bit quantization
Overhead = 16/(8×8) = 0.25 bits/pixel
FAQ Refinement Example
Original: 8 bits/pixel Quantized: 3.25 bits/pixel
Backward Adaptive Quantization (BAQ)
Observation
Only the encoder sees the input
Adaptation can only be based on the quantized output
Problem
How do we deduce mismatch information from the output only?
It is possible, if we know the pdf … and we are very patient
Jayant Quantizer
Idea
If the input falls in the outer levels: expand the step size
If the input falls in the inner levels: contract the step size
The product of expansions & contractions should be 1
Multipliers: M_k
If input s_{n-1} falls in the k-th interval, then the step is multiplied by M_k
Inner M_k < 1, outer M_k > 1
$$
\Delta_n = M_{l(n-1)}\, \Delta_{n-1}
$$
3-bit Jayant Quantizer Output Levels
Jayant Example
M0 = M4 = 0.8, M1 = M5 = 0.9
M2 = M6 = 1.0, M3 = M7 = 1.2, Δ0 = 0.5
Input: 0.1 -0.2 0.2 0.1 -0.3 0.1 0.2 0.5 0.9 1.5
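The adaptation can be traced in code. This is a sketch of a Jayant quantizer using the slide's multipliers and Δ₀ = 0.5; the midpoint-reconstruction convention is my assumption, so the exact output values may differ from the textbook's worked table, but the step-size behavior (shrink on inner levels, grow on outer) is the point:

```python
def jayant(xs, multipliers, delta0, n_bits=3):
    """One-word-memory (Jayant) adaptive midrise quantizer: after each
    sample, multiply the step by the multiplier of the interval the
    sample fell in (inner M_k < 1 shrink it, outer M_k > 1 expand it)."""
    half = 2 ** (n_bits - 1)                 # magnitude intervals per side
    delta, recon, steps = delta0, [], []
    for x in xs:
        k = min(half - 1, int(abs(x) / delta))        # magnitude interval index
        recon.append((k + 0.5) * delta * (1 if x >= 0 else -1))
        steps.append(delta)                           # step used for this sample
        delta *= multipliers[k]                       # decoder can do the same
    return recon, steps

M = [0.8, 0.9, 1.0, 1.2]   # slide: M0..M3 (M4..M7 mirror them for negative inputs)
xs = [0.1, -0.2, 0.2, 0.1, -0.3, 0.1, 0.2, 0.5, 0.9, 1.5]
recon, steps = jayant(xs, M, delta0=0.5)
print([round(s, 4) for s in steps])
```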
Picking Jayant Multipliers
Impose Δmin / Δmax to prevent underflow/overflow.
Adaptation speed is affected by γ.
If n_k is the number of times the input falls in the k-th interval out of N inputs, keeping the step size stable requires the product of all applied multipliers to be 1:

$$
\prod_{k=0}^{M} M_k^{n_k} = 1
$$

Taking the N-th root and letting P_k = n_k / N:

$$
\prod_{k=0}^{M} M_k^{P_k} = 1
$$

Let M_k = γ^{l_k}, where the l_k are integers; then

$$
\gamma^{\sum_{k=0}^{M} l_k P_k} = 1
\quad\Rightarrow\quad
\sum_{k=0}^{M} l_k P_k = 0
$$
Jayant Example
“Ringing”
Jayant Performance
Expands more rapidly than it contracts, to avoid overload errors.
Robust to changing input statistics.
Non-uniform Quantization
Idea: Pick the boundaries such that error is minimized
i.e., smaller/bigger step for smaller/bigger values
e.g.:
Non-uniform Quantization-- pdf-optimized Quantization
Problem: Given fX, minimize MSQE:
$$
\sigma_q^2 = \sum_{i=1}^{M} \int_{b_{i-1}}^{b_i} (x - y_i)^2 f_X(x)\,dx
$$
Set derivative w.r.t. yj to zero and solve for yj:
$$
y_j = \frac{\int_{b_{j-1}}^{b_j} x\, f_X(x)\,dx}{\int_{b_{j-1}}^{b_j} f_X(x)\,dx}
$$

Set derivative w.r.t. b_j to zero and solve for b_j:

$$
b_j = \frac{y_{j+1} + y_j}{2}
$$
Non-uniform Quantization-- Lloyd-Max Algorithm
Observation: circular dependency between b_j and y_j
Lloyd/Max/Lukaszewicz/Steinhaus approach: solve the two equations iteratively until an acceptable solution is found
Example:
Non-uniform Quantization-- Lloyd-Max Algorithm (2)
Boundaries: {b_1, b_2, …, b_{M/2-1}}, with b_0 = 0 and b_{M/2} = the maximum input
Reconstruction levels: {y_1, y_2, …, y_{M/2}}
$$
y_1 = \frac{\int_{b_0}^{b_1} x\, f_X(x)\,dx}{\int_{b_0}^{b_1} f_X(x)\,dx}
$$

One equation, two unknowns: b_1 and y_1
Pick a value for b_1 (e.g. b_1 = 1), solve for y_1, and continue:

$$
y_2 = 2 b_1 - y_1, \qquad
y_2 = \frac{\int_{b_1}^{b_2} x\, f_X(x)\,dx}{\int_{b_1}^{b_2} f_X(x)\,dx}
\ \Rightarrow\ \text{solve for } b_2
$$

… and so on until all {b_i} and {y_i} are found
Non-uniform Quantization-- Lloyd-Max Algorithm (3)
Terminating condition:
$$
\hat{y}_{M/2} = 2\, b_{M/2-1} - y_{M/2-1}, \qquad
y_{M/2} = \frac{\int_{b_{M/2-1}}^{b_{M/2}} x\, f_X(x)\,dx}{\int_{b_{M/2-1}}^{b_{M/2}} f_X(x)\,dx}
$$

Stop if

$$
\left| y_{M/2} - \hat{y}_{M/2} \right| \le \varepsilon
$$

Else: pick a different b_1 and repeat
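The same two conditions can also be iterated on sample data instead of a pdf. This sample-based variant (essentially 1-D k-means) is a sketch of Lloyd's algorithm, not the slides' b₁-sweep procedure; all names are mine:

```python
import random

def lloyd_levels(samples, M, iters=50):
    """Sample-based Lloyd iteration: boundaries at midpoints of adjacent
    levels (the b_j condition), levels at the centroid of each cell
    (the y_j condition), repeated until the levels settle."""
    levels = sorted(random.sample(samples, M))
    for _ in range(iters):
        bounds = [(a + b) / 2 for a, b in zip(levels, levels[1:])]
        cells = [[] for _ in range(M)]
        for x in samples:
            cells[sum(x > b for b in bounds)].append(x)
        levels = sorted(sum(c) / len(c) if c else lv
                        for c, lv in zip(cells, levels))
    return levels

random.seed(3)
data = [random.gauss(0, 1) for _ in range(20_000)]
levels = lloyd_levels(data, 4)
print([round(lv, 2) for lv in levels])  # near the +/-0.45, +/-1.51 optimum for a unit Gaussian
```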
Non-uniform Quantization— Example: pdf-Optimized Quantizers
Significant improvement over the uniform quantizer.
Lloyd-Max Quantizer Properties
1. Mean of output = mean of input
2. Variance of output ≤ variance of input
3. MSQE:
$$
\sigma_q^2 = \sigma_x^2 - \sum_{j=1}^{M} y_j^2\, P\left[\, b_{j-1} < X \le b_j \,\right]
$$

4. If N is a random variable representing the quantization error:

$$
E[XN] = -\sigma_q^2
$$

5. The quantizer output and the QE are orthogonal (uncorrelated):

$$
E\left[\, Q(X)\, N \mid b_0, b_1, \ldots, b_M \,\right] = 0
$$
Mismatch Effects
4-bit Laplacian pdf-optimized quantizer
Companded Quantization (CQ)
Compressor → Uniform quantizer → Expander
CQ Example: Compressor
CQ Example: Uniform Quantizer
CQ Example: Expander
CQ Example: Equivalent Non-uniform Quantizer
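As a concrete compander, here is a μ-law sketch. μ-law is the classic telephony compressor; the slides do not specify which compressor their example uses, so this is an illustrative choice, with my own names:

```python
import math

MU = 255.0   # mu-law constant used in North American telephony

def compress(x):
    """mu-law compressor: maps [-1, 1] onto [-1, 1], stretching small magnitudes."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """Expander: exact inverse of the compressor."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def companded_quantize(x, n_bits=8):
    """Compressor -> uniform midrise quantizer on [-1, 1] -> expander."""
    M = 2 ** n_bits
    delta = 2.0 / M
    i = max(0, min(M - 1, int((compress(x) + 1.0) // delta)))
    return expand(-1.0 + (i + 0.5) * delta)

print(round(companded_quantize(0.01), 5))  # far closer to 0.01 than a uniform 8-bit step allows
```

Expanding after a uniform quantizer yields exactly the equivalent non-uniform quantizer of the last slide: small inputs get small effective steps.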
Vector Quantization
Definition:
x = [x(1) x(2) … x(N)], y = [y(1) y(2) … y(N)]
x(i), y(i), 1 ≤ i ≤ N, are real random variables
x and y are N-dimensional random vectors
The random vector y has a special distribution in that it may take only one of L (deterministic) vector values in R^N
Vector Quantization (c.1)
Vector quantization
The vector quantization of x may be viewed as a pattern-recognition problem: classifying the outcomes of the random variable x into a discrete number of categories (cells) in N-space in a way that optimizes some fidelity criterion, such as mean square distortion.

$$
y = Q(x)
$$
Vector Quantization (c.2)
VQ Distortion
VQ Optimization: minimize the average distortion D.

$$
D = \sum_{k=1}^{L} P(x \in C_k)\, E\{\, d(x, y_k) \mid x \in C_k \,\}
$$

d(x, y) are typically distance measures in R^N, including the l_1, l_2, and l_∞ norms.
Vector Quantization (c.3)
Two conditions for optimality
1. Nearest-neighbor selection (minimizes the average distortion; partitions the N-dimensional space into cells when the joint pdf is known):

$$
Q(x) = y_k \ \text{iff}\ x \in C_k: \quad
d(x, y_k) \le d(x, y_j) \ \text{for } j \ne k,\ 1 \le j \le L
$$

2. Centroid condition (given the cells {C_k, 1 ≤ k ≤ L} and f_x(·)):

$$
y_k = \arg\min_{y_k} E\{\, d(x, y_k) \mid x \in C_k \,\}
    = \arg\min_{y_k} \int \!\!\cdots\!\! \int_{\xi \in C_k} d(\xi, y_k)\, f_x(\xi)\, d\xi_1 \cdots d\xi_N
$$
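Applying the two conditions alternately gives the Lloyd/LBG iteration for vector quantizers. A toy 2-D sketch (the names and data are mine, not from the slides):

```python
import random

def nearest(x, codebook):
    """Nearest-neighbor condition: index of the codeword with the
    smallest squared Euclidean distance to x."""
    return min(range(len(codebook)),
               key=lambda k: sum((a - b) ** 2 for a, b in zip(x, codebook[k])))

def lbg_step(data, codebook):
    """One Lloyd/LBG iteration: classify every vector into its nearest
    cell, then move each codeword to the centroid of its cell."""
    cells = [[] for _ in codebook]
    for x in data:
        cells[nearest(x, codebook)].append(x)
    return [[sum(col) / len(cell) for col in zip(*cell)] if cell else cw
            for cell, cw in zip(cells, codebook)]

random.seed(4)
# two well-separated 2-D clusters
data = ([(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(500)]
        + [(random.gauss(3, 0.1), random.gauss(3, 0.1)) for _ in range(500)])
cb = [[0.5, 0.5], [2.5, 2.5]]
for _ in range(5):
    cb = lbg_step(data, cb)
print([[round(c, 2) for c in cw] for cw in cb])  # codewords settle near (0, 0) and (3, 3)
```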
Appendix C: Rate-Distortion Functions
Introduction
Rate-Distortion Function for a Gaussian Source
Rate-Distortion Bounds
Distortion Measure Methods
1. Introduction
The question considered:
Given a source-user pair and a channel, under what conditions is it possible to design a communication system that reproduces the source output for the user with an average distortion that does not exceed some specified upper limit D?
Two key quantities: the capacity C of a communication channel, and the rate-distortion function R(D) of a source-user pair.
Rate-distortion function R(D):
A communication system can be designed that achieves fidelity D if and only if the capacity of the channel that connects the source to the user exceeds R(D).
R(D) is the lower limit for data compression to achieve a certain fidelity, subject to a predetermined distortion measure D.
1. Introduction (cont.)
Equation representations:

Distortion D:

$$
D(q) = \int\!\!\int p(x)\, q(y|x)\, \rho(x, y)\, dx\, dy
$$

Mutual information:

$$
I(q) = \int\!\!\int p(x)\, q(y|x) \log \frac{q(y|x)}{q(y)}\, dx\, dy
$$

Rate-distortion function R(D):

$$
R(D) = \inf_{q \in Q_D} I(q), \qquad
Q_D = \{\, q(y|x) : D(q) \le D \,\}
$$

ρ(x, y): distortion measure for the source word x = (x_1, …, x_n) reproduced as y = (y_1, …, y_n):

$$
\rho_n(x, y) = \frac{1}{n} \sum_{t=1}^{n} \rho(x_t, y_t)
$$

The family F = {ρ_n, 1 ≤ n < ∞} is called the single-letter fidelity criterion generated by ρ.
2. Rate-Distortion Bounds
Introduction
Rate-Distortion Function for a Gaussian Source
R(D) for a memoryless Gaussian source
Source coding with a distortion measure
Rate-Distortion Bounds
Conclusions
3. Rate-Distortion Function for A Gaussian Source
Rate-Distortion for a memoryless Gaussian source
The minimum information rate (bpn) necessary to represent the output of a discrete-time, continuous-amplitude, memoryless, stationary Gaussian source, based on an MSE distortion measure per symbol.
$$
R_g(D) =
\begin{cases}
\dfrac{1}{2} \log_2\!\left(\dfrac{\sigma_x^2}{D}\right), & 0 \le D \le \sigma_x^2 \\[2pt]
0, & D \ge \sigma_x^2
\end{cases}
$$
3. Rate-Distortion Function for A Gaussian Source (c.1)
Source coding with a distortion measure (Shannon, 1959)
There exists a coding scheme that maps the source output into codewords such that for any given distortion D, the minimum rate R(D) bpn is sufficient to reconstruct the source output with an average distortion that is arbitrarily close to D.
Transform R(D) to the distortion-rate function D(R):

$$
D_g(R) = 2^{-2R}\, \sigma_x^2
$$

Expressed in dB:

$$
10 \log_{10} D_g(R) = -6R + 10 \log_{10} \sigma_x^2
$$
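The closed forms above are easy to evaluate (a small sketch with my own function names; the point is that each additional bit buys about 6.02 dB):

```python
import math

def rate_gaussian(D, var=1.0):
    """R_g(D) = 0.5 * log2(var / D) bits/sample for 0 < D <= var, else 0."""
    return 0.5 * math.log2(var / D) if D < var else 0.0

def distortion_gaussian(R, var=1.0):
    """Distortion-rate form: D_g(R) = 2**(-2R) * var."""
    return 2.0 ** (-2.0 * R) * var

for R in (1, 2, 3):   # each extra bit lowers the distortion by ~6.02 dB
    print(R, round(10 * math.log10(distortion_gaussian(R)), 2))
```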
Comparison between different quantizations
3. Rate-Distortion Function for A Gaussian Source (c.2)
4. Rate-Distortion Bounds
Source: memoryless, continuous-amplitude source with zero mean and finite variance, with respect to the MSE distortion measure.
Upper bound:
According to a theorem of Berger (1971), the Gaussian source requires the maximum rate among all sources for a specified level of mean square distortion:
$$
R(D) \le R_g(D) = \frac{1}{2} \log_2\!\left(\frac{\sigma_x^2}{D}\right),
\qquad 0 \le D \le \sigma_x^2
$$

$$
D(R) \le D_g(R) = 2^{-2R}\, \sigma_x^2
$$
Lower bound (Shannon lower bound):
4. Rate-Distortion Bounds (c.1)
$$
R^*(D) = H(x) - \frac{1}{2} \log_2(2\pi e D), \qquad
D^*(R) = \frac{1}{2\pi e}\, 2^{-2\left(R - H(x)\right)}
$$

where H(x) is the differential entropy:

$$
H(x) = -\int_{-\infty}^{\infty} f(\xi) \log f(\xi)\, d\xi
$$

For a Gaussian source:

$$
f_x(x) = \frac{1}{\sqrt{2\pi\sigma_x^2}}\, e^{-x^2 / 2\sigma_x^2}, \qquad
H(x) = \frac{1}{2} \log_2(2\pi e \sigma_x^2)
$$

$$
\Rightarrow\ R^*(D) = \frac{1}{2} \log_2(2\pi e \sigma_x^2) - \frac{1}{2} \log_2(2\pi e D)
= \frac{1}{2} \log_2 \frac{\sigma_x^2}{D} = R_g(D)
$$

In general:

$$
D^*(R) \le D(R) \le D_g(R), \qquad
R^*(D) \le R(D) \le R_g(D)
$$
For a Gaussian source, the rate-distortion function and the upper and lower bounds are all identical to each other.
The bound on the differential entropy:
4. Rate-Distortion Bounds (c.2)
The differential entropy is upper bounded by that of the Gaussian source, H_g(x):

$$
R_g(D) - R^*(D) = H_g(x) - H(x)
$$

$$
10 \log_{10} \frac{D_g(R)}{D^*(R)} = 6\left[H_g(x) - H(x)\right] \ \mathrm{dB}
$$

$$
10 \log_{10} D^*(R) = -6R + 10 \log_{10} \sigma_x^2 - 6\left[H_g(x) - H(x)\right]
$$
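As a worked check on the gap between the bounds, take a unit-variance Laplacian source; the closed-form differential entropies used below are standard results, and the helper names are mine:

```python
import math

def h_gaussian(var):
    """Differential entropy of a Gaussian source: 0.5 * log2(2*pi*e*var) bits."""
    return 0.5 * math.log2(2 * math.pi * math.e * var)

def h_laplacian(var):
    """Differential entropy of a Laplacian with the same variance:
    pdf (1/2b) exp(-|x|/b) has var = 2*b**2 and H = log2(2*e*b) bits."""
    b = math.sqrt(var / 2)
    return math.log2(2 * math.e * b)

gap_bits = h_gaussian(1.0) - h_laplacian(1.0)
print(round(gap_bits, 4), round(6 * gap_bits, 2))  # ~0.104 bits, so the bounds differ by ~0.63 dB
```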
Rate-distortion R(D) vs. channel capacity C:
For C > R_g(D):
The fidelity D can be achieved.
For R(D) < C < R_g(D):
Achieves fidelity for a stationary source
May not achieve fidelity for a random source
For C < R(D):
Cannot be sure to achieve fidelity
4. Rate-Distortion Bounds (c.3)
4. Rate-Distortion Bounds (c.4)
5. Distortion Measure Methods
The generalized Gaussian family GG(m, σ, r):

$$
GG(m, \sigma, r) = c\, \exp\!\left\{ -\left[\, k\, |x - m|\, \right]^r \right\}
$$

where

$$
c = \frac{r\, k}{2\,\Gamma(1/r)}, \qquad
k = \frac{1}{\sigma} \sqrt{\frac{\Gamma(3/r)}{\Gamma(1/r)}}
$$

Different values of r give different pdfs:
r = 1 → Laplacian pdf
r = 2 → Gaussian pdf
r → 0 → constant pdf
r → ∞ → uniform pdf
Problems
Homework (Sayood, 3rd ed., pp. 270-271): problems 3 and 6.
References
http://en.wikipedia.org/wiki/Rate_distortion_theory
J.R. Deller, J.G. Proakis, and J.H.L. Hansen, Discrete-Time Processing of Speech Signals, IEEE Press.