
1

Audio Signal Processing -- Quantization

Shyh-Kang Jeng
Department of Electrical Engineering /
Graduate Institute of Communication Engineering

2

Overview
• Audio signals are typically continuous-time and continuous-amplitude in nature
• Sampling allows for a discrete-time representation of audio signals
• Amplitude quantization is also needed to complete the digitization process
• Quantization determines how much distortion is present in the digital signal

3

Binary Numbers

• Decimal notation
  – Symbols: 0, 1, 2, 3, 4, …, 9
  – e.g., $1999 = 1 \cdot 10^3 + 9 \cdot 10^2 + 9 \cdot 10^1 + 9 \cdot 10^0$
• Binary notation
  – Symbols: 0, 1
  – e.g., $[01100100] = 0 \cdot 2^7 + 1 \cdot 2^6 + 1 \cdot 2^5 + 0 \cdot 2^4 + 0 \cdot 2^3 + 1 \cdot 2^2 + 0 \cdot 2^1 + 0 \cdot 2^0 = 100$
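A quick check of the binary expansion above in Python (an illustrative snippet, not part of the original slides):

```python
# Positional expansion of the binary string [01100100]
bits = "01100100"
value = sum(int(b) * 2**i for i, b in enumerate(reversed(bits)))
print(value)         # 100
print(int(bits, 2))  # 100, the same result via built-in base-2 parsing
```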

4

Negative Numbers
• Folded binary
  – Use the highest-order bit as an indicator of sign
• Two's complement
  – Follows the highest positive number with the lowest negative
  – e.g., 3 bits: $3 = [011]$, $-4 = [100]$
• We use folded binary notation when we need to represent negative numbers
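A minimal sketch contrasting the two notations for 3-bit words (the helper names are mine, not the slides'):

```python
def folded_binary(n, bits=3):
    # Highest-order bit indicates sign; the remaining bits hold the magnitude
    return (1 << (bits - 1) if n < 0 else 0) | abs(n)

def twos_complement(n, bits=3):
    # Wraps negatives around: the lowest negative follows the highest positive
    return n & ((1 << bits) - 1)

for n in (3, 1, -1, -3):
    print(f"{n:2d}  folded={folded_binary(n):03b}  two's={twos_complement(n):03b}")
print(f"-4  two's={twos_complement(-4):03b}")  # [100], as on the slide
```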

5

Quantization Mapping

• Quantization: continuous values → binary codes, $Q(x)$
• Dequantization: binary codes → continuous values, $Q^{-1}(x)$

6

Quantization Mapping (cont.)

• Symmetric quantizers
  – Equal number of levels (codes) for positive and negative numbers
• Midrise and midtread quantizers

7

Uniform Quantization

• Equally sized ranges of input amplitudes are mapped onto each code
• Midrise or midtread
• Maximum non-overload input value: $x_{max}$
• Size of input range per $R$-bit code, $\Delta$:
  – Midrise: $\Delta = 2 x_{max} / 2^R$
  – Midtread: $\Delta = 2 x_{max} / (2^R - 1)$
• Let $x_{max} = 1$ (as in the step-size check below)
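For concreteness, the step sizes of the 2-bit quantizers drawn on the next two slides (a small check, with $x_{max} = 1$):

```python
R, x_max = 2, 1.0
delta_midrise  = 2 * x_max / 2**R        # 0.5 -> output levels at +-1/4, +-3/4
delta_midtread = 2 * x_max / (2**R - 1)  # 2/3 -> output levels at -2/3, 0, +2/3
print(delta_midrise, delta_midtread)
```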

8

2-Bit Uniform Midrise Quantizer

(Figure: input-output staircase over inputs in [-1.0, 1.0]; reconstructed as a table)

  Input range      Code   Output
  [0.5, 1.0]        01     3/4
  [0.0, 0.5)        00     1/4
  [-0.5, 0.0)       10    -1/4
  [-1.0, -0.5)      11    -3/4

9

Uniform Midrise Quantizer
• Quantize: code(number) = [s][|code|]
  – $s = \begin{cases} 0 & \text{number} \ge 0 \\ 1 & \text{number} < 0 \end{cases}$
  – $|\text{code}| = \begin{cases} 2^{R-1} - 1 & \text{when } |\text{number}| \ge 1 \\ \mathrm{int}\!\left(2^{R-1}\,|\text{number}|\right) & \text{elsewhere} \end{cases}$
• Dequantize: number(code) = sign * |number|
  – $\text{sign} = \begin{cases} 1 & \text{if } s = 0 \\ -1 & \text{if } s = 1 \end{cases}$
  – $|\text{number}| = (|\text{code}| + 0.5) / 2^{R-1}$
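These formulas transcribe directly into Python (a sketch; the function names are mine):

```python
def midrise_quantize(number, R):
    # code(number) = [s][|code|], with x_max = 1
    s = 0 if number >= 0 else 1
    if abs(number) >= 1:
        mag = 2**(R - 1) - 1               # clip at the largest code
    else:
        mag = int(2**(R - 1) * abs(number))
    return s, mag

def midrise_dequantize(s, mag, R):
    # number(code) = sign * (|code| + 0.5) / 2^(R-1)
    sign = 1 if s == 0 else -1
    return sign * (mag + 0.5) / 2**(R - 1)

# 2-bit example from the previous slide: 0.6 lies in [0.5, 1.0]
print(midrise_quantize(0.6, 2))      # (0, 1), i.e. code [0][1]
print(midrise_dequantize(0, 1, 2))   # 0.75, the 3/4 output level
```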

10

2-Bit Uniform Midtread Quantizer

(Figure: input-output staircase over inputs in [-1.0, 1.0]; reconstructed as a table)

  Input range       Code    Output
  [1/3, 1.0]         01      2/3
  (-1/3, 1/3)       00/10    0.0
  [-1.0, -1/3]       11     -2/3

11

Uniform Midtread Quantizer
• Quantize: code(number) = [s][|code|]
  – $s = \begin{cases} 0 & \text{number} \ge 0 \\ 1 & \text{number} < 0 \end{cases}$
  – $|\text{code}| = \begin{cases} 2^{R-1} - 1 & \text{when } |\text{number}| \ge 1 \\ \mathrm{int}\!\left(\left((2^R - 1)\,|\text{number}| + 1\right)/2\right) & \text{elsewhere} \end{cases}$
• Dequantize: number(code) = sign * |number|
  – $\text{sign} = \begin{cases} 1 & \text{if } s = 0 \\ -1 & \text{if } s = 1 \end{cases}$
  – $|\text{number}| = 2\,|\text{code}| / (2^R - 1)$
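The same transcription for the midtread formulas (a sketch):

```python
def midtread_quantize(number, R):
    # code(number) = [s][|code|], with x_max = 1
    s = 0 if number >= 0 else 1
    if abs(number) >= 1:
        mag = 2**(R - 1) - 1
    else:
        mag = int(((2**R - 1) * abs(number) + 1) / 2)
    return s, mag

def midtread_dequantize(s, mag, R):
    # number(code) = sign * 2|code| / (2^R - 1)
    sign = 1 if s == 0 else -1
    return sign * 2 * mag / (2**R - 1)

# 2-bit examples from the previous slide
print(midtread_dequantize(*midtread_quantize(0.5, 2), 2))  # 0.666..., the 2/3 level
print(midtread_dequantize(*midtread_quantize(0.2, 2), 2))  # 0.0: (-1/3, 1/3) maps to zero
```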

12

Two Quantization Methods
• Uniform quantization
  – Constant limit on absolute round-off error
  – Poor SNR performance at low input power
• Floating point quantization
  – Some bits for an exponent, the rest for a mantissa
  – SNR is determined by the number of mantissa bits and remains roughly constant
  – Gives up accuracy for high signals but gains much greater accuracy for low signals

13

Floating Point Quantization

• Number of scale factor (exponent) bits: $R_s$
• Number of mantissa bits: $R_m$
• Low inputs
  – Roughly equivalent to uniform quantization with $R = 2^{R_s} - 1 + R_m$
• High inputs
  – Roughly equivalent to uniform quantization with $R = R_m + 1$

14

Floating Point Quantization Example

• $R_s = 3$, $R_m = 5$

  Input            Scale        Mantissa   Reconstruction
  [s0000000abcd]   scale=[000]  [sabcd]    [s0000000abcd]
  [s0000001abcd]   scale=[001]  [sabcd]    [s0000001abcd]
  [s000001abcde]   scale=[010]  [sabcd]    [s000001abcd1]
  ...
  [s1abcdefghij]   scale=[111]  [sabcd]    [s1abcd100000]
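One way to read the table: the scale factor counts the leading zeros of the magnitude, the mantissa keeps the sign plus the four bits after the implied leading 1, and the dropped low bits are reconstructed as 100…0 (the interval midpoint). The sketch below encodes the 11 magnitude bits of this $R_s = 3$, $R_m = 5$ example; the details beyond what the table shows are my reading of it:

```python
RS_MAX = 7      # largest scale value, 2^Rs - 1 with Rs = 3
MAG_BITS = 11   # magnitude bits per input word (the sign bit is kept separately)

def fp_encode(m):
    # Scale grows with magnitude: k = 0 keeps the 4 low bits exactly;
    # larger k drops (k - 1) low bits after the implied leading 1
    k = min(RS_MAX, max(0, m.bit_length() - 4))
    mant = m if k == 0 else (m >> (k - 1)) & 0b1111
    return k, mant

def fp_decode(k, mant):
    if k == 0:
        return mant
    m = (0b10000 | mant) << (k - 1)   # restore the implied leading 1
    if k >= 2:
        m |= 1 << (k - 2)             # fill the dropped bits as 100...0
    return m

for m in (0b00000000101, 0b00000011010, 0b10110010011):
    k, mant = fp_encode(m)
    print(f"{m:011b} -> scale={k:03b} mant={mant:04b} -> {fp_decode(k, mant):011b}")
```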

15

Quantization Error
• Main source of coder error
• Characterized by $\langle q^2 \rangle$
• A better measure: $\mathrm{SNR} = 10 \log_{10}\!\left(\langle x^2 \rangle / \langle q^2 \rangle\right)$
• Does not reflect auditory perception
• Cannot describe how perceivable the errors are
• A satisfactory objective error measure that reflects auditory perception does not exist

16

Quantization Error (cont.)

• Round-off error
• Overload error

(Figure: a quantized waveform with the overload region marked where the input exceeds $x_{max}$)

17

Round-Off Error
• Comes from mapping ranges of input amplitudes onto single codes
• Worse when the range of input amplitudes mapped onto a code is wider
• Assume that the error follows a uniform distribution
• Average error power:
  $\langle q^2 \rangle = \frac{1}{\Delta} \int_{-\Delta/2}^{\Delta/2} q^2 \, dq = \Delta^2 / 12$
• For a uniform quantizer:
  $\langle q^2 \rangle = x_{max}^2 / (3 \cdot 2^{2R})$
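A quick numerical sanity check of the $\Delta^2/12$ result under the uniform-error assumption:

```python
import random

random.seed(1)
delta = 0.25
q = [random.uniform(-delta / 2, delta / 2) for _ in range(200_000)]
measured = sum(e * e for e in q) / len(q)
print(measured, delta**2 / 12)  # both close to 0.0052
```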

18

Round-Off Error (cont.)

$$\begin{aligned}
\mathrm{SNR} &= 10 \log_{10}\!\left(\langle x^2 \rangle / \langle q^2 \rangle\right) \\
&= 10 \log_{10}\!\left(\langle x^2 \rangle / x_{max}^2\right) + 20 R \log_{10} 2 + 10 \log_{10} 3 \\
&= 10 \log_{10}\!\left(\langle x^2 \rangle / x_{max}^2\right) + 6.021 R + 4.771
\end{aligned}$$

(Figure: SNR (dB) versus input power (dB) for 4-bit, 8-bit, and 16-bit uniform quantizers)
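To see the 6.021 dB-per-bit rule numerically, this sketch pushes a uniformly distributed signal through the slide-9 midrise quantizer and measures the SNR (with $x_{max} = 1$ and $\langle x^2 \rangle = 1/3$, the prediction reduces to $6.021R$):

```python
import math
import random

def midrise_roundtrip(x, R):
    # Quantize then dequantize with the slide-9 midrise formulas (x_max = 1)
    sign = -1.0 if x < 0 else 1.0
    mag = min(2**(R - 1) - 1, int(2**(R - 1) * abs(x)))
    return sign * (mag + 0.5) / 2**(R - 1)

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(100_000)]
signal_power = sum(x * x for x in xs) / len(xs)  # about 1/3

for R in (4, 8, 16):
    noise_power = sum((x - midrise_roundtrip(x, R))**2 for x in xs) / len(xs)
    measured = 10 * math.log10(signal_power / noise_power)
    predicted = 6.021 * R + 4.771 + 10 * math.log10(signal_power)
    print(f"R={R:2d}  measured {measured:6.2f} dB  predicted {predicted:6.2f} dB")
```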

19

Overload Error

• Comes from signals where $|x(t)| > x_{max}$
• Depends on the probability distribution of signal values
• Reduced for high $x_{max}$
• High $x_{max}$ implies wide levels $\Delta$ and therefore high round-off error
• Requires a balance between the need to reduce both errors

20

Entropy
• A measure of the uncertainty about the next code to come out of a coder
• Very low when we are pretty sure what code will come out
• High when we have little idea which symbol is coming
• $\text{Entropy} = \sum_n p_n \log_2(1/p_n)$
• Shannon: this entropy equals the lowest possible bits per sample a coder could produce for this signal
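The formula translates directly (a sketch; the probabilities reprise the Huffman examples later in the deck):

```python
from math import log2

def entropy(probs):
    # sum of p_n * log2(1/p_n); zero-probability symbols contribute nothing
    return sum(p * log2(1 / p) for p in probs if p > 0)

print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits: no idea which symbol is next
print(entropy([0.75, 0.1, 0.075, 0.075]))  # about 1.2 bits
print(entropy([0.999, 0.001]))             # near 0: almost certain
```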

21

Entropy with 2-Code Symbols

• $\text{Entropy} = p \log_2(1/p) + (1 - p) \log_2\!\left(1/(1 - p)\right)$
• When the entropy falls below 1 bit, there exist lower bit-rate ways to encode the codes than just using one bit for each code symbol

(Figure: entropy versus $p$ on [0, 1], peaking at 1 bit when $p = 0.5$)

22

Entropy with N-Code Symbols
• $\text{Entropy} = \sum_n p_n \log_2(1/p_n)$
• Equals zero when one probability equals 1
• Any symbol with probability zero does not contribute to entropy
• Maximum when all probabilities are equal
• For equal-probability code symbols, $p_n = 2^{-R}$:
  $\text{Entropy} = 2^R \cdot \left(2^{-R} \log_2 2^R\right) = R$
• Optimal coders only allocate bits to differentiate symbols with near-equal probabilities

23

Huffman Coding
• Create code symbols based on the probability of each symbol's occurrence
• Code length is variable
  – Shorter codes for common symbols
  – Longer codes for rare symbols
• Shannon: $\text{Entropy} \le R_{\text{Huffman}} \le \text{Entropy} + 1$
• Reduces bits over fixed-bit coding if the symbols are not evenly distributed

24

Huffman Coding (cont.)

• Depends on the probabilities of each symbol
• Created by recursively allocating bits to distinguish between the lowest-probability symbols until all symbols are accounted for
• To decode, we need to know how the bits were allocated
  – Recreate the allocation given the probabilities
  – Pass the allocation with the data

25

Example of Huffman Coding
• A 4-symbol case

  Symbol       00     01     10     11
  Probability  0.75   0.1    0.075  0.075

• Results

  Symbol  00   01   10    11
  Code    0    10   110   111

• Average rate: $\bar{R} = 0.75 \cdot 1 + 0.1 \cdot 2 + 0.15 \cdot 3 = 1.4$ bits

(Figure: the Huffman tree, with 0/1 branches yielding codes 0, 10, 110, 111; the construction is sketched in code below)
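A compact Huffman construction over this 4-symbol case (a sketch using heapq; tie-breaking may relabel the branches, but the code lengths match the slide):

```python
import heapq

def huffman(probs):
    # Recursively merge the two lowest-probability nodes, prepending one bit
    # to every symbol under each merged node
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c0.items()}
        merged.update({s: "1" + c for s, c in c1.items()})
        heapq.heappush(heap, (p0 + p1, count, merged))
        count += 1
    return heap[0][2]

probs = {"00": 0.75, "01": 0.1, "10": 0.075, "11": 0.075}
codes = huffman(probs)
print(codes)  # code lengths 1, 2, 3, 3
print(sum(probs[s] * len(c) for s, c in codes.items()))  # 1.4 bits/sample
```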

26

Example (cont.)
• Normally 2 bits/sample for 4 symbols
• Huffman coding requires 1.4 bits/sample on average
• Close to the minimum possible, since $\text{Entropy} \approx 1.2$ bits
• 0 is a "comma code" here
  – Example: [01101011011110]

27

Another Example
• A 4-symbol case

  Symbol       00    01    10    11
  Probability  0.25  0.25  0.25  0.25

• Results

  Symbol  00   01   10   11
  Code    00   01   10   11

• Adds nothing when symbol probabilities are roughly equal

(Figure: a balanced Huffman tree; each 0/1 split carries equal probability)