
Transcript of 02 LINEAR BLOCK CODING Kul - Gadjah Mada...


DR. RISANURI HIDAYAT
LINEAR BLOCK CODING Kul 02
3/8/13 JTETI FT UGM

INTRO

•  A linear block code takes k-bit message blocks and converts each such block into an n-bit coded block. The rate of the code is k/n.
•  The conversion in a linear block code involves only linear operations over the message bits to produce codewords.
•  A codeword consists of the k message bits D1 D2 ... Dk followed by the n − k parity bits P1 P2 ... Pn−k, where each Pi is some linear combination of the Di's.


•  The message-to-codeword transformation is C = D.G, with all arithmetic done modulo 2, where
•  D is the 1 × k row of message bits D1 D2 ... Dk,
•  C is the n-bit codeword C1 C2 ... Cn,
•  G is the k × n generator matrix that completely characterizes the linear block code.
•  If the code is in systematic form, C has the form D1 D2 ... Dk P1 P2 ... Pn−k.
•  G is decomposed into a k × k identity matrix "concatenated" horizontally with a k × (n − k) matrix A of values that defines the code: G = [Ik×k | A].
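As a concrete illustration of the C = D.G transformation, here is a minimal encoding sketch in Python (the function name encode and the example matrix are my own, not from the slides; the example G is the k = 3 code with G = [I | 1^T] that appears on a later slide):

def encode(D, G):
    # Multiply the 1 x k message row D by the k x n generator matrix G, mod 2.
    k, n = len(G), len(G[0])
    assert len(D) == k
    return [sum(D[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]

# Example: the k = 3 code with one overall parity bit, G = [I | 1^T].
G = [[1, 0, 0, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 1]]
print(encode([1, 0, 1], G))  # -> [1, 0, 1, 0]: the message bits followed by their parity bit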


INTRO

Message (u)    Codeword (v)
0 0 0 0        0 0 0 0 1 1 0
0 0 0 1        0 0 0 1 1 0 1
0 0 1 0        0 0 1 0 0 0 1
...            ...
1 1 1 1        1 1 1 1 1 0 0

Block code with k = 4, 2^k = 16 messages, n = 7

EXAMPLES

•  k = 3
•  G = [Ik×k | 1^T], where
•  Ik×k is the k × k identity matrix
•  1^T is a k-bit column vector of all ones

G =
1 0 0 | 1
0 1 0 | 1
0 0 1 | 1


(7,4) HAMMING CODE

•  k = 4
•  G = [Ik×k | A], where Ik×k is the k × k identity matrix

G =
1 0 0 0 | 1 1 0
0 1 0 0 | 1 0 1
0 0 1 0 | 0 1 1
0 0 0 1 | 1 1 1
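To see why this particular G gives a single-error-correcting code, one can enumerate all 2^k codewords and check that the minimum Hamming distance between distinct codewords is 3. A small sketch (names and loop structure are mine, not from the slides):

from itertools import product

# Generator matrix from the slide, G = [I | A], written as a list of rows.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(D):
    # C = D.G with all arithmetic mod 2.
    return [sum(d * g for d, g in zip(D, col)) % 2 for col in zip(*G)]

codewords = [encode(D) for D in product([0, 1], repeat=4)]

# Minimum Hamming distance over all pairs of distinct codewords.
dmin = min(sum(a != b for a, b in zip(c1, c2))
           for i, c1 in enumerate(codewords)
           for c2 in codewords[i + 1:])
print(dmin)  # -> 3, which is why a single bit error can always be corrected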

MAXIMUM-LIKELIHOOD (ML) DECODING

•  For a linear block code, an ML decoder takes n received bits as input and returns the most likely k-bit message among the 2^k possible messages.
•  Compare the received word, r, to each of these valid codewords and find the one with the smallest Hamming distance to r (see the sketch below).
•  ML decoding by comparing a received word, r, with all 2^k possible valid n-bit codewords does work,
•  but it has exponential time complexity,
•  and the "compare to all valid codewords" method does not take advantage of the linearity of the code.
•  We need the decoding to be a lot faster.
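A direct implementation of this exhaustive search might look like the following sketch (my own helper names; it simply scans all 2^k messages and keeps the closest codeword):

from itertools import product

G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(D):
    # Systematic encoding C = D.G (mod 2) for the (7,4) code above.
    return [sum(d * g for d, g in zip(D, col)) % 2 for col in zip(*G)]

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def ml_decode(r):
    # Brute-force ML decoding: try every k-bit message, keep the one whose
    # codeword is closest to the received word r in Hamming distance.
    k = len(G)
    return min((list(D) for D in product([0, 1], repeat=k)),
               key=lambda D: hamming_distance(encode(D), r))

# Received word with an error in its second bit (the example used later in the slides):
print(ml_decode([1, 1, 1, 0, 1, 0, 1]))  # -> [1, 0, 1, 0]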


SYNDROME DECODING

•  Consider the (7, 4) Hamming code whose generator matrix G is given by

G =
1 0 0 0 | 1 1 0
0 1 0 0 | 1 0 1
0 0 1 0 | 0 1 1
0 0 0 1 | 1 1 1

•  The parity equations are

P1 = D1 + D2 + D4
P2 = D1 + D3 + D4
P3 = D2 + D3 + D4

There are n − k such equations.

•  We can rewrite these equations by moving the P's to the same side as the D's (in modulo-2 arithmetic, there is no difference between a − and a + sign!):

D1 + D2 + D4 + P1 = 0
D1 + D3 + D4 + P2 = 0
D2 + D3 + D4 + P3 = 0
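As a quick computational check of these equations, a tiny sketch (the function name parity_bits is hypothetical, not from the slides):

def parity_bits(D1, D2, D3, D4):
    # Parity equations of the (7,4) Hamming code, evaluated mod 2.
    P1 = (D1 + D2 + D4) % 2
    P2 = (D1 + D3 + D4) % 2
    P3 = (D2 + D3 + D4) % 2
    return P1, P2, P3

print(parity_bits(1, 0, 1, 0))  # -> (1, 0, 1), so message 1010 encodes to 1010 101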


•  In matrix notation, using a parity check matrix H, the parity equations can be written as follows:

[D1 D2 ... Dk P1 P2 ... Pn−k].H = 0, i.e., c.H = 0

•  H consists of the matrix A stacked on top of the (n − k) × (n − k) identity matrix.
•  Hence, for any received word r without errors,
•  c = r
•  r.H = 0
•  Now suppose a received word r has some errors in it:
•  r = c + e,
•  where c is some valid codeword and e is an error vector.
•  r.H = (c + e).H = c.H + e.H = e.H


•  Given a received word, r, the decoder computes r.H.
•  If it is 0, then there are no single-bit errors, and the receiver returns the first k bits of the received word as the decoded message.
•  If not, then it compares that (n − k)-bit value with each of the k stored syndromes.
•  If syndrome j matches, then it means that data bit j in the received word was in error; the decoder flips that bit and returns the first k bits of the received word as the most likely message that was encoded and transmitted.

•  If r.H is not all zeroes, and if it does not match any stored syndrome,
•  then the decoder concludes that either some parity bit was wrong,
•  or that there were multiple errors. In this case, it might simply return the first k bits of the received word as the message (a complete decoder sketch follows below).
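Putting the whole procedure together, a syndrome decoder for the (7,4) code might look like the following sketch (function names are mine; it stores one syndrome per data bit, as described above, and treats any other nonzero syndrome as a parity-bit or multi-bit error):

# Parity check matrix H = [A ; I]: one row per codeword bit, one column per parity bit.
H = [
    [1, 1, 0],  # D1
    [1, 0, 1],  # D2
    [0, 1, 1],  # D3
    [1, 1, 1],  # D4
    [1, 0, 0],  # P1
    [0, 1, 0],  # P2
    [0, 0, 1],  # P3
]
k, n = 4, 7

def syndrome(r):
    # r.H (mod 2), returned as a tuple of n - k bits.
    return tuple(sum(r[i] * H[i][j] for i in range(n)) % 2 for j in range(n - k))

# Pre-computed syndromes for a single error in each of the k data bits.
data_bit_syndromes = {syndrome([1 if i == j else 0 for i in range(n)]): j for j in range(k)}

def decode(r):
    s = syndrome(r)
    if s == (0, 0, 0):
        return r[:k]                   # no single-bit error detected
    if s in data_bit_syndromes:
        r = list(r)
        r[data_bit_syndromes[s]] ^= 1  # flip the data bit named by the matching syndrome
        return r[:k]
    return r[:k]                       # parity-bit error or multiple errors

print(decode([1, 1, 1, 0, 1, 0, 1]))  # error in the second bit -> [1, 0, 1, 0]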


EXAMPLE

•  Consider the (7,4) Hamming code.

G =
1 0 0 0 | 1 1 0
0 1 0 0 | 1 0 1
0 0 1 0 | 0 1 1
0 0 0 1 | 1 1 1

message    code
0000       0000 000
0001       0001 111
0010       0010 011
0011       0011 100
0100       0100 101
0101       0101 010
0110       0110 110
0111       0111 001
1000       1000 110
1001       1001 001
1010       1010 101
1011       1011 010
1100       1100 011
1101       1101 100
1110       1110 000
1111       1111 111
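The table above can be reproduced programmatically from G; a small sketch (the loop and naming are mine):

from itertools import product

G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

for D in product([0, 1], repeat=4):
    C = [sum(d * g for d, g in zip(D, col)) % 2 for col in zip(*G)]
    message = "".join(map(str, D))
    code = "".join(map(str, C[:4])) + " " + "".join(map(str, C[4:]))
    print(message, code)  # e.g. "1010 1010 101"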

•  The parity check matrix, H, is built from the A portion of G = [I | A]:

G =
1 0 0 0 | 1 1 0
0 1 0 0 | 1 0 1
0 0 1 0 | 0 1 1
0 0 0 1 | 1 1 1

H =
1 1 0
1 0 1
0 1 1
1 1 1
1 0 0
0 1 0
0 0 1


•  Suppose r = c = 1010 101.
•  c.H = (000)
•  Suppose r = 1110101 (error in the second bit).
•  r.H = (101)
•  (101) is the second row of the parity check matrix H (shown below),
•  so the error in r is in the second bit:
•  r = 1110101 is corrected to c = 1010 101.

H =
1 1 0
1 0 1
0 1 1
1 1 1
1 0 0
0 1 0
0 0 1

•  The decoder pre-computes syndromes corresponding to all possible single-bit errors.

[1000000].H = [110]
[0100000].H = [101]
[0010000].H = [011]

H =
1 1 0
1 0 1
0 1 1
1 1 1
1 0 0
0 1 0
0 0 1
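Continuing this pattern, all n single-bit-error syndromes can be tabulated in a few lines (a sketch with my own naming; each syndrome is simply the corresponding row of H):

H = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
]
n, nk = len(H), len(H[0])   # n codeword bits, n - k parity bits

for bit in range(n):
    e = [1 if i == bit else 0 for i in range(n)]   # single-bit error vector
    s = [sum(e[i] * H[i][j] for i in range(n)) % 2 for j in range(nk)]
    print(e, "-> syndrome", s)   # e.g. [1, 0, 0, 0, 0, 0, 0] -> syndrome [1, 1, 0]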


G - GENERATION

•  Given a k-bit message d1 d2 d3 ... dk, how many code bits n (c1 c2 c3 ... cn) are there?
•  How many parity bits P (p1 p2 ... pP) are there?
•  n = k + P, so P = n − k
•  There are 2^P possible parity-bit combinations
•  Not all combinations can be used
•  There is a minimum Hamming weight requirement

EXAMPLE

•  Suppose we have 3 parity bits (P = 3).
•  There are 2^3 = 8 combinations:

p1 p2 p3
0  0  0
0  0  1
0  1  0
0  1  1
1  0  0
1  0  1
1  1  0
1  1  1

•  If we want to be able to correct 1 code bit, each row must have minimum Hamming weight >= 2.
•  Parity rows with HW < 2 are crossed out; there are P + 1 = 3 + 1 = 4 such rows.
•  This leaves 8 − 4 = 4 combinations.


EXAMPLE

•  Prepend the k × k (4 × 4) identity matrix in front of these rows.
•  k = 4, P = 3, n = 7
•  Code (n, k) = Code (7, 4)

p1 p2 p3
0  1  1
1  0  1
1  1  0
1  1  1
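The construction just described can be written out directly: enumerate all 2^P parity combinations, drop those with Hamming weight below 2, and prepend the identity matrix. A sketch (my own naming; the slides do not prescribe a particular ordering of the surviving rows, so the parity columns may come out in a different order than the G shown earlier):

from itertools import product

P, k = 3, 4

# All 2^P parity-bit combinations, keeping only those with Hamming weight >= 2.
parity_rows = [row for row in product([0, 1], repeat=P) if sum(row) >= 2]
assert len(parity_rows) >= k  # enough rows to give every message bit its own parity pattern

# G = [I | A]: the k x k identity in front, one surviving parity row per message bit.
G = [[1 if i == j else 0 for j in range(k)] + list(parity_rows[i]) for i in range(k)]
for row in G:
    print(row)
# One possible result (the row order depends on how the surviving combinations are listed):
# [1, 0, 0, 0, 0, 1, 1]
# [0, 1, 0, 0, 1, 0, 1]
# [0, 0, 1, 0, 1, 1, 0]
# [0, 0, 0, 1, 1, 1, 1]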

H GENERATION

p1 p2 p3
0  1  1
1  0  1
1  1  0
1  1  1

•  Append the P × P (3 × 3) identity matrix below these rows.


•  What if k = 3?

P = n − k    n = 2^P − 1    k = n − P
1            1              0
2            3              1
3            7              4     (covers k = 2, 3, 4)
4            15             11    (covers k = 5, ..., 11)
5            31             26

EXAMPLE

•  k = 3, P = 3, n = 6
•  Code (n, k) = Code (6, 3)
•  Cross out one of the P rows
•  Prepend the k × k (3 × 3) identity matrix in front

p1 p2 p3
0  1  1
1  0  1
1  1  0
1  1  1


H GENERATION

p1 p2 p3
0  1  1
1  0  1
1  1  1

•  Append the P × P (3 × 3) identity matrix below these rows.

SEC CODES

•  SEC (single error correction) codes are a good building block, but they correct at most one error in a block of n coded bits. For longer messages, the solution, of course, is to break the message up into smaller blocks of k bits each and to protect each block with its own SEC code (see the sketch below).
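A minimal sketch of that blocking step (names and the zero-padding policy for a final partial block are my own assumptions, not from the slides):

def encode_long_message(bits, k, encode_block):
    # Split a long bit list into k-bit blocks, zero-pad the last block,
    # and protect each block with its own SEC code via encode_block.
    blocks = [bits[i:i + k] for i in range(0, len(bits), k)]
    blocks[-1] = blocks[-1] + [0] * (k - len(blocks[-1]))
    return [encode_block(b) for b in blocks]

# Example using the (7,4) Hamming encoder from the earlier slides:
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
encode = lambda D: [sum(d * g for d, g in zip(D, col)) % 2 for col in zip(*G)]
print(encode_long_message([1, 0, 1, 0, 1, 1], k=4, encode_block=encode))
# -> [[1, 0, 1, 0, 1, 0, 1], [1, 1, 0, 0, 0, 1, 1]]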


BURST ERRORS

•  Wireless channels suffer from
•  interference from other transmitters
•  fading, caused mainly by multi-path propagation
•  The BSC model needs to be replaced with a more complicated one in which errors may occur in bursts.
•  What do we mean by a "burst"? The simplest model is to treat the channel as having two states (a simulation sketch follows below):
•  a "good" state, and
•  a "bad" state.
•  In the "good" state, the bit error probability is pg; in the "bad" state, it is pb > pg.
•  Once in the good state, the channel has some probability of remaining there (generally > 1/2) and some probability of moving into the "bad" state, and vice versa.
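This two-state description resembles a Gilbert-Elliott-style channel. A small simulation sketch (all parameter values below are made-up placeholders, not from the slides):

import random

def burst_channel(bits, p_good=0.001, p_bad=0.2, stay_good=0.95, stay_bad=0.7, seed=1):
    # Flip each bit with probability p_good in the "good" state and p_bad in the
    # "bad" state; the channel stays in its current state with probability
    # stay_good / stay_bad before the next bit, otherwise it switches.
    rng = random.Random(seed)
    in_good_state = True
    received = []
    for b in bits:
        p = p_good if in_good_state else p_bad
        received.append(b ^ (rng.random() < p))
        stay = stay_good if in_good_state else stay_bad
        if rng.random() >= stay:
            in_good_state = not in_good_state
    return received

tx = [0] * 1000
rx = burst_channel(tx)
print(sum(rx), "bit errors out of", len(tx))  # errors tend to cluster into bursts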
