Weight enumerators


Page 1: Weight enumerators


Weight enumerators

• Weight enumerating function (WEF): $A(X) = \sum_d A_d X^d$

• Input-Output weight enumerating function (IOWEF)

• $A(W,X,L) = \sum_{w,d,l} A_{w,d,l} W^w X^d L^l$

• Gives the most complete information about weight structure

• From the IOWEF we can derive other enumerator functions:

• WEF (set W=L=1)

• Conditional WEF (CWEF): considers a given input weight

• Bit CWEF/ Bit IOWEF/ Bit WEF

• Input-Redundancy WEFs (IRWEFs)

• WEFs of truncated/terminated codes
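
To make the derivations above concrete, here is a minimal SymPy sketch; the IOWEF used is a hypothetical toy polynomial, not the enumerator of any particular code:

```python
import sympy as sp

W, X, L = sp.symbols('W X L')

# Hypothetical toy IOWEF A(W, X, L); the coefficients are illustrative only
A = W*X**5*L**3 + W**2*X**6*L**4 + W*X**6*L**5

wef = A.subs({W: 1, L: 1})         # WEF A(X): set W = L = 1
cwef_1 = A.coeff(W, 1).subs(L, 1)  # CWEF A_1(X): codewords produced by input weight w = 1
print(wef)     # X**5 + 2*X**6
print(cwef_1)  # X**5 + X**6
```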

Page 2: Weight enumerators


Conditional WEF

• $A_w(X) = \sum_d A_{w,d} X^d$

• …where $A_{w,d}$ is the number of codewords of information weight w and codeword weight d

• An encoder property

• Useful for analyzing turbo codes with convolutional codes as component codes

Page 3: Weight enumerators


Truncated/terminated encoders

• Output length limited to h + m blocks

• h is the number of input blocks

• m is the number of terminating output blocks (the tail) necessary to bring the encoder back to the initial state

• For a terminated code, apply the following procedure:

• Write the IOWEF A(W,X,L) in increasing order of L

• Delete the terms of L degree larger than h + m
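
A minimal SymPy sketch of this truncation step, with a hypothetical toy IOWEF and hypothetical lengths h and m chosen only to show the mechanics:

```python
import sympy as sp

W, X, L = sp.symbols('W X L')
h, m = 3, 2   # hypothetical number of input blocks and tail blocks

# Toy IOWEF with illustrative terms only
A = W*X**5*L**3 + W**2*X**6*L**4 + W*X**7*L**8

# Keep only the terms whose degree in L is at most h + m (the terminated length)
A_terminated = sum(t for t in A.as_ordered_terms() if sp.degree(t, L) <= h + m)
print(A_terminated)   # W*X**5*L**3 + W**2*X**6*L**4
```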

Page 4: Weight enumerators


Do we count all codewords?

• No

• Only those that start at time 0

• Why?

• Each time instant is similar (for a time-invariant code)

• The Viterbi decoding algorithm (ML on the trellis) makes decisions on k input bits at a time. Thus any error pattern will start at some time, and the error pattern will be structurally similar to an error starting at time 0

• Only first-event paths

• Why?

• Same as above

• Thus FER/BER calculation depends on the first event errors that start at time 0

Page 5: Weight enumerators


BER calculation

• Bit CWEF: $B_w(X) = \sum_d B_{w,d} X^d$

• …where $B_{w,d} = (w/k)\,A_{w,d}$ is the total number of nonzero information bits associated with all codewords of weight d and produced by information sequences of weight w, divided by k

• Bit IOWEF: $B(W,X,L) = \sum_{w,d,l} B_{w,d,l} W^w X^d L^l$

• Bit WEF: $B(X) = \sum_d B_d X^d = \sum_{w,d} B_{w,d} X^d = \sum_{w,d} (w/k)\,A_{w,d} X^d = \left.\frac{1}{k}\,\frac{\partial}{\partial W}\Big(\sum_{w,d} A_{w,d} W^w X^d\Big)\right|_{W=1}$
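
The last relation can be checked mechanically. A minimal SymPy sketch, assuming a hypothetical toy A(W, X) (L already set to 1) and k = 1:

```python
import sympy as sp

W, X = sp.symbols('W X')
k = 1   # assuming a rate-1/n encoder, so k = 1

# Toy weight enumerator A(W, X) with illustrative coefficients only
A = W*X**5 + 2*W**2*X**6 + W*X**6

# Bit WEF via B(X) = (1/k) * dA(W, X)/dW evaluated at W = 1
B = sp.expand(sp.diff(A, W).subs(W, 1) / k)
print(B)   # X**5 + 5*X**6
```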

Page 6: Weight enumerators


IRWEF

• Systematic encoders: codeword weight d = w + z, where z is the parity weight

• Thus instead of the IOWEF $A(W,X,L) = \sum_{w,d,l} A_{w,d,l} W^w X^d L^l$, we may (and in some cases it is more convenient to) consider the input-redundancy WEF $A(W,Z,L) = \sum_{w,z,l} A_{w,z,l} W^w Z^z L^l$
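
Because d = w + z for a systematic encoder, the IOWEF can be recovered from the IRWEF by substituting W → WX and Z → X. A quick SymPy check with a hypothetical toy IRWEF:

```python
import sympy as sp

W, X, Z, L = sp.symbols('W X Z L')

# Toy IRWEF with illustrative terms only
A_irwef = W*Z**4*L**3 + W**2*Z**4*L**4

# W -> W*X and Z -> X turns W^w Z^z into W^w X^(w+z) = W^w X^d
A_iowef = sp.expand(A_irwef.subs({W: W*X, Z: X}))
print(A_iowef)   # equals W*X**5*L**3 + W**2*X**6*L**4, i.e. d = w + z in each term
```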

Page 7: Weight enumerators


Alternative to Mason’s formula

• Introduce state variables $\Sigma_i$ giving the weights of all paths from $S_0$ to state $S_i$

$\Sigma_1 = WZL + L\,\Sigma_2$

$\Sigma_2 = WL\,\Sigma_1 + ZL\,\Sigma_3$

$\Sigma_3 = ZL\,\Sigma_1 + WL\,\Sigma_3$

• $A(W,Z,L) = WZL\,\Sigma_2$

• Solve this set of linear equations
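
A minimal SymPy sketch of this step, assuming the state equations and the relation $A(W,Z,L) = WZL\,\Sigma_2$ exactly as written above:

```python
import sympy as sp

W, Z, L = sp.symbols('W Z L')
S1, S2, S3 = sp.symbols('Sigma1 Sigma2 Sigma3')

# State equations read off the (assumed) augmented state diagram above
eqs = [sp.Eq(S1, W*Z*L + L*S2),
       sp.Eq(S2, W*L*S1 + Z*L*S3),
       sp.Eq(S3, Z*L*S1 + W*L*S3)]

sol = sp.solve(eqs, [S1, S2, S3], dict=True)[0]

# IRWEF from the relation on the slide: A(W, Z, L) = W*Z*L * Sigma2
A = sp.simplify(W*Z*L * sol[S2])
print(A)
```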

Page 8: Weight enumerators


Distance properties

• The decoding method determines what is actually the important distance property

• The free distance of the code (ML decoding)

• The column distance function (sequential decoding)

• The minimum distance of the code (majority logic decoding)

Page 9: Weight enumerators


Free distance

• $d_\mathrm{free} = \min_{u,u'} \{\, d(v,v') : u \neq u' \,\} = \min_{u,u'} \{\, w(v+v') : u \neq u' \,\} = \min_{u} \{\, w(v) : u \neq 0 \,\}$, where v, v' are the codewords corresponding to the information sequences u, u'

• Lowest power of X in the WEFs

• Minimum weight of any path that diverges from the zero state and remerges later

• Note: We implicitly assume a noncatastrophic encoder here

• Catastrophic encoders: may have paths of smaller weight than $d_\mathrm{free}$ that do not remerge
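
As an illustration, the sketch below finds d_free as the minimum weight of a path that diverges from the zero state and remerges, assuming the rate-1/3 encoder G(D) = [1 + D, 1 + D^2, 1 + D + D^2] used in the example later in these slides (a Dijkstra search over the state diagram):

```python
import heapq

def branches(state):
    """Branches of the example encoder G(D) = [1+D, 1+D^2, 1+D+D^2]; state = (s1, s2)."""
    s1, s2 = state
    for u in (0, 1):
        weight = (u ^ s1) + (u ^ s2) + (u ^ s1 ^ s2)   # Hamming weight of the output block
        yield (u, s1), weight

def dfree():
    """Minimum weight over paths that leave the zero state and remerge with it."""
    # Force the diverging branch (input 1 from state (0, 0)), then run Dijkstra
    start, w0 = next((nxt, w) for nxt, w in branches((0, 0)) if nxt != (0, 0))
    heap, settled = [(w0, start)], {}
    while heap:
        w, state = heapq.heappop(heap)
        if state == (0, 0):
            return w          # first return to the zero state gives the minimum weight
        if state in settled:
            continue
        settled[state] = w
        for nxt, bw in branches(state):
            heapq.heappush(heap, (w + bw, nxt))

print(dfree())   # 7 for this encoder
```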

Page 10: Weight enumerators


Column distance

• $[G]_l$: the binary matrix consisting of the first n(l+1) columns of G

• Column distance function (CDF) $d_l$: the minimum distance of the block code defined by $[G]_l$

• Important for sequential decoding
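
A brute-force sketch of the CDF, again assuming the rate-1/3 example encoder G(D) = [1 + D, 1 + D^2, 1 + D + D^2] and the usual convention that the first information bit is nonzero:

```python
from itertools import product

def column_distance(l):
    """d_l: minimum weight of the first n(l+1) output bits over inputs with u_0 = 1,
    for the example encoder G(D) = [1+D, 1+D^2, 1+D+D^2]."""
    best = None
    for u in product((0, 1), repeat=l + 1):
        if u[0] != 1:
            continue
        weight, s1, s2 = 0, 0, 0
        for bit in u:
            weight += (bit ^ s1) + (bit ^ s2) + (bit ^ s1 ^ s2)
            s1, s2 = bit, s1          # shift-register update
        best = weight if best is None else min(best, weight)
    return best

print([column_distance(l) for l in range(6)])   # starts 3, 4, 5, ... and approaches d_free
```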

Page 11: Weight enumerators


Special case of column distance

• Special cases:

• $l = m$: $d_m$ is the minimum distance (important for majority logic decoding of convolutional codes)

• $l \to \infty$: $d_l \to d_\mathrm{free}$

Page 12: Weight enumerators


Optimum decoding of CCs

• A trellis offers an "economic" representation of all codewords

• Maximum likelihood decoding: The Viterbi algorithm

• Decode to the nearest codeword

• MAP decoding: The BCJR algorithm

• Minimize information bit error probability

• Turbo decoding applications

Page 13: Weight enumerators


Trellises for convolutional codes

• How to obtain the trellis from the state diagram:

• Make one copy of the states of a state diagram for each time instant

• Let branches from states at time instant i go to states at time instant i+1

Page 14: Weight enumerators


Example

• $G(D) = [1 + D,\ 1 + D^2,\ 1 + D + D^2]$
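
A small sketch of this encoder's state-transition table; the state labeling (s1, s2) = (previous input bit, the one before that) is an assumed convention, and each printed row corresponds to one branch of the trellis:

```python
def step(state, u):
    """Next state and output block of G(D) = [1+D, 1+D^2, 1+D+D^2] for input bit u."""
    s1, s2 = state
    output = (u ^ s1, u ^ s2, u ^ s1 ^ s2)   # taps 1+D, 1+D^2, 1+D+D^2
    return (u, s1), output

for state in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for u in (0, 1):
        nxt, out = step(state, u)
        print(f"state {state}, input {u} -> next state {nxt}, output {out}")
```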

Page 15: Weight enumerators


Metrics

• A metric is a measure of (abstract) distance between (abstract) points a, b, c

• …that obeys the triangle inequality: $M(a,b) \leq M(a,c) + M(c,b)$

[Figure: triangle with vertices a, b, c and edges labeled M(a,b), M(a,c), M(c,b)]

Page 16: Weight enumerators


Metrics for a DMC

• Information $u = (u_0, \ldots, u_{h-1}) = (u_0, \ldots, u_{K-1})$, $K = kh$

• Codeword $v = (v_0, \ldots, v_{h+m-1}) = (v_0, \ldots, v_{N-1})$, $N = n(h+m)$

• Received $r = (r_0, \ldots, r_{h+m-1}) = (r_0, \ldots, r_{N-1})$

• Recall:

• $P(r|v) = \prod_{l=0}^{h+m-1} P(r_l|v_l) = \prod_{j=0}^{N-1} P(r_j|v_j)$

• ML decoder: Choose v to maximize this expression

• …or to maximize $\log P(r|v) = \sum_{l=0}^{h+m-1} \log P(r_l|v_l) = \sum_{j=0}^{N-1} \log P(r_j|v_j)$

Bit metrics: $M(r_j|v_j) = \log P(r_j|v_j)$

Branch metrics: $M(r_l|v_l) = \log P(r_l|v_l)$

Path metrics: $M(r|v) = \log P(r|v)$

Page 17: Weight enumerators


Partial path metrics

• Path metric for the first t branches of a path

• $M([r|v]_t) = \sum_{l=0}^{t-1} M(r_l|v_l) = \sum_{l=0}^{t-1} \log P(r_l|v_l) = \sum_{j=0}^{nt-1} \log P(r_j|v_j)$

Page 18: Weight enumerators


The Viterbi algorithm

• Recursive algorithm to grow the partial path metric of the best paths going through each state.

• Basic algorithm:

• Initialize t=1. The loop of the algorithm looks like this:

1. (Add, Compare, Select) Add: Compute the partial path metrics for each path entering each state at time t, based on the partial path metrics at time t − 1 and the branch metrics from time t − 1 to time t. Compare all such incoming paths, and Select the (information block associated with the) best one; record its path metric and a pointer to where it came from.

2. t:=t+1. If t < h+m, repeat from 1.

3. Backtracing: At time h+m, Trace back through the pointers to obtain the winning path.
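
A minimal hard-decision sketch of these steps, assuming the rate-1/3 example encoder G(D) = [1 + D, 1 + D^2, 1 + D + D^2], Hamming distance as the branch metric to be minimized (equivalent to maximizing the log-likelihood metric on a BSC), and a trellis terminated in the zero state; for brevity it stores full survivor paths instead of pointers:

```python
from itertools import product

def branch(state, u):
    """Next state and output block of the example encoder G(D) = [1+D, 1+D^2, 1+D+D^2]."""
    s1, s2 = state
    return (u, s1), (u ^ s1, u ^ s2, u ^ s1 ^ s2)

def viterbi(received):
    """received: list of 3-bit tuples; returns the decoded information bits (tail included)."""
    states = list(product((0, 1), repeat=2))
    metric = {s: 0 if s == (0, 0) else float('inf') for s in states}  # start in the zero state
    paths = {s: [] for s in states}
    for r in received:
        new_metric = {s: float('inf') for s in states}
        new_paths = {s: [] for s in states}
        for s in states:
            for u in (0, 1):
                nxt, out = branch(s, u)
                m = metric[s] + sum(a != b for a, b in zip(out, r))  # Add
                if m < new_metric[nxt]:                              # Compare / Select
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [u]                  # survivor for this state
        metric, paths = new_metric, new_paths
    return paths[(0, 0)]   # backtrace: survivor ending in the terminal (zero) state

# Usage: the codeword for u = (1, 0, 1) plus m = 2 tail zeros, with one received bit flipped
r = [(1, 1, 1), (1, 0, 1), (1, 0, 0), (1, 1, 1), (0, 1, 1)]
print(viterbi(r))   # [1, 0, 1, 0, 0]
```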

Page 19: Weight enumerators


Proof of ML decoding

• Theorem: The final survivor w in the Viterbi algorithm is an ML path, that is, $M(r|w) \geq M(r|v)$ for all $v \in C$.

• Proof:

• Consider any non-surviving codeword $v \in C$

• The paths v and w must merge in some state S at some time t

• Since v was not the final survivor, it must have been eliminated in state S at time t

• Thus $M([r|w]_t) \geq M([r|v]_t)$, and the best path from state S at time t to the terminal state at time h+m has a partial path metric not better than that of w

• Alternative proof by recursion

• The algorithm finds the best path to each state at time 1

• For t > 0, if the algorithm finds the best path to each state at time t, it also finds the best path to each state at time t+1

Page 20: Weight enumerators


Note on implementation (I)

• In hardware! Implementations of the Viterbi algorithm often use simple processors that either cannot process floating-point numbers, or where such processing is slow

• For a DMC the bit metrics can be represented by a finite size table

• The bit metric $M(r_j|v_j) = \log P(r_j|v_j)$ is usually a real number, but

• Since the algorithm only determines the path of maximum metric, the result is not affected by scaling or adding constants

• Thus $M(r_j|v_j) = \log P(r_j|v_j)$ can be replaced by $c_2 [\log P(r_j|v_j) + c_1]$

• Select the constants $c_1$ and $c_2$ such that all bit metric values are closely approximated by integers
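
A small sketch of such an integer metric table, using hypothetical DMC transition probabilities (the 2-input, 4-output example on the next slide would be handled the same way); c1 shifts the smallest metric to 0 and c2 scales before rounding:

```python
import math

# Hypothetical transition probabilities P(r | v) for inputs v in {0, 1} and outputs r in {0..3}
P = {0: [0.4, 0.3, 0.2, 0.1],
     1: [0.1, 0.2, 0.3, 0.4]}

c1 = -min(math.log10(p) for row in P.values() for p in row)  # shift: smallest metric becomes 0
c2 = 10                                                      # scale before rounding to integers

metric = {v: [round(c2 * (math.log10(p) + c1)) for p in row] for v, row in P.items()}
print(metric)   # {0: [6, 5, 3, 0], 1: [0, 3, 5, 6]}
```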

Page 21: Weight enumerators


Example 2-input 4-output DMC

Page 22: Weight enumerators


Example

Page 23: Weight enumerators


Suggested exercises

• 11.17-...

• 12.1-12.5