Channel Coding I
Solutions of the exercises
– SS 2015 –
Lecturer: Dirk Wübben, Carsten Bockelmann
Tutor: Matthias Woltering
NW1, Room N 2400, Tel.: 0421/218-62392
E-mail: {wuebben, bockelmann, woltering}@ant.uni-bremen.de
Universität Bremen, FB1
Institut für Telekommunikation und Hochfrequenztechnik
Arbeitsbereich Nachrichtentechnik
Prof. Dr.-Ing. A. Dekorsy
Postfach 33 04 40
D–28334 Bremen
WWW-Server: http://www.ant.uni-bremen.de
Version from July 28, 2015
Solutions
1 Introduction 3
Solution 1.1: Design of a discrete channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Solution 1.3: Binary symmetric channel (BSC) . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Solution 1.4: Serial concatenation of two BSC . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Solution 1.5: Transmission of coded data over BSCs . . . . . . . . . . . . . . . . . . . . . . 4
2 Information theory 5
Solution 2.1: Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Solution 2.2: Channel capacity of a discrete memoryless channel . . . . . . . . . . . . . . . 6
Solution 2.3: Channel capacity of a binary channel . . . . . . . . . . . . . . . . . . . . . . . . 7
Solution 2.4: Channel capacity of the AWGN . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3 Linear block codes 12
3.1 Finite field algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Solution 3.1: 2-out-of-5-code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Solution 3.2: Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Solution 3.3: Error correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2 Distance properties of block codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Solution 3.4: Sphere-packing bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.5 Matrix description of block codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Solution 3.5: Generator and parity check matrices . . . . . . . . . . . . . . . . . . . . . . . . . 18
Solution 3.6: Expansion, shortening, and puncturing of codes . . . . . . . . . . . . . . . . . . . 19
Solution 3.7: Coset decomposition and syndrome decoding . . . . . . . . . . . . . . . . . . . . 20
Solution 3.8: Coding program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.6 Cyclic codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Solution 3.9: Polynomial multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Solution 3.10: Polynomial division . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Solution 3.11: Generator polynomial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Solution 3.12: Syndrome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Solution 3.13: Primitive Polynomial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Solution 3.14: CRC codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4 Convolutional codes 27
4.1 Fundamental principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Solution 4.1: Convolutional code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.2 Characterization of convolutional codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Solution 4.2: Catastrophic codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.3 Optimal decoding with the Viterbi algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Solution 4.3: Viterbi decoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Solution 4.4: Viterbi decoding with puncturing . . . . . . . . . . . . . . . . . . . . . . . . 30
Solution 4.5: Coding and decoding of an RSC code . . . . . . . . . . . . . . . . . . . . . . 31
Solution 4.6: Investigations on Viterbi decoding . . . . . . . . . . . . . . . . . . . . . . . . 32
Solution 4.7: Simulation of a convolutional encoder and decoder . . . . . . . . . . . . . . . 34
1 Introduction
Solution of exercise 1.1 Design of a discrete channel
Item a)
For hard decision, the decision thresholds should lie midway between two neighboring symbols, i.e. at -2, 0 and
+2. The individual decision regions then correspond to the channel output values Yµ as shown in figure 3.
Fig. 3: Hard decision for the 4-ary input signal and the AWGN channel (decision regions Y0 … Y3 around the symbols X0 = -3, X1 = -1, X2 = +1, X3 = +3)
The transition probabilities P(Yµ|Xν) can now be calculated in the following way. For the transmitted symbol
X0 = -3, the Gaussian distribution pN(n) of the AWGN channel is shifted by X0. The probability of the symbol
X1 falsely being detected then results from the area under the shifted probability density function pN(n - X0) between -2 and 0.
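This area computation can be sketched in Python (the exercises themselves use MATLAB). The noise standard deviation sigma = 1 is an assumed example value, not given in the text above:

```python
from math import erf, sqrt, inf

def gauss_cdf(x, sigma):
    # CDF of a zero-mean Gaussian with standard deviation sigma
    return 0.5 * (1.0 + erf(x / (sigma * sqrt(2.0))))

def transition_probs(symbols, thresholds, sigma):
    # P(Y_mu | X_nu): area of the Gaussian shifted by X_nu over each decision region
    edges = [-inf] + thresholds + [inf]
    P = []
    for x in symbols:
        row = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            p_lo = 0.0 if lo == -inf else gauss_cdf(lo - x, sigma)
            p_hi = 1.0 if hi == inf else gauss_cdf(hi - x, sigma)
            row.append(p_hi - p_lo)
        P.append(row)
    return P

# 4-ary signal with thresholds at -2, 0, +2; sigma = 1 is an assumed value
P = transition_probs([-3, -1, 1, 3], [-2, 0, 2], sigma=1.0)
```

Each row of P sums to one; for example, P[0][1] is exactly the area of pN(n - X0) between -2 and 0 described above.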
Solution of exercise 1.3 Binary symmetric channel (BSC)
Item a)
The probability that exactly m errors occur at certain (fixed) positions when a sequence of length n is transmitted over a
BSC is given by (according to eq. (1.27))

P(m bits of n incorrect) = Pe^m · (1 - Pe)^(n-m) .

In this case exactly m = 2 errors occur and 5 bits are correct in a sequence of length n = 7. Accordingly, the
probability is

P(2 bits of 7 incorrect) = Pe^2 · (1 - Pe)^(7-2) = 0.01^2 · 0.99^5 = 9.5099 · 10^-5 ≈ 10^-4 .
Item b)
The probability of m errors occurring at arbitrary positions in a sequence of length n can be calculated by eq. (1.28):

Pf(m) = P(m errors in a sequence of length n) = C(n,m) · Pe^m · (1 - Pe)^(n-m) ,

where the binomial coefficient

C(n,m) = n! / (m! (n-m)!)

is the number of possibilities of choosing m elements out of n different elements without regard to order (combinations).
Item c)
Approach 1: The probability of more than 2 errors occurring results from the sum of the error probabilities
for 3, …, 31 errors:

Pf(m > 2) = Σ_{r=3}^{31} C(31,r) · Pe^r · (1 - Pe)^(31-r) ≈ C(31,3) · Pe^3 · (1 - Pe)^(31-3) ≈ 0.0034
Approach 2: A simple consideration simplifies the calculation. Because the probabilities of all error counts sum to one:

Pf(m > 2) = P(more than 2 errors) = 1 - Pf(m = 0) - Pf(m = 1) - Pf(m = 2) .

The individual error probabilities are

Pf(m = 0) = C(31,0) · 0.01^0 · 0.99^31 = 1 · 1 · 0.99^31 ≈ 0.7323
Pf(m = 1) = C(31,1) · 0.01^1 · 0.99^30 = 31 · 0.01 · 0.99^30 ≈ 0.2293
Pf(m = 2) = C(31,2) · 0.01^2 · 0.99^29 = 465 · 0.01^2 · 0.99^29 ≈ 0.0347

and for the wanted probability follows

Pf(m > 2) = 1 - 0.99^31 - 31 · 0.01 · 0.99^30 - 465 · 0.01^2 · 0.99^29 ≈ 0.0036 .
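Both approaches can be checked numerically; a small Python sketch (the exercises use MATLAB, so function names here are our own):

```python
from math import comb

def bsc_error_prob(n, m, pe):
    # probability of exactly m errors in a block of n bits over a BSC, eq. (1.28)
    return comb(n, m) * pe**m * (1 - pe)**(n - m)

# probability of more than 2 errors in 31 bits at Pe = 0.01 (approach 2)
p_more_than_2 = 1 - sum(bsc_error_prob(31, m, 0.01) for m in range(3))
```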
Solution of exercise 1.4 Serial concatenation of two BSC
Because the resulting channel is again symmetric, the consideration of one error case is sufficient for the determination
of the error probability. The probability of the transmitted symbol X0 being mapped onto the output symbol Y1 is:

P(Y1 | X0) = P(Y1 | Z0) · P(Z0 | X0) + P(Y1 | Z1) · P(Z1 | X0)
           = Pe,2 · (1 - Pe,1) + (1 - Pe,2) · Pe,1
           = Pe,1 + Pe,2 - 2 · Pe,1 · Pe,2

For the resulting BSC we obtain Pe = Pe,1 + Pe,2 - 2 · Pe,1 · Pe,2. Considering two example channels with
Pe,1 = 0.01 and Pe,2 = 0.02, it becomes obvious that the resulting BSC with Pe = 0.01 + 0.02 - 2 · 0.01 · 0.02 = 0.0296 ≈ 0.03 has a considerably greater error probability.
Fig. 4: Serial concatenation of two BSCs (channel 1: X → Z with crossover probability Pe,1; channel 2: Z → Y with crossover probability Pe,2)
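The effective crossover probability of the concatenation is a one-liner; a minimal Python sketch:

```python
def serial_bsc(pe1, pe2):
    # effective crossover probability of two BSCs in series:
    # an error at the output needs an odd number of flips across the two channels
    return pe1 + pe2 - 2 * pe1 * pe2

pe = serial_bsc(0.01, 0.02)  # the example from the text, -> 0.0296
```

Note that a completely random channel dominates: serial_bsc(0.5, p) = 0.5 for any p.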
Solution of exercise 1.5 Transmission of coded data over BSCs
2 Information theory
Solution of exercise 2.1 Entropy
Item a)
The partial information content H(Xν) contributed by a symbol Xν is according to eq. (2.2) determined by

H(Xν) = -P(Xν) · ld P(Xν) .                                          (1)

The maximum of H(Xν) results from setting the derivative to zero:

∂H(Xν)/∂P(Xν) != 0

With the relations ld x = ln x / ln 2 and d ln x / dx = 1/x follows

∂H(Xν)/∂P(Xν) = -∂/∂P(Xν) [ P(Xν) · ld P(Xν) ]
              = -ld P(Xν) - P(Xν)/ln 2 · ∂/∂P(Xν) ln P(Xν)
              = -ld P(Xν) - P(Xν)/ln 2 · 1/P(Xν)
              = -ld P(Xν) - 1/ln 2 != 0

For P(Xν) = Pmax then follows

Pmax = 2^(-1/ln 2) = 1/e ≈ 0.3679                                    (2)

and for the maximal partial entropy

H(Xν)max = -Pmax · ld Pmax = -0.3679 · ld 0.3679 = 0.531 .           (3)
You get to the same result with MATLAB by numerical calculation of the partial information, as shown in figure 5.
Item b)
According to the task, only the following five events can occur:
X1 X2 X3
0 0 0
0 0 1
0 1 1
1 0 1
1 1 1
1. H(X1) describes the entropy of the symbol X1 and is calculated as in eq. (2.3):

H(X1) = -Σ P(X1) · ld P(X1) = -(3/5) ld(3/5) - (2/5) ld(2/5) = 0.971

Because X1 isn't uniformly distributed, its entropy is not maximal and therefore less than one.

2. H(X2) = -Σ P(X2) · ld P(X2) = -(3/5) ld(3/5) - (2/5) ld(2/5) = 0.971

3. H(X3) = -Σ P(X3) · ld P(X3) = -(1/5) ld(1/5) - (4/5) ld(4/5) = 0.722

4. The joint entropy H(X1, X2) describes the common information content of the symbol pair X1X2:

H(X1, X2) = -Σ P(X1, X2) · ld P(X1, X2) = -(2/5) ld(2/5) - 3 · (1/5) ld(1/5) = 1.922
Fig. 5: Mean information content H(Xν) as a function of the symbol probability P(Xν) (maximum partial entropy at P(Xν) = 1/e)
5. H(X1, X2, X3) = 5 · ( -(1/5) ld(1/5) ) = 2.322

6. The conditional entropy H(X2|X1) describes the information content of X2 if X1 is already known:

H(X2|X1) = H(X1, X2) - H(X1) = 1.9219 - 0.971 = 0.9509

7. The conditional entropy H(X2|X1 = 0) describes the information content of X2 if X1 = 0 is already known:

H(X2|X1 = 0) = -(2/3) ld(2/3) - (1/3) ld(1/3) = 0.918

8. The conditional entropy H(X2|X1 = 1) describes the information content of X2 if X1 = 1 is already known:

H(X2|X1 = 1) = -(1/2) ld(1/2) - (1/2) ld(1/2) = 1

H(X2|X1 = 1) = 1 means that X1 = 1 provides no knowledge about X2, so the symbol has maximal entropy.

9. The conditional entropy H(X3|X1, X2) describes the information content of X3 if X1 and X2 are already known:

H(X3|X1, X2) = H(X1, X2, X3) - H(X1, X2) = 2.3219 - 1.9219 = 0.4
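The entropies above can be verified numerically from the five events; a small Python sketch (the lecture uses MATLAB for such checks):

```python
from math import log2
from collections import Counter

events = ["000", "001", "011", "101", "111"]  # the five equally probable events

def entropy(labels):
    # empirical entropy in bits of a list of (joint) symbol values
    counts = Counter(labels)
    total = len(labels)
    return -sum(c / total * log2(c / total) for c in counts.values())

H_x1 = entropy([e[0] for e in events])      # entropy of X1
H_x1x2 = entropy([e[:2] for e in events])   # joint entropy of (X1, X2)
H_x2_given_x1 = H_x1x2 - H_x1               # conditional entropy H(X2|X1)
```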
Solution of exercise 2.2 Channel capacity of a discrete memoryless channel

The channel capacity follows with the help of eq. (2.20):

C = Σν Σµ P(Yµ|Xν) · P(Xν) · ld [ P(Yµ|Xν) / P(Yµ) ]                 (4)

As the input alphabet is uniformly distributed with P(Xν) = 1/3 and the channel is symmetric, the output
alphabet is also uniformly distributed with P(Yµ) = 1/3. For the channel capacity then follows:

C = (1/3) Σν Σµ P(Yµ|Xν) · ld ( 3 · P(Yµ|Xν) )
  = (1/3) [ 3 · (1/2) ld(3/2) + 3 · (1/3) ld(3/3) + 3 · (1/6) ld(3/6) ]
  = (1/3) [ (3/2) ld(3/2) + ld 1 + (1/2) ld(1/2) ]
  = 0.126
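The same value can be obtained by evaluating eq. (4) directly. The transition values 1/2, 1/3, 1/6 are read off the calculation above; their arrangement into a circulant matrix is an assumed example consistent with the symmetric channel described:

```python
from math import log2

# transition matrix P(Y|X): each row a permutation of {1/2, 1/3, 1/6}
# (circulant arrangement assumed)
P = [[1/2, 1/3, 1/6],
     [1/6, 1/2, 1/3],
     [1/3, 1/6, 1/2]]
px = [1/3, 1/3, 1/3]  # uniform input distribution

# output distribution and mutual information, eq. (4)
py = [sum(P[v][u] * px[v] for v in range(3)) for u in range(3)]
C = sum(P[v][u] * px[v] * log2(P[v][u] / py[u])
        for v in range(3) for u in range(3))
```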
Solution of exercise 2.3 Channel capacity of a binary channel
Item a)
Since the channel is symmetric and the input symbols are equally distributed, the output symbols are also equally
distributed (P(Y0) = P(Y1) = 0.5).
For given source statistics, the channel capacity is according to eq. (2.21) described as

C = Σν Σµ P(Yµ|Xν) · P(Xν) · ld [ P(Yµ|Xν) / P(Yµ) ] .

The transition probabilities P(Yµ|Xν) for the BSC are:

P(Yµ|Xν) = 1 - Pe   for y = x
           Pe       for y ≠ x
So the channel capacity of the BSC is:

CBSC = Σν Σµ P(Yµ|Xν) · P(Xν) · ld [ P(Yµ|Xν) / P(Yµ) ]
     = P(Y0|X0) · P(X0) · ld [P(Y0|X0)/P(Y0)] + P(Y1|X0) · P(X0) · ld [P(Y1|X0)/P(Y1)]
       + P(Y0|X1) · P(X1) · ld [P(Y0|X1)/P(Y0)] + P(Y1|X1) · P(X1) · ld [P(Y1|X1)/P(Y1)]
     = (1 - Pe) · 0.5 · ld [(1 - Pe)/0.5] + Pe · 0.5 · ld [Pe/0.5]
       + Pe · 0.5 · ld [Pe/0.5] + (1 - Pe) · 0.5 · ld [(1 - Pe)/0.5]
     = ld [(1 - Pe)/0.5] · (1 - Pe) + ld [Pe/0.5] · Pe
     = ld [2 (1 - Pe)] · (1 - Pe) + ld (2Pe) · Pe
     = [ld 2 + ld (1 - Pe)] · (1 - Pe) + [ld 2 + ld Pe] · Pe
     = [1 + ld (1 - Pe)] · (1 - Pe) + [1 + ld Pe] · Pe
     = 1 + (1 - Pe) · ld (1 - Pe) + Pe · ld Pe
You get the same result with the help of eq. (2.14):

CBSC = H(X) + H(Y) - H(X,Y)
     = -Σν P(Xν) · ld P(Xν) - Σµ P(Yµ) · ld P(Yµ) + Σν Σµ P(Xν,Yµ) · ld P(Xν,Yµ)
     = 1 + (1 - Pe) · ld (1 - Pe) + Pe · ld Pe .                     (5)
For the extrema Pe = 0 and Pe = 1 the channel capacity takes the value 1 bit per channel use; consequently an error-free
transmission is possible without any channel coding. For Pe = 0.5, on the other hand, C = 0 and no
transmission is possible, because the output symbols occur purely at random.
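Eq. (5) is easy to evaluate; a minimal Python sketch of the BSC capacity:

```python
from math import log2

def c_bsc(pe):
    # capacity of the BSC per channel use: C = 1 + (1-Pe) ld(1-Pe) + Pe ld Pe
    if pe in (0.0, 1.0):
        return 1.0  # limit value: x ld x -> 0 for x -> 0
    return 1 + (1 - pe) * log2(1 - pe) + pe * log2(pe)
```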
Item b)
It is:

C = Σν Σµ P(Yµ|Xν) · P(Xν) · ld [ P(Yµ|Xν) / P(Yµ) ]

Because the channel is asymmetric, the occurrence probabilities of the received symbols have to be determined.
With

P(Yµ) = Σν P(Xν, Yµ) = Σν P(Yµ|Xν) · P(Xν)
Fig. 6: Channel capacity of the BSC for equally probable input symbols as a function of the error probability Pe
the probabilities P(Y0) and P(Y1) are given by:

P(Y0) = P(Y0|X0) · P(X0) + P(Y0|X1) · P(X1) = (1 - Pe,1) · P(X0) + Pe,2 · P(X1)
P(Y1) = P(Y1|X0) · P(X0) + P(Y1|X1) · P(X1) = Pe,1 · P(X0) + (1 - Pe,2) · P(X1) .

Accordingly, we can write in matrix notation:

( P(Y0) )   ( P(Y0|X0)  P(Y0|X1) )   ( P(X0) )   ( 1 - Pe,1    Pe,2   )   ( P(X0) )
( P(Y1) ) = ( P(Y1|X0)  P(Y1|X1) ) · ( P(X1) ) = (   Pe,1    1 - Pe,2 ) · ( P(X1) )
Figure 7 shows the channel capacity of a symmetric channel for several error probabilities Pe0 = Pe1 as a function
of the occurrence probability P(x0). The channel capacity decreases with increasing error probability Pe.
Fig. 7: Capacity of the binary channel for several Pe with Pe0 = Pe1 (0 … 0.30) as a function of the occurrence probability P(x0)
Figure 8a shows the channel capacity of a binary channel for a varying error probability Pe0 and a fixed error
probability Pe1 = 0.1 as a function of the occurrence probability P(x0). The channel capacity again decreases
with increasing error probability Pe, but it is no longer symmetric with respect to P(x0). This effect becomes
more pronounced in figure 8b with Pe1 = 0.3.
Fig. 8: Capacity of the binary channel for several Pe0 as a function of the occurrence probability P(x0) with a) Pe1 = 0.1 and b) Pe1 = 0.3
Item c)

Figure 9 shows the channel capacity of the binary symmetric channel for several input statistics P(x0) as a function
of the error probability Pe (cf. fig. 2.3).
Fig. 9: Capacity of the binary channel for several occurrence probabilities P(x0) (0.5, 0.3, 0.1) as a function of the error probability Pe
Solution of exercise 2.4 Channel capacity of the AWGN
According to eq. (2.26)

C = sup_{px(ξ)} [ H(y) - H(y|x) ]

the channel capacity is determined as the difference of differential entropies. By using the output distribution

py(ϑ) = 1/√(2πσy²) · e^(-ϑ²/(2σy²))

the differential entropy H(y) becomes

H(y) = -∫_{-∞}^{∞} py(ϑ) · ld py(ϑ) dϑ
     = -∫_{-∞}^{∞} 1/√(2πσy²) · e^(-ϑ²/(2σy²)) · ld [ 1/√(2πσy²) · e^(-ϑ²/(2σy²)) ] dϑ .   (6)

With application of the relation ld(x) = ld(e) · ln(x), the term ld(·) in (6) can be simplified:

ld [ 1/√(2πσy²) · e^(-ϑ²/(2σy²)) ] = ld 1/√(2πσy²) + ld ( e^(-ϑ²/(2σy²)) )
                                   = ld 1/√(2πσy²) + ld e · ln ( e^(-ϑ²/(2σy²)) )
                                   = ld 1/√(2πσy²) + ld e · ( -ϑ²/(2σy²) )

and for the differential entropy follows:

H(y) = -∫_{-∞}^{∞} 1/√(2πσy²) · e^(-ϑ²/(2σy²)) · [ ld 1/√(2πσy²) + ld e · ( -ϑ²/(2σy²) ) ] dϑ
     = -ld 1/√(2πσy²) · ∫_{-∞}^{∞} py(ϑ) dϑ - ld e · ( -1/(2σy²) ) · ∫_{-∞}^{∞} ϑ² · py(ϑ) dϑ

where the first integral equals 1 (normalization) and the second equals σy² (variance), so

H(y) = -ld 1/√(2πσy²) + ld e · σy²/(2σy²)
     = (1/2) ld (2πσy²) + (1/2) ld e
     = (1/2) ld (2πeσy²) .

For the irrelevance H(y|x) follows analogously:

H(y|x) = H(n) = (1/2) ld (2πeσn²) ,
such that the channel capacity results as the difference of the output entropy and the irrelevance:

C = H(y) - H(y|x) = (1/2) ld (2πeσy²) - (1/2) ld (2πeσn²)
  = (1/2) ld ( σy² / σn² )
  = (1/2) ld [ (Es + N0/2) / (N0/2) ]
  = (1/2) ld ( 1 + 2Es/N0 )
3 Linear block codes
3.1 Finite field algebra
Solution of exercise 3.1 2-out-of-5-code
Item a)
The field of code words consists of exactly C(5,2) = 10 elements. They are listed in the following table.
index code words index code words
1 1 1 0 0 0 6 0 1 0 1 0
2 1 0 1 0 0 7 0 1 0 0 1
3 1 0 0 1 0 8 0 0 1 1 0
4 1 0 0 0 1 9 0 0 1 0 1
5 0 1 1 0 0 10 0 0 0 1 1
The code is not linear: for example, the modulo-2 addition of the code words (1 1 0 0 0) and (0 0 0 1 1) results in the word (1 1 0 1 1), which is not an element of the field of code words. The closure property
is therefore violated.
Item b)
Because of the non-linearity of the code, the zero word cannot be used as reference for the determination of the
distance properties (in this case it isn't an element of the field of code words anyway). Instead, the distances of
all pairs of code words have to be determined. This exercise can quickly be finished with a small
MATLAB routine. We get the following table.
index 1 2 3 4 5 6 7 8 9 10
1 0 2 2 2 2 2 2 4 4 4
2 2 0 2 2 2 4 4 2 2 4
3 2 2 0 2 4 2 4 2 4 2
4 2 2 2 0 4 4 2 4 2 2
5 2 2 4 4 0 2 2 2 2 4
6 2 4 2 4 2 0 2 2 4 2
7 2 4 4 2 2 2 0 4 2 2
8 4 2 2 4 2 2 4 0 2 2
9 4 2 4 2 2 4 2 2 0 2
10 4 4 2 2 4 2 2 2 2 0
You can see that each code word a has altogether 6 neighbors at distance dH(a,b) = 2 and 3 neighbors
at distance dH(a,b) = 4. Because of this regular structure, the error probabilities in the following items can be
determined with little effort.
Item c)
The probability of the occurrence of an undetectable error can be calculated with eq. (3.14). With the assumption
that all code words are equally probable, and because of the identical distance properties of all code words, it is
sufficient to calculate Pue for any single code word. The probability is

Pue = 6 · Pe² · (1 - Pe)³ + 3 · Pe⁴ · (1 - Pe)

and is shown in figure 10a as a function of Pe (double logarithmic representation).
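The expression follows directly from the distance table (6 neighbors at distance 2, 3 at distance 4, block length n = 5); a minimal Python sketch:

```python
def p_undetected(pe):
    # undetected-error probability of the 2-out-of-5 code on a BSC:
    # an error is undetected iff it maps one code word onto another,
    # i.e. flips exactly the 2 or 4 positions towards a neighbor
    return 6 * pe**2 * (1 - pe)**3 + 3 * pe**4 * (1 - pe)
```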
Item d)

As ML decoding determines the code word with the greatest similarity to the received word, a simple direct
implementation consists in correlating all code words with the received word; the wanted code word has
the greatest correlation. Of course, such a procedure can only be applied for small code word alphabets, as in this
case with M = 10 code words.
The obtained simulation results are shown in figure 10a together with the error probabilities determined in c).
For the determination of the error rates, N = 10^5 code words were transmitted. You can directly see that the
analytically calculated values correspond very well to the simulation results. Deviations only occur for very small
values of Pue because of the finite number of simulated words. Furthermore, the probability of undetected
errors is smaller than the error rate of the BSC over the whole represented range.
Fig. 10: Error probabilities of the 2-out-of-5 code: a) probability of undetected errors Pue over Pe for the BSC (analytical and simulated), b) word error probability Pw over Es/N0 in dB for the AWGN channel (analytical, simulated, and uncoded)
Item e)
For the transmission over an AWGN channel, eq. (3.27) can be used to estimate the error rate. For the
distance spectrum of the considered 2-out-of-5 code we get

Pw ≤ 0.5 · [ 6 · erfc(√(2Es/N0)) + 3 · erfc(√(4Es/N0)) ] .
The result is an upper bound of the actual error rate and is shown in figure 10b (interpretation in item f).
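The bound is straightforward to evaluate; a small Python sketch (function name is our own, Es/N0 in dB):

```python
from math import erfc, sqrt

def union_bound_2of5(es_n0_db):
    # union bound on the word error rate of the 2-out-of-5 code, eq. (3.27):
    # 6 neighbors at distance 2 and 3 neighbors at distance 4
    es_n0 = 10 ** (es_n0_db / 10)
    return 0.5 * (6 * erfc(sqrt(2 * es_n0)) + 3 * erfc(sqrt(4 * es_n0)))
```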
Item f)

For the simulations, N = 10^5 code words were again transmitted; figure 10b shows the result. It becomes clear that
only for error rates below about Pw < 10^-2 is there a good correspondence between the estimate by the
union bound and the simulation results. For small signal-to-noise ratios, the union bound yields inexact results;
it does not converge to the actual error rate.

It is further striking that the error rate after decoding is smaller than before decoding (uncoded case) only from a
signal-to-noise ratio of about Es/N0 > 2 dB on. Consider that the curves are plotted against Es/N0. For very bad
transmission conditions, the decoding thus causes more errors than we would get in the uncoded case. We will see
that this is a frequently observed effect, which nevertheless isn't true for all codes.
Solution of exercise 3.2 Field
In the following, the addition and multiplication tables for q = 2, 3, 4, 5, 6 are given.
q = 2 :+ | 0 1
0 | 0 1
1 | 1 0
∗ | 0 1
0 | 0 0
1 | 0 1
q = 3 :+ | 0 1 2
0 | 0 1 2
1 | 1 2 0
2 | 2 0 1
∗ | 0 1 2
0 | 0 0 0
1 | 0 1 2
2 | 0 2 1
q = 4 :+ | 0 1 2 3
0 | 0 1 2 3
1 | 1 2 3 0
2 | 2 3 0 1
3 | 3 0 1 2
∗ | 0 1 2 3
0 | 0 0 0 0
1 | 0 1 2 3
2 | 0 2 0 2
3 | 0 3 2 1
q = 5 :
+ | 0 1 2 3 4
0 | 0 1 2 3 4
1 | 1 2 3 4 0
2 | 2 3 4 0 1
3 | 3 4 0 1 2
4 | 4 0 1 2 3
∗ | 0 1 2 3 4
0 | 0 0 0 0 0
1 | 0 1 2 3 4
2 | 0 2 4 1 3
3 | 0 3 1 4 2
4 | 0 4 3 2 1
q = 6 :
+ | 0 1 2 3 4 5
0 | 0 1 2 3 4 5
1 | 1 2 3 4 5 0
2 | 2 3 4 5 0 1
3 | 3 4 5 0 1 2
4 | 4 5 0 1 2 3
5 | 5 0 1 2 3 4
∗ | 0 1 2 3 4 5
0 | 0 0 0 0 0 0
1 | 0 1 2 3 4 5
2 | 0 2 4 0 2 4
3 | 0 3 0 3 0 3
4 | 0 4 2 0 2 4
5 | 0 5 4 3 2 1
For q = 4 and q = 6, the multiplicative inverse does not always exist. As explained in ch. 3.2.1,
the Galois fields GF(q) only exist for q = p^m, where p is a prime number and m is a natural number. As q = 6
cannot be represented as the power of a prime number, a Galois field with q = 6 elements doesn't exist. For
q = 4 = 2² a Galois field does exist, but it isn't a prime field and cannot be constructed with modulo-4 arithmetic.
In ch. 3.2.2, the definition of the Galois fields was therefore extended to the so-called extension fields.
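The tables above are plain modulo-q arithmetic, so a short Python sketch can reproduce them and check the field property; note that this test only succeeds for prime q, since GF(4) needs polynomial arithmetic instead, as just explained:

```python
def mod_tables(q):
    # addition and multiplication tables of integer arithmetic modulo q
    add = [[(a + b) % q for b in range(q)] for a in range(q)]
    mul = [[(a * b) % q for b in range(q)] for a in range(q)]
    return add, mul

def is_field(q):
    # Z_q is a field iff every nonzero element has a multiplicative inverse,
    # i.e. a 1 appears in every nonzero row of the multiplication table
    _, mul = mod_tables(q)
    return all(1 in mul[a] for a in range(1, q))
```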
Solution of exercise 3.3 Error correction
Item a)
The maximal number of correctable errors is determined with eq. (3.3):

t = ⌊(dmin - 1)/2⌋ = ⌊(8 - 1)/2⌋ = 3 .

With eq. (3.4), the number of detectable errors for pure error detection results as:

t′ = dmin - 1 = 8 - 1 = 7 .
Item b)
For simultaneous correction of t errors and detection of t′ > t errors, eq. (3.5) must be fulfilled. With t = 2 and
t′ = 5 follows for the minimum distance:

dmin ≥ t + t′ + 1 = 2 + 5 + 1 = 8 .
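The three relations (3.3)-(3.5) can be written down directly; a minimal Python sketch:

```python
def correctable(dmin):
    # eq. (3.3): number of correctable errors
    return (dmin - 1) // 2

def detectable(dmin):
    # eq. (3.4): number of detectable errors (pure error detection)
    return dmin - 1

def dmin_for(t, t_prime):
    # eq. (3.5): required minimum distance for simultaneously
    # correcting t errors and detecting t' > t errors
    return t + t_prime + 1
```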
Fig. 11: Field of code words for dmin = 8 (distance axis 0 … 8, correction radius t = 3)

Accordingly, the field of code words shown in figure 11 results, with Hamming distance dmin = 8.
Item c)
At the transmission, the following cases for the change of a code word have to be distinguished according to
figure 12:

1. error-free transmission
2. correctable error pattern ⇒ correct correction
3. detectable error pattern (invalid word) ⇒ error detected
4. seemingly correctable error pattern ⇒ false correction
5. received word = other code word ⇒ neither error detection nor error correction
Fig. 12: Field of code words for dmin = 8 with the five cases 1-5
3.2 Distance properties of block codes
Solution of exercise 3.4 Sphere-packing bound
Item a)
For an (n, 2)2 block code, the information word u of length k = 2 is mapped onto a code word x of length n,
where each digit can take q = 2 different values. With eq. (3.3) follows for the maximal number of correctable
errors with dmin = 5:

t = ⌊(dmin - 1)/2⌋ = ⌊(5 - 1)/2⌋ = 2 .
Thus, up to t = 2 errors can be corrected with this code. For an (n, k, dmin)q code that shall
correct t errors, the sphere-packing bound eq. (3.7) must be fulfilled:

q^(n-k) ≥ Σ_{r=0}^{t} C(n,r) · (q - 1)^r

For determining the minimal block length n, the code parameters k = 2, t = 2, q = 2 are inserted into the sphere-packing
bound:

2^(n-2) ≥ Σ_{r=0}^{2} C(n,r) · (2 - 1)^r = C(n,0) + C(n,1) + C(n,2) = 1 + n + C(n,2)
For n = 6 results:

2^(6-2) ≥? 1 + 6 + C(6,2)
16 < 1 + 6 + 15 = 22 ,

so that the sphere-packing bound is not fulfilled. For n = 7 results

2^(7-2) ≥? 1 + 7 + C(7,2)
32 > 1 + 7 + 21 = 29 ,

so that the sphere-packing bound is fulfilled. The minimal block length therefore is n = 7.
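The search over n can be automated; a small Python sketch of the sphere-packing (Hamming) bound, eq. (3.7):

```python
from math import comb

def sphere_packing_ok(n, k, t, q=2):
    # sphere-packing bound: number of syndromes must cover all
    # error patterns with up to t errors
    return q ** (n - k) >= sum(comb(n, r) * (q - 1) ** r for r in range(t + 1))

# minimal block length for k = 2, t = 2, q = 2 (starting at n = k + 1)
n = 3
while not sphere_packing_ok(n, 2, 2):
    n += 1
```

The same function reproduces the checks of the following items (15, 7) with t = 2 and t = 3, and the Golay parameters (23, 12) with t = 3.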
Item b)

The solution proceeds analogously to item a). The maximal number of correctable errors with dmin = 5 is
determined with eq. (3.3):

t = ⌊(dmin - 1)/2⌋ = ⌊(5 - 1)/2⌋ = 2 .

The existence of the code is then checked with the help of the sphere-packing bound eq. (3.7) with n = 15, k = 7,
t = 2 and q = 2:

q^(n-k) ≥? Σ_{r=0}^{t} C(n,r) · (q - 1)^r                            (7)
2^(15-7) ≥? Σ_{r=0}^{2} C(15,r) · (2 - 1)^r                          (8)
2^8 ≥? C(15,0) + C(15,1) + C(15,2)                                   (9)
256 > 1 + 15 + 105 = 121 .                                           (10)

Thus the sphere-packing bound does not exclude the existence of a (15, 7, 5)2 code; note, however, that it gives no
indication of how to construct it.

• left side: q^(n-k) syndromes, i.e. the number of distinguishable cosets
• right side: number of error patterns with up to t errors

Statement: To be able to correct t errors, it must be possible to assign each error pattern its own syndrome. The
difference of both sides gives the number of unused syndromes. If the difference is 0, it is a perfect code,
because the decoding spheres fill the whole space.
Item c)

For checking the existence of a (15, 7, 7)2 code, first the number of correctable errors is determined:

t = ⌊(dmin - 1)/2⌋ = ⌊(7 - 1)/2⌋ = 3

and then the validity of the sphere-packing bound is checked:

q^(n-k) ≥? Σ_{r=0}^{t} C(n,r) · (q - 1)^r                            (11)
2^(15-7) ≥? Σ_{r=0}^{3} C(15,r) · (2 - 1)^r                          (12)
2^8 ≥? C(15,0) + C(15,1) + C(15,2) + C(15,3)                         (13)
256 < 1 + 15 + 105 + 455 = 576 .                                     (14)

As the sphere-packing bound is not fulfilled, this code cannot exist!
Item d)

For checking the existence of a (23, 12, 7)2 code, first the number of correctable errors is determined:

t = ⌊(dmin - 1)/2⌋ = ⌊(7 - 1)/2⌋ = 3

and then the validity of the sphere-packing bound is checked:

q^(n-k) ≥? Σ_{r=0}^{t} C(n,r) · (q - 1)^r                            (15)
2^(23-12) ≥? Σ_{r=0}^{3} C(23,r) · (2 - 1)^r                         (16)
2^11 ≥? C(23,0) + C(23,1) + C(23,2) + C(23,3)                        (17)
2048 = 1 + 23 + 253 + 1771 = 2048 .                                  (18)

As the sphere-packing bound is fulfilled with equality, such a code does exist and it is even a perfect code: the
(23, 12, 7)2 Golay code.
Item e)

An input alphabet with 16 different symbols corresponds to an input word u of length k = 4. With q = 2 and
t = 4 correctable errors, the sphere-packing bound reads:

q^(n-k) ≥? Σ_{r=0}^{t} C(n,r) · (q - 1)^r                            (19)
2^(n-4) ≥? Σ_{r=0}^{4} C(n,r) · (2 - 1)^r                            (20)
2^(n-4) ≥? C(n,0) + C(n,1) + C(n,2) + C(n,3) + C(n,4)                (21)

For n = 14 results:

2^(14-4) ≥? C(14,0) + C(14,1) + C(14,2) + C(14,3) + C(14,4)
1024 < 1 + 14 + 91 + 364 + 1001 = 1471 ,
so that the sphere-packing bound is not fulfilled. For n = 15 results

2^(15-4) ≥? C(15,0) + C(15,1) + C(15,2) + C(15,3) + C(15,4)
2048 > 1 + 15 + 105 + 455 + 1365 = 1941 ,

so that the sphere-packing bound is fulfilled. The minimal block length therefore is n = 15, and for the code rate
Rc follows:

Rc = k/n = 4/15 = 0.2667
3.5 Matrix description of block codes
Solution of exercise 3.5 Generator and parity check matrices
Item a)

Generator and parity check matrix of the (4,1,dmin = 4) repetition code in systematic form (see ch. 3.5.7) are

G = (1 | 1 1 1)

H =
1 1 0 0
1 0 1 0
1 0 0 1 .

For the dual code, the parity check matrix is now used as generator matrix. Considering H shows that a
simple (4,3,dmin = 2) single-parity-check code results, which puts a check sum in front of the three information bits.
Furthermore, the product G · H^T = 0 illustrates the orthogonality of both codes.
Item b)

The parity check matrix of a Hamming code of degree r = 3 can easily be produced by writing the binary
representations of the numbers 1 to 7 into the columns:

H =
1 0 1 0 1 0 1
0 1 1 0 0 1 1
0 0 0 1 1 1 1 .
Unfortunately, the generator matrix cannot be produced directly from this non-systematic form with eq. (3.31). In
order to obtain the generator matrix, we construct an equivalent systematic code with parity check matrix Hs by
exchanging columns of H and calculate the corresponding systematic generator matrix Gs:
Hs =
1 1 0 1 1 0 0
1 0 1 1 0 1 0
0 1 1 1 0 0 1
=⇒ Gs =
1 0 0 0 1 1 0
0 1 0 0 1 0 1
0 0 1 0 0 1 1
0 0 0 1 1 1 1
.
To finally get the generator matrix G that fits H, the exchange of columns has to be undone. The generator
matrix of the original code has the form
G =
1 1 1 0 0 0 0
1 0 0 1 1 0 0
0 1 0 1 0 1 0
1 1 0 1 0 0 1
.
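The orthogonality G · H^T = 0 over GF(2) is easy to verify; a small Python sketch with the matrices from above:

```python
# generator matrix G and parity check matrix H of the (7,4,3) Hamming code
G = [[1, 1, 1, 0, 0, 0, 0],
     [1, 0, 0, 1, 1, 0, 0],
     [0, 1, 0, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

# every row of G must be orthogonal to every row of H over GF(2)
ok = all(sum(g[i] * h[i] for i in range(7)) % 2 == 0 for g in G for h in H)
```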
Item c)

Considering H shows that no two columns are linearly dependent (identical). Nevertheless, there
are combinations of three columns that are linearly dependent (for instance the first three columns). This is
exactly the minimal distance of the code. In general, the minimum distance of a code is
identical to the minimal number of linearly dependent columns of its parity check matrix: every selection of
dmin - 1 columns is linearly independent, and there exists at least one selection of dmin linearly dependent columns.
Solution of exercise 3.6 Expansion, shortening, and puncturing of codes
Item a)

An additional check bit x7 implies for the parity check matrix HS of the original code that the number of columns
(length of the code words) and the number of rows (number of check bits) each increase by one. If the expanded code shall
have the minimum distance dmin = 4, then according to exercise 3.5c every 3 columns of its parity check matrix HE
must be linearly independent. The easiest expansion consists of first adding a column of zeros, whereby the
decoding of the original code isn't affected. Then a row of ones is added, which forms an additional parity check
sum over all n = 7 bits of the original code. Therewith, each code word of the expanded code has an even weight,
i.e. the odd minimum distance dmin = 3 is increased to dE,min = 4. The new parity check matrix is
HE =
( HS 0 )
( 1  1 )
=
1 1 0 1 1 0 0 0
1 0 1 1 0 1 0 0
0 1 1 1 0 0 1 0
1 1 1 1 1 1 1 1 .
We get the corresponding generator matrix GE by adding an additional column g+ for the added check bit x7. As
x7 = u · g+ represents the check sum over all xi with 0 ≤ i < 7, the condition for g+ is

x7 = u · g+ != Σ_{i=0}^{6} xi mod 2 = u · GS · (1 … 1)^T   =⇒   g+ = GS · (1 … 1)^T .
The column g+ can thus be calculated as the modulo-2 sum of all columns of GS. The generator matrix of the
expanded code has the form
GE = (GS | g+) =
1 0 0 0 1 1 0 1
0 1 0 0 1 0 1 1
0 0 1 0 0 1 1 1
0 0 0 1 1 1 1 0
.
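The construction of GE can be sketched in a few lines of Python; by linearity, checking even weight on the rows of GE suffices for all codewords:

```python
# systematic generator matrix GS of the (7,4,3) Hamming code from exercise 3.5
GS = [[1, 0, 0, 0, 1, 1, 0],
      [0, 1, 0, 0, 1, 0, 1],
      [0, 0, 1, 0, 0, 1, 1],
      [0, 0, 0, 1, 1, 1, 1]]

# g+ = GS * (1 ... 1)^T mod 2: modulo-2 sum of the columns of GS
g_plus = [sum(row) % 2 for row in GS]
GE = [row + [p] for row, p in zip(GS, g_plus)]

# rows of GE have even weight; by linearity so does every codeword
even = all(sum(row) % 2 == 0 for row in GE)
```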
Item b)

Halving the field of code words is achieved by cancelling one information bit, which results in a half-rate
(6,3) code. Using the systematic coder from exercise 3.5, any of the four information bits of the original
code can be cancelled; in each case the minimum distance remains dmin = 3. The generator matrix results by
cancelling the ith row and the ith column of Gs, 1 ≤ i ≤ k; in the parity check matrix, accordingly, the ith column of Hs.
Both matrices are, for the example i = k = 4,
GK =
1 0 0 1 1 0
0 1 0 1 0 1
0 0 1 0 1 1
; HK =
1 1 0 1 0 0
1 0 1 0 1 0
0 1 1 0 0 1
.
Item c)

The punctured generator and parity check matrices are obtained by cancelling the ith column (1 ≤ i ≤ n); here
the last column (i = 7) was removed. The minimum distance is reduced to dmin = 2. The difference to item b)
consists in the fact that still 16 code words exist.
GP =
1 0 0 0 1 1
0 1 0 0 1 0
0 0 1 0 0 1
0 0 0 1 1 1
; HP =
(1 1 0 1 1 0
1 0 1 1 0 1
)
.
Solution of exercise 3.7 Coset decomposition and syndrome decoding
Item a)

The (7,4,3) Hamming code has n - k = 3 check bits, so that overall 2^(n-k) = 8 syndromes (including s = 0) can
be formed. Furthermore, a single error can always be corrected due to dmin = 3. Because of the word length
n = 7 there are exactly 7 different single-error patterns. Thus, the number of nonzero syndromes corresponds to
the number of correctable error patterns, and therefore the Hamming code is a perfect code.
Item b)
For the Hamming code, the columns of H represent all syndromes s ≠ 0. Because of t = 1, the error words e with Hamming weight wH(e) = 1 form the coset leaders; only the assignment remains to be determined. The following table results:
syndrome sμ | coset leader eμ
   1 1 0    |  1 0 0 0 0 0 0
   1 0 1    |  0 1 0 0 0 0 0
   0 1 1    |  0 0 1 0 0 0 0
   1 1 1    |  0 0 0 1 0 0 0
   1 0 0    |  0 0 0 0 1 0 0
   0 1 0    |  0 0 0 0 0 1 0
   0 0 1    |  0 0 0 0 0 0 1
Hint: Let eμ denote the μth row of the n × n identity matrix, i.e. eμ contains a one at the μth position and zeros elsewhere. Then s = eμ · H^T corresponds to the μth row of H^T.
Item c)
The syndrome for the received word is
s = y ·HT = (1 0 1) .
So, an error has occurred. The coset leader belonging to this syndrome is found in the second row of the table from item b). The correction is realized by adding e2 = (0 1 0 0 0 0 0) to y:

x = y + e2 = (1 0 0 1 0 0 1)

The estimated information word thus is u = (1 0 0 1).
Item d)
To be able to use the syndrome s directly for addressing the coset leader, the position of the one within the coset leader has to correspond to the decimal representation of s (see the hint in item b)). The parity check matrix is
H = ( 1 0 1 0 1 0 1 )
    ( 0 1 1 0 0 1 1 )
    ( 0 0 0 1 1 1 1 ) .
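With this choice of H, the decoder can use the syndrome value directly as the error address, so no separate syndrome table is needed. A minimal Python sketch (the function names are illustrative, not part of the exercise):

```python
# parity check matrix from item d): column i is the binary representation of i
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome(y):
    """s = y * H^T over GF(2)."""
    return [sum(y[j] * H[i][j] for j in range(7)) % 2 for i in range(3)]

def correct(y):
    """Single-error correction: the syndrome, read as a binary number with the
    first component as least significant bit, addresses the error position."""
    s = syndrome(y)
    pos = s[0] + 2 * s[1] + 4 * s[2]   # decimal representation of s
    y = list(y)
    if pos > 0:                        # pos = 0: no error detected
        y[pos - 1] ^= 1                # flip the bit at the addressed position
    return y
```

For every single-error pattern, `correct` restores the transmitted code word.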
Solution of exercise 3.8 Coding program
Item a)
The (7,4,3)-Hamming code maps k = 4 information symbols onto n = 7 code symbols. Within the MATLAB program, k information symbols are randomly chosen using randint and encoded with the generator matrix. A randomly determined error vector is added and the syndrome is calculated. If the syndrome is nonzero, the corresponding coset leader is assigned. Note that all calculations have to be executed in GF(2).
Item b)
See the MATLAB-program.
3.6 Cyclic codes
Solution of exercise 3.9 Polynomial multiplication
Item a)
Multiplication of f(D) and g(D):
(D^3 + D + 1) · (D + 1) = D^4 + D^2 + D + D^3 + D + 1
                        = D^4 + D^3 + D^2 + 1
Item b)
Fig. 13: Block diagram of a non-recursive system for the sequential multiplication of two polynomials
Item c)
With in −→ memory and out = in ⊕ memory, the corresponding function table is obtained.
clock in memory out
0 0 0 0
1 1 0 1
2 0 1 1
3 1 0 1
4 1 1 0
5 0 1 1
The output (1, 1, 1, 0, 1) corresponds to the solution D^4 + D^3 + D^2 + 1 obtained in item a).
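The sequential multiplication of items b) and c) can be sketched in a few lines; the register handling mirrors the block diagram of Fig. 13 (the function name is illustrative):

```python
def shift_register_multiply(f, g):
    """Sequential GF(2) polynomial multiplication with a shift register.
    f, g are coefficient lists, highest power first; f is fed in MSB-first."""
    memory = [0] * (len(g) - 1)          # one delay element per tap g1, g2, ...
    out = []
    for c in f + [0] * (len(g) - 1):     # append zeros to flush the register
        taps = sum(gi * m for gi, m in zip(g[1:], memory))
        out.append((g[0] * c + taps) % 2)
        memory = [c] + memory[:-1]       # shift the register
    return out

# f(D) = D^3 + D + 1 times g(D) = D + 1
product = shift_register_multiply([1, 0, 1, 1], [1, 1])
# coefficients of D^4 + D^3 + D^2 + 1, highest power first
```

The returned sequence (1, 1, 1, 0, 1) matches the function table above.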
Solution of exercise 3.10 Polynomial division
Item a)
Division of f(D) by g(D).
D^3       + D + 1 : D^2 + D + 1 = D + 1
D^3 + D^2 + D
      D^2     + 1
      D^2 + D + 1
            D

⇒ remainder R_g(D)[f(D)] = D
Simple calculation scheme:
1 0 1 1 : 1 1 1 = 1 1
1 1 1
1 0 1
1 1 1
1 0
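The long division above can be reproduced with a short GF(2) division routine; an illustrative Python sketch (assuming deg g ≥ 1 and deg f ≥ deg g, coefficients highest power first):

```python
def gf2_divmod(f, g):
    """GF(2) polynomial long division; coefficient lists, highest power first.
    Returns (quotient, remainder) with deg(remainder) < deg(g)."""
    r = list(f)
    q = [0] * (len(f) - len(g) + 1)
    for i in range(len(q)):
        if r[i]:                       # leading coefficient 1: subtract shifted g
            q[i] = 1
            for j, gj in enumerate(g):
                r[i + j] ^= gj         # subtraction = XOR over GF(2)
    return q, r[len(q):]               # trailing coefficients form the remainder

# f(D) = D^3 + D + 1 divided by g(D) = D^2 + D + 1
quotient, remainder = gf2_divmod([1, 0, 1, 1], [1, 1, 1])
# quotient D + 1, remainder D
```

This reproduces the quotient D + 1 and the remainder D from item a).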
Item b)
Fig. 14: Block diagram of a recursive system for the sequential division of two polynomials
Item c)
By defining A = Out ⊕ In −→ r0 and B = Out ⊕ r0 −→ r1 (the arrows denote the values shifted into the registers), the function table is easily generated:
clock In A r0 B r1 Out
0 0 0 0 0 0 0
1 1 1 0 0 0 0
2 0 0 1 1 0 0
3 1 0 0 1 1 1
4 1 0 0 1 1 1
5 0 - 0 - 1 1
Solution of exercise 3.11 Generator polynomial
Item a)
If g(D) is to be a generator polynomial, R_g(D)[D^n − 1] = 0 has to be fulfilled according to ch. 3.6.2. With n = 15 and g(D) = D^8 + D^7 + D^6 + D^4 + 1 follows

(D^15 − 1) : (D^8 + D^7 + D^6 + D^4 + 1) = D^7 + D^6 + D^4 + 1 = h(D)   with remainder 0,

so that the condition is fulfilled.
Item b)
With eq. (3.56), the code word x can be split into two parts for systematic coding:

x(D) = p(D) + D^(n−k) · u(D)

with the parity check polynomial

p(D) = R_g(D)[−D^(n−k) · u(D)] .

With the code parameters n = 15, k = 7, n − k = 8 follows D^(n−k) · u(D) = D^8 · (D^4 + D + 1) = D^12 + D^9 + D^8. For the parity check polynomial this yields

(D^12 + D^9 + D^8) : (D^8 + D^7 + D^6 + D^4 + 1) = D^4 + D^3   with remainder p(D) = D^7 + D^4 + D^3
and thereby for the code word

x(D) = D^(n−k) · u(D) + p(D)
     = (D^12 + D^9 + D^8) + (D^7 + D^4 + D^3)
     = 001 0011 1001 1000
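The systematic encoding can be checked numerically; a Python sketch with illustrative helper names (coefficients highest power first):

```python
def gf2_mod(f, g):
    """Remainder of the GF(2) division f(D) : g(D), highest power first."""
    r = list(f)
    for i in range(len(f) - len(g) + 1):
        if r[i]:                       # leading coefficient 1: subtract shifted g
            for j, gj in enumerate(g):
                r[i + j] ^= gj         # XOR = subtraction over GF(2)
    return r[len(f) - len(g) + 1:]     # deg(g) - 1 remainder coefficients

# u(D) = D^4 + D + 1 and g(D) = D^8 + D^7 + D^6 + D^4 + 1, n - k = 8
u = [1, 0, 0, 1, 1]
g = [1, 1, 1, 0, 1, 0, 0, 0, 1]
shifted = u + [0] * 8                  # coefficients of D^8 * u(D)
p = gf2_mod(shifted, g)                # parity polynomial p(D)
# code word x(D) = D^8 * u(D) + p(D); the low 8 positions of `shifted`
# are zero, so adding p(D) simply fills them
x = shifted[:len(shifted) - len(p)] + p
```

The result reproduces p(D) = D^7 + D^4 + D^3 and the code word bits above.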
Item c)
For the polynomial y(D) = D^14 + D^5 + D + 1 to be a code word, the syndrome has to fulfill the test equation eq. (3.61): s(D) = R_g(D)[y(D)] = 0.

(D^14 + D^5 + D + 1) : (D^8 + D^7 + D^6 + D^4 + 1) = D^6 + D^5 + D^3   with remainder s(D) = D^7 + D^6 + D^3 + D + 1

Thus s(D) ≠ 0, and y(D) is not a valid code word.
Solution of exercise 3.12 Syndrome
Item a)
First, all coset leaders are collected in the matrix E

E = ( 1 0 0 0 0 0 0 0 )
    ( 0 1 0 0 0 0 0 0 )
    ( 0 0 1 0 0 0 0 0 )
    ( 0 0 0 1 0 0 0 0 )
    ( 0 0 0 0 1 0 0 0 )
    ( 0 0 0 0 0 1 0 0 )
    ( 0 0 0 0 0 0 1 0 )
    ( 0 0 0 0 0 0 0 1 )

and all different syndromes in the matrix S

S = ( 1 0 1 0 0 )
    ( 0 1 0 1 0 )
    ( 0 0 1 0 1 )
    ( 1 0 0 0 0 )
    ( 0 1 0 0 0 )
    ( 0 0 1 0 0 )
    ( 0 0 0 1 0 )
    ( 0 0 0 0 1 )

With eq. (3.35),

S = E · H^T .
As E is the 8 × 8 identity matrix, the parity check matrix H can be determined directly from

H^T = E^(−1) · S = E · S = S ,

so that
H = ( 1 0 0 1 0 0 0 0 )
    ( 0 1 0 0 1 0 0 0 )
    ( 1 0 1 0 0 1 0 0 )   =  (P^T_{3,5} | E_{5,5})
    ( 0 1 0 0 0 0 1 0 )
    ( 0 0 1 0 0 0 0 1 )
results. With the decomposition of the parity check matrix H = (P^T_{n−k,k} | E_{n−k,n−k}) according to eq. (3.31), eq. (3.30) yields the generator matrix

G = (E_{k,k} | P_{k,n−k}) = (E_{3,3} | P_{3,5})

  = ( 1 0 0 1 0 1 0 0 )
    ( 0 1 0 0 1 0 1 0 )
    ( 0 0 1 0 0 1 0 1 ) .
Item b)
From the parity check matrix H = (P^T_{n−k,k} | E_{n−k,n−k}) the number of test digits n − k = 5 and the number of information digits k = 3 can be read off directly.
Item c)
The calculation of the generator matrix from the generator polynomial g(D) is explained in chapter 3.6.2. The connection is

G = ( g0 g1 … g_{n−k}  0   0  )
    ( 0  g0 g1 … g_{n−k}   0  )
    ( 0  0  g0 g1 … g_{n−k}   )

From the first row (1 0 0 1 0 1 0 0) of the generator matrix found in item a), the generator polynomial g(D) = 1 + D^3 + D^5 results.
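The band structure of G can be generated directly from the coefficients of g(D); a short illustrative sketch (ascending coefficient order (g0, g1, …) assumed):

```python
def generator_matrix(g, n):
    """Generator matrix of a cyclic code: k = n - deg(g) shifted copies of the
    coefficient vector (g0, g1, ..., g_{n-k}) of the generator polynomial."""
    k = n - (len(g) - 1)
    return [[0] * i + g + [0] * (n - len(g) - i) for i in range(k)]

# g(D) = 1 + D^3 + D^5 with n = 8 gives the k = 3 rows found in item a)
G = generator_matrix([1, 0, 0, 1, 0, 1], 8)
```

The three rows agree with the generator matrix determined from the syndrome table.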
Item d)
According to the number of information digits k = 3, exactly 2^3 = 8 code words can be formed with the generator polynomial.
Item e)
For the detection of transmission errors, the syndrome corresponding to the received word y1 is determined:

s = y1 · H^T = (0 2 1 2 2) modulo 2 = (0 0 1 0 0) .
As the syndrome is nonzero, y1 is not a code word. Comparison with the syndrome table gives s = s6, so that we conclude the error vector e6 = (0 0 0 0 0 1 0 0). According to eq. (3.38), the maximum-likelihood estimate of the code word is

x = y1 + e6
  = (0 1 1 0 1 0 1 1) + (0 0 0 0 0 1 0 0)
  = (0 1 1 0 1 1 1 1) .
Item f)
For the decoding of the received word y2 = (1 0 1 1 0 1 1 1), the corresponding syndrome is again determined:

s = y2 · H^T = (2 0 3 1 2) modulo 2 = (0 0 1 1 0) .

Thus y2 is not a code word. As the syndrome is not contained in the syndrome table either, no error correction is possible.
Solution of exercise 3.13 Primitive Polynomial
A polynomial p(D) of degree m (with coefficients pi ∈ GF(p)) is called irreducible if it cannot be factorized into polynomials of degree < m (with coefficients from GF(p)). Consequently, the polynomial does not have any zeros in the Galois field GF(p).
An irreducible polynomial p(D) of degree m (with pi ∈ GF(p)) is called a primitive polynomial if a zero α ∈ GF(p^m) with p(α) = 0 exists and the powers α^1, …, α^n with n = p^m − 1 form the extension field GF(p^m). α is called a primitive element of GF(p^m) and, because of α^n = α^0 = 1, n is called the order of α.
Polynomial p(D) = g1(D) = D^4 + D + 1:
For the primitive element z the auxiliary condition z^4 = z + 1 holds, with p(z) = z^4 + z + 1 = 0.
z^0                                  = 1
z^1                                  = z
z^2                                  = z^2
z^3                                  = z^3
z^4                                  = z + 1
z^5  = z · z^4   = z · (z + 1)       = z^2 + z
z^6  = z · z^5   = z · (z^2 + z)     = z^3 + z^2
z^7  = z^2 · z^5 = z^4 + z^3         = z^3 + z + 1
z^8  = z^4 · z^4 = (z + 1) · (z + 1) = z^2 + 1
z^9  = z · z^8   = z · (z^2 + 1)     = z^3 + z
z^10 = z · z^9   = z^4 + z^2         = z^2 + z + 1
z^11 = z · z^10  = z · (z^2 + z + 1) = z^3 + z^2 + z
z^12 = z · z^11  = z^4 + z^3 + z^2   = z^3 + z^2 + z + 1
z^13 = z · z^12  = z^4 + z^3 + z^2 + z = z^3 + z^2 + 1
z^14 = z · z^13  = z^4 + z^3 + z     = z^3 + 1
z^15 = z · z^14  = z^4 + z           = 1
For the polynomial g1(D) the order n1 = 15 = 2^4 − 1 results, and thus it is a primitive polynomial.
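The order of the root z can also be determined programmatically by repeated multiplication with z; an illustrative sketch (assuming deg p = m ≥ 2, coefficients highest power first):

```python
def element_order(p):
    """Smallest n with z^n = 1 for the root z of p(D) over GF(2).
    p = coefficient list, highest power first, deg p = m >= 2."""
    m = len(p) - 1
    one = [0] * (m - 1) + [1]
    cur = [0] * (m - 2) + [1, 0]            # representation of z^1
    n = 1
    while cur != one:
        carry, cur = cur[0], cur[1:] + [0]  # multiply by z (shift left)
        if carry:                           # reduce: z^m = p_{m-1} z^{m-1}+...+p_0
            cur = [a ^ b for a, b in zip(cur, p[1:])]
        n += 1
        if n > 2 ** m:                      # safety stop: no finite order found
            raise ValueError("order not found")
    return n

n1 = element_order([1, 0, 0, 1, 1])         # g1(D) = D^4 + D + 1
n2 = element_order([1, 1, 1, 1, 1])         # g2(D) = D^4 + D^3 + D^2 + D + 1
```

This confirms n1 = 15 for the primitive g1(D) and n2 = 5 for the non-primitive g2(D) below.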
Polynomial p(D) = g2(D) = D^4 + D^3 + D^2 + D + 1:
For the primitive element z the auxiliary condition z^4 = z^3 + z^2 + z + 1 holds, with p(z) = z^4 + z^3 + z^2 + z + 1 = 0.

z^0                                = 1
z^1                                = z
z^2                                = z^2
z^3                                = z^3
z^4                                = z^3 + z^2 + z + 1
z^5 = z · z^4 = z^4 + z^3 + z^2 + z = 1

As z^5 = z^0, the order of g2(D) is n2 = 5 < 2^4 − 1, so that it is not a primitive polynomial.
Solution of exercise 3.14 CRC codes
Item a)
A CRC code has the parameters n = 2^r − 1 and k = 2^r − r − 2. For n = 15, accordingly r = 4 and k = 10. Besides, the generator polynomial can be factorized as g(D) = p(D) · (1 + D), so that p(D) must be a primitive polynomial of degree 4 (g(D) has degree n − k = 5). It can be determined with gfprimdf and is p(D) = 1 + D + D^4. This yields the generator polynomial

g(D) = 1 + D^2 + D^4 + D^5 .
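The factorization g(D) = p(D) · (1 + D) is easily verified with a GF(2) polynomial multiplication (illustrative sketch, ascending coefficient lists):

```python
def gf2_poly_mul(a, b):
    """Multiply two GF(2) polynomials given as ascending coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj      # accumulate partial products modulo 2
    return out

# p(D) = 1 + D + D^4 times (1 + D) -> g(D) = 1 + D^2 + D^4 + D^5
g = gf2_poly_mul([1, 1, 0, 0, 1], [1, 1])
```

The product indeed has the coefficients of 1 + D^2 + D^4 + D^5.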
Item b)
MATLAB calculates the generator matrix G and the parity check matrix H from the generator polynomial g(D) with the help of the command cyclgen.
H =
1 0 0 0 0 1 1 1 0 1 1 0 0 1 0
0 1 0 0 0 0 1 1 1 0 1 1 0 0 1
0 0 1 0 0 1 1 0 1 0 1 1 1 1 0
0 0 0 1 0 0 1 1 0 1 0 1 1 1 1
0 0 0 0 1 1 1 0 1 1 0 0 1 0 1
G =
1 0 1 0 1 1 0 0 0 0 0 0 0 0 0
1 1 1 1 1 0 1 0 0 0 0 0 0 0 0
1 1 0 1 0 0 0 1 0 0 0 0 0 0 0
0 1 1 0 1 0 0 0 1 0 0 0 0 0 0
1 0 0 1 1 0 0 0 0 1 0 0 0 0 0
1 1 1 0 0 0 0 0 0 0 1 0 0 0 0
0 1 1 1 0 0 0 0 0 0 0 1 0 0 0
0 0 1 1 1 0 0 0 0 0 0 0 1 0 0
1 0 1 1 0 0 0 0 0 0 0 0 0 1 0
0 1 0 1 1 0 0 0 0 0 0 0 0 0 1
Item c)
If only burst errors occur that do not contain any correct digits within the burst, all errors are detected; the corresponding syndromes are always nonzero.
4 CONVOLUTION CODES July 28, 2015 27
4 Convolution codes
4.1 Fundamental principles
Solution of exercise 4.1 Convolution code
Fig. 15: Shift register structure for the code of rate R = 1/3
Item a)
For the input sequence u = (0 1 1 0 1 0) the following output sequence results:

c = (000 111 010 010 000 101) .

Thus it is not a systematic code.
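The output sequence can be reproduced with a small encoder sketch. The tap sets used below, g1(D) = g3(D) = 1 + D + D^2 and g2(D) = 1 + D^2, are an assumption read off from the state diagram; they are consistent with the sequence c above:

```python
def conv_encode(u, generators):
    """Non-recursive rate-1/n convolutional encoder with memory 2; each
    generator is a tap tuple (g0, g1, g2) acting on (u(l), u(l-1), u(l-2))."""
    m1 = m2 = 0                            # shift register contents
    out = []
    for bit in u:
        out.append([(g[0] * bit + g[1] * m1 + g[2] * m2) % 2
                    for g in generators])
        m1, m2 = bit, m1                   # shift the register
    return out

# assumed tap sets for the encoder of Fig. 15
gens = [(1, 1, 1), (1, 0, 1), (1, 1, 1)]
c = conv_encode([0, 1, 1, 0, 1, 0], gens)
# -> [[0,0,0], [1,1,1], [0,1,0], [0,1,0], [0,0,0], [1,0,1]]
```

Feeding in the input sequence of item a) reproduces c = (000 111 010 010 000 101).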
Item b)
For the input sequence u = (0 1 1 0 1 0), the path in the Trellis chart shown in figure 16 results.
Fig. 16: Trellis chart for the input sequence u = (0 1 1 0 1 0)
Item c)
The corresponding state diagram is represented in figure 17.
Item d)
The free distance df gives the minimal Hamming distance between two sequences. Because of the linearity of convolution codes, the comparison of sequences with the zero path is sufficient for the determination of the Hamming distance.

Fig. 17: State diagram for the code of rate R = 1/3

Fig. 18: Free distance for the code of rate R = 1/3

Figure 18 shows three sequences of lengths l = 3, l = 4 and l = 5 that differ from the zero path.
The first sequence corresponds to the input sequence u = (1 0 0) (input weight w = 1) and the output sequence c = (111 101 111). Comparison with the zero sequence (u = (0 0 0) and c = (000 000 000)) yields the Hamming distance d = 8.
The second sequence corresponds to the input sequence u = (1 1 0 0) (input weight w = 2) and the output sequence c = (111 010 010 111), so that again the Hamming distance is d = 8.
The third sequence corresponds to the input sequence u = (1 0 1 0 0) (input weight w = 2) and the output sequence c = (111 101 000 101 111), so that the Hamming distance is d = 10.
The free distance is therefore df = 8.
4.2 Characterization of convolution codes
Solution of exercise 4.2 Catastrophic codes
Figure 19 shows the shift register structure and the state diagram of the convolution code. As the state diagram contains a closed loop with weight zero (the loop at state 11 has output weight zero), it is a catastrophic code.
Fig. 19: Shift register structure and state diagram for the catastrophic convolution code
A further characteristic: the generator polynomial g1(D) = 1 + D^2 = (1 + D) · (1 + D) contains the generator polynomial g2(D) = 1 + D as a factor.
Another characteristic: as can be seen in figure 19, all adders have an even number of connections.
Because of these properties, for catastrophic codes a finite number of transmission errors can lead to an infinite number of errors after the decoding.
4.3 Optimal decoding with Viterbi-algorithm
Solution of exercise 4.3 Viterbi-decoding
Item a)
Figure 20 represents the Trellis for the convolution code.

Fig. 20: Trellis diagram of the convolution code with g0(D) = 1 + D + D^2 and g1(D) = 1 + D^2

For the terminated information sequence u(ℓ) = (1 1 0 1 (0 0)) the code sequence x = (11 01 01 00 10 11) follows.

Item b)

The decoding of the code sequence x = (11 01 01 00 10 11) is shown in figure 21. From the decoding the transmitted data sequence u(ℓ) = (1 1 0 1 (0 0)) follows.
Figure 22 shows the decoding for the disturbed received sequence y1 = (11 11 01 01 10 11); the estimated data sequence u1 = (1 1 0 1 (0 0)) follows, so that the two transmission errors were corrected.
Fig. 21: Viterbi-decoding of x
Fig. 22: Viterbi-decoding of y1 = (11 11 01 01 10 11)
Figure 23 shows the decoding for the disturbed received sequence y2 = (11 11 10 01 10 11); the estimated data sequence u2 = (1 0 0 1 (0 0)) follows, which does not correspond to the transmitted data sequence.
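The behavior described above can be reproduced with a minimal hard-decision Viterbi sketch for the terminated rate-1/2 code with g0(D) = 1 + D + D^2 and g1(D) = 1 + D^2 (an illustrative implementation, not the lecture's MATLAB program):

```python
def viterbi(received, gens=((1, 1, 1), (1, 0, 1))):
    """Hard-decision Viterbi decoding of a terminated rate-1/2 convolutional
    code with memory 2; received = list of 2-bit tuples."""
    INF = float("inf")
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    metric = {s: (0 if s == (0, 0) else INF) for s in states}
    paths = {s: [] for s in states}
    for r in received:
        new_metric = {s: INF for s in states}
        new_paths = {s: [] for s in states}
        for (m1, m2), m in metric.items():
            if m == INF:
                continue
            for u in (0, 1):
                # branch output and Hamming branch metric
                out = tuple((g[0] * u + g[1] * m1 + g[2] * m2) % 2 for g in gens)
                d = sum(a != b for a, b in zip(out, r))
                nxt = (u, m1)
                if m + d < new_metric[nxt]:       # keep the survivor path
                    new_metric[nxt] = m + d
                    new_paths[nxt] = paths[(m1, m2)] + [u]
        metric, paths = new_metric, new_paths
    return paths[(0, 0)]               # terminated: end in the all-zero state

y1 = [(1, 1), (1, 1), (0, 1), (0, 1), (1, 0), (1, 1)]
u_hat = viterbi(y1)                    # -> [1, 1, 0, 1, 0, 0]
```

For y1, the two transmission errors are corrected; for y2, the decoder decides for a closer, but wrong, code sequence, as described above.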
Solution of exercise 4.4 Viterbi-decoding with puncturing
Item a)
The puncturing matrix P1

P1 = ( 1 1 1 0 )
     ( 1 0 0 1 )

consists of n = 2 rows and LP = 4 columns. Only 5 of the originally n · LP = 8 code bits are transmitted after the puncturing, so that the code rate of the punctured code results as

Rc = k/n = RFC · RP = (1/2) · (8/5) = 4/5 .
Fig. 23: Viterbi-decoding of y2 = (11 11 10 01 10 11)
Item b)
Under consideration of the puncturing scheme, the received sequence y = (1 1 0 0 0 1 0 1) leads to the depunctured sequence y = (11 0x 0x x0 10 1x) (with placeholders x for the punctured bits). The decoding is presented in figure 24, and the correct decision u = (1 1 0 1 (0 0)) is made.
Fig. 24: Viterbi-decoding with puncturing
Solution of exercise 4.5 Coding and decoding of a RSC code
Item a)
Figure 25 shows a Trellis segment for the considered convolution code. The dashed lines indicate transitions for an information bit u = 1, the solid lines for u = 0. The 2-bit words at the right margin represent the code words of the respective state transitions, where the upper code word is assigned to the upper path arriving at the respective state.
Item b)
Fig. 25: Trellis segment for the convolution code with the generators g0(D) = 1 and g1(D) = (1 + D + D^3)/(1 + D + D^2 + D^3)
The input sequence of information bits is given by u(ℓ) = (1 1 0 1 1). At the end of the sequence, tail bits have to be appended to transfer the coder back to the zero state. As this is a recursive code, the tail bits cannot be determined before the end of the input sequence u(ℓ). We get the following scheme; the tail bits are (1 0 1).
u(ℓ) | state | successor state | output
  1  | 0 0 0 |      1 0 0      |  1 1
  1  | 1 0 0 |      0 1 0      |  1 1
  0  | 0 1 0 |      1 0 1      |  0 1
  1  | 1 0 1 |      1 1 0      |  1 1
  1  | 1 1 0 |      1 1 1      |  1 0
  1  | 1 1 1 |      0 1 1      |  1 0
  0  | 0 1 1 |      0 0 1      |  0 1
  1  | 0 0 1 |      0 0 0      |  1 1
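The state table, including the tail bits, can be reproduced with a small encoder sketch for this RSC code. The register convention below (s1 = newest element, feedback taps from 1 + D + D^2 + D^3, feedforward taps from 1 + D + D^3) is an assumption consistent with the table above:

```python
def rsc_encode(u, m=3):
    """Systematic recursive encoder, feedback 1+D+D^2+D^3, feedforward 1+D+D^3.
    Appends m tail bits that drive the register back to the zero state."""
    s = [0, 0, 0]                      # register (s1, s2, s3), s1 = newest
    out, tail = [], []
    for b in list(u) + [None] * m:
        fb = (s[0] + s[1] + s[2]) % 2  # feedback sum (taps of 1+D+D^2+D^3)
        if b is None:                  # termination phase:
            b = fb                     # tail bit cancels the feedback -> input 0
            tail.append(b)
        a = (b + fb) % 2               # value shifted into the register
        parity = (a + s[0] + s[2]) % 2 # feedforward taps 1+D+D^3 on (a, s1, s3)
        out.append((b, parity))        # systematic bit + parity bit
        s = [a, s[0], s[1]]            # shift the register
    return out, tail, s

out, tail, final_state = rsc_encode([1, 1, 0, 1, 1])
# tail -> [1, 0, 1], final_state -> [0, 0, 0]
```

The produced output pairs, the tail bits (1 0 1) and the final zero state agree with the scheme above.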
Item c)
Figure 26 shows the Trellis diagram for the decoding under random noise. It can be seen that in the course of the decoding many paths merge at the beginning of the Trellis diagram, while at the end the Trellis is fan-shaped. If the final state is unknown, this makes the decision on the last information bits more difficult; in this case, more errors occur at the end of the data block. This makes clear that a reliable decision on a certain information bit can only be made after a sufficiently large decision depth. In this example, the block is correctly decoded with known final state (solid line), while in the other case errors occur at the end of the block (dashed line).
Item d)
If the four errors are arranged in a burst, the decoder is not able to correct them because of the free distance df = 6 of the code. If, on the other hand, they occur as independent single errors that are roughly uniformly distributed over the block, they can be corrected. This is an essential difference to block codes.
Solution of exercise 4.6 Investigation to Viterbi-decoding
Fig. 26: Decoding of the convolution code with the generators g0(D) = 1 and g1(D) = (1 + D + D^3)/(1 + D + D^2 + D^3), exercise 8.6.4
Item a)
Figure 27a shows the results for several decision depths. It can be seen that neither the symbol nor the bit error rates improve beyond a depth of K = 20, whereas smaller decision depths lead to drastic deteriorations.
Fig. 27: Influence of the decision depth K on the bit and block error rates: a) without puncturing, b) with puncturing
Item b)
The influence of the decision depth at a puncturing is shown in figure 27b. Here can be observed a similar behavior,
nevertheless, the decision depths have to be chosen clearly longer. Only from K = 30 on, there’s reached no more
improvement. The reason for this are the worse distance properties of the code, that result from the puncturing.
The punctured symbols now are not available to the differentiation of code sequences any more, so that now longer
path sections are necessary for being able to differentiate sequences securely.
Item c)
Figure 28 finally illustrates the distribution of the errors within a sequence if the Trellis diagram is not terminated. The increase of the error rate at the end of the sequence is clearly visible. The reason is that with an open Trellis diagram the memory of the coder cannot be exploited at the end, so that only very unreliable decisions can be made.
Fig. 28: Distribution of the bit errors over the bit position in the burst (open Trellis diagram)
Solution of exercise 4.7 Simulation of a convolutional encoder and decoder
The curve of the simulated bit error rate is shown in figure 29.