
Vol.101(4) December 2010 SOUTH AFRICAN INSTITUTE OF ELECTRICAL ENGINEERS 121

On the Generating Function and its Application to Ternary Line Codes

K. Ouahada, H. C. Ferreira

Department of Electrical and Electronic Engineering Science, University of Johannesburg, South Africa
E-mail: {kouahada, hcferreira}@uj.ac.za

Abstract: One of the major problems researchers face is how to verify experimental or simulation results for any class of codes. Usually this is done by deriving bounds against which the performance of the codes can be tested. For binary codes, and especially for linear convolutional codes, where the sum of any two codewords is another codeword and the all-zeros sequence is a codeword, the major step in deriving these bounds is to calculate the generating function. Since ternary line codes are non-linear line codes, we investigate in this paper the possibility of deriving the generating function for at least some of these codes.

Key Words: Ternary line codes, AMI, HDBn, BnZS, transfer function, generating function

1. INTRODUCTION

Ternary line codes [1] such as the High-Density Bipolar of order n (HDBn) and Binary n-Zero Substitution (BnZS) codes have been used in PCM metallic cable systems worldwide. They are relatively simple and satisfy tight channel input restrictions, which make them suitable for use in baseband communication systems.

Ternary line codes are considered DC-free codes: they do not contain zero-frequency components, though they do have low-frequency components. Clock recovery and synchronization [2] are easy since, when NRZ pulses are used, the frequent transitions between symbols in the code sequence enable the encoder clock signal to be derived from the code sequence at the receiver.

Textbooks and papers commonly use linear convolutional codes as the encoders to study the behaviour of Viterbi decoding [3]–[5] and to present the well-known ∼2 dB asymptotic gain of soft decisions over hard decisions on wideband Gaussian channels.

In their paper [6], Ouahada and Ferreira investigated the performance of ternary line codes under Viterbi decoding and verified the simulation set-up to enhance confidence in their results. They subjected their simulation set-up to several tests to verify its technical correctness and assess its numerical accuracy. They first tested their noise-generator results by writing a Matlab program that fits the Gaussian probability density function to the measured results, adjusting different parameters to obtain a better fit and hence better accuracy. They also double-checked the 2 dB gain of soft-decision Viterbi decoding over hard-decision decoding by using a well-known textbook example of a convolutional code [3].

Although the derivation of upper bounds on the error probability for non-linear codes has been investigated in the context of trellis codes [7, 8], we investigate in this paper a different verification technique, one usually used with binary convolutional codes. This technique is based on deriving the generating function to calculate the upper bounds on the decoding error probability for soft and hard decisions, in order to verify the 2 dB gain between them.

We present the concept of the generating function, or the transfer function as it is called in control systems [9], and show how it can be applied to coding techniques, and more specifically to ternary line codes [1, 10].

The generating function or transfer function of a line code, first introduced for convolutional codes [3], contains all the necessary parameters, such as the distances between the expected and received data, the length of the path through the trellis and the message input bits. It gives the properties of all the possible paths that any code can assume. The generating function will be used in our investigation of the upper bound on the error probability of Viterbi decoding. It can be derived by Mason's rule [11], which is explained as follows:

The overall transfer function of a flow graph can be obtained from the formula:

T = \frac{\sum_k T_k \Delta_k}{\Delta},   (1)


where:

* T_k is the transfer function of each forward path between a source and a sink node.

* ∆ = 1 − ∑L_1 + ∑L_2 − ∑L_3 + · · · is the determinant [11] of the whole graph.

* ∑L_1 is the sum of the transfer functions of each closed path (loop).

* ∑L_2 is the sum of the products of the transfer functions of all possible combinations of two non-touching loops. Given three non-touching loops with gains ℓ_1, ℓ_2 and ℓ_3, we have ∑L_2 = ℓ_1ℓ_2 + ℓ_1ℓ_3 + ℓ_2ℓ_3.

* ∑L_3 is the sum of the products of the transfer functions of all possible combinations of three non-touching loops. For the same three non-touching loops, ∑L_3 = ℓ_1ℓ_2ℓ_3.

* ∆_k is the cofactor of T_k. It is the determinant of the sub-graph remaining when the forward path which produces T_k is removed. Thus, it does not include any loops that touch the forward path in question. It is equal to unity when the forward path touches all the loops in the graph, or when the graph contains no loops.
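As an illustration, formula (1) can be evaluated numerically once the forward-path gains, their cofactors and the graph determinant are known. The following Python sketch (the function name and the example graph are our own, not from the paper) computes T for a hypothetical graph with one forward path that touches the graph's only loop:

```python
def mason_gain(forward_paths, delta):
    """Mason's gain formula, Eq. (1): T = sum_k(T_k * Delta_k) / Delta.

    forward_paths: list of (T_k, Delta_k) pairs, one per forward path.
    delta: determinant of the whole graph.
    """
    return sum(t_k * d_k for t_k, d_k in forward_paths) / delta

# Hypothetical example: one forward path of gain 0.25 touching the
# graph's only loop (loop gain 0.5), so its cofactor is 1 and
# Delta = 1 - 0.5.
t = mason_gain([(0.25, 1.0)], 1.0 - 0.5)  # = 0.5
```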

Section 2 introduces the error probability of transmitted data for convolutional codes under soft- and hard-decision Viterbi decoding. In Section 3, a new code combining properties of two different codes from two different classes is presented. The new code will be used to understand the behaviour of certain ternary line codes when they are subjected to the Viterbi decoding algorithm. Section 4 presents a generalization of the concept of the generating function. We conclude with some final remarks.

2. PROBABILITY OF ERROR IN TRANSMISSION

2.1. Binary Convolutional Codes

2.1.1. Probability of error for soft-decision Viterbi decoding

In this section we make use of the encoder of the convolutional code to study the error rate performance of Viterbi decoding on an additive white Gaussian noise channel with soft-decision decoding [3]. Many factors must be known before we can begin calculating the error probability. These include the linearity property of the convolutional encoder, which will be employed to simplify the derivation; the all-zero path (reference path) used to calculate the metrics; and the type of modulator used in the transmission. These are crucial factors for the calculation of any error probability.

It is assumed that the transmitted data is modulated using Binary Phase-Shift Keying (BPSK) [12] and detected coherently at the demodulator. The probability of error on a binary symmetric channel (BSC), denoted by P_2, in the pair-wise comparison of two paths that differ in d bits (the distance metric compared to the reference path) is [13]:

P_2(d) = \frac{1}{2}\,\mathrm{erfc}\left(\sqrt{\gamma_b R d}\right),   (2)

where \gamma_b is the signal-to-noise ratio per bit and R is the code rate.
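Equation (2) maps directly onto the standard-library complementary error function. A minimal sketch (the function name is ours):

```python
import math

def p2_soft(d, gamma_b, R):
    """Pair-wise error probability of Eq. (2): 0.5 * erfc(sqrt(gamma_b*R*d))."""
    return 0.5 * math.erfc(math.sqrt(gamma_b * R * d))

# At zero SNR the two paths are indistinguishable, so P2 = 0.5;
# at fixed SNR, P2 decreases monotonically with the distance d.
```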

The time-domain convolutions can be conveniently replaced by polynomial multiplications in the D-transform domain, and after calculating the generating function we obtain the following form:

T(D,N) = \sum_{d=d_{free}}^{\infty} a_d D^d N^{f(d)},   (3)

where a_d and N are respectively the number of encoded sequences of weight d diverging from the all-zero path and the variable tracking the information block length. The function f(d) denotes the exponent of N and depends on the value of d. As an example, for a convolutional code with a constraint length K = 3, a rate of R = 1/2 and a free distance of d_{free} = 5, we have a_d = 2^{d-5} and f(d) = d - 4.

Taking the derivative of T(D,N) with respect to N and setting N = 1, we obtain:

\left.\frac{\partial T(D,N)}{\partial N}\right|_{N=1} = \sum_{d=d_{free}}^{\infty} \beta_d D^d,   (4)

where \beta_d = a_d f(d).

The successful completion of the above calculation brings us close to our main purpose, the calculation of the upper-bound error probability:

P_b < \sum_{d=d_{free}}^{\infty} \beta_d P_2(d).   (5)

Taking into consideration the error probability in (2), (5) can be written as:

P_b < \frac{1}{2} \sum_{d=d_{free}}^{\infty} \beta_d\,\mathrm{erfc}\left(\sqrt{\gamma_b R d}\right).   (6)

By using the derivative of the generating function, (6) can be bounded simply as follows:

P_b < \frac{1}{2}\,\left.\frac{\partial T(D,N)}{\partial N}\right|_{N=1,\; D=e^{-\gamma_b R}},   (7)

since \mathrm{erfc}(\sqrt{x}) < e^{-x}.

Once the upper-bound error probabilities for the soft and hard decisions have been determined, it becomes possible to verify the well-known 2 dB gain of soft decisions over hard decisions.
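As a sanity check on (7), the following Python sketch (our own code, with hypothetical function names) evaluates the series bound for the K = 3, R = 1/2 example code, where β_d = 2^{d−5}(d−4), and compares it against the closed form D^5/(1−2D)^2; that closed form follows from differentiating T(D,N) = ND^5/(1−2ND), a standard result for this code that is not derived in the paper:

```python
import math

def soft_bound_series(gamma_b, R=0.5, d_free=5, d_max=200):
    """Eq. (7) for the K=3, R=1/2 code: Pb < 0.5 * sum(beta_d * D^d)
    with beta_d = 2**(d-5) * (d-4) and D = exp(-gamma_b * R)."""
    D = math.exp(-gamma_b * R)
    return 0.5 * sum(2 ** (d - 5) * (d - 4) * D ** d
                     for d in range(d_free, d_max))

def soft_bound_closed(gamma_b, R=0.5):
    """Closed form of the same series: 0.5 * D^5 / (1 - 2D)^2,
    valid when 2D < 1, i.e. gamma_b * R > ln 2."""
    D = math.exp(-gamma_b * R)
    return 0.5 * D ** 5 / (1 - 2 * D) ** 2
```

The truncated series and the closed form agree to machine precision at moderate SNR, which is a cheap way to catch transcription errors in β_d before plotting the bound.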


Figure 1: Convolutional encoder: R = 1/2, K = 2, dfree = 3 — (a) block diagram, (b) state diagram, (c) trellis diagram

2.1.2. Probability of error for hard-decision Viterbi decoding

On a binary symmetric channel, hard-decision decoding of the binary code will be used to study the performance of Viterbi decoding. The Hamming distance will be used for the calculation of the metrics at each node of the trellis.

Using the same notation as for soft decisions, the distance d refers to the distance from the all-zero path. It is important to know whether d is odd or even, in order to understand the formula for the error probability that will be used later. This principle is outlined in the formulas below [13].

If d is odd, the error probability of selecting the incorrect path is:

P_2(d) = \sum_{k=(d+1)/2}^{d} \binom{d}{k} p^k (1-p)^{d-k},   (8)

and if d is even, the error probability of selecting the incorrect path is:

P_2(d) = \sum_{k=d/2+1}^{d} \binom{d}{k} p^k (1-p)^{d-k} + \frac{1}{2} \binom{d}{d/2} p^{d/2} (1-p)^{d/2},   (9)

where p is the transition probability.
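Equations (8) and (9) translate into a few lines of Python (a sketch; the function name is ours):

```python
import math

def p2_hard(d, p):
    """Pair-wise error probability of Eqs. (8)-(9) on a BSC with
    transition probability p."""
    if d % 2 == 1:  # Eq. (8): odd distance
        return sum(math.comb(d, k) * p**k * (1 - p)**(d - k)
                   for k in range((d + 1) // 2, d + 1))
    # Eq. (9): even distance; ties are broken with probability 1/2
    return (sum(math.comb(d, k) * p**k * (1 - p)**(d - k)
                for k in range(d // 2 + 1, d + 1))
            + 0.5 * math.comb(d, d // 2) * (p * (1 - p)) ** (d // 2))
```

As quick checks, for d = 1 the expression reduces to p itself, and for d = 2 it reduces to p² + p(1−p) = p.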

By summing the pair-wise error probabilities P_2(d) over all possible paths which merge with the all-zero path at the given node, and by using the Chernoff bound [14, 15] on the pair-wise error probability, P_2(d) < \left[2\sqrt{p(1-p)}\right]^d, we obtain the union bound:

P_b < \sum_{d=d_{free}}^{\infty} a_d \left[2\sqrt{p(1-p)}\right]^d.   (10)

Using the generating function directly, the error probability upper bound can be calculated as follows:

P_b < T(D,N),   (11)

where D = 2\sqrt{p(1-p)}.
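The Chernoff bound used above can be checked numerically against the exact expressions (8) and (9). A self-contained sketch (it re-defines the exact pair-wise probability locally; names are ours):

```python
import math

def p2_hard(d, p):
    """Exact pair-wise error probability, Eqs. (8)-(9)."""
    if d % 2 == 1:
        return sum(math.comb(d, k) * p**k * (1 - p)**(d - k)
                   for k in range((d + 1) // 2, d + 1))
    return (sum(math.comb(d, k) * p**k * (1 - p)**(d - k)
                for k in range(d // 2 + 1, d + 1))
            + 0.5 * math.comb(d, d // 2) * (p * (1 - p)) ** (d // 2))

p = 0.05
D = 2 * math.sqrt(p * (1 - p))  # Chernoff parameter of Eq. (11)
for d in range(1, 12):
    assert p2_hard(d, p) < D ** d  # the bound dominates the exact value
```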

Example: In this example we present a binary convolutional code with only two states and a rate of R = 1/2 [13]. This code was chosen because of its simplicity and its close resemblance to the Alternate Mark Inversion (AMI) [13] line code, which will be studied further as a simple study case for ternary line codes.

Fig. 1(a) shows the block diagram of our convolutional encoder with two states.

As can be seen from the model of the convolutional encoder presented in Fig. 1(a), the block diagram has one shift-register bit, which leads to a constraint length K = 2, with two outputs. Thus the encoder has a code rate of R = 1/2. The state diagram shown in Fig. 1(b) explains the method of obtaining the distance properties of our convolutional code. It is clear from the trellis diagram presented in Fig. 1(c) that the minimum distance (free distance) is dfree = 3.

To obtain the generating function, it is necessary to label the branches of the state diagram as D^0, D^1, D^2, D^3, where the exponent of D denotes the Hamming distance between the sequence of output bits on that branch and the sequence of output bits on the all-zero branch [13]. As mentioned earlier, the generating function is obtained by Mason's rule, so an input and an output must be defined in the state diagram in order to calculate the gain, which is the transfer function from that input to that output. The self-loop at node (0), i.e. the branch connecting state (0) to itself, can then be eliminated, since it makes no contribution to the distance properties of the code sequence relative to the all-zero code sequence. Node (0) is split into two nodes: one represents the input, the other the output of the state diagram.

Figure 2: Flow graph of convolutional encoder: R = 1/2, K = 2, dfree = 3 — branch gains LND^2 (input node X_i to state (1)), LND (self-loop at state (1)) and LD (state (1) to output node X_0)

Definition 1: The all-zero path is all branches in the trellis that correspond to the self-loop of state 00.

Definition 2: The term L in the generating function determines the length of a given path. The exponent of L reflects the number of branches in the path.

Definition 3: The term N is included in the generating function only if the branch transition is caused by an input bit of "one", i.e. an input differing from that of the all-zero path.

Fig. 2 shows the state diagram labelled according to the distances from the all-zero path, the path lengths and the numbers of input ones. The terms X_0 and X_i represent respectively the output and the input of the state diagram. We denote by T_{conv}(D,N,L) the generating function corresponding to all codewords of the convolutional encoder beginning with a non-zero branch, defined as:

T_{conv}(D,N,L) = \frac{X_0}{X_i}.   (12)

From Fig. 2, we see that we have:

X_{(1)} = LND^2 X_i + LND X_{(1)},   (13)

X_0 = LD X_{(1)},   (14)

which yield

T_{conv}(D,N,L) = \frac{L^2 N D^3}{1 - LND}.   (15)

From (15) we can expand the denominator as a Maclaurin [16] series, which leads to the following:

T_{conv}(D,N,L) = D^3 L^2 N + D^4 L^3 N^2 + D^5 L^4 N^3 + \cdots + D^{3+n} L^{2+n} N^{1+n} + \cdots   (16)
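The expansion (16) can be verified numerically: at any point with |LND| < 1, the truncated series must agree with the closed form (15). A short sketch (function names are ours):

```python
def t_conv(D, N, L):
    """Closed form of Eq. (15): L^2 * N * D^3 / (1 - L*N*D)."""
    return L**2 * N * D**3 / (1 - L * N * D)

def t_conv_series(D, N, L, n_terms=200):
    """Truncated Maclaurin expansion of Eq. (16):
    sum over n >= 0 of D^(3+n) * L^(2+n) * N^(1+n)."""
    return sum(D**(3 + n) * L**(2 + n) * N**(1 + n) for n in range(n_terms))

# Sample point with |L*N*D| < 1 so the series converges.
D, N, L = 0.3, 0.9, 0.8
```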

Figure 3: Upper bounds for convolutional encoder: R = 1/2, K = 2, dfree = 3 — BER versus SNR (dB) for the soft-decision upper bound, the hard-decision upper bound and the hard-decision simulation

This generating function provides the properties of all the paths in the convolutional code which start from state 0 with a non-zero branch and end in state 0. The first term in (16) indicates that the minimum distance is d = 3, represented by the exponent of D; the corresponding path is of length 2, represented by the exponent of L; and out of two information bits, one is equal to 1, hence the exponent of N is 1. The second term in the expansion of T_{conv}(D,N,L) indicates that when the observed distance is d = 4, the corresponding path is of length 3, and two of the three information bits in the path have the value 1. In the case where the transmitted sequence is extremely long, essentially an infinite-length sequence, the generating function T_{conv}(D,N,L) in (16) can be simplified by setting L = 1, which leads to:

T_{conv}(D,N) = D^3 N + D^4 N^2 + D^5 N^3 + \cdots + D^{3+n} N^{1+n} + \cdots,   (17)

and by writing the terms in (17) in sigma form, we obtain:

T_{conv}(D,N) = \sum_{d=3}^{\infty} N^{d-2} D^d.   (18)
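Combining (18) with the hard-decision bound (11): setting N = 1 gives T(D) = ∑_{d≥3} D^d = D^3/(1−D), to be evaluated at D = 2√(p(1−p)). A minimal sketch of this bound for the present K = 2 encoder (the function name is ours):

```python
import math

def hard_bound_k2(p):
    """Hard-decision union bound of Eq. (11) for the K=2, R=1/2 encoder.
    With N = 1, Eq. (18) sums to T(D) = D^3 / (1 - D), evaluated at
    D = 2*sqrt(p*(1-p)); the geometric sum requires D < 1, i.e. p != 0.5."""
    D = 2 * math.sqrt(p * (1 - p))
    return D**3 / (1 - D)
```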

A Matlab program is used to plot the curves of the upper bounds for hard and soft decisions and to check the 2 dB gain between them. On the same graph the simulation results are presented for comparison with the theoretical upper bounds. We limited our simulation results to hard decisions, since the soft decisions are presented theoretically.

Fig. 3 shows the 2 dB gain between soft decisions and hard decisions for the convolutional encoder R = 1/2, K = 2. We observe the closeness of our hard-decision simulation to the theoretical hard-decision upper bound. These results give us confidence in using the same approach to calculate the error probability upper bound for our line codes.

Figure 4: Error curves for the AMI code — BER curves for the transfer-function bound and the simulation

2.2. Ternary Line Codes

This class of codes differs from binary convolutional codes in terms of linearity and in the negative outputs that the encoder generates. Thus more complexity appears in the calculations that determine the distances for the Viterbi algorithm.

Example: We take the example of AMI, because of itssimplicity.

Since the outputs of the AMI take positive and negative values [17], the Hamming distance is problematic for calculating the distance weights. Thus the Euclidean distance is the appropriate choice for this class of codes.

AMI is a line code with a bandwidth of 1 Mb/s, and it does not need a modulator to send the message through the channel, unlike the convolutional codes, for which we use PSK modulation. It must also be noted that AMI is a non-linear code, while the generating function approach was developed for linear convolutional codes. This makes it difficult to double-check our simulation results against a theoretical background, as was done for the convolutional code in the example of Section 2.1.2.

From the above discussion, it can be predicted that a perfect match between the simulation and the error probability upper bound for the AMI results cannot be achieved.

Figure 5: Soft- and hard-decision Viterbi decoding for AMI — BER versus SNR (dB)

Since the AMI code is the simplest ternary line code to understand and to design, in view of its state machine structure being close to that of the convolutional codes, we assume that the derivation of its corresponding generating function is achievable. Therefore, we plotted the approximated curves, comparing the simulation results of AMI [18] to its generating function, as shown in Fig. 4. It is clear from Fig. 4 that, as predicted, the two plotted curves differ visibly. Thus, the assumption that a ternary line code such as AMI can be considered or treated as a linear code is proved to be wrong.

Using only simulation results [6], Fig. 5 shows the gain between soft decisions and hard decisions for AMI [19], which is almost equal to 2 dB at a bit error rate (BER) of 10−6. This result is very important and shows the accuracy of our simulation set-up. To check the theoretical upper bounds for soft and hard decisions, we need to look at another technique, based on certain approaches that can verify and back up our simulation results. This will be described in the following section.

3. CONVO-AMI: A NEW "LINEARISED" LINE CODE

The generating function is often used to obtain the upper bound for the error probability of Viterbi decoding, and it is readily obtained for linear convolutional codes. However, this is not the case for the non-linear ternary line codes that we present in this paper. A new line code called Convo-AMI was therefore developed in order to apply the existing theory to these ternary line codes.

To obtain the corresponding Convo-AMI code, the negative symbols of the AMI are converted to binary using the 2-bit two's complement rule. The obtained code is now considered as a binary code with a two-bit output corresponding to each input bit. The new binary code with two states is considered as a new convolutional code with rate R = 1/2 and constraint length K = 2. The new Convo-AMI code, with block diagram presented in Fig. 6(a), has a state machine structure similar to the previously studied convolutional code, as shown in Fig. 6(b).

Figure 6: Convo-AMI encoder: R = 1/2, K = 2, dfree = 3 — (a) block diagram with a 2-bit two's complement convertor, (b) state diagram, (c) trellis diagram

Figure 7: Flow graph of the Convo-AMI code, dfree = 3 — branch gains LND^2 (input node X_i to state (1)), L (self-loop at state (1)) and LND (state (1) to output node X_0)

Using the Convo-AMI, we were able to obtain a tight upper bound on the error probability using the generating function. Our new code retained certain characteristics of the AMI, such as the number of states and the same outputs in a binary form, and also has a minimum distance dfree = 3, as depicted in Fig. 6(c). This makes the new code similar to the convolutional code of constraint length K = 2 presented in Fig. 1. The behaviour of the new code, however, imitates a binary convolutional code.
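The AMI encoding rule and the 2-bit two's complement conversion described above can be sketched as follows (function names are ours; the mapping +1 → 01, 0 → 00, −1 → 11 is the standard 2-bit two's complement representation, matching the state diagram labels of Fig. 6(b)):

```python
def ami_encode(bits):
    """AMI line coding: zeros map to 0; successive ones alternate
    between +1 and -1 (alternate mark inversion)."""
    out, polarity = [], 1
    for b in bits:
        if b:
            out.append(polarity)
            polarity = -polarity
        else:
            out.append(0)
    return out

def to_convo_ami(symbols):
    """2-bit two's complement mapping of the ternary symbols, as used to
    build the Convo-AMI code: +1 -> '01', 0 -> '00', -1 -> '11'."""
    table = {1: '01', 0: '00', -1: '11'}
    return ''.join(table[s] for s in symbols)

# One input bit -> one ternary symbol -> two output bits, i.e. R = 1/2.
```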

T_{Convo-AMI}(D,N,L) denotes the generating function of the new Convo-AMI encoder, defined as follows:

T_{Convo-AMI}(D,N,L) = \frac{X_0}{X_i}.   (19)

As was done with the convolutional encoder, from Fig. 7 we have:

T_{Convo-AMI}(D,N,L) = \frac{L^2 N^2 D^3}{1 - L}.   (20)

Using the Maclaurin series, the denominator in (20) can be expanded and we obtain the following:

T_{Convo-AMI}(D,N,L) = D^3 N^2 (L^2 + L^3 + L^4 + \cdots).   (21)
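As with (15)–(16), the expansion (21) can be checked numerically against the closed form (20) at any point with |L| < 1 (function names are ours):

```python
def t_convo_ami(D, N, L):
    """Closed form of Eq. (20): L^2 * N^2 * D^3 / (1 - L)."""
    return L**2 * N**2 * D**3 / (1 - L)

def t_convo_ami_series(D, N, L, n_terms=400):
    """Truncated expansion of Eq. (21): D^3 * N^2 * (L^2 + L^3 + ...)."""
    return D**3 * N**2 * sum(L**k for k in range(2, 2 + n_terms))
```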

Figure 8: Upper bounds for Convo-AMI: R = 1/2, K = 2, dfree = 3 — BER versus SNR (dB) for the soft-decision upper bound, the hard-decision upper bound and the hard-decision simulation

It is clear from (21) that there is an infinite number of paths with the same distance, equal to three, all of which differ by two input bits from the all-zero path. There is only one path per length, with lengths varying from 2 to infinity.

The name given to this new code is Convo-AMI. The name reflects the link between the two codes: the AMI and the convolutional code. While retaining the characteristics of the AMI, we could apply Mason's rule to obtain the generating function of Convo-AMI presented in (21).

Fig. 8 shows the 2 dB gain between soft- and hard-decision Viterbi decoding for the Convo-AMI code.

4. GENERALIZATION

Having resolved the problem of linearising the AMI, we wish to generalize this method to the rest of the ternary line codes. The two's complement technique and the simplicity of the AMI structure helped us to linearise this line code and work through its look-alike linear line code to study the behaviour of its error probability upper bound. The question that still confronted us was: if we could linearise the rest of our ternary line codes, would we be able to implement all the steps of Mason's rule to calculate the generating function?


Table I: Decoding gain and state machine structure of different line codes

Ternary Line Codes   Gain at BER = 10−6 for 3-bit quantisation   Number of states   Self loop
AMI                  ∼ 2.00                                      2                  yes
HDB1                 ∼ 2.00                                      8                  no
B4ZS                 2.05                                        18                 no
B6ZS (0VB0VB)        2.00                                        70                 no

Figure 9: HDB1 state diagram — branch labels of the form input/output, with ternary outputs +, − and 0

Figure 10: Soft- and hard-decision Viterbi decoding for HDB1 — BER versus SNR (dB)

It is clear from the previous sections that the calculation of the generating function of any code involves some basic steps, which are presented as follows:

1. The use of the Hamming distance to calculate the exponents of the metric distances.

2. The choice of an all-zero path to be used as a reference for calculating the metric distances of all other paths, by comparing the received data to that of the all-zero path.

3. A self-loop, which is necessary for applying Mason's rule to obtain the generating function.

The AMI was not difficult to handle because of the similarity of its state diagram to that of the convolutional encoder. Difficulties arose when a code did not satisfy all, or even some, of the steps mentioned above. The challenge is to identify which of them are necessary and which are not.

Most ternary line codes [20] have no self-loops in their state machines. If we take the example of the state machine of the HDB1 line code presented in Fig. 9, we can clearly see that it has no self-loop. Fig. 10 shows the simulation results and emphasises the 2 dB gain between hard- and soft-decision Viterbi decoding of the HDB1 line code.

Table I presents the structure of a few ternary line codes and the gain achieved between soft- and hard-decision Viterbi decoding. It can be seen that AMI is the only ternary line code that has a self-loop.

As was the case with AMI, the problem of the Hamming distance is always solved by changing the negative outputs of the encoder to their two's complement.

The all-zero path presents a question: what happens if the encoder does not have zero outputs? Does this change the generating function? To find the answers, we use a more extended and complicated convolutional encoder to calculate the generating function. We use a convolutional encoder with four states, and calculate the generating function based not only on the all-zero path but also on other reference paths.

Figure 11: State diagram of convolutional encoder: R = 1/2, K = 3, dfree = 5 — states 00, 01, 10 and 11, labelled (a)–(d)


Figure 13: Flow graphs of the convolutional encoder R = 1/2, K = 3, dfree = 5 with different reference states — panels (a)–(d), with branch gains of the form LD, LD^2, LND, LND^2 and NL

Figure 12: Upper bounds for convolutional encoder: R = 1/2, K = 3, dfree = 5 — BER versus SNR (dB) for the soft-decision upper bound, the hard-decision upper bound and the simulation

Example:

We take the example of a more general convolutional encoder with four states and a rate of R = 1/2 [13]. In addition to calculating the upper bound curves, we compare our simulation to these curves.

Fig. 11 shows the state diagram of the convolutional encoder with R = 1/2 and dfree = 5. All states are labelled alphabetically.

Fig. 12 shows how close the hard-decision simulation is to the hard-decision error probability upper bound.

4.1. Generating function with state (00) as the reference path

The state (00), labelled in Fig. 11 as state (a), is taken here as the reference for the rest of the states, to calculate the distance metrics of all other paths compared to the all-zero path, which will be called the reference path. The corresponding flow graph of the state machine in Fig. 11 is presented in Fig. 13(a), and the corresponding relations between the state inputs and outputs are as follows:

X_{a_2} = LD^2 X_b,   (22)

X_b = LD X_c + LD X_d,   (23)

X_c = NL X_b + LND^2 X_{a_1},   (24)

X_d = LND X_c + LND X_d.   (25)

Using the substitution method to solve this system, we calculate the generating function

T_{(a)}(D,N,L) = \frac{X_{a_2}}{X_{a_1}},   (26)

and we get

T_{(a)}(D,N,L) = \frac{L^3 N D^5}{1 - ND(L + L^2)}.   (27)

It is clear here that the minimum distance, or free distance, is equal to 5.

With L = 1, we get

T_{(00)}(D,N) = T_{(a)}(D,N) = \sum_{d=5}^{\infty} 2^{d-5} N^{d-4} D^d.   (28)
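Setting L = 1 in (27) gives the closed form ND^5/(1 − 2ND), which the series (28) must reproduce; this is easily checked numerically (function names are ours):

```python
def t_a_closed(D, N):
    """Eq. (27) with L = 1: N * D^5 / (1 - 2*N*D), valid for 2*N*D < 1."""
    return N * D**5 / (1 - 2 * N * D)

def t_a_series(D, N, d_max=200):
    """Truncated series of Eq. (28): sum over d >= 5 of
    2^(d-5) * N^(d-4) * D^d."""
    return sum(2 ** (d - 5) * N ** (d - 4) * D ** d for d in range(5, d_max))
```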

4.2. Generating function with state (11) as the reference path

The state (11), labelled in Fig. 11 as state (d), is taken here as the reference for the rest of the states, to calculate the distance metric of all other paths, as previously, compared to the reference path.

The Hamming distance, used as before to calculate the distance metric for each path and represented in the generating function as the exponent of D, will now be calculated with reference to the output (10) of the state (11) instead of the output (00) of the state (00). The same procedure is followed to find the weight corresponding to each distance and path.

Definition 4: The term N is included in the generating function only if the branch transition is a result of an input data bit of value "zero". □

The term L is used as it is defined in Definition 2. From Definition 4, we can see that the value of N is defined differently from the one in Definition 3, since it depends on the input of the reference path.

In the case of the all-zero path, where the self-loop has an input of 0, the term N was included in the generating function for any input of 1. This value is always the complement of the value of the input of the reference path.

In the case of the (11) state as a reference path, we are dealing with a reference path with an input of 1. Thus we include the term N in the generating function when the input data is "zero".

Implementing these changes to the calculation procedure, we now calculate the generating function of the same convolutional encoder introduced in Fig. 11, represented by its flow graph in Fig. 13(b), corresponding to (11) as the reference path.

X_{d_2} = LD^2 X_c,  (29)

X_c = LDX_a + LDX_b,  (30)

X_b = NLX_c + LND^2 X_{d_1},  (31)

X_a = LNDX_b + LNDX_a.  (32)

Using the substitution method to solve the system and calculate the generating function

T_{(d)}(D, N, L) = X_{d_2} / X_{d_1},  (33)

we get

T_{(d)}(D, N, L) = \frac{L^3 N D^5}{1 - ND(L + L^2)}.  (34)

It is clear here that the minimum distance, or free distance, is equal to 5.

With L = 1, we get

T_{(11)}(D, N) = T_{(d)}(D, N) = \sum_{d=5}^{\infty} 2^{d-5} N^{d-4} D^d.  (35)

It can be seen from (28) and (35) that both generating functions have the same formula, which means that we always have

T_{(00)}(D, N) = T_{(11)}(D, N).  (36)

It is clear from (36) that taking the reference path from a self-loop does not affect the generating function, provided we take into consideration the changes that must be made to the definition of the term N, related to the inputs at the self-loops.
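Equation (36) can also be verified numerically by solving both systems, (22)-(25) and (29)-(32), by back-substitution and comparing the results; a small sketch with illustrative function names, using exact rational arithmetic:

```python
from fractions import Fraction as F

def T_from_00(D, N, L):
    # System (22)-(25), reference state (00), with X_{a1} = 1
    xc = L * N * D**2 * (1 - L * N * D) / (1 - L * N * D - N * L**2 * D)
    xb = L * D * xc / (1 - L * N * D)
    return L * D**2 * xb            # X_{a2}

def T_from_11(D, N, L):
    # System (29)-(32), reference state (11), with X_{d1} = 1
    xb = L * N * D**2 * (1 - L * N * D) / (1 - L * N * D - N * L**2 * D)
    xc = L * D * xb / (1 - L * N * D)
    return L * D**2 * xc            # X_{d2}

for D, N, L in [(F(1, 3), F(1, 2), F(1, 5)), (F(1, 7), F(2, 3), F(1, 2))]:
    assert T_from_00(D, N, L) == T_from_11(D, N, L)   # equation (36)
```

The two solvers are mirror images of one another (b and c swap roles), which is exactly the state-machine symmetry the proposition below relies on.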

Proposition: The generating function of any convolutional code is independent of the choice of self-loop.

Proof: One of the major properties of convolutional codes is the symmetry of their state machines. Fig. 11 clearly shows the symmetry between the two self-loop states. This symmetry guarantees that the minimum distance and its corresponding shortest path do not change. The minimum distance and the shortest path are the major elements in the derivation of the generating function. Thus, if these two elements are unchanged, the generating function stays the same regardless of which self-loop is selected.

4.3. Generating function and the self-loop

Mason’s rule shows that to calculate the generating function of any system, the system must have an input and an output; the generating function is then the ratio of the two. Using the state diagram in telecommunications is a way to apply Mason’s rule, by obtaining an input and an output from the states of the code. This is ensured by the self-loops.
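As a cross-check of the substitution result (27), Mason’s gain formula can be applied directly to the flow graph of Fig. 13(a). The enumeration below assumes the branch gains of that graph (a₁→c: LND², c→b: LD, b→c: NL, c→d: LND, d→d: LND, d→b: LD, b→a₂: LD²):

```latex
% Loops of the flow graph in Fig. 13(a) and their gains:
%   L_1: d \to d,               \; LND
%   L_2: b \to c \to b,         \; NL \cdot LD = NL^2D
%   L_3: b \to c \to d \to b,   \; NL \cdot LND \cdot LD = N^2L^3D^2
% Only L_1 and L_2 are non-touching, so
\Delta = 1 - (LND + NL^2D + N^2L^3D^2) + (LND)(NL^2D)
       = 1 - ND(L + L^2).
% Forward paths from a_1 to a_2 and their cofactors:
%   P_1: a_1 \to c \to b \to a_2,        \; L^3ND^5, \; \Delta_1 = 1 - LND
%   P_2: a_1 \to c \to d \to b \to a_2,  \; L^4N^2D^6, \; \Delta_2 = 1
T_{(a)} = \frac{P_1\Delta_1 + P_2\Delta_2}{\Delta}
        = \frac{L^3ND^5(1 - LND) + L^4N^2D^6}{1 - ND(L + L^2)}
        = \frac{L^3ND^5}{1 - ND(L + L^2)}.
```

The self-loop at d appears both as loop L₁ and inside forward path P₂; the two contributions cancel in the numerator, recovering (27).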


In general the self-loops make no contribution to the distances, since the distances result from the comparison of the binary outputs of the code with the outputs of the reference path.

In the case where we choose two different states, one as the input and the other as the output of the system, the question that arises is: can we obtain the same generating function as when we use the self-loop? In other words, can two states replace the self-loop state?

From Fig. 11, the state (01), labelled as state (b), is chosen as the input of the system, and the state (11), labelled as (d), is chosen as the output of the system.

The branch between the two states will be the reference path, and N will be used for inputs of "one". The branch between the state (11) and the state (01) is eliminated since it is used as the reference path.

Fig. 13(c) shows the new flow graph of the convolutional encoder.

T_{(bd)}(D, N, L) = X_d / X_b,  (37)

X_b = LD^2 X_c,  (38)

X_d = LND^2 X_d + LNX_c,  (39)

T_{(bd)}(D, N, L) = \frac{N}{(1 - NLD^2) D^2}.  (40)
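A quick check that (40) does follow from (37)-(39) (a sketch with illustrative names; X_c is treated as the free variable since both X_b and X_d are expressed in terms of it):

```python
from fractions import Fraction as F

def T_bd_from_system(D, N, L):
    # (38): X_b = L*D^2*X_c
    # (39): X_d = L*N*X_c / (1 - L*N*D^2)
    xc = F(1)
    xb = L * D**2 * xc
    xd = L * N * xc / (1 - L * N * D**2)
    return xd / xb                  # equation (37)

def closed_form(D, N, L):
    # Equation (40)
    return N / ((1 - N * L * D**2) * D**2)

D, N, L = F(1, 3), F(1, 2), F(1, 4)
assert T_bd_from_system(D, N, L) == closed_form(D, N, L)
```

Note that (40) has no D^5 factor in the numerator, unlike (27) and (34), which is the discrepancy discussed next.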

Comparing this generating function with those of the previous sections, it is clear that they are totally different from each other. This leads to the conclusion that a self-loop in the state diagram of a line code is a requirement for calculating the generating function of that code.

Proposition: The self-loop is fundamental in the calculation of the generating function of any code.

Proof: The calculation of the generating function of any code is based on the linearity of that code. In control systems, linearity is guaranteed when the output and the input of a system are related by a linear function. Linearity also requires that the input of the system be independent of the output, that is, the absence of any feedback.

Splitting the self-loop ensures that the input and the output are not connected to other states that might contribute to the generating function and thereby play a role similar to feedback in a system.

This fundamental property is absent in codes that do not have self-loops: all states then depend on and contribute to one another, which impacts the generating function.

5. CONCLUSION

It is clear from our investigation that calculating the generating function of a line code (as used for the error probability) with Mason's rule requires certain steps to be executed. Some of these steps can be avoided or modified, such as the choice of reference path, but one of them is a necessity and can never be avoided, namely the self-loop.

The problem that we experienced with the ternary line codes studied here is that most of them do not have self-loops, which makes it difficult, or even impossible, to calculate the generating function.

The case of the AMI line code is an exception in view of its simplicity and its state diagram. Following this investigation, it is necessary to look for tools other than Mason's rule for calculating the generating function, or to try to apply, to non-linear line codes, techniques similar to those used for non-linear codes in the context of trellis codes.

6. ACKNOWLEDGEMENT

This material is based upon work supported by the National Research Foundation under Grant number 2053408 and also by AAT.

7. REFERENCES

[1] A. Croisier, "Introduction to Pseudo-ternary Transmission Codes," IBM J. Res. Develop., vol. 14, pp. 354–367, July 1970.

[2] B. R. Alyaei and A. Glass, "Line coded modulation," in Proceedings of IEEE Signal Processing and Communication Systems, Omaha, NE, USA, pp. 1–4, Sept. 28–30, 2009.

[3] A. Viterbi and J. Omura, Principles of Digital Communication and Coding. McGraw-Hill Kogakusha Ltd, Tokyo, Japan, 1979.

[4] G. D. Forney, Jr., "The Viterbi algorithm," Proceedings of the IEEE, vol. 61, no. 3, pp. 268–278, Mar. 1973.

[5] A. J. Viterbi, "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm," IEEE Trans. Inform. Theory, vol. 13, no. 2, pp. 260–269, Apr. 1967.

[6] K. Ouahada and H. C. Ferreira, "Simulation study of the performance of ternary line codes under Viterbi decoding," IEE Proceedings–Communications, vol. 151, no. 5, pp. 409–414, Oct. 2004.

[7] M. Rouanne and D. J. Costello, Jr., "An algorithm for computing the distance spectrum of trellis codes," IEEE J. Sel. Areas Commun., vol. 7, no. 6, pp. 929–940, Aug. 1989.

[8] R. D. Wesel, "Reduced-state representations for trellis codes using constellation symmetry," IEEE Trans. Commun., vol. 52, no. 8, pp. 1302–1310, Aug. 2004.

[9] G. F. Franklin, J. D. Powell and A. Emami-Naeini, Feedback Control of Dynamic Systems. Addison-Wesley, Third Edition, 1994.

[10] J. B. Buchner, "Ternary Line Codes," Philips Telecommunications Review, vol. 34, no. 2, pp. 72–86, June 1976.

[11] F. H. Raven, Automatic Control Engineering. McGraw-Hill, Fourth Edition, Singapore, 1987.

[12] F. G. Stremler, Introduction to Communication Systems. Addison-Wesley, Third Edition, 1990.

[13] J. G. Proakis and M. Salehi, Communication Systems Engineering. Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1994.

[14] G. L. Stuber, Principles of Mobile Communication. Kluwer Academic Publishers, Boston, USA, 2002.

[15] A. Dholakia, Introduction to Convolutional Codes with Applications. Kluwer Academic Publishers, Boston, USA, 1994.

[16] A. Croft, R. Davison and M. Hargreaves, Engineering Mathematics: A Modern Foundation for Electronic, Electrical and Systems Engineers. Harlow: Addison-Wesley, 2nd ed., 1996.

[17] N. Q. Duc, "Line Coding Techniques for Baseband Digital Transmission," A.T.R., vol. 9, no. 1, pp. 3–17, 1975.

[18] J. M. Wozencraft and I. M. Jacobs, Principles of Communication Engineering. John Wiley & Sons, United States of America, First Edition, 1965.

[19] K. Ouahada and H. C. Ferreira, "Soft-decision decoding of some ternary line codes," Electronics Letters, vol. 39, no. 14, pp. 1068–1069, 2003.

[20] D. B. Keogh, "Finite-State Machine Descriptions of Filled Bipolar Codes," A.T.R., vol. 18, no. 2, pp. 3–12, June 1984.