A new iterative soft-output Viterbi algorithm and its application to the "dicode" channel

Mohamed SIALA **
Eckhard PAPPROTH **
Kaïs HAJTAIEB **
Ghassan Kawas KALEH **

* This paper is essentially Chapter X of the doctoral dissertation of M. Siala, defended on February 13, 1995, for obtaining the title of "Docteur de l'École nationale supérieure des télécommunications".
** École nationale supérieure des télécommunications, Département Communications and URA 820 of CNRS, 46, rue Barrault, F-75634 Paris Cedex 13, France.

ANN. TÉLÉCOMMUN., 50, no 9-10, 1995, pp. 798-816.

Abstract

We present an iterative soft-output decoding algorithm for serially concatenated coding systems. It has better performance than the conventional noniterative decoding algorithm. When applied together with an outer convolutional code to the dicode channel with partial response (1 − D), we obtain an additional coding gain of about 1 dB at a bit-error rate of 10^-4 after two iterations. This new algorithm can also be applied advantageously to satellite communication and fading channels.

Key words: Error correcting code, Concatenation, Viterbi decoding, Iteration, Weighting, Partial response code, Convolutional code, Channel capacity, Error rate.

Contents

I. Introduction.
II. System description.
III. Iterative decoding algorithm.
IV. Application to the magnetic recording channel.
V. Conclusion.
Appendix.
References (27 ref.).

I. INTRODUCTION

We are concerned with an iterative soft-output decoding algorithm for serially concatenated coding systems. This algorithm is useful in applications where good inner codes can be designed for the channel. It is composed of an inner decoder (or detector) and an outer decoder (or detector). During an iteration, the inner detector uses a modified version of the simplified soft-output Viterbi algorithm (SOVA) [1, 2, 3] that takes into account the intrinsic contributions of the outer detector at the previous iteration. The outer detector also uses a simplified SOVA. It exploits the fact that the sequence delivered by



the inner detector, if correct, must be a codeword of the outer encoder. Therefore, it provides enhanced soft outputs for the sequence received from the inner detector. The intrinsic contribution used by the inner detector at the next iteration is a function of the difference between the enhanced soft outputs delivered by the outer detector and the soft outputs delivered by the inner one. At the final iteration, the decoder uses the soft outputs produced by the inner detector to generate an estimate of the data sequence at the input of the outer encoder.

This iterative algorithm performs better than the conventional noniterative decoder for concatenated coding systems using the SOVA in the inner detector. It can be used whenever some form of serial concatenation is applied. This can include

• partial-response channels such as the magnetic recording channel [4, 5] or the frequency-selective channel;

• trellis-coded modulations for the Gaussian channel [6, 7], the very noisy Gaussian channel [8], the fading channel [9, 10] or the magnetic recording channel [4, 5];

• convolutional codes such as nonsystematic convolutional codes [12] or recursive systematic convolutional codes [13];

and all possible combinations thereof. When applied together with an outer convolutional code to the dicode channel (1 − D), an additional coding gain of about 1 dB is obtained at a bit-error rate of 10^-4 after two iterations.

The proposed algorithm can use other decoding algorithms which provide soft outputs instead of the SOVA. One can cite the optimum symbol-by-symbol maximum a posteriori (MAP) algorithm proposed by Bahl et al. [14] and the weighted-output Viterbi algorithm of Battail [2, 3].

A related work [13] proposed an iterative soft-output decoding algorithm for parallel concatenated coding systems. The concatenated coding system is composed of two punctured recursive systematic convolutional codes. Our serial concatenation scheme allows us to design a suitable inner code for the investigated channel.

Another work [15] proposed an iterative hard-output decoding algorithm for improving the standard serially concatenated coding system for deep-space missions. Our scheme is more advantageous because soft-output decoding guarantees a significant additional coding gain over hard-output decoding.

In Section II we describe the serially concatenated coding system together with our iterative detector. The theoretical considerations on which this iterative decoding algorithm is based are then presented in Section III. In Section IV, we illustrate the iterative algorithm by applying it to the dicode channel. We study in detail the precoded and nonprecoded dicode channel. We give for each of these channels a simple difference-metric SOVA for the inner detector. We show that the difference-metric algorithm for the nonprecoded dicode channel is simpler than that obtained for the precoded one. From this algorithm we derive an explicit expression for the probability distribution function of the metric difference. To warrant a good synchronization for our concatenated system, we propose a solution to bound the zero runlength at the output of the dicode channel. Based on simulation results, we show that the capacity of the precoded dicode channel is lower than that of the nonprecoded one. We show also that in terms of bit-error rate (BER), the nonprecoded dicode channel outperforms the precoded one. Finally, we give some simulation results when a punctured convolutional code of high rate is used as the outer code. We show that for a BER of 10^-4, the additional coding gain achieved by our iterative scheme after two iterations is nearly 1 dB.

II. SYSTEM DESCRIPTION

Figure 1 shows the block diagram of a concatenated coding system with iterative soft-output Viterbi detection. The binary data sequence {a_i} is fed into the outer encoder of rate R_o. The binary sequence {b_j} at the output of this encoder is first interleaved by a block interleaver. This interleaver is a sufficiently large Ξ × Π rectangular matrix which is filled row-by-row and read column-by-column. The resulting sequence, {c_k}, is then serially encoded into the sequence {d_l} by the inner encoder of rate R_i. This encoder can be replaced either by a precoded or nonprecoded partial-response channel or by an inner encoder followed by the channel itself. The sequence {d_l} is sent over the channel and produces at its output the noiseless sequence {x_l} and the noisy sequence {y_l}, where

x_l = d_l for the Gaussian channel,
x_l = d_l − d_{l−1} for the dicode channel,

and

y_l = x_l + n_l.

{n_l} denotes an additive white Gaussian noise sequence with zero mean and variance σ².
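The channel model and the block interleaver described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the function names, the interleaver dimensions, and the initial channel memory d_0 = −1 are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def block_interleave(seq, rows, cols):
    """Fill a rows x cols matrix row-by-row, read it column-by-column."""
    return np.asarray(seq).reshape(rows, cols).T.reshape(-1)

def block_deinterleave(seq, rows, cols):
    """Inverse operation: undo the column-by-column read."""
    return np.asarray(seq).reshape(cols, rows).T.reshape(-1)

def dicode_channel(c, sigma, d0=-1.0):
    """Map bits to bipolar d_l = 2c_l - 1, apply 1 - D, add AWGN.
    The initial channel memory d_0 = -1 is an assumption."""
    d = 2.0 * np.asarray(c, dtype=float) - 1.0
    x = d - np.concatenate(([d0], d[:-1]))      # x_l = d_l - d_{l-1}
    return x + sigma * rng.normal(size=x.shape)

bits = rng.integers(0, 2, size=12)
assert np.array_equal(
    block_deinterleave(block_interleave(bits, 3, 4), 3, 4), bits)
y = dicode_channel(bits, sigma=0.5)             # noisy channel observations
```

With sigma = 0, the outputs lie in {−2, 0, 2} and consecutive nonzero outputs alternate in sign, the dicode-channel property recalled later in Section IV.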

FIG. 1. -- Concatenated coding system with iterative decoding.


II.1. First iteration.

During the first stage of the first iteration, soft or reliability information Λ̂_k^(1), k = …, −1, 0, 1, …, associated with the estimated symbols ĉ_k^(1), k = …, −1, 0, 1, …, is computed by the modified version of the SOVA [1], taking into account the structure of the combined inner encoder and channel trellises. This algorithm is referred to as the inner SOVA. Let T_i denote the trellis which takes into account both the structure of the inner encoder and that of the channel. The metric adopted at this stage is the Euclidean metric

Σ_l (y_l − ξ_l)²,

where ξ_l is an arbitrary path symbol with the same position as y_l in T_i. The soft-information variables Λ̂_k^(1) represent, up to a multiplicative factor, an approximation of the log-likelihood ratios

ln [ P(c_k = "0" | {y_l}) / P(c_k = "1" | {y_l}) ].

The sequence of estimated symbols and the associated reliability information are deinterleaved using the reverse procedure of the (Ξ, Π)-block interleaver. The resulting sequences are denoted by {b̂_j^(1)} and {Γ̂_j^(1)}, respectively.

In the second stage of the first iteration, enhanced soft information Γ̃_j^(1), associated with the new estimated symbols b̃_j^(1), is computed by the simplified version of the SOVA, taking into account this time the structure of the outer encoder trellis. This algorithm is referred to as the outer SOVA. It fulfills the same function as the MAP filtering of [16]. The metric adopted by this stage is the simple correlation metric

−Σ_j (2β_j − 1)(2b̂_j^(1) − 1) Γ̂_j^(1),

where β_j ∈ {"0", "1"} is an arbitrary path symbol with the same position as b̂_j^(1) in the outer encoder trellis T_o. Once again, the enhanced soft-information variables Γ̃_j^(1) represent, up to a multiplicative factor, an approximation of the log-likelihood ratios

ln [ P(b_j = "0" | {b̂_j^(1)}, {Γ̂_j^(1)}) / P(b_j = "1" | {b̂_j^(1)}, {Γ̂_j^(1)}) ].

This second stage exploits the fact that the sequence {b̂_j^(1)}, if correct, must be a codeword sequence of the outer encoder. Denote respectively by {c̃_k^(1)} and {Λ̃_k^(1)} the interleaved versions of the sequences {b̃_j^(1)} and {Γ̃_j^(1)}. As explained later, the intrinsic contribution of the outer SOVA in comparison to the inner one is measured by the difference

Δ_k^(1) = (2c̃_k^(1) − 1) Λ̃_k^(1) − (2ĉ_k^(1) − 1) Λ̂_k^(1).


II.2. Second iteration.

In order to use the previous intrinsic contribution in the first stage of the second iteration, the new metric used by the inner SOVA is

Σ_l (y_l − ξ_l)² − Σ_k (2χ_k − 1) ε^(1) Δ_k^(1),

where χ_k and ξ_l are respectively the inputs and noiseless outputs of an arbitrary path in the inner trellis. For a given SNR, the positive parameter ε^(1) should be chosen to minimize the bit-error probability of the sequence {â_i} at the end of the iterative decoding process. The symbol ξ_l always has the same position as the received symbol y_l. The new soft-information variables generated by the inner SOVA and its associated decisions are denoted by Λ̂_k^(2) and ĉ_k^(2), respectively.

In the second stage of the second iteration, the metric used by the outer SOVA is

−Σ_j (2β_j − 1)(2b̂_j^(2) − 1) Γ̂_j^(2).

II.3. n-th iteration.

We denote by {Λ̂_k^(m)} and {ĉ_k^(m)} respectively the soft information and estimated sequences at the output of the inner SOVA after the first stage of the m-th iteration. We denote also by {Γ̃_j^(m)} and {b̃_j^(m)} respectively the enhanced soft information and estimated sequences at the output of the outer SOVA after the second stage of the same iteration. As for the first iteration, denote respectively by {c̃_k^(m)} and {Λ̃_k^(m)} the interleaved versions of the sequences {b̃_j^(m)} and {Γ̃_j^(m)}. At the m-th iteration, the intrinsic contribution of the outer SOVA in comparison with the inner one is defined as

Δ_k^(m) = (2c̃_k^(m) − 1) Λ̃_k^(m) − (2ĉ_k^(m) − 1) Λ̂_k^(m).

The metric used by the inner SOVA at iteration n is given by

Σ_l (y_l − ξ_l)² − Σ_k (2χ_k − 1) ( ε^(1) Δ_k^(1) + ε^(2) Δ_k^(2) + … + ε^(n−1) Δ_k^(n−1) ).

Once again, the positive parameters ε^(1), ε^(2), …, ε^(n−1) should be chosen to minimize the bit-error probability of the sequence {a_i} at the end of the iterative decoding process.

II.4. Final iteration.

At the final, f-th iteration of the iterative decoding process, an outer conventional Viterbi decoder delivers an estimate, {â_i}, of the information sequence. The metric used by this decoder is

−Σ_j (2β_j − 1)(2b̂_j^(f) − 1) Γ̂_j^(f).
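The two central quantities of this section, the intrinsic contribution Δ_k^(m) and the biased inner-SOVA metric, can be sketched numerically. This is a minimal illustration under assumed toy values; the function names are not the authors':

```python
import numpy as np

def intrinsic(c_tilde, lam_tilde, c_hat, lam_hat):
    """Delta_k^(m) = (2c~_k - 1)*Lam~_k - (2c^_k - 1)*Lam^_k (Section II.3)."""
    return (2 * c_tilde - 1) * lam_tilde - (2 * c_hat - 1) * lam_hat

def inner_metric(y, xi, chi, bias):
    """Biased inner-SOVA path metric: sum_l (y_l - xi_l)^2 - sum_k (2chi_k - 1) bias_k,
    where bias_k = eps^(1) Delta_k^(1) + ... + eps^(n-1) Delta_k^(n-1)."""
    y, xi, chi, bias = map(np.asarray, (y, xi, chi, bias))
    return np.sum((y - xi) ** 2) - np.sum((2 * chi - 1) * bias)

# Outer and inner SOVA agree on c_k = "1", outer more confident: Delta_k > 0,
# so a path with chi_k = 1 gets a smaller (better) metric than without the bias.
delta = intrinsic(c_tilde=1, lam_tilde=3.0, c_hat=1, lam_hat=1.0)
eps = 0.5
m_biased = inner_metric([1.8], [2.0], [1], [eps * delta])
m_unbiased = inner_metric([1.8], [2.0], [1], [0.0])
assert delta > 0 and m_biased < m_unbiased
```

Since the SOVA selects the path with the least metric, a positive bias term for χ_k = 1 favors exactly the branches the outer SOVA supports, as explained in Section III.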


III. ITERATIVE DECODING ALGORITHM

III.1. Inner detector.

III.1.1. Inner detector metric.

Assume for now that the sequence {c_k} at the input of the inner encoder is finite with length K. We denote by L the length of the corresponding noiseless and noisy channel output sequences. We write interchangeably {c_k} or c_1^K = (c_1, c_2, …, c_K), {x_l} or x_1^L = (x_1, x_2, …, x_L), and {y_l} or y_1^L = (y_1, y_2, …, y_L).

The SOVA is based on the metric

−2σ² ln ( P(ξ_1^L | y_1^L) ),

or equivalently

−2σ² ln ( p(y_1^L | ξ_1^L) P(ξ_1^L) ).

If the inputs of the inner encoder are independent and governed by the probabilities

Pk(c) = Pr{ck = c},

then P(ξ_1^L) = Π_{k=1}^K P_k(χ_k). The channel is assumed to be Gaussian. Therefore, the metric used by the SOVA can be written as

(1) Σ_l (y_l − ξ_l)² − 2σ² Σ_k ln ( P_k(χ_k) ).

Let c ∈ {"0", "1"}. We denote by c̄ the complementary element of c in {"0", "1"}. Consider two arbitrary paths, P and P′, in the inner trellis T_i with input sequences χ_1^K and χ′_1^K, and noiseless output sequences ξ_1^L and ξ′_1^L. If the metric difference

(2) [ Σ_l (y_l − ξ_l)² − 2σ² Σ_k ln P_k(χ_k) ] − [ Σ_l (y_l − ξ′_l)² − 2σ² Σ_k ln P_k(χ′_k) ]

is negative, then the SOVA chooses the path P, otherwise it chooses P′. As a consequence, the metric in (1) can be replaced by the more symmetric

Σ_l (y_l − ξ_l)² − σ² Σ_k ln ( P_k(χ_k) / P_k(χ̄_k) ),

without any change in the metric difference of (2). This metric is rewritten as

(3) Σ_l (y_l − ξ_l)² − Σ_k (2χ_k − 1) [ σ² ln ( P_k(c_k = "1") / P_k(c_k = "0") ) ],

and adopted in the rest of the paper. The binary inputs of the inner encoder take the values "0" and "1" equiprobably. Therefore, the metric used in the first stage of the first iteration is simply


Σ_l (y_l − ξ_l)².
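The claim that replacing metric (1) by the symmetric form (3) leaves every pairwise metric difference unchanged can be checked numerically: the two forms differ only by a term that is the same for every path. A sketch with hypothetical per-symbol priors p_k = P_k(c_k = "1") and a toy rate-1 bipolar mapping (all values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def metric_eq1(y, xi, chi, p1, sigma2):
    """Eq. (1): Euclidean term minus 2*sigma^2 * sum_k ln P_k(chi_k)."""
    p = np.where(chi == 1, p1, 1.0 - p1)
    return np.sum((y - xi) ** 2) - 2.0 * sigma2 * np.sum(np.log(p))

def metric_eq3(y, xi, chi, p1, sigma2):
    """Eq. (3): symmetric form with weights (2chi_k - 1) on the log-ratios."""
    llr = np.log(p1 / (1.0 - p1))
    return np.sum((y - xi) ** 2) - np.sum((2 * chi - 1) * sigma2 * llr)

y = rng.normal(size=6)
p1 = rng.uniform(0.1, 0.9, size=6)                   # priors P_k(c_k = "1")
chi_a, chi_b = rng.integers(0, 2, size=(2, 6))       # two arbitrary input paths
xi_a, xi_b = 2.0 * chi_a - 1.0, 2.0 * chi_b - 1.0    # toy bipolar outputs

diff1 = metric_eq1(y, xi_a, chi_a, p1, 0.8) - metric_eq1(y, xi_b, chi_b, p1, 0.8)
diff3 = metric_eq3(y, xi_a, chi_a, p1, 0.8) - metric_eq3(y, xi_b, chi_b, p1, 0.8)
assert abs(diff1 - diff3) < 1e-9   # identical path decisions with either metric
```

Per symbol, metric (1) minus metric (3) equals −σ² ln(P_k("1") P_k("0")) regardless of χ_k, which is why the difference cancels between any two paths.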

At the n-th iteration, the intrinsic contributions of the outer SOVA are used to compute the log-ratios

(4) ln ( P_k(c_k = "1") / P_k(c_k = "0") )

in the metric of (3). For instance, if the intrinsic contributions of the outer SOVA are favorable to c_k = "1", then the log-ratio in (4) is positive. For the paths where χ_k = "1", the Euclidean metric in (3) is reduced by

σ² ln ( P_k(c_k = "1") / P_k(c_k = "0") ).

For the other paths, where χ_k = "0", the Euclidean metric is increased by the same term. Recalling that the SOVA chooses the path with least metric, it follows that the paths with χ_k = "1" are favored in comparison with the other remaining paths.

III.1.2. Computation of reliability information.

The SOVA is an augmented Viterbi algorithm which outputs the maximum-likelihood sequence together with a reliability measure for each decision. The SOVA can only be used in trellises with two branches ending in each state. For trellises where this condition is not satisfied, we propose a modified version of this algorithm.

The soft-output Viterbi algorithm: We use the simplest version of the SOVA [1]. The soft information Λ̂_k^(m) corresponding to the k-th decision ĉ_k^(m) is first initialized to infinity. Let P and P′ denote the two paths merging after their n-th inputs into the same arbitrary state in the inner detector trellis. We denote by χ_1^n and χ′_1^n their inputs before the merging and by ξ_1^λ and ξ′_1^λ their corresponding noiseless channel outputs. We denote by M_n and M′_n the metrics accumulated in paths P and P′ until the n-th input. According to (3), the metrics M_n and M′_n are expressed as

(5) M_n = Σ_{l=1}^{λ} (y_l − ξ_l)² − Σ_{k=1}^{n} (2χ_k − 1) σ² ln ( P_k(c_k = "1") / P_k(c_k = "0") ),

(6) M′_n = Σ_{l=1}^{λ} (y_l − ξ′_l)² − Σ_{k=1}^{n} (2χ′_k − 1) σ² ln ( P_k(c_k = "1") / P_k(c_k = "0") ).

Let P be the path with the smaller metric. The higher the metric difference

(7) ΔM_n = M′_n − M_n,

the more reliable is the survivor path P. Let (n − δ) denote the position of the inner encoder input at which P′ deviates from P. For k = 1, 2, …, n − δ − 1, we have χ_k = χ′_k. For k = n − δ, …, n, denote by e the


number of binary inputs verifying χ_k ≠ χ′_k, and by k_1, k_2, …, k_e their indices. Assume that we have stored the soft information Λ̂_k^(m) of the previous decisions with the survivor path P. Under the assumption that path P has been selected, we can update this soft information for the e differing decisions on this path according to

(8) Λ̂_k^(m) ← min(Λ̂_k^(m), ΔM), k = k_1, k_2, …, k_e.

Assume now that P is the maximum-likelihood path. Let (λ − γ) denote the position of the noiseless channel output at which P′ deviates from P. For high SNR, it is highly probable that the portion ξ_{λ−γ}, …, ξ_λ is equal to the actual noiseless sequence portion x_{λ−γ}, …, x_λ. Also, with high probability, the Euclidean distance between P and P′ is the free Euclidean distance d^i_free of the inner trellis T_i. We therefore can write

y_l = ξ_l + n_l, l = λ − γ, …, λ,

Σ_{l=λ−γ}^{λ} (ξ_l − ξ′_l)² = (d^i_free)².

Based on these equalities and using (7) together with (5) and (6), we have for the first iteration

ΔM = Σ_{l=λ−γ}^{λ} ( (ξ_l − ξ′_l + n_l)² − n_l² )

   = Σ_{l=λ−γ}^{λ} (ξ_l − ξ′_l)² + 2 Σ_{l=λ−γ}^{λ} (ξ_l − ξ′_l) n_l

   = (d^i_free)² + ν,

where ν is Gaussian with variance 4 (d^i_free)² σ². It follows from the previous equation that ΔM is Gaussian with mean (d^i_free)² and variance 4 (d^i_free)² σ². Making abstraction of the min operator, a proper choice to achieve

E[Λ̂_k^(m)] = 1 for high SNR is to replace (8) by (see [1])

(9) Λ̂_k^(m) ← min(Λ̂_k^(m), ΔM/N_i), k = k_1, k_2, …, k_e,

where

N_i = (d^i_free)².

The average of ΔM/N_i is 1 and its variance is 4σ²/(d^i_free)². For an inner code with bipolar output alphabet and no correction capability, (d^i_free)² is equal to 4 when the Gaussian channel is considered. Making always abstraction of the min operator, we can say that the noise variance at the output of the inner detector is reduced by the factor (d^i_free)².
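The Gaussian statistics of ΔM claimed above (mean (d^i_free)², variance 4(d^i_free)²σ², hence E[ΔM/N_i] = 1) are easy to verify by simulation. A sketch with an assumed shortest error event ξ − ξ′ = (2, −2) of energy 8:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.7
diff = np.array([2.0, -2.0])        # xi_l - xi'_l along the error event
d2 = np.sum(diff ** 2)              # (d_free^i)^2 = 8 for this event
n = sigma * rng.normal(size=(200_000, diff.size))
# Delta M = sum_l ((xi_l - xi'_l + n_l)^2 - n_l^2), cf. the derivation above
dM = np.sum((diff + n) ** 2 - n ** 2, axis=1)

assert abs(dM.mean() - d2) < 0.1                    # mean ~ (d_free^i)^2
assert abs(dM.var() - 4 * d2 * sigma ** 2) < 0.5    # var ~ 4 (d_free^i)^2 sigma^2
assert abs((dM / d2).mean() - 1.0) < 0.05           # E[Delta M / N_i] ~ 1
```

The normalization by N_i = (d^i_free)² thus yields reliabilities with unit mean, which is what the outer detector assumes in Section III.2.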

The modified soft-output Viterbi algorithm: Let P^h, h = 1, 2, …, H, denote the H paths merging after their n-th inputs into the same arbitrary state in the inner detector trellis. Let M^h denote the metric associated with path P^h. We assume that this metric is an increasing function of h. Let δ denote the decision delay of the inner detector and χ_k^h denote the k-th input corresponding to path P^h. To update the soft reliability information corresponding to the input χ_k^1, k = n − δ + 1, …, n, of the minimum-metric path P^1, the modified SOVA executes the following operations:

• Among the paths P^h, h = 2, …, H, find the one whose metric M^h is the closest to M^1 and whose input χ_k^h is different from χ_k^1. Denote by P″ this path and by M″ its cumulated metric.

• The soft reliability information Λ̂_k^(m) associated to the input χ_k^1 at the first stage of the first iteration is updated according to

Λ̂_k^(m) ← min(Λ̂_k^(m), ΔM/N_i),

where ΔM = M″ − M^1.

To simplify this modified algorithm, one can consider only the path P^1 with minimum metric and the path P^2 with the closest metric to M^1.

III.2. Outer detector.

For the sake of simplicity we restrict ourselves to outer trellises with two branches ending in each state. More precisely, we consider a rate-(n − 1)/n outer code punctured from a rate-1/2 convolutional code. This code uses the trellis of the rate-1/2 convolutional code and therefore has two branches ending in each state.

As we have said before, the metric used by the outer detector during the m-th iteration is [1]

(10) −Σ_j (2β_j − 1)(2b̂_j^(m) − 1) Γ̂_j^(m),

where, as usual, β_j ∈ {"0", "1"} is an arbitrary path symbol with the same position as b̂_j^(m) in the outer encoder trellis T_o. We recall from (9) that for high SNR we have asymptotically E[Γ̂_j^(1)] = 1.

Denote by P the maximum-likelihood path in the outer trellis at the second stage of the m-th iteration. For the sake of simplicity, we denote by β_j the maximum-likelihood path decision b̃_j^(m), and by P′ another path that intervenes in the computation of Γ̃_j^(m). We assume that both paths merge after the J-th output of the outer encoder. We denote by β_1^J and β′_1^J their outputs and by M_J and M′_J their cumulated metrics until this J-th output. According to (10), these metrics are expressed as

(11) M_J = −Σ_{j=1}^{J} (2β_j − 1)(2b̂_j^(m) − 1) Γ̂_j^(m),

(12) M′_J = −Σ_{j=1}^{J} (2β′_j − 1)(2b̂_j^(m) − 1) Γ̂_j^(m).

Since P′ intervenes in the determination of the enhanced soft information Γ̃_j^(m), we are sure that β′_j ≠ β_j. The soft


information Γ̃_j^(m) is updated at the end of the J-th output according to

(13) Γ̃_j^(m) ← min(Γ̃_j^(m), ΔM),

where

(14) ΔM = M′_J − M_J.

Let j_1, j_2, …, j_e denote the indices of the binary output symbols of the sequence β_1^J which differ from those of β′_1^J. We assume that the index j under investigation is equal to j_1. Based on (11) and (12), the metric difference in (14) can be expressed as

ΔM = 2 Σ_{ε=1}^{e} (2β_{j_ε} − 1)(2b̂_{j_ε}^(m) − 1) Γ̂_{j_ε}^(m).

For high SNR, we have with high probability β_{j_ε} = b̂_{j_ε}^(m) for ε = 1, 2, …, e. Also, with high probability, e = d^o_free, where d^o_free is the free Hamming distance of the outer code. Based on these considerations, the previous metric difference becomes

ΔM = 2 Σ_{ε=1}^{d^o_free} Γ̂_{j_ε}^(m).

Recalling that E[Γ̂_j^(1)] is asymptotically equal to 1 as the SNR increases, we should replace (13) by

(15) Γ̃_j^(m) ← min(Γ̃_j^(m), ΔM/N_o),

with

N_o = 2 d^o_free,

in order to have asymptotically E[Γ̃_j^(1)] = 1. Making abstraction of the min operator in (15) and assuming perfect deinterleaving, one can see that the variance of Γ̃_j^(m) is equal to that of Γ̂_j^(m) divided by

df~ Assume now that the j-th output fly of the maximum-

likelihood path T' is equal to bj, the decision delivered by the inner detector. The previous metric difference can be written as

~'~ df~ F ) - - rr~) A.~e=2

A M ~ N o = + df~

Once again, making abstraction of the rain operator in (15) and using the previous equation, we can write

(16) ~Jm) _ ;J,~) = ; - F ~ df~

In this case, the intrinsic information added by the outer SOVA is, up to a positive multiplicative factor, equal to

Assume for instance that the soft information Γ̂_j^(m) corresponding to the decision b̂_j^(m) delivered by the inner SOVA is very small. This means that this decision is unreliable. Assume also that the soft information


Γ̂_{j_ε}^(m), ε = 2, …, d^o_free, is larger than 1. This implies that the decisions b̂_{j_ε}^(m), ε = 2, …, d^o_free, delivered by the inner SOVA are reliable. In this case the portion of the maximum-likelihood path P which differs from the other path P′ is approved. Therefore the decision b̂_j^(m), which was estimated to be not reliable by the inner SOVA, is now supported by the outer SOVA. Therefore its new reliability Γ̃_j^(m) is greater than its previous one. This can be seen from (16) and (17) because in this example Γ̂_{j_ε}^(m) − Γ̂_j^(m) > 0, ε = 2, …, d^o_free.

Assume now that the soft information Γ̂_j^(m) delivered by the inner SOVA is very large. This indicates that the inner SOVA estimates the corresponding decision, b̂_j^(m), to be reliable. Assume also that the soft information Γ̂_{j_ε}^(m), ε = 2, …, d^o_free, is very small. This signifies that the decisions b̂_{j_ε}^(m), ε = 2, …, d^o_free, are estimated by the inner SOVA to be not reliable. It follows that Γ̂_{j_ε}^(m) − Γ̂_j^(m) < 0, ε = 2, …, d^o_free. Using (16), one can conclude that the new reliability Γ̃_j^(m) of b̂_j^(m) is lower than the previous one, Γ̂_j^(m). This expresses the doubt that the outer SOVA has about the high reliability assigned by the inner SOVA to the decision b̂_j^(m).

Denote by S_k the section in the inner encoder trellis for which one of the inputs, say χ_k, corresponds to b̂_j^(m). Assume that b̃_j^(m) = b̂_j^(m). Intuitively, we find that if the reliability of b̂_j^(m) is enhanced (respectively, degraded) by the outer SOVA, then the branches of S_k with input χ_k equal to (respectively, different from) b̂_j^(m) should be favored in comparison with the other remaining branches. Based on this discussion and the considerations at the end of Section III.1.1, one can heuristically replace the metric in (3) by the one given in Section II.3 during the n-th iteration, i.e.,

(18) Σ_l (y_l − ξ_l)² − Σ_k (2χ_k − 1) ( ε^(1) Δ_k^(1) + ε^(2) Δ_k^(2) + … + ε^(n−1) Δ_k^(n−1) ),

where

(19) Δ_k^(m) = (2c̃_k^(m) − 1) Λ̃_k^(m) − (2ĉ_k^(m) − 1) Λ̂_k^(m)

and ε^(1), ε^(2), …, ε^(n−1) are positive parameters that should be optimized to reduce the bit-error probability of the sequence {â_i} at the end of the iterative decoding process.

Based on (18) and the previous discussion, we can say that the branch of section S_k in the inner encoder trellis with input χ_k is favored (respectively, not favored) if

(20) Q_k^(n) = (2χ_k − 1) ( ε^(1) Δ_k^(1) + ε^(2) Δ_k^(2) + … + ε^(n−1) Δ_k^(n−1) )

is positive (respectively, negative). For the n-th iteration we have the following possibilities:

1. c̃_k^(n−1) = ĉ_k^(n−1):

• Λ̃_k^(n−1) > Λ̂_k^(n−1) (i.e., the decision ĉ_k^(n−1) is favored): If χ_k = c̃_k^(n−1) = ĉ_k^(n−1) (respectively, χ_k ≠ c̃_k^(n−1) = ĉ_k^(n−1)), then, based on (19), it follows that (2χ_k − 1) Δ_k^(n−1) is positive (respectively, negative). This means that the value of Q_k^(n) used in the n-th iteration is increased (respectively, decreased) in favor of the branch with χ_k = c̃_k^(n−1) = ĉ_k^(n−1) as input.

• Λ̃_k^(n−1) < Λ̂_k^(n−1) (i.e., the decision ĉ_k^(n−1) is not favored): If χ_k = c̃_k^(n−1) = ĉ_k^(n−1) (respectively, χ_k ≠ c̃_k^(n−1) = ĉ_k^(n−1)), then, based on (19), it follows that (2χ_k − 1) Δ_k^(n−1) is negative (respectively, positive). This means that the value of Q_k^(n) used in the n-th iteration is decreased (respectively, increased) in favor of the branch with χ_k ≠ c̃_k^(n−1) = ĉ_k^(n−1) as input.

2. c̃_k^(n−1) ≠ ĉ_k^(n−1) (i.e., the outer SOVA is not in favor of the inner SOVA decision ĉ_k^(n−1)): Δ_k^(n−1) always has the sign of 2c̃_k^(n−1) − 1. If χ_k = c̃_k^(n−1) (respectively, χ_k ≠ c̃_k^(n−1)), then, based on (19), it follows that (2χ_k − 1) Δ_k^(n−1) is positive (respectively, negative). This means that the value of Q_k^(n) used in the first stage of the n-th iteration is increased (respectively, decreased) in favor of the branches with χ_k = c̃_k^(n−1) as input.
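Case 2 above can be verified directly from (19): when the outer and inner decisions disagree, Δ_k^(n−1) = (2c̃_k^(n−1) − 1)(Λ̃_k^(n−1) + Λ̂_k^(n−1)), so it carries the sign of 2c̃_k^(n−1) − 1 for any positive reliabilities. A minimal check (the function name is assumed):

```python
def delta(c_tilde, lam_tilde, c_hat, lam_hat):
    """Eq. (19): Delta = (2c~ - 1)*Lam~ - (2c^ - 1)*Lam^."""
    return (2 * c_tilde - 1) * lam_tilde - (2 * c_hat - 1) * lam_hat

# Disagreement c~ != c^: the sign of Delta is that of 2c~ - 1,
# whatever the (positive) reliabilities on either side.
for lam_t in (0.1, 1.0, 5.0):
    for lam_h in (0.1, 1.0, 5.0):
        assert delta(1, lam_t, 0, lam_h) > 0
        assert delta(0, lam_t, 1, lam_h) < 0
```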

IV. APPLICATION TO THE MAGNETIC RECORDING CHANNEL

We denote by D the unit time delay operator. In magnetic recording, the bipolar-input channel is typically described by a class-4 partial response (PR4) channel with transfer polynomial 1 − D², distorted by an additive white Gaussian noise [17]. The PR4 channel can be considered as two time-interleaved dicode partial-response channels, each with polynomial 1 − D [18, 19]. A common method of applying the SOVA to the PR4 channel is to deinterleave the samples of the readback waveform to form two streams of samples: a stream made up of the samples at odd time indices and a stream made up of the samples at even time indices. Then two soft-output detectors matched to the 1 − D channel can be used, one for each stream. In practice, one might even use just one detector in a pipelined fashion [20].
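The decomposition of the PR4 channel 1 − D² into two interleaved 1 − D streams can be checked directly; an illustrative sketch whose indexing conventions are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 2 * rng.integers(0, 2, size=20) - 1     # bipolar channel inputs d_l
x_pr4 = d[2:] - d[:-2]                      # PR4 output x_l = d_l - d_{l-2}

even, odd = x_pr4[0::2], x_pr4[1::2]        # deinterleave the output samples
d_even, d_odd = d[0::2], d[1::2]            # the two input substreams
# each substream sees a plain dicode (1 - D) channel
assert np.array_equal(even, d_even[1:] - d_even[:-1])
assert np.array_equal(odd, d_odd[1:] - d_odd[:-1])
```

This is why one dicode soft-output detector per stream (or one detector pipelined over both) suffices for the PR4 channel.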

IV.I. The nonprecoded dicode channel.

The block diagram of the nonprecoded dicode channel is shown in Figure 2 [5]. The binary sequence {c_l} is transformed into the bipolar sequence {d_l}, where d_l = 2c_l − 1. This sequence of bipolar data symbols at the input of the dicode channel produces at its output the noiseless sequence {x_l} and the noisy sequence {y_l}, where

x_l = d_l − d_{l−1},

y_l = x_l + n_l.


FIG. 2. -- Block diagram of the nonprecoded dicode channel.

The noiseless dicode channel is described by the two-state trellis diagram of Figure 3 [5]. Its states are labeled by "+" and "−" in accordance with the input d_{l−1} = +1 or −1 stored in the unit-delay cell at the time when d_l is applied at the channel input and x_l is observed at the output. The symbol d_l determines the next state in the trellis. Branches between two states are labeled by the noiseless output signal x_l = d_l − d_{l−1}, which takes its values in {−2, 0, 2}. The dicode channel forces consecutive nonzero noiseless outputs to have alternating polarity [5].


FIG. 3. - Trellis diagram of the nonprecoded dicode channel.

Diagramme en treillis du canal 'dicode' non pré-codé.

The free Euclidean distance, d_free^2, of the dicode channel is equal to 8 [5]. We recall that the energy of the dicode channel partial-response polynomial (1 - D) is equal to 2. Therefore, for the same channel symbol energy, the free Euclidean distance of the dicode channel is equal to that of the Gaussian channel.

Since the channel is equivalent to an encoder of rate 1, we work interchangeably with the indices k and l. If the inputs d_l are assumed to be odd integers in the range -(m - 1) ≤ d_l ≤ +(m - 1), then at high SNR the symbol error probability in the estimation of the sequence {c_l} is approximated by [21]

In magnetic recording we use exclusively bipolar inputs. Therefore, m = 2 and the previous expression simplifies to



IV.2. The precoded dicode channel.

The block diagram of the precoded dicode channel is shown in Figure 4 [5]. With ⊕ denoting addition in GF(2), the precoder for the dicode channel performs the function

c'_l = c_l ⊕ c'_{l-1}.

FIG. 4. - Block diagram of the precoded dicode channel.

Diagramme synoptique du canal 'dicode' pré-codé.


As mentioned by Forney [21], precoding is also used to prevent infinite error propagation in the recovery of the transmitted sequence {c_l}. At high SNR, the symbol error probability in the estimation of the sequence {c_l} is approximated by [21]

where m is the size of the odd-integer alphabet at the input of the dicode channel. For our case m = 2, and the previous expression of Pr(e) is the same as that obtained for the nonprecoded dicode channel. Therefore, at high SNR, the precoded and nonprecoded dicode channels are equivalent in terms of symbol error probability. This is not the case if the size, m, of the input alphabet is greater than or equal to 4.

The bipolar sequence, {d_l = 2c'_l - 1}, corresponding to the precoded sequence, {c'_l}, is then serially transmitted over the dicode channel. Precoding does not increase the number of trellis states because of the equivalence of the signals stored simultaneously in the delay cells of the precoder and the dicode channel [5]. Figure 5 shows the two-state trellis diagram of the precoded dicode channel. The importance of the precoder is that the noiseless dicode channel produces the signals

x_l = 0 for c_l = 0,
x_l = ±2 for c_l = 1.
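This property is easy to check numerically. The sketch below (our own illustration, assuming the 1/(1 ⊕ D) precoder c'_l = c_l ⊕ c'_{l-1} with initial state c'_{-1} = 0, i.e. d_{-1} = -1) verifies that the noiseless output is zero exactly where the input bit is zero.

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.integers(0, 2, 1000)           # input bits {c_l}
cp = np.bitwise_xor.accumulate(c)      # precoder: c'_l = c_l XOR c'_{l-1}
d = 2 * cp - 1                         # bipolar precoded symbols {d_l}
x = np.diff(d, prepend=-1)             # noiseless dicode output

# with precoding, x_l = 0 exactly when c_l = 0 (and +/-2 when c_l = 1)
assert np.array_equal(x == 0, c == 0)
```

The check holds because c_l = 0 leaves the precoder state unchanged (so d_l = d_{l-1} and x_l = 0), while c_l = 1 flips it (so x_l = ±2).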


FIG. 5. - Trellis diagram of the precoded dicode channel.

Diagramme en treillis du canal 'dicode' pré-codé.

IV.3. Metric difference soft-output Viterbi detection for the dicode channel.

A simplified Viterbi detection algorithm for the dicode channel was first proposed by Ferguson [18] (see also [19] and [20]). Denote by M_l("+") and M_l("-") the accumulated state metrics corresponding to the survivor paths of states s_l = "+" and s_l = "-", respectively. This algorithm is based on the difference metric

DM_l ≜ (M_l("+") - M_l("-")) / 4.

It is described in Table I. Note that only three of the four possible survivor extensions can occur. This difference-metric algorithm was used in [22] to produce soft reliability information at the output of the precoded dicode channel.

Denote by 4ΔM_l(s_l), s_l ∈ {"+", "-"}, the difference between the metrics of the sequences which are the extensions of the survivor sequences of states s_{l-1} = "+" and s_{l-1} = "-" to state s_l. We have

(22) ΔM_l("+") = DM_{l-1} + y_{l-1} - 1,

(23) ΔM_l("-") = DM_{l-1} + y_{l-1} + 1.

Therefore, the runs of zero symbols in the binary sequence, {c_l}, at the input of the precoded dicode channel produce the same runs of zero symbols in the dicode channel noiseless sequence {x_l}. Consequently, to avoid the occurrence of unlimited runs of zeros in the sequence {x_l}, and thus to guarantee satisfactory operation of timing and gain control from the received signal, one should simply avoid the occurrence of unlimited runs of zeros in the sequence {c_l}. This property is not guaranteed by the nonprecoded dicode channel.

For the nonprecoded dicode channel, the two survivor paths contain the same bipolar input symbols in all but the positions following and including the diverging position. Let Λ denote the index of the diverging position. According to (9), the reliability information about the survivor sequence associated with state s_l is updated using the following assignment [1, 22]

L_γ^(s_l) ← min(L_γ^(s_l), 4ΔM_l(s_l)/N_i),  γ = Λ, Λ + 1, ..., l - 1,

where now N_i = d_free^2 = 8.



TABLE I. - Difference metric algorithm for the (precoded or not) dicode channel [18].

Algorithme de calcul de la différence des métriques pour le canal 'dicode', pré-codé ou non.

Condition                                   | Update
DM_{l-1} ≤ -y_{l-1} - 1                     | DM_l = -y_{l-1} - 1
-y_{l-1} - 1 < DM_{l-1} ≤ -y_{l-1} + 1      | DM_l = DM_{l-1}
-y_{l-1} + 1 < DM_{l-1}                     | DM_l = -y_{l-1} + 1
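One can observe that Table I amounts to clamping DM_{l-1} to the interval [-y_{l-1} - 1, -y_{l-1} + 1]. As a sanity check, the sketch below (function names are ours, not from [18]) compares this one-variable recursion with the full two-state Viterbi metric recursion.

```python
import numpy as np

def dm_table1(y_seq):
    """Table I recursion: DM_l is DM_{l-1} clamped to
    the interval [-y_{l-1} - 1, -y_{l-1} + 1]."""
    dm, out = 0.0, []
    for y in y_seq:
        dm = min(max(dm, -y - 1.0), -y + 1.0)
        out.append(dm)
    return out

def dm_viterbi(y_seq):
    """Reference: accumulated state metrics of the two-state trellis,
    with DM_l = (M_l('+') - M_l('-')) / 4."""
    mp = mm = 0.0
    out = []
    for y in y_seq:
        mp, mm = (min(mp + y**2, mm + (y - 2.0) ** 2),
                  min(mp + (y + 2.0) ** 2, mm + y**2))
        out.append((mp - mm) / 4)
    return out

rng = np.random.default_rng(0)
y = rng.choice([-2.0, 0.0, 2.0], 500) + rng.standard_normal(500)
assert np.allclose(dm_table1(y), dm_viterbi(y))
```

The equivalence follows from the case analysis of the Appendix (with R = 0): the thresholds -y_{l-1} ± 1 are exactly the points where the survivor selections into states "+" and "-" change.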

For the precoded dicode channel the two survivor paths contain the same bipolar input symbols in all but the diverging position [22]. Therefore, the detector only needs to store one of the survivor paths along with the index of the diverging position. Moreover, it only needs to update the reliability information corresponding to the diverging position Λ and the ending position l - 1.

Next, we give a simplified Viterbi detection algorithm for both the precoded and nonprecoded dicode channel when the intrinsic contribution of the outer SOVA given in (20) is taken into account in the inner SOVA. For this purpose, we use the following notation:

• The intrinsic contribution of the outer SOVA for the n-th iteration, given by (20), is denoted by R_l^(n).

• The accumulated state metrics corresponding to the survivor paths of states s_l = "+" and s_l = "-" at the n-th iteration are denoted by M_l^(n)("+") and M_l^(n)("-"), respectively.

• The difference metric at the n-th iteration is defined as

DM_l^(n) ≜ (M_l^(n)("+") - M_l^(n)("-")) / 4.

• At the n-th iteration, the difference between the metrics of the sequences which are the extensions of the survivor sequences of states s_{l-1} = "+" and s_{l-1} = "-" to state s_l is denoted by 4ΔM_l^(n)(s_l), s_l ∈ {"+", "-"}.

IV.3.1. Nonprecoded dicode channel with intrinsic contribution of the outer SOVA.

When the intrinsic contribution of the outer SOVA is taken into account in the inner SOVA, the simplified Viterbi detection algorithm for the nonprecoded dicode channel becomes:

Proposition 1. The inner SOVA detector for the nonprecoded dicode channel can be simplified using the difference metric algorithm described in Table II. Note that only three extensions are allowed for the inner detector when the intrinsic contribution of the outer SOVA is used. Note also that the conditions for these extensions are the same as the ones used in the difference metric algorithm when the intrinsic contribution of the outer SOVA is not used. The metric difference Δ_l^(n)(s_l) is given by

(24) Δ_l^(n)("+") = DM_{l-1}^(n) + y_{l-1} - 1,

Δ_l^(n)("-") = DM_{l-1}^(n) + y_{l-1} + 1.

The proof of this proposition is deferred to the Appendix.

TABLE II. - Difference metric algorithm for the nonprecoded dicode channel using the intrinsic contribution of the outer SOVA.

Algorithme de calcul de la différence des métriques pour le canal 'dicode' non pré-codé utilisant la contribution intrinsèque du décodeur de Viterbi à sortie pondérée extérieur.

Condition                                       | Update
DM_{l-1}^(n) ≤ -y_{l-1} - 1                     | DM_l^(n) = -y_{l-1} - 1 - R_{l-1}^(n-1)
-y_{l-1} - 1 < DM_{l-1}^(n) ≤ -y_{l-1} + 1      | DM_l^(n) = DM_{l-1}^(n) - R_{l-1}^(n-1)
-y_{l-1} + 1 < DM_{l-1}^(n)                     | DM_l^(n) = -y_{l-1} + 1 - R_{l-1}^(n-1)
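Assuming the update rules of Table II as reconstructed above (each Table I update shifted by the intrinsic term R_{l-1}^(n-1)), the recursion can be checked against a direct two-state metric recursion whose branches into state "+" carry the term -2R_{l-1}^(n-1) and into state "-" the term +2R_{l-1}^(n-1), as in the Appendix. Function names are ours.

```python
import numpy as np

def dm_table2(y_seq, r_seq):
    """Table II recursion: Table I thresholds, updates shifted by R."""
    dm, out = 0.0, []
    for y, r in zip(y_seq, r_seq):
        if dm <= -y - 1.0:
            dm = -y - 1.0 - r
        elif dm <= -y + 1.0:
            dm = dm - r
        else:
            dm = -y + 1.0 - r
        out.append(dm)
    return out

def dm_viterbi_intrinsic(y_seq, r_seq):
    """Reference: two-state metrics with the u/v branch terms of the
    Appendix (-2R into state '+', +2R into state '-')."""
    mp = mm = 0.0
    out = []
    for y, r in zip(y_seq, r_seq):
        mp, mm = (min(mp + y**2 - 2 * r, mm + (y - 2.0) ** 2 - 2 * r),
                  min(mp + (y + 2.0) ** 2 + 2 * r, mm + y**2 + 2 * r))
        out.append((mp - mm) / 4)
    return out

rng = np.random.default_rng(0)
y = rng.choice([-2.0, 0.0, 2.0], 300) + rng.standard_normal(300)
r = 0.5 * rng.standard_normal(300)
assert np.allclose(dm_table2(y, r), dm_viterbi_intrinsic(y, r))
```

Note that the intrinsic term cancels in the survivor-selection comparisons, which is why the conditions are the same as in Table I.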



IV.3.2. Precoded dicode channel with intrinsic contribution of the outer SOVA.

When the intrinsic contribution of the outer SOVA is taken into account in the inner SOVA, the simplified Viterbi detection algorithm for the precoded dicode channel is as follows:

Proposition 2. The inner SOVA detector for the precoded dicode channel can be simplified using the difference metric algorithm described in Table III. Note that all four possible extensions are allowed. We recall that only three extensions are allowed for the inner detector when the intrinsic contribution of the outer SOVA is not used. The metric difference Δ_l^(n)(s_l) is given by

(25) Δ_l^(n)("+") = DM_{l-1}^(n) + y_{l-1} - 1 + R_{l-1}^(n-1),

(26) Δ_l^(n)("-") = DM_{l-1}^(n) + y_{l-1} + 1 - R_{l-1}^(n-1).

The proof of this proposition is deferred to the Appendix.

IV.3.3. Probability distribution of the metric difference.

For the first iteration of the iterative soft-output decoding algorithm, the metric difference algorithms shown in Table II and Table III are identical to that of Table I. This is because R_l^(0) = 0. Therefore, we propose to determine the probability distribution of DM_l, ΔM_l("+") and ΔM_l("-"). The probability distribution of ΔM_l(s_l) gives us an idea about the soft outputs delivered by the difference metric inner SOVA.

Denote by

g(u) = (1/(σ√(2π))) exp(-u²/(2σ²)),

the probability density function (PDF) of the Gaussian noise n_l and by

G(u) = ∫_{-∞}^{u} g(v) dv,

its cumulative distribution function (CDF) [23]. Denote also by p(·) the PDF of DM_l and by P(·) its CDF. We have the following proposition:

Proposition 3. If d_l takes equally likely the values -1 and +1 independently of the index l, then for d ∈ {-1, +1},

• the CDF P(DM_l = DM | d_{l-1} = d) is given explicitly by

(27) P(DM_l = DM | d_{l-1} = d)
= [(2 - G(-DM + 1) - G(-DM + 1 - 2d)) ×
(2 + G(-DM - 1) - G(-DM + 1)) -
(G(-DM - 1 - 2d) - G(-DM + 1 - 2d)) ×
(2 - G(-DM + 1) - G(-DM + 1 + 2d))] ×
[(2 + G(-DM - 1) - G(-DM + 1))² -
(G(-DM - 1 - 2d) - G(-DM + 1 - 2d)) ×
(G(-DM - 1 + 2d) - G(-DM + 1 + 2d))]^{-1},

• the PDF p(DM_l = DM | d_{l-1} = d) is obtained by

p(DM_l = DM | d_{l-1} = d) = (d/dDM) P(DM_l = DM | d_{l-1} = d),

TABLE III. - Difference metric algorithm for the precoded dicode channel using the intrinsic contribution of the outer SOVA.

Algorithme de calcul de la différence des métriques pour le canal 'dicode' pré-codé utilisant la contribution intrinsèque du décodeur de Viterbi à sortie pondérée extérieur.

Condition                                                                                        | Update
R_{l-1}^(n-1) ≥ 1 and DM_{l-1}^(n) ≤ -y_{l-1} + 1 - R_{l-1}^(n-1), or
R_{l-1}^(n-1) < 1 and DM_{l-1}^(n) ≤ -y_{l-1} - 1 + R_{l-1}^(n-1)                                | DM_l^(n) = -y_{l-1} - 1 + R_{l-1}^(n-1)
R_{l-1}^(n-1) < 1 and -y_{l-1} - 1 + R_{l-1}^(n-1) < DM_{l-1}^(n) ≤ -y_{l-1} + 1 - R_{l-1}^(n-1) | DM_l^(n) = DM_{l-1}^(n)
R_{l-1}^(n-1) ≥ 1 and -y_{l-1} + 1 - R_{l-1}^(n-1) < DM_{l-1}^(n) ≤ -y_{l-1} - 1 + R_{l-1}^(n-1) | DM_l^(n) = -DM_{l-1}^(n) - 2y_{l-1}
R_{l-1}^(n-1) < 1 and -y_{l-1} + 1 - R_{l-1}^(n-1) < DM_{l-1}^(n), or
R_{l-1}^(n-1) ≥ 1 and -y_{l-1} - 1 + R_{l-1}^(n-1) < DM_{l-1}^(n)                                | DM_l^(n) = -y_{l-1} + 1 - R_{l-1}^(n-1)



• the PDF p(DM_l = DM) is given by

p(DM_l = DM) = (p(DM_l = DM | d_{l-1} = +1) + p(DM_l = DM | d_{l-1} = -1)) / 2.

The proof of this proposition is deferred to the Appendix.

When the noise is null (i.e., σ = 0), g(·) is equal to the Dirac function δ(·) and G(·) is equal to the Heaviside function Y(·). Based on Proposition 3, we have P(DM_l = DM | d_{l-1} = d) = Y(DM + d). Therefore, p(DM_l = DM | d_{l-1} = d) = δ(DM + d). Using the trellis of Figure 3 or Figure 5, one can confirm this PDF by computing the metric difference DM_l when the noise is null.

For a nonnull σ, the explicit expression of p(DM_l = DM) is very complicated. Therefore, we have recourse to numerical calculation. We recall that the average dicode channel symbol energy E_s is equal to 2. Consequently, the signal-to-noise ratio

(28) SNR = E_s / (2σ²),

can be rewritten as SNR = 1/σ². We have plotted in Figure 6 the PDF p(DM_l = DM) as a function of DM for SNR = 0, 5 and 10 dB. One can see that for high SNR, p(DM_l = DM) is concentrated around DM = -1 and +1.
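A quick Monte Carlo sketch of this concentration (our own illustration, not the numerical method of the paper) runs the Table I recursion on simulated channel outputs; at high SNR the samples of DM_l cluster near -d_{l-1}, as the null-noise limit of Proposition 3 suggests.

```python
import numpy as np

rng = np.random.default_rng(0)
snr_db = 10.0
sigma = 10 ** (-snr_db / 20)        # SNR = 1/sigma^2 for the dicode channel

d = rng.choice([-1, 1], 100_000)    # bipolar inputs {d_l}
y = np.diff(d).astype(float) + sigma * rng.standard_normal(d.size - 1)

dm, dms = 0.0, []
for yv in y:                         # Table I recursion (clamp form)
    dm = min(max(dm, -yv - 1.0), -yv + 1.0)
    dms.append(dm)
dms = np.asarray(dms)

# DM_l always lies in [-y_{l-1} - 1, -y_{l-1} + 1] ...
assert np.all(np.abs(dms + y) <= 1 + 1e-9)
# ... and at high SNR it concentrates near -d_{l-1}
assert abs(np.mean(dms * d[1:]) + 1) < 0.3
```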


FIG. 6. - Representation of p(DM_l = DM) as a function of DM for SNR = 0, 5 and 10 dB.

Représentation de p(DM_l = DM) en fonction de DM pour SNR = 0, 5 et 10 dB.

Denote by π_{s_l}(·) the PDF of ΔM_l(s_l). We also have the following proposition:

Proposition 4. If d_l takes equally likely the values +1 and -1 independently of the index l, then

(29) π_{s_l}(ΔM_l(s_l) = ΔM)
= [p(DM_{l-1} = ΔM | d_{l-2} = +1) ∗ g(ΔM + 1) +
p(DM_{l-1} = ΔM | d_{l-2} = -1) ∗ g(ΔM - 1)] / 2.

Proof. For s_l = "+", d_{l-1} is necessarily equal to +1.

• If d_{l-2} = +1, then

y_{l-1} = (d_{l-1} - d_{l-2}) + n_{l-1} = n_{l-1}.

Therefore, y_{l-1} - 1 = n_{l-1} - 1 is independent of DM_{l-1} and has g(· + 1) as PDF. Using (22), we have

π_{"+"}(ΔM_l("+") = ΔM | d_{l-2} = +1)
= p(DM_{l-1} = ΔM | d_{l-2} = +1) ∗ g(ΔM + 1).

• If d_{l-2} = -1, then

y_{l-1} = (d_{l-1} - d_{l-2}) + n_{l-1} = 2 + n_{l-1}.

Therefore, y_{l-1} - 1 = n_{l-1} + 1 is independent of DM_{l-1} and has g(· - 1) as PDF. Once again, using (22), we have

π_{"+"}(ΔM_l("+") = ΔM | d_{l-2} = -1)
= p(DM_{l-1} = ΔM | d_{l-2} = -1) ∗ g(ΔM - 1).

Since d_{l-2} takes equally likely the values +1 and -1, we obtain (29) for s_l = "+". Following the same considerations and using (23), we also get (29) for s_l = "-". □

For a null noise (i.e., σ = 0), g(·) is equal to the Dirac function δ(·). We have seen previously that in this case p(DM_l = DM | d_{l-1} = d) = δ(DM + d). Therefore,

π_{s_l}(ΔM_l(s_l) = ΔM)
= (δ(ΔM + 1) ∗ δ(ΔM + 1) + δ(ΔM - 1) ∗ δ(ΔM - 1)) / 2
= (δ(ΔM + 2) + δ(ΔM - 2)) / 2.

Once again, using the trellis of Figure 3 or Figure 5, one can confirm this PDF by computing the metric difference ΔM_l(s_l) when the noise is null.

For a nonnull σ, the explicit expression of π_{s_l}(ΔM_l(s_l) = ΔM) is very complicated. Therefore, we have once again recourse to numerical calculation. We have plotted in Figure 7 the PDF π_{s_l}(ΔM_l(s_l) = ΔM) as a function of ΔM for SNR = 0, 5 and 10 dB. One can see that for high SNR, π_{s_l}(ΔM_l(s_l) = ΔM) is concentrated around ΔM = -2 and +2.


FIG. 7. - Representation of π_{s_l}(ΔM_l(s_l) = ΔM) as a function of ΔM for SNR = 0, 5 and 10 dB.

Représentation de π_{s_l}(ΔM_l(s_l) = ΔM) en fonction de ΔM pour SNR = 0, 5 et 10 dB.



IV.4. Capacity and cutoff rate of the dicode channel.

We consider the system in Figure 8a. The inputs of the interleaver of the precoded or nonprecoded dicode channel are the binary symbols b_j with probabilities P(b). After inner detection, the deinterleaver delivers outputs z_j^(1) ≜ (2b̂_j^(1) - 1)Λ_j^(1) with PDF p(z). Assuming perfect interleaving, the overall outer channel depicted in Figure 8b and specified by the PDF p(z|b) is a discrete memoryless channel.


FIG. 8. - (a) Discrete-input memoryless outer channel for the nonprecoded or precoded channel. (b) Transition probability model for the outer channel.

(a) Canal extérieur sans mémoire à entrée discrète pour le canal non pré-codé ou pré-codé. (b) Modèle de la probabilité de transition pour le canal extérieur.

We use the approach of [1] to determine the quality of the soft outputs z_j^(1). As a first criterion, we evaluate the outer channel capacity given by [24, 25]

(30) C = max_{P(·)} Σ_{b ∈ {"0","1"}} ∫_{-∞}^{+∞} P(b) p(z|b) log₂( p(z|b) / p(z) ) dz,

where p(z) = Σ_{b ∈ {"0","1"}} p(z|b) P(b).

For the nonprecoded dicode channel the two values "0" and "1" taken by b_j play symmetric roles. Therefore, the uniform probability distribution is the one that maximizes the sum in the expression of the outer channel capacity. This result is confirmed by the simulation results of the next section.

For the precoded dicode channel the two values "0" and "1" taken by b_j do not play symmetric roles. This is also confirmed by the simulation results of the next section.

As a second criterion, we evaluate the outer channel cutoff rate [23, 26]. The cutoff rate can be interpreted as the rate below which the error rate starts to decrease rapidly. It is given for the outer channel by

(31) R₀ = max_{P(·)} -log₂ ∫_{-∞}^{+∞} [ Σ_{b ∈ {"0","1"}} P(b) √(p(z|b)) ]² dz.
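The histogram-based evaluation of (30) and (31) used in our simulations can be sketched as follows. The function and its parameters are illustrative; the check uses the Gaussian channel, whose theoretical cutoff rate R₀ = 1 - log₂(1 + exp(-SNR)) is quoted later in this section (here SNR = E_s/(2σ²) = 1/(2σ²) for ±1 signaling).

```python
import numpy as np

def capacity_cutoff(z0, z1, p0=0.5, bins=200):
    """Histogram estimates of (30) and (31) for a binary-input
    memoryless channel, from samples of z given b = '0' and b = '1'."""
    edges = np.linspace(min(z0.min(), z1.min()),
                        max(z0.max(), z1.max()), bins + 1)
    dz = edges[1] - edges[0]
    f0, _ = np.histogram(z0, edges, density=True)   # p(z | b = '0')
    f1, _ = np.histogram(z1, edges, density=True)   # p(z | b = '1')
    fz = p0 * f0 + (1 - p0) * f1                    # p(z)
    cap = 0.0
    for p, f in ((p0, f0), (1 - p0, f1)):
        m = f > 0
        cap += np.sum(p * f[m] * np.log2(f[m] / fz[m])) * dz
    r0 = -np.log2(np.sum((p0 * np.sqrt(f0) + (1 - p0) * np.sqrt(f1)) ** 2) * dz)
    return cap, r0

rng = np.random.default_rng(0)
sigma, n = 1.0, 400_000
z0 = -1 + sigma * rng.standard_normal(n)    # b = "0" sent
z1 = +1 + sigma * rng.standard_normal(n)    # b = "1" sent
cap, r0 = capacity_cutoff(z0, z1)

r0_theory = 1 - np.log2(1 + np.exp(-1 / (2 * sigma**2)))
assert abs(r0 - r0_theory) < 0.05
assert r0 - 0.05 < cap < 1.0                # cutoff rate never exceeds capacity
```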

IV.5. Synchronization.

As we have noted in Section IV.2, for the precoded dicode channel the runs of zeros in the noiseless output sequence {x_l} are the same as those of the input sequence {c_l}. In [5], Wolf and Ungerboeck exploited this characteristic to design trellis codes for the dicode channel with bounded zero runlength. They found that the trellis codes they obtained by set partitioning [7] can be constructed using convolutional codes with precoding. Therefore, to avoid sequences of infinite zero runlength at the output of the dicode channel, they considered nonnull cosets of these convolutional codes with bounded zero runlength. We recall that a convolutional code is a linear code and that the all-zero sequence belongs to its codeword set.

For concatenated coding systems, this property of precoded dicode channels cannot be exploited. One can see that even if the sequence {b_j} generated by the outer encoder has bounded zero runlength, the corresponding sequence {c_k} at the output of the interleaver does not necessarily have the same property. We recall that a detailed description of the generation of soft information at the output of the precoded dicode channel is found in [22]. However, that paper does not propose a solution for bounding the zero runlength at the output of the dicode channel.

To guarantee a bounded zero runlength at the output of the dicode channel, we propose to insert a synchronizing bipolar symbol after every L successive symbols in the sequence {d_l}. To guarantee that the dicode channel output generated by this symbol is not zero, it should take the opposite value of the symbol preceding it in the sequence {d_l}. For the precoded dicode channel, this is equivalent to inserting the binary symbol "1" after every L successive symbols in the sequence {c_l}. The trellis used by the inner detector is depicted in Figure 9. It is obtained from that of the dicode channel by removing the parallel branches of one section after every L successive sections. One can see from this figure that the zero runlength is bounded by L.
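The insertion rule can be sketched as follows (a toy illustration with our own variable names): a "1" is inserted after every L input bits, the framed stream is precoded and passed through the noiseless dicode channel, and the zero runlength of the output never exceeds L.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 3
c = rng.integers(0, 2, 300)

# Insert the binary symbol "1" after every L successive symbols of {c_l}
framed = []
for i, bit in enumerate(c):
    framed.append(int(bit))
    if (i + 1) % L == 0:
        framed.append(1)

# Precode (c'_l = c_l XOR c'_{l-1}) and apply the noiseless dicode channel
cp, d_prev, x = 0, -1, []
for bit in framed:
    cp ^= bit
    d = 2 * cp - 1
    x.append(d - d_prev)
    d_prev = d

# The zero runlength at the channel output is bounded by L
run = longest = 0
for v in x:
    run = run + 1 if v == 0 else 0
    longest = max(longest, run)
assert longest <= L
```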

When working with T tracks, where T ≥ 2, the maximum zero runlength of the readback signal of each track is bounded by L. In this case, we define the maximum zero runlength as the maximum number of successive output indices for which the noiseless outputs of the T tracks are zero at the same time. This parameter essentially determines the quality of timing and gain control [27]. Denote by ⌊x⌋ the integer part of x. To reduce this parameter, one can shift the trellis corresponding to track t = 0, 1, ..., T - 1, by

⌊t(L + 1)/T⌋

sections to the right. In this case, one is sure that the maximum zero runlength is at most equal to ⌊(L + 1)/T⌋ + 1. Consequently, one can substantially reduce the maximum global zero runlength by jointly considering a great number of parallel tracks. We illustrate this




FIG. 9. - Inner detector trellis when the parallel transitions of one section after every L sections are removed.

Treillis du détecteur intérieur quand les transitions parallèles d'une section sur L sont supprimées.

technique in Figure 10 by considering the case where T = 2 and L = 3. We have represented in the top (bottom) of this figure the detector trellis of the first (second) track. Notice that a section with removed parallel transitions in one trellis occurs between two sections of the same type in the other trellis.

For the single-track case, one can view the insertion of a symbol after every L symbols in the sequence {c_l} or {d_l} as a kind of coding, since redundancy is added. The rate of the resulting coded dicode channel is L/(L + 1). According to Figure 9 and following the same approach as in [19, 21], we give next the asymptotic symbol error rate of this code.

For L = 1, the trellis of the coded dicode channel is that of the biphase-coded dicode channel [4, 20]. Assuming a nonprecoded dicode channel, the asymptotic symbol error rate is given in this case by

For L ≥ 2 and according to Figure 9, only signal error sequences of Euclidean weight 8 need be considered at high SNR. These error sequences are the set of sequences of the form ±2(1 - D^Λ), Λ = 1, 2, ..., L - 1. They result from the input error sequences ±2(1 + D + ... + D^{Λ-1}), Λ = 1, 2, ..., L - 1. Following the considerations of [19, 21] and assuming a nonprecoded dicode channel, we have as asymptotic symbol error probability

Pr(e) ≈ [ Σ_{Λ=1}^{L-1} 2Λ (1/2)^Λ ] Q(√2/σ)
= (4 - (L + 1)(1/2)^{L-2}) Q(√2/σ).

Note that the multiplicative factor in front of the Q-function is an increasing function of L. We conclude that this factor is lower than that obtained in (21) for L = ∞.
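Assuming the reading of the multiplicative factor as the partial sum of 2Λ(1/2)^Λ over the error-event lengths Λ = 1, ..., L - 1, a quick check confirms the closed form and the fact that the factor increases toward 4, its limiting value for L = ∞.

```python
from math import isclose

def factor_sum(L):
    # partial sum over error events of Euclidean weight 8 (lengths 1 .. L-1)
    return sum(2 * lam * 0.5 ** lam for lam in range(1, L))

def factor_closed(L):
    # closed form of the same partial sum
    return 4 - (L + 1) * 0.5 ** (L - 2)

for L in range(2, 16):
    assert isclose(factor_sum(L), factor_closed(L))
# the factor is increasing in L and bounded by 4
assert all(factor_closed(L) < factor_closed(L + 1) < 4 for L in range(2, 16))
```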

IV.6. Simulation results.

To compare the nonprecoded dicode channel with the precoded one, we have calculated the capacity and the cutoff rate of their associated discrete-input memoryless outer channel shown in Figure 8. For this purpose we have obtained by computer simulation an approximation of the conditional transition probability p(z|b) of each of these outer channels. The number of binary symbols used at the input of the outer channel is 10⁶. Based on (30) and (31), we have estimated the capacity and the cutoff rate of the outer channel from these conditional transition probabilities. We have also estimated the optimum probability distribution P_o(·) of P(·) that maximizes each of these quantities.

Figure 11 shows the capacities of the nonprecoded and precoded dicode channel as a function of the SNR. We recall from (28) that SNR = E_s/(2σ²) = 1/σ² for the dicode channel. For comparison, we have estimated the conditional transition probability p(z|b) as well as the capacity of the Gaussian channel. Moreover, to compare the optimum MAP algorithm [14] with the SOVA, we have shown the capacity of the precoded dicode channel when this algorithm is used. One can see from this figure that the capacity of the precoded dicode channel is almost the same for the MAP algorithm and the SOVA. Furthermore, one can note that the capacity of the nonprecoded dicode channel is better than that of the precoded one, especially at low SNR. This is an important


FIG. 10. - Illustration of the maximum zero runlength reduction for T = 2 and L = 3.

Illustration de la réduction de la longueur maximale de zéros successifs pour T = 2 et L = 3.




FIG. 11. - Comparison of channel capacities, based on simulation results.

Comparaison de capacités de canal, basée sur des résultats de simulation.

result that will be corroborated by our next simulation results concerning the bit-error rate (BER).

Based on our simulation results, we have also shown in Figure 12 the optimum probability distributions P_o(·) that maximize the capacity of the Gaussian and the precoded dicode channel. Note that the optimum probability distribution corresponding to the Gaussian channel is always uniform. We have found the same result for the nonprecoded dicode channel. This is not surprising, because the binary symbols "0" and "1" at the input of the nonprecoded dicode channel play symmetric roles. This is not the case for the precoded dicode channel, because the symbol "0" is associated with parallel transitions and the symbol "1" with crossing transitions. This asymmetry is confirmed by the curves of Figure 12. One can see from this figure that the probability of "0" at the input of the precoded dicode channel should be greater than that of "1" in order to achieve the channel capacity.

For comparison, we have also shown in Figure 13 the cutoff rates of the nonprecoded and precoded dicode channel as a function of the SNR. Also for comparison,


FIG. 12. - Comparison of probability distributions maximizing the capacity, based on simulation results.

Comparaison des distributions de probabilité qui maximisent la capacité, basée sur des résultats de simulation.


FIG. 13. - Comparison of channel cutoff rates, based on simulation and theoretical results.

Comparaison de débits de coupure de canal, basée sur des résultats de simulation et théoriques.


we have estimated by simulation the cutoff rate of the Gaussian channel. To validate this estimate we have also plotted the theoretical cutoff rate

R₀ = 1 - log₂(1 + exp(-SNR)),

of the Gaussian channel [26]. Once again, to compare the optimum MAP algorithm with the SOVA, we have shown the cutoff rate of the precoded dicode channel when this algorithm is used. For the second time, one can see that the cutoff rate of the precoded dicode channel is almost the same for the MAP algorithm and the SOVA. One can also note that the cutoff rate of the nonprecoded dicode channel is better than that of the precoded one. This observation is identical to the one made above when the capacity is considered instead of the cutoff rate. Let ε = exp(-SNR). In Figure 13, we have also added the theoretical cutoff rate, a function of ε, of the (precoded or nonprecoded) dicode channel [25]. Notice that this theoretical cutoff rate is always greater than that obtained by simulation for both the precoded and nonprecoded dicode channel. This is due to the fact that a brute use of the outer channel is not optimum. Recall that the inner SOVA (or MAP algorithm) used inside the outer channel does not take into account the fact that the sequences at the input of this channel are codewords of an arbitrary code. Since the theoretical cutoff rate of the dicode channel assumes globally optimum decoding at the output of the dicode channel [25], it is not surprising that this cutoff rate is better than those obtained by simulation for the outer channels. Therefore, a way to enhance the cutoff rate of the outer channels is to use the iterative decoding algorithm proposed previously. However, this enhancement cannot lead to the same cutoff rate as the theoretical one because our iterative algorithm is not optimum.



To illustrate our iterative algorithm, we have considered both the precoded and nonprecoded dicode channel and an outer convolutional code. Denote by E_b the energy per transmitted bit. We have E_b = E_s/R, where R ≜ R_o R_i is the global code rate. From (28), we have

E_b/(2σ²) = E_s/(2Rσ²) = SNR/R.

We have considered in our simulations the simple rate R = R_o = 1/2, 4-state convolutional code with generator matrix

G = [1 + D², 1 + D + D²],

and free Hamming distance d_free = 5 [12].

Figure 14 shows, for some values of E_b/2σ², the behavior of the BER after two iterations as a function of the parameter ε^(1). For ε^(1) = 0, the resulting BER is the same as the one obtained at the end of the first iteration. One can therefore see that, after one iteration, the nonprecoded dicode channel has better performance than the precoded one. This corroborates the simulation results obtained above concerning the capacity and the cutoff rate of the dicode channel. Moreover, one can notice the enhancement in BER at the end of the second iteration when the parameter ε^(1) is correctly chosen. The optimum value of ε^(1) depends on E_b/2σ² and on whether the channel is precoded or not.
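A small search confirms the free Hamming distance of this code. The encoder below implements G = [1 + D², 1 + D + D²] (the octal (5, 7) code); the exhaustive scan over short terminated input sequences is our own illustration, not a procedure from the paper.

```python
from itertools import product

def encode_57(u):
    """Rate-1/2 encoder with G = [1 + D^2, 1 + D + D^2] (octal (5, 7))."""
    s1 = s2 = 0                      # shift-register contents (D, D^2)
    out = []
    for b in u:
        out.append(b ^ s2)           # first output: 1 + D^2
        out.append(b ^ s1 ^ s2)      # second output: 1 + D + D^2
        s1, s2 = b, s1
    return out

def free_distance(max_len=8):
    """Minimum output Hamming weight over short terminated error events."""
    best = None
    for n in range(1, max_len + 1):
        for u in product((0, 1), repeat=n):
            if u[0] != 1 or u[-1] != 1:
                continue             # only events starting and ending in 1
            w = sum(encode_57(list(u) + [0, 0]))  # flush the memory
            best = w if best is None else min(best, w)
    return best

assert free_distance() == 5
```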



FIG. 15. - Optimum BER obtained at the first and the second iteration versus E_b/2σ² for the rate-1/2 4-state convolutional code.

Taux d'erreur binaire optimal obtenu à la première et à la deuxième itération en fonction de E_b/2σ² pour le code convolutif à 4 états de taux d'émission 1/2.


FIG. 14. - BER obtained at the end of the second iteration versus ε^(1) for the rate-1/2 4-state convolutional code.

Taux d'erreur binaire obtenu à la fin de la deuxième itération en fonction de ε^(1) pour le code convolutif à 4 états de taux d'émission 1/2.

Notice that the best BER obtained for the nonprecoded dicode channel after two iterations for E_b/2σ² = 3 dB is better than the one obtained for E_b/2σ² = 4 dB after one iteration. Therefore, we are sure that an additional coding gain of up to 1 dB is achieved at the BER obtained for E_b/2σ² = 4 dB at the end of the first iteration. To make this gain clear, we have represented in Figure 15 the optimum BER versus E_b/2σ² for the first and the second iteration.

V. CONCLUSION

We have presented an iterative soft-output decoding algorithm for serially concatenated coding systems. We have shown that it has better performance than the conventional noniterative decoding algorithm for serially concatenated codes. Our simulation results show that after two iterations an additional coding gain of up to 1 dB is obtained for the dicode channel at a bit-error rate of 10 -4. As a continuation of this work, one can investigate the additional coding gain resulting from the concatenation of an inner code designed specifically for the dicode [4, 5], Gaussian [6, 7] or fading [9, 10, 11] channel with an outer convolutional code [12]. One can also investigate the serial concatenation of two recursive systematic convolutional codes [13].

ACKNOWLEDGMENTS

The authors are grateful to Professor G. Battail for his careful reading of the manuscript and for his contribution to improve it.

APPENDIX

Proof of Proposition 1. According to the trellis of Figure 3 and equation (18), the metrics of states "+" and "-" at channel input index l are expressed as



M_l^(n)("+") = min{ u("+"), u("-") },
M_l^(n)("-") = min{ v("+"), v("-") },

where

u("+") = M_{l-1}^(n)("+") + y_{l-1}² - 2R_{l-1}^(n-1),
u("-") = M_{l-1}^(n)("-") + (y_{l-1} - 2)² - 2R_{l-1}^(n-1),
v("+") = M_{l-1}^(n)("+") + (y_{l-1} + 2)² + 2R_{l-1}^(n-1),
v("-") = M_{l-1}^(n)("-") + y_{l-1}² + 2R_{l-1}^(n-1).

We have the following four cases:

1. u("+") ≤ u("-"), v("+") ≤ v("-") (this corresponds to the first extension in Table II):

The two conditions imply

DM_{l-1}^(n) ≤ -y_{l-1} + 1,  DM_{l-1}^(n) ≤ -y_{l-1} - 1.

These conditions are summarized in the first condition of Table II. The metrics M_l^(n)("+") and M_l^(n)("-") are given by

M_l^(n)("+") = M_{l-1}^(n)("+") + y_{l-1}² - 2R_{l-1}^(n-1),
M_l^(n)("-") = M_{l-1}^(n)("+") + (y_{l-1} + 2)² + 2R_{l-1}^(n-1).

We have therefore

DM_l^(n) = (M_l^(n)("+") - M_l^(n)("-")) / 4 = -y_{l-1} - 1 - R_{l-1}^(n-1).

This is the first update of Table II.

2. u("+") ≤ u("-"), v("+") ≥ v("-") (the second extension in Table II):

From these conditions, we have

DM_{l-1}^(n) ≤ -y_{l-1} + 1,  DM_{l-1}^(n) > -y_{l-1} - 1.

These conditions constitute the second entry in the conditions' column of Table II. The metrics M_l^(n)("+") and M_l^(n)("-") are given by

M_l^(n)("+") = M_{l-1}^(n)("+") + y_{l-1}² - 2R_{l-1}^(n-1),
M_l^(n)("-") = M_{l-1}^(n)("-") + y_{l-1}² + 2R_{l-1}^(n-1).

It follows that

DM_l^(n) = DM_{l-1}^(n) - R_{l-1}^(n-1).

This gives the second update of Table II.

3. u ( " + " ) > u ( " - " ) , v(" + ") < v ( " - " ) this extension is not shown in Table II) :

These conditions are equivalent to

DM~:~ > -yl-1 + 1, DV~n~ < -Y, - I - 1,

which are never jointly verified. 4. u ( " + " ) > u ( " - " ) , v ( " + " ) _> v ( " - " ) (the

third extension in Table II) :

16/19

813

It follows from these conditions that

D r } ? > - > 1 + 1, _> - > 1 - 1

These conditions are summarized in the third entry in the

conditions' column of Table II. The metrics M } n ) ( " + " )

and M}~)( " - ") are given by

M{ n)(" § ") ----- M { _ n ~ ( " - " ) § 2 - 2 R l n _ 7 1 ) ,

M~n)( " - ") = M~n_l( " - " ) § y12_l § 2Rl_n_l 1).

As a consequence, we have = _ p(n-1) DM~ n) -Yl-1 + 1 *~l 1 "

This is the last update of Table II. These conditions are identical to those of the simple

detection algorithm of Ferguson [18]. According to the trellis of Figure 3, the metric differences Ak+l ( " + ") and A k + l ( " - ") are expressed as

( M : 2 ~ ( . . . . 2 In 1)) 4 A k + l ( " + " ) = + ) + Y z - I + 2 R - 1 -

(M}_~ ( ' ' - ' ' ) + (Yl-a- 2) 2 - 2R}_nll)) ,

4 A . . . . . . [~,-(n), , .§ . . . . . 2 ) - 2 R t _ 1 ) - - a-ak+l( -- ) : ~lVll_l( )-t'-(Yl-1 § 2 ( n - l )

( M : 2 ~ ( " - " ) + Y? 1 § 2 /~1271)) "

These equations simplify to those in (24). []

Proof of Proposition 2. According to the trellis of Figure 5 and equation (18), the metrics of states "+" and "−" at channel input index l are expressed as

$$
M_l^{(n)}(+) = \min\{\,u(+),\; u(-)\,\},
\qquad
M_l^{(n)}(-) = \min\{\,v(+),\; v(-)\,\},
$$

where the four candidate path metrics are

$$
\begin{aligned}
u(+) &= M_{l-1}^{(n)}(+) + y_{l-1}^2 + 2R_{l-1}^{(n-1)}, \\
u(-) &= M_{l-1}^{(n)}(-) + (y_{l-1}-2)^2 - 2R_{l-1}^{(n-1)}, \\
v(+) &= M_{l-1}^{(n)}(+) + (y_{l-1}+2)^2 - 2R_{l-1}^{(n-1)}, \\
v(-) &= M_{l-1}^{(n)}(-) + y_{l-1}^2 + 2R_{l-1}^{(n-1)}.
\end{aligned}
$$

We have the following four cases:

1. u(+) ≤ u(−), v(+) < v(−) (this corresponds to the first extension in Table III):

The two conditions imply

$$
DM_{l-1}^{(n)} \le -y_{l-1} + 1 - R_{l-1}^{(n-1)},
\qquad
DM_{l-1}^{(n)} < -y_{l-1} - 1 + R_{l-1}^{(n-1)}.
$$

If R_{l-1}^{(n-1)} ≥ 1, these conditions are summarized in the first entry in the conditions' column of Table III. Otherwise, they are summarized in the second entry in the conditions' column of the same table. The metrics M_l^{(n)}(+) and M_l^{(n)}(−) are given by

$$
\begin{aligned}
M_l^{(n)}(+) &= M_{l-1}^{(n)}(+) + y_{l-1}^2 + 2R_{l-1}^{(n-1)}, \\
M_l^{(n)}(-) &= M_{l-1}^{(n)}(+) + (y_{l-1}+2)^2 - 2R_{l-1}^{(n-1)}.
\end{aligned}
$$

We have therefore

$$
DM_l^{(n)} = \big(M_l^{(n)}(+) - M_l^{(n)}(-)\big)/4 = -y_{l-1} - 1 + R_{l-1}^{(n-1)}.
$$

This is the first update of Table III.

2. u(+) ≤ u(−), v(+) ≥ v(−) (the second extension in Table III):

From these conditions, we have

$$
DM_{l-1}^{(n)} \le -y_{l-1} + 1 - R_{l-1}^{(n-1)},
\qquad
DM_{l-1}^{(n)} \ge -y_{l-1} - 1 + R_{l-1}^{(n-1)}.
$$

Therefore, we have necessarily

$$
-y_{l-1} - 1 + R_{l-1}^{(n-1)} \le -y_{l-1} + 1 - R_{l-1}^{(n-1)},
$$

or equivalently, R_{l-1}^{(n-1)} ≤ 1. These equations constitute the third entry in the conditions' column of Table III. The metrics M_l^{(n)}(+) and M_l^{(n)}(−) are given by

$$
\begin{aligned}
M_l^{(n)}(+) &= M_{l-1}^{(n)}(+) + y_{l-1}^2 + 2R_{l-1}^{(n-1)}, \\
M_l^{(n)}(-) &= M_{l-1}^{(n)}(-) + y_{l-1}^2 + 2R_{l-1}^{(n-1)}.
\end{aligned}
$$

It follows that

$$
DM_l^{(n)} = DM_{l-1}^{(n)}.
$$

This gives the second update of Table III.

3. u(+) > u(−), v(+) < v(−) (this is the third extension in Table III):

These conditions are equivalent to

$$
DM_{l-1}^{(n)} > -y_{l-1} + 1 - R_{l-1}^{(n-1)},
\qquad
DM_{l-1}^{(n)} < -y_{l-1} - 1 + R_{l-1}^{(n-1)}.
$$

These equations are verified simultaneously if the condition

$$
-y_{l-1} + 1 - R_{l-1}^{(n-1)} < -y_{l-1} - 1 + R_{l-1}^{(n-1)}
$$

is verified. This condition is equivalent to R_{l-1}^{(n-1)} > 1. The previous equations lead to the fourth entry in the conditions' column of Table III. The metrics M_l^{(n)}(+) and M_l^{(n)}(−) are given by

$$
\begin{aligned}
M_l^{(n)}(+) &= M_{l-1}^{(n)}(-) + (y_{l-1}-2)^2 - 2R_{l-1}^{(n-1)}, \\
M_l^{(n)}(-) &= M_{l-1}^{(n)}(+) + (y_{l-1}+2)^2 - 2R_{l-1}^{(n-1)}.
\end{aligned}
$$

We have consequently

$$
DM_l^{(n)} = -DM_{l-1}^{(n)} - 2y_{l-1}.
$$

This is the third update in Table III. Note that this extension is not allowed in the simple detection algorithm proposed by Ferguson [18].

4. u(+) > u(−), v(+) ≥ v(−) (the fourth extension in Table III):

It follows from these conditions that

$$
DM_{l-1}^{(n)} > -y_{l-1} + 1 - R_{l-1}^{(n-1)},
\qquad
DM_{l-1}^{(n)} \ge -y_{l-1} - 1 + R_{l-1}^{(n-1)}.
$$

If R_{l-1}^{(n-1)} ≤ 1, these conditions are summarized in the fifth entry in the conditions' column of Table III. Otherwise, they are summarized in the last entry in the conditions' column of the same table. The metrics M_l^{(n)}(+) and M_l^{(n)}(−) are given by

$$
\begin{aligned}
M_l^{(n)}(+) &= M_{l-1}^{(n)}(-) + (y_{l-1}-2)^2 - 2R_{l-1}^{(n-1)}, \\
M_l^{(n)}(-) &= M_{l-1}^{(n)}(-) + y_{l-1}^2 + 2R_{l-1}^{(n-1)}.
\end{aligned}
$$

As a consequence, we have

$$
DM_l^{(n)} = -y_{l-1} + 1 - R_{l-1}^{(n-1)}.
$$

This is the last update of Table III.

One can observe from Tables II and III that the conditions for the nonprecoded dicode channel are simpler than those obtained for the precoded dicode channel. The six conditions on the couple (R_{l-1}^{(n-1)}, DM_{l-1}^{(n)}) and their corresponding extensions are plotted in Figure A.1. One can see that the six conditions split the (R_{l-1}^{(n-1)}, DM_{l-1}^{(n)}) plane into four parts, each one corresponding to one extension. Therefore, these conditions are more difficult to verify at each detection step than those of the simple detection algorithm of Ferguson [18].

FIG. A.1. - Four regions in the (R_{l-1}^{(n-1)}, DM_{l-1}^{(n)})-plane representing the conditions of Table III and their four extensions.

Moreover, if |R_{l-1}^{(n-1)}| < 1, which is the most probable case, then only the three extensions of the algorithm of Ferguson remain possible.

According to the trellis of Figure 5, the metric differences Δ_{k+1}(+) and Δ_{k+1}(−) are expressed as

$$
\begin{aligned}
4\Delta_{k+1}(+) &= \big(M_{l-1}^{(n)}(+) + y_{l-1}^2 + 2R_{l-1}^{(n-1)}\big) - \big(M_{l-1}^{(n)}(-) + (y_{l-1}-2)^2 - 2R_{l-1}^{(n-1)}\big), \\
4\Delta_{k+1}(-) &= \big(M_{l-1}^{(n)}(+) + (y_{l-1}+2)^2 - 2R_{l-1}^{(n-1)}\big) - \big(M_{l-1}^{(n)}(-) + y_{l-1}^2 + 2R_{l-1}^{(n-1)}\big).
\end{aligned}
$$

These equations simplify to those in (25) and (26). □

P roof of Proposi t ion 3. According to Table I, the curve representing DMl as a function of Yl-1, for a fixed value of DMI_ 1, is plotted in Figure A.2. Next, we would like to determine p(DMlldt 1) as a function of p(DMt_lldt_2). This will enable us to determine p(DMz) and %~_~(AMl(sz_l)). Denoting by 0(.) the PDF of Yl-1 and by O(.) its CDe, we have

Yz-1 = (dl-1 - dl-2) -k hi- l ,

and consequently

O(Yl-1 =-- y l d l - 1 ~- d, dl 2 = d') = g ( y - (d -- d')).

Denote by Y(.) the Heaviside function and by ~5(.) the Dirac function. Then, based on Figure A.2, it follows that

p(DMz = DMldz-1 -- d, DMz-1 = DM' , dl-2 = d')

= O ( - D M - lldl-1 = d,d~_2 = d ' ) Y ( D M - D M ' ) +

( 9 ( - D M ' + lldl_l = d, dt-2 = d ' ) -

O ( - D M ' - l l d t - ~ = d, dt-2 = d')) x S ( D M - D M ' ) +

O ( - D M + lidt_~ = d, dt_~ = d ' ) Y ( - D M + DM') .

FIG. A.2. - Representation of DM_l as a function of y_{l-1} for a fixed value of DM_{l-1}, according to Table I (the three branches are DM_l = -y_{l-1} - 1, DM_l = DM_{l-1}, and DM_l = -y_{l-1} + 1).

More explicitly, we have

$$
\begin{aligned}
p(DM_l = DM \mid d_{l-1} = d,\ DM_{l-1} = DM',\ d_{l-2} = d')
&= g\big(-DM - 1 - (d - d')\big)\, Y(DM - DM') \\
&\quad + \Big(G\big(-DM' + 1 - (d - d')\big) - G\big(-DM' - 1 - (d - d')\big)\Big)\, \delta(DM - DM') \\
&\quad + g\big(-DM + 1 - (d - d')\big)\, Y(-DM + DM').
\end{aligned}
$$

Denote by f(·) the probability distribution of d_l. Since the PDF of DM_l is independent of l, we have

$$
p(DM_l = DM \mid d_{l-1} = d) = \int_{-\infty}^{+\infty} dDM' \sum_{d' \in \{-1,+1\}} p(DM_l = DM \mid d_{l-1} = d,\ DM_{l-1} = DM',\ d_{l-2} = d')\; p(DM_{l-1} = DM' \mid d_{l-2} = d')\, f(d_{l-2} = d').
$$

If we assume that d_l takes the values +1 and −1 with equal probability (i.e., f(·) is the uniform distribution), then we have

$$
\begin{aligned}
2\,p(DM_l = DM \mid d_{l-1} = d) = \sum_{d' \in \{-1,+1\}} \bigg(
& g\big(-DM - 1 - (d - d')\big) \int_{-\infty}^{DM} p(DM_{l-1} = DM' \mid d_{l-2} = d')\, dDM' \\
&+ \Big(G\big(-DM + 1 - (d - d')\big) - G\big(-DM - 1 - (d - d')\big)\Big)\, p(DM_{l-1} = DM \mid d_{l-2} = d') \\
&+ g\big(-DM + 1 - (d - d')\big) \int_{DM}^{+\infty} p(DM_{l-1} = DM' \mid d_{l-2} = d')\, dDM' \bigg).
\end{aligned}
$$

This equation simplifies to

$$
\begin{aligned}
2\,p(DM_l = DM \mid d_{l-1} = d) = \sum_{d' \in \{-1,+1\}} \bigg(
& g\big(-DM - 1 - (d - d')\big)\, P(DM_{l-1} \le DM \mid d_{l-2} = d') \\
&+ \Big(G\big(-DM + 1 - (d - d')\big) - G\big(-DM - 1 - (d - d')\big)\Big)\, p(DM_{l-1} = DM \mid d_{l-2} = d') \\
&+ g\big(-DM + 1 - (d - d')\big)\, \big(1 - P(DM_{l-1} \le DM \mid d_{l-2} = d')\big) \bigg),
\end{aligned}
$$

where P denotes the cumulative distribution of DM_{l-1}.

Noting that p(DM_{l-1} = DM | d_{l-2} = d') = p(DM_l = DM | d_{l-1} = d') and P(DM_{l-1} ≤ DM | d_{l-2} = d') = P(DM_l ≤ DM | d_{l-1} = d'), we get

$$
\begin{aligned}
2\,p(DM_l = DM \mid d_{l-1} = d) = \sum_{d' \in \{-1,+1\}} \bigg(
& g\big(-DM - 1 - (d - d')\big)\, P(DM_l \le DM \mid d_{l-1} = d') \\
&+ \Big(G\big(-DM + 1 - (d - d')\big) - G\big(-DM - 1 - (d - d')\big)\Big)\, p(DM_l = DM \mid d_{l-1} = d') \\
&+ g\big(-DM + 1 - (d - d')\big)\, \big(1 - P(DM_l \le DM \mid d_{l-1} = d')\big) \bigg).
\end{aligned}
$$

This equation can be rewritten as

$$
\begin{aligned}
\frac{d}{dDM} \Big( \big(2 + G(-DM - 1) - G(-DM + 1)\big)\, & P(DM_l \le DM \mid d_{l-1} = d) \\
+ \big(G(-DM - 1 - 2d) - G(-DM + 1 - 2d)\big)\, & P(DM_l \le DM \mid d_{l-1} = -d) \Big) \\
&= g(-DM + 1) + g(-DM + 1 - 2d).
\end{aligned}
$$


Consequently,

$$
\begin{aligned}
\text{(A.1)} \qquad \big(2 + G(-DM - 1) - G(-DM + 1)\big)\, & P(DM_l \le DM \mid d_{l-1} = d) \\
+ \big(G(-DM - 1 - 2d) - G(-DM + 1 - 2d)\big)\, & P(DM_l \le DM \mid d_{l-1} = -d) \\
&= 2 - G(-DM + 1) - G(-DM + 1 - 2d).
\end{aligned}
$$

Since d takes the values +1 and −1, the previous equation is also valid if we replace d by −d. We have therefore

$$
\begin{aligned}
\big(2 + G(-DM - 1) - G(-DM + 1)\big)\, & P(DM_l \le DM \mid d_{l-1} = -d) \\
+ \big(G(-DM - 1 + 2d) - G(-DM + 1 + 2d)\big)\, & P(DM_l \le DM \mid d_{l-1} = d) \\
&= 2 - G(-DM + 1) - G(-DM + 1 + 2d).
\end{aligned}
$$

Solving this equation and (A.1) we get (27). The other relations in the proposition follow immediately from this equation. []
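For intuition only, the stationary behaviour that Proposition 3 characterizes analytically can also be sampled by direct simulation. The sketch below is our own illustration (not from the paper), assuming Gaussian noise and the reference case R_{l-1}^{(n-1)} = 0 of Table I; it iterates the scalar DM recursion over a simulated nonprecoded dicode output:

```python
import random

def dm_update_table1(dm_prev, y):
    """Table I update (nonprecoded dicode channel, no extrinsic term)."""
    if dm_prev < -y - 1.0:
        return -y - 1.0
    elif dm_prev <= -y + 1.0:
        return dm_prev
    else:
        return -y + 1.0

def sample_stationary_dm(n_samples=20000, sigma=0.7, seed=1):
    """Simulate y_{l-1} = (d_{l-1} - d_{l-2}) + n_{l-1} with i.i.d. equiprobable
    d_l = +/-1 and Gaussian noise, and iterate the DM recursion.

    Returns a list of (d_{l-1}, DM_l) pairs from which the conditional
    distributions p(DM_l | d_{l-1}) can be estimated empirically.
    """
    rng = random.Random(seed)
    d_prev, dm = 1, 0.0
    samples = []
    for _ in range(n_samples):
        d = rng.choice((-1, 1))
        y = (d - d_prev) + rng.gauss(0.0, sigma)
        dm = dm_update_table1(dm, y)
        samples.append((d, dm))
        d_prev = d
    return samples
```

By the ±1 symmetry of the channel, p(DM_l | d_{l-1} = +1) should mirror p(−DM_l | d_{l-1} = −1), a property one can check on the empirical histograms and compare against the fixed-point solution (27).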

Manuscript received 29 March 1995.

REFERENCES

[1] HAGENAUER (J.), HOEHER (P.). A Viterbi algorithm with soft-decision outputs and its applications. Globecom'89, Dallas, Texas (Nov. 1989), pp. 47.1.1-47.1.7.

[2] BATTAIL (G.). Weighting the symbols decoded by the Viterbi algorithm. IEEE Int. Symp. IT, Ann Arbor, Michigan (Oct. 1986), p. 141.

[3] BATTAIL (G.). Pondération des symboles décodés par l'algorithme de Viterbi. Ann. Télécommunic. (Jan.-Feb. 1987), 42, pp. 31-38.

[4] KARABED (R.), SIEGEL (P. H.). Matched spectral-null codes for partial-response channels. IEEE Trans. IT (May 1991), 37, n° 3, pt. II, pp. 818-855.

[5] WOLF (J. K.), UNGERBOECK (G.). Trellis coding for partial-response channels. IEEE Trans. COM (Aug. 1986), 34, n° 8, pp. 765-773.

[6] FORNEY (G. D. Jr), GALLAGER (R. G.), LANG (G. R.), LONGSTAFF (F. M.), QURESHI (S. U.). Efficient modulation for band-limited channels. IEEE J. SAC (Sep. 1984), 2, n° 5.

[7] UNGERBOECK (G.). Channel coding with multilevel/phase signals. IEEE Trans. IT (Jan. 1982), 28, n° 1, pp. 55-67.

[8] MASENG (T.), RISLOW (B.). Codes for very noisy channels. 4th Joint Swedish-Soviet Int. Workshop on Inform. Theory, Gotland, Sweden (Aug. 27 - Sep. 1, 1989).

[9] DIVSALAR (D.), SIMON (M. K.). Trellis coded modulation for 4800 and 9600 bps transmission over a fading satellite channel. IEEE J. SAC (Feb. 1987), 5, n° 2.

[10] DIVSALAR (D.), SIMON (M. K.). The design of trellis coded MPSK for fading channels: performance criteria. IEEE Trans. COM (Sep. 1988), 36, n° 9.

[11] DIVSALAR (D.), SIMON (M. K.). The design of trellis coded MPSK for fading channels: set partitioning for optimum code design. IEEE Trans. COM (Sep. 1988), 36, n° 9.

[12] YASUDA (Y.), KASHIKI (K.), HIRATA (Y.). High-rate punctured convolutional codes for soft-decision Viterbi decoding. IEEE Trans. COM (March 1984), 32, n° 3.

[13] BERROU (C.), GLAVIEUX (A.), THITIMAJSHIMA (P.). Near Shannon limit error-correcting coding and decoding: turbo-codes. Proc. ICC'93, Geneva (23-26 May 1993), pp. 1064-1070.

[14] BAHL (L.), COCKE (J.), JELINEK (F.), RAVIV (J.). Optimal decoding of linear codes for minimizing symbol error rate. IEEE Trans. IT (March 1974), 20, pp. 284-287.

[15] HAGENAUER (J.), OFFER (E.), PAPKE (L.). Improving the standard coding system for deep space missions. Proc. ICC'93, Geneva (23-26 May 1993), pp. 1092-1097.

[16] LODGE (J.), YOUNG (R.), HOEHER (P.), HAGENAUER (J.). Separable MAP 'filters' for the decoding of product and concatenated codes. Proc. ICC'93, Geneva (May 1993), pp. 1740-1745.

[17] KOBAYASHI (H.), TANG (D. T.). Application of partial-response channel coding to magnetic recording systems. IBM J. Res. Dev. (July 1970), 14, pp. 368-375.

[18] FERGUSON (M. J.). Optimal reception for binary partial-response channels. Bell Syst. Tech. J. (Feb. 1972), 51, n° 2, pp. 493-505.

[19] WOOD (R.), PETERSEN (D.). Viterbi detection of class IV partial-response on a magnetic recording channel. IEEE Trans. COM (May 1986), 34, n° 5, pp. 454-461.

[20] SIEGEL (P. H.), WOLF (J. K.). Modulation and coding for information storage. IEEE Communic. Magazine (Dec. 1991).

[21] FORNEY (G. D. Jr). Maximum-likelihood sequence estimation of digital sequences in the presence of intersymbol interference. IEEE Trans. IT (May 1972), 18, n° 3, pp. 363-378.

[22] KNUDSON (K. J.), WOLF (J. K.), MILSTEIN (L. B.). Producing soft-decision information at the output of a class IV partial response Viterbi detector. ICC'91, Denver, CO (June 1991).

[23] PROAKIS (J. G.). Digital communications. McGraw-Hill, New York, 2nd ed. (1989).

[24] GALLAGER (R. G.). Information theory and reliable communication. J. Wiley & Sons, New York (1968).

[25] VITERBI (A. J.), OMURA (J. K.). Principles of digital communication and coding. McGraw-Hill, New York (1979).

[26] WOZENCRAFT (J. M.), JACOBS (I. M.). Principles of communication engineering. J. Wiley & Sons, New York (1965).

[27] MA (A. K.), MARCELLIN (M. W.). Timing recovery for two-dimensional modulation codes. IEEE Int. Conf. COM, Chicago, IL (June 1992), pp. 1361-1365.
