Transcript of Final Oral Defense 040109
April 2009
Channel Matched Iterative Decoding for
Magnetic Recording Systems
Final Oral Examination
Hakim Alhussien, PhD Candidate
Adviser: Jae Moon
Communications and Data Storage (CDS) Laboratory
Department of Electrical and Computer Engineering
University of Minnesota
April 06, 2009
Hakim, April 2009
Outline
■ Perpendicular magnetic recording channel.
• ECC for recording channels.
• Error Pattern Correction Coding (EPCC).
■ EPCC enhanced TE (TE-EPCC).
• Error rate analysis of TE-EPCC.
• TE-EPCC and TP-EPCC for PMRC.
■ Tensor product parity codes (TPPC).
• Linear-time encoding of tensor product codes.
• Hard decoding of EPC-RS tensor product codes.
• Error rate analysis of EPC-RS tensor product codes.
■ EPC-LDPC tensor product codes.
• Soft-syndrome decoding of EPC-LDPC tensor product codes.
• Simulation study of EPC-LDPC.
■ Thesis contributions.
Perpendicular Magnetic Recording (PMR) Channel
■ Recording channel is “transition-response fixed”
• To achieve the same normalized user density at a lower coding rate, the SNR is degraded by ∼10·log10(R^2) dB → use high-rate codes.
■ Saturated-level recording (binary-constrained input)
• Optimal precoding or SNR water-filling is not possible.
■ Channel impaired by long error bursts
• Due to ISI, disk defects, and thermal asperities.
• Symbol-correcting codes, such as RS and LDPC over GF(q), are effective in burst correction.
■ Data reread is expensive in terms of latency
• The standard frame error rate is very low: ∼10^-13 to 10^-14.
■ Fixed ISI channel with dominant odd and even error events
• Utilize ECC targeting the dominant error events after ML detection.
■ DC-full PRML target
• A DC-wander compensation loop is required.
■ Transition-dependent medium noise due to zigzag domain boundaries
• The channel detector trellis incorporates pattern-dependent noise prediction (PDNP).
ECC for PMR Read Channel
■ Reed-Solomon (RS)
• Minimum distance of RS > LDPC for the same block length and rate.
• ML decoding of RS outperforms ML decoding of LDPC.
• Iterative belief-propagation decoding approaches ML performance.
• The RS parity check matrix is very dense: a large number of 4-cycles.
• Iterative decoding of LDPC significantly outperforms RS iterative decoding.
■ RS with inner LDPC or turbo codes
• Error behavior of LDPC is catastrophic for strong codes.
• Requires high-rate, low-column-weight LDPC: a weak family of codes.
• Convolutional-based turbo: long tail in the symbol-error distribution.
■ Stand-alone LDPC
• Extensive research on lowering the SER error floor.
• LDPC with a sector-wide codeword has low minimum distance.
• Sparse LDPC: improved iterative decoding (larger girth).
• Dense or large-block-length LDPC: better Hamming weight spectrum.
• Consider: sparse non-binary LDPC of sector-length codeword!
The error-pattern correcting code (EPCC)
The Channel Matched ECC Paradigm
[Block diagram: strong general ECC encoder → channel-matched EPC encoder → write head/medium/read head (high-density perpendicular recording channel) → equalizer/detector → channel-matched EPC decoder, which focuses on correcting a few dominant error patterns → strong general ECC decoder, which corrects the remaining errors.]
■ Premise: for a given ISI channel, all dominant error patterns are known a priori.
• Hyperbolic tangent transition response at a channel density of 1.4,
• 10% AWGN and 90% jitter noise,
• Target response: 1 + 0.9D,
• Bit error rate: 2.3276×10^-3 (1-tap PDNP),
• Captured # of error patterns: 223,676,
• E_dt/N_90 = 13.5 dB.
EPCC Design: Target List = 5 Most Dominant Errors
■ Target the 5 most dominant errors, which account for 92.04% of possible errors.
■ Syndrome sets produced by g(x) = 1 + x + x^3 + x^5 + x^6
• Order of g(x) = 12.
• Total number of distinct syndrome sets: 5.
• 5 distinct, non-overlapping syndrome sets are utilized to distinguish the 5 target error patterns.
■ The cyclic generator polynomial defines a cyclic (12,6) code of rate 0.5 and codeword length 12.
■ Single occurrences of error types {1,2,4,5} are decoded without ambiguity.
■ Via channel reliability information and the polarity of the data support, error type 3 can be decoded reliably.
■ Unique syndrome-to-error mapping via channel side information.

Target error polynomial        Syndrome period
1                              12
1 + x                          12
1 + x + x^2                    6
1 + x + x^2 + x^3              6
1 + x + x^2 + x^3 + x^4        12
EPCC Design: Target List = 10 Most Dominant Errors
■ Target the 10 most dominant errors, which account for 99.67% of possible errors.
■ g1(x) = 1 + x^2 + x^3 + x^5 + x^6 + x^8
• Order of g1(x) = 18.
• 10 distinct syndrome sets.
• The cyclic generator polynomial defines a cyclic (18,10) code of rate 0.56 and codeword length 18.
■ g2(x) = 1 + x^3 + x^5 + x^8
• Order of g2(x) = 30.
• 10 distinct syndrome sets.
• The cyclic generator polynomial defines a cyclic (30,22) code of rate 0.73 and codeword length 30.
■ Unique syndrome-to-error mapping via channel side information.

Target error polynomial    Period under g1(x)    Period under g2(x)
1                          18                    30
1 + x                      9                     15
1 + x + x^2                18                    10
1 + x + x^2 + x^3          9                     15
1 + ... + x^4              18                    6
1 + ... + x^5              9                     5
1 + ... + x^6              18                    30
1 + ... + x^7              9                     15
1 + ... + x^8              2                     10
1 + ... + x^9              9                     3
Approaches to Increase Code Rate of EPCC
■ Syndrome sets produced by g(x) = 1 + x^3 + x^5 + x^8
• Order of g(x): 30 → (30, 22) base cyclic code.
• 10+3 extra distinct, non-overlapping syndrome sets are utilized to distinguish 13 target error patterns.
■ Multiply g(x) by a degree-6 primitive polynomial that is not a factor of any target error polynomial:
• The syndrome sets produced by g′(x) have extended periods.
• The extended code is a (630, 616) code of rate 0.98.
■ Tensor product coding paradigm.
• Short codeword length (outer ECC symbol length), very high total code rate.
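The extension step can be verified numerically. The sketch below uses x^6 + x + 1 as the degree-6 primitive polynomial; the talk does not name its particular choice, so this polynomial is an assumption:

```python
def poly_mod(a, g):
    """Remainder of a(x) modulo g(x) over GF(2); polynomials as ints."""
    while a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

def poly_mul(a, b):
    """Carry-less (GF(2)) polynomial product."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def order(g):
    """Smallest n > 0 with x^n = 1 mod g(x), i.e. the order of g(x)."""
    s, n = poly_mod(1 << 1, g), 1
    while s != 1:
        s, n = poly_mod(s << 1, g), n + 1
    return n

g2 = 0b100101001     # 1 + x^3 + x^5 + x^8, order 30
prim = 0b1000011     # x^6 + x + 1, a degree-6 primitive polynomial (order 63)
g_ext = poly_mul(g2, prim)   # degree-14 extended generator
print(order(g_ext))  # → 630 = lcm(30, 63)
```

The extended order 630 with 14 parity bits reproduces the (630, 616) code of rate 0.98.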
Error Rate Analysis of TE-EPCC
Word Error Probability (WER ML Bound):

P_W ≤ (1/M) ∑_{m=1}^{M} ∑_{m'≠m} Q( ||x_m − x_{m'}|| / 2σ )

P_W ≤ ∑_{d_E ≥ d_E,min} T(d_E) · Q( d_E / 2σ ),   where T(d_E) = (1/M) · #{(m, m'): ||x_m − x_{m'}|| = d_E}

At d_E^2 = d_E,min^2 the bound is dominated by the multiplicity T(d_E,min). Two routes to a tighter bound:
• Decrease the number of codewords at the Euclidean minimum distance (turbo codes).
• Increase the Euclidean minimum distance (trellis-coded modulation).
Bit Error Probability (BER ML Bound):

T(d_E) = ∑_{d=1}^{N} A(d) · Pr(d_E | d, C)

• Average number of codeword sequences whose channel noiseless outputs are separated by d_E; A(d) is the number of codeword sequences of Hamming weight d.

w̄(d_E) = [1 / T(d_E)] ∑_{d=1}^{N} d · A(d) · Pr(d_E | d, C)

• Average input Hamming weight of the information words that generate codewords whose channel noiseless outputs are separated by d_E.

P_b ≤ ∑_{d_E ≥ d_E,min} ∑_{d=1}^{N} (d/K) · A(d) · Pr(d_E | d, C) · Q( d_E / 2σ )
    = ∑_{d_E ≥ d_E,min} [ w̄(d_E) · T(d_E) / K ] · Q( d_E / 2σ )
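The union bound is straightforward to evaluate once a distance spectrum T(d_E) is available. A minimal sketch follows; the spectrum values are illustrative placeholders, not numbers from the talk:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def wer_union_bound(spectrum, sigma):
    """P_W <= sum over d_E of T(d_E) * Q(d_E / (2*sigma))."""
    return sum(T * Q(dE / (2.0 * sigma)) for dE, T in spectrum.items())

# Toy distance spectrum {d_E: T(d_E)} -- illustrative values only
spectrum = {2.0: 0.5, 4.0: 3.0, 6.0: 12.0}
print(wer_union_bound(spectrum, sigma=0.5))
```

As expected, the bound tightens monotonically as the noise standard deviation shrinks.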
Turbo Equalization and Enhancements
[Block diagrams: (a) TE: convolutional encoder (RSCC) → interleaver Π → dicode channel (1−D); (b) TE-EPCC: RSCC → Π → EPCC → dicode channel (1−D); (c) precoded TE: RSCC → Π → 1/(1⊕D) precoder → dicode channel (1−D). Two-state trellises shown with input/output branch labels, e.g. 0/0, 1/1, 1/−1 for the unprecoded dicode and 0/0, 1/0, 1/1, 0/−1 for the precoded dicode.]
Partial Response Class-1 (PR1) Channel (1+D)
[Two-state trellis with branch labels 0/0, 1/2, 1/1, 0/1 and branch distances d_E^2 ∈ {0, 1, 4}. A dominant error pattern accumulates d_E^2 = 2; other patterns accumulate d_E^2 = 2 + 4×b_cr, where b_cr is the number of crossing branches.]
Dicode Channel (1−D)
[Two-state trellis with branch labels 0/0, 1/0, 1/1, 0/−1 and branch distances d_E^2 ∈ {0, 1, 4}. A dominant error pattern accumulates d_E^2 = 2; a non-dominant pattern accumulates d_E^2 = 2 + 4×b_cr, where b_cr is the number of crossing branches.]
A Dicode Multiple Error Occurrence
[Trellis sketch of a multiple error occurrence: m error patterns inside one EPCC sub-code, with total squared distance ∑ d_E^2. Merging branches correspond to zero error Hamming weight.]
• m: # of error patterns in the EPCC sub-code.
Distribution of d_E given d and m
■ For a dicode multiple error of Hamming weight d composed of m error patterns, m_dom of them dominant, the squared Euclidean distance is d_E^2 = 2·m_dom + 4·b_cr, where b_cr = (d_E^2 − 2·m_dom)/4 is the number of crossing branches.
■ Pr(d_E^2 | d, m) follows by counting the number of ways the b_cr crossing branches can occur; it is zero unless b_cr is a non-negative integer.
• d: Hamming weight of the multiple error; m: # of error patterns; m_dom: # of dominant error patterns.
Enumerators for Error Hamming Weights
[Diagram: an information sequence of length K is encoded into an RSCC codeword of length N with error Hamming weight d_H(e) = d, interleaved by Π, and segmented into L EPCC sub-codewords of length N_c = N/L with weights d_H(e_i) = d_i, ∑_{i=1}^{L} d_i = d. Error patterns are classified as closed (contained in one sub-code) or closed + open (crossing sub-code boundaries), and binomial factors such as C(N_c, d_i) and C(d_i − 1, m_i − 1) enumerate the weight decompositions A(d, i).]
Enumerators for Error Hamming Weights (cont.)
By the chain rule,

Pr(d_E | d) = ∑_{d_1,…,d_L} Pr(d_E | d, d_1,…,d_L) · Pr(d_1,…,d_L | d)

Pr(d_E | d, d_1,…,d_L) = ∑_{m_1,…,m_L} Pr(d_E | d_1,…,d_L, m_1,…,m_L) · Pr(m_1,…,m_L | d_1,…,d_L)

so that

Pr(d_E | d) = ∑_{d_1+…+d_L=d} Pr(d_1,…,d_L | d) ∑_{m_1,…,m_L} Pr(d_E | d_1,…,d_L, m_1,…,m_L) ∏_{i=1}^{L} Pr(m_i | d_i)

• Pr(d_E | d_1,…,d_L, m_1,…,m_L): distribution of the Euclidean distance given the Hamming weights of the sub-codes.
• Pr(d_1,…,d_L | d): distribution of the sub-code Hamming weights given the Hamming weight of the outer code.
• Pr(m_i | d_i): distribution of the sub-code multiple error patterns given the Hamming weight of the sub-codes.
Enumerators for Error Hamming Weights (cont.)
Joint distribution of the sub-code Hamming weights:

Pr(d_1,…,d_L | d) = [ ∏_{i=1}^{L} C(N_c, d_i) ] / C(N, d)

• C(N, d): # of interleaved RSCC words of Hamming weight d; C(N_c, d_i): # of sub-code words of Hamming weight d_i.

Distribution of the number of error patterns per sub-code:

Pr(m_i | d_i) = C(d_i − 1, m_i − 1) · C(N_c − d_i + 1, m_i) / C(N_c, d_i)

• C(d_i − 1, m_i − 1): # of ways d_i is decomposed into m_i error patterns.
• C(N_c − d_i + 1, m_i): # of ways the m_i error patterns are arranged in sub-code i.
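Both enumerators can be sanity-checked on toy sizes (the sizes below are illustrative, not the talk's):

```python
from itertools import product
from math import comb, prod

L, Nc, d = 3, 4, 5          # L sub-codes of length Nc, outer weight d
N = L * Nc

# Sum over d_1+...+d_L = d of prod_i C(Nc, d_i) equals C(N, d) (Vandermonde),
# so Pr(d_1,...,d_L | d) is a valid pmf:
S = sum(prod(comb(Nc, di) for di in ds)
        for ds in product(range(Nc + 1), repeat=L) if sum(ds) == d)
print(S / comb(N, d))       # → 1.0

# Sum over m of C(d_i-1, m-1)*C(Nc-d_i+1, m) equals C(Nc, d_i), the number of
# weight-d_i words of length Nc, so Pr(m_i | d_i) is a valid pmf as well:
di = 3
total_m = sum(comb(di - 1, m - 1) * comb(Nc - di + 1, m)
              for m in range(1, di + 1)) / comb(Nc, di)
print(total_m)              # → 1.0
```

The second identity is the standard count of binary strings with a given weight and number of runs, which is what "m_i error patterns arranged in sub-code i" enumerates.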
Euclidean Distance Enumerators of TE-EPCC
■ Euclidean distance enumerator of TE-EPCC when EPCC is turned off: Pr(d_E | d), the unconstrained sum over all sub-code weights d_1,…,d_L and pattern counts m_1,…,m_L.
■ Euclidean distance enumerator of all correctable TE-EPCC codewords: Pr(d_E | d, C), the same sum restricted to the EPCC correction capability, with d_i ≤ min(d_c, N_c) and m_i ≤ min(m_c, d_i) in every sub-code.
■ Euclidean distance enumerator of non-correctable TE-EPCC codewords:

Pr(d_E | d, C̄) = Pr(d_E | d) − Pr(d_E | d, C)
Interleaver Gain Exponent of TE
Approximations:

(N − d + 1)^d / d!  <  C(N, d)  <  N^d / d!

Q( d_E / 2σ ) ≤ (1/2) · exp( −d_E^2 / (8σ^2) )

Analogous polynomial-in-N upper and lower bounds hold for the pattern-arrangement binomials in N − d + μ.

Modified TE bound: substituting these approximations into the BER bound expresses each (d, d_E, m, μ) term as a combinatorial coefficient B_{d,d_E,m,μ} times a power of N times exp(−d_E^2/(8σ^2)); the resulting exponent of N is the interleaver gain exponent of that term.
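The two approximations are easy to confirm numerically (the parameter values below are illustrative):

```python
import math
from math import comb, factorial

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# (N-d+1)^d / d!  <  C(N,d)  <  N^d / d!
N, d = 600, 10
lower = (N - d + 1) ** d / factorial(d)
upper = N ** d / factorial(d)
print(lower < comb(N, d) < upper)   # → True

# Chernoff-type bound Q(d_E / 2*sigma) <= 0.5 * exp(-d_E^2 / (8*sigma^2))
dE, sigma = 2.0, 0.6
print(Q(dE / (2 * sigma)) <= 0.5 * math.exp(-dE**2 / (8 * sigma**2)))  # → True
```

The second check is the standard bound Q(x) ≤ (1/2)e^{−x²/2} evaluated at x = d_E/(2σ).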
Interleaver Gain Exponent of TE-EPCC (d_c = 10, m_c = 3, L = 1)
[Equation and figure slides: the modified bound of the previous slide evaluated for TE-EPCC with sub-code correction capability d_c = 10, m_c = 3, L = 1; the restricted sums run over d_i ≤ min(d_c, ·) and m_i ≤ min(m_c, ·). A sequence of figure slides plots the resulting interleaver gain exponents.]
Interleaver Gain Exponent of TE-EPCC (d_c = 10, m_c = 3, L = 1)
Asymptotic BER bound of conventional TE:
[Equation: a sum of terms of the form (products of enumerator coefficients A(2), A(3), A(4), …) / (K·N^p) · exp(−d_E^2/(8σ^2)) for d_E^2 = 2, 4, 6, …, plus O(exp(−2/σ^2)); the dominant low-distance terms decay only as 1/(KN).]
Asymptotic BER bound of TE-EPCC (d_c = 10, m_c = 3, L = 1):
[Equation: the corresponding terms carry large combinatorial coefficients (e.g. 155925/8 and 779625/2 multiplying A(10)-type enumerators) but much larger interleaver-gain powers of N (N^-11, N^-10, N^-9, …), so every low-distance term is suppressed by many orders of N relative to conventional TE.]
“Spectral Thinning” of TE-EPCC
[Figure: distance spectrum log T(d_E) versus d_E^2 ∈ {1,…,25} for the precoded dicode with TE, the unprecoded dicode with TE, and the unprecoded dicode with TE-EPCC.]
• TE: K = 4096, punctured R = 8/9, (31, 33) RSCC.
• TE-EPCC: (L = 7) EPCC, m_c = 3, d_c = 10.
• EPCC sub-code: (630, 616), R = 0.98.
Precoded TE
[Block diagram: RSCC → Π → 1/(1⊕D) precoder → dicode channel (1−D); two-state precoded trellis with branch labels 0/0, 1/1, 1/−1.]
■ Unprecoded dicode: trellis paths corresponding to different code bits are at 0 Euclidean distance → long error events have a high probability of generating low-Euclidean-distance errors.
■ Precoded dicode: trellis paths corresponding to different code bits accumulate Euclidean distance → ONLY low-Hamming-weight errors generate low-Euclidean-distance errors.
■ The average number of Hamming-weight-2 errors that generate d_E^2 = 2 is larger for the precoded than for the unprecoded dicode.
• Unprecoded TE achieves a lower error floor compared to precoded TE.
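The zero-distance behavior of the unprecoded dicode can be seen in a few lines. Unit-amplitude error symbols are a normalization assumption of this sketch:

```python
# Pass an error event through the dicode channel 1-D and accumulate the
# squared Euclidean distance of the resulting output error sequence.
def dicode_dE2(error):
    e = [0] + list(error) + [0]               # zero channel state before/after
    return sum((e[k] - e[k - 1]) ** 2 for k in range(1, len(e)))

# A run of identical error symbols costs the same d_E^2 regardless of length,
# because the trellis paths stay at zero distance inside the run:
print([dicode_dE2([1] * w) for w in (1, 2, 5, 20)])  # → [2, 2, 2, 2]
# A sign-alternating event, in contrast, accumulates distance:
print(dicode_dE2([1, -1, 1]))                        # → 10
```

This is exactly why long error events are cheap for the unprecoded dicode and why thinning the low-distance spectrum pays off.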
[Figure: BER versus SNR (dB), 4–10 dB, comparing simulation and bound for unprecoded TE (8/9), precoded TE (8/9), and 616/630 EPCC-TE.]
• TE: K = 4096, punctured R = 8/9, (31, 33) RSCC.
• TE-EPCC: (L = 7) EPCC, m_c = 3, d_c = 10.
• EPCC sub-code: (630, 616), R = 0.98.
[Figures: BER bounds versus SNR (dB) for precoded dicode TE, unprecoded dicode TE, and unprecoded dicode TE-EPCC with (7,5) and (5,7) RSCCs, for interleaver sizes N = 100, …, 2000; and distance spectra log T(d_E) versus d_E^2 ∈ {1,…,25} for the same six configurations.]
• Interleaver size N = 600, punctured rate R = 8/9, and (L = 1) EPCC with m_c = 3 and d_c = 10.
[Figure: BER bound versus SNR (dB), 4–13 dB, for precoded dicode TE, unprecoded dicode TE-EPCC, and unprecoded dicode TE.]
• TE: N = 1200, punctured R = 8/9, (7, 5) RSCC.
• TE-EPCC: (L = 1) EPCC, m_c = 1:10, d_c = 10.
• EPCC sub-code: (630, 616), R = 0.98.
[Figure: BER versus SNR (dB), 7–9.5 dB, for TE-EPCC with L = 2, 3, 4, 6 and m = 1, 2, 3, 4.]
• TE: N = 1200, punctured R = 8/9, (7, 5) RSCC.
• TE-EPCC: (L = 1) EPCC, d_c = 10.
• EPCC sub-code: (630, 616), R = 0.98.
[Figure: BER versus SNR (dB), 5–8 dB, for TE at rates 1/2, 2/3, 3/4, 5/6, 6/7, 10/11 and for TE-EPCC 616/630 and 199/210 with 5/6 RSCC.]
• TE: N = 4312, (7, 5) RSCC.
• TE-EPCC: (L = 1) EPCC, m_c = 3, d_c = 10.
• EPCC sub-codes: (630, 616) & (210, 199).
[Figure: minimum SNR required for BER = 1×10^-7 versus punctured rate (2/3 to 9/10) for TE without precoding, precoded TE, and TE-EPCC.]
• TE: N = 1200, (7, 5) RSCC.
• TE-EPCC: (L = 1) EPCC, m_c = 3, d_c = 10.
EPCC Based Turbo Code Performance in PMR
Perpendicular Magnetic Recording (PMR) Channel
■ Hyperbolic tangent transition response for perpendicular recording:

h(t) = tanh( 2t / (0.5795 · π · pw50) )

■ Channel density Ds ≡ pw50 / T
• pw50: −50% to 50% width of the transition response.
• T: symbol period.
(H. Sawaguchi et al., “Performance analysis of modified PRML channels for perpendicular recording systems,” J. Magn. Magn. Mater., 2001.)
PMR Continuous-time Channel Model
■ Continuous-time channel model
• h(t): hyperbolic tangent transition response, i.e., h(t) = tanh(λt).
• s(t): dibit response, i.e., s(t) = (1/2)·[h(t) − h(t − T)].
• h′(t): first-order time derivative of h(t), i.e., h′(t) = λ·sech^2(λt).
• p(t): front-end band-limiting filter (7th-order Butterworth filter).
• n(t): additive white Gaussian noise.
• j_k: random transition position jitter.
• Definition of energy E_dt: E_dt = ∫ [h′(t)]^2 dt.
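For h(t) = tanh(λt), the energy has the closed form E_dt = (4/3)·λ, since ∫ sech^4(u) du = 4/3; choosing λ = 2·atanh(0.5)/pw50 (numerically 2/(0.5795·π·pw50)) makes h(±pw50/2) = ±0.5. A quick numerical check:

```python
import math

PW50 = 1.0
lam = 2 * math.atanh(0.5) / PW50      # equals 2/(0.5795*pi*PW50) to 4 digits

def h(t):                              # transition response
    return math.tanh(lam * t)

def hp(t):                             # h'(t) = lam * sech(lam*t)^2
    return lam / math.cosh(lam * t) ** 2

# E_dt = integral of h'(t)^2 dt, approximated by a Riemann sum on [-20, 20)
dt = 1e-3
Edt_num = sum(hp(-20.0 + k * dt) ** 2 for k in range(int(40.0 / dt))) * dt
Edt_ref = 4.0 * lam / 3.0
print(round(Edt_num, 6), round(Edt_ref, 6))
```

The truncation to [−20, 20] is harmless because sech^4 decays exponentially.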
PMR Discrete-time Channel Model
■ Discrete-time channel model
• s_k ≡ [s(t) ∗ p(t)]_{t=kT},  h_k^j ≡ [h′(t) ∗ p(t)]_{t=kT},  h_k^n = [p(t) ∗ p(−t)]_{t=kT}.
• Variance of the additive white Gaussian noise (AWGN) sequence n_k: σ_n^2 = N_o / 2.
• Variance of the jitter noise j_k: σ_j^2 = M_o / 2.
• Spectral height for the mixed noise: N_α = N_o + M_o.
  − N_α signifies α% jitter noise, i.e., α = M_o / (N_o + M_o) × 100.
• SNR is defined as SNR ≡ E_dt / N_α.
Partial Response Maximum Likelihood System
[Figure panels: discrete-time dibit response at Ds = 1.1; 15-tap RLS equalizer; dibit vs. target (frequency response, dB vs. fT); target vs. dibit×equalizer (frequency response).]
■ Channel density: 1.1
• Mixed noise: 10% AWGN and 90% jitter noise, DC-full dibit response.
• Target response: 1 + 0.85D, optimized to whiten noise for the all-transition input.
EPCC Enhanced Turbo Equalizer (EPCC-TE)
[Block diagram. Encoder: RS encoder (t = 20) → (11,10) convolutional encoder (RSCC) → Π → (630,616) EPCC encoder (rate ≈ 1) → write head/medium/read head (1+0.9D PR, 90% media noise + 10% electronic noise). Decoder: SISO equalizer (4-state BCJR, 1 PDNP tap) exchanging extrinsic information λ_k^e with the EPCC SISO list decoder and the (11,10) RSCC SISO decoder (4-state BCJR) through Π and Π^-1, followed by the RS decoder (t = 20) producing the estimates x̂_k.]
TE-EPCC Performance
[Figure: BER versus E_dt/N_90 (dB), 9–15 dB, comparing uncoded BCJR/PDNP, conventional TE (0, 1, & 5 iters), EPCC-TE (0, 1, 5, 7, & 14 iters), QC-LDPC (0, 5, & 9 iters, ×15 LDPC iters), and the u.i.d. capacity bound C_u.i.d.]
EPCC Based TP (TP-EPCC)
[Block diagram. Encoder: RS encoder (t = 20); the data block of 13 shortened-(330,316) EPCC codewords is encoded row by row with EPCC and column by column with a (14,13) SPC, plus parity-on-parity, giving 14 shortened-(330,316) EPCC codewords and 330 (14,13) SPC codewords; written through the head/medium (1+0.9D PR, 90% media noise + 10% electronic noise). Decoder: channel detector (4-state BCJR, 1 PDNP tap) iterating with the TPC decoder, which alternates row-by-row EPCC list SISO decoding and column-by-column SPC decoding, followed by the outer RS decoder (t = 20).]
TP-EPCC Performance
[Figure: BER versus E_dt/N_90, 9.5–15 dB, comparing uncoded BCJR/PDNP, conventional TE (0, 1, & 5 iters), TP-EPCC (0, 2, 4, 6, 9, & 14 iters), and C_u.i.d.]
Tensor Product Parity Codes
EPC-RS
An EPC Tensor Product Code
■ Chaichanavong and Siegel (2006) proposed a tensor product code based on a single parity code + BCH as an inner code for outer RS ECC.
• Suitable for low-density longitudinal recording channels, where dominant errors have odd weight of the form [+2], [+2, −2, +2], ….
• Code combined with MTR for perpendicular recording channels.
• The tensor product code has a much higher rate than a short parity code.
• Parity code on the symbol level: fewer multiple error occurrences.
■ To achieve performance gains with respect to QLDPC we will investigate a tensor product code based on a short inner multi-parity code (EPCC) and outer QLDPC ECC.
• The EPC multi-parity code corrects any single occurrence of a dominant targeted error in a tensor symbol.
• An EPCC sequence of syndromes forms a codeword of QLDPC.
• EPCC is decoded jointly with the channel using post-processing techniques that generate a soft “syndrome codeword” to be decoded by the QLDPC non-binary message-passing decoder.
• Via channel side information, EPCC has a unique syndrome per dominant-error single occurrence. A list decoding scheme increases the decoding sphere radius of EPCC to target multiple error occurrences.
Introduction to Tensor Product Codes
(Jack K. Wolf, “On Codes Derivable from the Tensor Product of Check Matrices,” IEEE Trans. Inform. Theory, 1965.)
■ Constituent codes:
• Binary (3,1) single-error-correcting code: H1 = [1 0 1; 0 1 1].
• Doubly-extended t = 1 (5,3) RS over GF(2^2): H2 = [1 0 1 α α^2; 0 1 1 α^2 α].
■ The tensor product parity check matrix is obtained by replacing each GF(2^2) entry of H2 with that entry acting on the 2-bit H1 syndrome (read as a GF(2^2) symbol). In binary form, with 3-bit tensor symbols:

H_GF(2) =
101 000 101 011 110
011 000 011 110 101
000 101 101 110 011
000 011 011 101 110

1. This binary (15,11) tensor product code corrects any single tensor-symbol error, provided it contains a single bit error.
2. The binary constituent code rate is 1/3 and its codeword length is 3 bits.
3. The tensor product code rate is 11/15 ≈ 0.73 and its codeword length is 15 bits.
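The binary parity-check matrix above can be reconstructed from H1 and H2. The coordinate representation of GF(4) below (an element c0 + c1·α coded as the integer c0 + 2·c1) is a basis choice of this sketch:

```python
# 2x2 binary matrices that multiply a GF(4) coordinate vector (c0, c1) by
# each field element, with alpha^2 = alpha + 1:
MUL = {0: [[0, 0], [0, 0]],
       1: [[1, 0], [0, 1]],
       2: [[0, 1], [1, 1]],   # multiply by alpha
       3: [[1, 1], [1, 0]]}   # multiply by alpha^2

H1 = [[1, 0, 1], [0, 1, 1]]              # (3,1) binary code
H2 = [[1, 0, 1, 2, 3], [0, 1, 1, 3, 2]]  # (5,3) RS over GF(4)

def block(e):
    """2x3 binary block: MUL[e] times H1, mod 2."""
    return [[sum(MUL[e][r][k] * H1[k][c] for k in range(2)) % 2
             for c in range(3)] for r in range(2)]

H = []
for row in H2:                 # each GF(4) row expands to two binary rows
    blocks = [block(e) for e in row]
    for r in range(2):
        H.append([bit for b in blocks for bit in b[r]])

for r in H:
    print("".join(map(str, r)))   # reproduces the 4x15 matrix on the slide
```

Each 3-column block of H is the H1 syndrome map scaled by the corresponding H2 entry, which is exactly Wolf's construction.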
Encoding of Tensor Product Codes
■ Encoding of a tensor product code of binary code C1: (n1, k1) and non-binary code C2: (n2, k2):
• Divide n1·k2 information bits into k2 columns.
• Encode each column using C1.
• Convert the intermediate syndromes to GF(2^p1).
• Encode the intermediate non-binary syndromes using C2.
• Convert back to GF(2).
• Use the remaining p2·k1 information bits and the calculated syndrome bits to calculate the p1·p2 parity bits using back substitution and a systematic H1.
• Result: if C1 and C2 are linear-time encodable, then C1 ⊗ C2 is linear-time encodable!
[Figure: example array showing the transmitted codeword columns and the intermediate syndromes.]
An EPC-RS Tensor Product Code
■ EPC-RS constituent codes
• (18,10) EPCC over GF(2), rate = 0.556, 8 parity bits.
• (255,195) RS over GF(2^8), rate = 0.765, t = 30, 60 parity symbols.
■ The EPC-RS tensor product code is a binary (4590,4110) code, rate = 0.895, 480 parity bits.
• Codeword length = 18×255 bits, parity = 8×60 bits.
■ The tensor code can correct any combination of 30 or fewer tensor-symbol errors, given that each 18-bit tensor symbol has a single occurrence of a dominant error that is correctable by EPCC.
■ H1 = [1  α  α^2  α^3  α^4  α^5  α^6  α^7  α^133  α^134  α^96  α^90  α^82  α^236  α^234  α^217  α^92  α^93], one GF(2^8) coefficient per bit of the 18-bit tensor symbol.
[Diagram: the 4590-bit codeword viewed as 255 tensor symbols of 18 bits each.]
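The code parameters follow directly from the constituent codes:

```python
# Tensor-product code parameters from the constituent codes.
n1, k1 = 18, 10          # inner EPCC over GF(2)
n2, k2 = 255, 195        # outer RS over GF(2^8)
p1, p2 = n1 - k1, n2 - k2

n = n1 * n2              # codeword length in bits
parity = p1 * p2         # parity bits
k = n - parity
print(n, k, parity, round(k / n, 3))  # → 4590 4110 480 0.895
```

The tensor construction keeps only p1·p2 = 480 parity bits instead of the p1·n2 = 2040 bits a per-symbol EPCC would need, which is where the rate gain comes from.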
Hard Decoding of the EPC-RS Tensor Product Code
1. Compute the EPCC binary syndromes of the 255 tensor symbols (18 bits each) and convert them to GF(2^8).
2. Run RS hard decoding in GF(2^8) (or any list/soft decoding algorithm) on the 255 RS symbols (8 bits each).
3. Convert back to corrected binary EPCC syndromes and form the EPCC error syndromes (8 bits per tensor symbol).
4. For each tensor symbol, find the most likely single and double dominant errors matching its error syndrome, and add them to the ML word.
RS-EPC TPPC Residual Errors
■ Non-targeted single error occurrences.
■ More than double multiple error occurrences.
■ Double error occurrences that have a zero EPCC syndrome, since RS generates the syndromes of errors as input to EPCC.
■ Residual errors can be corrected by an outer RS code of small correction power, since the number of residual tensor symbols in error is small.
■ EPCC can work as an error-locating code: erasure decoding of the outer RS.
[Figure: examples of residual error events, e.g. an error e13(x) spanning 13 bits across 18-bit tensor-symbol boundaries.]
EPC-RS Hard Decoder
[Block diagram: received samples r_k → binary Viterbi → hard estimates ĉ_k; EPCC syndrome generator → tensor product hard decoder: RS decoder (t = 27) over GF(2^8) on the syndrome sequence, EPCC list decoder (25 test words), and modulo-2 correction of the ML word; outer RS decoder (t = 3) over GF(2^10) → b̂_k.]
Semi-Analytic & Fully-Analytic Multinomial SER Estimation
� Step 1: Estimate P1, …, Pm
• Simulation:
1. slide a window of size m symbols over the channel detector’s simulated hard output and count occurrences of 1 to m consecutive symbol errors.
2. divide the m cumulative sums by the number of simulated symbols.
• Analytic:
1. P1=∑ (probability of 1 dominant error-pattern that spans 1 symbol).
2. P2=∑ (probability of 1 dominant error-pattern that spans 2 symbols)+ ∑(probability of 2 dominant error-patterns encapsulated in two separate consecutive symbols).
3. P3=∑ (probability of 1 dominant error-pattern that spans 3 symbols)+ ∑ (probability of 1 dominant error-pattern that spans 2 consecutive symbols)
×(probability of 1 dominant error-pattern that spans a 3rd succeeding symbol )
+ ∑ (probability of 3 dominant error-patterns encapsulated in 3 separate consecutive symbols).
■ Step 2: sector failure probability from the multinomial distribution:

P_W = ∑_{(s_1,…,s_m): ∑_i i·s_i > t} [ n! / (s_0! s_1! ⋯ s_m!) ] · P_0^{s_0} P_1^{s_1} ⋯ P_m^{s_m},

where s_0 = n − ∑_{i=1}^{m} s_i and P_0 = 1 − ∑_{i=1}^{m} P_i.
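Step 2 can be evaluated directly for toy sizes; the n, t, and P_i values below are illustrative, not the talk's:

```python
from itertools import product
from math import factorial, prod

def multinom(n, counts):
    """Multinomial coefficient n! / (counts[0]! * counts[1]! * ...)."""
    c = factorial(n)
    for s in counts:
        c //= factorial(s)
    return c

def sector_failure(n, P, t):
    """Pr(sum_i i*s_i > t) for event probabilities P = [P1, ..., Pm]."""
    m, P0 = len(P), 1.0 - sum(P)
    pw = 0.0
    for s in product(range(n + 1), repeat=m):
        s0 = n - sum(s)
        if s0 < 0 or sum(i * si for i, si in enumerate(s, 1)) <= t:
            continue
        pw += multinom(n, (s0, *s)) * P0 ** s0 * prod(p ** si for p, si in zip(P, s))
    return pw

print(sector_failure(6, [0.05, 0.02, 0.01], t=2))
```

With t = −1 the constraint admits every outcome, so the sum collapses to (P_0 + P_1 + … + P_m)^n = 1, a useful self-check of the implementation.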
Symbol Error Event Probabilities: Single-level RS vs. EPC-RS
[Figure: symbol error event probabilities P1, P2, P3 versus SNR (dB), 6.5–9.5 dB, for 10-bit symbols (single-level RS) and 18-bit symbols (EPC-RS).]
• ISI channel 5 + 6D − D^3, AWGN.
• Shortened (450, 450−2T) RS over GF(2^10).
• (18, 10) EPCC + shortened (250, 250−2T_tp) RS over GF(2^8).
Minimum SNR Required for SFR = 10^-13: Single-level RS vs. EPC-RS
[Figure: minimum SNR (dB) versus correction power t = 10–100 for RS and TP-RS under ~1/R and ~1/R^2 rate penalties.]
• ISI channel 5 + 6D − D^3, AWGN.
• Shortened (450, 450−2T) RS over GF(2^10).
• (18, 10) EPCC + shortened (250, 250−2T_tp) RS over GF(2^8).
Difference of Minimum SNR Required for SFR = 10^-13: Single-level RS vs. EPC-RS
[Figure: minSNR(RS) − minSNR(TP-RS) versus code rate R = 0.5–1.0.]
• ISI channel 5 + 6D − D^3, AWGN.
• Shortened (450, 450−2T) RS over GF(2^10).
• (18, 10) EPCC + shortened (250, 250−2T_tp) RS over GF(2^8).
Minimum SNR Required for SFR = 10^-13: Single-level RS vs. EPC-RS
[Figure: minimum SNR (dB) versus sector size, 1/2K to 4K.]
• ISI channel 5 + 6D − D^3, AWGN.
• Shortened RS, GF(2^12), R = 0.89.
• (24, 14) EPCC + shortened RS over GF(2^10), total R = 0.89.
Tensor Product Parity Codes
EPC-QLDPC
Non-binary LDPC: Complexity and Performance
■ Davey and MacKay (1998) have shown that the near-Shannon-limit performance of binary LDPC codes in AWGN can be significantly enhanced by a move to fields of higher order.
■ For monotonic improvement in performance with field order, the parity check matrix for short blocks has to be very sparse:
• Column-weight-3 codes over GF(q) exhibit worse BER as q increases.
• Column-weight-2 codes over GF(q) exhibit monotonically lower BER as q increases.
• Results confirmed by Hu, Eleftheriou, and Arnold (2005): the optimum degree sequence favors a regular graph of degree 2 in all symbol nodes.
■ Chang and Cruz (2008) studied the decoding time complexity of non-binary LDPC for PR channels:
• Moving from binary to non-binary LDPC results in a gain of around 1 dB.
• The size of the Galois field does not affect the decoding complexity.
• The decoding complexity ratio of non-binary to binary LDPC-coded systems can be as high as 7.42 (in the number of FLP ops).
• Time complexity ratios are always smaller than the ratios of FLP ops.
Soft Decoding of the EPC-LDPC Tensor Product Code
[Block diagram: received samples r_k → binary Viterbi → hard estimates ĉ_k; a bank of correlators for the targeted errors e1, …, e_lmax feeds the EPCC list decoder, which outputs a list of likely errors and reliabilities for each tensor symbol i (1 ≤ i ≤ 390). From this list a syndrome p.m.f. γ_i(j) = Pr[Syndrome(i) = j], j ∈ GF(2^6), 0 ≤ j ≤ 63, is generated per tensor symbol. The syndrome p.m.f.s are decoded by an FFT-based SPA LDPC decoder over GF(2^6); its output is convolved and mapped to bit-level a priori information λ_k for the next global (channel) iteration, and an outer RS decoder (t = 6) cleans up residual errors, producing b̂_k. Example p.m.f. of a tensor symbol shown with the mass concentrated on a few syndrome values (j = 66, 127, 233).]
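The key interface is turning a list-decoder output into a syndrome p.m.f. A hedged sketch follows; the exponential weighting, the metric values, and the 8-bit index range (chosen to accommodate the slide's example values) are assumptions of this sketch, not the talk's exact post-processing rule:

```python
import math

def syndrome_pmf(candidates, q=256):
    """candidates: list of (syndrome_index, metric); returns a length-q p.m.f."""
    pmf = [0.0] * q
    for syn, metric in candidates:
        pmf[syn] += math.exp(metric)      # larger metric -> more probable
    total = sum(pmf)
    return [p / total for p in pmf]

# Mass concentrates on a few syndromes, as in the slide's example p.m.f.
pmf = syndrome_pmf([(127, 2.0), (66, 1.1), (233, 0.3)])
print(max(range(256), key=lambda j: pmf[j]))  # → 127
```

The resulting vector is exactly the kind of non-binary soft input a GF(q) sum-product decoder consumes at each symbol node.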
SFR Comparison of Single-level LDPC Systems
[Figure: sector error rate versus Eb/No (dB), 3.8–5.8 dB, for GF(256) LDPC (col. wt. 2), GF(64) LDPC (col. wt. 2), GF(64) LDPC (col. wt. 3), GF(2) LDPC (col. wt. 5), and C_U.I.D.]
• (4550, 4095) GF(2)-LDPC, col. wt. = 5, cycle size = 91, binary BCJR, 10×50 TE.
• (570, 510) GF(2^8)-LDPC, 4560 bits, col. wt. = 2, cycle size = 15 symbols, GF(2^8)-BCJR, 0×50 TE.
• (760, 684) GF(2^6)-LDPC, 4560 bits, col. wt. = 2, cycle size = 19 symbols, GF(2^6)-BCJR, 0×50 TE.
• (775, 700) GF(2^6)-LDPC, 4650 bits, col. wt. = 3, cycle size = 25 symbols, GF(2^6)-BCJR, 0×50 TE.
SFR of Ideal T-EPCC-QLDPC
[Figure: sector error rate versus Eb/No (dB), 4.3–5.7 dB, for GF(256) LDPC (col. wt. 2), GF(64) LDPC (col. wt. 3), binary LDPC (col. wt. 5), and ideal 1/2K and 1K T-EPCC-GF(64)LDPC.]
• (4550, 4095) GF(2)-LDPC, col. wt. = 5, cycle size = 91, binary BCJR, 10×50 TE.
• (570, 510) GF(2^8)-LDPC, 4560 bits, col. wt. = 2, cycle size = 15 symbols, GF(2^8)-BCJR, 0×50 TE.
• (775, 700) GF(2^6)-LDPC, 4650 bits, col. wt. = 3, cycle size = 25 symbols, GF(2^6)-BCJR, 0×50 TE.
• EPCC (12, 6), R = 1/2:
  − 1/2KB T-EPCC-QLDPC: (4680, 4212) TPPC, R = 0.9; (390, 312) PEG-QC GF(2^6) LDPC, R = 0.8, col. wt. = 3, cycle size 26.
  − 1KB T-EPCC-QLDPC: (9360, 8424) TPPC, R = 0.9; (780, 624) PEG-QC GF(2^6) LDPC, R = 0.8, col. wt. = 3, cycle size 52.
SFR of Practical T-EPCC-QLDPC under 1/R Penalty
[Figure: sector error rate versus Eb/No (dB), 4.3–5.7 dB, comparing the single-level LDPC systems with ideal and real 1/2K and 1K T-EPCC-GF(64)LDPC.]
• 1/2KB T-EPCC-QLDPC: (4680, 4212) TPPC, R = 0.9; (390, 312) PEG-QC GF(2^6) LDPC, R = 0.8, col. wt. = 3, cycle size 26; EPCC (12, 6), R = 1/2; 3×50 TE; outer RS, t = 6.
• 1KB T-EPCC-QLDPC: (9360, 8424) TPPC, R = 0.9; (780, 624) PEG-QC GF(2^6) LDPC, R = 0.8, col. wt. = 3, cycle size 52; EPCC (12, 6), R = 1/2; 3×50 TE; outer RS, t = 12.
SFR of Practical T-EPCC-QLDPC under 1/R^2 Penalty
[Figure: sector error rate versus Eb/No (dB), 4.4–5.7 dB, comparing the single-level LDPC systems with ideal and real 1/2K and 1K T-EPCC-GF(64)LDPC.]
• 1/2KB T-EPCC-QLDPC: (4680, 4212) TPPC, R = 0.9; (390, 312) PEG-QC GF(2^6) LDPC, R = 0.8, col. wt. = 3, cycle size 26; EPCC (12, 6), R = 1/2; 3×50 TE; outer RS, t = 6.
• 1KB T-EPCC-QLDPC: (9360, 8424) TPPC, R = 0.9; (780, 624) PEG-QC GF(2^6) LDPC, R = 0.8, col. wt. = 3, cycle size 52; EPCC (12, 6), R = 1/2; 3×50 TE; outer RS, t = 12.
Thesis Contributions
■ Proposed a channel-matched turbo equalization scheme based on the SISO list decoder of EPCC, termed TE-EPCC.
■ Demonstrated the “spectral thinning” effect achieved by incorporating EPCC in TE of the dicode channel.
■ Derived an upper bound on the ML BER of TE-EPCC.
■ Proposed a turbo-product code based on EPCC.
■ Proposed an error-pattern correcting tensor product code that is linear-time encodable.
■ Derived a fully analytic multinomial method to estimate the SER of RS in ISI.
■ Designed a two-level coding scheme based on the tensor product of EPCC and QLDPC that achieves a better complexity-performance trade-off compared to single-level QLDPC.
■ Designed a soft iterative decoder of T-EPCC-QLDPC.
Thank you!
[Figures: symbol error event probabilities P1, P2, P3 versus SNR (dB), 6.5–9.5 dB, simulation versus analytic estimates (two panels).]
[Figure: SER versus SNR (dB), 6.5–10 dB, for RS (t = 5, 10) and TP-RS (t = 6, 12), analytic versus counted from simulation.]
[Figure: analytic versus semi-analytic SER estimates, SNR (dB) 7–10, for t = 5:5:40.]
Non-binary LDPC: Hamming Weight Spectrum (Hu, 2004)