
EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS. Euro. Trans. Telecomms. 2006; 17:483–488. Published online 15 June 2006 in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/ett.1133

Letter

Transmission Systems

The input–output weight enumeration of binary Hamming codes

Pavel Loskot∗ and Norman C. Beaulieu

iCORE Wireless Communication Laboratory, ECE Department, University of Alberta, Edmonton, Alberta, Canada, T6G 2V4

SUMMARY

We show that binary Hamming codes can be constructed recursively. The recursive structure is used to efficiently enumerate the input–output weights. Hence, the bit error rate of binary Hamming codes with antipodal signalling and hard-decision demodulation used on additive white Gaussian noise channels can be evaluated exactly. The numerically computed coding gain of Hamming codes reveals the surprising fact that the coding gain is not monotonically increasing with signal-to-noise ratio. Copyright © 2006 AEIT

1. INTRODUCTION

We make the important observation that binary Hamming codes can be constructed recursively from binary block repetition codes with a parity bit. The recursive construction can be exploited to compute the input–output weight enumerator (IOWE) for any block-length, whereas brute-force enumeration can only be used for short block-lengths. Although the IOWE function has been obtained recently in Reference [1], the recursive evaluation of the IOWE presented in this letter appears to be more computationally efficient. The IOWE can be used to implement encoding and decoding, and to evaluate the exact probability of decoded bit error for binary antipodal signalling and hard-decision demodulation over an additive white Gaussian noise (AWGN) channel. The numerically computed coding gain is shown not to be monotonic in the signal-to-noise ratio (SNR), but rather exhibits a minimum for a particular small value of the SNR. This contradicts statements found in the literature.

* Correspondence to: Pavel Loskot, 2nd Floor, ECERF, University of Alberta, Edmonton, Alberta, Canada, T6G 2V4. E-mail: [email protected]

Contract/grant sponsors: Alberta Ingenuity Studentship; iCORE Graduate Student Scholarship; iCORE Research Chair in Broadband Wireless Communications.

2. CODE CONSTRUCTION

The binary Hamming code is denoted as $\mathcal{H}_m = [n, k, d_{\min}]$ where $n = 2^m - 1$ is the codeword length, $k = 2^m - 1 - m$ is the code dimensionality (the length of the input information vector), $m = n - k \ge 2$ is the number of parity bits, and $d_{\min} = 3$ is the minimum Hamming distance between any two codewords [2]. In a systematic form, $G_m = [I_{(k)} \,\vdots\, P_{(k,m)}] \in \mathbb{Z}_2^{(k,n)}$ is the generator matrix, where $\mathbb{Z}_2 = \{0, 1\}$, $P_{(k,m)} \in \mathbb{Z}_2^{(k,m)}$ is the parity matrix and $I_{(k)} \in \mathbb{Z}_2^{(k,k)}$ is the identity matrix; the corresponding parity check matrix is $H_m = [P^T_{(k,m)} \,\vdots\, I_{(m)}]^T \in \mathbb{Z}_2^{(n,m)}$. The Hamming codes are single-error correcting and two-error detecting perfect codes. Hence, the parity check matrix is composed of all non-zero $m$-tuples and can be readily rearranged into a systematic form. Denote the all-ones matrix as $\mathbf{1}$, and let $\mathcal{H}_m = [n_1, k_1, 3]$; then $\mathcal{H}_{m+1} = [n_2, k_2, 3]$ has the generator matrix

Received 20 July 2004. Revised 8 March 2005. Accepted 14 March 2006.

Copyright © 2006 AEIT


$$G_{m+1} = \left[\; \underbrace{\begin{matrix} I_{(k_1)} & 0_{(k_1,m)} & 0_{(k_1,k_1)} \\ 0_{(m,k_1)} & I_{(m)} & 0_{(m,k_1)} \\ 0_{(k_1,k_1)} & 0_{(k_1,m)} & I_{(k_1)} \end{matrix}}_{I_{(k_2)}} \;\middle|\; \underbrace{\begin{matrix} 0_{(k_1,1)} & P_{(k_1,m)} \\ 1_{(m,1)} & I_{(m)} \\ 1_{(k_1,1)} & P_{(k_1,m)} \end{matrix}}_{P_{(k_2,m+1)}} \;\right]$$

and the parity check matrix

$$H_{m+1} = \left[\; \underbrace{\begin{matrix} 0_{(1,k_1)} & 1_{(1,m)} & 1_{(1,k_1)} \\ P^T_{(k_1,m)} & I_{(m)} & P^T_{(k_1,m)} \end{matrix}}_{P^T_{(k_2,m+1)}} \;\middle|\; \underbrace{\begin{matrix} 1 & 0_{(1,m)} \\ 0_{(m,1)} & I_{(m)} \end{matrix}}_{I_{(m+1)}} \;\right]^T$$

where

$$n_2 = 2m + 2k_1 + 1 = 2^{m+1} - 1, \qquad k_2 = 2^{m+1} - m - 2 = n_1 + k_1.$$

Based on the structure of the generator matrix, $G_{m+1}$, we can show that the codewords of the $\mathcal{H}_{m+1}$ code can be constructed as a modulo-2 sum of the zero-padded codewords of the $\mathcal{H}_m$ code and the block repetition code of rate 1/2, $\Pi^{k}_2 = (\mathbb{Z}^k_2, \mathbb{Z}^k_2) = [2k, k, 2]$ (i.e. the block of $k$ information bits and its copy are concatenated), with an appended parity bit. This is proved in the following theorem.

Theorem 1. The Hamming code $\mathcal{H}_{m+1} = (\mathcal{H}_m \circ \mathbb{Z}^{(2^m-1)}_2, \pi) = \{(\mathbf{x} \circ \mathbf{y}, \pi(\mathbf{y})),\ \mathbf{x} \in \mathcal{H}_m,\ \mathbf{y} \in \mathbb{Z}^{(2^m-1)}_2\}$ where the operator, $\circ$, combines the codewords as

$$\mathbf{x} \circ \mathbf{y} = (x_0, x_1, \ldots, x_{n_1-1}) \circ (y_0, y_1, \ldots, y_{n_1-1}) = (y_0, \ldots, y_{n_1-1},\, x_0 + y_0, \ldots, x_{n_1-1} + y_{n_1-1}) = (\mathbf{y}, \mathbf{x} + \mathbf{y})$$

and $n_1 = 2^m - 1$, $\pi(\mathbf{y}) = \sum_{i=0}^{n_1-1} y_i$ is the even-parity bit, and all additions are modulo 2.

Proof. Let the input information vector, $\mathbf{u}$, of the $\mathcal{H}_{m+1}$ code having dimensionality, $\dim(\mathbf{u}) = k_2$, be partitioned as $\mathbf{u} = (\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3)$ where $\dim(\mathbf{u}_1) = \dim(\mathbf{u}_3) = k_1$ and $\dim(\mathbf{u}_2) = m$. Then, the codeword $\mathbf{u} G_{m+1} = (\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3 \,\vdots\, \pi(\mathbf{u}_2) + \pi(\mathbf{u}_3),\, \mathbf{u}_1 P_{(k_1,m)} + \mathbf{u}_2 + \mathbf{u}_3 P_{(k_1,m)})$. Hence, $\mathbf{x} = (\mathbf{u}_1 + \mathbf{u}_3,\, \mathbf{u}_1 P_{(k_1,m)} + \mathbf{u}_3 P_{(k_1,m)})$ is a codeword of the $\mathcal{H}_m$ code, and $(\mathbf{y}, \mathbf{y}) = (\mathbf{u}_3, \mathbf{u}_2, \mathbf{u}_3, \mathbf{u}_2)$ is the block repetition code $\Pi^{n_1}_2$ with the parity bit $\pi = \pi(\mathbf{y}) = \pi(\mathbf{u}_3) + \pi(\mathbf{u}_2)$. Finally, applying the operator, $\circ$, concludes the proof. □
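The recursion of Theorem 1 can be checked numerically. The sketch below (an assumption of one permutation-equivalent coordinate ordering, not necessarily the systematic form above) builds the codeword set of $\mathcal{H}_{m+1}$ as $\{(\mathbf{y}, \mathbf{x}+\mathbf{y}, \pi(\mathbf{y}))\}$ starting from the $[3, 1, 3]$ repetition code $\mathcal{H}_2$:

```python
from itertools import product

def hamming_codewords(m):
    """Codewords of the [2^m - 1, 2^m - 1 - m, 3] binary Hamming code,
    built recursively as (y, x + y, parity(y)) per Theorem 1
    (one permutation-equivalent coordinate ordering)."""
    if m == 2:
        # Base case: H_2 is the [3, 1, 3] repetition code.
        return {(0, 0, 0), (1, 1, 1)}
    n1 = 2 ** (m - 1) - 1
    words = set()
    for x in hamming_codewords(m - 1):
        for y in product((0, 1), repeat=n1):
            parity = sum(y) % 2                     # even-parity bit pi(y)
            words.add(y + tuple((a + b) % 2 for a, b in zip(x, y)) + (parity,))
    return words

C3 = hamming_codewords(3)                # the [7, 4, 3] Hamming code
dmin = min(sum(c) for c in C3 if any(c)) # minimum nonzero codeword weight
print(len(C3), dmin)                     # 16 3
```

The set has $2^{k_2} = 16$ codewords of length 7 and minimum distance 3, as expected for $\mathcal{H}_3$.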

The construction using the operator, $\circ$, was introduced in Reference [3, p. 447], later used independently in Reference [4, p. 717], and generalised in Reference [5]. Similar to the construction of Theorem 1, Vasil'ev [6, p. 77] applies a strictly nonlinear mapping as the parity bit to construct perfect single-error correcting codes which are not equivalent to any linear code. However, the Vasil'ev construction using a linear mapping to generate binary Hamming codes is not considered in Reference [6, p. 77]. The construction of Theorem 1 can be used for the encoding and decoding of Hamming codes as follows.

Proposition 1. The Hamming code $\mathcal{H}_{m+1}$ can be encoded (decoded) using the encoder (decoder) for the code $\mathcal{H}_m$.

Proof. Encoding proceeds directly according to Theorem 1. Assume a binary symmetric channel. We will show that one bit error in a $\mathcal{H}_{m+1}$ codeword can be corrected using the decoder for the $\mathcal{H}_m$ code. Let $(\mathbf{x}, \mathbf{y}, \pi)$ be the transmitted binary codeword of the $\mathcal{H}_{m+1}$ code. In a systematic form, according to Theorem 1, let $\dim(\mathbf{x}) = \dim(\mathbf{y}) = 2^m - 1$ and $\dim(\pi) = 1$. Then, let $(\mathbf{x}', \mathbf{y}', \pi')$ be the corresponding received vector. If an error is within the $(\mathbf{x}', \pi')$ bits, the even-parity of $(\mathbf{x}', \pi')$ is 1, and the nonzero syndrome $(\mathbf{x}' + \mathbf{y}') H_m$ corresponds to the position of the error within $\mathbf{x}'$. Otherwise, the zero syndrome $(\mathbf{x}' + \mathbf{y}') H_m$ indicates $\pi'$ is in error. If an error is within $\mathbf{y}'$, the even-parity of $(\mathbf{x}', \pi')$ is zero, and the syndrome $(\mathbf{x}' + \mathbf{y}') H_m$ corresponds to the position of the error within $\mathbf{y}'$. □

Therefore, the encoding (decoding) of the Hamming code $\mathcal{H}_{m+1}$ can be done using the less complex encoder (decoder) for the Hamming code $\mathcal{H}_m$.
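A minimal single-error decoder following the case analysis in the proof of Proposition 1 can be sketched for $\mathcal{H}_3$ as follows; the inner $\mathcal{H}_2 = [3, 1, 3]$ decoder is a majority vote, and the function name and bit ordering are illustrative assumptions:

```python
def decode_h3(r):
    """Single-error decoder for the recursively built [7, 4, 3] code,
    following the proof of Proposition 1; the received word is
    r = (first half a, second half b, parity bit p), where a + b should
    be a codeword of the inner [3, 1, 3] repetition code."""
    a, b, p = list(r[:3]), list(r[3:6]), r[6]
    t = [(u + v) % 2 for u, v in zip(a, b)]  # a + b, ideally in H_2
    e = (sum(a) + p) % 2                     # even-parity check of (a, p)
    w = sum(t)
    if w in (0, 3):                          # zero syndrome for H_2
        if e == 1:
            p ^= 1                           # only the parity bit erred
    elif e == 1:
        a[t.index(1) if w == 1 else t.index(0)] ^= 1  # error in first half
    else:
        b[t.index(1) if w == 1 else t.index(0)] ^= 1  # error in second half
    return tuple(a) + tuple(b) + (p,)

r = (1, 0, 1, 0, 1, 1, 0)   # codeword (1,0,1,0,1,0,0) with bit 5 flipped
print(decode_h3(r))         # (1, 0, 1, 0, 1, 0, 0)
```

For every codeword and every single-bit error pattern this decoder returns the transmitted codeword, matching the three cases of the proof.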

3. INPUT–OUTPUT WEIGHT ENUMERATOR

For convenience, we represent the IOWE [7, p. 513] of the code $[n, k, d_{\min}]$ as the matrix $A_{(iowe)}$ having the $o$-th row and $w$-th column element, $[A_{(iowe)}]_{o,w}$, equal to the number of codewords of input information vector weight $o$ and total (output) weight $w$, where $o = 0, 1, \ldots, k$ and $w = 0, 1, \ldots, n$. Note that $[A_{(owe)}]_w = [\mathbf{1}_{(1,k+1)} A_{(iowe)}]_w$ is the output weight enumerator, which is well known [2, p. 81]. The IOWE can be related to the input-redundancy weight enumerator (IRWE) [8] as $[A_{(irwe)}]_{o,p} = [A_{(iowe)}]_{o,o+p}$ where $p = (w - o) \in \{0, 1, \ldots, m\}$ is the weight of the parity check bits. Note also that, contrary to the output weight enumerator, in general, even for a systematic representation, the code IOWE is not unique but depends on the form of the generator matrix. However, in the particular case of Hamming codes, the IOWE is unique.
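For small codes the IOWE matrix is easily tabulated by brute force. The sketch below enumerates a systematic $[7, 4, 3]$ Hamming code (one assumed choice of the parity matrix $P$; the Hamming IOWE is unique, as noted above) and recovers the well-known output weight enumerator $1, 7, 7, 1$ at weights $0, 3, 4, 7$:

```python
from itertools import product

# Parity matrix of a systematic [7, 4, 3] Hamming code (one standard
# choice; its parity check matrix columns are all nonzero 3-tuples).
P = [(1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]

def iowe_brute_force():
    """Brute-force IOWE matrix: A[o][w] counts codewords with input
    (information) weight o and total output weight w."""
    k, n = 4, 7
    A = [[0] * (n + 1) for _ in range(k + 1)]
    for u in product((0, 1), repeat=k):
        parity = [sum(ui * pi for ui, pi in zip(u, col)) % 2
                  for col in zip(*P)]
        o = sum(u)
        A[o][o + sum(parity)] += 1
    return A

A = iowe_brute_force()
owe = [sum(A[o][w] for o in range(5)) for w in range(8)]
print(owe)   # [1, 0, 0, 7, 7, 0, 0, 1]
```

The matrix also satisfies the symmetry of Claim 2 below, since the all-ones vector is a codeword.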


The IOWE of the binary Hamming code $\mathcal{H}_m$ can be obtained by a brute-force method, say, for $m \le 5$. For larger values of $m$ (i.e. $m > 5$), recursive evaluation of the IOWE exploiting the recursive structure of binary Hamming codes is computationally efficient. The derivation is performed in the Appendix using the following claims.

Claim 1. The IOWE matrix of serially concatenated codes $\mathcal{C}_1$ and $\mathcal{C}_2$ having the IOWE matrices $A^{\mathcal{C}_1}_{(iowe)}$ and $A^{\mathcal{C}_2}_{(iowe)}$, respectively, is given by the two-dimensional (2D) convolution, that is,

$$A^{(\mathcal{C}_1,\mathcal{C}_2)}_{(iowe)} = A^{\mathcal{C}_1}_{(iowe)} \otimes A^{\mathcal{C}_2}_{(iowe)}.$$

Proof. Let $\mathbf{x} \in \mathcal{C}_1$ have input and output weights $o_1$ and $w_1$, respectively. Similarly, $\mathbf{y} \in \mathcal{C}_2$ has weights $o_2$ and $w_2$. The serially concatenated codeword $(\mathbf{x}, \mathbf{y}) \in (\mathcal{C}_1, \mathcal{C}_2)$ has weights $o = o_1 + o_2$ and $w = w_1 + w_2$. For every pair $o$ and $w$, we sum over all permissible codewords $\mathbf{x}$ and $\mathbf{y}$, that is,

$$\bigl[A^{(\mathcal{C}_1,\mathcal{C}_2)}_{(iowe)}\bigr]_{o,w} = \sum_u \sum_v \bigl[A^{\mathcal{C}_1}_{(iowe)}\bigr]_{u,v} \bigl[A^{\mathcal{C}_2}_{(iowe)}\bigr]_{o-u,\,w-v}. \qquad \square$$

Claim 2. If the all-ones vector is a codeword of a linear binary block code, then the IOWE matrix of the code is symmetric, that is, $[A_{(iowe)}]_{o,w} = [A_{(iowe)}]_{k-o,\,n-w}$.

Proof. The code is linear, and hence, for every codeword, $\mathbf{c} \in \mathcal{C}$, there exists exactly one complementary codeword, $\bar{\mathbf{c}} \in \mathcal{C}$. □

Corollary 1. $[A_{(iowe)}]_{0,0} = [A_{(iowe)}]_{k,n} = 1$.
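Claim 1 translates directly into code: the IOWE of a serial concatenation is the 2D convolution of the component IOWE matrices. A sketch, illustrated on two $[3, 2, 2]$ single-parity-check codes (the example codes are an assumption for illustration, not taken from the letter):

```python
def iowe_concat(A1, A2):
    """IOWE of the serial concatenation of two codes (Claim 1):
    2D convolution of their IOWE matrices, computed by direct summation."""
    k1, n1 = len(A1) - 1, len(A1[0]) - 1
    k2, n2 = len(A2) - 1, len(A2[0]) - 1
    A = [[0] * (n1 + n2 + 1) for _ in range(k1 + k2 + 1)]
    for o1 in range(k1 + 1):
        for w1 in range(n1 + 1):
            if A1[o1][w1] == 0:
                continue
            for o2 in range(k2 + 1):
                for w2 in range(n2 + 1):
                    A[o1 + o2][w1 + w2] += A1[o1][w1] * A2[o2][w2]
    return A

# IOWE of the [3, 2, 2] single-parity-check code {000, 011, 101, 110}:
# rows o = 0..2, columns w = 0..3.
spc = [[1, 0, 0, 0],
       [0, 0, 2, 0],
       [0, 0, 1, 0]]
A = iowe_concat(spc, spc)   # IOWE of the [6, 4] concatenation
```

The resulting matrix enumerates all $2^4 = 16$ codewords of the length-6 concatenation.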

Hence, the IOWE matrix, $A^{\mathcal{H}_{m+1}}_{(iowe)}$, of the $\mathcal{H}_{m+1} = [n_2, k_2, 3]$ code is constructed from the IOWE matrix, $A^{\mathcal{H}_m}_{(iowe)}$, of the $\mathcal{H}_m = [n_1, k_1, 3]$ code as

$$\bigl[A^{\mathcal{H}_{m+1}}_{(iowe)}\bigr]_{o,w} = \begin{cases} 1 & w = 0,\ o = 0 \ \text{ and }\ w = n_2,\ o = k_2 \\[4pt] \Lambda\bigl(o, w, I'_1, A^{\mathcal{H}_m}_{(iowe)}\bigr) + \Lambda\bigl(o, w-1, I''_1, A^{\mathcal{H}_m}_{(iowe)}\bigr) & w = 3, 4, \ldots, n_1;\ o = \max(1, w-m-1), \ldots, w \\[4pt] \bigl[A^{\mathcal{H}_{m+1}}_{(iowe)}\bigr]_{k_2-o,\,n_2-w} & w = n_1+1, \ldots, n_2-3;\ o = 0, 1, \ldots, k_2 \\[4pt] 0 & \text{otherwise} \end{cases} \tag{1}$$

where

$$\Lambda\bigl(o, w, I^{\prime,\prime\prime}_1, A^{\mathcal{H}_m}_{(iowe)}\bigr) = \sum_{x=\max(w-2m,\,o-m,\,0)}^{\min(w,\,2k_1,\,o)} \Biggl\{ \binom{k_1}{x/2} \binom{m}{o-x} I^{\prime,\prime\prime}_1 + \binom{m}{o-x-\frac{w-x-m}{2}} 2^{k_1-1} I_2 + \sum_{v=3}^{n_1-3} \sum_{u=\max(1,v-m)}^{\min(v,k_1-1)} \bigl[A^{\mathcal{H}_m}_{(iowe)}\bigr]_{u,v}\, 2^{u-1} \binom{k_1-u}{\frac{x-u}{2}} \binom{m-v+u}{\frac{w-x-v+u}{2}} \binom{v-u}{o-x-\frac{w-x-v+u}{2}} I_3 \Biggr\}$$

$$I'_1 = \delta_{o-x-\frac{w-x}{2}}\, \mathrm{mod}_2(x+1) \bigl(\delta_{\mathrm{mod}_4(x)}\, \mathrm{mod}_2(o+1) + \delta_{\mathrm{mod}_4(x-2)}\, \mathrm{mod}_2(o)\bigr)$$

$$I''_1 = \delta_{o-x-\frac{w-x}{2}}\, \mathrm{mod}_2(x+1) \bigl(\delta_{\mathrm{mod}_4(x)}\, \mathrm{mod}_2(o) + \delta_{\mathrm{mod}_4(x-2)}\, \mathrm{mod}_2(o+1)\bigr)$$

$$I_2 = \mathrm{mod}_2(-x-1+k_1)\, \mathrm{mod}_2(w-n_1-1)\, \delta_{x-k_1}\, \delta_{w-x-m}$$

$$I_3 = \mathrm{mod}_2(-x-1+u)\, \mathrm{mod}_2(w-v-1)$$

and $\delta$ is the Kronecker delta, $\mathrm{mod}_b(a) = a - \lfloor a/b \rfloor b$ where $\lfloor \cdot \rfloor$ is the floor function, and the binomial coefficient

$$\binom{a}{b} = \begin{cases} \dfrac{a!}{b!\,(a-b)!} & a \ge b \ge 0 \\ 0 & \text{otherwise.} \end{cases}$$

4. PROBABILITY OF DECODED BIT ERROR AND CODING GAIN

Assume binary antipodal signalling, an AWGN channel, hard-decision demodulation and complete (standard array) decoding. Then, for equally probable codewords, assuming the all-zeros codeword was transmitted (i.e. the code is linear), the average probability of decoded bit error is [6, p. 20], [7, p. 513], [9, p. 244], [10, p. 804],

$$P_b(e) = \frac{1}{k} \sum_{j=d_{\min}}^{n} B_j\, P(j) \tag{2}$$


where $B_j = \sum_{i=0}^{k} i\, [A_{(iowe)}]_{i,j}$ is the total weight of information bits in all codewords of weight $j$, and $P(j)$ is the probability of decoding the codeword of Hamming weight $j$. The expression (2) implicitly assumes that the probability of decoding a particular codeword depends only on its Hamming weight. Thus, the expression (2) is also valid for binary antipodal signalling and perfectly quantised soft decisions (the Euclidean distance is directly proportional to the Hamming distance). Note that $\sum_{j=d_{\min}}^{n} B_j = k\, 2^{k-1}$. In general, the expression (2) is difficult to evaluate. However, for the class of perfect codes, $P(j)$ equals the probability that the codeword of weight $j$ is the closest to the received word. In particular, $P(j) = \sum_{d=0}^{\lfloor (d_{\min}-1)/2 \rfloor} P^j_d$ where $P^j_d$ is the probability that the received word is at Hamming distance $d$ from a codeword of weight $j$. Assuming $i$ out of the $j$ one-bits and $(d-i)$ out of the $(n-j)$ zero-bits are inverted, giving a word at Hamming distance $d$ from the original weight-$j$ word, then

$$P^j_d = \sum_{i=0}^{d} p^{j+d-2i}\, q^{n-j+2i-d} \binom{j}{i} \binom{n-j}{d-i}$$

where $p = Q\bigl(\sqrt{2 \tfrac{k}{n} \gamma_b}\bigr)$ is the probability of a bit error, $q = 1 - p$, and $\gamma_b$ is the SNR per uncoded antipodal symbol [9, p. 244], [11]. For cyclic codes, the coefficients, $B_j$, can also be computed from the output weights using [12, Equation (13)], which can be used to obtain the exact probability of decoded bit error [11, 13]. A tight upper bound on the probability of decoded bit error for binary perfect codes is given in Reference [14].

The coding gain is defined as the decrease in the required average energy per transmitted bit for the coded system to maintain a given average probability of decoded bit error [7, p. 456], [9, p. 11]. The coding gains of selected Hamming codes are shown in Figure 1. Observe that the coding gain attains a minimum at a particular small value of SNR for $m > 5$, and that the minimum coding gain is always negative. Interestingly, if $R_m$ denotes the rate of the Hamming code $\mathcal{H}_m$, then the asymptotic coding gain at low SNR is approximately computed as $g_0 = R_m - 1$ for $m \ge 3$. It is commonly believed (or perhaps loosely stated) that coding gain monotonically increases with SNR. For example, Jacobs [15] expects the coding gain to increase with SNR, while Benedetto and Biglieri [7, p. 457] state explicitly that the coding gain always increases with SNR. Jovanovic [16] conjectured that the coding gain might not be monotonic in SNR; however, only a trivial example of repetition coding is given, where the coding gain monotonically decreases with increasing SNR. Figure 1 provides counterexamples proving that the coding gain is not necessarily a monotonically increasing function of SNR, although the gain is monotonically increasing with SNR in regions of positive coding gain. Note that Jovanovic [16] also plots the coding gain versus SNR for the Hamming codes $\mathcal{H}_3$ and $\mathcal{H}_4$, but for these codes and the SNR region chosen the minima cannot be observed.

Figure 1. Coding gain of some Hamming codes versus SNR per uncoded bit, $\gamma_b$.
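The behaviour of the coding gain can be probed numerically. The sketch below equates coded and uncoded bit error probabilities by bisection and reports the gain in dB; the coefficients $B_3 = 12$, $B_4 = 16$, $B_7 = 4$ of the $[7, 4, 3]$ code are hard-coded, and the search bounds are assumptions. Even for this small code, the gain is positive at low target error rates and negative at targets near 0.5 (low SNR), consistent with the negative low-SNR gain discussed above:

```python
from math import comb, erfc, log10, sqrt

n, k, dmin = 7, 4, 3
B = {3: 12, 4: 16, 7: 4}       # information-bit weights B_j of [7, 4, 3]

def Q(x):
    return 0.5 * erfc(x / sqrt(2.0))

def pb_uncoded(g):
    return Q(sqrt(2.0 * g))

def pb_coded(g):
    """Exact P_b(e) of (2) for the [7, 4, 3] code, hard decisions."""
    p = Q(sqrt(2.0 * k / n * g))            # crossover probability
    q = 1.0 - p
    return sum(Bj * sum(p ** (j + d - 2 * i) * q ** (n - j + 2 * i - d)
                        * comb(j, i) * comb(n - j, d - i)
                        for d in range((dmin - 1) // 2 + 1)
                        for i in range(d + 1))
               for j, Bj in B.items()) / k

def snr_for(pb, f, lo=1e-9, hi=100.0):
    """Bisection: SNR per information bit at which f(gamma) = pb
    (f is decreasing in gamma; bounds are assumed wide enough)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > pb:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def coding_gain_db(pb):
    return 10.0 * log10(snr_for(pb, pb_uncoded) / snr_for(pb, pb_coded))
```

For example, `coding_gain_db(1e-5)` is positive (a fraction of a dB), while `coding_gain_db(0.4)` is negative.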

5. CONCLUSIONS

We showed that the IOWE of binary Hamming codes can be efficiently obtained using their recursive structure and the fact that the IOWE of serially concatenated codes is given by 2D convolution. Knowledge of the IOWE can be used to compute the exact probability of decoded bit error of binary Hamming codes over AWGN channels using hard-decision demodulation. The numerically computed coding gains of Hamming codes revealed the surprising fact that the coding gain is minimum for a particular small value of SNR, and that the minimum coding gain is always negative. This fact clarifies statements in the literature.

APPENDIX: DERIVATION OF THE IOWE

The IOWE matrix, $A^{\mathcal{H}_{m+1}}_{(iowe)}$, of the $\mathcal{H}_{m+1} = [n_2, k_2, 3]$ code given $A^{\mathcal{H}_m}_{(iowe)}$ of the $\mathcal{H}_m = [n_1, k_1, 3]$ code is derived. Denote the $i$-th column and the $j$-th row of the matrix, $A$, by $\mathrm{col}(A)_i$ and $\mathrm{row}(A)_j$, respectively, and the diagonal matrix with elements given by the vector, $\mathbf{a}$, as $\mathrm{diag}(\mathbf{a})$. Let the vector, $T_i = \bigl(\binom{i}{0}, \binom{i}{1}, \ldots, \binom{i}{i}\bigr)$, be the $i$-th row of the Pascal triangle. Let $\mathbf{x}$ be an arbitrary binary vector of length


$n_1$ bits and weight $w_x$. Given $\mathbf{x}$, $\Theta(\mathbf{x}) = \{\mathbf{x} \circ \mathbf{y},\ \mathbf{y} \in \mathbb{Z}^{n_1}_2\}$ is a $[2n_1, n_1, 2]$ code having the IOWE matrix

$$\mathrm{col}\bigl(A_{(iowe)}(w_x)\bigr)_j = \begin{cases} \mathrm{col}\bigl(\bigl[T^T_{w_x} \otimes \mathrm{diag}(T_{n_1-w_x})\bigr]\bigr)_{(j-w_x)/2} & \text{if } j = w_x, w_x+2, \ldots, 2n_1 - w_x \\ 0_{(n_1+1,1)} & \text{otherwise} \end{cases}$$

for $j = 0, 1, \ldots, 2n_1$. Depending on the parity $\pi(\mathbf{y})$ being even or odd, we can write $A_{(iowe)}(w_x) = A^e_{(iowe)}(w_x) + A^o_{(iowe)}(w_x)$ where

$$\mathrm{row}\bigl(A^e_{(iowe)}(w_x)\bigr)_i = \begin{cases} \mathrm{row}\bigl(A_{(iowe)}(w_x)\bigr)_i & i \text{ even} \\ 0_{(1,2n_1+1)} & \text{otherwise} \end{cases}$$

$$\mathrm{row}\bigl(A^o_{(iowe)}(w_x)\bigr)_i = \begin{cases} \mathrm{row}\bigl(A_{(iowe)}(w_x)\bigr)_i & i \text{ odd} \\ 0_{(1,2n_1+1)} & \text{otherwise} \end{cases}$$

for $i = 0, 1, \ldots, n_1$. Let $\mathbf{x} = (\mathbf{x}_1, \mathbf{x}_2)$ be the codeword of $\mathcal{H}_m = [n_1, k_1, 3]$ where $\mathbf{x}_1$ are the information bits of weight, $w_{x_1}$, and $\mathbf{x}_2$ are the corresponding parity bits of weight, $w_{x_2}$. Using Theorem 1, the codewords of $\mathcal{H}_{m+1}$ are obtained by concatenation of the two codes, $\Theta_1(\mathbf{x}_1) = \{(\mathbf{x}_1 \circ \mathbf{y}_1),\ \mathbf{y}_1 \in \mathbb{Z}^{k_1}_2\}$ and $\Theta_2(\mathbf{x}_2) = \{(\mathbf{x}_2 \circ \mathbf{y}_2),\ \mathbf{y}_2 \in \mathbb{Z}^{n_1-k_1}_2\}$, and appending the parity bit, $\pi(\mathbf{y}_1) + \pi(\mathbf{y}_2)$. We express the corresponding IOWE matrices of the codes $\Theta_1(\mathbf{x}_1)$ and $\Theta_2(\mathbf{x}_2)$ as $A_{(iowe)}(w_{x_1}) = A^{e\Theta_1}_{(iowe)}(w_{x_1}) + A^{o\Theta_1}_{(iowe)}(w_{x_1})$ and $A_{(iowe)}(w_{x_2}) = A^{e\Theta_2}_{(iowe)}(w_{x_2}) + A^{o\Theta_2}_{(iowe)}(w_{x_2})$. Then, using Claim 1, and summing over all input–output weights of the Hamming code $\mathcal{H}_m$, the IOWE matrix of the $\mathcal{H}_{m+1}$ code is

$$\bigl[A^{\mathcal{H}_{m+1}}_{(iowe)}\bigr]_{o,w} = \sum_{u=0}^{k_1} \sum_{v=0}^{n_1} \bigl[A^{\mathcal{H}_m}_{(iowe)}\bigr]_{u,v} \bigl[A^{(\Theta_1,\Theta_2,\pi)}_{(iowe)}(u, v)\bigr]_{o,w}$$

for $o = 0, 1, \ldots, k_2$ and $w = 0, 1, \ldots, n_2$, where

$$A^{(\Theta_1,\Theta_2,\pi)}_{(iowe)}(u, v) = \bigl\{ \mathrm{diag}\bigl(\mathbf{1}_{(1,k_1+n_1+1)} A^{e\Theta_1}_{(iowe)}(u)\bigr) \otimes A^{e\Theta_2}_{(iowe)}(v-u) + \mathrm{diag}\bigl(\mathbf{1}_{(1,k_1+n_1+1)} A^{o\Theta_1}_{(iowe)}(u)\bigr) \otimes A^{o\Theta_2}_{(iowe)}(v-u) \bigr\} \otimes (1, 0) + \bigl\{ \mathrm{diag}\bigl(\mathbf{1}_{(1,k_1+n_1+1)} A^{e\Theta_1}_{(iowe)}(u)\bigr) \otimes A^{o\Theta_2}_{(iowe)}(v-u) + \mathrm{diag}\bigl(\mathbf{1}_{(1,k_1+n_1+1)} A^{o\Theta_1}_{(iowe)}(u)\bigr) \otimes A^{e\Theta_2}_{(iowe)}(v-u) \bigr\} \otimes (0, 1).$$

The computationally efficient expression (1) can be obtained by further manipulations and using Claim 2 and Corollary 1.
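The IOWE of the code $\Theta(\mathbf{x})$ can be cross-checked by brute force. Since the entries of $T^T_{w_x} \otimes \mathrm{diag}(T_{n_1-w_x})$ are products of binomial coefficients, an equivalent entrywise form is $[A_{(iowe)}(w_x)]_{o,w} = \binom{w_x}{o-b} \binom{n_1-w_x}{b}$ with $b = (w - w_x)/2$ (a derivation under the stated assumptions, not taken verbatim from the letter), which can be verified directly:

```python
from itertools import product
from math import comb

def iowe_theta(x):
    """Brute-force IOWE of the [2*n1, n1, 2] code {x o y = (y, x + y)}:
    input weight o = weight(y), output weight w = weight(y) + weight(x+y)."""
    n1 = len(x)
    A = [[0] * (2 * n1 + 1) for _ in range(n1 + 1)]
    for y in product((0, 1), repeat=n1):
        o = sum(y)
        w = o + sum((a + b) % 2 for a, b in zip(x, y))
        A[o][w] += 1
    return A

x = (1, 0, 1, 0, 0, 1, 0)        # example vector: n1 = 7, weight wx = 3
n1, wx = len(x), sum(x)
A = iowe_theta(x)
# Entrywise check: A[o][w] = C(wx, o - b) * C(n1 - wx, b), b = (w - wx)/2.
ok = all(
    A[o][w] == (comb(wx, o - (w - wx) // 2) * comb(n1 - wx, (w - wx) // 2)
                if (w - wx) % 2 == 0 and wx <= w <= 2 * n1 - wx
                   and 0 <= (w - wx) // 2 <= o else 0)
    for o in range(n1 + 1) for w in range(2 * n1 + 1)
)
print(ok)   # True
```

The check passes for any choice of $\mathbf{x}$, since choosing $a$ ones of $\mathbf{y}$ on the support of $\mathbf{x}$ and $b$ ones off it gives $o = a + b$ and $w = w_x + 2b$.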

ACKNOWLEDGEMENT

The authors are grateful for suggestions from the anonymous reviewers. This work was supported in part by an Alberta Ingenuity Studentship and by an iCORE Graduate Student Scholarship, as well as the iCORE Research Chair in Broadband Wireless Communications.

REFERENCES

1. Lu H-F, Kumar PV, Yang E-H. On the input–output weight enumerators of product accumulate codes. IEEE Communications Letters 2004; 8(8):520–522.

2. Lin S, Costello DJ. Error Control Coding: Fundamentals and Applications. Prentice-Hall, Englewood Cliffs, NJ, USA, 1983.

3. Plotkin M. Binary codes with specified minimum distance. IRE Transactions on Information Theory 1960; 6(4):445–450.

4. Sloane NJA, Whitehead DS. New family of single-error correcting codes. IEEE Transactions on Information Theory 1970; IT-16(6):717–719.

5. Liu CL, Ong BG, Ruth GR. A construction scheme for linear and non-linear codes. Discrete Mathematics 1973; 4:171–184.

6. MacWilliams FJ, Sloane NJA. The Theory of Error-Correcting Codes. North-Holland, Amsterdam; New York, 1977.

7. Benedetto S, Biglieri E. Principles of Digital Transmission with Wireless Applications (2nd edn). Kluwer Academic, New York, 1999.

8. Benedetto S, Montorsi G. Unveiling turbo codes: some results on parallel concatenated coding schemes. IEEE Transactions on Information Theory 1996; 42(2):409–428.

9. Wicker SB. Error Control Systems for Digital Communication and Storage. Prentice Hall, Englewood Cliffs, NJ, USA, 1995.

10. Simon MK, Hinedi SM, Lindsey WC. Digital Communication Techniques: Signal Design and Detection. Prentice Hall, Englewood Cliffs, NJ, USA, 1995.

11. Apple GG, Wintz PA. Exact determination of probability of bit error for perfect single error correcting codes. In Proceedings of the Hawaii International Conference on System Sciences, 1970, pp. 922–924.

12. Torrieri D. Information-bit, information-symbol, and decoded-symbol error rates for linear block codes. IEEE Transactions on Communications 1988; 36(5):613–617.

13. van Lint JH. Coding Theory, Lecture Notes in Mathematics. Springer-Verlag, Berlin, Germany, 1971.

14. Vitthaladevuni PK, Alouini M-S. A tight upper bound on the BER of linear systematic block codes. IEEE Communications Letters 2004; 8(5):299–301.

15. Jacobs IM. Practical applications of coding. IEEE Transactions on Information Theory 1974; IT-20(3):305–310.

16. Jovanovic VM, Budisin SZ. On the coding gain of linear binary block codes. IEEE Transactions on Communications 1984; COM-32(5):635–638.


AUTHORS’ BIOGRAPHIES

Pavel Loskot received the B.Sc. degree in biomedical engineering (honours) in 1996, and the M.Sc. degree in radioelectronics (honours) in 1998, both from the Czech Technical University in Prague, Prague, Czech Republic. He is currently working toward the Ph.D. degree with the iCORE Wireless Communications Laboratory, University of Alberta, Edmonton, AB, Canada. From 1999 to 2001, he was a Research Scientist in the Centre for Wireless Communications, University of Oulu, Oulu, Finland. His current research is focused on the development of efficient performance-evaluation techniques for realistic performance predictions of communication systems, and on solving problems in the area of communication theory in general. Mr. Loskot is a recipient of the Alberta Ingenuity Studentship and the iCORE Graduate Student Scholarship.

Norman C. Beaulieu received the B.A.Sc. (honours), M.A.Sc., and Ph.D. degrees in electrical engineering from the University of British Columbia, Vancouver, BC, Canada, in 1980, 1983 and 1986, respectively. In September 2000, he became the iCORE Research Chair in Broadband Wireless Communications with the University of Alberta, Edmonton, AB, Canada, and, in January 2001, the Canada Research Chair in Broadband Wireless Communications. His current research interests include broadband digital communications systems, ultrawide-bandwidth systems, fading channel modelling and simulation, diversity systems, interference prediction and cancellation, importance sampling and semi-analytical methods, and space-time coding. Dr. Beaulieu is a Fellow of the IEEE, of the Engineering Institute of Canada, and of the Royal Society of Canada. He received the Natural Science and Engineering Research Council (NSERC) E. W. R. Steacie Memorial Fellowship in 1999, the K. Y. Lo Medal of the Engineering Institute of Canada in 2004, the Thomas W. Eadie Medal of the Royal Society of Canada in 2005, and the University of British Columbia Special University Prize in Applied Science in 1980.
