
Page 1: 2007 IEEE International Symposium on Information Theory, Nice (2007.06.24-2007.06.29) - Finding the Closest Lattice Point by Iterative Slicing

ISIT2007, Nice, France, June 24 - June 29, 2007

Finding the Closest Lattice Point by Iterative Slicing

Naftali Sommer, Meir Feder and Ofir Shalvi
Department of Electrical Engineering - Systems
Tel-Aviv University, Tel-Aviv, Israel
Email: [email protected]

Abstract - Most of the existing methods to solve the closest lattice point problem are based on an efficient search of the lattice points. In this paper, a novel alternative approach is suggested where the closest point to a given vector is found by calculating which Voronoi cell contains this vector in an iterative manner. Each iteration is made of simple "slicing" operations, using a list of the Voronoi relevant vectors that define the basic Voronoi cell of the lattice. The algorithm is guaranteed to converge to the closest lattice point in a finite number of steps. The method is suitable, for example, for decoding of multi-input multi-output (MIMO) communication problems. The average computational complexity of the proposed method is comparable to that of the efficient variants of the sphere decoder, but its computational variability is smaller.

I. INTRODUCTION

A real lattice of dimension n is defined as the set of all linear combinations of n real basis vectors, where the coefficients of the linear combination are restricted to be integers:

Λ(G) = {Gu : u ∈ ℤⁿ}.

G is the m × n generator matrix of the lattice, whose columns are the basis vectors, which are assumed to be linearly independent over ℝᵐ. The closest lattice point problem is the problem of finding, for a given lattice Λ and a given input point x ∈ ℝᵐ, a vector c ∈ Λ such that

‖x − c‖ ≤ ‖x − v‖ for all v ∈ Λ,

where ‖·‖ denotes the Euclidean norm.

The closest lattice point problem has applications in various fields, including number theory, cryptography and communication theory [1]. Communication theory applications include quantization, MIMO decoding and lattice coding for the Gaussian channel. Examples of such coding schemes are low-density lattice codes (LDLC), which are based on lattices whose generator matrix has a sparse inverse [2], and signal codes, which are based on lattices with a Toeplitz generator matrix [3]. The computational complexity of the general closest point problem is known to be exponential in the lattice dimension n [1]. Most of the existing methods to solve the problem try to search the lattice points in an efficient manner. The commonly used sphere decoder finds all the lattice points that lie within a hypersphere of a given radius around the input point. A thorough explanation of the sphere decoder and its variants can be found in [1]. Sequential decoding techniques can be used to approximate the sphere decoder with significantly reduced complexity [4][5].
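As a concrete toy illustration of the problem statement (not taken from the paper), the closest point of a low-dimensional lattice can be found by brute-force enumeration of integer coefficient vectors in a box around the least-squares solution. The function name and search radius below are illustrative assumptions:

```python
import itertools
import numpy as np

def closest_point_brute_force(G, x, radius=3):
    """Brute-force closest lattice point: try all integer coefficient
    vectors u within +-radius of the rounded least-squares solution
    and keep the one minimizing ||x - G u||."""
    u0 = np.round(np.linalg.lstsq(G, x, rcond=None)[0]).astype(int)
    n = G.shape[1]
    best_c, best_d = None, np.inf
    for delta in itertools.product(range(-radius, radius + 1), repeat=n):
        u = u0 + np.array(delta)
        c = G @ u
        d = np.linalg.norm(x - c)
        if d < best_d:
            best_c, best_d = c, d
    return best_c

# Example: the square lattice Z^2 (G = I), input point (0.7, -1.2)
G = np.eye(2)
print(closest_point_brute_force(G, np.array([0.7, -1.2])))  # -> [ 1. -1.]
```

This exhaustive search is exponential in n, which is exactly why the efficient algorithms discussed in this paper are needed.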

In this paper we shall only address the problem of finding the exact solution for general lattices, and present a new algorithm, called "the lattice slicer". This algorithm breaks the complex n-dimensional problem into a series of simple one-dimensional problems. In each such problem, the Euclidean space is divided into parallel layers, and the algorithm has to decide to which layer the input point belongs. This basic operation is referred to as "slicing". Using the results of the slicing steps, the algorithm can find the Voronoi cell of the lattice that contains the input point.

The computational complexity of the slicer algorithm is

comparable to that of the sphere decoder, but it is significantly less sensitive to the eigenvalue spread of the lattice generator matrix G. This is an important advantage in several cases (such as MIMO decoding) where the coefficients of G are random.

II. BASIC DEFINITIONS AND NOTATIONS

The dot product of two vectors a, b ∈ ℝᵐ is defined as a · b = Σ_{k=0}^{m−1} aₖbₖ. The Euclidean norm of a ∈ ℝᵐ therefore equals ‖a‖ = √(a · a).

The Voronoi region of a lattice point is the set of all vectors in ℝᵐ for which this point is the closest lattice point, namely

Q(Λ, c) = {x ∈ ℝᵐ : ‖x − c‖ ≤ ‖x − v‖, ∀v ∈ Λ}

where c ∈ Λ. It is known [6] that the Voronoi regions Q(Λ, c) are convex polytopes, that they are symmetrical with respect to reflection in c, and that they are translations of Q(Λ, 0), where 0 is the origin of ℝᵐ.

The Voronoi-relevant vectors are defined as the minimal set N(Λ) ⊂ Λ for which

Q(Λ, 0) = {x ∈ ℝᵐ : ‖x‖ ≤ ‖x − v‖ ∀v ∈ N(Λ)}.

A lattice of dimension n can have at most 2^{n+1} − 2 Voronoi relevant vectors. This number is attained with probability 1 by a lattice whose basis is chosen at random from a continuous distribution (see [1] and references therein).

Due to the symmetry of the Voronoi region, if v is a Voronoi-relevant vector then −v is also a Voronoi-relevant vector. It is therefore convenient to define a "compressed" set that contains only a single representative from each pair {v, −v}. The one-sided set of Voronoi-relevant vectors is thus defined as the minimal set N′(Λ) ⊂ Λ for which

Q(Λ, 0) = {x ∈ ℝᵐ : ‖x‖ ≤ ‖x − v‖, ‖x‖ ≤ ‖x + v‖ ∀v ∈ N′(Λ)}.

Evidently, for a lattice of dimension n, the set of one-sided Voronoi-relevant vectors can contain at most 2ⁿ − 1 vectors.

1-4244-1429-6/07/$25.00 ©2007 IEEE


A facet is an (m − 1)-dimensional face of an m-dimensional polytope. Each Voronoi-relevant vector v defines a facet of Q(Λ, 0). This facet is perpendicular to the Voronoi-relevant vector v and intersects it at its midpoint ½v.

III. THE LATTICE SLICER

A. Preliminaries

Before presenting the detailed algorithm description, we need to prove some basic results. The following lemma gives an alternative definition of the one-sided set of Voronoi relevant vectors, where the conditions involve inner products instead of distances.

Lemma 1: Let Λ be a lattice in ℝᵐ. Denote the Voronoi region of the origin by Q(Λ, 0). The one-sided set of Voronoi relevant vectors can be alternatively defined as the minimal set N′(Λ) ⊂ Λ for which

Q(Λ, 0) = {x ∈ ℝᵐ : |x · v| ≤ ½‖v‖² ∀v ∈ N′(Λ)}.

Lemma 1 can be given a simple geometric interpretation. The condition |x · v| ≤ ½‖v‖² means that the length of the projection of x along the direction of v is at most ½‖v‖. This means that x is located between the two hyperplanes that are perpendicular to v and intersect it at ½v and −½v, respectively. Therefore, if this condition is satisfied for all the Voronoi relevant vectors, then x is on the "correct" side of all the facets that define the Voronoi region of the origin and is therefore inside this Voronoi region.

In the sequel, we shall use rounding operations, where the basic rounding operation is defined as follows.

Definition 1: For every r ∈ ℝ, round(r) is defined as the closest integer to r, i.e. the integer k ∈ ℤ that minimizes |r − k|. If r is the midpoint between two consecutive integers, then the integer with the smaller absolute value is chosen.

This definition is slightly different from the commonly used rounding operation, where a midpoint between consecutive integers is rounded to the integer with the larger absolute value. The need for this modification will be explained later.

Lemma 1 imposed a set of conditions on a vector that

lies inside the Voronoi region of the origin. The next lemma suggests a way to modify a given vector if one of these conditions is not fulfilled.

Lemma 2: Let e, v ∈ ℝᵐ, and assume that |e · v| > ½‖v‖². Define ē = e − kv, where k = round(e · v / ‖v‖²) and where round(·) is as in Definition 1. Then, |ē · v| ≤ ½‖v‖².

Lemma 2 has a geometric interpretation: it can be seen that if k were calculated without the rounding operation, kv would be the projection of e on v. Subtracting this projection from e would result in a vector which is orthogonal to v, thus minimizing ē · v. However, due to the rounding operation, ē will only be "approximately orthogonal" to v.

The geometric interpretation of Lemma 2 is illustrated

in Figure 1. The figure shows the hyperplanes (in this case, straight lines) that intersect the Voronoi relevant vectors v₁, −v₁ at ½v₁, −½v₁, respectively. The vector e lies outside the region that is bounded by the two straight lines, and the stars mark the possible locations for ē = e − kv₁. The value of k according to Lemma 2 is k = 2, and the resulting ē is marked in the figure. It can be seen that this is the only value of k that puts ē inside the desired region. Finding k is actually a slicing operation: the plane is divided into slices (or layers) whose boundaries are straight lines. These lines are orthogonal to the Voronoi relevant vector v₁, and are spaced ‖v₁‖ apart from each other. The integer k actually denotes to which of these slices the vector e belongs.

Fig. 1. Example of the modification step of Lemma 2.
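The slicing step of Lemma 2, together with the midpoint rounding of Definition 1, can be sketched in Python as follows (an illustrative sketch with hypothetical function names, not the authors' implementation):

```python
import math
import numpy as np

def round_mid_to_zero(r):
    """Definition 1: nearest integer to r; a midpoint between two
    consecutive integers is rounded toward the one of smaller magnitude."""
    f = math.floor(r)
    if r - f < 0.5:
        return f
    if r - f > 0.5:
        return f + 1
    return f if abs(f) < abs(f + 1) else f + 1  # midpoint case

def slice_step(e, v):
    """Lemma 2: choose the integer k so that the new error e - k v
    satisfies |(e - k v) . v| <= ||v||^2 / 2."""
    k = round_mid_to_zero(np.dot(e, v) / np.dot(v, v))
    return e - k * v, k

# e = (3, 1), v = (2, 0): e.v / ||v||^2 = 1.5, a midpoint, so k = 1
e_new, k = slice_step(np.array([3.0, 1.0]), np.array([2.0, 0.0]))
print(k, e_new)  # -> 1 [1. 1.]
```

Note how the midpoint 1.5 rounds down to 1 rather than up to 2, matching Definition 1.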

Finally, the following claim suggests a condition on the norms of two vectors that guarantees that their dot product satisfies the condition of Lemma 1.

Claim 1: Let x, v ∈ ℝᵐ. If ‖x‖ ≤ ½‖v‖, then |x · v| ≤ ½‖v‖².
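Claim 1 follows directly from the Cauchy-Schwarz inequality (a one-line justification, added here for completeness):

```latex
|x \cdot v| \;\le\; \|x\|\,\|v\| \;\le\; \tfrac{1}{2}\|v\|\cdot\|v\| \;=\; \tfrac{1}{2}\|v\|^2 .
```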

B. Algorithm Description

Given a vector x ∈ ℝᵐ and a lattice Λ, we would like to find the closest lattice point c to x in an iterative manner. We initialize c = 0. At each iteration, we check the error vector e = c − x. If this error vector is inside the Voronoi region of the origin, then we know that c is the closest lattice point to x. Now, from Lemma 1 we see that in order for e to be inside the Voronoi region of the origin, the absolute value of the dot product of this error vector with each one-sided Voronoi relevant vector should be smaller than half the squared norm of this Voronoi relevant vector. On the other hand, we see from Lemma 2 that if this condition is not fulfilled for a specific Voronoi relevant vector, then it can be fulfilled by modifying e, where this modification is equivalent to adding a lattice vector to c. Then, we can simply go over the list of the one-sided Voronoi relevant vectors (assume that there are M such vectors). For each vector, if e does not satisfy the requirement of Lemma 1, then c is modified according to Lemma 2 such that the condition will be satisfied.

Using Claim 1, there is no need to check the condition for Voronoi relevant vectors whose norm is more than twice the norm of the error. The list of Voronoi relevant vectors is assumed to be sorted by ascending norm, so the algorithm can skip to the end of the list if it reaches a vector that is longer than twice the current error vector.

After making the condition satisfied for a specific Voronoi relevant vector, subsequent modifications due to other vectors may cause this condition to be unfulfilled again. Therefore,


Fig. 2. Flowchart of the lattice slicer algorithm.

after finishing all the M Voronoi relevant vectors, the algorithm starts again from the first vector, and so on. Every check against a Voronoi relevant vector will be referred to as a "step", and every pass over the whole list of M vectors will be referred to as an "iteration". If no change was required for M consecutive steps, then the error has finally reached the Voronoi region of the origin and the algorithm can be terminated.

The flowchart of the suggested algorithm is shown in Figure 2. The input to the algorithm is the vector x ∈ ℝᵐ and the list of M Voronoi relevant vectors of the lattice Λ, denoted by v₀, v₁, ..., v_{M−1}. It is assumed that this list was prepared at the preprocessing stage of the algorithm (see Section V). It is also assumed that the list is sorted by ascending Euclidean norm, i.e. ‖v₀‖ ≤ ‖v₁‖ ≤ ... ≤ ‖v_{M−1}‖. The output of the algorithm is the vector c that holds the closest lattice point to x. The algorithm uses the counter n, which points to the current Voronoi relevant vector, and the counter cnt, which counts how many consecutive steps have passed without changing the vector c. If the norm of the Voronoi relevant vector is larger than twice the norm of the current error vector, the algorithm jumps to the end of the list by updating n and cnt accordingly. If cnt reaches M, the error vector satisfies the conditions of Lemma 1, so the algorithm can be terminated.
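The flowchart can be sketched as a short loop (a hedged reconstruction from the description above; variable and function names are illustrative, and the Claim 1 skip is implemented by crediting the remaining steps as unchanged):

```python
import math
import numpy as np

def round_mid_to_zero(r):
    """Definition 1 rounding: midpoints go toward zero."""
    f = math.floor(r)
    if r - f < 0.5:
        return f
    if r - f > 0.5:
        return f + 1
    return f if abs(f) < abs(f + 1) else f + 1

def lattice_slicer(x, relevant):
    """Iterative slicer. `relevant` is the one-sided set of M Voronoi
    relevant vectors, sorted by ascending norm. Returns the closest
    lattice point c to x."""
    c = np.zeros_like(x, dtype=float)
    e = c - x                      # error vector e = c - x
    M = len(relevant)
    i, cnt = 0, 0                  # vector index, unchanged-step counter
    while cnt < M:
        v = relevant[i]
        # Claim 1 test: if ||v|| > 2||e||, this vector and all longer
        # ones already satisfy Lemma 1 -- credit the rest of the list.
        if np.dot(v, v) > 4 * np.dot(e, e):
            cnt += M - i
            i = 0
            continue
        k = round_mid_to_zero(np.dot(e, v) / np.dot(v, v))
        if k != 0:
            c = c - k * v          # Lemma 2 step: add a lattice vector to c
            e = c - x
            cnt = 0
        else:
            cnt += 1
        i = (i + 1) % M
    return c

# The cubic lattice Z^2 has one-sided relevant vectors (1,0) and (0,1)
print(lattice_slicer(np.array([2.3, -0.7]),
                     [np.array([1.0, 0.0]), np.array([0.0, 1.0])]))  # -> [ 2. -1.]
```

For Zⁿ the Voronoi region is the unit cube, so the one-sided relevant vectors are simply the standard basis vectors, which makes the example easy to check by hand.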

Note that the calculation of k in Lemma 2 uses the round(·) operation of Definition 1, which takes special care of midpoints between integers. This is needed for singular situations where e lies exactly on a facet of the Voronoi region of the origin. In such a case, e · v / ‖v‖² = ±½ for a specific Voronoi relevant vector v. Conventional rounding would round ½ to 1 and −½ to −1, which may result in an endless "ping-pong" between two facets. The modified rounding prevents

Fig. 3. Convergence of the slicer algorithm. Every facet of the Voronoi region of the origin is marked with the name of the Voronoi relevant vector that corresponds to it.

this situation (see the convergence analysis in Section IV).

Finally, it turns out that the computational complexity can be further reduced by initializing c to a better starting point, such as the rounded least squares solution c = G · round((G'G)⁻¹G'x), where G is the generator matrix of the lattice and G' denotes the transpose of G. Such initialization causes the error norm to start with a smaller value, so the algorithm can terminate earlier.

The operation of the algorithm is illustrated in Figure 3.

The initial error is outside the Voronoi region of the origin. At the first step, the error is brought between the two facets that are perpendicular to v₁. At the second step, the resulting error is brought between the two facets that are perpendicular to v₂. Note that it is no longer between the two facets that are perpendicular to v₁. The process goes on with v₃. Then, the algorithm starts again with v₁. Continuing with v₂, the error is finally brought inside the Voronoi region of the origin and the algorithm can terminate.

The most computationally intensive part of the algorithm

is the calculation of k, which is the only computation that is performed at every step and involves vector operations. The calculation of k involves a single dot product. Assuming an n-dimensional lattice in ℝᵐ, this is an O(m) operation. Since there are O(2ⁿ) Voronoi relevant vectors (see Section II), the complexity of a full iteration is therefore O(m·2ⁿ). This is the worst-case complexity, but the test of Claim 1 results in a significant reduction in the average complexity. The required storage is also O(m·2ⁿ), since all the Voronoi relevant vectors must be prepared in advance and stored during the operation of the algorithm.

IV. CONVERGENCE

In this section we shall show that the lattice slicer algorithm always converges to the closest lattice point within a finite number of steps. We shall first show that the norm of the error vector strictly decreases whenever c is changed.

Claim 2: Let e, v ∈ ℝᵐ. Define ē = e − kv, where k = round(e · v / ‖v‖²) and round(·) is as in Definition 1. Then, k ≠ 0 implies ‖ē‖ < ‖e‖.

We can now prove the main theorem of this section.


Theorem 1: Given a lattice Λ and x ∈ ℝᵐ, the lattice slicer algorithm of Figure 2 always converges to the closest lattice point of x in a finite number of steps.

Proof: The algorithm starts with c = 0 and updates c along the iterations. Claim 2 shows that whenever c is updated with k ≠ 0, the norm of the error vector e = c − x strictly decreases. As a result, whenever c is changed, it cannot assume the same lattice point again. Now, the initial error norm is ‖e‖ = ‖x‖, so the error norm is upper bounded by ‖x‖ for all the iterations of the algorithm. This means that c always lies within a hypersphere of radius ‖x‖ around x. Having a finite radius, this hypersphere contains a finite number of lattice points. As long as the algorithm does not terminate, it must update c with k ≠ 0 at least once every M steps. Such an update changes c from one lattice point to another. Since the algorithm cannot return to the same lattice point twice, and it can only visit a finite number of points, it must terminate. According to Lemma 1, when the algorithm terminates, the error vector e is inside the Voronoi region of the origin, so c must be the closest lattice point. This completes the proof of the theorem. ∎

The lattice slicer algorithm can also be interpreted as a

coordinate search solution for an optimization problem. It can be easily shown that choosing k according to Claim 2 actually minimizes the error norm over all possible integer values of k. Therefore, each step of the lattice slicer algorithm can be regarded as a minimization of the error norm, where the minimization is done along the direction of a Voronoi relevant vector (with an integer coefficient). The Voronoi relevant vectors define a "bank of directions", and the algorithm tries to minimize the error norm along each direction. It terminates when it can no longer minimize along any of the directions. It can be easily shown that the set of Voronoi relevant vectors is the minimal set that guarantees convergence to the closest lattice point.

V. FINDING THE VORONOI RELEVANT VECTORS

The lattice slicer algorithm uses a list of the Voronoi relevant vectors of the lattice, which should be computed at the pre-processing stage. An efficient algorithm for the computation of the Voronoi relevant vectors of a lattice Λ with generator matrix G is the RelevantVectors algorithm, proposed in [1]. First, the set of midpoints M(G) is defined as

M(G) = {s = Gz : z ∈ {0, 1/2}ⁿ \ {0}}.  (1)

For each s ∈ M(G), the algorithm finds all the closest lattice points to s, using the AllClosestPoints algorithm, which is an enhanced sphere decoder. Then, if there are exactly two closest lattice points x₁, x₂ (i.e. two points at the same distance from s, such that this distance is strictly smaller than the distance of any other lattice point from s), then 2(x₁ − s) and 2(x₂ − s) are Voronoi relevant vectors of Λ. If the one-sided set of Voronoi relevant vectors is required, it suffices to take only one of these two lattice points. See [1] for a proof that this algorithm finds all the Voronoi relevant vectors.
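A sketch of this procedure, substituting a naive box enumeration for the AllClosestPoints sphere decoder of [1] (the `radius` bound is an illustrative assumption, not part of the original algorithm):

```python
import itertools
import numpy as np

def relevant_vectors(G, radius=3):
    """Sketch of RelevantVectors [1]: for each midpoint s = G z with
    z in {0, 1/2}^n, z != 0, find the closest lattice points to s
    (here by naive box enumeration instead of AllClosestPoints).
    If exactly two points tie for the minimum distance, 2(x1 - s)
    and 2(x2 - s) are Voronoi relevant vectors."""
    n = G.shape[1]
    points = [G @ np.array(u, dtype=float) for u in
              itertools.product(range(-radius, radius + 1), repeat=n)]
    found = []
    for z in itertools.product([0.0, 0.5], repeat=n):
        if not any(z):
            continue                      # skip z = 0
        s = G @ np.array(z)
        dists = sorted((np.linalg.norm(p - s), idx)
                       for idx, p in enumerate(points))
        # exactly two closest points, strictly closer than the third
        two_way_tie = (np.isclose(dists[0][0], dists[1][0])
                       and not np.isclose(dists[1][0], dists[2][0]))
        if two_way_tie:
            for _, idx in dists[:2]:
                found.append(2 * (points[idx] - s))
    return found

# For Z^2 the relevant vectors are exactly (+-1, 0) and (0, +-1)
print(len(relevant_vectors(np.eye(2))))  # -> 4
```

For s = (1/2, 1/2) four lattice points tie, so that midpoint correctly contributes no relevant vectors.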

Fig. 4. Average complexity vs. input point standard deviation, lattice dimension = 8.

The computational complexity of this pre-processing stage, which requires 2ⁿ − 1 runs of the sphere decoder, is significantly higher than that of the lattice slicer itself. However, in many applications there is a need to solve several closest lattice point problems for the same lattice. In such cases, the pre-processing stage runs only once per many runs of the lattice slicer itself, so its complexity becomes negligible.

VI. SIMULATION RESULTS

The lattice slicer algorithm was simulated with the following model. The elements of the n × n lattice generator matrix G, as well as the elements of the n-dimensional input vector x, were generated as independent Gaussian random variables with zero mean and variances of 1 and σ², respectively. Computational complexity is measured by the average number of floating point operations (flops). The lattice slicer is compared with the AllClosestPoints algorithm of [1], which is an enhanced sphere decoder. In all simulations, the lattice slicer is initialized with the rounded least squares solution c = G · round((G'G)⁻¹G'x).

Figure 4 shows the average computational complexity of the lattice slicer and the enhanced sphere decoder as a function of the input point standard deviation, for a lattice dimension of n = 8. Every point in the graph was generated by averaging over 1000 randomly generated G matrices with 50 randomly generated instances of the input point x per each G. It can be seen that both algorithms have comparable complexity, where the sphere decoder is better by a factor of 2. For small input point standard deviations, the input point is still in the Voronoi region of the origin. When the standard deviation is smaller, the lattice slicer can skip more Voronoi relevant vectors using the test of Claim 1, and the sphere decoder reaches the correct point earlier with higher probability. When the standard deviation increases, the curves flatten and the complexity becomes constant, since the input point is now approximately uniformly distributed along the Voronoi region of the closest lattice point (which is not necessarily the origin).

Figure 5 shows the probability density function (PDF) of the computational complexity, as estimated from the respective histograms. The lattice dimension is 8 and the input point standard deviation is 500, which is large enough to assume a uniform distribution of the input point along the Voronoi cell of the closest lattice point. Each trial in the histogram corresponds to a different G. 5,000 G matrices were generated, and the complexity was averaged for each G over 50 instances of the


Fig. 5. Probability Density Function (PDF) of the number of flops, lattice dimension = 8, input point standard deviation = 500.

Fig. 6. Average complexity vs. lattice dimension, input point standard deviation = 500.

input point. The two PDFs have different behavior: the PDF of the lattice slicer is concentrated around its peak and decays sharply, while the PDF of the sphere decoder has a heavy tail, which causes the sphere decoder to have larger computational variability than the lattice slicer. A possible measure for the computational variability is the peak-to-average ratio of the computational complexity for a finite set of G matrices, defined as the ratio between the average complexity for the worst-case G in the set and the average complexity for the whole set. The sphere decoder's peak-to-average ratio for a set of 5000 G matrices was measured to be larger by a factor of 20 than that of the lattice slicer. It turns out that the lattice slicer is much better for G matrices that are close to singular, and these matrices are the cause of the large peaks in the computational complexity of the sphere decoder.

Figure 6 shows the average computational complexity of both algorithms as a function of the lattice dimension (with the simulation conditions as in Figure 4). The input point standard deviation is 500, which is large enough such that the input point is uniformly distributed along the Voronoi cell of the closest lattice point. It can be seen that both algorithms have the same complexity for small dimensions, while the sphere decoder has an advantage for large dimensions, which increases as the dimension increases.

All these simulations only measure the complexity of the lattice slicer itself, ignoring the pre-processing, since we assume that many closest lattice point problems are solved for the same lattice. However, it is of interest to evaluate the complexity of the pre-processing stage as well. This stage finds the Voronoi relevant vectors using the RelevantVectors algorithm of [1], described in Section V. This algorithm calls the AllClosestPoints enhanced sphere decoder, described in

Fig. 7. Complexity of pre-processing stage vs. lattice dimension.

the appendix. In each call, it is sufficient to use a search radius of half the norm of the longest Voronoi relevant vector (see Section V). However, the norms of the Voronoi relevant vectors are not known in advance. It came out by trial and error that for this simulation setting, an efficient implementation was achieved by using a search radius of 1.6 in each call to the enhanced sphere decoder. If no point was found, the search radius was multiplied by 1.25, and so on, until at least one point was found. Figure 7 shows the average complexity of the pre-processing stage as a function of the lattice dimension. 1000 random G matrices were generated for each dimension and the complexity was averaged over all of them. Comparing Figures 7 and 6, it can be seen that the complexity of the pre-processing stage is approximately 3, 10 and 100 times the average complexity of the lattice slicer itself for lattice dimensions of 2, 4, and 8, respectively.

VII. CONCLUSION

A novel algorithm was proposed for the closest lattice point problem. The algorithm uses the list of Voronoi relevant vectors of the lattice in order to break the complex n-dimensional problem into a series of simple one-dimensional slicing problems. It was shown that the algorithm is guaranteed to converge in a finite number of steps, and that it can be interpreted as a coordinate search solution for an optimization problem. Simulation results show that the lattice slicer algorithm has an average complexity which is comparable to that of the enhanced sphere decoder, but its computational variability is smaller. The lattice slicer algorithm is an additional tool in the "toolkit" of closest lattice point algorithms, and it gives additional insight into the closest lattice point problem.

ACKNOWLEDGMENT

Support and interesting discussions with Ehud Weinstein are gratefully acknowledged.

REFERENCES

[1] E. Agrell, T. Eriksson, A. Vardy, and K. Zeger, "Closest point search in lattices," IEEE Trans. Inform. Theory, vol. 48, pp. 2201-2214, Aug. 2002.

[2] N. Sommer, M. Feder and O. Shalvi, "Low Density Lattice Codes," submitted to IEEE Trans. Inform. Theory.

[3] O. Shalvi, N. Sommer and M. Feder, "Signal Codes," Proceedings of the Information Theory Workshop, 2003, pp. 332-336.

[4] N. Sommer, M. Feder and O. Shalvi, "Closest point search in lattices using sequential decoding," Proceedings of the International Symposium on Information Theory (ISIT), 2005, pp. 1053-1057.

[5] M. O. Damen, H. El Gamal, and G. Caire, "On maximum-likelihood decoding and the search of the closest lattice point," IEEE Trans. Inform. Theory, vol. 49, pp. 2389-2402, Oct. 2003.

[6] J. H. Conway and N. J. A. Sloane, Sphere Packings, Lattices and Groups, 3rd ed. New York: Springer-Verlag, 1999.
