
ISIT 2009, Seoul, Korea, June 28 - July 3, 2009

Explicit thresholds for approximately sparse compressed sensing via $\ell_1$-optimization

Mihailo Stojnic
School of Industrial Engineering
Purdue University, West Lafayette, IN 47907, USA

Email: [email protected]


Abstract-It is well known that compressed sensing problems reduce to solving large under-determined systems of equations. If we choose the elements of the compressed measurement matrix according to some appropriate probability distribution and if the signal is sparse enough, then $\ell_1$-optimization can recover it with overwhelming probability (see, e.g., [5], [9], [10]). In fact, [5], [9], [10] establish (in a statistical context) that if the number of measurements is proportional to the length of the signal then there is a sparsity of the unknown signal, proportional to its length, for which the success of $\ell_1$-optimization is guaranteed. In this paper we consider a modification of this standard setup, namely the case of so-called approximately sparse unknown signals [7], [27]. We determine sharp lower bounds on the values of the allowable approximate sparsity for any given number (proportional to the length of the unknown signal) of measurements. We introduce a novel, very simple technique which provides very good values for the proportionality constants.

Index Terms: compressed sensing, $\ell_1$-optimization, approximately sparse

I. INTRODUCTION

In this paper we are interested in the mathematical background of certain compressed sensing problems (more on compressed sensing problems and their importance the interested reader can find in the excellent references [6], [15], [19], [23], [29]). These problems are very easy to pose and very difficult to solve. Namely, we would like to find $x$ such that

$$Ax = y \qquad (1)$$

where $A$ is an $m \times n$ measurement matrix and $y$ is an $m \times 1$ measurement vector. In the usual compressed sensing context $x$ is an $n \times 1$ unknown $k$-sparse vector. In the rest of the paper we will further assume that $k = \beta n$ and $m = \alpha n$, where $\alpha$ and $\beta$ are absolute constants independent of $n$.

A particular way of solving (1) which recently generated a large amount of research is called $\ell_1$-optimization [5], [6], [10] (more on different solving algorithms the interested reader can find in e.g. [1], [18], [20]-[22], [28]). The basic $\ell_1$-optimization algorithm finds $x$ in (1) by solving the following problem:

$$\min \|x\|_1 \quad \text{subject to} \quad Ax = y. \qquad (2)$$
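Since the paper does not spell out how (2) is solved in practice, a brief note may help: (2) is equivalent to a linear program via the standard variable-splitting trick $x = x^{+} - x^{-}$, $x^{+}, x^{-} \ge 0$, so that $\|x\|_1 = \sum_i (x^{+}_i + x^{-}_i)$. Below is a minimal sketch of this reformulation (our illustration, not the author's code; the problem sizes and the HiGHS solver choice are arbitrary assumptions):

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """Solve min ||x||_1 subject to Ax = y as a linear program.

    Split x = xp - xm with xp, xm >= 0; then ||x||_1 = sum(xp + xm)
    and the equality constraint becomes A xp - A xm = y.
    """
    m, n = A.shape
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    assert res.success, res.message
    return res.x[:n] - res.x[n:]

# Sanity check: recover an ideally sparse x from m < n random measurements.
rng = np.random.default_rng(0)
n, m, k = 200, 100, 10
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
print(np.linalg.norm(l1_min(A, A @ x) - x, 1))  # ~0: exact recovery
```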

Quite remarkably, in [5] the authors were able to show the following: if $\alpha$ and $n$ are given and the matrix $A$ satisfies a special property called the restricted isometry property (RIP) (more on when $A$ satisfies the RIP condition the interested reader can find in [3], [6]), then any unknown vector $x$ with no more than $k = \beta n$ non-zero elements can be recovered by solving (2); here $\beta$ is an absolute constant dependent on $\alpha$, explicitly calculated in [6], to which we will refer throughout the paper as a threshold. As expected, this assumes that $y$ was in fact generated by that $x$ and given to us. The case when the available measurements are noisy versions of $y$ is also of interest [5], [6], [30]. Although that case is not the primary concern of the present paper, we mention in passing that the recent popularity of $\ell_1$-optimization in compressed sensing is significantly due to its robustness with respect to noisy measurements.

Instead of characterizing the $m \times n$ matrix $A$ through the RIP condition, in [9], [10] the author establishes that the solutions of (2) and (1) coincide if and only if the projection of the regular $n$-dimensional cross-polytope by the matrix $A$ is a $k$-neighborly polytope. For $A$ chosen as a random ortho-projector, the precise relation between $m$ and $k$ such that the solutions of (2) and (1) coincide is characterized in [9], [10] as well (similar results were obtained in [12], [13] if the components of $x$ are non-negative or, in general, of fixed sign).

In this paper we will consider a slight modification of the standard compressed sensing setup described above. Namely, instead of ideally sparse signals we will consider so-called approximately sparse signals (more on approximately sparse signals the interested reader can find in e.g. [7], [27] and the references therein). We define $k$-approximately sparse signals as signals that have $k$ components of large magnitude and whose remaining $(n-k)$ components (viewed as a vector) have $\ell_1$-norm smaller than a constant $\delta$.

Since the unknown vector $x$ in this case generally has no zeros, it is relatively easy to see that its exact recovery from a reduced number of measurements is not possible. Instead, we will prove (in a statistical context) that if the unknown $k$-approximately sparse vector is $x$ and $\hat{x}$ is the solution of (2), then for any given constant $0 < \alpha \le 1$ there will be a constant $\beta$ such that

$$\|\hat{x} - x\|_1 \le \frac{2(C+1)\delta}{C-1} \qquad (3)$$

where $C > 1$ is a given constant determining how close in $\ell_1$-norm the recovered vector $\hat{x}$ should be to the original $k$-approximately sparse vector $x$. As expected, $\beta$ will be a function of $C$ and $\alpha$. However, $\beta$ will be an absolute constant independent of $n$.
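To make the guarantee (3) concrete, the following small experiment (again our illustration, with arbitrarily chosen sizes and $C = 2$) builds a $k$-approximately sparse signal whose tail has $\ell_1$-norm exactly $\delta$, recovers it through the LP reformulation of (2) sketched above, and compares the $\ell_1$ recovery error against the right-hand side of (3). Keep in mind that (3) is only guaranteed when $\beta = k/n$ is below the thresholds established later in the paper:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    # min ||x||_1 s.t. Ax = y via x = xp - xm, xp, xm >= 0 (see above).
    m, n = A.shape
    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    assert res.success, res.message
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(1)
n, m, k, delta, C = 200, 120, 10, 0.1, 2.0
x = np.zeros(n)
x[:k] = 10 * rng.standard_normal(k)             # k large components
tail = rng.standard_normal(n - k)
x[k:] = delta * tail / np.linalg.norm(tail, 1)  # tail with l1-norm delta
A = rng.standard_normal((m, n))
err = np.linalg.norm(l1_min(A, A @ x) - x, 1)
print(f"l1 error {err:.4f} vs bound (3) {2 * (C + 1) * delta / (C - 1):.4f}")
```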


Problems with similar goals and possibly different algorithms and different proof techniques have recently been considered throughout the literature (see e.g. [2], [4], [5], [7], [8], [16], [20], [21], [27], [31], [32]). Carefully looking at (3), one can note that in this paper we will use the $\ell_1$-norm as the measure of quality of the recovery algorithm. Instead of the $\ell_1$-norm on the left-hand side of the inequality in (3) one can use the $\ell_2$-norm, which is a stronger condition. Results similar to the ones that we will present here were obtained for that case in e.g. [2], [5], [7], [20], [21]. Furthermore, one can replace both $\ell_1$-norms in (3) (the second one is hidden in the definition of $\delta$) with $\ell_2$-norms and, up to a scaling factor, still obtain results similar to the ones that we will present here (see e.g. [8], [31]).

As mentioned above, in this paper we use a slightly less strict performance measure of the recovery. However, we will provide the explicit values of the allowable constants $\beta$ for the entire range $0 < \alpha \le 1$ for any fixed $C > 1$. (It is not difficult to see that when $C \to 1$ we recover the case of ideally sparse signals [7], [27], [32].) Our analysis will massively utilize the structure of the null-space of the matrix $A$.

II. NULL-SPACE CHARACTERIZATION

The following theorem provides the null-space characterization of the matrix $A$ that guarantees the success of $\ell_1$-optimization (2) in recovering $x$ so that (3) holds (see [14], [17], [18], [26], [27], [33] for similar characterizations used in various different contexts).

Theorem 1: Assume that an $m \times n$ measurement matrix $A$ is given. Let $x$ be a $k$-approximately sparse vector. Further, assume that $y = Ax$ and that $w$ is an $n \times 1$ vector. Let $K$ be any subset of $\{1, 2, \dots, n\}$ such that $|K| = k$ and let $K_i$ denote the $i$-th element of $K$. Further, let $\bar{K} = \{1, 2, \dots, n\} \setminus K$. Let $\mathbf{1}$ be a $2^k \times k$ sign matrix; each element of the matrix $\mathbf{1}$ is either $1$ or $-1$ and there are no two rows that are identical. Let $\mathbf{1}_j$ be the $j$-th row of the matrix $\mathbf{1}$. Then for any $x$ and any $C > 1$,

$$\|\hat{x} - x\|_1 \le \frac{2(C+1)}{C-1}\|x_{\bar{K}}\|_1 \qquad (4)$$

(where $\hat{x}$ is the solution of (2)) holds if

$$\left(\forall w \in \mathbf{R}^n \ \text{such that} \ Aw = 0\right) \ \text{and} \ \forall K, \forall j: \quad -\mathbf{1}_j w_K + \frac{C-1}{C+1}\|w_K\|_1 < \frac{2}{C+1}\|w_{\bar{K}}\|_1 \qquad (5)$$

where $w_K$ is the vector comprised of the $k$ elements of $w$ with indexes determined by the elements of $K$ ($x_{\bar{K}}$ and $w_{\bar{K}}$ are defined in a similar fashion).

Proof: First let the elements of $x_K$ be non-negative and let the matrix $A$ have the claimed null-space property. Then the solution $\hat{x}$ of (2) satisfies $\|\hat{x}\|_1 \le \|x\|_1$, where $x$ is the original signal. Since $Ax = y$, it easily follows that $w = \hat{x} - x$ is in the null space of $A$. Therefore we can further write $\|x\|_1 \ge \|x + w\|_1$. Using the triangle inequality we obtain

$$\sum_{i=1}^{k} x_{K_i} + \|x_{\bar{K}}\|_1 = \|x\|_1 \ge \|\hat{x}\|_1 = \|x + w\|_1 \ge \sum_{i=1}^{k} x_{K_i} + \sum_{i=1}^{k} w_{K_i} + \|w_{\bar{K}}\|_1 - \|x_{\bar{K}}\|_1 = \sum_{i=1}^{k} x_{K_i} + \sum_{i=1}^{k} w_{K_i} - \|w_K\|_1 + \|w\|_1 - \|x_{\bar{K}}\|_1. \qquad (6)$$

Using (5) we obtain

$$-\frac{C+1}{2}\sum_{i=1}^{k} w_{K_i} + \frac{C+1}{2}\|w_K\|_1 = -\frac{C+1}{2}\sum_{i=1}^{k} w_{K_i} + \frac{C-1}{2}\|w_K\|_1 + \|w_K\|_1 < \|w_{\bar{K}}\|_1 + \|w_K\|_1 = \|w\|_1. \qquad (7)$$

Connecting (6) and (7) we have

$$\sum_{i=1}^{k} x_{K_i} + \|x_{\bar{K}}\|_1 \ge \sum_{i=1}^{k} x_{K_i} + \sum_{i=1}^{k} w_{K_i} - \|w_K\|_1 + \|w\|_1 - \|x_{\bar{K}}\|_1 \ge \sum_{i=1}^{k} x_{K_i} - \|x_{\bar{K}}\|_1 - \frac{2}{C+1}\|w\|_1 + \|w\|_1 = \sum_{i=1}^{k} x_{K_i} - \|x_{\bar{K}}\|_1 + \frac{C-1}{C+1}\|w\|_1.$$

Connecting the beginning and the end of the inequality above, we finally get $2\|x_{\bar{K}}\|_1 \ge \frac{C-1}{C+1}\|w\|_1$, or equivalently, as stated in (4), $\|\hat{x} - x\|_1 \le \frac{2(C+1)}{C-1}\|x_{\bar{K}}\|_1$.

At the beginning of the proof we assumed that the elements of $x_K$ are non-negative. The entire chain of inequalities shown above can be written for any different sign combination of the elements of $x_K$. Say $\mathbf{1}_j$ is a $k$-dimensional row vector of the signs of the elements of the column vector $x_K$. The only two things that will change in the previous chain of inequalities are $\sum_{i=1}^{k} x_{K_i}$ and $\sum_{i=1}^{k} w_{K_i}$. Namely, $\sum_{i=1}^{k} x_{K_i}$ should be replaced by the inner product $\mathbf{1}_j x_K$ while $\sum_{i=1}^{k} w_{K_i}$ should be replaced by the inner product $\mathbf{1}_j w_K$. Since in general there are $2^k$ different sign rows $\mathbf{1}_j$, we ultimately require that (5) holds for every $j \in \{1, 2, \dots, 2^k\}$.

If (5) holds then $\|\hat{x} - x\|_1 \le \frac{2(C+1)}{C-1}\|x_{\bar{K}}\|_1$ for any $K$. Hence it is also true for the set $K$ which corresponds to the $k$ largest (in magnitude) elements of the vector $x$. When $K$ corresponds to the $k$ largest elements of $x$, $\bar{K}$ corresponds to the $(n-k)$ smallest elements of $x$. By our definition of approximately sparse signals, the $\ell_1$-norm of the $(n-k)$ smallest elements of $x$ is no greater than $\delta$, i.e. $\|x_{\bar{K}}\|_1 \le \delta$. From (4) it then easily follows that $\|\hat{x} - x\|_1 \le \frac{2(C+1)\delta}{C-1}$, which is exactly what is stated in (3). This concludes the proof. ∎

Remark: As proven above, if (5) holds then (4) will hold for any $x$. On the other hand, to have (4) hold for a given $x$ we only need (5) to hold for a specific $K$ and a specific $j$. The specific $K$ should be exactly the one that corresponds to the $k$ largest (in magnitude) elements of $x$; the specific $j$ should be exactly the one that corresponds to the row of the matrix $\mathbf{1}$ that has the same signs as the given vector $x$.

In the following sections we will, for a certain statistical matrix $A$ and any given $0 < \alpha \le 1$, $C > 1$, and $\delta > 0$, find a $\beta$ such that (5) holds with overwhelming probability. That will be enough for (4) to hold with overwhelming probability as well. $\beta_s$ will denote the maximum $\beta$ such that (5) holds for every $j \in \{1, 2, \dots, 2^k\}$ and every $K$, while $\beta_w$ will denote the maximum $\beta$ such that (5) holds for a specific $j$ and a specific $K$ (in both cases (5) will hold with overwhelming probability, i.e., with probability that differs from $1$ by at most a quantity exponentially small in $n$). Usually in the literature $\beta_s$ is called the strong while $\beta_w$ is called the weak sparsity threshold (more on the definitions of the weak and the strong thresholds the interested reader can find in e.g. [11], [13]).
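The weak form of condition (5) singled out in the remark lends itself to a quick numerical spot check. The sketch below (our own, anticipating the Gaussian null-space model of the next section; all sizes are arbitrary assumptions) samples vectors $w = Zv$ from the null space for a fixed $K$ and the all-ones sign row $\mathbf{1}_j$ and evaluates both sides of (5). Random sampling can of course only fail to find a violation; it cannot certify (5) over the entire null space:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k, C = 60, 36, 4, 2.0
Z = rng.standard_normal((n, n - m))   # Gaussian basis of the null space
one_j = np.ones(k)                    # sign row 1_j; K = first k indices

V = rng.standard_normal((n - m, 100000))
W = Z @ V                             # columns are null-space vectors Zv
WK, WKbar = W[:k], W[k:]
lhs = -one_j @ WK + (C - 1) / (C + 1) * np.abs(WK).sum(axis=0)
rhs = 2 / (C + 1) * np.abs(WKbar).sum(axis=0)
print("max observed lhs - rhs:", (lhs - rhs).max())  # negative: no violation found
```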

III. PROBABILISTIC ANALYSIS

In this section we consider random matrices $A$ that have a basis of the null-space comprised of i.i.d. zero-mean Gaussian elements (we refer to these matrices as null-gaussian).

A. Gaussian null-space

Assume that we are given an $n \times (n-m)$ matrix $Z$. Let $Z_i$ be the $i$-th row of $Z$ and let $Z_{ij}$ be the $(i,j)$-th element of $Z$. Further, let the $Z_{ij}$ be i.i.d. zero-mean unit-variance Gaussian random variables. Assume that $A$ is a matrix such that $Z$ is a basis of its null space. It then holds that $AZ = 0$. Furthermore, any $n \times 1$ vector $w$ from the null space of $A$ can be represented as $Zv$ where $v \in \mathbf{R}^{n-m}$. To simplify the further exposition we set $\zeta = \frac{C-1}{2}$. Let $I_t$ denote the event

$$-(\zeta+1)\sum_{i=1}^{k} \mathbf{1}_{j,i} Z_{K_i} v + \zeta \sum_{i=1}^{k} |Z_{K_i} v| < \sum_{i=1}^{n-k} |Z_{\bar{K}_i} v|.$$

Let $S$ be a $2^{n-k} \times (n-k)$ sign matrix. Each element of the matrix $S$ is either $1$ or $-1$ and there are no two rows that are identical. Let $S_p$ be the $p$-th row of the matrix $S$ and let $S_{p,i}$ be the $(p,i)$-th element of $S$. Further, let $C_{\bar{K}}^{(p)}$, $1 \le p \le 2^{n-k}$, denote the polyhedral cones $S_{p,i} Z_{\bar{K}_i} v \ge 0$, $1 \le i \le (n-k)$.

Essentially, for any given constant $\alpha = \frac{m}{n}$ we will compute a constant $\beta = \frac{k}{n}$ such that

$$\lim_{n \to \infty} P\left(I_t, \ \forall K \subset \{1, 2, \dots, n\}, |K| = k, \ \forall j, \ \forall v \in \cup_{p=1}^{2^{n-k}} C_{\bar{K}}^{(p)}\right) = 1. \qquad (8)$$

In order to show that (8) holds for certain values of $\alpha$ and $\beta$ we will actually show that

$$\lim_{n \to \infty} P_f = 0, \qquad (9)$$

where $P_f = P(\exists j, \exists K \subset \{1, 2, \dots, n\}, |K| = k, \exists v \in \cup_{p=1}^{2^{n-k}} C_{\bar{K}}^{(p)} \ \text{s.t.} \ \bar{I}_t)$ and $\bar{I}_t$ denotes the complement of $I_t$, i.e., the event

$$-(\zeta+1)\sum_{i=1}^{k} \mathbf{1}_{j,i} Z_{K_i} v + \zeta \sum_{i=1}^{k} |Z_{K_i} v| \ge \sum_{i=1}^{n-k} |Z_{\bar{K}_i} v|.$$

In what follows we will repeatedly use $P_f$; our goal will always be to show that $\lim_{n \to \infty} P_f = 0$.

Now, using the union bound over all subsets $K$ and all indexes $j$, we can write

$$P_f \le \sum_{l=1}^{\binom{n}{k}} \sum_{j=1}^{2^k} P\left(\exists v \in \cup_{p=1}^{2^{n-k}} C_{\bar{K}(l)}^{(p)} \ \text{s.t.} \ \bar{I}_t\right) \qquad (10)$$

where $K(l)$ is a subset of $\{1, 2, \dots, n\}$ and $|K(l)| = k$. Clearly the number of these subsets is $\binom{n}{k}$ and hence the first summation on the right side of (10) goes from $1$ to $\binom{n}{k}$. Since the elements of the matrix $Z$ are i.i.d. and symmetric random variables, all $\binom{n}{k} 2^k$ terms in the summation on the right-hand side of (10) turn out to be equal. Therefore, without loss of generality we can further write

$$P_f \le \binom{n}{k} 2^k P\left(\exists v \in \cup_{p=1}^{2^{n-k}} C_v^{(p)} \ \text{s.t.} \ f(v) \ge 0\right) \qquad (11)$$

where $C_v^{(p)}$, $1 \le p \le 2^{n-k}$, denote the polyhedral cones $S_{p,i} Z_{k+i} v \ge 0$, $1 \le i \le (n-k)$ (see Figure 1), and $f(v) = -(\zeta+1)\sum_{i=1}^{k} Z_i v + \zeta \sum_{i=1}^{k} |Z_i v| - \sum_{i=k+1}^{n} |Z_i v|$.

Fig. 1. Cones $C_v^{(p)}$.

Let $E_e$ be the set of all extreme rays of $\cup_{p=1}^{2^{n-k}} C_v^{(p)}$. The function $f(v)$ is convex (in fact linear) over the union of the cones $\cup_{p=1}^{2^{n-k}} C_v^{(p)}$ and achieves the maximum (up to a scaling constant) on its extreme rays. Hence we have

$$\binom{n}{k} 2^k P\left(\exists v \in \cup_{p=1}^{2^{n-k}} C_v^{(p)} \ \text{s.t.} \ f(v) \ge 0\right) \le \binom{n}{k} 2^k P\left(\max_{v \in E_e} f(v) \ge 0\right). \qquad (12)$$

Using the union bound over $v$ we further obtain

$$P_f \le \binom{n}{k} 2^k P\left(\max_{v \in E_e} f(v) \ge 0\right) \le \binom{n}{k} 2^k \sum_{t=1}^{|E_e|} P(f(v_t) \ge 0) = \binom{n}{k} 2^k \sum_{t=1}^{|E_e|} P\left(\frac{-(\zeta+1)\sum_{i=1}^{k} Z_i v_t + \zeta \sum_{i=1}^{k} |Z_i v_t|}{\sum_{i=k+1}^{n} |Z_i v_t|} \ge 1\right) \qquad (13)$$

where $v_t$, $1 \le t \le |E_e|$, are the extreme rays of $\cup_{p=1}^{2^{n-k}} C_v^{(p)}$. The following lemma estimates $|E_e|$.

Lemma 1: Let $C_v^{(p)}$, $1 \le p \le 2^{n-k}$, be the polyhedral cones $\{v \in \mathbf{R}^{n-m} \mid S_{p,i} Z_{k+i} v \ge 0, 1 \le i \le (n-k)\}$, where $S$ is a $2^{n-k} \times (n-k)$ sign matrix and $Z_{k+i}$, $1 \le i \le (n-k)$, are $(n-m)$-dimensional vectors (in this case obtained from an $n \times (n-m)$ matrix $Z$, a basis of the null-space of an $m \times n$ measurement matrix $A$). Let $E_e$ be the set of all extreme rays of $\cup_{p=1}^{2^{n-k}} C_v^{(p)}$. Then

$$|E_e| \le \frac{2^{n-k}}{2^{n-m-1}} \binom{n-k}{n-m-1}.$$

Proof: Each cone $C_v^{(p)}$ can have at most $\binom{n-k}{n-m-1}$ extreme rays and there are in total $2^{n-k}$ cones. This gives an upper bound $2^{n-k}\binom{n-k}{n-m-1}$ on $|E_e|$. However, in this way each extreme ray is counted $2^{n-m-1}$ times (an example of the overcounting is shown for the simple $n-m = 3$ case in Figure 2). For example, assume that a potential extreme ray $v_{ext}$ of $C_v^{(p)}$ is obtained as the solution of

$$S_{p,i} Z_{k+i} v = 0, \quad 1 \le i \le (n-m-1). \qquad (14)$$

Then there will be $2^{n-m-1}$ indexes $p$ such that $S_{p,i}$, $1 \le i \le (n-m-1)$, form a sign matrix. Since changing the signs of rows in (14) will not change the solution of (14), we conclude that, when used in (14), any of these $2^{n-m-1}$ indexes $p$ produces the same potential extreme ray. Therefore, $|E_e| \le \frac{2^{n-k}}{2^{n-m-1}}\binom{n-k}{n-m-1}$. ∎

Fig. 2. $C_v^{(p)}$ and its mirrored cone: an example of overcounting the number of extreme rays; $v_{ext}$ is insensitive to changing the equation signs in the systems of equations (14).
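Lemma 1 is easy to sanity-check numerically in the toy case $n - m = 3$ pictured in Figure 2. The sketch below (our own; the bound uses our reading of the lemma, and the dimensions are arbitrary assumptions) enumerates, for every sign pattern, all rays obtained by activating $n - m - 1$ constraints as in (14), keeps those that actually lie in the corresponding cone, and compares the number of distinct rays against the bound $\frac{2^{n-k}}{2^{n-m-1}}\binom{n-k}{n-m-1}$:

```python
import numpy as np
from itertools import combinations, product
from math import comb

rng = np.random.default_rng(3)
n_m, n_k = 3, 5                              # n - m and n - k
Zrows = rng.standard_normal((n_k, n_m))      # rows Z_{k+1}, ..., Z_n

rays = set()
for s in product([1, -1], repeat=n_k):       # the 2^{n-k} sign patterns
    s = np.array(s)
    for idx in combinations(range(n_k), n_m - 1):
        # The n-m-1 equalities of (14) leave a 1-dim null space, i.e. +-d.
        _, _, Vt = np.linalg.svd(Zrows[list(idx)])
        for d in (Vt[-1], -Vt[-1]):
            if np.all(s * (Zrows @ d) >= -1e-9):   # ray d lies in this cone
                rays.add(tuple(np.round(d, 6)))

bound = 2 ** n_k // 2 ** (n_m - 1) * comb(n_k, n_m - 1)
print(f"{len(rays)} distinct extreme rays <= bound {bound}")
```

Note that flipping the signs of the active rows, as in the proof, does not change the solution line, which is exactly why each ray is found by $2^{n-m-1}$ different sign patterns.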

It is straightforward to show that the probabilities inside the last summation of (13) are insensitive to the particular value of $v_t$ (the details of that computation can be found in the extended version of the paper [24] and, in a somewhat different context, in [25]). Using the results of the previous lemma and (13) we then finally obtain

$$P_f \le 2^k 2^{m-k+1} \binom{n}{k} \binom{n-k}{n-m-1} P\left(f(c) \ge 0, \ Z_i c \ge 0, \ k+1 \le i \le m+1\right) \qquad (15)$$

where $c$ is a deterministic vector. Since the norm of the vector $c$ is irrelevant, we can further assume that $a_i = Z_i c$, $1 \le i \le m+1$, are i.i.d. zero-mean Gaussian with variance $1$. Hence from (15) we obtain

$$P_f \le 2^k 2^{m-k+1} \binom{n}{k} \binom{n-k}{n-m-1} P\left(\frac{-(\zeta+1)\sum_{i=1}^{k} a_i + \zeta \sum_{i=1}^{k} |a_i|}{\sum_{i=k+1}^{m+1} |a_i|} \ge 1, \ a_i \ge 0, \ k+1 \le i \le m+1\right). \qquad (16)$$

Removing the positivity constraint on $a_i$, $k+1 \le i \le m+1$, and using the symmetry of Gaussian random variables, it easily follows from (16) that

$$P_f \le 2^k \binom{n}{k} \binom{n-k}{n-m-1} P\left(\frac{\sum_{i=1}^{k} \left((\zeta+1)a_i + \zeta|a_i|\right)}{\sum_{i=k+1}^{m+1} |a_i|} \ge 1\right). \qquad (17)$$

Applying the Chernoff bound on the right-hand side of (17) we obtain

$$P_f \le 2^k \binom{n}{k} \binom{n-k}{n-m-1} \left(E e^{\mu((\zeta+1)a_i + \zeta|a_i|)}\right)^k \left(E e^{-\mu|a_i|}\right)^{m-k+1}$$

where $\mu$ is a positive constant. Set

$$\xi_s = 2^{\beta} e^{H(\beta)} e^{(1-\beta)H\left(\frac{\alpha-\beta}{1-\beta}\right)} \left[\frac{1}{2}\left(e^{\mu^2/2}\,\mathrm{erfc}\left(\frac{\mu}{\sqrt{2}}\right) + e^{(\mu C)^2/2}\left(\mathrm{erf}\left(\frac{\mu C}{\sqrt{2}}\right)+1\right)\right)\right]^{\beta} \left(e^{\mu^2/2}\,\mathrm{erfc}\left(\frac{\mu}{\sqrt{2}}\right)\right)^{\alpha-\beta} \qquad (18)$$

where $H(\cdot)$ is the entropy function. Using $\binom{n}{k} \le e^{nH(\beta)}$ and $\binom{n-k}{n-m-1} \le e^{n(1-\beta)H\left(\frac{\alpha-\beta}{1-\beta}\right)}$ we then have

$$P_f \le \xi_s^n. \qquad (19)$$

Finally we can formulate the following theorem.

Theorem 2 (Strong threshold, Null-gaussian): Let $A$ be an $m \times n$ measurement matrix with a basis of the null-space comprised of i.i.d. $N(0,1)$ random variables. Let $C > 1$, $\alpha = \frac{m}{n}$, and $\beta = \frac{k}{n} \le \beta_s$. Then for any $k$-approximately sparse $x$ in (1) the solution of (2) satisfies (4) with overwhelming probability. The value of $\beta_s$ can be obtained by solving

$$\max \beta \quad \text{s.t.} \quad \xi_s < 1, \ \mu > 0$$

where $\xi_s$ is defined in (18).
Proof: Follows from the previous discussion. ∎

Similarly to the above one can set

$$\xi_w = e^{(1-\beta)H\left(\frac{\alpha-\beta}{1-\beta}\right)} \left[\frac{1}{2}\left(e^{\mu^2/2}\,\mathrm{erfc}\left(\frac{\mu}{\sqrt{2}}\right) + e^{(\mu C)^2/2}\left(\mathrm{erf}\left(\frac{\mu C}{\sqrt{2}}\right)+1\right)\right)\right]^{\beta} \left(e^{\mu^2/2}\,\mathrm{erfc}\left(\frac{\mu}{\sqrt{2}}\right)\right)^{\alpha-\beta} \qquad (20)$$

and formulate the following theorem that relates to the weak thresholds.

Theorem 3 (Weak threshold, Null-gaussian): Let $A$ be an $m \times n$ measurement matrix with a basis of the null-space comprised of i.i.d. $N(0,1)$ random variables. Let $C > 1$, $\alpha = \frac{m}{n}$, and $\beta = \frac{k}{n} \le \beta_w$. Then for a given $k$-approximately sparse $x$ in (1) the solution of (2) satisfies (4) with overwhelming probability. The value of $\beta_w$ can be obtained by solving

$$\max \beta \quad \text{s.t.} \quad \xi_w < 1, \ \mu > 0 \qquad (21)$$

where $\xi_w$ is defined in (20).
Proof: Follows from the previous analysis after realizing that in the case of the weak threshold one need not apply the union bounds over all possible signs of the vector $x$ and over all possible locations of the $k$ large components within the $n$-dimensional vector $x$. (This effectively means that $\xi_w = 2^{-\beta} e^{-H(\beta)} \xi_s$.) ∎

The obtained values of $\beta_w$ are displayed in Figures 3 and 4 (the values for $\beta_s$ can be plotted in a similar fashion). In Figure 4 we show the achievable values of $\beta_w$ for the entire range of $\alpha$ and for two specific values of $C$, namely $C = 1$ and $C = 2$. In Figure 3, on the other hand, the entire threshold surface for $\beta_w$ is shown for $1 \le C \le 2$. $C = 1$ corresponds to the so-called ideally sparse case. The best threshold results known for that case (if the $\ell_1$-optimization algorithm is used) are those from [10], and they are also shown in Figure 4 for comparison. When $C > 1$, no explicit values of $\beta_w$ in the literature are known to the author.
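The optimization problems in Theorems 2 and 3 are simple enough to solve by brute force. The sketch below (our illustration, built on our reconstruction of (18) and (20), not the author's code; grid resolutions are arbitrary) computes, for a given $\alpha$ and $C$, the largest $\beta$ on a grid for which some $\mu > 0$ yields $\xi < 1$, working with $\log \xi$ for numerical stability:

```python
import numpy as np
from scipy.special import erf, erfc

def H(p):
    """Entropy function (natural logarithm)."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

def log_xi(beta, alpha, C, mu, weak=True):
    """log xi_w from (20); adding beta*log(2) + H(beta) gives log xi_s from (18)."""
    pos = 0.5 * (np.exp(mu ** 2 / 2) * erfc(mu / np.sqrt(2))
                 + np.exp((mu * C) ** 2 / 2) * (erf(mu * C / np.sqrt(2)) + 1))
    log_neg = mu ** 2 / 2 + np.log(erfc(mu / np.sqrt(2)))  # log E exp(-mu|a|)
    val = ((1 - beta) * H((alpha - beta) / (1 - beta))
           + beta * np.log(pos) + (alpha - beta) * log_neg)
    return val if weak else val + beta * np.log(2) + H(beta)

def threshold(alpha, C, weak=True):
    """Largest beta on a grid such that xi < 1 for some mu > 0."""
    mus = np.linspace(0.01, 5, 1000)
    best = 0.0
    for beta in np.linspace(1e-4, alpha - 1e-4, 2000):
        if np.min(log_xi(beta, alpha, C, mus, weak)) < 0:
            best = beta
    return best

print(threshold(0.5, C=2.0, weak=True))   # beta_w at alpha = 1/2, C = 2
```

Plotting threshold(alpha, C) over a grid of $\alpha$ and $C$ reproduces, under the stated assumptions, curves of the kind shown in Figures 3 and 4.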



Fig. 3. Weak threshold surface for $1 < C < 2$; $A$ has a Gaussian basis of the null-space. [Surface plot of $\beta_w$ versus $\alpha$ and $1/C$.]

Fig. 4. Allowable sparsity as a function of $C$ (the allowable $\ell_1$-norm of the imperfection of the recovered signal is $\frac{2(C+1)}{C-1}\delta$). [Two panels, linear and log scale, show the weak thresholds for $C = 1$ and $C = 2$ obtained in this paper, together with the $C = 1$ curve of Donoho [10] for comparison.]

IV. CONCLUSION

Our analysis provided a somewhat novel technique for characterizing the optimality of $\ell_1$-norm optimization in compressed sensing of approximately sparse signals. We presented a probabilistic analysis for measurement matrices that have a Gaussian basis of the null-space. The technique is very simple, yet it produces very good values of the thresholds. Furthermore, although we presented our results only for the Gaussian case, it is possible to generalize them to several different symmetric distributions of the null-space bases. These extensions will be the subject of future work.

REFERENCES

[1] M. Akcakaya and V. Tarokh. Performance bounds on sparse representations using redundant frames. 2007. Available online at arXiv:cs/0703045.
[2] R. Baraniuk, V. Cevher, M. Duarte, and C. Hegde. Model-based compressive sensing. Available online at http://www.dsp.ece.rice.edu/cs/.
[3] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin. A simple proof of the restricted isometry property for random matrices. To appear in Constructive Approximation. Available online at http://www.dsp.ece.rice.edu/cs/.
[4] R. Berinde, A. C. Gilbert, P. Indyk, H. Karloff, and M. J. Strauss. Combining geometry and combinatorics: A unified approach to sparse signal recovery. 2008. Available online at http://www.dsp.ece.rice.edu/cs/.
[5] E. Candes, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. on Information Theory, 52:489-509, February 2006.
[6] E. Candes and T. Tao. Decoding by linear programming. IEEE Trans. on Information Theory, 51:4203-4215, Dec. 2005.
[7] A. Cohen, W. Dahmen, and R. DeVore. Compressed sensing and best k-term approximation. 2006. Available online at http://www.dsp.ece.rice.edu/cs/.
[8] R. DeVore, G. Petrova, and P. Wojtaszczyk. Instance-optimality in probability with an $\ell_1$ decoder. Preprint, 2008. Available online at http://www.dsp.ece.rice.edu/cs/.
[9] D. Donoho. Neighborly polytopes and sparse solutions of underdetermined linear equations. 2004. Technical report, Department of Statistics, Stanford University.
[10] D. Donoho. High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension. Disc. Comput. Geometry, 35(4):617-652, 2006.
[11] D. Donoho and J. Tanner. Neighborliness of randomly-projected simplices in high dimensions. Proc. National Academy of Sciences, 102(27):9452-9457, 2005.
[12] D. Donoho and J. Tanner. Sparse nonnegative solutions of underdetermined linear equations by linear programming. Proc. National Academy of Sciences, 102(27):9446-9451, 2005.
[13] D. Donoho and J. Tanner. Counting the faces of randomly projected hypercubes and orthants with applications. 2008. Available online at http://www.dsp.ece.rice.edu/cs/.
[14] D. L. Donoho and X. Huo. Uncertainty principles and ideal atomic decompositions. IEEE Trans. on Information Theory, 47(7):2845-2862, Nov. 2001.
[15] M. Duarte, M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly, and R. Baraniuk. Single-pixel imaging via compressive sampling. IEEE Signal Processing Magazine, 25(2), 2008.
[16] Y. C. Eldar and H. Bolcskei. Block-sparsity: Coherence and efficient recovery. Submitted to ICASSP 2009, 2008. Available at arXiv:0812.0329.
[17] A. Feuer and A. Nemirovski. On sparse representation in pairs of bases. IEEE Trans. on Information Theory, 49:1579-1581, June 2003.
[18] R. Gribonval and M. Nielsen. Sparse representations in unions of bases. IEEE Trans. Inform. Theory, 49(12):3320-3325, December 2003.
[19] M. Mishali and Y. Eldar. Reduce and boost: Recovering arbitrary sets of jointly sparse vectors. 2007. Available online at http://www.dsp.ece.rice.edu/cs/.
[20] D. Needell and J. A. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. 2008. Available online at http://www.dsp.ece.rice.edu/cs/.
[21] D. Needell and R. Vershynin. Uniform uncertainty principles and signal recovery via orthogonal matching pursuit. 2007. Available on arXiv.
[22] F. Parvaresh and B. Hassibi. Explicit measurements with almost optimal thresholds for compressed sensing. IEEE ICASSP, Mar.-Apr. 2008.
[23] B. Recht, W. Xu, and B. Hassibi. Necessary and sufficient conditions for success of the nuclear norm heuristic for rank minimization. 2008. Available on arXiv.
[24] M. Stojnic. Compressed sensing problems via $\ell_1$-optimization: performance analysis. 2008. Preprint, available from the author.
[25] M. Stojnic. A simple performance analysis of $\ell_1$-optimization in compressed sensing. To be presented at ICASSP, 2009.
[26] M. Stojnic, F. Parvaresh, and B. Hassibi. On the reconstruction of block-sparse signals with an optimal number of measurements. Submitted to IEEE Trans. on Signal Processing, 2008. Available on arXiv.
[27] M. Stojnic, W. Xu, and B. Hassibi. Compressed sensing of approximately sparse signals. ISIT, International Symposium on Information Theory, July 2008.
[28] J. Tropp and A. Gilbert. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. on Information Theory, 53(12):4655-4666, 2007.
[29] H. Vikalo, F. Parvaresh, and B. Hassibi. On sparse recovery of compressed DNA microarrays. Asilomar Conference, November 2007.
[30] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy recovery of sparsity. Proc. Allerton Conference on Communication, Control, and Computing, September 2006.
[31] P. Wojtaszczyk. Stability and instance optimality for Gaussian measurements in compressed sensing. Preprint, 2008. Available online at http://www.dsp.ece.rice.edu/cs/.
[32] W. Xu and B. Hassibi. Compressed sensing over the Grassmann manifold: A unified analytical framework. 2008. Available online at http://www.dsp.ece.rice.edu/cs/.
[33] Y. Zhang. When is missing data recoverable? Available online at http://www.dsp.ece.rice.edu/cs/.
