
1074 IEEE SIGNAL PROCESSING LETTERS, VOL. 20, NO. 11, NOVEMBER 2013

Improving the Bound on the RIP Constant in Generalized Orthogonal Matching Pursuit

Siddhartha Satpathi, Rajib Lochan Das, and Mrityunjoy Chakraborty, Senior Member, IEEE

Abstract—The generalized Orthogonal Matching Pursuit (gOMP) is a recently proposed compressive sensing greedy recovery algorithm which generalizes the OMP algorithm by selecting $N$ atoms in each iteration. In this letter, we demonstrate that the gOMP can successfully reconstruct a $K$-sparse signal from a compressed measurement $y = \Phi x$ by a maximum of $K$ iterations if the sensing matrix $\Phi$ satisfies the Restricted Isometry Property (RIP) of order $NK$, with the RIP constant $\delta_{NK}$ satisfying $\delta_{NK} < \sqrt{N}/(\sqrt{K} + 2\sqrt{N})$. The proposed bound is an improvement over the existing bound on $\delta_{NK}$. We also show that by increasing the RIP order just by one (i.e., from $NK$ to $NK+1$), it is possible to refine the bound further to $\delta_{NK+1} < \sqrt{N}/(\sqrt{K} + \sqrt{N})$, which is consistent (for $N = 1$) with the near-optimal bound on $\delta_{K+1}$ in OMP.

Index Terms—Compressive sensing, orthogonal matching pursuit, restricted isometry property, sensing matrix.

I. INTRODUCTION

THE generalized orthogonal matching pursuit (gOMP) [1] is a generalization of the well-known compressed sensing [2], [3] greedy recovery algorithm called orthogonal matching pursuit (OMP) [4]. Both the gOMP and the OMP algorithms try to obtain the sparsest solution to an underdetermined set of equations given as

$$y = \Phi x,$$

where $\Phi \in \mathbb{R}^{m \times n}$ ($m < n$) is a real valued sensing matrix and $y \in \mathbb{R}^{m}$ is a real valued observation vector. It is assumed that the sparsest solution to the above system is $K$-sparse, i.e., not more than $K$ (for some minimum $K$, $K < m$) elements of $x$ are non-zero, and also that the sparsest solution is unique, which means that every $2K$ columns of $\Phi$ are linearly independent [5]. Greedy approaches like the OMP, the gOMP, and also the orthogonal least squares (OLS) [6], the compressive sampling matching pursuit (CoSaMP) [7], the subspace pursuit (SP) [8], etc., recover the $K$-sparse signal by iteratively constructing the support set of the sparse signal (i.e., the indices of the non-zero elements in the sparse vector) by some greedy principle. Convergence of these iterative procedures in a finite number of steps requires the matrix $\Phi$ to satisfy the so-called "Restricted Isometry Property (RIP)" [9] of appropriate order, as given below.

Manuscript received June 11, 2013; revised August 06, 2013; accepted August 21, 2013. Date of publication August 29, 2013; date of current version September 16, 2013. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Alexandros G. Dimakis. The authors are with the Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology, Kharagpur, India (e-mail: [email protected]; [email protected]; [email protected]). Digital Object Identifier 10.1109/LSP.2013.2279977

Definition 1: A matrix $\Phi \in \mathbb{R}^{m \times n}$ is said to satisfy the RIP of order $K$ if there exists a "Restricted Isometry Constant" $\delta_K \in (0, 1)$ so that

$$(1 - \delta_K)\,\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta_K)\,\|x\|_2^2 \qquad (1)$$

for all $K$-sparse $x$. The constant $\delta_K$ is taken as the smallest number from (0, 1) for which the RIP is satisfied. It is easy to see that if $\Phi$ satisfies the RIP of order $K$, then it also satisfies the RIP for any order $K'$ where $K' \le K$, and that $\delta_{K'} \le \delta_K$. A simple choice of a random matrix for $\Phi$ can make it satisfy the RIP condition with high probability [9].

Convergence of the above stated greedy algorithms is usually established by imposing certain upper bounds on the RIP constant as a sufficient condition. In the case of gOMP, such a bound is given by $\delta_{NK} < \sqrt{N}/(\sqrt{K} + 3\sqrt{N})$ [1], where $N$ is the number of columns of $\Phi$ (also called atoms) the algorithm selects in each iteration. It is, however, observed that this bound is overly restrictive, as it is obtained by applying a series of inequalities that makes it progressively tighter. In this letter, by following alternate algebraic manipulations, we improve this bound to $\delta_{NK} < \sqrt{N}/(\sqrt{K} + 2\sqrt{N})$. Further, we show that by increasing the RIP order just by one (i.e., from $NK$ to $NK+1$), it is possible to refine¹ the bound further to $\delta_{NK+1} < \sqrt{N}/(\sqrt{K} + \sqrt{N})$. The latter is interestingly seen to be consistent (for $N = 1$) with the near-optimal upper bound of $\delta_{K+1} < 1/(\sqrt{K} + 1)$ in the OMP algorithm [10].

II. NOTATIONS AND A BRIEF REVIEW OF THE gOMP ALGORITHM

A. Notations

For the sake of uniformity, we follow the same notations as used in [1]. Let $\Omega = \{1, 2, \ldots, n\}$ denote the index set and $\phi_j$ denote the $j$-th column of $\Phi$. The matrix $\Phi_\Lambda$ represents the sub-matrix of $\Phi$ with columns indexed by the elements present in set $\Lambda \subseteq \Omega$. Similarly, $x_\Lambda$ represents the sub-vector of $x$ with elements chosen as per the indices given in $\Lambda$. By $T$, we denote the true support set of $x$, meaning $T = \{j \mid x_j \ne 0\}$, where $|T| \le K$ and $|\cdot|$ denotes the cardinality of the set "$\cdot$", and by $\Lambda^k$, we denote the estimated support set after $k$ iterations of the algorithm. The notation $\Lambda_1 \setminus \Lambda_2$ indicates a set with elements belonging to $\Lambda_1$ but not contained in $\Lambda_2$, i.e., $\Lambda_1 \setminus \Lambda_2 = \{j \mid j \in \Lambda_1,\ j \notin \Lambda_2\}$. The pseudo-inverse of $\Phi_\Lambda$ is denoted by $\Phi_\Lambda^\dagger$, i.e., $\Phi_\Lambda^\dagger = (\Phi_\Lambda^T \Phi_\Lambda)^{-1} \Phi_\Lambda^T$, where it

¹Note that as the upper bound of the RIP constant increases, one can choose larger values for the RIP constant than possible earlier, which in turn permits further compression of the number of measurements (i.e., $m$) without disturbing the high probability of satisfaction of the RIP by the sensing matrix [11].

1070-9908 © 2013 IEEE



TABLE I
gOMP ALGORITHM

is assumed that $\Phi_\Lambda$ has full column rank ($|\Lambda| \le m$). The two matrices $P_\Lambda = \Phi_\Lambda \Phi_\Lambda^\dagger$ and $P_\Lambda^\perp = I - P_\Lambda$ denote two orthogonal projection operators which project a given vector orthogonally on the column space of $\Phi_\Lambda$ and on its orthogonal complement, respectively. Similarly, $P_\Lambda^\perp \Phi$ is a matrix with each column denoting the projection error vector related to the orthogonal projection of the corresponding column of $\Phi$ on the column space of $\Phi_\Lambda$, for some $\Lambda \subset \Omega$. Lastly, we use "$T$" in the superscript to denote matrix/vector transposition, and by $\mathrm{span}(\cdot)$, we denote the subspace spanned by the columns of the matrix "$\cdot$".

For convenience of presentation, we also follow the following convention: we place a marker over the equality sign to indicate that the equality "$=$" follows from Lemma 1 / Equation (1) / Definition 1 / Theorem 1, respectively (same for inequalities).
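The projection operators above are easy to realize numerically. The sketch below (illustrative only; the dimensions and the index set are arbitrary, and $P_\Lambda$ is formed explicitly purely for exposition) builds $P_\Lambda = \Phi_\Lambda \Phi_\Lambda^\dagger$ and $P_\Lambda^\perp = I - P_\Lambda$ and checks the properties used later: symmetry, idempotence, and orthogonality of the residual $P_\Lambda^\perp y$ to the columns of $\Phi_\Lambda$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 40                       # illustrative sizes, not from the letter
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = rng.standard_normal(m)

Lam = [3, 7, 12]                    # an arbitrary index set Lambda
Phi_L = Phi[:, Lam]

# pseudo-inverse and the two projectors P_Lambda and P_Lambda^perp
Phi_L_pinv = np.linalg.inv(Phi_L.T @ Phi_L) @ Phi_L.T
P = Phi_L @ Phi_L_pinv              # projects onto span(Phi_Lambda)
P_perp = np.eye(m) - P              # projects onto the orthogonal complement

r = P_perp @ y                      # projection error (residual) of y
# r is orthogonal to every column of Phi_Lambda, up to round-off
assert np.allclose(Phi_L.T @ r, 0)
# P is symmetric (Hermitian) and idempotent: P @ P == P
assert np.allclose(P, P.T) and np.allclose(P @ P, P)
print("projector checks passed")
```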

B. A Brief Review of the gOMP Algorithm

The gOMP algorithm is listed in Table I. At the $k$-th iteration, it evaluates the correlations between the available residual $r^{k-1}$ and the atoms of $\Phi$, and picks up the $N$ largest correlated atoms. The corresponding $N$ indices are appended to the estimated support set to yield $\Lambda^k$, and the residual is updated as the orthogonal projection error $r^k = P_{\Lambda^k}^\perp y$ (which implies that at any $k$-th step of iteration, $r^k$ is orthogonal to each column of $\Phi_{\Lambda^k}$). It is shown in [1] that under the sufficient condition $\delta_{NK} < \sqrt{N}/(\sqrt{K} + 3\sqrt{N})$, the algorithm converges in a maximum of $K$ steps, meaning, under the noise free condition, $r^k$ becomes zero for some $k \le K$. The $K$-sparse solution is then obtained as $x_{\Lambda^k} = \Phi_{\Lambda^k}^\dagger y$. Note that the gOMP algorithm boils down to the OMP algorithm [4] for $N = 1$.
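Since the listing of Table I did not survive the transcription, the iteration just described can be sketched as follows — a minimal, noise-free NumPy implementation under my own variable names (for $N = 1$ it reduces to OMP):

```python
import numpy as np

def gomp(Phi, y, K, N, tol=1e-10):
    """Sketch of gOMP: pick the N largest-correlation atoms per
    iteration, recompute the least-squares fit over the enlarged
    support, and stop when the residual vanishes or after K
    iterations (noise-free setting)."""
    m, n = Phi.shape
    support = []
    r = y.copy()
    for _ in range(K):
        corr = np.abs(Phi.T @ r)          # correlations with all atoms
        if support:
            corr[support] = -1.0          # never re-pick chosen atoms
        picks = np.argsort(corr)[-N:]     # indices of N largest correlations
        support.extend(picks.tolist())
        Phi_S = Phi[:, support]
        x_S, *_ = np.linalg.lstsq(Phi_S, y, rcond=None)
        r = y - Phi_S @ x_S               # orthogonal projection error of y
        if np.linalg.norm(r) < tol:
            break
    x_hat = np.zeros(n)
    x_hat[support] = x_S
    return x_hat

# demo: recover a K-sparse vector from compressed measurements
rng = np.random.default_rng(1)
m, n, K, N = 50, 100, 3, 2
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
x_hat = gomp(Phi, Phi @ x, K, N)
print(np.linalg.norm(x - x_hat))   # small when recovery succeeds
```

The residual update `r = y - Phi_S @ x_S` is exactly the projection error $P_{\Lambda^k}^\perp y$, since the least-squares fit projects $y$ orthogonally onto $\mathrm{span}(\Phi_{\Lambda^k})$.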

III. PROPOSED IMPROVED BOUNDS

In the gOMP algorithm, convergence in a maximum of $K$ steps is established by ensuring that in each iteration, at least one of the $N$ new atoms chosen belongs to $\Phi_T$, i.e., it has an index belonging to the true support set $T$. Let $\alpha$ denote the largest (in magnitude) correlation between $r^k$ and the atoms of $\Phi_{T \setminus \Lambda^k}$ at the $k$-th step of iteration, i.e., $\alpha = \max_{j \in T \setminus \Lambda^k} |\langle \phi_j, r^k \rangle|$. Also let the $N$ largest (in magnitude) correlations between $r^k$ and the atoms of $\Phi$ not belonging to $T$ be given by $\beta_1, \beta_2, \ldots, \beta_N$, arranged in descending order as $\beta_1 \ge \beta_2 \ge \cdots \ge \beta_N$. It is shown in [1] that

(2)

and

(3)

where $r^k = P_{\Lambda^k}^\perp y$. A sufficient condition to ensure convergence in a maximum of $K$ steps is then obtained by setting the RHS of (3) greater than that of (2).

In this paper, we first retain the upper bound of $\beta_N$ as given in (2), while the lower bound of $\alpha$ given in (3) is refined, which eventually results in a less restrictive upper bound on $\delta_{NK}$, as shown in Theorem 1. Subsequently, we refine both the upper bound of $\beta_N$ and the lower bound of $\alpha$, which leads to an improved upper bound on $\delta_{NK+1}$, as shown in Theorem 2. The derivation uses the following lemma, which is an extension of Lemma 3.2 of [11].
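The selection condition can be illustrated numerically. In the sketch below (my own notation, since the letter's symbols were lost in transcription: `alpha` is the largest on-support correlation and `beta` holds the sorted off-support correlations), the $N$ atoms selected by gOMP must include a correct one whenever `alpha` exceeds the $N$-th largest off-support correlation:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, K, N = 30, 80, 3, 2              # illustrative sizes
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
T = rng.choice(n, K, replace=False)    # true support set
x[T] = rng.standard_normal(K)
r = Phi @ x                            # residual at iteration 0 (r^0 = y)

corr = np.abs(Phi.T @ r)
alpha = corr[T].max()                  # best correlation over the true support
off = np.setdiff1d(np.arange(n), T)
beta = np.sort(corr[off])[::-1]        # off-support correlations, descending

top_N = np.argsort(corr)[-N:]          # atoms gOMP would select
hit = bool(set(top_N.tolist()) & set(T.tolist()))
print(f"alpha={alpha:.3f}, beta_N={beta[N - 1]:.3f}, correct atom selected: {hit}")
# alpha > beta_N guarantees at least one correct atom among the top N,
# because at most N - 1 off-support atoms can out-correlate alpha
assert (alpha <= beta[N - 1]) or hit
```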

Lemma 1: Given …, …, where … and …,

Proof: Given in Appendix A.

Additionally, we use certain properties of the RIP constant, given by Lemma 1 in [7], [8], which are reproduced below.

Lemma 2: For any two integers $S \le S'$, and for $\Lambda_1, \Lambda_2 \subset \Omega$ with $\Lambda_1 \cap \Lambda_2 = \emptyset$, and for $u$, $v$ with $\mathrm{supp}(u) \subseteq \Lambda_1$, $\mathrm{supp}(v) \subseteq \Lambda_2$:
(a) $\delta_S \le \delta_{S'}$ (monotonicity)
(b) $(1 - \delta_{|\Lambda_1|})\,\|u\|_2 \le \|\Phi_{\Lambda_1}^T \Phi_{\Lambda_1} u\|_2 \le (1 + \delta_{|\Lambda_1|})\,\|u\|_2$
(c) $|\langle \Phi u, \Phi v \rangle| \le \delta_{|\Lambda_1| + |\Lambda_2|}\,\|u\|_2 \|v\|_2$, with equality holding if either of $u$ and $v$ is zero. Also, $\|\Phi_{\Lambda_1}^T \Phi v\|_2 \le \delta_{|\Lambda_1| + |\Lambda_2|}\,\|v\|_2$, with equality holding if $v$ is zero.
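The monotonicity of the RIP constant in its order can be observed numerically. Computing $\delta_S$ exactly is combinatorial, but random column submatrices give a Monte-Carlo lower estimate that, in practice, grows with the order (an illustrative sketch only; the sizes and trial counts below are arbitrary):

```python
import numpy as np

def delta_estimate(Phi, order, trials, rng):
    """Monte-Carlo lower bound on the RIP constant of the given order:
    worst deviation of the squared singular values of random column
    submatrices Phi_S from 1."""
    n = Phi.shape[1]
    worst = 0.0
    for _ in range(trials):
        S = rng.choice(n, order, replace=False)
        s = np.linalg.svd(Phi[:, S], compute_uv=False)
        worst = max(worst, s[0] ** 2 - 1, 1 - s[-1] ** 2)
    return worst

rng = np.random.default_rng(3)
m, n = 60, 120
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # columns roughly unit norm
for order in (2, 4, 8):
    # estimates typically grow with the order, as monotonicity predicts
    print(f"order {order}: delta lower bound ~ {delta_estimate(Phi, order, 500, rng):.3f}")
```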

Theorem 1: The gOMP algorithm can recover $x$ exactly when $\Phi$ satisfies the RIP of order $NK$ with $\delta_{NK} < \sqrt{N}/(\sqrt{K} + 2\sqrt{N})$.

Proof: First note that …, and that the residual $r^k$ is orthogonal to each column of $\Phi_{\Lambda^k}$, which also means that …. It is then possible to write,



(4)

where we have used the fact that …. Next we define a vector …, where … if … and … otherwise. It is easy to see that … and thus,

(5)

[It is also easy to observe that …, since … for ….] We are now in a position to apply Lemma 1 on …, after taking …, and noting that …, …, resulting in

(6)

where … follows from the fact that … and …. Moreover,

(7)

Combining (6) and (7) we get

From (4), it then follows that

(8)

Setting the RHS of (8) greater than the RHS of (2), the result follows trivially.²

Theorem 2: The gOMP algorithm can recover $x$ exactly when $\Phi$ satisfies the RIP of order $NK + 1$ with $\delta_{NK+1} < \sqrt{N}/(\sqrt{K} + \sqrt{N})$.

Proof: First, as seen earlier, …. We can then write,

²Note that both [1] (Lemma 3.7) and the above procedure first derive a lower bound for $\alpha$ (… in [1]) by applying a series of inequalities, with each inequality making the bound more restrictive than before. However, unlike [1], which relies on replacing … by … first and then on expanding … in its pseudo-inverse based form, the above procedure simply exploits the Hermitian as well as idempotent nature of the projection operator and also Lemma 1, which leads to the deployment of a much smaller number of inequalities than used in [1]. As a result, the lower bound of $\alpha$ derived above (i.e., (8)) turns out to be larger than that given in [1], which in turn leads to a less restrictive bound on $\delta_{NK}$.

Fig. 1. Venn diagram for correct and estimated support set.

(9)

where we use the fact that … belongs to the … and thus can be written as … for some …. The vector … is then given as

(10)

Moreover, we see that …, which implies that …. Hence we can conclude that … belongs to such a subspace spanned by … which lies in the null space of ….

Let $W$ be the set of incorrect indices corresponding to …'s for … (clearly, … and …). Then,

(11)

where … follows from the fact that … and ….

Similarly,

(12)

Setting the RHS of (12) greater than that of (11), the result isobtained trivially.



IV. CONCLUSIONS

In this letter, we have presented two new upper bounds for the RIP constant for use in the recently proposed gOMP algorithm [1], which guarantee exact reconstruction of a $K$-sparse signal in less than or equal to $K$ iterations. The proposed bounds are obtained by suitable exploitation of the properties of the RIP constant and are considerably less restrictive than their counterparts existing in the literature. Due to space limitations, the present treatment has, however, been confined to the noise free case only. It is nevertheless straightforward to extend this to the case of noisy observations by following arguments similar to [1].

APPENDIX A

PROOF OF LEMMA 1

First, the relation is trivially satisfied with equality for …. We now assume that …. It is easy to see that for …, … can never be zero, since $\Phi$ satisfies the RIP of order … and thus of order …. It is also easy to observe that …. Also, …, meaning … for some … with …. First consider the case where the columns of … and those of … are not orthogonal to each other, meaning …. We can then write,

(A.1)

This means,

… (if …, we directly have …). Again, ….

REFERENCES

[1] J. Wang, S. Kwon, and B. Shim, "Generalized orthogonal matching pursuit," IEEE Trans. Signal Process., vol. 60, no. 12, pp. 6202–6216, Dec. 2012.

[2] D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.

[3] R. Baraniuk, "Compressive sensing," IEEE Signal Process. Mag., vol. 25, pp. 21–30, Mar. 2007.

[4] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655–4666, Dec. 2007.

[5] J. A. Tropp and S. J. Wright, "Computational methods for sparse solution of linear inverse problems," Proc. IEEE, vol. 98, no. 6, pp. 948–958, Jun. 2010.

[6] S. Chen, S. A. Billings, and W. Luo, "Orthogonal least squares methods and their application to non-linear system identification," Int. J. Contr., vol. 50, no. 5, pp. 1873–1896, 1989.

[7] D. Needell and J. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Appl. Comput. Harmon. Anal., vol. 26, pp. 301–321, 2009.

[8] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Trans. Inf. Theory, vol. 55, no. 5, pp. 2230–2249, 2009.

[9] M. Elad, Sparse and Redundant Representations. Berlin, Germany: Springer, 2010.

[10] J. Wang and B. Shim, "On the recovery limit of sparse signals using orthogonal matching pursuit," IEEE Trans. Signal Process., vol. 60, no. 9, pp. 4973–4976, Sep. 2012.

[11] M. A. Davenport and M. B. Wakin, "Analysis of orthogonal matching pursuit using the restricted isometry property," IEEE Trans. Inf. Theory, vol. 56, no. 9, pp. 4395–4401, Sep. 2010.