
    COMPRESSIVE SENSING BY RANDOM CONVOLUTION

    JUSTIN ROMBERG

Abstract. This paper outlines a new framework for compressive sensing: convolution with a random waveform followed by random time-domain subsampling. We show that sensing by random convolution is a universally efficient data acquisition strategy in that an n-dimensional signal which is S-sparse in any fixed representation can be recovered from m ≳ S log n measurements. We discuss two imaging scenarios, radar and Fourier optics, where convolution with a random pulse allows us to seemingly super-resolve fine-scale features, allowing us to recover high-resolution signals from low-resolution measurements.

1. Introduction. The new field of compressive sensing (CS) has given us a fresh look at data acquisition, one of the fundamental tasks in signal processing. The message of this theory can be summarized succinctly [7, 8, 10, 15, 32]: the number of measurements we need to reconstruct a signal depends on its sparsity rather than its bandwidth. These measurements, however, are different than the samples that traditional analog-to-digital converters take. Instead of simple point evaluations, we

model the acquisition of a signal x0 as a series of inner products against different waveforms φ_1, ..., φ_m:

y_k = ⟨φ_k, x0⟩,   k = 1, ..., m.   (1.1)

Recovering x0 from the y_k is then a linear inverse problem. If we assume that x0 and the φ_k are members of some n-dimensional space (the space of n-pixel images, for example), (1.1) becomes a system of m × n linear equations, y = Φx0. In general, we will need more measurements than unknowns, m ≥ n, to recover x0 from y. But if the signal of interest x0 is sparse in a known basis, and the φ_k are chosen appropriately, then results from CS have shown us that recovering x0 is possible even when there are far fewer measurements than unknowns, m ≪ n.

The notion of sparsity is critical to compressive sensing; it is the essential measure of signal complexity, and plays roughly the same role in CS that bandwidth plays in the classical Shannon-Nyquist theory. When we say a signal is sparse, it is always in the context of an orthogonal signal representation. With Ψ as an n × n orthogonal representation matrix (the columns of Ψ are the basis vectors), we say x0 is S-sparse in Ψ if we can decompose x0 as x0 = Ψα_0, where α_0 has at most S non-zero components. In most applications, our signals of interest are not perfectly sparse; a more appropriate model is that they can be accurately approximated using a small number of terms. That is, there is a transform vector α_{0,S} with only S terms such that ||α_{0,S} − α_0||_2 is small. This will be true if the transform coefficients decay like a power law; such signals are called compressible.

Sparsity is what makes it possible to recover a signal from undersampled data. Several methods for recovering sparse x0 from a limited number of measurements have been proposed [12, 16, 29]. Here we will focus on recovery via ℓ1 minimization. Given the measurements y = Φx0, we solve the convex optimization program

min_α ||α||_1   subject to   ΦΨα = y.   (1.2)

School of Electrical and Computer Engineering, Georgia Tech, Atlanta, Georgia. Email: [email protected]. This work was supported by DARPA's Analog-to-Information program. Submitted to the SIAM Journal on Imaging Science on July 9, 2008.


In words, the above searches for the set of transform coefficients α such that the measurements of the corresponding signal Ψα agree with y. The ℓ1 norm is being used to gauge the sparsity of candidate signals.
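As a concrete illustration of (1.2), the following is a minimal Python sketch (ours, not code from the paper) that solves the basis pursuit program min ||α||_1 subject to Aα = y by recasting it as a linear program; here A merely stands in for the product ΦΨ, and the Gaussian matrix and sparse test vector are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, y):
    """Solve min ||alpha||_1 s.t. A alpha = y by splitting alpha = a_plus - a_minus."""
    m, n = A.shape
    c = np.ones(2 * n)                          # objective: sum(a_plus) + sum(a_minus)
    A_eq = np.hstack([A, -A])                   # A (a_plus - a_minus) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(0)
n, m, S = 128, 40, 3
A = rng.standard_normal((m, n))                 # stands in for Phi * Psi
alpha0 = np.zeros(n)
alpha0[rng.choice(n, S, replace=False)] = rng.standard_normal(S)
alpha_hat = l1_recover(A, A @ alpha0)
print(np.max(np.abs(alpha_hat - alpha0)))       # near machine precision: exact recovery with m << n
```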

The number of measurements (rows in Φ) we need to take for (1.2) to succeed depends on the nature of the waveforms φ_k. To date, results in CS fall into one of two types of measurement scenarios. Randomness plays a major role in each of them.

1. Random waveforms. In this scenario, the shape of the measurement waveforms is random. The ensemble Φ is generated by drawing each element independently from a subgaussian [6, 10, 15, 24] distribution. The canonical examples are generating each entry φ_{i,j} of the measurement ensemble as independent Gaussian random variables with zero mean and unit variance, or as independent Bernoulli ±1 random variables. The essential result is that if x0 is S-sparse in any Ψ and we make1

m ≳ S log n   (1.3)

random waveform measurements, solving (1.2) will recover x0 exactly.

2. Random sampling from an incoherent orthobasis. In this scenario, we select the measurement waveforms φ_k from the rows of an orthogonal matrix Φ' [7]. By convention, we normalize the rows to have energy ||φ_k||_2^2 = n, making them the same size (at least on average) as in the random waveform case above. The m rows of Φ are selected uniformly at random from among all subsets of rows of size m. The principal example of this type of sensing is observing randomly selected entries of a spectrally sparse signal [8]. In this case, the φ_k are elements of the standard basis (and Φ'^*Φ' = nI), and the representation Ψ is a discrete Fourier matrix. When we randomly sample from a fixed orthosystem Φ', the number of samples required to reconstruct x0 depends on the relationship between Φ' and Ψ. One way to quantify this relationship is by using the mutual coherence:

μ(Φ', Ψ) = max_{k ∈ Rows(Φ'), j ∈ Columns(Ψ)} |⟨φ_k, ψ_j⟩|.   (1.4)

When μ = 1, Φ' and Ψ are as different as possible; each element of Ψ is flat in the Φ' domain. If x0 is S-sparse in the Ψ domain, and we take

m ≳ μ^2 S log n   (1.5)

measurements, then (1.2) will succeed in recovering x0 exactly. Notice that to have the same result (1.3) as in the random waveform case, μ will have to be on the order of 1; we call such Φ' and Ψ incoherent. (A small numerical illustration of (1.4) follows this list.)
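The sketch below (ours; the random orthobasis against the identity is an example of our own choosing) computes the coherence in (1.4) numerically. With rows normalized to energy n, μ always lies between 1 and √n.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Phi': a random orthogonal system, rows rescaled so ||phi_k||_2^2 = n as in the text.
q, _ = np.linalg.qr(rng.standard_normal((n, n)))
phi = np.sqrt(n) * q

psi = np.eye(n)                       # Psi = identity (signals sparse in time)
mu = np.abs(phi @ psi).max()          # mu(Phi', Psi) = max_{k,j} |<phi_k, psi_j>|
print(mu)   # O(sqrt(log n)) here, i.e. nearly incoherent; mu = sqrt(n) would be the worst case
```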

There are three criteria which we can use to compare strategies for compressive sampling.

Universality. A measurement strategy is universal if it is agnostic towards the choice of signal representation. This is the case when sensing with random waveforms. When Φ is a Gaussian ensemble this is clear, as it will remain Gaussian under any orthogonal transform Ψ. It is less clear but still true [6]

1Here and below we will use the notation X ≳ Y to mean that there exists a known constant C such that X ≥ CY. For random waveforms, we actually have the more refined bound of m ≳ S log(n/S). But since we are usually interested in the case S ≪ n, (1.3) is essentially the same.


that the strategy remains universal when the entries of Φ are independent and subgaussian. Random sampling from a fixed system is definitely not universal, as μ(Φ', Ψ) cannot be on the order of 1 for every Ψ. We cannot, for example, sample in a basis which is very similar to Ψ (μ ≈ √n) and expect to see much of the signal.

Numerical structure. Algorithms for recovering the signal will invariably involve repeated applications of ΦΨ and its adjoint. Having an efficient method for computing these applications is thus of primary importance. In general, applying an m × n matrix to an n-vector requires O(mn) operations; far too many if n and m are only moderately large (in the millions, say). It is often the case that a Ψ which sparsifies the signals of interest comes equipped with a fast transform (the wavelet transform for piecewise smooth signals, for example, or the fast Fourier transform for signals which are spectrally sparse). We will ask the same of our measurement system Φ. The complete lack of structure in measurement ensembles consisting of random waveforms makes a fast algorithm for applying Φ out of the question.

There are, however, a few examples of orthobases which are incoherent with sparsity bases of interest and can be applied efficiently. Fourier systems are perfectly incoherent with the identity basis (for signals which are sparse in time), and noiselets [13] are incoherent with wavelet representations and enjoy a fast transform with the same complexity as the fast Fourier transform.

Physically realizable. In the end, we have to be able to build sensors which take the linear measurements in (1.1). There are architectures for CS where we have complete control over the types of measurements we make; a well-known example of this is the single pixel camera of [17]. However, it is often the case that we are exploiting a physical principle to make indirect observations of an object of interest. There may be opportunities for injecting randomness into the acquisition process, but the end result will not be taking inner products against independent waveforms.

In this paper, we introduce a framework for compressive sensing which meets all three of these criteria. The measurement process consists of convolving the signal of interest with a random pulse and then randomly subsampling. This procedure is random enough to be universally incoherent with any fixed representation system, but structured enough to allow fast computations (via the FFT). Convolution with a pulse of our choosing is also a physically relevant sensing architecture. In Section 2 we discuss two applications in particular: radar imaging and coherent imaging using Fourier optics.

1.1. Random Convolution. Our measurement process has two steps. We circularly convolve the signal x0 ∈ R^n with a pulse h ∈ R^n, then subsample. The pulse is random, global, and broadband in that its energy is distributed uniformly across the discrete spectrum.

In terms of linear algebra, we can write the circular convolution of x0 and h as Hx0, where

H = n^{-1/2} F^* Σ F,

with F as the discrete Fourier matrix

F_{t,ω} = e^{-j2π(t-1)(ω-1)/n},   1 ≤ t, ω ≤ n,


and Σ as a diagonal matrix whose non-zero entries are the Fourier transform of h. We generate h at random by taking

Σ = diag(σ_1, σ_2, ..., σ_n),

a diagonal matrix whose entries are unit-magnitude complex numbers with random phases. We generate the σ_ω as follows2:

ω = 1:                 σ_1 = ±1 with equal probability,
2 ≤ ω < n/2 + 1:       σ_ω = e^{jθ_ω}, where θ_ω ~ Uniform([0, 2π]),
ω = n/2 + 1:           σ_{n/2+1} = ±1 with equal probability,
n/2 + 2 ≤ ω ≤ n:       σ_ω = σ*_{n-ω+2}, the conjugate of σ_{n-ω+2}.

The action of H on a signal x can be broken down into a discrete Fourier transform, followed by a randomization of the phase (with constraints that keep the entries of H real), followed by an inverse discrete Fourier transform.

The construction ensures that H will be orthogonal,

H^*H = n^{-1} F^* Σ^* F F^* Σ F = nI,

since F^*F = FF^* = nI and Σ^*Σ = I. Thus we can interpret convolution with h as a transformation into a random orthobasis.

1.2. Subsampling. Once we have convolved x0 with the random pulse, we compress the measurements by subsampling. We consider two different methods for doing this. In the first, we simply observe entries of Hx0 at a small number of randomly chosen locations. In the second, we break Hx0 into blocks, and summarize each block with a single randomly modulated sum.

1.2.1. Sampling at random locations. In this scheme, we simply keep some of the entries of Hx0 and throw the rest away. If we think of Hx0 as a set of Nyquist samples for a bandlimited function, this scheme can be realized with an analog-to-digital converter (ADC) that takes non-uniformly spaced samples at an average rate which is appreciably slower than Nyquist. Convolving with the pulse h combines a little bit of information about all the samples in x0 into each sample of Hx0, information which we can untangle using (1.2).

There are two mathematical models for sampling at random locations, and they are more or less equivalent. The first is to set a size m, and select a subset of locations Ω ⊂ {1, ..., n} uniformly at random from all (n choose m) subsets of size m. The second is to generate an iid sequence of Bernoulli random variables ι_1, ..., ι_n, each of which takes a value of 1 with probability m/n, and sample at locations t where ι_t = 1. In this case, the size of the sample set Ω will not be exactly m. Nevertheless, it can be shown (see [8] for details) that if we can establish successful recovery with probability 1 − δ for the Bernoulli model, we will have recovery with probability 1 − 2δ in the uniform model. Since the Bernoulli model is easier to analyze, we will use it throughout the rest of the paper.

In either case, the measurement matrix can be written as Φ = R_Ω H, where R_Ω is the restriction operator to the set Ω.
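A small self-contained sketch (ours; the sparse test signal is invented) of the measurement process Φ = R_Ω H under the Bernoulli sampling model: each entry of Hx0 is kept independently with probability m/n.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 512, 64

# Random spectrum with the conjugate symmetry of Section 1.1 (n even).
theta = rng.uniform(0.0, 2 * np.pi, n // 2 - 1)
sigma = np.concatenate(([rng.choice([-1.0, 1.0])], np.exp(1j * theta),
                        [rng.choice([-1.0, 1.0])], np.exp(-1j * theta[::-1])))

x0 = np.zeros(n)
x0[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)   # a 5-sparse test signal

Hx0 = np.sqrt(n) * np.fft.ifft(sigma * np.fft.fft(x0)).real    # H x0 via two FFTs

keep = rng.random(n) < m / n            # Bernoulli model: iota_t = 1 with probability m/n
omega = np.flatnonzero(keep)            # the sample set Omega, |Omega| ~ m
y = Hx0[omega]                          # y = R_Omega H x0
```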

    2For simplicity, we assume throughout that n is even; very little changes for the case of odd n.


1.3. Main Results. Both random subsampling [8] and RPMS [33] have been shown to be effective for compressive sampling of spectrally sparse signals. In this paper, we show that preceding either by a random convolution results in a universal compressive sampling strategy.

Our first theoretical result shows that if we generate a random pulse as in Section 1.1 and a random sampling pattern as in Section 1.2.1, then with high probability we will be able to sense the vast majority of signals supported on a fixed set in the Ψ domain.

Theorem 1.1. Let Ψ be an arbitrary orthonormal signal representation. Fix a support set Γ of size |Γ| = S in the Ψ domain, and choose a sign sequence z on Γ uniformly at random. Let α_0 be a set of Ψ domain coefficients supported on Γ with signs z, and take x0 = Ψα_0 as the signal to be acquired. Create a random convolution matrix H as described in Section 1.1, and choose a set of sample locations Ω of size |Ω| = m uniformly at random with

m ≥ C_0 S log(n/δ)   (1.7)

and also m ≥ C_1 log^3(n/δ), where C_0 and C_1 are known constants. Set Φ = R_Ω H. Then given the set of samples on Ω of the convolution Hx0, y = Φx0, the program (1.2) will recover α_0 (and hence x0) exactly with probability exceeding 1 − δ.

Roughly speaking, Theorem 1.1 works because if we generate a convolution matrix H at random, with high probability it will be incoherent with any fixed orthonormal matrix Ψ. Actually, using the coherence defined in (1.4) directly would give a slightly weaker bound (log^2(n/δ) instead of log(n/δ) in (1.7)); our results will rely on a more refined notion of coherence which is outlined in Section 3.1.

Our second result shows that we can achieve similar acquisition efficiency using the RPMS.

Theorem 1.2. Let Ψ, Γ, α_0, x0, and H be as in Theorem 1.1. Create a random pre-modulated summation matrix P as described in Section 1.2.2 that outputs a number of samples m with

m ≥ C_0 S log^2(n/δ)   (1.8)

and also m ≥ C_1 log^4(n/δ), where C_0 and C_1 are known constants. Set Φ = PH. Then given the measurements y = Φx0, the program (1.2) will recover α_0 (and hence x0) exactly with probability exceeding 1 − δ. The form of the bound (1.8) is the same as when using RPMS alone (without the random convolution in front) to sense spectrally sparse signals.

At first, it may seem counterintuitive that convolving with a random pulse and subsampling would work equally well with any sparsity basis. After all, an application of H = n^{-1/2}F^*ΣF will not change the magnitude of the Fourier transform, so signals which are concentrated in frequency will remain concentrated and signals which are spread out will stay spread out. For compressive sampling to work, we need Hx0 to be spread out in the time domain. We already know that signals which are concentrated on a small set in the Fourier domain will be spread out in time [8]. The randomness of Σ will make it highly probable that a signal which is concentrated in time will not remain so after H is applied. Time localization requires very delicate relationships between the phases of the Fourier coefficients; when we blast the phases by applying Σ, these relationships will no longer exist. A simple example of this phenomenon is shown in Figure 1.1.
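The spreading effect behind Figure 1.1 is easy to reproduce numerically; the sketch below (ours) applies H to a single spike and compares its peak value to its total energy. For brevity the two ±1 entries of Σ are simply set to +1.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1024
theta = rng.uniform(0.0, 2 * np.pi, n // 2 - 1)
sigma = np.concatenate(([1.0], np.exp(1j * theta), [1.0], np.exp(-1j * theta[::-1])))

x0 = np.zeros(n)
x0[0] = 1.0                                            # perfectly localized in time
Hx0 = np.sqrt(n) * np.fft.ifft(sigma * np.fft.fft(x0)).real

# ||Hx0||_2 = sqrt(n)*||x0||_2, but the peak is only O(sqrt(log n)):
# the spike's energy has been spread across the whole time axis.
print(np.linalg.norm(Hx0), np.abs(Hx0).max())
```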


Fig. 1.1. (a) A signal x0 consisting of a single Daubechies-8 wavelet. Taking samples at random locations of this wavelet will not be an effective acquisition strategy, as very few will be located in its support. (b) Magnitude of the Fourier transform Fx0. (c) Inverse Fourier transform after the phase has been randomized. Although the magnitude of the Fourier transform is the same as in (b), the signal is now evenly spread out in time.

1.4. Relationship to previous research. In [1], Ailon and Chazelle propose the idea of a randomized Fourier transform followed by a random projection as a fast Johnson-Lindenstrauss transform (FJLT). The transform is decomposed as QF, where Q is a sparse matrix whose non-zero entries have randomly chosen locations and values. They show that this matrix QF behaves like a random waveform matrix in that with extremely high probability, it will not change the norm of an arbitrary vector too much. However, this construction requires that the number of non-zero entries in each row of Q is commensurate with the number of rows m of Q. Although ideal for dimensionality reduction of small point sets, this type of subsampling does not translate well to compressive sampling, as it would require us to randomly combine on the order of m samples of Hx0 from arbitrary locations to form a single measurement; taking m measurements would require on the order of m^2 samples. We show here that more structured projections, consisting either of one randomly chosen sample per row or a random combination of consecutive samples, are adequate for CS. This is in spite of the fact that our construction results in much weaker concentration than the FJLT.

The idea that the sampling rate for a sparse ("spikey") signal can be significantly reduced by first convolving with a kernel that spreads it out is one of the central ideas of sampling signals with finite rates of innovation [23, 35]. Here, we show that a random kernel works for any kind of sparsity, and we use an entirely different reconstruction framework.

In [34], numerical results are presented that demonstrate recovery of sparse signals (using orthogonal matching pursuit instead of ℓ1 minimization) from a small number of samples of the output of a finite-length random filter. In this paper, we approach things from a more theoretical perspective, deriving bounds on the number of samples needed to guarantee sparse reconstruction.

In [4], random convolution is explored in a slightly different context than in this paper. Here, the sensing matrix consists of randomly selected rows (or modulated sums) of a Toeplitz matrix; in [4], the sensing matrix is itself Toeplitz, corresponding to convolution followed by a small number of consecutive samples. This difference in structure will allow us to derive stronger bounds: (1.7) guarantees recovery from S log n measurements, while the bound in [4] requires S^2 log n.

2. Applications. The fact that random convolution is universal and allows fast computations makes it extremely attractive as a theoretical sensing strategy. In this


section, we briefly discuss two imaging scenarios (in a somewhat rarefied setting) in which convolution with a random pulse can be implemented naturally.

We begin by noting that while Theorems 1.1 and 1.2 above deal explicitly with circular convolution, what is typically implemented is linear convolution. One simple way to translate our results to linear convolution would be to repeat the pulse h; then the samples in the midsection of the linear convolution would be the same as samples from the circular convolution.
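A quick numerical check (ours) of the remark above: if the pulse is repeated, the middle section of the linear convolution coincides with the circular convolution.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 128
x, h = rng.standard_normal(n), rng.standard_normal(n)

circ = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))   # circular convolution of x and h
lin = np.convolve(np.concatenate([h, h]), x)                  # linear convolution with h repeated
print(np.allclose(lin[n:2 * n], circ))                        # True: samples n..2n-1 match
```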

2.1. Radar imaging. Reconstruction from samples of a signal convolved with a known pulse is fundamental in radar imaging. Figure 2.1 illustrates how a scene is measured in spotlight synthetic aperture radar (SAR) imaging (see [25] for a more in-depth discussion). The radar, located at point p1, focusses its antenna on the region of interest with reflectivity function I(x1, x2) whose center is at orientation θ relative to p1, and sends out a pulse h(t). If the radar is far enough away from the region of interest, this pulse will arrive at every point along one of the parallel lines at a certain range r at approximately the same time. The net reflectivity from this range is then the integral R(r) of I(x1, x2) over the line l_{r,θ},

R(r) = ∫_{l_{r,θ}} I(x1, x2) dx1 dx2,

and the return signal y(t) is thus the pulse convolved with R,

y(t) = (h ∗ R)(t),

where it is understood that we can convert R from a function of range to a function of time by dividing by the speed at which the pulse travels.

The question, then, is how many samples of y(t) are needed to reconstruct the range profile R. A classical reconstruction will require a number of samples proportional to the bandwidth of the pulse h; in fact, the sampling rate of the analog-to-digital converter is one of the limiting factors in the performance of modern-day radar systems [26]. The results outlined in Section 1.3 suggest that if we have an appropriate representation for the range profiles and we use a broadband random pulse, then the number of samples needed to reconstruct an R(r) using (1.2) scales linearly with the complexity of these range profiles, and only logarithmically with the bandwidth of h. We can gain the benefits of a high-bandwidth pulse without paying the cost of an ADC operating at a comparably fast rate.

A preliminary investigation of using ideas from compressive sensing in radar imaging can be found in [5]. There has also been some recent work on implementing low-cost radars which use random waveforms [2, 3] and traditional image reconstruction techniques. Also, in [19], it is shown that compressive sensing can be used to super-resolve point targets when the radar sends out an incoherent pulse.

2.2. Fourier optics. Convolution with a pulse of our choosing can also be implemented optically. Figure 2.2 sketches a possible compressive sensing imaging architecture. The object is illuminated by a coherent light source; one might think of the object of interest as a pattern on a transparency, and the image we want to acquire as the light field exiting this transparency. The first lens takes an optical Fourier transform of the image, and the phase of the Fourier transform is then manipulated using a spatial light modulator. The next lens inverts the Fourier transform, and then a second spatial light modulator and a low-resolution detector array with big pixels effect the RPMS subsampling scheme. In this particular situation, we will assume



Fig. 2.1. Geometry for the spotlight SAR imaging problem. The return signal from a pulse h(t) emitted from point p1 will be the range profile R(r) (a collection of line integrals at an angle θ) convolved with h.

that our detectors can observe both the magnitude and the phase of the final light field (modern heterodyne detectors have this capability).

Without the spatial light modulators, the resolution of this system scales with the size of the detector array. The big-pixel detectors simply average the light field over a relatively large area, and the result is a coarsely pixellated image. Adding the spatial light modulators allows us to effectively super-resolve this coarse image. With the SLMs in place, the detector is taking incoherent measurements in the spirit of Theorem 1.2 above. The resolution (for sparse images and using the reconstruction (1.2)) is now determined by the SLMs: the finer the grid over which we can effectively manipulate the phases, the more effective pixels we will be able to reconstruct.

Figure 2.3 illustrates the potential of this architecture with a simple numerical experiment3. A 256 × 256 synthetic image was created by placing 40 ellipses with randomly chosen orientations, positions, and intensities and adding a modest amount of noise. Measuring this image with a 64 × 64 low-resolution detector array produces the image in Fig. 2.3(b), where we have simply averaged the image in part (a) over 4 × 4 blocks. Figure 2.3(c) is the result when the low-resolution measurements are augmented with measurements from the architecture in Figure 2.2. With x0 as the underlying image, we measure y = Φx0, where

Φ = [ P ; PH ]

(P stacked on top of PH).
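For intuition, here is a rough numpy sketch (ours; the names and parameters are invented, and the conjugate-symmetry bookkeeping needed for a real point spread function is skipped, so it is only illustrative) of forming such a stacked measurement vector: a 4 × 4 block average plays the role of the big-pixel detector, applied to the image itself and to a randomly sign-modulated version of its random convolution.

```python
import numpy as np

def block_average(img, b=4):
    """Average over b x b blocks: the 'big pixel' detector."""
    n = img.shape[0]
    return img.reshape(n // b, b, n // b, b).mean(axis=(1, 3))

rng = np.random.default_rng(5)
n = 256
x0 = rng.standard_normal((n, n))                         # stand-in for the image

phase = np.exp(2j * np.pi * rng.random((n, n)))           # SLM in the Fourier plane
Hx0 = np.real(np.fft.ifft2(phase * np.fft.fft2(x0)))      # random convolution of the image

signs = rng.choice([-1.0, 1.0], size=(n, n))              # SLM in the image plane (modulation)
y = np.concatenate([block_average(x0).ravel(),            # low-resolution image
                    block_average(signs * Hx0).ravel()])  # incoherent measurements
```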

From these measurements, the image is recovered using total variation minimization, a variant of ℓ1 minimization that tends to give better results on the types of images being considered here. Given y, we solve

min_x TV(x)   subject to   ||Φx − y||_2 ≤ ε,

where ε is a relaxation parameter set at a level commensurate with the noise. The result is shown in Figure 2.3(c). As we can see, the incoherent measurements have

3Matlab code that reproduces this experiment can be downloaded at users.ece.gatech.edu/~justin/randomconv/.


[Figure 2.2 block diagram: image x0 → lens (F) → SLM → lens → SLM → detector array (P); the cascade implements the random convolution H, and the recorded data are y = PHx0.]

Fig. 2.2. Fourier optics imaging architecture implementing random convolution followed by RPMS. The computation y = PHx0 is done entirely in analog; the lenses move the image to the Fourier domain and back, and spatial light modulators (SLMs) in the Fourier and image planes randomly change the phase.


Fig. 2.3. Fourier optics imaging experiment. (a) The high-resolution image we wish to acquire. (b) The high-resolution image pixellated by averaging over 4 × 4 blocks. (c) The image restored from the pixellated version in (b), plus a set of incoherent measurements taken using the architecture from Figure 2.2. The incoherent measurements allow us to effectively super-resolve the image in (b).

allowed us to super-resolve the image; the boundaries of the ellipses are far clearer than in part (b).

The architecture is suitable for imaging with incoherent illumination as well, but there is a twist. Instead of convolution with the random pulse h (the inverse Fourier transform of the mask Σ), the lens-SLM-lens system convolves with |h|^2. While |h|^2 is still random, and so the spirit of the device remains the same, convolution with |h|^2 is no longer a unitary operation, and thus falls slightly outside of the mathematical framework we develop in this paper.

    3. Theory.

3.1. Coherence bounds. First, we will establish a simple bound on the coherence parameter between a random convolution system and a given representation system.

Lemma 3.1. Let Ψ be an arbitrary fixed orthogonal matrix, and create H at random as above with H = n^{-1/2}F^*ΣF. Choose 0 < δ < 1. Then with probability exceeding 1 − δ, the coherence μ(H, Ψ) will obey

μ(H, Ψ) ≤ 2 √(log(2n^2/δ)).   (3.1)


Proof. The proof is a simple application of Hoeffding's inequality (see Appendix A). If h_t is the tth row of H (measurement vector) and ψ_s is the sth column of Ψ (representation vector), then

⟨h_t, ψ_s⟩ = Σ_{ω=1}^{n} σ_ω e^{j2π(t-1)(ω-1)/n} ψ̂_s(ω),

where ψ̂_s is the normalized discrete Fourier transform n^{-1/2}Fψ_s. Since h_t and ψ_s are real-valued and the σ_ω are conjugate symmetric, we can rewrite this sum as

⟨h_t, ψ_s⟩ = σ_1 ψ̂_s(1) + (−1)^{t-1} σ_{n/2+1} ψ̂_s(n/2+1) + 2 Σ_{ω=2}^{n/2} Re[ σ_ω e^{j2π(t-1)(ω-1)/n} ψ̂_s(ω) ],   (3.2)

where we have assumed without loss of generality that n is even; it will be obvious how to extend to the case where n is odd. Now each of the σ_ω in the sum above are independent. Because the σ_ω are uniformly distributed on the unit circle, Re[σ_ω e^{j2π(t-1)(ω-1)/n} ψ̂_s(ω)] has a distribution identical to ε_ω |ψ̂_s(ω)| cos(θ_ω(t)), where θ_ω(t) is the phase of σ_ω e^{j2π(t-1)(ω-1)/n} ψ̂_s(ω), and {ε_ω} is an independent random sign sequence. Thus,

Σ_{ω=1}^{n/2+1} ε_ω a_ω,   with   a_ω = { ψ̂_s(1), ω = 1;  2|ψ̂_s(ω)| cos(θ_ω(t)), 2 ≤ ω ≤ n/2;  ψ̂_s(n/2+1), ω = n/2+1 },

has a distribution identical to ⟨h_t, ψ_s⟩. Since

Σ_{ω=1}^{n/2+1} a_ω^2 ≤ 2 ||ψ̂_s||_2^2 = 2,

applying the bound (A.1) gives us

P( | Σ_{ω=1}^{n/2+1} ε_ω a_ω | > λ ) ≤ 2 e^{−λ^2/4}.

Taking λ = 2√(log(2n^2/δ)) and applying the union bound over all n^2 choices of (t, s) establishes the lemma.

Applying Lemma 3.1 directly to the recovery result (1.5) guarantees recovery from

m ≳ S log^2 n

randomly chosen samples, a log factor off of (1.7). We are able to get rid of this extra log factor by using a more subtle property of the random measurement ensemble H.

Fix a subset Γ of the Ψ domain of size |Γ| = S, and let Ψ_Γ be the n × S matrix consisting of the columns of Ψ indexed by Γ. In place of the coherence, our quantity of interest will be

ν := ν(Γ) = max_{k=1,...,n} ||r_k||_2   (3.3)


with the r_k as rows in the matrix HΨ_Γ; we will call ν(Γ) the cumulative coherence (this quantity was also used in [32]). We will show below that we can bound the number of measurements needed to recover (with high probability) a signal on Γ by

m ≳ ν^2 log n.

Since we always have the bound ν ≤ μ√S, the result (1.5) is still in effect. However, Lemma 3.2 below will show that in the case where U = HΨ with H as a random convolution matrix, we can do better, achieving

ν ≲ √S.
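A small numerical check (ours; the orthobasis, support, and sizes are invented) of the quantity in (3.3) for a random convolution matrix: it builds H explicitly (small n only), picks an arbitrary orthobasis Ψ and a support Γ of size S, and compares ν(Γ) = max_k ||r_k||_2 with the √(8S) level appearing in Lemma 3.2 below.

```python
import numpy as np

rng = np.random.default_rng(6)
n, S = 512, 16

theta = rng.uniform(0.0, 2 * np.pi, n // 2 - 1)
sigma = np.concatenate(([1.0], np.exp(1j * theta), [1.0], np.exp(-1j * theta[::-1])))

F = np.fft.fft(np.eye(n))                                            # the DFT matrix F
H = np.sqrt(n) * np.real(np.fft.ifft(sigma[:, None] * F, axis=0))    # H = n^{-1/2} F* Sigma F

psi = np.linalg.qr(rng.standard_normal((n, n)))[0]    # an arbitrary orthobasis Psi
gamma = rng.choice(n, S, replace=False)               # support Gamma, |Gamma| = S
rows = H @ psi[:, gamma]                              # H Psi_Gamma (its rows are the r_k)
nu = np.linalg.norm(rows, axis=1).max()               # cumulative coherence nu(Gamma)
print(nu, np.sqrt(8 * S))                             # nu typically sits well below sqrt(8S)
```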

Lemma 3.2. Fix an orthobasis Ψ and a subset Γ = {γ_1, ..., γ_S} of the Ψ domain of size |Γ| = S. Generate a random convolution matrix H at random as described in Section 1.1 above, and let r_k, k = 1, ..., n be the rows of HΨ_Γ. Then with probability exceeding 1 − δ,

ν(Γ) = max_{k=1,...,n} ||r_k||_2 ≤ √(8S),   (3.4)

for S ≥ 16 log(2n/δ).

Proof of Lemma 3.2. We can write r_k as the following sum of random vectors in C^S:

r_k = Σ_{ω=1}^{n} σ_ω e^{j2π(k-1)(ω-1)/n} g_ω,

where g_ω ∈ C^S collects the ωth entries of the normalized Fourier transforms of the columns of Ψ_Γ:

g_ω = ( ψ̂_{γ_1}(ω), ψ̂_{γ_2}(ω), ..., ψ̂_{γ_S}(ω) )^T.

By conjugate symmetry, we can rewrite this as a sum of vectors in R^S,

r_k = σ_1 g_1 + σ_{n/2+1}(−1)^{k-1} g_{n/2+1} + 2 Σ_{ω=2}^{n/2} Re[ σ_ω e^{j2π(k-1)(ω-1)/n} g_ω ]

(note that g_1 and g_{n/2+1} will be real-valued). Because the σ_ω are uniformly distributed over the unit circle, the random vector Re[σ_ω e^{j2π(k-1)(ω-1)/n} g_ω] has a distribution identical to ε_ω Re[σ_ω e^{j2π(k-1)(ω-1)/n} g_ω], where ε_ω is an independent Rademacher random variable. We set

Y = Σ_{ω=1}^{n/2+1} ε_ω v_ω,   where   v_ω = { g_1, ω = 1;  2 Re[σ_ω e^{j2π(k-1)(ω-1)/n} g_ω], 2 ≤ ω ≤ n/2;  g_{n/2+1}, ω = n/2+1 },

and will show that with high probability ||Y||_2 will be within a fixed constant of √S. We can bound the mean of ||Y||_2 as follows:

E[||Y||_2]^2 ≤ E[||Y||_2^2] = Σ_{ω=1}^{n/2+1} ||v_ω||_2^2 ≤ 2 Σ_{ω=1}^{n} ||g_ω||_2^2 = 2S,


and also m ≥ C_1 μ^2 log^3(n/δ), where C_0 and C_1 are known constants. Then with probability exceeding 1 − O(δ), every vector α_0 supported on Γ with sign sequence z can be recovered from y = ΦΨα_0 by solving (1.2).

The proofs of Theorems 3.3 and 3.4 follow the same general outline put forth in [7], with one important modification (Lemmas 3.5 and 3.6 below). As detailed in [8, 18, 31], sufficient conditions for the successful recovery of a vector α_0 supported on Γ with sign sequence z are that Φ_Γ has full rank, where Φ_Γ is the m × S matrix consisting of the columns of Φ indexed by Γ, and that

|π(γ)| = |⟨(Φ_Γ^*Φ_Γ)^{-1}Φ_Γ^*φ_γ, z⟩| < 1   for all γ ∈ Γ^c,   (3.5)

where φ_γ is the column of Φ at index γ.

There are three essential steps in establishing (3.5):

1. Show that with probability exceeding 1 − δ, the random matrix Φ_Γ^*Φ_Γ will have a bounded inverse:

||(Φ_Γ^*Φ_Γ)^{-1}|| ≤ 2/m,   (3.6)

where ||·|| is the standard operator norm. This has already been done for us in both the random subsampling and the RPMS cases. For random subsampling, it is essentially shown in [7, Th. 1.2] that this is true when

m ≥ ν^2 max(C_1 log S, C_2 log(3/δ)),

for given constants C_1, C_2. For RPMS, a straightforward generalization of [33, Th. 7] establishes (3.6) for

m ≥ C_3 ν^2 log^2(S/δ).   (3.7)

2. Establish, again with probability exceeding 1 − O(δ), that the norm of Φ_Γ^*φ_γ is on the order of ν√m. This is accomplished in Lemmas 3.5 and 3.6 below. Combined with step 1, this means that the norm of (Φ_Γ^*Φ_Γ)^{-1}Φ_Γ^*φ_γ is on the order of ν/√m.

3. Given that ||(Φ_Γ^*Φ_Γ)^{-1}Φ_Γ^*φ_γ||_2 ≲ ν/√m, show that |π(γ)| < 1 for all γ ∈ Γ^c with probability exceeding 1 − O(δ). Taking z as a random sign sequence, this is a straightforward application of the Hoeffding inequality.

3.3. Proof of Theorem 3.3. As step 1 is already established, we start with step 2. We will assume without loss of generality that m^{1/2} ≤ ν^2/μ, as the probability of success monotonically increases as m increases. The following lemma shows that ||Φ_Γ^*φ_γ||_2 ≲ ν√m for each γ ∈ Γ^c.

Lemma 3.5. Let U, Γ, Ω, and m be as in Theorem 3.3. Fix γ ∈ Γ^c, and consider the random vector

v = Φ_Γ^*φ_γ = U_Γ^* R_Ω^* R_Ω u_γ.

Assume that √m ≤ ν^2/μ. Then for any a ≤ 2 m^{1/4} μ^{-1/2},

P( ||Φ_Γ^*φ_γ||_2 ≥ ν√m + a ν μ^{1/2} m^{1/4} ) ≤ 3 e^{−Ca^2},

where C is a known constant.

Proof. We show that the mean E||v||_2 is less than ν√m, and then apply the Talagrand concentration inequality to show that ||v||_2 is concentrated about its mean.


Using the Bernoulli sampling model, Φ_Γ^*φ_γ can be written as a sum of independent random variables,

Φ_Γ^*φ_γ = Σ_{k=1}^{n} ι_k U_{k,γ} r_k = Σ_{k=1}^{n} (ι_k − m/n) U_{k,γ} r_k,

where r_k is the kth row of U_Γ (recall U = HΨ), and the second equality follows from the orthogonality of the columns of U. To bound the mean, we use

E[||Φ_Γ^*φ_γ||_2]^2 ≤ E[||Φ_Γ^*φ_γ||_2^2]
  = Σ_{k1,k2=1}^{n} E[(ι_{k1} − m/n)(ι_{k2} − m/n)] U_{k1,γ} U_{k2,γ} ⟨r_{k1}, r_{k2}⟩
  = Σ_{k=1}^{n} (m/n)(1 − m/n) U_{k,γ}^2 ||r_k||_2^2
  ≤ (m ν^2/n) Σ_{k=1}^{n} U_{k,γ}^2
  = m ν^2.

We will apply Talagrand (see (A.4) in the Appendix) using Y_k = (ι_k − m/n) U_{k,γ} r_k. Note that for all f ∈ R^S with ||f||_2 ≤ 1,

|⟨f, Y_k⟩| ≤ |U_{k,γ}| |⟨f, r_k⟩| ≤ μν,

and so we can use B = μν. For σ^2, note that

E|⟨f, Y_k⟩|^2 = (m/n)(1 − m/n) |U_{k,γ}|^2 |⟨f, r_k⟩|^2,

and so

σ^2 = sup_{||f||_2 ≤ 1} Σ_{k=1}^{n} E|⟨f, Y_k⟩|^2 ≤ (m/n)(1 − m/n) μ^2 sup_{||f||_2 ≤ 1} Σ_{k=1}^{n} |⟨f, r_k⟩|^2 ≤ m μ^2.

Plugging the bounds for E||Φ_Γ^*φ_γ||_2, B, and σ^2 into (A.4), we have

P( ||Φ_Γ^*φ_γ||_2 > ν√m + t ) ≤ 3 exp( − (t/(Kμν)) log( 1 + tμν/(mμ^2 + μν^2√m) ) ).

Using our assumption that √m ≤ ν^2/μ and the fact that log(1 + x) > 2x/3 when 0 ≤ x ≤ 1, this becomes

P( ||Φ_Γ^*φ_γ||_2 > ν√m + t ) ≤ 3 exp( − t^2/(3Kμν^2√m) )

for all 0 ≤ t ≤ 2ν√m. Thus

P( ||Φ_Γ^*φ_γ||_2 > ν√m + a ν μ^{1/2} m^{1/4} ) ≤ 3 e^{−Ca^2}


for a ≤ 2 m^{1/4} μ^{-1/2}, where C = 1/(3K).

To finish off the proof of the theorem, let A be the event that (3.6) holds; step 1 tells us that P(A^c) ≤ δ. Let B be the event that

max_{γ ∈ Γ^c} ||(Φ_Γ^*Φ_Γ)^{-1}Φ_Γ^*φ_γ||_2 ≤ κ.

By Lemma 3.5 and taking the union bound over all γ ∈ Γ^c, we have

P(B^c | A) ≤ 3n e^{−Ca^2}.

By the Hoeffding inequality,

P( sup_{γ ∈ Γ^c} |π(γ)| > 1 | B, A ) ≤ 2n e^{−1/(2κ^2)}.

Our final probability of success can then be bounded by

P( max_{γ ∈ Γ^c} |π(γ)| > 1 ) ≤ P( max_{γ ∈ Γ^c} |π(γ)| > 1 | B, A ) + P(B^c | A) + P(A^c)
  ≤ 2n e^{−1/(2κ^2)} + 3n e^{−Ca^2} + δ.   (3.8)

Choose κ = 2νm^{-1/2} + 2aνμ^{1/2}m^{-3/4}. Then we can make the second term in (3.8) less than δ by choosing a = C^{-1/2}√(log(3n/δ)); for m ≥ 16C^{-2}μ^2 log^2(3n/δ) we will have a ≤ (1/2)m^{1/4}μ^{-1/2}. This choice of a also ensures that κ ≤ 3νm^{-1/2}. For the first term in (3.8) to be less than δ, we need κ^2 ≤ (2 log(2n/δ))^{-1}, which holds

when m ≥ 18 ν^2 log(2n/δ).

3.4. Proof of Theorem 3.4. As before, we control the norm of Φ_Γ^*φ_γ (now with Φ = PU), and then use Hoeffding to establish (3.5) with high probability. Lemma 3.6 says that ||Φ_Γ^*φ_γ||_2 ≲ √m log n with high probability.

Lemma 3.6. Let U, Γ, P, and m be as in Theorem 3.4. Fix γ ∈ Γ^c. Then for any δ > 0,

P( ||Φ_Γ^*φ_γ||_2 > C m^{1/2} √(log(n/δ)) ( ν + 4μ √(log(2n^2/δ)) ) ) ≤ 2δ/n,

where C is a known constant.

Proof. To start, note that we can write Φ_Γ^*φ_γ as

Φ_Γ^*φ_γ = (m/n) Σ_{k=1}^{m} Σ_{p1,p2 ∈ B_k} ε_{p1} ε_{p2} U_{p2,γ} r_{p1} = (m/n) Σ_{k=1}^{m} Σ_{p1,p2 ∈ B_k, p1 ≠ p2} ε_{p1} ε_{p2} U_{p2,γ} r_{p1},

since

Σ_{k=1}^{m} Σ_{p ∈ B_k} U_{p,γ} r_p = 0,


by the orthogonality of the columns of U. To decouple the sum above, we must first make it symmetric:

Φ_Γ^*φ_γ = (m/2n) Σ_{k=1}^{m} Σ_{p1,p2 ∈ B_k, p1 ≠ p2} ε_{p1} ε_{p2} ( U_{p2,γ} r_{p1} + U_{p1,γ} r_{p2} ).

Now, if we set

Z = || (m/2n) Σ_{k=1}^{m} Σ_{p1,p2 ∈ B_k, p1 ≠ p2} ε_{p1} ε'_{p2} ( U_{p2,γ} r_{p1} + U_{p1,γ} r_{p2} ) ||_2,   (3.9)

where the sequence {ε'_i} is an independent sign sequence distributed the same as {ε_i}, we have (see [14, Chapter 3])

P( ||Φ_Γ^*φ_γ||_2 > λ ) < C_D P( Z > λ/C_D )   (3.10)

for some known constant C_D, and so we can shift our attention to bounding Z. We will bound the two parts of the random vector separately. Let

v = (m/n) Σ_{k=1}^{m} Σ_{p1,p2 ∈ B_k, p1 ≠ p2} ε_{p1} ε'_{p2} U_{p2,γ} r_{p1}.

We can rearrange this sum to get

v = (m/n) Σ_{k=1}^{m} Σ_{p ∈ B_k} Σ_{q=1}^{n/m−1} ε_p ε'_{(p+q)_k} U_{(p+q)_k,γ} r_p   (where (p+q)_k denotes addition modulo the block B_k)
  = (m/n) Σ_{q=1}^{n/m−1} ε''_q Σ_{k=1}^{m} Σ_{p ∈ B_k} ε_p ε''_q ε'_{(p+q)_k} U_{(p+q)_k,γ} r_p
  = (m/n) Σ_{q=1}^{n/m−1} ε''_q Σ_{k=1}^{m} Σ_{p ∈ B_k} ξ_{(p+q)_k} U_{(p+q)_k,γ} r_p   (where {ξ_i} is an iid sign sequence independent of {ε''_q})
  = (m/n) Σ_{q=1}^{n/m−1} ε''_q w_q,   where   w_q = Σ_{k=1}^{m} Σ_{p ∈ B_k} ξ_{(p+q)_k} U_{(p+q)_k,γ} r_p.

We will show that ||w_q||_2 can be controlled properly for each q, and then use a concentration inequality to control the norm of v. We start by bounding the mean of ||w_q||_2:

( E||w_q||_2 )^2 ≤ E||w_q||_2^2 = Σ_{k=1}^{m} Σ_{p ∈ B_k} U_{(p+q)_k,γ}^2 ||r_p||_2^2 ≤ ν^2 Σ_{i=1}^{n} U_{i,γ}^2 = n ν^2.


Using (A.3), we have that

P( ||w_q||_2 > ν√n + λ ) ≤ 2 e^{−λ^2/(16σ^2)},

where

σ^2 = sup_{||β||_2 ≤ 1} β^T V V^T β,   with   V := [ U_{(1+q)_k,γ} r_1   U_{(2+q)_k,γ} r_2   ···   U_{(n+q)_k,γ} r_n ] = U_Γ^T D,

and D is a diagonal matrix with the U_{(i+q)_k,γ} along the diagonal. Since σ^2 is the largest (squared) singular value of V, and since ||U_Γ|| = √n and ||D|| ≤ μ, we have that σ^2 ≤ nμ^2. Thus

P( ||w_q||_2 > ν√n + λ ) ≤ 2 e^{−λ^2/(16nμ^2)}.

Let M be the event that all of the ||w_q||_2 are close to their mean, specifically

max_{q=1,...,n/m−1} ||w_q||_2 ≤ ν√n + λ,

and note that

P(M^c) ≤ 2(n/m) e^{−λ^2/(16nμ^2)}.   (3.11)

Then

P( ||v||_2 > t ) ≤ P( ||v||_2 > t | M ) + P(M^c).   (3.12)

Given that M has occurred, we will now control ||v||_2. Again, we start with

the expectation

E[ ||v||_2^2 | M ] = (m^2/n^2) Σ_{q1,q2} E[ ε''_{q1} ε''_{q2} | M ] E[ ⟨w_{q1}, w_{q2}⟩ | M ]
  = (m^2/n^2) Σ_{q1,q2} E[ ε''_{q1} ε''_{q2} ] E[ ⟨w_{q1}, w_{q2}⟩ | M ]
  = (m^2/n^2) Σ_{q} E[ ||w_q||_2^2 | M ]
  ≤ (m^2/n^2) Σ_{q} ( ν√n + λ )^2
  ≤ (m/n)( ν√n + λ )^2.

Using Khintchine-Kahane ((A.2) in the Appendix), we have

P( ||v||_2 > C m^{1/2} n^{-1/2} ( ν n^{1/2} + λ ) √(log(n/δ)) | M ) ≤ δ/n.   (3.13)

Choosing λ = 4μ n^{1/2} √(log(2n^2/(mδ))), we fold (3.11) and (3.13) into (3.12) to get

P( ||v||_2 > C m^{1/2} √(log(n/δ)) ( ν + 4μ √(log(2n^2/(mδ))) ) ) ≤ 2δ/n.   (3.14)

By symmetry, we can replace ||v||_2 with Z in (3.14), and arrive at the lemma by applying the decoupling result (3.10).


    Appendix A. Concentration inequalities.

Almost all of the analysis in this paper relies on controlling the magnitude/norm of the sum of a sequence of random variables/vectors. In this appendix, we briefly outline the concentration inequalities that we use in the proofs of Theorems 1.1 and 1.2.

The Hoeffding inequality [20] is a classical tail bound on the sum of a sequence of independent random variables. Let Y_1, Y_2, ..., Y_n be independent, zero-mean random variables bounded by |Y_k| ≤ a_k, and let the random variable Z be

Z = | Σ_{k=1}^{n} Y_k |.

Then

P( Z > λ ) ≤ 2 exp( −λ^2/(2||a||_2^2) ),   (A.1)

for every λ > 0.
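A quick Monte Carlo sanity check (ours; the numbers are arbitrary) of (A.1) for bounded symmetric random variables:

```python
import numpy as np

rng = np.random.default_rng(7)
n, trials, lam = 200, 20_000, 30.0
a = rng.uniform(0.5, 1.5, n)                                # the bounds a_k

Y = rng.choice([-1.0, 1.0], size=(trials, n)) * a           # |Y_k| <= a_k, zero mean
Z = np.abs(Y.sum(axis=1))

empirical = (Z > lam).mean()
bound = 2 * np.exp(-lam**2 / (2 * np.sum(a**2)))            # right-hand side of (A.1)
print(empirical, bound)                                     # empirical tail sits below the bound
```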

Concentration inequalities analogous to (A.1) exist for the norm of a random sum of vectors. Let v_1, v_2, ..., v_n be a fixed sequence of vectors in R^S, and let {ε_1, ..., ε_n} be a sequence of independent random variables taking values of ±1 with equal probability. Let the random variable Z be the norm of the randomized sum

Z = || Σ_{k=1}^{n} ε_k v_k ||_2.

If we create the S × n matrix V by taking the v_i as columns, Z is the norm of the result of the action of V on the vector [ε_1 ··· ε_n]^T.

The second moment of Z is easily computed:

E[Z^2] = Σ_{k1,k2} E[ε_{k1}ε_{k2}] ⟨v_{k1}, v_{k2}⟩ = Σ_{k} ||v_k||_2^2 = ||V||_F^2,

where ||·||_F is the Frobenius norm. In fact, the Khintchine-Kahane inequality [22] allows us to bound all of the moments in terms of the second moment; for every q ≥ 2,

E[Z^q] ≤ C^q q^{q/2} (E[Z^2])^{q/2},

where C ≤ 2^{1/4}. Using the Markov inequality, we have

P( Z > λ ) ≤ E[Z^q]/λ^q ≤ ( C √q ||V||_F / λ )^q,

for any q ≥ 2. Choosing q = log(1/δ) and λ = C e √q ||V||_F gives us

P( Z > C' ||V||_F √(log(1/δ)) ) ≤ δ,   (A.2)

with C' ≤ e · 2^{1/4}.


When the singular values of V are all about the same size, meaning that the operator norm (largest singular value) is significantly smaller than the Frobenius norm (square root of the sum of squares of the singular values), Z is more tightly concentrated around its mean than (A.2) suggests. In particular, Theorem 7.3 in [21] shows that for all λ > 0,

P( Z ≥ E[Z] + λ ) ≤ 2 exp( −λ^2/(16σ^2) ),   (A.3)

where

σ^2 = sup_{||β||_2 ≤ 1} Σ_{i=1}^{n} |⟨β, v_i⟩|^2 = ||V||^2.

The variance σ^2 is simply the largest squared singular value of V.

    The variance 2 is simply the largest squared singular value of V.When the random weights in the vector sum have a variance which is much smaller

    than their maximum possible magnitude (as is the case in Lemma 3.5), an even tighterbound is possible. Now let Z be the norm of the random sum

    Z =

    n

    k=1

    kvk

    2

    ,

    where the k are zero-mean iid random variables with |k| . A result due toTalagrand [28] gives

    P (Z E[Z] + ) 3exp

    KB

    log

    1 +

    B

    2 + B E[Z]

    , (A.4)

    where K is a fixed numerical constant,

    2 = E[

    |k

    |2]

    V

    2, and B =

    maxk

    vk

    22.

    REFERENCES

[1] N. Ailon and B. Chazelle, Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform, in Proc. 38th ACM Symp. Theory of Comput., Seattle, WA, 2006, pp. 557–563.

[2] S. R. J. Axelsson, Analysis of random step frequency radar and comparison with experiments, IEEE Trans. Geosci. Remote Sens., 45 (2007), pp. 890–904.

[3] S. R. J. Axelsson, Random noise radar/sodar with ultrawideband waveforms, IEEE Trans. Geosci. Remote Sens., 45 (2007), pp. 1099–1114.

[4] W. U. Bajwa, J. D. Haupt, G. M. Raz, S. J. Wright, and R. D. Nowak, Toeplitz-structured compressed sensing matrices, in Proc. IEEE Stat. Sig. Proc. Workshop, Madison, WI, August 2007, pp. 294–298.

[5] R. Baraniuk and P. Steeghs, Compressive radar imaging, in Proc. IEEE Radar Conference, Boston, MA, April 2007, pp. 128–133.

[6] R. G. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, A simple proof of the restricted isometry property for random matrices, to appear in Constructive Approximation, 2008.

[7] E. Candes and J. Romberg, Sparsity and incoherence in compressive sampling, Inverse Problems, 23 (2007), pp. 969–986.

[8] E. Candes, J. Romberg, and T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory, 52 (2006), pp. 489–509.

[9] E. Candes, J. Romberg, and T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Comm. on Pure and Applied Math., 59 (2006), pp. 1207–1223.

[10] E. Candes and T. Tao, Near-optimal signal recovery from random projections and universal encoding strategies?, IEEE Trans. Inform. Theory, 52 (2006), pp. 5406–5425.

[11] E. Candes and T. Tao, The Dantzig selector: statistical estimation when p is much smaller than n, Annals of Stat., 35 (2007), pp. 2313–2351.

[12] S. S. Chen, D. L. Donoho, and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM J. Sci. Comput., 20 (1999), pp. 33–61.

[13] R. Coifman, F. Geshwind, and Y. Meyer, Noiselets, Appl. Comp. Harmonic Analysis, 10 (2001), pp. 27–44.

[14] V. H. de la Pena and E. Gine, Decoupling: From Dependence to Independence, Springer, 1999.

[15] D. L. Donoho, Compressed sensing, IEEE Trans. Inform. Theory, 52 (2006), pp. 1289–1306.

[16] D. L. Donoho, Y. Tsaig, I. Drori, and J. L. Starck, Sparse solutions of underdetermined linear equations by stagewise orthogonal matching pursuit, Technical Report, Stanford University, 2006.

[17] M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, Single-pixel imaging via compressive sampling, IEEE Signal Proc. Mag., 25 (2008), pp. 83–91.

[18] J. J. Fuchs, Recovery of exact sparse representations in the presence of bounded noise, IEEE Trans. Inform. Theory, 51 (2005), pp. 3601–3608.

[19] M. A. Herman and T. Strohmer, High-resolution radar via compressed sensing, submitted to IEEE Trans. Sig. Proc., 2008.

[20] W. Hoeffding, Probability inequalities for sums of bounded random variables, J. American Stat. Assoc., 58 (1963), pp. 13–30.

[21] M. Ledoux, The Concentration of Measure Phenomenon, American Mathematical Society, 2001.

[22] M. Ledoux and M. Talagrand, Probability in Banach Spaces, vol. 23 of Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer-Verlag, 1991.

[23] I. Maravic, M. Vetterli, and K. Ramchandran, Channel estimation and synchronization with sub-Nyquist sampling and application to ultra-wideband systems, in IEEE Symp. on Circuits and Systems, vol. 5, Vancouver, May 2004, pp. 381–384.

[24] S. Mendelson, A. Pajor, and N. Tomczak-Jaegermann, Reconstruction and subgaussian operators in asymptotic geometric analysis, Geometric and Functional Analysis, 17 (2007), pp. 1248–1282.

[25] D. C. Munson, J. D. O'Brien, and W. K. Jenkins, A tomographic formulation of spotlight-mode synthetic aperture radar, Proc. IEEE, 71 (1983), pp. 917–925.

[26] M. Richards, Fundamentals of Radar Signal Processing, McGraw-Hill, 2005.

[27] M. Rudelson and R. Vershynin, On sparse reconstruction from Fourier and Gaussian measurements, Comm. on Pure and Applied Math., 61 (2008), pp. 1025–1045.

[28] M. Talagrand, New concentration inequalities in product spaces, Invent. Math., 126 (1996), pp. 505–563.

[29] J. Tropp and A. Gilbert, Signal recovery from partial information via orthogonal matching pursuit, IEEE Trans. Inform. Theory, 53 (2007), pp. 4655–4666.

[30] J. A. Tropp, personal communication.

[31] J. A. Tropp, Recovery of short, complex linear combinations via ℓ1 minimization, IEEE Trans. Inform. Theory, 51 (2005), pp. 1568–1570.

[32] J. A. Tropp, Random subdictionaries of general dictionaries, submitted manuscript, August 2006.

[33] J. A. Tropp, J. N. Laska, M. F. Duarte, J. Romberg, and R. G. Baraniuk, Beyond Nyquist: efficient sampling of sparse bandlimited signals, submitted to IEEE Trans. Inform. Theory, May 2008.

[34] J. A. Tropp, M. B. Wakin, M. F. Duarte, D. Baron, and R. G. Baraniuk, Random filters for compressive sampling and reconstruction, in Proc. IEEE Int. Conf. Acoust. Speech Sig. Proc., Toulouse, France, May 2006.

[35] M. Vetterli, P. Marziliano, and T. Blu, Sampling signals with finite rate of innovation, IEEE Trans. Signal Proc., 50 (2002), pp. 1417–1428.
