Page 1

Sparse Optimization

Lecture: Sparse Recovery Guarantees

Instructor: Wotao Yin

July 2013

Note scriber: Zheng Sun

online discussions on piazza.com

Those who complete this lecture will know

• how to read different recovery guarantees

• some well-known conditions for exact and stable recovery such as Spark, coherence, RIP, NSP, etc.

• indeed, we can trust ℓ1-minimization for recovering sparse vectors

1 / 35

Page 2

The basic question of sparse optimization is:

Can I trust my model to return an intended sparse quantity?

That is

• does my model have a unique solution? (otherwise, different algorithms may return different answers)

• is the solution exactly equal to the original sparse quantity?

• if not (due to noise), is the solution a faithful approximation of it?

• how much effort is needed to numerically solve the model?

This lecture provides brief answers to the first three questions.

2 / 35

Page 3

What this lecture does and does not cover

It covers basic sparse vector recovery guarantees based on

• spark

• coherence

• restricted isometry property (RIP) and null-space property (NSP)

as well as both exact and robust recovery guarantees.

It does not cover the recovery of matrices, subspaces, etc.

Recovery guarantees are an important part of sparse optimization, but they are not the focus of this summer course.

3 / 35

Page 4

Examples of guarantees

Theorem (Donoho and Elad [2003], Gribonval and Nielsen [2003])

For Ax = b where A ∈ Rm×n has full rank, if x satisfies ‖x‖0 ≤ (1/2)(1 + µ⁻¹(A)), then ℓ1-minimization recovers this x.

Theorem (Candes and Tao [2005])

If x is k-sparse and A satisfies the RIP-based condition δ2k + δ3k < 1, then x is the ℓ1-minimizer.

Theorem (Zhang [2008])

If A ∈ Rm×n is a standard Gaussian matrix, then with probability at least 1 − exp(−c0(n−m)), ℓ1-minimization is equivalent to ℓ0-minimization for all x satisfying

‖x‖0 < (c1²/4) · m / (1 + log(n/m)),

where c0, c1 > 0 are constants independent of m and n.
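As a quick sanity check of what these guarantees promise, here is a minimal sketch, not from the slides, that solves basis pursuit min ‖x‖1 s.t. Ax = b as a linear program with SciPy; the sizes m, n, k and the random seed are arbitrary choices of mine.

```python
# Basis pursuit via the standard LP reformulation x = u - v, u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                       # arbitrary sizes for the demo
A = rng.standard_normal((m, n))            # standard Gaussian sensing matrix
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)   # k-sparse ground truth
b = A @ x_true

# min sum(u) + sum(v)  s.t.  A(u - v) = b,  u, v >= 0
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With these sizes the error is typically at machine precision, consistent with the Gaussian-matrix guarantee above; this is a demonstration, not a proof.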

4 / 35

Page 5

How to read guarantees

Some basic aspects that distinguish different types of guarantees:

• Recoverability (exact) vs stability (inexact)

• General A or special A?

• Universal (all sparse vectors) or instance (certain sparse vector(s))?

• General optimality? or specific to model / algorithm?

• Required property of A: spark, RIP, coherence, NSP, dual certificate?

• If randomness is involved, what is its role?

• Condition/bound is tight or not? Absolute or in order of magnitude?

5 / 35

Page 6

Spark

First questions for finding the sparsest solution to Ax = b

1. Can the sparsest solution be unique? Under what conditions?

2. Given a sparse x, how to verify whether it is actually the sparsest one?

Definition (Donoho and Elad [2003])

The spark of a given matrix A is the smallest number of columns from A that are linearly dependent, written as spark(A).

rank(A) is the largest number of columns from A that are linearly independent. In general, spark(A) ≠ rank(A) + 1, except for many randomly generated matrices.

Rank is easy to compute (due to the matroid structure), but spark needs a combinatorial search.
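To make the combinatorial nature concrete, here is a minimal brute-force sketch (my own illustration, feasible only for small n): it checks column subsets of increasing size for linear dependence.

```python
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A (exponential search)."""
    m, n = A.shape
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            # The subset is dependent iff the submatrix is rank-deficient.
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < size:
                return size
    return np.inf  # all columns independent (only possible when n <= m)

# Example: a 3x4 matrix whose last column repeats the first one.
A = np.array([[1., 0., 0., 1.],
              [0., 1., 0., 0.],
              [0., 0., 1., 0.]])
print(spark(A))  # 2, since columns 0 and 3 are dependent
```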

6 / 35

Page 7

Spark

Theorem (Gorodnitsky and Rao [1997])

If Ax = b has a solution x obeying ‖x‖0 < spark(A)/2, then x is the sparsest solution.

• Proof idea: if there is a solution y to Ax = b with x − y ≠ 0, then A(x − y) = 0 and thus

‖x‖0 + ‖y‖0 ≥ ‖x − y‖0 ≥ spark(A),

i.e., ‖y‖0 ≥ spark(A) − ‖x‖0 > spark(A)/2 > ‖x‖0.

• The result does not mean this x can be efficiently found numerically.

• For many random matrices A ∈ Rm×n, the result means that if an algorithm returns x satisfying ‖x‖0 < (m + 1)/2, then this x is optimal with probability 1.

• What to do when spark(A) is difficult to obtain?

7 / 35

Page 8

General Recovery - Spark

Rank is easy to compute, but spark needs a combinatorial search.

However, for a matrix with entries in general position, spark(A) = rank(A) + 1.

For example, if a matrix A ∈ Rm×n (m < n) has entries Aij ∼ N(0, 1), then rank(A) = m = spark(A) − 1 with probability 1.

In general, for any full-rank matrix A ∈ Rm×n (m < n), any m + 1 columns of A are linearly dependent, so

spark(A) ≤ m + 1 = rank(A) + 1.

8 / 35

Page 9

Coherence

Definition (Mallat and Zhang [1993])

The (mutual) coherence of a given matrix A is the largest absolute normalized inner product between different columns from A. Suppose A = [a1 a2 · · · an]. The mutual coherence of A is given by

µ(A) = max_{k≠j} |ak⊤aj| / (‖ak‖2 · ‖aj‖2).

• It characterizes the dependence between columns of A

• For unitary matrices, µ(A) = 0

• For matrices with more columns than rows, µ(A) > 0

• For recovery problems, we desire a small µ(A), as A is then closer to a unitary matrix.

• For A = [Φ Ψ] where Φ and Ψ are n × n unitary, it holds that 1/√n ≤ µ(A) ≤ 1

• µ(A) = 1/√n is achieved by [I F], [I Hadamard], etc.

• If A ∈ Rm×n where n > m, then µ(A) ≥ 1/√m.
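A minimal sketch of this definition (my own illustration; it assumes A has no zero columns):

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute normalized inner product between distinct columns of A."""
    An = A / np.linalg.norm(A, axis=0)   # normalize columns to unit 2-norm
    G = np.abs(An.T @ An)                # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)             # ignore the diagonal (k = j)
    return G.max()

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
print(mutual_coherence(A))               # coherence of a random 20x50 Gaussian matrix
```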

9 / 35

Page 10

Coherence

Theorem (Donoho and Elad [2003])

spark(A) ≥ 1 + µ⁻¹(A).

Proof sketch:

• A← A with columns normalized to unit 2-norm

• p← spark(A)

• B ← a p × p minor of A⊤A

• |Bii| = 1 and Σ_{j≠i} |Bij| ≤ (p − 1)µ(A)

• Suppose p < 1 + µ⁻¹(A) ⇒ 1 > (p − 1)µ(A) ⇒ |Bii| > Σ_{j≠i} |Bij|, ∀i

• ⇒ B ≻ 0 (Gershgorin circle theorem) ⇒ spark(A) > p. Contradiction.

10 / 35

Page 11

Coherence-based guarantee

Corollary

If Ax = b has a solution x obeying ‖x‖0 < (1 + µ⁻¹(A))/2, then x is the unique sparsest solution.

Compare with the previous

Theorem

If Ax = b has a solution x obeying ‖x‖0 < spark(A)/2, then x is the sparsest solution.

For A ∈ Rm×n where m < n, 1 + µ⁻¹(A) is at most 1 + √m, but spark(A) can be as large as 1 + m; so spark is more useful.

Assume Ax = b has a solution with ‖x‖0 = k < spark(A)/2. It will be the unique ℓ0 minimizer. Will it be the ℓ1 minimizer as well? Not necessarily. However, ‖x‖0 < (1 + µ⁻¹(A))/2 is a sufficient condition.

11 / 35

Page 12

Coherence-based ℓ0 = ℓ1

Theorem (Donoho and Elad [2003], Gribonval and Nielsen [2003])

If A has normalized columns and Ax = b has a solution x satisfying

‖x‖0 < (1/2)(1 + µ⁻¹(A)),

then this x is the unique minimizer with respect to both ℓ0 and ℓ1.

Proof sketch:

• Previously we know x is the unique ℓ0 minimizer; let S := supp(x)

• Suppose y is the ℓ1 minimizer but not x; we study e := y − x

• e must satisfy Ae = 0 and ‖e‖1 ≤ 2‖eS‖1

• A⊤Ae = 0 ⇒ |ej| ≤ (1 + µ(A))⁻¹ µ(A) ‖e‖1, ∀j

• the last two points together contradict the assumption

Bottom line: the condition allows ‖x‖0 up to O(√m) for exact recovery.

12 / 35

Page 13

The null space of A

• Definition: ‖x‖p := (Σi |xi|^p)^{1/p}.

• Lemma: Let 0 < p ≤ 1 and S := supp(x). If ‖(y − x)Sc‖p > ‖(y − x)S‖p, then ‖x‖p < ‖y‖p.

Proof: Let e := y − x. Then

‖y‖p^p = ‖x + e‖p^p = ‖xS + eS‖p^p + ‖eSc‖p^p = ‖x‖p^p + (‖eSc‖p^p − ‖eS‖p^p) + (‖xS + eS‖p^p − ‖xS‖p^p + ‖eS‖p^p).

The last term is nonnegative for 0 < p ≤ 1. So, a sufficient condition for ‖x‖p < ‖y‖p is ‖eSc‖p^p > ‖eS‖p^p. ∎

• If the condition holds for 0 < p ≤ 1, it also holds for q ∈ (0, p].

• Definition (null space property NSP(k, γ)): every nonzero e ∈ N(A) satisfies ‖eS‖1 < γ‖eSc‖1 for all index sets S with |S| ≤ k, where Sc denotes the complement of S.
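Verifying NSP(k, γ) exactly is hard, but a quick heuristic check can be run numerically. The sketch below (my own illustration, not from the slides) samples random null-space vectors; for a fixed e, the worst index set of size k is the set of its k largest-magnitude entries. Finding no violation is evidence, not a certificate.

```python
import numpy as np
from scipy.linalg import null_space

def nsp_violated(A, k, gamma=1.0, trials=2000, seed=0):
    """Search random null-space vectors e for ||e_S||_1 >= gamma * ||e_{Sc}||_1."""
    rng = np.random.default_rng(seed)
    N = null_space(A)                          # orthonormal basis of N(A)
    for _ in range(trials):
        e = N @ rng.standard_normal(N.shape[1])
        mags = np.sort(np.abs(e))[::-1]        # entries sorted by magnitude
        if mags[:k].sum() >= gamma * mags[k:].sum():
            return True                        # violation found
    return False                               # no violation among the samples

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
print(nsp_violated(A, k=3))  # False suggests (but does not prove) NSP(3, 1)
```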

13 / 35

Page 14

The null space of A

Theorem (Donoho and Huo [2001], Gribonval and Nielsen [2003])

Basis pursuit min{‖x‖1 : Ax = b} uniquely recovers all k-sparse vectors xo from measurements b = Axo if and only if A satisfies NSP(k, 1).

Proof.

Sufficiency. Pick any k-sparse vector xo. Let S := supp(xo) and Sc be its complement. For any non-zero h ∈ N(A), we have A(xo + h) = Axo = b and

‖xo + h‖1 = ‖xoS + hS‖1 + ‖hSc‖1
≥ ‖xoS‖1 − ‖hS‖1 + ‖hSc‖1    (1)
= ‖xo‖1 + (‖hSc‖1 − ‖hS‖1).

NSP(k, 1) of A guarantees ‖hS‖1 < ‖hSc‖1, so ‖xo + h‖1 > ‖xo‖1 and xo is the unique solution.

Necessity. The inequality (1) holds with equality if sign(xoS) = −sign(hS) and hS has a sufficiently small scale. Therefore, for basis pursuit to uniquely recover all k-sparse vectors xo, NSP(k, 1) is also necessary.

14 / 35

Page 15

The null space of A

• Another sufficient condition (Zhang [2008]) for ‖x‖1 < ‖y‖1 is

‖x‖0 < (1/4) (‖y − x‖1 / ‖y − x‖2)².

Proof: Let e := y − x and S := supp(x). Then

‖eS‖1 ≤ √|S| ‖eS‖2 ≤ √|S| ‖e‖2 = √‖x‖0 ‖e‖2.

Hence the assumed bound on ‖x‖0 gives ‖e‖1 > 2√‖x‖0 ‖e‖2 ≥ 2‖eS‖1, which is the earlier sufficient condition ‖eSc‖1 > ‖eS‖1. ∎
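The minimum of (1/4)(‖e‖1/‖e‖2)² over the null space is what matters (see the next slide). A rough sketch of how one might probe it numerically (my own illustration; the sampled minimum only upper-bounds the true minimum over N(A)\{0}, so the printed threshold is optimistic):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
m, n = 40, 100
A = rng.standard_normal((m, n))
N = null_space(A)                                  # columns span N(A)

E = N @ rng.standard_normal((N.shape[1], 5000))    # random null-space vectors
ratios = np.linalg.norm(E, 1, axis=0) / np.linalg.norm(E, 2, axis=0)
print("estimated sparsity threshold:", 0.25 * (ratios ** 2).min())
```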

15 / 35

Page 16

Null space

Theorem (Zhang [2008])

Given x and b = Ax,

min ‖x‖1 s.t. Ax = b

recovers x uniquely if

‖x‖0 < min { (1/4) ‖e‖1² / ‖e‖2² : e ∈ N(A) \ {0} }.

Comments:

• We know 1 ≤ ‖e‖1/‖e‖2 ≤ √n for all e ≠ 0. The ratio is small for sparse vectors, but we want it large, i.e., close to √n and away from 1.

• Fact: in most subspaces, the ratio is away from 1.

• In particular, Kashin, Garnaev, and Gluskin showed that a randomly drawn (n−m)-dimensional subspace V satisfies

‖e‖1/‖e‖2 ≥ c1 √m / √(1 + log(n/m)), ∀ e ∈ V, e ≠ 0,

with probability at least 1 − exp(−c0(n−m)), where c0, c1 > 0 are independent of m and n.

16 / 35

Page 17

Null space

Theorem (Zhang [2008])

If A ∈ Rm×n is sampled i.i.d. Gaussian, or is any rank-m matrix such that BA⊤ = 0 where B ∈ R(n−m)×n is i.i.d. Gaussian, then with probability at least 1 − exp(−c0(n−m)), ℓ1 minimization recovers any sparse x satisfying

‖x‖0 < (c1²/4) · m / (1 + log(n/m)),

where c0, c1 are positive constants independent of m and n.

17 / 35

Page 18

Comments on NSP

• NSP is no longer necessary if “for all k-sparse vectors” is relaxed.

• NSP is widely used in the proofs of other guarantees.

• NSP of order 2k is necessary for stable universal recovery.

Consider an arbitrary decoder ∆, tractable or not, that returns a vector from the input b = Axo. If one requires ∆ to be stable in the sense

‖xo − ∆(Axo)‖1 < C · σ[k](xo)

for all xo, where σ[k] is the best k-term approximation error, then it holds that

‖hS‖1 < C · ‖hSc‖1

for all non-zero h ∈ N(A) and all coordinate sets S with |S| ≤ 2k. See Cohen, Dahmen, and DeVore [2006].

18 / 35

Page 19

Restricted isometry property (RIP)

Definition (Candes and Tao [2005])

Matrix A obeys the restricted isometry property (RIP) with constant δs if

(1 − δs)‖c‖2² ≤ ‖Ac‖2² ≤ (1 + δs)‖c‖2²

for all s-sparse vectors c.

RIP essentially requires that every set of columns with cardinality less than or equal to s behaves like an orthonormal system.
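Computing the exact RIP constant requires checking every support of size s. A brute-force sketch for tiny problems (my own illustration; the sizes are arbitrary and the 1/√m scaling is an assumption about how A is normalized):

```python
import numpy as np
from itertools import combinations

def rip_constant(A, s):
    """Exact delta_s: over all column subsets S of size s, the largest deviation
    of the squared singular values of A_S from 1."""
    delta = 0.0
    for cols in combinations(range(A.shape[1]), s):
        sv = np.linalg.svd(A[:, cols], compute_uv=False)
        delta = max(delta, sv.max() ** 2 - 1.0, 1.0 - sv.min() ** 2)
    return delta

rng = np.random.default_rng(0)
m, n = 10, 15
A = rng.standard_normal((m, n)) / np.sqrt(m)   # scaling matters for RIP
print(rip_constant(A, s=2))
```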

19 / 35

Page 20

RIP

Theorem (Candes and Tao [2006])

If x is k-sparse and A satisfies δ2k + δ3k < 1, then x is the unique ℓ1 minimizer.

Comments:

• RIP needs a matrix to be properly scaled

• the tight RIP constant of a given matrix A is difficult to compute

• the result is universal for all k-sparse vectors

• ∃ tighter conditions (see next slide)

• all methods (including ℓ0) require δ2k < 1 for universal recovery; every k-sparse x is unique if δ2k < 1

• the requirement can be satisfied by certain A (e.g., with entries that are i.i.d. samples of a subgaussian distribution), leading to exact recovery for ‖x‖0 = O(m/ log(n/k)).

20 / 35

Page 21

More Comments

• (Foucart-Lai) If δ2k+2 < 1, then ∃ a sufficiently small p so that ℓp minimization is guaranteed to recover any k-sparse x

• (Candes) δ2k < √2 − 1 is sufficient

• (Foucart-Lai) δ2k < 2(3 − √2)/7 ≈ 0.4531 is sufficient

• RIP gives κ(AS) ≤ √((1 + δk)/(1 − δk)), ∀ |S| ≤ k; so δ2k < 2(3 − √2)/7 gives κ(AS) ≤ 1.7, ∀ |S| ≤ 2k, i.e., very well-conditioned

• (Mo-Li) δ2k < 0.493 is sufficient

• (Cai-Wang-Xu) δk < 0.307 is sufficient

• (Cai-Zhang) δk < 1/3 is sufficient and necessary for universal ℓ1 recovery

21 / 35

Page 22

Random matrices with RIPs

Many simple, randomly constructed matrices satisfy RIPs with overwhelming probability.

• Gaussian: Aij ∼ N(0, 1/m), ‖x‖0 ≤ O(m/ log(n/m)) whp; the proof is based on applying concentration of measure to the singular values of Gaussian matrices (Szarek-91, Davidson-Szarek-01).

• Bernoulli: Aij ∼ ±1 wp 1/2, ‖x‖0 ≤ O(m/ log(n/m)) whp; the proof is based on applying concentration of measure to the smallest singular value of a subgaussian matrix (Candes-Tao-04, Litvak-Pajor-Rudelson-TomczakJaegermann-04).

• Fourier ensemble: A ∈ Cm×n is a randomly chosen submatrix of the discrete Fourier transform F ∈ Cn×n. Candes-Tao show ‖x‖0 ≤ O(m/ log⁶(n)) whp; Rudelson-Vershynin show ‖x‖0 ≤ O(m/ log⁴(n)); conjectured: ‖x‖0 ≤ O(m/ log(n)).

• ......
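A minimal sketch of how such sensing matrices are typically generated (my own illustration; the 1/√m scalings and uniform row selection below are the usual conventions, stated here as assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 64, 256

# Gaussian: entries N(0, 1/m)
A_gauss = rng.standard_normal((m, n)) / np.sqrt(m)

# Bernoulli: entries +/- 1/sqrt(m), each sign with probability 1/2
A_bern = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

# Partial Fourier: m randomly chosen rows of the unitary n x n DFT matrix
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
rows = rng.choice(n, size=m, replace=False)
A_fourier = F[rows, :] * np.sqrt(n / m)   # rescale so columns have unit norm
```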

22 / 35

Page 23

Incoherent Sampling

Suppose (Φ,Ψ) is a pair of orthonormal bases of Rn.

• Φ is used for sensing: A is a subset of rows of Φ∗

• Ψ is used to sparsely represent x: x = Ψα, α is sparse

Definition

The coherence between Φ and Ψ is

µ(Φ,Ψ) = √n · max_{1≤k,j≤n} |〈φk, ψj〉|.

Coherence is the largest correlation between any two elements of Φ and Ψ.

• If Φ and Ψ contain correlated elements, then µ(Φ,Ψ) is large

• Otherwise, µ(Φ,Ψ) is small

From linear algebra, 1 ≤ µ(Φ,Ψ) ≤ √n.
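A small numeric sketch of this definition (my own illustration), using the spike basis and the unitary Fourier basis, which the next slide identifies as a maximally incoherent pair:

```python
import numpy as np

n = 64
Phi = np.eye(n)                               # spike (standard) basis
Psi = np.fft.fft(np.eye(n)) / np.sqrt(n)      # unitary Fourier basis

mu = np.sqrt(n) * np.abs(Phi.conj().T @ Psi).max()
print(mu)   # ~1: every DFT entry has modulus 1/sqrt(n)
```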

23 / 35

Page 24

Incoherent Sampling

Compressive sensing requires pairs with low coherence.

• x is sparse under Ψ: x = Ψα, α is sparse

• x is measured as b← Ax

• x is recovered from min ‖α‖1, s.t. AΨα = b

Examples:

• Φ is the spike basis φk(t) = δ(t − k) and Ψ is the Fourier basis ψj(t) = n^{-1/2} e^{i·2π·jt/n}; then µ(Φ,Ψ) = 1, achieving maximal incoherence.

• The coherence between noiselets and Haar wavelets is √2.

• The coherences between noiselets and Daubechies D4 and D8 wavelets are about 2.2 and 2.9, respectively.

• Random matrices are largely incoherent with any fixed basis Ψ. For a randomly generated and orthonormalized Φ, w.h.p. the coherence between Φ and any fixed Ψ is about √(2 log n).

• Similar results apply to random Gaussian or ±1 matrices. Bottom line: many random matrices are universally incoherent with any fixed Ψ w.h.p.

• Certain random circulant matrices are universally incoherent with any fixed Ψ w.h.p.

24 / 35

Page 25

Incoherent Sampling

Theorem (Candes and Romberg [2007])

Fix x and suppose x is k-sparse under the basis Ψ with coefficients having uniformly random signs. Select m measurements in the Φ domain uniformly at random. If m ≥ O(µ²(Φ,Ψ) k log(n)), then ℓ1-minimization recovers x with high probability.

Comments:

• The result is not universal for all Ψ or all k-sparse x under Ψ.

• Only guaranteed for nearly all sign sequences x with a fixed support.

• Why does probability appear? Because there are special signals that are sparse in Ψ yet vanish at most places in the Φ domain.

• This result allows structured, as opposed to noise-like (random), matrices.

• Can be seen as an extension to Fourier CS.

• Bottom line: the smaller the coherence, the fewer the samples required. This matches numerical experience.

25 / 35

Page 26

Robust Recovery

In order to be practically powerful, CS must deal with

• nearly sparse signals

• measurement noise

• sometimes both

Goal: To obtain accurate reconstructions from highly undersampled measurements, or in short, stable recovery.

26 / 35

Page 27

Stable ℓ1 Recovery

Consider

• a sparse x

• noisy CS measurements b← Ax + z, where ‖z‖2 ≤ ε

Apply the BPDN model: min ‖x‖1 s.t. ‖Ax− b‖2 ≤ ε.

Theorem

Assume (some bounds on δk or δ2k). The BPDN model returns a solution x∗ satisfying

‖x∗ − x‖2 ≤ C · ε

for some constant C.

Proof sketch (using an overly sufficient RIP bound):

• Let e = x∗ − x and S′ = {indices of the 2k largest entries of |x|}.
• One can show ‖e‖2 ≤ C1‖eS′‖2 ≤ C2‖Ae‖2 ≤ C · ε.
• 1st inequality: essentially from ‖eS‖1 ≥ ‖eSc‖1 (with S = supp(x));
• 2nd inequality: essentially from the RIP;
• 3rd inequality: essentially from the constraint.
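A hedged sketch of the BPDN model above in CVXPY (my own illustration; it assumes CVXPY with a default conic solver is installed, and the sizes, sparsity, and noise level are arbitrary):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, k = 60, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
z = 0.01 * rng.standard_normal(m)            # measurement noise
b = A @ x_true + z
eps = np.linalg.norm(z)

# min ||x||_1  s.t.  ||Ax - b||_2 <= eps
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm(x, 1)), [cp.norm(A @ x - b, 2) <= eps])
prob.solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```

The observed error is on the order of ε, in line with the theorem, though this single run is of course not a proof of the constant C.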

27 / 35

Page 28

Stable ℓ1 Recovery

Theorem

Assume (some bounds on δk or δ2k). The BPDN model returns a solution x∗ satisfying

‖x∗ − x‖2 ≤ C · ε

for some constant C.

Comments:

• The result is universal and more general than exact recovery;

• The error bound is order-optimal: knowing supp(x) will give C′ · ε at best;

• x∗ is almost as good as if one knew where the largest k entries are and measured them directly;

• C depends on k; when k violates the condition and gets too large, ‖x∗ − x‖2 will blow up.

28 / 35

Page 29

Stable ℓ1 Recovery

Consider

• a nearly sparse x = xk + w,

• xk is the vector x with all but the largest (in magnitude) k entries set to 0,

• CS measurements b← Ax + z, where ‖z‖2 ≤ ε.

Theorem

Assume (some bounds on δk or δ2k). The BPDN model returns a solution x∗ satisfying

‖x∗ − x‖2 ≤ C · ε + C′ · k^{-1/2}‖x − xk‖1

for some constants C and C′.

Proof sketch (using an overly sufficient RIP bound): Similar to the previous one, except x is no longer k-sparse and ‖eS‖1 ≥ ‖eSc‖1 is no longer valid. Instead, we get ‖eS‖1 + 2‖x − xk‖1 ≥ ‖eSc‖1. Then,

‖e‖2 ≤ C1‖e4k‖2 + C′ k^{-1/2}‖x − xk‖1, ......

29 / 35

Page 30

Stable ℓ1 Recovery

Comments on

‖x∗ − x‖2 ≤ C · ε + C′ · k^{-1/2}‖x − xk‖1.

Suppose ε = 0; let us focus on the last term:

1. Consider power-law decay signals |x|(i) ≤ C · i^{-r}, r > 1;

2. Then, ‖x − xk‖1 ≤ C1 k^{-r+1}, i.e., k^{-1/2}‖x − xk‖1 ≤ C1 k^{-r+1/2};

3. But even if x∗ = xk, ‖x∗ − x‖2 = ‖xk − x‖2 ≤ C1 k^{-r+1/2}, which matches the order in item 2;

4. Conclusion: the bound cannot be fundamentally improved.
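A small numeric illustration of this argument (my own sketch; it uses a synthetic power-law signal with |x|(i) = i^{-r} and arbitrary n, r, k):

```python
import numpy as np

n, r, k = 10000, 1.5, 50
x = np.arange(1, n + 1) ** (-r)      # entries already sorted by magnitude
sigma_k = np.abs(x[k:]).sum()        # best k-term approximation error ||x - x_k||_1

print("k^{-1/2} * sigma_k(x):", sigma_k / np.sqrt(k))
print("||x - x_k||_2        :", np.linalg.norm(x[k:]))
# Both quantities scale like k^{-r+1/2}, so the l1-based term in the bound
# is of the same order as the unavoidable l2 error.
```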

30 / 35

Page 31

Information Theoretic Analysis

Question: is there an encoding-decoding scheme that can do fundamentally better than a Gaussian A and ℓ1-minimization?

In math: does there exist an encoder-decoder pair (A, ∆) such that ‖∆(Ax) − x‖2 ≤ O(k^{-1/2} σk(x)) holds for k larger than O(m/ log(n/m))?

Comments: A can be any matrix, and ∆ can be any decoder, tractable or not.

Let ‖x − xk‖1 be called the best k-term approximation error, denoted σk(x) := ‖x − xk‖1.

31 / 35

Page 32

Performance of (A,∆) where A ∈ Rm×n:

Em(K) := inf_{(A,∆)} sup_{x∈K} ‖∆(Ax) − x‖2

Gelfand width:

dm(K) = inf_{codim(Y) ≤ m} sup {‖h‖2 : h ∈ K ∩ Y}.

Cohen, Dahmen, and DeVore [2006]: If the set K satisfies K = −K and K + K ⊆ C0 K, then

dm(K) ≤ Em(K) ≤ C0 dm(K).

32 / 35

Page 33

Gelfand Width and K = ℓ1-ball

Kashin, Gluskin, Garnaev: for K = {h ∈ Rn : ‖h‖1 ≤ 1},

C1 √(log(n/m)/m) ≤ dm(K) ≤ C2 √(log(n/m)/m).

Consequences:

1. KGG means Em(K) ≈ √(log(n/m)/m);

2. we want ‖∆(Ax) − x‖2 ≤ C · k^{-1/2} σk(x) ≤ C · k^{-1/2}‖x‖1; normalizing gives Em(K) ≤ C · k^{-1/2};

3. Therefore, k ≤ O(m/ log(n/m)). We cannot do better than this.

33 / 35

Page 34

(Incomplete) References:

D. Donoho and M. Elad. Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization. Proceedings of the National Academy of Sciences, 100:2197–2202, 2003.

R. Gribonval and M. Nielsen. Sparse representations in unions of bases. IEEE Transactions on Information Theory, 49(12):3320–3325, 2003.

E. Candes and T. Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51:4203–4215, 2005.

Y. Zhang. Theory of compressive sensing via ℓ1-minimization: a non-RIP analysis and extensions. Rice University CAAM Technical Report TR08-11, 2008.

I. F. Gorodnitsky and B. D. Rao. Sparse signal reconstruction from limited data using FOCUSS: a re-weighted minimum norm algorithm. IEEE Transactions on Signal Processing, 45(3):600–616, 1997.

S. G. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397–3415, 1993.

D. Donoho and X. Huo. Uncertainty principles and ideal atomic decompositions. IEEE Transactions on Information Theory, 47:2845–2862, 2001.

A. Cohen, W. Dahmen, and R. A. DeVore. Compressed sensing and best k-term approximation. Submitted, 2006.

34 / 35

Page 35

E. Candes and T. Tao. Near optimal signal recovery from random projections: universal encoding strategies. IEEE Transactions on Information Theory, 52(12):5406–5425, 2006.

E. Candes and J. Romberg. Sparsity and incoherence in compressive sampling. Inverse Problems, 23(3):969–985, 2007.

35 / 35