
Page 1

A Weighted Average of Sparse Representations is Better than the Sparsest One Alone

Michael Elad and Irad Yavneh

SIAM Conference on Imaging Science ’08

Presented by Dehong Liu

ECE, Duke University

July 24, 2009

Page 2

Outline

• Motivation
• A mixture of sparse representations
• Experiments and results
• Analysis
• Conclusion

Page 3

Motivation

• Noise removal problem: y = x + v, where y is the measured signal, x is the clean signal, and v is zero-mean i.i.d. Gaussian noise.

• Sparse representation: x = Dα, where D ∈ R^(n×m) with n < m, and α is a sparse vector.

• Compressive sensing problem

• Orthogonal Matching Pursuit (OMP)

Sparsest representation

• Question: "Does this mean that other competitive and slightly inferior sparse representations are meaningless?"

Page 4

A mixture of sparse representations

• How to generate a set of sparse representations? – Randomized OMP (RandOMP)

• How to fuse these sparse representations? – Plain averaging

Page 5

OMP algorithm
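The algorithm box on this slide did not survive the transcript. As a stand-in, here is a minimal NumPy sketch of standard OMP with a residual-energy stopping threshold; the function and variable names are mine, not taken from the slides.

```python
import numpy as np

def omp(D, y, threshold):
    """Greedy OMP: pick the atom most correlated with the residual, re-fit by
    least squares on the chosen support, and stop when the residual energy
    falls below the threshold (or the support can grow no further)."""
    n, m = D.shape
    alpha = np.zeros(m)
    support, coeffs = [], np.zeros(0)
    residual = y.astype(float).copy()
    while residual @ residual > threshold and len(support) < n:
        correlations = D.T @ residual
        support.append(int(np.argmax(np.abs(correlations))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    alpha[support] = coeffs
    return alpha
```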

Page 6

Randomized OMP
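The body of this slide is also missing. The sketch below illustrates the randomized selection step: instead of always taking the most correlated atom, each iteration draws the next atom with probability that grows with its correlation to the residual. The constant `c` in the exponential weight is a placeholder; in the paper the weighting involves the signal and noise variances, so treat this as the idea rather than the exact rule.

```python
import numpy as np

def rand_omp(D, y, threshold, c=1.0, rng=None):
    """Like OMP, but the next atom is drawn at random with probability
    proportional to exp(c * correlation^2) instead of chosen greedily."""
    rng = np.random.default_rng() if rng is None else rng
    n, m = D.shape
    alpha = np.zeros(m)
    support, coeffs = [], np.zeros(0)
    residual = y.astype(float).copy()
    while residual @ residual > threshold and len(support) < n:
        scores = c * (D.T @ residual) ** 2
        weights = np.exp(scores - scores.max())   # shift for numerical stability
        weights[support] = 0.0                    # never re-draw a chosen atom
        support.append(int(rng.choice(m, p=weights / weights.sum())))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    alpha[support] = coeffs
    return alpha
```

Because the draws differ from run to run, repeated calls produce different supports, which is what the averaging step exploits.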

Page 7

Experiments and results

Model:

• y = x + v = Dα + v
• D: a 100×200 random dictionary with entries drawn from N(0,1) and columns then normalized
• α: a random representation with k = 10 non-zeros chosen at random, with values drawn from N(0,1)
• v: white Gaussian noise with entries drawn from N(0,1)
• Noise threshold in the OMP algorithm: T = 100 (??)
• Run OMP once, and RandOMP 1000 times.
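A sketch of this experiment under the reading above (names are mine; it reuses the `omp` and `rand_omp` sketches from the earlier slides and interprets T = 100 as the residual-energy threshold n·σ² with n = 100 and σ = 1):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 200, 10

# Random dictionary with N(0,1) entries and normalized columns.
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)

# Representation with k random non-zeros, clean signal, and noisy measurement.
alpha = np.zeros(m)
alpha[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
x = D @ alpha
y = x + rng.standard_normal(n)

T = 100.0  # residual-energy stopping threshold

alpha_omp = omp(D, y, T)
alpha_avg = np.mean([rand_omp(D, y, T, rng=rng) for _ in range(1000)], axis=0)

for name, a in [("OMP", alpha_omp), ("RandOMP average", alpha_avg)]:
    print(name, "denoising factor:", np.sum((D @ a - x) ** 2) / np.sum((y - x) ** 2))
```

The averaged vector `alpha_avg` comes out dense, consistent with the later observation that the average over 1000 RandOMP representations is not sparse at all.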

Page 8

Observations

[Figure: four panels comparing RandOMP and OMP. Histograms of cardinality, representation error, and noise attenuation (denoising), plus a plot of noise attenuation versus cardinality.]

Page 9

Sparse vector reconstruction

[Figure: representation value versus index, overlaying the averaged RandOMP representation, the original representation, and the OMP representation.]

The average representation over 1000 RandOMP representations is not sparse at all.

Page 10

Denoising factor based on 1000 experiments

[Figure: RandOMP denoising factor versus OMP denoising factor, one point per experiment, with the mean point marked.]

Denoising factor = ||x̂ - x||² / ||y - x||², where x̂ = Dα̂ is the reconstructed signal.

Run RandOMP 100 times for each experiment.
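A small helper matching that definition (the name is mine; `x_hat` stands for the reconstructed signal Dα̂), the same quantity printed at the end of the experiment sketch earlier:

```python
import numpy as np

def denoising_factor(x_hat, x, y):
    # Error energy of the estimate relative to the energy of the input noise;
    # values below 1 mean the estimate is closer to the clean signal than y is.
    return np.sum((x_hat - x) ** 2) / np.sum((y - x) ** 2)
```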

Page 11

Performance with different parameters

Page 12

Analysis

Averaging the RandOMP representations gives an approximation of the Minimum Mean Squared Error (MMSE) estimate.
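In rough terms (the notation below is a paraphrase, not copied from the slides): the MMSE estimate of the representation conditions on the noisy measurement y and averages the per-support estimates, weighted by their posterior probabilities,

α̂_MMSE = E[α | y] = Σ_S P(S | y) · α̂_S,

where S runs over candidate supports and α̂_S is the estimate restricted to support S. RandOMP draws supports with probabilities that roughly track P(S | y), so a plain average of many RandOMP solutions mimics this weighted sum, while plain OMP commits to a single support.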

Page 13

These results correspond to a 20×30 dictionary. Parameters: true support = 3, σ_x = 1, averaged over 1000 experiments.

Comparison

[Figure: relative mean-squared error for eight estimators: empirical and theoretical Oracle, empirical and theoretical MMSE, empirical and theoretical MAP, OMP, and RandOMP.]

Page 14

Conclusion

• The paper shows that averaging several sparse representations of a signal leads to better denoising than using the sparsest one alone, since the average approximates the MMSE estimator.