Sparse and Redundant Representations
Ph. Grohs
ETH Zurich, Seminar for Applied Mathematics
09-27-2012
Introduction

Basic Theme
Let A ∈ R^{n×m} with n < m. We would like to solve the linear system

    Ax = b,  x ∈ R^m,  b ∈ R^n.

With fewer equations than unknowns, this is clearly not uniquely solvable! The representation of b is redundant!

Ph. Grohs 09-27-2012 p. 2
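A quick numerical illustration of this non-uniqueness (a sketch in NumPy; the matrix and right-hand side are made up for the example): any particular solution plus any null-space vector of A solves the system just as well.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
n, m = 3, 6                          # n < m: fewer equations than unknowns
A = rng.standard_normal((n, m))
b = rng.standard_normal(n)

# One particular solution (via the pseudoinverse).
x_p = np.linalg.pinv(A) @ b

# Any vector in the null space of A can be added without changing Ax.
N = null_space(A)                    # m x (m - n) orthonormal basis
x_other = x_p + N @ rng.standard_normal(m - n)

print(np.allclose(A @ x_p, b))      # True: x_p solves the system
print(np.allclose(A @ x_other, b))  # True: so does the perturbed vector
print(np.allclose(x_p, x_other))    # False: the two solutions differ
```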
Introduction

X-Ray CT
Mathematically, the measurements are line integrals (the Radon transform)

    Rf(r, θ) = ∫_L f(x, y) dl.

They are only available for a few angles θ. How can f be reconstructed?

Ph. Grohs 09-27-2012 p. 3
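Discretely, one projection can be approximated by rotating the image and summing along one axis (a sketch under that assumption; the 64×64 "phantom" here is made up for the example):

```python
import numpy as np
from scipy.ndimage import rotate

# A tiny test phantom: a square of constant density.
f = np.zeros((64, 64))
f[24:40, 24:40] = 1.0

def radon_projection(img, theta_deg):
    """Approximate the line integrals Rf(., theta): rotate the image by
    theta and sum along one axis (a Riemann-sum approximation)."""
    rotated = rotate(img, theta_deg, reshape=False, order=1)
    return rotated.sum(axis=0)

# "A few angles" of measurements, as in the CT setting.
angles = [0.0, 45.0, 90.0]
sinogram = np.stack([radon_projection(f, a) for a in angles])

# Sanity check: every projection integrates to (roughly) the total mass of f.
print([float(p.sum()) for p in sinogram])
```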
Introduction

Projection-Slice Theorem
Theorem. F_r Rf(ω, θ) = f̂(ω cos θ, ω sin θ).

A measurement Rf(r, θ) for a fixed angle θ gives us access to the Fourier transform of f along a ray with polar angle θ.

For 22 different angles we get the Fourier transform of f ∈ R^{256×256} on the rays in the picture. This leads to a redundant linear system for f!

Ph. Grohs 09-27-2012 p. 4
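The theorem can be checked numerically in the discrete setting. For θ = 0 the projection is a plain axis sum, and the corresponding "slice" is the zero-frequency row of the 2D FFT:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal((256, 256))   # any discrete image

# Projection along the second axis (theta = 0): p(x) = sum_y f(x, y).
p = f.sum(axis=1)

# Projection-slice: the 1D Fourier transform of the projection equals
# the k_y = 0 slice of the 2D Fourier transform of f.
lhs = np.fft.fft(p)
rhs = np.fft.fft2(f)[:, 0]
print(np.allclose(lhs, rhs))   # True
```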
Introduction

Solution Strategies I: Pseudoinverse
A common strategy to solve Ax = b with A ∈ R^{n×m}, n < m, is to use the Moore-Penrose pseudoinverse, which yields the solution

    x* = A†b = argmin_{Ax=b} ‖x‖_{ℓ2}.

Let's try it for the X-ray problem on a typical benchmark, the Shepp-Logan phantom.

Ph. Grohs 09-27-2012 p. 5
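The minimum-norm property of x* = A†b is easy to see numerically (a sketch with made-up data): x* lies in the row space of A, so adding any null-space component can only increase the ℓ2-norm.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
n, m = 5, 12                       # underdetermined system, n < m
A = rng.standard_normal((n, m))
b = rng.standard_normal(n)

# Minimum-l2-norm solution via the Moore-Penrose pseudoinverse.
x_star = np.linalg.pinv(A) @ b
assert np.allclose(A @ x_star, b)  # it really solves the system

# Any other solution differs by a null-space vector, which is
# orthogonal to x_star and therefore strictly increases the norm.
N = null_space(A)
x_alt = x_star + N @ rng.standard_normal(m - n)
print(np.linalg.norm(x_star) < np.linalg.norm(x_alt))  # True
```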
Introduction

Solution
(figure not included in the transcript)

Ph. Grohs 09-27-2012 p. 6

Introduction

Solution
(figure not included in the transcript)

Ph. Grohs 09-27-2012 p. 7
Introduction

Solution Strategies II: TV-regularization
The pseudoinverse approach seeks the solution with the smallest ℓ2-norm. Why should the sought solution have this property?

Define the discrete gradient of an image f ∈ R^{256×256}:

    D_h f(i, j) = f(i+1, j) − f(i, j) if i < 256, and 0 if i = 256,
    D_v f(i, j) = f(i, j+1) − f(i, j) if j < 256, and 0 if j = 256,
    Df(i, j) = (D_h f(i, j), D_v f(i, j)).

The vertical gradient of the image is sparse! So instead of searching for the solution with minimal ℓ2-norm, look for the solution with the sparsest gradient!

Ph. Grohs 09-27-2012 p. 8
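The discrete gradient above is straightforward to implement. A sketch in NumPy on a made-up piecewise-constant test image, showing that the gradient of such an image is indeed sparse:

```python
import numpy as np

def Dh(f):
    """Horizontal differences f(i+1,j) - f(i,j), zero in the last row."""
    out = np.zeros_like(f)
    out[:-1, :] = f[1:, :] - f[:-1, :]
    return out

def Dv(f):
    """Vertical differences f(i,j+1) - f(i,j), zero in the last column."""
    out = np.zeros_like(f)
    out[:, :-1] = f[:, 1:] - f[:, :-1]
    return out

# Piecewise-constant image: differences vanish everywhere except
# along the boundaries of the constant regions.
f = np.zeros((256, 256))
f[64:192, 64:192] = 1.0

grad_nonzeros = np.count_nonzero(Dh(f)) + np.count_nonzero(Dv(f))
print(grad_nonzeros, f.size)   # a few hundred nonzeros out of 65536 pixels
```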
Introduction

Solution Strategies II: TV-regularization
Following these ideas we solve the sparse optimization problem

    x_0 = argmin_{Ax=b} ‖Dx‖_0,  where ‖Dx‖_0 := #{(i, j) : Dx(i, j) ≠ 0}.

Unfortunately, this optimization problem is NP-hard.

Relax, and solve instead

    x_1 = argmin_{Ax=b} ‖Dx‖_1,  where ‖Dx‖_1 := ∑_{i,j} √(D_h x(i, j)² + D_v x(i, j)²) = ∑_{i,j} ‖Dx(i, j)‖_2.

Happily, this can be recast as an SOCP (Second-Order Cone Program)

    min_{t,x} ∑_{i,j} t(i, j)   s.t.   Ax = b,  ‖Dx(i, j)‖_2 ≤ t(i, j) for all (i, j),

and solved efficiently.

Ph. Grohs 09-27-2012 p. 9
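In 1D the relaxed problem becomes a plain linear program, which makes the epigraph trick t ≥ |Dx| concrete. A sketch with scipy.optimize.linprog (the piecewise-constant signal and Gaussian measurement matrix are made up for the example; this is an illustration, not the 2D SOCP from the slide):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, m = 30, 14                          # signal length, number of measurements
x_true = np.zeros(n)
x_true[8:20] = 1.0                     # piecewise constant => sparse gradient
A = rng.standard_normal((m, n))
b = A @ x_true

# (n-1) x n forward-difference matrix: (Dx)_i = x_{i+1} - x_i.
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]

# Epigraph formulation over z = (x, t): minimize sum(t)
# subject to Ax = b, Dx <= t, -Dx <= t  (so t_i >= |(Dx)_i|).
c = np.concatenate([np.zeros(n), np.ones(n - 1)])
A_eq = np.hstack([A, np.zeros((m, n - 1))])
A_ub = np.vstack([
    np.hstack([ D, -np.eye(n - 1)]),
    np.hstack([-D, -np.eye(n - 1)]),
])
b_ub = np.zeros(2 * (n - 1))
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * (2 * n - 1))

x_hat = res.x[:n]
tv = lambda x: np.abs(D @ x).sum()
# The minimizer is feasible, and its TV cannot exceed that of the
# (feasible) true signal.
print(res.success, np.allclose(A @ x_hat, b, atol=1e-5),
      tv(x_hat) <= tv(x_true) + 1e-6)
```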
Introduction

Solution
(figure not included in the transcript)

The original image is reconstructed exactly!

Ph. Grohs 09-27-2012 p. 10
Introduction
Summary
A redundant system Ax = b can be solved exactly when the solution x is sparse.
Ph. Grohs 09-27-2012 p. 11
Introduction
Learning Objectives
Ability to...
◦ understand the theoretical analysis of sparse optimization algorithms,
◦ critique current research publications,
◦ implement basic models and methods in signal processing,
◦ summarize and explain research publications in the field of sparse optimization.
Ph. Grohs 09-27-2012 p. 12
Introduction
Main Literature Source
M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing, Springer, 2010.
Ph. Grohs 09-27-2012 p. 13
Introduction
Modus Operandi
Interactivity!
Everybody is required to
...give a ∼50-minute talk on 1-2 chapters of Elad's book
...attend the talks of others
...critique and discuss the talks of others (each time with a different focus, e.g., logical coherence, body language, slides, speech)
Ph. Grohs 09-27-2012 p. 14
Introduction
Hints
When giving a talk
...practice the right speed
...think thoroughly about the balance of detail vs. big picture
...keep a "red thread" (a clear through-line) running through the talk
...give a take-home message
Ph. Grohs 09-27-2012 p. 15