Chapter 3: Regularization
Ill-Posed Problems in Image and Signal Processing, WS 2014/2015

Michael Moeller
Optimization and Data Analysis
Department of Mathematics, TU München

updated 18.11.2014

Outline
• Observations from previous chapter
• Regularization
  • General idea
  • Definition and Properties
  • Error-free parameter choice
  • Behavior on Y\D(A†)
  • Construction based on SVD
  • Examples
• Convergence
  • Without source conditions
  • With source conditions
• Generalizations
A little summary

What did we learn so far?
• No solution exists → least squares solution.
• Solution not unique → minimal norm solution.
• Linear operator for the above → Moore-Penrose inverse.
• Third criterion for well-posedness? → A† continuous.
• A† continuous ⇔ R(A) closed.
• A compact, R(A) infinite dimensional ⇒ A† not continuous.
• Integral equation with Hilbert-Schmidt kernel ⇒ A compact.
A little summary

What did we learn so far?
• Compact, self-adjoint linear operators have an eigendecomposition.
• If A is compact, A∗A and AA∗ are compact and self-adjoint.
• We can develop the singular value decomposition

  Ax = ∑_{n∈I} σ_n ⟨x, u_n⟩ v_n

  for a countable index set I.
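In finite dimensions this is just the matrix SVD. A minimal numpy sketch (the matrix and its size are made up for illustration) checking the expansion Ax = ∑ σ_n ⟨x, u_n⟩ v_n; note that numpy writes A = U diag(s) Vᵀ, so the slides' u_n correspond to the rows of numpy's Vt and the v_n to the columns of numpy's U:

```python
import numpy as np

# Check Ax = sum_n sigma_n <x, u_n> v_n on a random matrix.
# numpy convention: A = U @ diag(s) @ Vt, so the slides' u_n are the
# rows of Vt (basis of X) and the slides' v_n the columns of U (basis of Y).

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
x = rng.standard_normal(4)

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Sum the dyadic expansion term by term.
Ax_svd = sum(s[n] * (Vt[n] @ x) * U[:, n] for n in range(len(s)))
assert np.allclose(A @ x, Ax_svd)
```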
A little summary

What did we learn so far?
• The Moore-Penrose inverse can be expressed as

  A†y = ∑_{n∈I} (1/σ_n) ⟨y, v_n⟩ u_n

• The speed with which σ_n → 0 classifies the ill-posedness:
  → Mildly ill-posed (at most O(1/n)),
  → Moderately ill-posed (at most polynomial, O(1/n^γ)),
  → Severely ill-posed (faster than polynomial).
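A small numerical sketch of this formula and of why the decay matters (the sizes and the "moderately ill-posed" decay σ_n = 1/n² are illustrative assumptions): applying A† via the SVD sum recovers x exactly from exact data, but a perturbation of y along the singular direction belonging to σ_n is amplified by 1/σ_n in the reconstruction:

```python
import numpy as np

# Build A = U diag(sigma) V^T with a moderately ill-posed decay sigma_n = 1/n^2.
# In the slides' notation the v_n live in Y (columns of U here) and the
# u_n live in X (columns of V here).
rng = np.random.default_rng(0)
n = 6
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 1.0 / np.arange(1, n + 1) ** 2
A = U @ np.diag(sigma) @ V.T

y = A @ np.ones(n)

def pinv_apply(z):
    """A† z = sum_k (1/sigma_k) <z, v_k> u_k, summed over the SVD system."""
    return sum((U[:, k] @ z) / sigma[k] * V[:, k] for k in range(n))

assert np.allclose(pinv_apply(y), np.ones(n))    # exact data: x recovered

# Noise of size eps along the last singular direction in Y is amplified
# by 1/sigma_n = n^2 = 36 in the reconstruction.
eps = 1e-3
err = np.linalg.norm(pinv_apply(y + eps * U[:, -1]) - np.ones(n))
assert np.isclose(err, eps / sigma[-1])
```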
Fighting the ill-posedness
What can we do?
Making a plan...

In finite dimensions:
• Symmetric positive definite matrix A.
• Approximate A⁻¹ = A† by Rα.
• Rα inverts a modified version of A.

  ‖A⁻¹y − Rα yδ‖ ≤ ‖A⁻¹y − Rα y‖ + ‖Rα y − Rα yδ‖

  where the first term on the right is the approximation error and the second the data error.
• Choose the regularization parameter α depending on the noise.
• The smaller the noise, the smaller the α.
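A concrete sketch of this error splitting (the choice Rα = (A + αI)⁻¹ and the Hilbert-matrix test problem are assumptions for illustration, not the only option): for an SPD matrix, a small α gives a small approximation error but a large data error, and vice versa.

```python
import numpy as np

n = 8
# Hilbert matrix: symmetric positive definite but extremely ill-conditioned.
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
y = A @ x_true

rng = np.random.default_rng(0)
delta = 1e-6
y_delta = y + delta * rng.standard_normal(n)

def R(alpha):
    """R_alpha inverts the modified operator A + alpha*I."""
    return np.linalg.inv(A + alpha * np.eye(n))

def errors(alpha):
    approx = np.linalg.norm(x_true - R(alpha) @ y)             # ||A^{-1}y - R_a y||
    data = np.linalg.norm(R(alpha) @ y - R(alpha) @ y_delta)   # ||R_a y - R_a y_delta||
    return approx, data

# Tiny alpha: small approximation error, huge data error; a larger alpha
# trades the two off against each other.
for alpha in (1e-12, 1e-8, 1e-4):
    a_err, d_err = errors(alpha)
    print(f"alpha={alpha:.0e}: approx={a_err:.2e}, data={d_err:.2e}")
```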
Making a plan...

In infinite dimensions:
• y = Ax for A ∈ L(X,Y) compact, dim(R(A)) = ∞.
• We are given yδ ∈ Y and know ‖y − yδ‖ ≤ δ.
• Goal: Find x.
• Problems:
  • A†yδ might not even be defined! yδ ∈ Y ≠ D(A†).
  • A† is discontinuous → ‖A†yδ − x‖ can be arbitrarily large!
Making a plan...

Idea: Try something similar to the finite dimensional case!
• Define a family {Rα} of continuous operators Rα : Y → X.
• Index α ∈ I ⊂ ]0, α0[.
• We need: Rαy → A†y for α → 0.
• Careful: there are y ∈ Y\D(A†) with ‖Rαy‖ → ∞!
• Desired: α = α(δ, yδ) with Rα(δ,yδ) yδ → A†y as δ → 0.
• Should work for all y ∈ D(A†), yδ ∈ Y.
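The blow-up on Y\D(A†) can be seen numerically in a truncated sequence-space model (the decay σ_n = 1/n², the data y_n = 1/n, and the truncation level are hypothetical choices): for this y the Picard series ∑ (y_n/σ_n)² = ∑ n² diverges, so y lies outside D(A†) in the infinite-dimensional limit, and the truncated-SVD reconstructions Rα y grow without bound as α → 0.

```python
import numpy as np

# Diagonal model operator with singular values sigma_n = 1/n^2 and
# data coefficients y_n = 1/n, truncated to N terms.
N = 10_000
n = np.arange(1, N + 1)
sigma = 1.0 / n**2
y = 1.0 / n

def tsvd_norm(alpha):
    """||R_alpha y|| for truncated SVD: keep components with sigma_n >= alpha."""
    keep = sigma >= alpha
    return np.linalg.norm(y[keep] / sigma[keep])

# ||R_alpha y|| grows as alpha -> 0, reflecting y outside D(A†).
norms = [tsvd_norm(a) for a in (1e-2, 1e-4, 1e-6)]
print(norms)
```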
Regularization

Definition: Regularization

Let A ∈ L(X,Y), and for every α ∈ ]0, α0[ let

  Rα : Y → X

be a continuous operator. The family {Rα}α∈I is called a regularization (or regularization operator) for A† if for all y ∈ D(A†) there exists a parameter choice rule α = α(δ, yδ), α : ℝ⁺ × Y → I, such that

  lim_{δ→0} sup{ ‖Rα(δ,yδ) yδ − A†y‖ : yδ ∈ Y, ‖y − yδ‖ ≤ δ } = 0,   (1)

and

  lim_{δ→0} sup{ α(δ, yδ) : yδ ∈ Y, ‖y − yδ‖ ≤ δ } = 0.   (2)

For a specific y ∈ D(A†), the pair (Rα, α) is called a (convergent) regularization method for Ax = y if (1) and (2) hold.

Some considerations on the board.
Regularization

Proposition: Pointwise convergence 1

Let A ∈ L(X,Y) and Rα : Y → X be a family of continuous operators with α ∈ ℝ⁺. If

  Rαy → A†y

for all y ∈ D(A†) as α → 0, then there exists an a-priori parameter choice rule α = α(δ) such that (Rα, α) is a convergent regularization method.

Proposition: Pointwise convergence 2

If (Rα, α) is a regularization method with continuous parameter choice rule α(δ, yδ), then the {Rα} converge pointwise to A† on D(A†), i.e.

  Rαy → A†y

for α → 0.
Regularization

Assume we have a family of Rα ∈ L(Y,X) that converges pointwise to A† on D(A†).

What kind of a-priori parameter choice rules work?

Proposition: A-priori parameter choice rules

Let Rβ ∈ L(Y,X) be a family of operators such that Rβ → A† pointwise for β → 0. Then an a-priori parameter choice rule α = α(δ) makes (Rα, α) a regularization method if and only if

  lim_{δ→0} α(δ) = 0  and  lim_{δ→0} δ ‖Rα(δ)‖ = 0.

Proof: Board.
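As a sketch of such a rule (everything here is an illustrative assumption: truncated SVD as Rα, the decay σ_n = 1/n², the rule α(δ) = √δ): truncated SVD keeps only singular values σ_n ≥ α, so ‖Rα‖ ≤ 1/α and the choice α(δ) = √δ satisfies both conditions, since α(δ) → 0 and δ‖Rα(δ)‖ ≤ δ/α(δ) = √δ → 0. The reconstruction error is then bounded by √δ for this test problem.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 1.0 / np.arange(1, n + 1) ** 2        # moderately ill-posed decay
A = U @ np.diag(sigma) @ V.T

x_true = V[:, 0] + 0.5 * V[:, 1]              # ground truth on the largest modes
y = A @ x_true

def R(alpha, z):
    """Truncated SVD: invert only singular values >= alpha, so ||R|| <= 1/alpha."""
    keep = sigma >= alpha
    return V[:, keep] @ ((U[:, keep].T @ z) / sigma[keep])

errs = []
for delta in (1e-2, 1e-4, 1e-6):
    noise = rng.standard_normal(n)
    y_delta = y + delta * noise / np.linalg.norm(noise)   # ||y - y_delta|| = delta
    alpha = np.sqrt(delta)                                # a-priori rule alpha(delta)
    errs.append(np.linalg.norm(R(alpha, y_delta) - x_true))
# Each error is bounded by delta/alpha(delta) = sqrt(delta) in this setup.
print(errs)
```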
Parameter choice rules

What kind of parameter choice rules work?
• Seen: if Rαy → A†y, then a-priori choices α = α(δ) work.
• Possible improvements α = α(δ, yδ)?
• Tempting in practice: α = α(yδ). Can this work?

Theorem: Data-dependent parameter choice impossible

Let A ∈ L(X,Y) and let {Rα} be a regularization for A† with a parameter choice rule α that depends on yδ only (and not on δ), such that the regularization method converges for every y ∈ D(A†). Then A† is continuous.

Proof: Board.

Remark: This makes life difficult for applications!
Behavior of Rα
What happens on Y\D(A†)?
Theorem: Banach-Steinhaus
Let (Rn)_{n∈ℕ}, Rn ∈ L(Y, X), be a sequence. The Rn converge pointwise to an operator R ∈ L(Y, X) if and only if the following two conditions are met:
• The sequence (‖Rn‖)_{n∈ℕ} is bounded in ℝ.
• There exists a dense subset Y₀ ⊂ Y such that for every y₀ ∈ Y₀, (Rn y₀)_{n∈ℕ} converges in X.

Conclusion

If A ∈ L(X, Y), R(A) is not closed, and Rα converges pointwise to A† on D(A†), then ‖Rα‖ → ∞ as α → 0.
Behavior of Rα
What happens on Y\D(A†)?
Theorem: Uniform boundedness theorem
If

  sup{ ‖Rα y‖ | α ∈ ]0, ᾱ[ } < ∞  for all y ∈ Y,

then

  sup{ ‖Rα‖ | α ∈ ]0, ᾱ[ } < ∞.

Conclusion

If A ∈ L(X, Y), R(A) is not closed, and Rα converges pointwise to A† on D(A†), then there exists a y ∈ Y\D(A†) for which ‖Rα y‖ → ∞ as α → 0.
Behavior of Rα
What happens on Y\D(A†)?
Theorem: Divergence on Y\D(A†)
Let A ∈ L(X, Y) be compact, dim(R(A)) = ∞, and let Rα ∈ L(Y, X) be a family of regularization operators. If

  sup_{α>0} ‖A Rα‖ < ∞,

then ‖Rα y‖ → ∞ for y ∉ D(A†).

Proof: Requires some functional analysis and is given in Engl, Hanke, Neubauer, Regularization of Inverse Problems, 1996.
Construction of regularizations
How can we actually construct regularizations?

• A ∈ L(X, Y) compact, dim(range(A)) = ∞,

  A†y = ∑_{n=1}^∞ (1/σn) ⟨vn, y⟩ un  for y ∈ D(A†).

• Unboundedness due to σn → 0.
• We need operators Rα that are
  • defined on Y,
  • continuous,
  • pointwise convergent to A†.
• Idea:

  Rα y = ∑_{n=1}^∞ gα(σn) ⟨vn, y⟩ un.
Construction of regularizations
SVD based regularizations
For Rα : Y → X defined by

  Rα y = ∑_{n=1}^∞ gα(σn) ⟨vn, y⟩ un

and a function gα : ℝ⁺ → ℝ⁺ with

  1. gα(σ) → 1/σ for σ > 0 as α → 0,
  2. gα(σ) ≤ Cα < ∞ for all σ > 0,
  3. σ gα(σ) ≤ C < ∞ for all α and σ > 0,

Rα is a regularization for A†.
Proof + considerations on the board.
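To make the construction concrete, here is a minimal numerical sketch (not from the lecture; the matrix sizes, singular-value decay, noise level, and the choice α = 10⁻⁶ are made up for illustration). It builds Rα y = ∑ gα(σn)⟨vn, y⟩un from a finite-dimensional SVD, using the Tikhonov filter gα(σ) = σ/(σ² + α) as one admissible choice of gα:

```python
import numpy as np

def spectral_regularizer(A, g_alpha):
    """y -> R_alpha y = sum_n g_alpha(sigma_n) <v_n, y> u_n.

    numpy convention: A = U diag(s) Vt, so A^dagger y = Vt.T @ ((1/s) * (U.T @ y));
    replacing 1/s by the filter g_alpha(s) gives the regularization operator."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return lambda y: Vt.T @ (g_alpha(s) * (U.T @ y))

rng = np.random.default_rng(0)
n = 40
# Badly conditioned A with singular values 2^0, 2^-1, ..., 2^-39.
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q1 @ np.diag(2.0 ** -np.arange(n)) @ Q2.T

x_true = rng.standard_normal(n)
y_delta = A @ x_true + 1e-8 * rng.standard_normal(n)  # noisy data

x_naive = np.linalg.solve(A, y_delta)  # unregularized: noise amplified by up to 2^39
R = spectral_regularizer(A, lambda s: s / (s**2 + 1e-6))  # Tikhonov filter
x_reg = R(y_delta)

print("naive error:      ", np.linalg.norm(x_naive - x_true))
print("regularized error:", np.linalg.norm(x_reg - x_true))
```

Even though the data noise is tiny, the unfiltered inverse amplifies it along the small singular directions, while the filtered reconstruction stays bounded, which is exactly the point of condition 2 above.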
Recalling Regularization

How can we get a stable estimate for x if we are given yδ = Ax + n, ‖n‖ ≤ δ, and A† is discontinuous?

• Adapt ideas from the finite dimensional case.
• Look for a family of continuous operators Rα : Y → X with

  lim_{δ→0} sup{ ‖R_{α(δ,yδ)} yδ − A†y‖ | ‖y − yδ‖ ≤ δ } = 0.

• This means that Rα y → A†y on D(A†).
• We need a parameter choice rule α:
  • If Rα y → A†y on D(A†), α = α(δ) works.
  • α = α(δ, yδ) can possibly improve the results.
  • α = α(yδ) cannot work in general for ill-posed problems.
• If {Rα}, Rα ∈ L(Y, X), is a regularization, then a particular choice α = α(δ) works if and only if

  lim_{δ→0} α(δ) = 0  and  lim_{δ→0} δ ‖R_{α(δ)}‖ = 0.
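As a concrete sanity check of this last criterion (a sketch using the bound ‖Rα‖ ≤ Cα = 1/(2√α) for Tikhonov regularization that appears later in this chapter), the simple a-priori rule α(δ) = δ qualifies:

```latex
\alpha(\delta) = \delta \;\Rightarrow\;
\lim_{\delta \to 0} \alpha(\delta) = 0
\quad\text{and}\quad
\delta \,\|R_{\alpha(\delta)}\| \le \frac{\delta}{2\sqrt{\delta}}
  = \frac{\sqrt{\delta}}{2} \;\longrightarrow\; 0 .
```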
Recalling Regularization

How do we construct a regularization method?

• For compact linear operators, use the SVD

  A†y = ∑_{n=1}^∞ (1/σn) ⟨y, vn⟩ un

  and construct

  Rα y = ∑_{n=1}^∞ gα(σn) ⟨y, vn⟩ un.

• We need
  1. gα(σ) → 1/σ for σ > 0 as α → 0,
  2. gα(σ) ≤ Cα < ∞ for all σ > 0,
  3. σ gα(σ) ≤ C < ∞ for all α and σ > 0,
  4. for the choice rule: δ C_{α(δ)} → 0.
Truncated SVD

Example: Truncated SVD

  gα(σ):        1/σ if σ ≥ α, 0 otherwise
  Cα:           1/α
  Convergence:  δ/α → 0

[Plot: gα(σ) for α = 0.2 compared with 1/σ]

Pros and cons?
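A tiny numerical sketch of this filter (the singular values and α = 0.2 are made-up illustration values), confirming that it reproduces 1/σ above the cutoff, discards everything below it, and never exceeds Cα = 1/α:

```python
import numpy as np

def g_tsvd(alpha):
    # Truncated SVD filter: g_alpha(sigma) = 1/sigma if sigma >= alpha, else 0.
    # np.maximum keeps the discarded branch free of division by tiny values.
    return lambda s: np.where(s >= alpha, 1.0 / np.maximum(s, alpha), 0.0)

alpha = 0.2
s = np.array([1.0, 0.5, 0.2, 0.1, 0.01])
print(g_tsvd(alpha)(s))  # components with sigma < alpha are cut off entirely
```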
Bringmann-Drizsga regularization

Example: Bringmann-Drizsga regularization

  gα(σ):        1/σ if σ ≥ α, 1/α otherwise
  Cα:           1/α
  Convergence:  δ/α → 0

[Plot: gα(σ) for α = 0.2 compared with 1/σ]

Pros and cons?
Lavrentiev regularization

Example: Lavrentiev regularization

  gα(σ):        1/(σ + α)
  Cα:           1/α
  Convergence:  δ/α → 0

[Plot: gα(σ) for α = 0.15 compared with 1/σ]

Pros and cons?
Tikhonov regularization

Example: Tikhonov regularization

  gα(σ):        σ/(σ² + α)
  Cα:           1/(2√α)
  Convergence:  δ/√α → 0

[Plot: gα(σ) for α = 0.005 compared with 1/σ]

Pros and cons?
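One practical advantage of this filter: applying it never requires an SVD, because the spectral form coincides with the solution of the linear system (A*A + αI)x = A*y, i.e. the minimizer of ‖Ax − y‖² + α‖x‖². A small sketch on made-up random data checking this equivalence:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))
y = rng.standard_normal(8)
alpha = 0.3

# Spectral form: filter g_alpha(sigma) = sigma / (sigma^2 + alpha).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((s / (s**2 + alpha)) * (U.T @ y))

# Variational form: normal equations of ||Ax - y||^2 + alpha ||x||^2.
x_var = np.linalg.solve(A.T @ A + alpha * np.eye(5), A.T @ y)

print("difference:", np.linalg.norm(x_svd - x_var))  # agree up to rounding
```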
Iterative regularization

Example: Landweber regularization

  g_k(σ):       (1 − (1 − τσ²)^k) / σ
  C_k:          More difficult
  Convergence:  More difficult

[Plot: g_k(σ) for k = 20 compared with 1/σ]

Pros and cons?
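The Landweber method is the iteration x^{k+1} = x^k + τ A*(y − A x^k) with x^0 = 0, and the filter g_k(σ) = (1 − (1 − τσ²)^k)/σ in the table describes exactly what k steps of it do in the SVD basis. A sketch on made-up toy data checking that claim:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
y = rng.standard_normal(6)
tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size with tau <= 1/||A||^2
k = 25

# Landweber iteration, starting from x^0 = 0.
x = np.zeros(4)
for _ in range(k):
    x = x + tau * A.T @ (y - A @ x)

# Same result from the spectral filter g_k(sigma) = (1 - (1 - tau*sigma^2)^k)/sigma.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_filter = Vt.T @ (((1 - (1 - tau * s**2) ** k) / s) * (U.T @ y))

print("difference:", np.linalg.norm(x - x_filter))
```

Here the iteration index k plays the role of 1/α: more iterations mean less regularization.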
The discrepancy principle for iterative regularization
Monotonic Improvement
Let γ = 2/(2 − τ‖A‖²) and let x^k_δ be given by the Landweber iteration with noisy data yδ. Let y = Ax†. Then x^{k+1}_δ is a better approximation of x† than x^k_δ as long as ‖A x^k_δ − yδ‖ > γδ.

Discrepancy Principle

The discrepancy principle is the parameter choice rule for the Landweber iteration based on choosing the regularized solution to be x^{k*}_δ with

  k*(δ, yδ) := inf{ k | ‖yδ − A x^k_δ‖ < γδ }.
The discrepancy principle for iterative regularization
Finite Determination
Let y = Ax†. If τ ≤ 1/‖A‖² is fixed, then the discrepancy principle

  k*(δ, yδ) := inf{ k | ‖yδ − A x^k_δ‖ < γδ }

determines a finite stopping index k*(δ, yδ) for the Landweber iteration for any γ > 1, with k*(δ, yδ) ∈ O(δ⁻²).
Proof: Board.
One can show that the discrepancy principle makes theLandweber iteration a convergent regularization method.
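A sketch of the rule in action (a made-up toy problem; the sizes, the singular-value range, δ = 10⁻³, and γ = 1.1 > 1 are illustration choices): run Landweber on noisy data and stop the first time the residual drops below γδ.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q1 @ np.diag(np.linspace(1.0, 0.1, n)) @ Q2.T   # ||A|| = 1 by construction

x_true = rng.standard_normal(n)
delta = 1e-3
noise = rng.standard_normal(n)
y_delta = A @ x_true + delta * noise / np.linalg.norm(noise)  # ||y - y_delta|| = delta

tau = 1.0      # tau <= 1/||A||^2, as the theorem requires
gamma = 1.1    # any gamma > 1 works

x = np.zeros(n)
for k in range(100_000):   # the theorem guarantees a finite stopping index
    if np.linalg.norm(y_delta - A @ x) < gamma * delta:
        break
    x = x + tau * A.T @ (y_delta - A @ x)

print("stopping index k* =", k,
      " reconstruction error:", np.linalg.norm(x - x_true))
```

Note that the rule only needs δ and the observed residuals, not the unknown exact data y.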
Further Questions
We have seen that regularization can yield continuous dependence of our reconstruction on the data, in the sense that

  lim_{δ→0} sup{ ‖R_{α(δ,yδ)} yδ − A†y‖ | ‖y − yδ‖ ≤ δ } = 0.

But can we say something about the rate of convergence? In particular, can we show

  ‖R_{α(δ)}(yδ) − A†y‖ ≤ C δ^ν

for some ν?
How fast do regularization methods converge?
Convergence of SVD based methods
Let us stay in the setting of compact operators and regularizations defined via

  Rα yδ = ∑_{n∈I} gα(σn) ⟨yδ, vn⟩ un.

We do our usual estimate

  ‖x† − Rα yδ‖ ≤ ‖x† − Rα y‖ + Cα δ.

The convergence speed depends on two factors, one being

  ‖x† − Rα y‖ = √( ∑_{n∈I} (gα(σn) − 1/σn)² |⟨y, vn⟩|² )
              = √( ∑_{n∈I} (σn gα(σn) − 1)² |⟨x†, un⟩|² ),

where the second equality uses ⟨y, vn⟩ = σn ⟨x†, un⟩.
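The other factor is the noise term Cα δ in the estimate above; it follows in one line from property 2 of the filter and the orthonormality of the (vn):

```latex
\|R_\alpha y - R_\alpha y^\delta\|^2
  = \sum_{n\in I} g_\alpha(\sigma_n)^2 \,|\langle y - y^\delta, v_n\rangle|^2
  \le C_\alpha^2 \sum_{n\in I} |\langle y - y^\delta, v_n\rangle|^2
  \le C_\alpha^2\,\delta^2 .
```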
Convergence of SVD based methods
We have

  ‖x† − Rα y‖ = √( ∑_{n∈I} (σn gα(σn) − 1)² |⟨x†, un⟩|² ).

In particular, if x† = un then

  ‖x† − Rα y‖ = |σn gα(σn) − 1|.
Convergence of SVD based methods
Consider the convergence of

  |σn gα(σn) − 1|.

• Good: For fixed n, for all ε > 0 there exists an α₀ small enough such that

  |σn gα(σn) − 1| ≤ ε  for all α ≤ α₀.

  This is the reason why Rα is a regularization.

• Bad: For fixed α, for all ε > 0 there exists an N large enough such that

  |σn gα(σn) − 1| ≥ 1 − ε  for all n ≥ N.

  Reason: gα(σ) is bounded and σn → 0.
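Both bullets can be seen explicitly for the Tikhonov filter gα(σ) = σ/(σ² + α):

```latex
|\sigma_n g_\alpha(\sigma_n) - 1|
  = \Bigl|\frac{\sigma_n^2}{\sigma_n^2+\alpha} - 1\Bigr|
  = \frac{\alpha}{\sigma_n^2+\alpha}
  \;\to\; 0 \ \ (\alpha \to 0,\ n \text{ fixed}),
  \qquad
  \;\to\; 1 \ \ (n \to \infty,\ \alpha \text{ fixed}) .
```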
Convergence of SVD based methods

Conclusion
Without further assumptions, the convergence of a regularization method can be arbitrarily slow!

What can we assume?
The convergence depended on

∑_{n∈I} (σngα(σn) − 1)² |〈x†,un〉|².

We need additional information about the decay of |〈x†,un〉|²!
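The role of the coefficient decay can be illustrated numerically: with the Tikhonov filter as an example, fast decay of |〈x†,un〉| gives fast error decay in α, while slow (but still square-summable) decay gives much slower convergence. All sequences below are made-up assumptions:

```python
import numpy as np

N = 2000
n = np.arange(1, N + 1)
sigma = 1.0 / n  # hypothetical singular values

def error(coeff, alpha):
    g = sigma / (sigma**2 + alpha)  # Tikhonov filter, as one example
    return np.sqrt(np.sum((sigma * g - 1)**2 * coeff**2))

fast = 1.0 / n**3    # rapidly decaying |<x^+, u_n>|
slow = 1.0 / n**0.6  # slowly decaying (still square-summable)

for alpha in (1e-2, 1e-4, 1e-6):
    print(alpha, error(fast, alpha), error(slow, alpha))
```
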
Convergence of SVD based methods

Definition: Source-condition
We say that the minimal norm solution x† satisfies a source-condition of order µ if there exists a w ∈ X such that x† = (A∗A)^µ w, where the latter is defined as

(A∗A)^µ w := ∑_{n=1}^∞ σn^{2µ} 〈w,un〉 un.

Assume that x† satisfies a source-condition of order µ, and for our SVD based regularization let there exist a function ωµ(α) such that

t^{2µ} |t gα(t) − 1| ≤ ωµ(α).

Then it holds that

‖Rαy − A†y‖ ≤ ωµ(α)‖w‖.
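In finite dimensions, (A∗A)^µ can be formed directly from an SVD via the definition above. The following sketch (the matrix and vector are random placeholders) checks that µ = 1 reproduces the ordinary product A∗A, and that the power µ = 1/2 applied twice does the same:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))

# Reduced SVD: the rows of Vt are the discrete counterparts of the u_n.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

def AstarA_power(mu, w):
    # (A^*A)^mu w = sum_n sigma_n^(2 mu) <w, u_n> u_n
    return Vt.T @ (s**(2 * mu) * (Vt @ w))

w = rng.standard_normal(6)

# mu = 1 must coincide with the ordinary product A^T A w.
diff = np.linalg.norm(AstarA_power(1.0, w) - A.T @ (A @ w))
print(diff)
```
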
Convergence of SVD based methods

Proof, considerations and the example of the truncated SVD on the board.

• We get
‖Rαyδ − A†y‖ ≤ ωµ(α)‖w‖ + δ C(α),
where C(α) bounds the norm of Rα.
• Typically we can get some rate of decay like ωµ(α) = Cα^µ for µ ≤ µ0.
• This allows us to derive a convergence rate!
• The optimal order of convergence is δ^{2µ/(2µ+1)}.
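The trade-off behind the optimal order can be sketched numerically. Assuming the Tikhonov-type behavior ωµ(α) ~ α^µ and a noise term ~ δ/√α (constants dropped), the minimizing α and the minimal total error follow the predicted powers of δ:

```python
import numpy as np

mu, delta = 1.0, 1e-4

alphas = np.logspace(-8, 0, 400)
total = alphas**mu + delta / np.sqrt(alphas)  # approximation + noise term

alpha_opt = alphas[np.argmin(total)]
err_opt = total.min()

# Predicted orders: alpha ~ delta^(2/(2mu+1)), error ~ delta^(2mu/(2mu+1)).
print(alpha_opt, delta**(2 / (2 * mu + 1)))
print(err_opt, delta**(2 * mu / (2 * mu + 1)))
```

For µ = 1 and δ = 1e-4 both ratios stay within a moderate constant of 1, matching the optimal order δ^{2/3}.
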
Convergence of SVD based methods

What does it mean to meet a source condition?

• Easiest interpretation for µ = 1.
• The source condition means x† ∈ range(A∗A).
• Example: Differentiation

Aw(t) = ∫_0^t w(s) ds,

A∗Aw(t) = ∫_t^1 ∫_0^r w(s) ds dr.

• Two times differentiable! Smoothness!
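The identity A∗Aw(t) = ∫_t^1 ∫_0^r w(s) ds dr can be checked on a discretization, where A∗ becomes the transpose of the integration matrix. A sketch with a left-rectangle rule (the grid size and the test function are arbitrary choices):

```python
import numpy as np

# Discretize the integration operator A w(t) = int_0^t w(s) ds on [0, 1].
n = 400
h = 1.0 / n
A = h * np.tril(np.ones((n, n)))  # left-rectangle rule

t = np.linspace(0, 1, n, endpoint=False)
w = np.sin(2 * np.pi * t)

# Discrete A^* A w: the adjoint is simply the transpose.
lhs = A.T @ (A @ w)

# Direct evaluation of t -> int_t^1 int_0^r w(s) ds dr with the same rule.
inner = np.cumsum(w) * h                              # r -> int_0^r w(s) ds
outer = (inner.sum() - np.cumsum(inner) + inner) * h  # t -> int_t^1 inner(r) dr

print(np.max(np.abs(lhs - outer)))  # agree up to rounding
```
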
Convergence of SVD based methods
Punshlines:• Without smoothness assumptions, regularization methods
can converge arbitrarily slow.
• Under the assumption of a source-condition, one canderive a convergence rate.
• Under our assumption the optimal order a regularizationcan achieve is O
(δ
2µ2µ+1
).
• Convergence rate is worse than the error in the data.• Most regularization methods can achieve this rate forµ ≤ µ0.
• Some can achieve it unconditionally (e.g. truncated SVD).• The output error can be in the same order as the data.
Tikhonov Regularization

We have seen that Tikhonov regularization, given via

gα(σ) = σ / (σ² + α),

can be obtained by solving

(A∗A + αI)xδα = A∗yδ.

Next: Tikhonov regularization solves an optimization problem
The xδα obtained from Tikhonov regularization are the unique minimizers of

E(x) = ‖Ax − yδ‖² + α‖x‖².

• This allows us to generalize our concept of regularization!
• Create regularizations by convex functionals without an SVD!
• Generalize the theory from linear to nonlinear problems!