EE290T: Advanced Reconstruction Methods for Magnetic Resonance Imaging
Martin Uecker
Introduction
Topics:
I Image Reconstruction as Inverse Problem
I Parallel Imaging
I Non-Cartesian MRI
I Nonlinear Inverse Reconstruction
I k-space Methods
I Subspace Methods
I Model-based Reconstruction
I Compressed Sensing
Tentative Syllabus
I 01: Jan 27 Introduction
I 02: Feb 03 Parallel Imaging as Inverse Problem
I 03: Feb 10 Iterative Reconstruction Algorithms
I –: Feb 17 (holiday)
I 04: Feb 24 Non-Cartesian MRI
I 05: Mar 03 Nonlinear Inverse Reconstruction
I 06: Mar 10 Reconstruction in k-space
I 07: Mar 17 Reconstruction in k-space
I –: Mar 24 (spring recess)
I 08: Mar 31 Subspace methods
I 09: Apr 07 Model-based Reconstruction
I 10: Apr 14 Compressed Sensing
I 11: Apr 21 Compressed Sensing
I 12: Apr 28 TBA
Projects
I Two pre-defined homework projects
I Final project
I Groups: 2-3 people
I 1. Project: iterative SENSE (Feb 03 - Feb 24)
I 2. Project: non-Cartesian MRI (Feb 24 - Mar 17)
I 3. Final Project (Mar 17 - Apr 28)
I Grading: 25%, 25%, 50%
Today
I Introduction to MRI
I Basic image reconstruction
I Inverse problems
Introduction to MRI
I Non-invasive technique to look “inside the body”
I No ionizing radiation or radioactive materials
I High spatial resolution
I Multiple contrasts
I Volumes or arbitrary cross-sectional slices
I Many applications in radiology/neuroscience/clinical research
Disadvantages
Some disadvantages:
I Expensive
I Long measurement times
I Not every patient can be scanned (cardiac pacemaker, implants, tattoos, ...)
MRI Scanner
I Magnet
  I Superconducting magnet
  I Field strength: 0.25 T - 9.4 T (human)
I Radiofrequency coils (and electronics)
  I Spin excitation
  I Signal detection
I Gradient system
  I Creates magnetic field gradients on top of the static B0 field
  I Switchable gradients in all directions
Protons in the Magnetic Field
[Figure: energy levels as a function of the magnetic field; splitting ∆E = ℏγB0 at field B0]
I Protons have spin 1/2 (“little magnets”)
I Two energy levels: ∆E = 2ℏγ·(1/2)·B0 = ℏγB0
I Larmor precession: ∆E = ℏωLarmor
I Excitation with resonant radiofrequency pulses
I Macroscopic behaviour described by Bloch equations
Bloch Equations
Macroscopic (classical) behaviour of the magnetization corresponds to the expectation value ⟨s⃗⟩ of the spin operator (Pauli matrices).
Dynamics described by differential equations:

d/dt ⟨s⃗⟩ = −(e/m0) ⟨s⃗⟩ × (B1x(t), B1y(t), B0)ᵀ + (−⟨sx⟩/T2, −⟨sy⟩/T2, (s0 − ⟨sz⟩)/T1)ᵀ
The Basic NMR-Experiment
[Figure: magnetization vector M⃗ (axes x, y, z) and the induced RF signal over time t, shown at the successive stages: equilibrium, excitation, precession, T2-decay, T1-recovery]
Bloch equations:

d/dt ⟨s⃗⟩ = −(e/m0) ⟨s⃗⟩ × (B1x(t), B1y(t), B0)ᵀ + (−⟨sx⟩/T2, −⟨sy⟩/T2, (s0 − ⟨sz⟩)/T1)ᵀ
Multiple Contrasts
I Measurement of various physical quantities possible
I Advantage over CT / X-Ray (measurement of tissue density)
[Figure: proton-density-, T1-, and T2-weighted images]
Receive Chain
[Figure: receive chain: sample → coil → matching → preamplifier → mixer → amplifier → analog-digital converter (ADC) → computer]
Thermal noise:
I Sample noise
I Coil noise
I Preamplifier noise
[Figure: phased-array coil]
Equivalent Circuit
[Figure: equivalent circuit with coil inductance Lcoil, signal voltage vsignal, noise voltage vnoise, and resistance R]
R = RC + RS
I RC: loss in coil
I RS: loss in sample
⇒ Johnson-Nyquist noise vn
Johnson-Nyquist Noise
Mean-square noise voltage:
⟨vn²⟩ = 4 kB T R BW
R: resistance, T: temperature, kB: Boltzmann constant, BW: bandwidth (see the numeric sketch below)
I additive Gaussian white noise
I Brownian motion of electrons in body and coil
I Connection between resistance and noise: fluctuation-dissipation theorem
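As a rough numeric illustration of the noise formula above, a minimal sketch in NumPy; the temperature, resistance, and bandwidth values are illustrative assumptions, not taken from the lecture:

```python
import numpy as np

# Johnson-Nyquist noise: mean-square voltage <v_n^2> = 4 kB T R BW
kB = 1.380649e-23   # Boltzmann constant [J/K]
T  = 300.0          # temperature [K] (assumed)
R  = 50.0           # coil + sample resistance [Ohm] (assumed)
BW = 250e3          # receiver bandwidth [Hz] (assumed)

v_rms = np.sqrt(4 * kB * T * R * BW)
print(f"RMS noise voltage: {v_rms * 1e6:.3f} uV")   # on the order of half a microvolt
```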
Spatial Encoding
Nobel Prize in Medicine 2003 for Paul C. Lauterbur and Sir Peter Mansfield
Principle:
I Additional field gradients
⇒ Larmor frequency dependent on location
Concepts:
I Slice selection (during excitation)
I Phase/frequency encoding (before/during readout)
PC Lauterbur. Image formation by induced local interactions: examples employing nuclear magnetic resonance. Nature 242:190–191 (1973)
Frequency Encoding
I Additional field gradient: B0(x) = B0 + Gx·x ⇒ frequency ω(x) = γB0(x)
[Figure: field B0(x) increasing linearly with position x over the spin density ρ(x); received signal over time t]
I Signal is the Fourier transform of the spin density:

  k⃗(t) = γ ∫₀ᵗ dτ G⃗(τ),   s(t) = ∫_V dx⃗ ρ(x⃗) e^(−i2π k⃗(t)·x⃗)

  (ignoring relaxation and errors)
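A minimal 1D sketch of this relationship (not from the lecture): the object, field of view, and gradient sampling below are illustrative assumptions. The signal is built directly from the sum over spin positions, and an inverse FFT recovers the spin density:

```python
import numpy as np

# 1D frequency encoding: s(k) = sum_x rho(x) * exp(-i 2 pi k x)
N   = 256
fov = 0.256                                   # field of view [m] (assumed)
x   = (np.arange(N) - N // 2) * (fov / N)     # voxel positions [m]

rho = np.zeros(N)
rho[96:160] = 1.0                             # box-shaped spin density (assumed)

dk = 1.0 / fov                                # Nyquist k-space spacing
k  = (np.arange(N) - N // 2) * dk             # k-space samples acquired during readout

# forward model: discretized version of s = integral rho(x) exp(-i 2 pi k x) dx
s = np.exp(-2j * np.pi * np.outer(k, x)) @ rho

# reconstruction: centered inverse DFT
recon = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(s)))
print(np.allclose(recon.real, rho, atol=1e-6))   # True: the inverse FFT recovers rho
```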
Gradient System
I z gradient
I y gradient
I x gradient
Imaging Sequence
[Figure: FLASH pulse sequence diagram: RF pulse with flip angle α, slice-selection, read, and phase-encoding gradients within one repetition time TR; slice excitation followed by the in-plane readout during data acquisition]
FLASH (Fast Low Angle SHot)
Fourier Encoding
[Figure: pulse sequence (RF pulse α, slice, read, and phase gradients within one repetition time TR) and the corresponding k-space trajectory along kread and kphase]
Image Reconstruction
[Figure: acquired k-space data and the image obtained from it by inverse Discrete Fourier Transform]
Direct Image Reconstruction
I Assumption: Signal is Fourier transform of the image:

  s(t) = ∫ dx⃗ ρ(x⃗) e^(−i2π x⃗·k⃗(t))

I Image reconstruction with inverse DFT
[Figure: sampling of k-space along the trajectory k⃗(t) = γ ∫₀ᵗ dτ G⃗(τ), followed by an inverse DFT to obtain the image]
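A minimal sketch of direct Cartesian reconstruction, assuming fully sampled k-space data in a NumPy array; a synthetic phantom stands in for measured data:

```python
import numpy as np

# synthetic "image" standing in for the true spin density (assumption for illustration)
img = np.zeros((128, 128))
img[32:96, 48:80] = 1.0

# simulate fully sampled Cartesian k-space data (centered 2D DFT)
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img)))

# direct reconstruction: centered inverse 2D DFT
recon = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))

print(np.allclose(recon.real, img, atol=1e-10))   # True: the iDFT inverts the Fourier encoding
```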
Requirements
I Short readout (signal equation holds for short time spans only)
I Sampling on a Cartesian grid
⇒ Line-by-line scanning
[Figure: 2D MRI sequence (RF pulse α, slice, read, and phase gradients, echo), repeated N times with repetition time TR]
Measurement time: TR ≥ 2 ms; 2D: N ≈ 256 ⇒ seconds (256 × 2 ms ≈ 0.5 s); 3D: N ≈ 256 × 256 ⇒ minutes (65536 × 2 ms ≈ 2.2 min)
Sampling
Poisson Summation Formula
Periodic summation of a signal f from discrete samples of its Fourier transform f̂:

∑_{n=−∞}^{∞} f(t + nP) = (1/P) ∑_{l=−∞}^{∞} f̂(l/P) e^{2πi(l/P)t}

No aliasing ⇒ Nyquist criterion
For MRI: P = FOV ⇒ k-space samples are at k = l/FOV
[Figure: image and corresponding k-space]
I periodic summation from discrete Fourier samples on a grid
I other positions can be recovered by sinc interpolation (Whittaker-Shannon formula)
Nyquist-Shannon Sampling Theorem
Theorem 1: If a function f(t) contains no frequencies higher than W cps, it is completely determined by giving its ordinates at a series of points spaced 1/2W seconds apart. [1]
I Band-limited function
I Regular sampling
I Linear sinc-interpolation
1. CE Shannon. Communication in the presence of noise. Proc Institute of Radio Engineers; 37:10–21 (1949)
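A small numerical check of the theorem (the band-limited test signal and the sampling window are illustrative assumptions): a signal sampled at the Nyquist rate is recovered at off-grid positions by Whittaker-Shannon sinc interpolation.

```python
import numpy as np

W  = 4.0                  # band limit [Hz] (assumed)
fs = 2 * W                # Nyquist rate: samples spaced 1/(2W) seconds apart
n  = np.arange(-64, 64)   # sample indices
t_n = n / fs              # sample locations

def f(t):
    # band-limited test signal (all frequencies below W)
    return np.cos(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 3.0 * t)

samples = f(t_n)

# Whittaker-Shannon interpolation: f(t) = sum_n f(n/fs) * sinc(fs*t - n)
t_query = np.linspace(-2.0, 2.0, 101)     # off-grid positions well inside the sampled window
interp  = np.array([np.sum(samples * np.sinc(fs * t - n)) for t in t_query])

print(np.max(np.abs(interp - f(t_query))))   # small; limited only by truncation of the series
```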
Point Spread Function
I Signal:
y = PFx
sampling operator P, Fourier transform F
I Reconstruction with (continuous) inverse Fourier transform:

  F⁻¹ Pᴴ P F x = psf ∗ x

⇒ Convolution with point spread function (PSF) (shift-invariance)
I Question: What is the PSF for the grid?
Dirichlet Kernel
Dn(x) = ∑_{k=−n}^{n} e^{2πikx} = sin(π(2n+1)x) / sin(πx)

(Dn ∗ f)(x) = ∫_{−∞}^{∞} dy f(y) Dn(x − y) = ∑_{k=−n}^{n} f̂(k) e^{2πikx}

[Figure: Dirichlet kernel D16, FWHM ≈ 1.2]
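A small sketch (not from the lecture) verifying the identity on this slide numerically, i.e. that the truncated sum of Fourier modes equals the closed-form Dirichlet kernel; grid size and truncation level n are illustrative assumptions:

```python
import numpy as np

N, n = 512, 16                        # fine evaluation grid and number of retained harmonics (assumed)
x = np.arange(N) / N                  # positions in [0, 1)

# Dirichlet kernel D_n(x) = sum_{k=-n}^{n} exp(2 pi i k x)
D_direct = np.array([np.sum(np.exp(2j * np.pi * np.arange(-n, n + 1) * xi)) for xi in x])

# closed form sin(pi (2n+1) x) / sin(pi x), with the limit 2n+1 at x = 0
with np.errstate(divide="ignore", invalid="ignore"):
    D_closed = np.sin(np.pi * (2 * n + 1) * x) / np.sin(np.pi * x)
D_closed[0] = 2 * n + 1

print(np.allclose(D_direct.real, D_closed, atol=1e-9))   # True: both forms agree
```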
Gibbs Phenomenon
I Truncation of Fourier series
⇒ Ringing at jump discontinuities
[Figure: rectangular wave and its Fourier approximation]
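A brief numerical illustration of Gibbs ringing (the square wave and the truncation levels are assumptions for illustration): the overshoot next to the jump stays at roughly 9% of the jump height no matter how many terms are kept.

```python
import numpy as np

x = np.linspace(0, 1, 4096, endpoint=False)
square = np.where(x < 0.5, 1.0, -1.0)          # rectangular wave with jumps at x = 0 and x = 0.5

def fourier_partial_sum(n_terms):
    # Fourier series of the square wave: (4/pi) * sum over odd k of sin(2 pi k x) / k
    s = np.zeros_like(x)
    for k in range(1, 2 * n_terms, 2):
        s += (4 / np.pi) * np.sin(2 * np.pi * k * x) / k
    return s

for n_terms in (8, 32, 128):
    overshoot = fourier_partial_sum(n_terms).max() - 1.0
    print(n_terms, round(overshoot / 2.0, 3))  # overshoot as fraction of the jump height: ~0.09
```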
Direct Image Reconstruction
Assumptions:
I Signal is Fourier transform of the image
I Sampling on a Cartesian grid
I Signal from a limited (compact) field of view
I Missing high-frequency samples are small
I Noise is negligible
Fast Fourier Transform (FFT) Algorithms
Discrete Fourier Transform (DFT):
DFT_N {f_n}_{n=0}^{N−1} := f̂_k = ∑_{n=0}^{N−1} e^{i(2π/N)kn} f_n   for k = 0, ..., N−1

Matrix multiplication: O(N²)
Fast Fourier Transform algorithms: O(N log N)
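A small sketch checking this definition numerically (not from the lecture; the test vector is arbitrary). Note that the slide uses a positive exponent, so the O(N²) matrix product corresponds to NumPy's inverse FFT up to the factor N:

```python
import numpy as np

N = 8
f = np.random.default_rng(0).standard_normal(N)

# DFT as defined above: f_hat[k] = sum_n exp(+i 2 pi k n / N) f[n]  -- O(N^2) matrix product
n = np.arange(N)
F = np.exp(2j * np.pi * np.outer(n, n) / N)     # N x N DFT matrix (positive-exponent convention)
f_hat = F @ f

# numpy.fft uses the negative exponent, so this equals N * ifft(f)
print(np.allclose(f_hat, N * np.fft.ifft(f)))   # True
```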
Cooley-Tukey Algorithm
Decomposition:
N = N1·N2,   k = k2 + k1·N2,   n = n2·N1 + n1

With ξ_N := e^{i2π/N} we can write:

(ξ_N)^{kn} = (ξ_{N1·N2})^{(k2 + k1·N2)(n2·N1 + n1)}
           = (ξ_{N1·N2})^{N2·k1·n1} (ξ_{N1·N2})^{k2·n1} (ξ_{N1·N2})^{N1·k2·n2} (ξ_{N1·N2})^{N1·N2·k1·n2}
           = (ξ_{N1})^{k1·n1} (ξ_{N1·N2})^{k2·n1} (ξ_{N2})^{k2·n2}

(use: (ξ_{A·B})^A = ξ_B and (ξ_A)^A = 1)
Cooley-Tukey Algorithm
Decomposition of a DFT:
DFT_N {f_n}_{n=0}^{N−1} = ∑_{n=0}^{N−1} (ξ_N)^{kn} f_n

  = ∑_{n1=0}^{N1−1} (ξ_{N1})^{k1·n1} (ξ_N)^{k2·n1} ∑_{n2=0}^{N2−1} (ξ_{N2})^{k2·n2} f_{n1+n2·N1}

  = DFT_{N1,k1} { (ξ_N)^{k2·n1} DFT_{N2,k2} { f_{n1+n2·N1} }_{n2=0}^{N2−1} }_{n1=0}^{N1−1}
I Recursive decomposition to prime-sized DFTs
⇒ Efficient computation for smooth sizes
(efficient algorithms available for all other sizes too)
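A minimal recursive sketch of this decomposition for the radix-2 case (N1 = 2), kept in the positive-exponent convention used above; this is an illustration, not an optimized implementation:

```python
import numpy as np

def dft_ct(f):
    """Cooley-Tukey FFT (radix 2), convention f_hat[k] = sum_n exp(+i 2 pi k n / N) f[n]."""
    N = len(f)
    if N == 1:
        return f.astype(complex)
    assert N % 2 == 0, "this simple sketch only handles powers of two"
    even = dft_ct(f[0::2])                  # length-N/2 DFT of the even-indexed samples (n1 = 0)
    odd  = dft_ct(f[1::2])                  # length-N/2 DFT of the odd-indexed samples  (n1 = 1)
    k = np.arange(N // 2)
    twiddle = np.exp(2j * np.pi * k / N)    # twiddle factors (xi_N)^k
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

f = np.random.default_rng(1).standard_normal(16)
print(np.allclose(dft_ct(f), 16 * np.fft.ifft(f)))   # True (positive-exponent convention)
```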
Inverse Problem
Definition (Hadamard): A problem is called well-posed if
1. there exists a solution to the problem (existence),
2. there is at most one solution to the problem (uniqueness),
3. the solution depends continuously on the data (stability).
(we will later see that all three conditions can be violated)
Moore-Penrose Generalized Inverse
(A linear and bounded)
x is least-squares solution, if
‖Ax − y‖ = inf {‖Ax̂ − y‖ : x̂ ∈ X}
x is best approximate solution, if
‖x‖ = inf {‖x̂‖ : x̂ least-squares solution}
The generalized inverse A† : D(A†) := R(A) + R(A)⊥ → X maps data to the best approximate solution.
Tikhonov Regularization
(inverse not continuous: ‖A⁻¹‖ = ∞)

Regularized optimization problem:

x_α^δ = argmin_x ‖Ax − y^δ‖₂² + α‖x‖₂²

Explicit solution:

x_α^δ = (AᴴA + αI)⁻¹Aᴴ y^δ =: A_α^† y^δ

Generalized inverse:

A† = lim_{α→0} A_α^†
Tikhonov Regularization: Bias vs Noise
Noise contamination:

y^δ = y + δn = Ax + δn

Reconstruction error:

x − x_α^δ = x − (AᴴA + αI)⁻¹Aᴴ y^δ
          = x − (AᴴA + αI)⁻¹AᴴA x − (AᴴA + αI)⁻¹Aᴴ δn
          = α(AᴴA + αI)⁻¹ x  −  (AᴴA + αI)⁻¹Aᴴ δn
            [approximation error]   [data noise error]

Morozov’s discrepancy principle: choose the largest regularization with ‖A x_α^δ − y^δ‖₂ ≤ τδ
Example
y = A x + n with

A = [3.4525 3.3209 3.3090; 3.3209 3.3706 3.3188; 3.3090 3.3188 3.2780]
x = (1, 0.5, 0.2)ᵀ,  n = (1.6522e−02, 1.5197e−03, 3.3951e−03)ᵀ
⇒ y = (5.7923, 5.6717, 5.6274)ᵀ

Ill-conditioning: cond(A) = σmax/σmin ≈ 10000

Solution (α = 0):
x̂ = A⁻¹y = (1.25414, 1.02660, −0.58866)ᵀ

Regularized solution (α = 3 × 10⁻⁶):
x̂α = A_α^† y = (1.09706, 0.45751, 0.14632)ᵀ
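A sketch reproducing this example with NumPy, using the matrix and vectors from the slide; the printed values should come out close to, though not necessarily identical to, the numbers above, since the noise vector is given only to a few digits:

```python
import numpy as np

A = np.array([[3.4525, 3.3209, 3.3090],
              [3.3209, 3.3706, 3.3188],
              [3.3090, 3.3188, 3.2780]])
x_true = np.array([1.0, 0.5, 0.2])
n = np.array([1.6522e-02, 1.5197e-03, 3.3951e-03])
y = A @ x_true + n

print(np.linalg.cond(A))                       # ~1e4: ill-conditioned

# unregularized solution: the small data perturbation is strongly amplified
x_hat = np.linalg.solve(A, y)

# Tikhonov-regularized solution: x_alpha = (A^H A + alpha I)^-1 A^H y
alpha = 3e-6
x_reg = np.linalg.solve(A.T @ A + alpha * np.eye(3), A.T @ y)

print(x_hat)   # far from x_true
print(x_reg)   # much closer to x_true
```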
Tikhonov Regularization using SVD
Singular Value Decomposition (SVD):

A = UΣVᴴ = ∑_j σ_j U_j V_jᴴ

Regularized inverse:

A_α^† = (AᴴA + αI)⁻¹Aᴴ = ∑_j σ_j/(σ_j² + α) V_j U_jᴴ

Approximation error: 1 − σ_j²/(σ_j² + α) = α/(σ_j² + α)

Data noise error: σ_j δ/(σ_j² + α)

[Figure: data noise error and approximation error as functions of α]
Image Reconstruction as Inverse Problem
Forward problem:
y = Ax + n
x image (and more), A forward operator, n noise, y data
Regularized solution:
x* = argmin_x ‖Ax − y‖₂² + α R(x)

where ‖Ax − y‖₂² enforces data consistency and R(x) is the regularization (see the solver sketch after the list below)
Advantages:
I Simple extension to non-Cartesian trajectories
I Modelling of physical effects
I Prior knowledge via suitable regularization terms
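As a sketch of how such a regularized reconstruction can be computed in practice, a plain gradient-descent solver for the Tikhonov case R(x) = ‖x‖₂² (iterative methods such as conjugate gradients are used in practice; the operator and data here are random stand-ins, not an MRI forward model):

```python
import numpy as np

# regularized least squares: x* = argmin_x ||A x - y||_2^2 + alpha ||x||_2^2
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))      # generic forward operator (assumed, not an MRI model)
y = rng.standard_normal(40)            # "data" (assumed)
alpha = 0.1

x = np.zeros(20)
step = 0.9 / (np.linalg.norm(A, 2) ** 2 + alpha)   # safe step size (below 2/L for the gradient's Lipschitz constant L)
for _ in range(2000):
    grad = 2 * (A.T @ (A @ x - y) + alpha * x)     # gradient of the objective
    x = x - step * grad

# compare with the closed-form Tikhonov solution
x_direct = np.linalg.solve(A.T @ A + alpha * np.eye(20), A.T @ y)
print(np.allclose(x, x_direct, atol=1e-6))          # True after enough iterations
```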
Extension to Non-Cartesian Trajectories
[Figure: Cartesian, radial, and spiral sampling trajectories]
Practical implementation issues:
I Imperfect gradient waveforms (e.g. delays, eddy currents)
I Efficient implementation of the reconstruction algorithm
Modelling of Physical Effects
Examples:
I Coil sensitivities (parallel imaging), see the sketch after this list:

  s_j(t) = ∫ dx⃗ ρ(x⃗) c_j(x⃗) e^(−i2π x⃗·k⃗(t))

I T2 relaxation:

  s(t) = ∫ dx⃗ ρ(x⃗) e^(−R(x⃗)t) e^(−i2π x⃗·k⃗(t))

I Field inhomogeneities:

  s(t) = ∫ dx⃗ ρ(x⃗) e^(−i∆B0(x⃗)t) e^(−i2π x⃗·k⃗(t))
I Diffusion, flow, motion, ...
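A minimal sketch of the first model (coil sensitivities) for Cartesian sampling, where the integral becomes an FFT; the phantom, the Gaussian sensitivities, and the undersampling mask are illustrative assumptions:

```python
import numpy as np

# forward model for parallel imaging: y_j = P F (c_j * rho)   (Cartesian sampling)
N = 64
x_grid, y_grid = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))

rho = (x_grid**2 + y_grid**2 < 0.6).astype(float)      # disc phantom (assumed)
coils = np.stack([np.exp(-((x_grid - cx)**2 + (y_grid - cy)**2))   # smooth sensitivities (assumed)
                  for cx, cy in [(-1, -1), (-1, 1), (1, -1), (1, 1)]])

mask = np.zeros((N, N))
mask[::2, :] = 1.0                                      # keep every other phase-encoding line

def forward(rho):
    # apply each coil sensitivity, Fourier transform, and sampling mask
    return np.stack([mask * np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(c * rho))) for c in coils])

data = forward(rho)
print(data.shape)    # (4, 64, 64): one (undersampled) k-space per coil
```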
Regularization
I Introduces additional information about the solution
I In case of ill-conditioning: needed for stabilization
Common choices:
I Tikhonov (small norm):

  R(x) = ‖W(x − x_R)‖₂²   (often: W = I and x_R = 0)

I Total variation (piece-wise constant images):

  R(x) = ∫ dr⃗ √(|∂₁x(r⃗)|² + |∂₂x(r⃗)|²)

I L1 regularization (sparsity):

  R(x) = ‖W(x − x_R)‖₁
Statistical Interpretation
Probability of x given y: p(x|y)
Likelihood: probability of a specific outcome given a parameter: p(y|x)
Maximum-likelihood estimator: the estimate is the parameter x which maximizes the likelihood.
Statistical Model
Linear measurements contaminated by noise:
y = Ax + n
Gaussian white noise:

p(n) = N(0, σ²)   with   N(µ, σ²) = 1/(σ√(2π)) · e^(−(x−µ)²/(2σ²))

Probability of an outcome (measurement) given the image x:

p(y | A, x, σ) = N(Ax, σ²)
Prior Knowledge
Prior knowledge as a probability distribution for the image:

p(x) = · · ·

A posterior probability distribution p(x|y) given the data y can be computed using Bayes’ theorem:

p(x|y) = p(y|x) p(x) / p(y)

A point estimate for the image can then be obtained, for example by maximum a posteriori (MAP) estimation, or by minimizing the posterior expected value of a loss function, e.g. a minimum mean squared error (MMSE) estimator.
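For instance, with the Gaussian noise model above and a Gaussian prior p(x) ∝ e^(−‖x‖₂²/(2τ²)) (the prior width τ is a generic assumption, not from the lecture), the MAP estimate reduces to the Tikhonov-regularized least-squares problem:

x_MAP = argmax_x p(y|x) p(x)
      = argmin_x [ −log p(y|x) − log p(x) ]
      = argmin_x [ ‖Ax − y‖₂²/(2σ²) + ‖x‖₂²/(2τ²) ]
      = argmin_x ‖Ax − y‖₂² + α‖x‖₂²,   with α = σ²/τ²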
Bayesian Prior
I L2 regularization: ridge regression

  N(µ, σ²) = 1/(σ√(2π)) · e^(−(x−µ)²/(2σ²))   (Gaussian prior)

I L1 regularization: LASSO

  p(x | µ, b) = 1/(2b) · e^(−|x−µ|/b)   (Laplacian prior)

I L2 and L1: elastic net [1]
1. H Zou, T Hastie. Regularization and variable selection via the elastic net. J R Statist Soc B. 67:301–320 (2005)
Summary
I Magnetic Resonance Imaging
I Direct image reconstruction
I Inverse problems