Part 2: Gaussian Markov Random Fields (GMRFs) and precision matrices
UiO, April 2020
Haakon Bakka
* [email protected] * King Abdullah University of Science and Technology
Spline models (non-linear) on covariates. Gaussian likelihoods. Sparse matrices. GMRF.
Wide time estimate: 1 hour.
Context
The different types of problems we have:
Computational: Given a model coded in R, how to compute the inference?
Modeling: How to create a reasonable model structure?
Priors: How to define good priors?
Visualisation: How to visualise the posterior?
Interpretation: How to interpret the results?
Next
Computational: How to represent the Gaussian part of the model with a GMRF?
The GMRF structure
Gaussian Markov Random Fields, a.k.a. sparse precision matrices
Let us start with representing the AR1 model as a multivariate Gaussian distribution. (Ignore u_0 and u_T for now.)

u_t = ρ u_{t−1} + ε_t  (1)
~u ∼ N(0, Σ) = N(0, Q⁻¹)  (2)
log(π(~u)) = −(1/2) log|2πΣ| − (1/2) ~uᵀ Σ⁻¹ ~u  (3)
log(π(~u)) = +(1/2) log|(2π)⁻¹ Q| − (1/2) ~uᵀ Q ~u  (4)
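The equality of (3) and (4) can be checked numerically. A minimal sketch, assuming illustrative values ρ = 0.7, n = 5 and the AR1 covariance Σ_ij = ρ^|i−j| (derived on the next slides); the vector u is arbitrary:

```python
import numpy as np

# Illustrative AR1 parameters (not fixed by the slides)
rho, n = 0.7, 5

# Dense covariance: Sigma[i, j] = rho^|i - j| (unit marginal variance)
idx = np.arange(n)
Sigma = rho ** np.abs(idx[:, None] - idx[None, :])
Q = np.linalg.inv(Sigma)

u = np.array([0.3, -0.1, 0.5, 0.2, -0.4])

# Equation (3): log-density via the covariance matrix
logpi_cov = (-0.5 * np.linalg.slogdet(2 * np.pi * Sigma)[1]
             - 0.5 * u @ np.linalg.solve(Sigma, u))

# Equation (4): log-density via the precision matrix
logpi_prec = (0.5 * np.linalg.slogdet(Q / (2 * np.pi))[1]
              - 0.5 * u @ Q @ u)

print(np.isclose(logpi_cov, logpi_prec))  # True: the two forms agree
```

The precision form only needs log|Q| and the quadratic form ~uᵀQ~u, which is why sparse Q makes the density cheap to evaluate.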
The GMRF structure
Compute the covariance matrix

u_t = ρ u_{t−1} + ε_t  (5)
C(u_t, u_{t−1}) = C(ρ u_{t−1} + ε_t, u_{t−1})  (6)
 = C(ρ u_{t−1}, u_{t−1}) + C(ε_t, u_{t−1})  (7)
 = ρ + 0  (8)

We assume the marginal standard deviation σ = 1, so Var(u_{t−1}) = 1 and (8) follows (we can scale by σ later).
If we continue this computation, we get...
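The recursion can also be checked by simulation: with unit marginal variance the innovations need standard deviation sqrt(1 − ρ²), and the empirical lag-1 covariance should come out near ρ. A sketch with an illustrative ρ = 0.6 and a fixed seed:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.6  # illustrative value

# Simulate a long stationary AR1 with unit marginal variance:
# innovations get sd sqrt(1 - rho^2) so that Var(u_t) = 1.
n = 200_000
eps = rng.normal(scale=np.sqrt(1 - rho**2), size=n)
u = np.empty(n)
u[0] = rng.normal()  # start in the stationary distribution
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]

# Empirical lag-1 covariance: should be close to rho = 0.6
lag1 = np.mean(u[:-1] * u[1:])
print(round(lag1, 2))
```

Repeating the argument for lag k gives C(u_t, u_{t−k}) = ρ^k, which is the pattern in the matrix below.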
Joint covariance matrix
u_t = ρ u_{t−1} + ε_t

Σ =
⎡ 1    ρ    ρ²   ρ³   ρ⁴   ρ⁵   ρ⁶ ⎤
⎢ ρ    1    ρ    ρ²   ρ³   ρ⁴   ρ⁵ ⎥
⎢ ρ²   ρ    1    ρ    ρ²   ρ³   ρ⁴ ⎥
⎢ ρ³   ρ²   ρ    1    ρ    ρ²   ρ³ ⎥
⎢ ρ⁴   ρ³   ρ²   ρ    1    ρ    ρ² ⎥
⎢ ρ⁵   ρ⁴   ρ³   ρ²   ρ    1    ρ  ⎥
⎣ ρ⁶   ρ⁵   ρ⁴   ρ³   ρ²   ρ    1  ⎦
Writing this down is O(N²), and sampling or computing densities can be O(N³). The “Kalman filter” method can do O(N). How can we match that?
Joint precision matrix
u_t = ρ u_{t−1} + ε_t

Q =
⎡ 1    −ρ    0     0     0     0     0  ⎤
⎢ −ρ   1+ρ²  −ρ    0     0     0     0  ⎥
⎢ 0    −ρ    1+ρ²  −ρ    0     0     0  ⎥
⎢ 0    0     −ρ    1+ρ²  −ρ    0     0  ⎥
⎢ 0    0     0     −ρ    1+ρ²  −ρ    0  ⎥
⎢ 0    0     0     0     −ρ    1+ρ²  −ρ ⎥
⎣ 0    0     0     0     0     −ρ    1  ⎦
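This tridiagonal matrix is the inverse of the Σ above, up to the deferred scaling: exactly, Σ⁻¹ = Q/(1 − ρ²). A quick numerical check, with ρ = 0.7 as an illustrative value and the 7×7 size from the slides:

```python
import numpy as np
from scipy import sparse

rho, n = 0.7, 7  # matches the 7x7 matrices on the slides

# Dense covariance Sigma[i, j] = rho^|i - j|
idx = np.arange(n)
Sigma = rho ** np.abs(idx[:, None] - idx[None, :])

# Sparse tridiagonal precision with the slide's entries:
# 1 on the two corners, 1 + rho^2 inside, -rho off the diagonal
main = np.r_[1.0, np.full(n - 2, 1 + rho**2), 1.0]
off = np.full(n - 1, -rho)
Q = sparse.diags([off, main, off], offsets=[-1, 0, 1])

# Up to the 1/(1 - rho^2) scaling, Q is the inverse of Sigma
prod = (Q.toarray() / (1 - rho**2)) @ Sigma
print(np.allclose(prod, np.eye(n)))  # True
```

So the dense, fully-populated Σ has a sparse inverse: this is the AR1 (Markov) structure showing up in Q.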
What do we need to know?
1. Calculate the entries of Q
2. Use sparse matrix storage (do not store the 0s)
3. Compute probabilities with Q
4. Sample from N(0, Q⁻¹)
See: https://haakonbakkagit.github.io/btopic120.html
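Item 4 can be sketched in a few lines: factor Q = LLᵀ and solve Lᵀx = z for standard normal z, which gives x ∼ N(0, Q⁻¹). A minimal dense sketch with illustrative ρ and n (a real GMRF implementation would use a sparse Cholesky factorisation):

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n = 0.7, 6

# Exact AR1 precision (inverse of the unit-variance covariance)
main = np.r_[1.0, np.full(n - 2, 1 + rho**2), 1.0]
Q = (np.diag(main)
     + np.diag(np.full(n - 1, -rho), 1)
     + np.diag(np.full(n - 1, -rho), -1)) / (1 - rho**2)

# Sample x ~ N(0, Q^{-1}): factor Q = L L^T, then solve L^T x = z
L = np.linalg.cholesky(Q)
z = rng.standard_normal((n, 50_000))
x = np.linalg.solve(L.T, z)  # each column is one sample

# The sample covariance should approach Sigma[i, j] = rho^|i - j|
idx = np.arange(n)
Sigma = rho ** np.abs(idx[:, None] - idx[None, :])
print(np.max(np.abs(np.cov(x) - Sigma)) < 0.05)  # True
```

Since Cov(x) = L⁻ᵀL⁻¹ = (LLᵀ)⁻¹ = Q⁻¹, no covariance matrix is ever formed: everything goes through the (sparse) factor L.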
What is the big fuss about conditional independence? (I)
- Mendelian inheritance: If we know the genotypes of the parents (Z), then the children's genotypes (X and Y) are conditionally independent.
- If Z is unknown, then X and Y are dependent!
Gaussian Markov Random Fields
- Gaussian Markov Random Field (GMRF) x
- Conditional independence
    x_i ⊥ x_j | x_{−ij}
  is important, not independence
    x_i ⊥ x_j
What is the big fuss about conditional independence? (II)
- Sparse SPD matrices make the computations fast (Q = LLᵀ, Qx = b, diag(Q⁻¹), etc.)
- 1 dim: O(n)
- 2 dim: O(n^(3/2))
- 3 dim: O(n²)
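The storage gain alone is already dramatic. A sketch for the 1-dimensional (AR1) case with an illustrative n = 10⁴: the tridiagonal precision stores 3n − 2 values, against n² entries for the dense covariance.

```python
import numpy as np
from scipy import sparse

rho, n = 0.7, 10_000

# Tridiagonal AR1 precision, stored sparsely
main = np.r_[1.0, np.full(n - 2, 1 + rho**2), 1.0]
off = np.full(n - 1, -rho)
Q = sparse.diags([off, main, off], offsets=[-1, 0, 1], format="csr")

print(Q.nnz)   # 3n - 2 = 29998 stored values
print(n * n)   # 100000000 entries for the dense covariance
```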
Combining sparse matrices
One variable doesn't make a GAM

~y = ~x + ~v
~x ∼ N(0, Q_x⁻¹)
~v ∼ N(0, Q_v⁻¹)

Q_(x,y) =
⎡ Q_x + Q_v   −Q_v ⎤
⎣ −Q_v         Q_v ⎦
In the additive model, we need this type of addition several times.
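The block formula can be verified directly: with independent ~x and ~v and ~y = ~x + ~v, the joint covariance of (~x, ~y) is [[Σ_x, Σ_x], [Σ_x, Σ_x + Σ_v]], and inverting it recovers exactly this block precision. A sketch with arbitrary SPD precision matrices (illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

def random_spd(n):
    """An arbitrary symmetric positive-definite matrix (illustrative)."""
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

Qx, Qv = random_spd(n), random_spd(n)
Sx, Sv = np.linalg.inv(Qx), np.linalg.inv(Qv)

# y = x + v with independent x, v gives the joint covariance blocks
Sigma_joint = np.block([[Sx, Sx], [Sx, Sx + Sv]])

# The slide's block precision
Q_joint = np.block([[Qx + Qv, -Qv], [-Qv, Qv]])

print(np.allclose(np.linalg.inv(Sigma_joint), Q_joint))  # True
```

Note that the joint precision only involves sums and negations of the (sparse) Q_x and Q_v, so it stays sparse; the joint covariance does not.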
Making the matrices sparse
By putting the f-model somewhere else than at the data

What if we have observations (in space, time, or covariates) that are not on a grid?
- Many observations at irregular locations here
- A large gap there

Only define f on the grid u, not on the covariate values v.

~y = ~v + ~ε
~u ∼ N(0, Q_u(θ)⁻¹)
~v = A~u

(~y, ~u) has Q =
⎡ τ_ε I      −τ_ε A            ⎤
⎣ −τ_ε Aᵀ    Q_u(θ) + τ_ε AᵀA ⎦
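This joint precision (reconstructed here with minus signs on the off-diagonal blocks, as the conditional formulation y | u ∼ N(Au, τ_ε⁻¹I) implies) can be checked against the direct joint covariance of (~y, ~u). A sketch with illustrative sizes, an arbitrary SPD Q_u, and a random projector A:

```python
import numpy as np

rng = np.random.default_rng(3)
n_u, n_y, tau = 3, 5, 4.0  # illustrative sizes and noise precision

# Latent field precision Q_u (SPD) and projector A onto observation sites
B = rng.normal(size=(n_u, n_u))
Qu = B @ B.T + n_u * np.eye(n_u)
A = rng.normal(size=(n_y, n_u))

# y = A u + eps, eps ~ N(0, tau^{-1} I): joint precision of (y, u)
Q_joint = np.block([
    [tau * np.eye(n_y), -tau * A],
    [-tau * A.T,         Qu + tau * A.T @ A],
])

# Direct joint covariance of (y, u) for comparison
Su = np.linalg.inv(Qu)
Sigma_joint = np.block([
    [A @ Su @ A.T + np.eye(n_y) / tau, A @ Su],
    [Su @ A.T,                         Su],
])

print(np.allclose(np.linalg.inv(Q_joint), Sigma_joint))  # True
```

In practice A is a sparse interpolation matrix (few non-zeros per row), so every block of the joint precision is sparse even though the marginal covariance of ~y is dense.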
Sparse matrices (GMRF)
Gaussian Markov Random Fields are necessary, not optional!
- The precision matrix Q is Σ⁻¹
- E.g. for 10⁴ observations of a time series: from 10⁸ stored values (dense Σ) down to about 10⁴ (sparse Q)
- Sparse precision matrices are not “approximations”, but natural models
- In numerics you use either Fourier-transform tools or sparse matrices
Learn more about precision matrices and GMRFs
End of this part
Thank you for your attention!
Questions?