
MCMC and SMC for Nonlinear Time Series Models

Chiranjit Mukherjee

STA395 Talk, Department of Statistical Science, Duke University

February 16, 2009


Outline

1. Problem Statement

2. Markov Chain Monte Carlo: Dynamic Linear Models, Forward Filtering Backward Sampling, Nonlinear Models, Mixture of Gaussians, Approximate FFBS as a proposal to Metropolis-Hastings

3. Sequential Monte Carlo: Importance Sampling, Sequential Importance Sampling, Optimal Proposal, Resampling, Auxiliary Particle Filters, Parameter Degeneracy, Marginal Likelihood Calculation, Issues with Resampling, Scalability of SMC techniques

4. Minimal Quorum Sensing Model: Background, Differential Equations Model, Discretized Version, Features

5. Results

6. Summary

7. References


Problem Statement

We will focus on Markovian, nonlinear, non-Gaussian State Space Models:

Priors:

System Evolution:

Observation:

Given the data y1, y2, …, yT, the objective is to find the following posterior distribution:
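The displayed equations did not survive the transcript; in the standard notation for this class of models (the exact symbols used in the talk are an assumption here, with µ denoting the static parameters), the setup reads

\[
\mu \sim p(\mu), \qquad x_0 \sim p(x_0 \mid \mu),
\]
\[
x_t \mid x_{t-1}, \mu \sim p(x_t \mid x_{t-1}, \mu), \qquad
y_t \mid x_t, \mu \sim p(y_t \mid x_t, \mu), \qquad t = 1, \dots, T,
\]

and the target posterior is

\[
p(x_{0:T}, \mu \mid y_{1:T}) \;\propto\; p(\mu)\, p(x_0 \mid \mu) \prod_{t=1}^{T} p(x_t \mid x_{t-1}, \mu)\, p(y_t \mid x_t, \mu).
\]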


MCMC Techniques for State Space Models


Dynamic Linear Models

[West and Harrison, 1997]

where the system matrices and variances Ft, Gt, Vt, Wt (in the West and Harrison notation) are all known.

One can sample from the joint distribution of x0:T given y1:T and the known quantities above using a Forward Filtering Backward Sampling algorithm.

[Carter & Kohn, 1994]


Forward Filtering

Note that the filtering distribution can be updated recursively:
p(xt | y1:t) ∝ p(yt | xt) ∫ p(xt | xt-1) p(xt-1 | y1:t-1) dxt-1.

If the prior, evolution and observation densities are all Gaussian, then p(xt | y1:t) is also Gaussian.

Filtering:
1. Start with p(x0).
2. For t = 1, …, T, update p(xt-1 | y1:t-1) to p(xt | y1:t).
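For reference, a standard realization of this recursion is the Kalman filter for the DLM above (written here in the West and Harrison notation, not copied from the slide), with filtered distribution xt | y1:t ~ N(mt, Ct):

\[
a_t = G_t m_{t-1}, \qquad R_t = G_t C_{t-1} G_t' + W_t,
\]
\[
f_t = F_t' a_t, \qquad Q_t = F_t' R_t F_t + V_t,
\]
\[
m_t = a_t + A_t (y_t - f_t), \qquad C_t = R_t - A_t Q_t A_t', \qquad A_t = R_t F_t Q_t^{-1}.
\]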


Backward Sampling

Note that the joint posterior factorizes backwards in time:
p(x0:T | y1:T) = p(xT | y1:T) ∏ p(xt | xt+1, y1:t), with the product over t = 0, …, T-1.

Since p(xt | y1:t) and p(xt+1 | xt) are Gaussian, p(xt | xt+1, y1:t) is also Gaussian.

Sample xT from p(xT | y1:T).
For t = T-1, …, 0, sample xt from p(xt | xt+1, y1:t).
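The backward conditional has a standard closed form in the DLM (again a reconstruction, using the forward-filter moments mt, Ct, a(t+1), R(t+1) from the previous slide):

\[
x_t \mid x_{t+1}, y_{1:t} \sim N(h_t, H_t),
\]
\[
h_t = m_t + B_t\,(x_{t+1} - a_{t+1}), \qquad H_t = C_t - B_t R_{t+1} B_t', \qquad B_t = C_t G_{t+1}' R_{t+1}^{-1},
\]

applied for t = T-1, …, 0 after drawing xT ~ N(mT, CT).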


Nonlinear Dynamic Models

where ft, gt are known nonlinear functions and the noise variances are all known.

An approximate FFBS is based on a first-order Taylor series expansion of the functions ft and gt about current state estimates.
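The expansion points shown on the slide are not in the transcript; one common choice (an assumption here), writing the evolution as xt = ft(xt-1) + noise and the observation as yt = gt(xt) + noise, is to linearize about the filtered mean m(t-1) and the one-step-ahead mean at, extended-Kalman style:

\[
f_t(x_{t-1}) \approx f_t(m_{t-1}) + F_t\,(x_{t-1} - m_{t-1}), \qquad F_t = \left.\tfrac{\partial f_t}{\partial x}\right|_{x = m_{t-1}},
\]
\[
g_t(x_t) \approx g_t(a_t) + G_t\,(x_t - a_t), \qquad G_t = \left.\tfrac{\partial g_t}{\partial x}\right|_{x = a_t},
\]

which turns each step back into a locally linear Gaussian update.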


Mixture Normal Approximation

Filtering:

When each of the densities involved is Normal or a mixture of Normals, then the filtered distribution is also a mixture of Normals.

Smoothing:

The filtered distribution p(xt | y1:t) is a mixture of Normals and p(xt+1 | xt) is Normal, which implies that p(xt | xt+1, y1:t) is also a mixture of Normals.


Metropolis Step

In order to sample from the joint smoothing distribution p(x0:T | y1:T) of a general State Space model (conditional on the parameters), we can use the approximate FFBS procedure to propose a sample and accept or reject it with a Metropolis-Hastings step.

Let us call this proposal density q.

One can explicitly write an expression for the joint proposal density, as it is a product of Normal densities or mixtures of Normal densities.

One accepts the proposed sample with probability:
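A standard form of this acceptance probability, with q the approximate-FFBS proposal density just defined and x'0:T the proposed path (the primed notation is introduced here, and conditioning on the parameters is suppressed), is

\[
\alpha = \min\left\{1,\;
\frac{p(x'_{0:T})\, p(y_{1:T} \mid x'_{0:T})\, q(x_{0:T})}
{p(x_{0:T})\, p(y_{1:T} \mid x_{0:T})\, q(x'_{0:T})}\right\}.
\]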

The main problem is that as T increases, the approximation deteriorates and the Metropolis acceptance rate falls quickly.


Sequential Monte Carlo


Importance Sampling

Objective: we want to sample from π(x), which is difficult. We use an approximating distribution q(x) which is easy to sample from.

For any distribution q(·) such that π(x) > 0 implies q(x) > 0, we have

where
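The missing identity and weight definition are presumably the usual importance-sampling ones:

\[
E_{\pi}[\varphi(X)] = \frac{\int \varphi(x)\, w(x)\, q(x)\, dx}{\int w(x)\, q(x)\, dx}, \qquad w(x) = \frac{\pi(x)}{q(x)},
\]

so that with samples x(i) ~ q, the estimate is the weighted average of φ(x(i)) with normalized weights proportional to w(x(i)).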


A Sequential Importance Sampling Approach

Let πn(x0:n) denote the target distribution at time n, written as πn(x0:n) = γn(x0:n)/Zn, where the unnormalized density γn can be evaluated pointwise.

Key Idea: If πn-1 is not too different from πn, then we should be able to reuse our estimate of πn-1 as an Importance Distribution for πn.

ALGORITHM

Start with sampling from the prior:

Suppose at time (n-1) we have the following particle approximation:

Update:


Updating the IS Approximation

We want to reuse the samples from qn-1 that were used to build the approximation of πn-1.

This only makes sense if the time-n proposal leaves the earlier coordinates x0:n-1 unchanged.

We select the proposal so that it factorizes as the old proposal times a conditional for the new coordinate xn:

Unnormalized particle weights are updated in the following way
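The update on the slide is presumably the standard SIS recursion: if the proposal factorizes as qn(x0:n) = qn-1(x0:n-1) qn(xn | x0:n-1), then

\[
w_n(x_{0:n}) = w_{n-1}(x_{0:n-1}) \cdot
\frac{\gamma_n(x_{0:n})}{\gamma_{n-1}(x_{0:n-1})\, q_n(x_n \mid x_{0:n-1})},
\]

with γn the unnormalized target introduced above.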


A Simple SIS for State Space Models

For a State Space model, let the target at time n be the joint smoothing distribution p(x0:n | y1:n).

If we use the model's own evolution density p(xn | xn-1) as the proposal (the simplest choice), then the incremental weight is just the likelihood term, as in the sketch below.
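As an illustration (not from the slides), here is a minimal Python sketch of SIS for a state space model using the evolution density as the proposal, so each weight update multiplies by p(yn | xn). The helper functions sample_prior, sample_evolution and log_likelihood are hypothetical placeholders to be supplied by the user; numpy is assumed.

    import numpy as np

    def sis_filter(y, sample_prior, sample_evolution, log_likelihood, M=1000):
        """Sequential Importance Sampling with the evolution density as proposal.

        y                : sequence of observations y_1, ..., y_T
        sample_prior(M)  : draws M particles for x_0
        sample_evolution : sample_evolution(x_prev) draws x_t | x_{t-1} per particle
        log_likelihood   : log_likelihood(y_t, x_t) returns log p(y_t | x_t) per particle
        Returns the final particles and the normalized weights at every time step.
        """
        x = sample_prior(M)                   # particles for x_0
        logw = np.zeros(M)                    # log of unnormalized weights
        weights_history = []
        for yt in y:
            x = sample_evolution(x)           # propose x_t ~ p(x_t | x_{t-1})
            logw += log_likelihood(yt, x)     # weight update: w_t = w_{t-1} * p(y_t | x_t)
            w = np.exp(logw - logw.max())     # normalize in a numerically stable way
            weights_history.append(w / w.sum())
        return x, weights_history

Without resampling, the normalized weights in weights_history quickly concentrate on a few particles, which is exactly the collapse described on the next slide.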


Optimal Importance Distribution

The algorithm described above collapses as n increases, because after a few steps only a few particles have non-negligible weights.

An optimal zero-variance proposal at time t is simply given by:

Performing SIS in this optimal setting requires quantities which are not readily available in general.

Instead one deploys a Locally Optimal Importance Distribution, which consists of selecting, at time t, the proposal that minimizes the variance of the importance weights.


Locally Optimal Importance Distribution

It follows that the locally optimal proposal is the conditional of the target for the new coordinate, with an incremental weight that depends only on x0:n-1.

In the case of State Space Models these take the form given below.
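For the State Space case, the locally optimal choices are (standard results, reconstructed here rather than copied from the slide):

\[
q^{\mathrm{opt}}(x_n \mid x_{n-1}, y_n) = p(x_n \mid x_{n-1}, y_n)
= \frac{p(y_n \mid x_n)\, p(x_n \mid x_{n-1})}{p(y_n \mid x_{n-1})},
\]

with incremental importance weight

\[
w_n \;\propto\; w_{n-1}\, p(y_n \mid x_{n-1}) = w_{n-1} \int p(y_n \mid x_n)\, p(x_n \mid x_{n-1})\, dx_n,
\]

which does not depend on the sampled value of xn.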


Resampling

Even with the locally optimal proposal, as the time index n increases the variance of the unnormalized weights {wn(x0:n)} tends to increase, and all the mass is concentrated on a few particles/samples.

We wish to focus our computational efforts on high-density regions of the space.

IS approximation:

Resample M times, with probabilities given by the weights, to build the new approximation.

The samples now become statistically dependent, so they are harder to analyze theoretically. However, resampling is a necessary step to avoid particle degeneracy.


Resampling

With the Locally Optimal Filter a Standard SIS Algorithm would be:

Sample:

Compute weights:

Resample to obtain equally weighted samples

An alternative strategy is:

Calculate weights:

Resample to obtain equally weighted samples

Sample:

This algorithm ensures more diverse particles at time n. Changing the order can be performed because, with the locally optimal proposal, the incremental weight p(yn | xn-1) is independent of xn.


Auxiliary Particle Filter

For a general State Space Model it is not always possible to sample explicitly from the locally optimal proposal p(xn | xn-1, yn) or to calculate the weights p(yn | xn-1).

We can use an approximation to p(yn | xn-1), say p(yn | x̂n(xn-1)).

In the literature it is often suggested to take x̂n(xn-1) to be the mean, median or mode of the distribution p(xn | xn-1).

Let:

ALGORITHM
Compute weights:

Resample to obtain

Sample:

Calculate weights:
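One standard way to write the two-stage weights of this auxiliary particle filter (a reconstruction, with x̂n denoting the chosen point estimate of p(xn | xn-1) and q the proposal actually used):

\[
\text{first stage:}\quad \lambda_n^{(i)} \;\propto\; w_{n-1}^{(i)}\, p\big(y_n \mid \hat{x}_n^{(i)}\big),
\]

then resample with probabilities λ, propagate xn(i) ~ q(xn | xn-1(i), yn), and set

\[
\text{second stage:}\quad w_n^{(i)} \;\propto\;
\frac{p\big(y_n \mid x_n^{(i)}\big)\, p\big(x_n^{(i)} \mid x_{n-1}^{(i)}\big)}
{p\big(y_n \mid \hat{x}_n^{(i)}\big)\, q\big(x_n^{(i)} \mid x_{n-1}^{(i)}, y_n\big)}.
\]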


Degeneracy Issues

The SMC strategy performs remarkably well in terms of estimating marginals

However the joint distribution is poorly estimated when n is large.

One cannot hope to estimate a target distribution of increasing dimension with fixed precision when the number of particles remains fixed.

Since we are interested in the marginal distribution, SMC serves our purpose well.

For bounded functions φ and p > 1, we can expect results of the following form if the model has nice forgetting/mixing properties. M_L is increasing in L.


Degeneracy in the Parameter Space

All the algorithms we have described so far try to minimize degeneracy in the state space: resampling is performed in order to achieve diverse particles for xn.

However, we sampled particles for µ ~ π(µ) right at the beginning, and the resampling step reduces the number of distinct particles of µ as time n increases.

[Liu & West, 2001] suggest using a smooth kernel density estimate of the current posterior of µ, and sampling µ particles from the smoothed density to break degeneracy.

Let the current µ particles denote samples from the time-n posterior (note that µ itself is not time dependent). [Liu & West, 2001] suggest drawing each new µ particle from a Normal kernel centred at a shrunk particle location, where the shrinkage constant and the bandwidth are tied together and the kernel covariance uses the sample mean and variance of the current µ particles.
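The Liu and West construction (reconstructed from the paper rather than from the slide) replaces each parameter particle by a draw from a shrunk Normal kernel:

\[
\mu^{(i)} \sim N\!\big(a\,\mu_n^{(i)} + (1-a)\,\bar{\mu}_n,\; h^2\,\bar{V}_n\big), \qquad a = \sqrt{1 - h^2},
\]

where µ̄n and V̄n are the sample mean and variance of the current µ particles, and both a and h are tied to a discount factor δ (Liu and West use a = (3δ - 1)/(2δ)).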


Liu & West, 2001

The authors suggested shrinkage in order to avoid over-dispersion of the smooth kernel density.

The choice of h comes from the choice of a discount factor, usually 0.95-0.99.

They also recommend using Auxiliary Particle Filtering to improve performance.

ALGORITHM
1. For each particle, calculate the point estimate of the next state, the shrunk parameter location, and the corresponding first-stage (auxiliary) weights.
2. Resample with these weights.
3. Sample new µ values from the Normal kernel and new states from the proposal.
4. Evaluate the corresponding second-stage weights.


Using Sufficient Statistics

Another approach to break particle degeneracy in the parameter space is to use conditional sufficient statistics st for the parameters.

One can propagate the following joint distribution over time

Usually the conditional sufficient statistic follows a recursive relationship:

One can use any of the algorithms above for updating the conditional distribution of the states. For example, with the locally optimal importance distribution one should have the following relationship:

Note that, unlike the smooth kernel density approximation technique for avoiding degeneracy, this is an exact technique, so it should be used whenever possible.
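Schematically (following Storvik, 2002 and Fearnhead, 2002, rather than the exact slide), one propagates particles (xn, sn) where

\[
s_n = \mathcal{S}(s_{n-1}, x_n, y_n), \qquad \mu \mid x_{0:n}, y_{1:n} \sim p(\mu \mid s_n),
\]

so at every step the parameter can be drawn exactly from its conditional given the current sufficient statistic, for example an Inverse-Wishart draw for an unknown observation variance matrix.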


Marginal Likelihood Calculation

Oftentimes we need to compute the marginal likelihood for model comparison purposes.

For a general State Space Model, the marginal likelihood of the data is:

Note that for the case of the vanilla filter (with the resampling step):
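The standard decomposition and particle estimate (a reconstruction, stated for a filter that resamples to equal weights at every step) are:

\[
p(y_{1:T}) = \prod_{n=1}^{T} p(y_n \mid y_{1:n-1}), \qquad
\widehat{p}(y_n \mid y_{1:n-1}) = \frac{1}{M} \sum_{i=1}^{M} \widetilde{w}_n^{(i)},
\]

where the w̃n(i) are the unnormalized incremental weights at time n.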


Issues with Resampling

The most intuitive resampling scheme is Multinomial resampling. At time n we do

where Ni = # times particle i is replicated.

Drawing the M uniform random numbers has complexity O(M).

Mapping each uniform to a particle index by a naive search through the cumulative weights makes the overall scheme O(M²).

Resampling becomes the bottleneck computation in an SMC procedure if a Multinomial sampler is used.

(Diagram: the unit interval [0, 1] partitioned into segments of lengths W1, W2, …, WM, used to map uniform draws to particle indices.)


A Faster Resampling Scheme

Systematic Resampling:

Like Multinomial, but only one random sample

Complexity O(M)

(Diagram: the same partition of [0, 1] into segments W1, …, WM, now sampled with a single uniform draw and M evenly spaced points.)
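A minimal Python sketch of systematic resampling (an illustration, not from the slides; numpy assumed), showing why the cost is O(M) with a single uniform draw:

    import numpy as np

    def systematic_resample(weights, rng=None):
        """Systematic resampling: O(M), uses a single uniform random number.

        weights : array of normalized particle weights summing to 1
        returns : array of M resampled particle indices
        """
        rng = np.random.default_rng() if rng is None else rng
        M = len(weights)
        # one uniform draw, then M evenly spaced points on [0, 1)
        positions = (rng.uniform() + np.arange(M)) / M
        cumulative = np.cumsum(weights)
        cumulative[-1] = 1.0              # guard against floating-point round-off
        indices = np.zeros(M, dtype=int)
        i = j = 0
        while i < M:                      # single pass over positions and weights
            if positions[i] < cumulative[j]:
                indices[i] = j
                i += 1
            else:
                j += 1
        return indices

Usage would be indices = systematic_resample(w); particles = particles[indices]. Multinomial resampling (for example via rng.choice with the weights) needs M independent draws, which is what makes it more expensive.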


Scalability of SMC

Every SMC algorithm has three essential steps:
(i) Sampling Step - generate new particles from the proposal distribution
(ii) Importance Step - compute particle weights
(iii) Resampling Step - draw M particles with probability proportional to weight

Sampling and Importance steps are completely parallelizable, without the need of any form of communication between the particles.

The Resampling step needs communication while normalizing the weights.

Some algorithms need further communication; for example, Liu & West need to compute the sample mean and variance of the µ particles.

If we implement an SMC algorithm on a distributed architecture, we should transfer some particles from particle-surplus processors to particle-deficient processors after a resampling step, in order to keep the computational load even.


Resampling on a Distributed Architecture

ALGORITHM (1 master, K slaves)

Each slave processor calculates the total weight of processor k and sends it to the master processor.

Master processor performs Inter-Resampling:

Master processor sends the resulting particle counts back to processor k.

Each slave processor performs Intra-Resampling: (in parallel)

Particle Routing – to equalize computational load of the processors:

-- This depends on the architecture.
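A rough single-machine Python sketch of the inter-/intra-resampling split (an illustration of the idea only, not the exact algorithm on the slide; the function names are hypothetical, numpy is assumed, and particle routing between processors is omitted):

    import numpy as np

    def inter_resample(processor_total_weights, M, rng=None):
        """Master step: decide how many of the M particles each processor keeps.

        processor_total_weights : length-K array of total weights per slave
        returns                 : length-K array of particle counts summing to M
        """
        rng = np.random.default_rng() if rng is None else rng
        p = np.asarray(processor_total_weights, dtype=float)
        return rng.multinomial(M, p / p.sum())

    def intra_resample(local_weights, n_keep, rng=None):
        """Slave step: resample n_keep particle indices from the local weights."""
        rng = np.random.default_rng() if rng is None else rng
        w = np.asarray(local_weights, dtype=float)
        return rng.choice(len(w), size=n_keep, replace=True, p=w / w.sum())

Only the K total weights travel to the master, so the expensive per-particle work stays local to each slave.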


Minimal Quorum Sensing Model


Minimal Quorum Sensing Motif

Tanouchi Y, Tu D, Kim J, You L (2008) : “Noise Reduction by Diffusional Dissipation in a Minimal Quorum Sensing Motif”. PLoS Comput Biol 4(8).

Two genes, encoding proteins LuxI and LuxR

LuxI is AHL synthase

AHL freely diffuses in and out of the cell

As cell density increases, AHL density increases in the environment and in the cell

At sufficiently high concentration, AHL binds to and activates LuxR

This will in turn activate downstream genes.

Ai: intracellular AHL level
Ae: extracellular AHL level
R: LuxR protein level
C: AHL-LuxR complex level


Stochastic Differential Equations Model


Discretized Model

The Stochastic Differential Equation, when discretized, yields the following difference equation:

For our Minimal QS Motif the discretized version is the following:

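The generic discretization referred to here is presumably the Euler-Maruyama scheme: for a diffusion dx = f(x, µ) dt + Σ(x, µ)^{1/2} dW, a time step Δ gives

\[
x_{t+1} = x_t + f(x_t, \mu)\,\Delta + \sqrt{\Delta}\;\Sigma(x_t, \mu)^{1/2}\,\varepsilon_t, \qquad \varepsilon_t \sim N(0, I),
\]

with the specific drift f and diffusion Σ of the quorum sensing motif given on the slide.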


Some Notations

Let

With these notations our discretized model becomes:

Let us also introduce a compact notation for these quantities. Then,


Some Notations


As a State Space Model

System Equation:

We assume that we can observe xt = (Ai,t, Ae,t, Rt, Ct)’ with some measurement errors.

Let yt denote the observations made on the unknown states xt. Let us represent them as:

Observational Equation:

where V is unknown.

This makes our model fall into the general category of State Space Models:
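Given the description above, the observational equation presumably has the simple additive form

\[
y_t = x_t + v_t, \qquad v_t \sim N(0, V),
\]

with V unknown, so the model fits the general State Space framework with the system equation coming from the discretized SDE and this Gaussian observation density.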


Features of this Model

This model is nonlinear.

The system evolution variance matrix is not fixed: it depends on the latent states and the parameters.

So the basic assumption for a DLM (that the evolution variance matrices are known) does not hold here.

This does not cause any problem in Forward Filtering.

Note that for Backward Sampling the key identity is
p(xt | xt+1, y1:t) ∝ p(xt+1 | xt) p(xt | y1:t).

Now p(xt+1 | xt) has xt appearing in the variance matrix, so, as a function of xt, it is no longer a Gaussian density, and neither is the backward conditional.


MCMC Algorithm

Note that the system and observational equations are linear in the mean.

We have used the following approximation to run a FFBS that approximates the filtering and backward-sampling distributions.

As mentioned before, proposed states are accepted with a Metropolis-Hastings acceptance step.

The complete conditional distribution of V is Inverse-Wishart. It is updated using a Gibbs step.

Component parameters of µ appear in the system evolution equation, so the complete conditionals for the µ parameters are NOT Gaussian. We update them using a Random-Walk Metropolis-Hastings step within Gibbs.


Synthetic Data

We do not have real data.

For data simulation:

We use values for parameter µ as suggested in the literature.

For V we have made an arbitrary choice. The choices for Ai,0, Ae,0, R0, C0 are the expected values at a steady state.

We have generated synthetic observations y1, y2, …, y999, y1000


Bayesian Analysis

Prior Distributions: relatively flat Normal distributions truncated at zero for the µ parameters, and relatively flat Normal distributions truncated at zero for the initial states Ai,0, Ae,0, R0, C0.

Inverse Wishart distribution for the unknown variance matrix V.

An Identification Issue:

Since the parameters P, Vc, Ve are involved in the model only through two ratios, we do not have identifiability for all three parameters. We can only learn about these two ratios. Therefore we use the ratios as model parameters rather than the individual ones.


MCMC Results

We have run the MCMC for 10^6 iterations and the following results are from the last 10^5 iterations of the generated Markov chain.


Trace Plots and Autocorrelation Functions


SMC Algorithm

We have used an Auxiliary Particle Filter to reduce particle degeneracy.

For the observational equation variance matrix V, a sufficient statistics structure exists. We use the sufficient statistics to sample V exactly from its conditional at each step.

For the parameters in µ no sufficient statistics structure exists. We use the kernel smoothing technique to reduce particle degeneracy in the parameter space.

We have run our particle filters with 10^6 particles.


Quantiles

RED for MCMC, GREEN for SMC




Box Plots of Posterior Samples at time T = 1000

RED for MCMC, GREEN for SMC


Smoothed Posteriors at time T = 1000

RED for MCMC, GREEN for SMC, GREY for PRIOR


Marginal Likelihood Plot – Model Comparison


Summary

For nonlinear, non-Gaussian State Space models with long time series data, MCMC is slow and has issues with convergence.

Sequential Monte Carlo techniques provide an alternative class of non-iterative algorithms to solve this class of problems.

For long time series, SMC methods suffer from degeneracy issues, particularly when computing quantities like the Marginal Likelihood.

SMC is scalable, so with enough resources one can imagine tackling big-data problems which would otherwise take an enormous amount of time to solve with MCMC methods.

Model comparison becomes convenient, thanks to the easy computation of the marginal likelihood.


References

1. M West. Approximating Posterior Distributions by Mixtures. Journal of the Royal Statistical Society, (55):409-422, 1993a.

2. M West. Mixture Models, Monte Carlo, Bayesian Updating and Dynamic Models. J. H. Newton (ed.), Computing Science and Statistics: Proceedings of the 24th Symposium on the Interface, pages 325-333, 1993b.

3. C K Carter and R Kohn. On Gibbs Sampling for State Space Models. Biometrika, 81(3):541-553, August 1994.

4. J Liu and M West. Combined Parameter and State Estimation in Simulation-based Filtering. Sequential Monte Carlo Methods in Practice, pages 197-223, 2001.

5. P Fearnhead. MCMC, Sufficient Statistics, and Particle Filters. Journal of Computational and Graphical Statistics, (11):848–862, 2002.

6. G Storvik. Particle Filters in State Space Models with the Presence of Unknown Static Parameters. IEEE. Trans. of Signal Processing, (50):281–289, 2002.


References

7. S J Godsill, A Doucet, and M West. Monte Carlo Smoothing for Nonlinear Time Series. Journal of the American Statistical Association, 99(465), DOI: 10.1198/016214504000000151, March 2004.

8. M Bolić, P M Djurić, and S Hong. New Resampling Algorithms for Particle Filters. IEEE International Conference on Acoustics, Speech, and Signal Processing, Proceedings, April 2003.

9. M Bolić, P M Djurić, and S Hong. Resampling Algorithms for Particle Filters: A Computational Complexity Perspective. EURASIP Journal of Applied Signal Processing, (15):2267-2277, 2004.

10. M S Johannes and N Polson. Particle Filtering and Parameter Learning. Social Science Research Network, http://ssrn.com/abstract=983646, March 2007.

11. C M Carvalho, M Johannes, H F Lopes, and N Polson. Particle Learning and Smoothing. Working Paper, 2008.

12. Y Tanouchi, D Tu, J Kim, and L You. Noise Reduction by Diffusional Dissipation in a Minimal Quorum Sensing Motif. PLoS Computational Biology, 4(8):e1000167, doi:10.1371/journal.pcbi.1000167, August 2008.


THANK YOU