von Mises lecture, Berlin


description

Slides on ABC and ABC model choice, updated from Zürich with quotes from von Mises, defending the Bayesian approach versus the likelihood approach.

Transcript of von Mises lecture, Berlin

Page 1: von Mises lecture, Berlin

ABC Methods for Bayesian Model Choice

Christian P. Robert

Université Paris-Dauphine, IuF, & CREST
http://www.ceremade.dauphine.fr/~xian

Richard von Mises Lecture 2011, Humboldt-Universität zu Berlin

June 30, 2011

Page 2: von Mises lecture, Berlin


Introductory notions

Introduction

Approximate Bayesian computation

ABC for model choice

Page 3: von Mises lecture, Berlin


The ABC of [Bayesian] statistics

“What meaning the likelihood may have in any case (...) is inconceivable to me.” — R. von Mises (1957)

In a classical (Fisher, 1921) perspective, a statistical model is defined by the law of the observations, also called the likelihood

$$L(\theta \mid y_1, \ldots, y_n) = L(\theta \mid \mathbf{y}) \overset{\text{e.g.}}{=} \prod_{i=1}^n f(y_i \mid \theta)$$

Parameters θ are then estimated based on this function L(θ|y) and on the probabilistic properties of the distribution of the data.

Page 4: von Mises lecture, Berlin


The ABC of Bayesian statistics

“Bayes’ Theorem can also be generalized in such a way as to apply to statistical functions.” — R. von Mises (1957)

In the Bayesian approach (Bayes, 1763; Laplace, 1773), the parameter is endowed with a probability distribution as well, called the prior distribution, and the likelihood becomes a conditional distribution of the data given the parameter, now understood as a random variable.

Inference is then based on the posterior distribution, with density

$$\pi(\theta \mid y) \propto \pi(\theta)\, L(\theta \mid y) \qquad \text{[Bayes' Theorem]}$$

(also called the posterior)

Page 6: von Mises lecture, Berlin


A few details

“The main argument that p is not a [random] variable but an ‘unknown constant’ does not mean anything to me.” — R. von Mises (1957)

The parameter θ does not become a random variable (instead of an unknown constant) in the Bayesian paradigm. Probability calculus is used to quantify the uncertainty about θ as a calibrated quantity.

Page 7: von Mises lecture, Berlin


A few details

“Laplace, who had a laissez-faire attitude, deduced some simple rules used to an extent quite out of proportion to his modest starting point. Neither does he ever reveal how the sudden transition to real statistical events is to be made.” — R. von Mises (1957)

The prior density π(·) is to be understood as a reference measure which, in informative situations, may summarise the available prior information.

Page 8: von Mises lecture, Berlin


A few details

“If the number of experiments n is very large, then we know that the influence of the initial probabilities is not very considerable.” — R. von Mises (1957)

The impact of the prior density π(·) on the resulting inference is real but (mostly) vanishes as the number of observations grows. The only exception is the area of hypothesis testing, where both approaches remain irreconcilable.

Page 9: von Mises lecture, Berlin


von Mises’ conclusion

“We can only hope that statisticians will return to the use of the simple, lucid reasoning of Bayes’ conceptions, and accord to the likelihood theory its proper role.” — R. von Mises (1957)

[Berger and Wolpert, The Likelihood Principle, 1987]

Page 10: von Mises lecture, Berlin


Approximate Bayesian computation

Introduction

Approximate Bayesian computation
  ABC basics
  Alphabet soup
  Calibration of ABC

ABC for model choice

Page 11: von Mises lecture, Berlin


Regular Bayesian computation issues

When faced with a non-standard posterior distribution

π(θ|y) ∝ π(θ)L(θ|y)

the standard solution is to use simulation (Monte Carlo) to produce a sample

θ1, . . . , θT

from π(θ|y) (or approximately by Markov chain Monte Carlo methods)

[Robert & Casella, 2004]

Page 12: von Mises lecture, Berlin


Intractable likelihoods

Cases when the likelihood function f(y|θ) is unavailable and when the completion step

$$f(y \mid \theta) = \int_{\mathcal{Z}} f(y, z \mid \theta)\, dz$$

is impossible or too costly because of the dimension of z

© MCMC cannot be implemented!

Page 13: von Mises lecture, Berlin


Illustrations

Example

Stochastic volatility model: for t = 1, . . . , T,

$$y_t = \exp(z_t)\, \varepsilon_t\,, \qquad z_t = a + b z_{t-1} + \sigma \eta_t\,,$$

T very large makes it difficult to include z within the simulated parameters

[Figure: highest weight trajectories]

Page 14: von Mises lecture, Berlin


Illustrations

Example

Potts model: if y takes values on a grid Y of size $k^n$ and

$$f(y \mid \theta) \propto \exp\Big\{ \theta \sum_{l \sim i} \mathbb{I}_{y_l = y_i} \Big\}$$

where l∼i denotes a neighbourhood relation, n moderately large prohibits the computation of the normalising constant

Page 15: von Mises lecture, Berlin


Illustrations

Example

Inference on CMB: in cosmology, the study of the Cosmic Microwave Background via likelihoods immensely slow to compute (e.g., WMAP, Planck), because of numerically costly spectral transforms [the data is a Fortran program]

[Kilbinger et al., 2010, MNRAS]

Page 16: von Mises lecture, Berlin


Illustrations

Example

Coalescence tree: in population genetics, reconstitution of a common ancestor from a sample of genes via a phylogenetic tree that is close to impossible to integrate out [100 processor days with 4 parameters]

[Cornuet et al., 2009, Bioinformatics]

Page 17: von Mises lecture, Berlin


The ABC method

Bayesian setting: target is π(θ)f(x|θ)

When the likelihood f(x|θ) is not in closed form, a likelihood-free rejection technique:

ABC algorithm

For an observation y ∼ f(y|θ), under the prior π(θ), keep jointly simulating

$$\theta' \sim \pi(\theta)\,, \quad z \sim f(z \mid \theta')\,,$$

until the auxiliary variable z is equal to the observed value, z = y.

[Tavaré et al., 1997]
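For concreteness, here is a minimal sketch of this exact-match rejection scheme, for a toy model where z = y has positive probability; the Binomial likelihood, the Uniform(0,1) prior, and the values of n and y_obs are illustrative assumptions, not taken from the slides.

```python
# Exact-match ABC rejection on a discrete toy model (assumed for illustration):
# y ~ Binomial(n, theta) with theta ~ U(0,1).
import random

def abc_exact(y_obs, n, n_accept=1000):
    accepted = []
    while len(accepted) < n_accept:
        theta = random.random()                             # theta' ~ pi(theta)
        z = sum(random.random() < theta for _ in range(n))  # z ~ f(z|theta')
        if z == y_obs:                                      # keep only z = y
            accepted.append(theta)
    return accepted

post = abc_exact(y_obs=7, n=10)
print(sum(post) / len(post))   # ~ mean of the Beta(8, 4) posterior, i.e. 2/3
```

Since the posterior is available in closed form here (Beta(y+1, n−y+1)), the printed mean can be checked against (y+1)/(n+2), which is the point of using such a toy model.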

Page 20: von Mises lecture, Berlin


Why does it work?!

The proof is trivial:

$$f(\theta_i) \propto \sum_{z \in \mathcal{D}} \pi(\theta_i)\, f(z \mid \theta_i)\, \mathbb{I}_y(z) \propto \pi(\theta_i)\, f(y \mid \theta_i) = \pi(\theta_i \mid y)\,.$$

[Accept–Reject 101]

Page 21: von Mises lecture, Berlin


Earlier occurrence

‘Bayesian statistics and Monte Carlo methods are ideally suited to the task of passing many models over one dataset’

[Don Rubin, Annals of Statistics, 1984]

Note that Rubin (1984) does not promote this algorithm for likelihood-free simulation but as a frequentist intuition on posterior distributions: parameters simulated from the posterior are more likely to be those that could have generated the data.

Page 22: von Mises lecture, Berlin


A as approximative

When y is a continuous random variable, the equality z = y is replaced with a tolerance condition,

$$\rho(y, z) \le \varepsilon$$

where ρ is a distance

Output distributed from

$$\pi(\theta)\, P_\theta\{\rho(y, z) < \varepsilon\} \propto \pi(\theta \mid \rho(y, z) < \varepsilon)$$

Page 24: von Mises lecture, Berlin


ABC algorithm

Algorithm 1 Likelihood-free rejection sampler

for i = 1 to N do
  repeat
    generate θ′ from the prior distribution π(·)
    generate z from the likelihood f(·|θ′)
  until ρ{η(z), η(y)} ≤ ε
  set θi = θ′
end for

where η(y) defines a (maybe insufficient) statistic
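A sketch of Algorithm 1 under stated assumptions (a normal location model, a U(−5,5) prior, η = sample mean, and ρ = absolute difference, all chosen only to make the example self-contained):

```python
# Likelihood-free rejection sampler with tolerance eps and summary statistic.
import random, statistics

def abc_reject(y, eps, N=500):
    eta_y = statistics.mean(y)                           # eta(y)
    sample = []
    for _ in range(N):
        while True:
            theta = random.uniform(-5, 5)                # generate theta' from pi(.)
            z = [random.gauss(theta, 1) for _ in y]      # generate z from f(.|theta')
            if abs(statistics.mean(z) - eta_y) <= eps:   # rho{eta(z), eta(y)} <= eps
                break
        sample.append(theta)                             # set theta_i = theta'
    return sample

y = [random.gauss(1.5, 1) for _ in range(20)]
print(statistics.mean(abc_reject(y, eps=0.1)))
```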

Page 25: von Mises lecture, Berlin


Output

The likelihood-free algorithm samples from the marginal in z of:

$$\pi_\varepsilon(\theta, z \mid y) = \frac{\pi(\theta)\, f(z \mid \theta)\, \mathbb{I}_{A_{\varepsilon,y}}(z)}{\int_{A_{\varepsilon,y} \times \Theta} \pi(\theta)\, f(z \mid \theta)\, dz\, d\theta}\,,$$

where $A_{\varepsilon,y} = \{z \in \mathcal{D} \mid \rho(\eta(z), \eta(y)) < \varepsilon\}$.

The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:

$$\pi_\varepsilon(\theta \mid y) = \int \pi_\varepsilon(\theta, z \mid y)\, dz \approx \pi(\theta \mid y)\,.$$

Page 27: von Mises lecture, Berlin


Pima Indian benchmark

[Figure: Comparison between density estimates of the marginals on β1 (left), β2 (center) and β3 (right) from ABC rejection samples (red) and MCMC samples (black)]

Page 28: von Mises lecture, Berlin


MA example

Consider the MA(q) model

$$x_t = \varepsilon_t + \sum_{i=1}^q \vartheta_i\, \varepsilon_{t-i}$$

Simple prior: uniform prior over the identifiability zone, e.g., the triangle for MA(2)

Page 29: von Mises lecture, Berlin


MA example (2)

ABC algorithm thus made of

1. picking a new value (ϑ1, ϑ2) in the triangle
2. generating an iid sequence $(\varepsilon_t)_{-q < t \le T}$
3. producing a simulated series $(x'_t)_{1 \le t \le T}$

Distance: basic distance between the series

$$\rho\big((x'_t)_{1 \le t \le T}, (x_t)_{1 \le t \le T}\big) = \sum_{t=1}^T (x_t - x'_t)^2$$

or between summary statistics like the first q autocorrelations

$$\tau_j = \sum_{t=j+1}^T x_t\, x_{t-j}$$
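A sketch of one ABC proposal for this MA(2) example: a uniform draw over the identifiability triangle, a simulated series, and both distances above; T = 100 and q = 2 are illustrative, and the rejection step is left out.

```python
# One MA(2) ABC proposal with both candidate distances (assumed toy setup).
import random

def draw_triangle():
    # uniform over the MA(2) triangle theta2 < 1, theta2 > theta1 - 1,
    # theta2 > -theta1 - 1, sampled by rejection from the bounding box
    while True:
        t1, t2 = random.uniform(-2, 2), random.uniform(-1, 1)
        if t2 > t1 - 1 and t2 > -t1 - 1:
            return t1, t2

def ma2_series(t1, t2, T):
    eps = [random.gauss(0, 1) for _ in range(T + 2)]   # (eps_t), -q < t <= T
    return [eps[t + 2] + t1 * eps[t + 1] + t2 * eps[t] for t in range(T)]

def tau(x, j):
    return sum(x[t] * x[t - j] for t in range(j, len(x)))

def distances(x, xp):
    d_raw = sum((a - b) ** 2 for a, b in zip(x, xp))             # raw-series distance
    d_acf = sum((tau(x, j) - tau(xp, j)) ** 2 for j in (1, 2))   # first q autocorrelations
    return d_raw, d_acf

x_obs = ma2_series(0.6, 0.2, 100)
print(distances(x_obs, ma2_series(*draw_triangle(), 100)))
```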

Page 32: von Mises lecture, Berlin


Comparison of distance impact

[Figure: ABC samples of θ1 (left) and θ2 (right) under both distances]

Evaluation of the tolerance on the ABC sample against both distances (ε = 100%, 10%, 1%, 0.1%) for an MA(2) model

Page 34: von Mises lecture, Berlin


ABC advances

Simulating from the prior is often poor in efficiency.

Either modify the proposal distribution on θ to increase the density of x’s within the vicinity of y...

[Marjoram et al., 2003; Bortot et al., 2007; Sisson et al., 2007]

...or view the problem as one of conditional density estimation and develop techniques to allow for larger ε

[Beaumont et al., 2002]

...or even include ε in the inferential framework [ABCµ]

[Ratmann et al., 2009]

Page 38: von Mises lecture, Berlin


ABC-NP

Better usage of [prior] simulations by adjustment: instead of throwing away θ′ such that ρ(η(z), η(y)) > ε, replace the θ's with locally regressed

$$\theta^* = \theta - \{\eta(z) - \eta(y)\}^{\mathrm{T}} \hat\beta$$

[Csilléry et al., TEE, 2010]

where $\hat\beta$ is obtained by [NP] weighted least squares regression on (η(z) − η(y)) with weights

$$K_\delta\{\rho(\eta(z), \eta(y))\}$$

[Beaumont et al., 2002, Genetics]
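A sketch of this local regression adjustment, for scalar θ and scalar η so the algebra stays visible; the Epanechnikov kernel and the simulated inputs are assumptions made for illustration.

```python
# Local-linear ABC adjustment: weighted LS of theta on eta(z) - eta(y),
# then theta* = theta - beta * {eta(z) - eta(y)} for the retained draws.
import numpy as np

def abc_np(thetas, etas, eta_obs, delta):
    u = etas - eta_obs                              # eta(z) - eta(y)
    w = np.maximum(1 - (u / delta) ** 2, 0.0)       # kernel weights K_delta{rho(.,.)}
    X = np.column_stack([np.ones_like(u), u])       # intercept and local slope
    WX = w[:, None] * X
    coef = np.linalg.solve(X.T @ WX, X.T @ (w * thetas))  # weighted least squares
    beta = coef[1]
    keep = w > 0
    return thetas[keep] - beta * u[keep]            # adjusted draws theta*

rng = np.random.default_rng(0)
thetas = rng.uniform(-5, 5, 5000)
etas = rng.normal(thetas, 1 / np.sqrt(20))          # eta(z): mean of 20 obs per theta
print(abc_np(thetas, etas, eta_obs=1.5, delta=0.5).mean())
```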

Page 39: von Mises lecture, Berlin


ABC-MCMC

Markov chain (θ(t)) created via the transition function

$$\theta^{(t+1)} = \begin{cases} \theta' \sim K_\omega(\theta' \mid \theta^{(t)}) & \text{if } x \sim f(x \mid \theta') \text{ is such that } x = y \\ & \text{and } u \sim \mathcal{U}(0,1) \le \dfrac{\pi(\theta')\, K_\omega(\theta^{(t)} \mid \theta')}{\pi(\theta^{(t)})\, K_\omega(\theta' \mid \theta^{(t)})}\,, \\ \theta^{(t)} & \text{otherwise,} \end{cases}$$

has the posterior π(θ|y) as stationary distribution

[Marjoram et al., 2003]

Page 41: von Mises lecture, Berlin


ABC-MCMC (2)

Algorithm 2 Likelihood-free MCMC sampler

Use Algorithm 1 to get (θ(0), z(0))
for t = 1 to N do
  Generate θ′ from Kω(·|θ(t−1))
  Generate z′ from the likelihood f(·|θ′)
  Generate u from U[0,1]
  if u ≤ [π(θ′) Kω(θ(t−1)|θ′) / π(θ(t−1)) Kω(θ′|θ(t−1))] · I_{Aε,y}(z′) then
    set (θ(t), z(t)) = (θ′, z′)
  else
    (θ(t), z(t)) = (θ(t−1), z(t−1))
  end if
end for
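A sketch of Algorithm 2 on the same assumed normal toy model as before (U(−5,5) prior, symmetric random-walk kernel Kω, η = sample mean); with a flat prior and a symmetric kernel the Metropolis–Hastings ratio reduces to the indicator I_{Aε,y}(z′).

```python
# Likelihood-free MCMC sampler on an assumed toy model.
import random, statistics

def abc_mcmc(y, eps, N=5000, scale=0.5, theta0=0.0):
    eta_y = statistics.mean(y)
    theta, chain = theta0, []                    # (theta0, z0) assumed from Algorithm 1
    for _ in range(N):
        prop = theta + random.gauss(0, scale)    # theta' ~ K_omega(.|theta^(t-1))
        z = [random.gauss(prop, 1) for _ in y]   # z' ~ f(.|theta')
        # flat prior on (-5,5) + symmetric kernel => accept iff z' lands in A_eps,y
        if abs(prop) < 5 and abs(statistics.mean(z) - eta_y) <= eps:
            theta = prop
        chain.append(theta)                      # otherwise keep the previous state
    return chain

y = [random.gauss(1.5, 1) for _ in range(20)]
print(statistics.mean(abc_mcmc(y, eps=0.1)))
```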

Page 42: von Mises lecture, Berlin


Why does it work?

The acceptance probability does not involve the calculation of the likelihood, since

$$\frac{\pi_\varepsilon(\theta', z' \mid y)}{\pi_\varepsilon(\theta^{(t-1)}, z^{(t-1)} \mid y)} \times \frac{K_\omega(\theta^{(t-1)} \mid \theta')\, f(z^{(t-1)} \mid \theta^{(t-1)})}{K_\omega(\theta' \mid \theta^{(t-1)})\, f(z' \mid \theta')}$$

$$= \frac{\pi(\theta')\, f(z' \mid \theta')\, \mathbb{I}_{A_{\varepsilon,y}}(z')}{\pi(\theta^{(t-1)})\, f(z^{(t-1)} \mid \theta^{(t-1)})\, \mathbb{I}_{A_{\varepsilon,y}}(z^{(t-1)})} \times \frac{K_\omega(\theta^{(t-1)} \mid \theta')\, f(z^{(t-1)} \mid \theta^{(t-1)})}{K_\omega(\theta' \mid \theta^{(t-1)})\, f(z' \mid \theta')}$$

$$= \frac{\pi(\theta')\, K_\omega(\theta^{(t-1)} \mid \theta')}{\pi(\theta^{(t-1)})\, K_\omega(\theta' \mid \theta^{(t-1)})}\, \mathbb{I}_{A_{\varepsilon,y}}(z')\,.$$

Page 43: von Mises lecture, Berlin


ABCµ

[Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]

Use of a joint density

$$f(\theta, \varepsilon \mid y) \propto \xi(\varepsilon \mid y, \theta) \times \pi_\theta(\theta) \times \pi_\varepsilon(\varepsilon)$$

where y is the data and ξ(ε|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f(z|θ)

Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel approximation.

Page 45: von Mises lecture, Berlin


ABCµ details

Multidimensional distances ρk (k = 1, . . . , K) and errors εk = ρk(ηk(z), ηk(y)), with

$$\varepsilon_k \sim \xi_k(\varepsilon \mid y, \theta) \approx \hat\xi_k(\varepsilon \mid y, \theta) = \frac{1}{B h_k} \sum_b K[\{\varepsilon_k - \rho_k(\eta_k(z_b), \eta_k(y))\}/h_k]$$

then used in replacing ξ(ε|y, θ) with $\min_k \hat\xi_k(\varepsilon \mid y, \theta)$

ABCµ involves the acceptance probability

$$\frac{\pi(\theta', \varepsilon')}{\pi(\theta, \varepsilon)}\; \frac{q(\theta', \theta)\, q(\varepsilon', \varepsilon)}{q(\theta, \theta')\, q(\varepsilon, \varepsilon')}\; \frac{\min_k \hat\xi_k(\varepsilon' \mid y, \theta')}{\min_k \hat\xi_k(\varepsilon \mid y, \theta)}$$

Page 47: von Mises lecture, Berlin


ABCµ multiple errors

[Figure: © Ratmann et al., PNAS, 2009]

Page 48: von Mises lecture, Berlin


ABCµ for model choice

[Figure: © Ratmann et al., PNAS, 2009]

Page 49: von Mises lecture, Berlin


Questions about ABCµ

For each model under comparison, the marginal posterior on ε is used to assess the fit of the model (whether or not the HPD region includes 0).

- Is the data informative about ε? [Identifiability]
- How is the prior π(ε) impacting the comparison?
- How is using both ξ(ε|x0, θ) and πε(ε) compatible with a standard probability model? [reminiscent of Wilkinson]
- Where is the penalisation for complexity in the model comparison?

[X, Mengersen & Chen, 2010, PNAS]

Page 51: von Mises lecture, Berlin


A PMC version

Use of the same kernel idea as ABC-PRC, but with an IS correction. Generate a sample at iteration t by

$$\hat\pi_t(\theta^{(t)}) \propto \sum_{j=1}^N \omega_j^{(t-1)}\, K_t(\theta^{(t)} \mid \theta_j^{(t-1)})$$

modulo acceptance of the associated x_t, and use an importance weight associated with an accepted simulation $\theta_i^{(t)}$

$$\omega_i^{(t)} \propto \pi(\theta_i^{(t)}) \big/ \hat\pi_t(\theta_i^{(t)})\,.$$

© Still likelihood free

[Beaumont et al., Biometrika, 2009]
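A sketch of one ABC-PMC iteration, again on the assumed normal toy model with a U(−5,5) prior: resample from the previous weighted sample, move with a Gaussian kernel K_t of scale tau, accept within the current tolerance, and reweight by prior over proposal mixture. All concrete choices are illustrative.

```python
# One ABC-PMC step: mixture proposal + importance-weight correction.
import math, random, statistics

def pmc_step(y, thetas, weights, eps, tau):
    eta_y = statistics.mean(y)
    new_t, new_w = [], []
    for _ in range(len(thetas)):
        while True:
            base = random.choices(thetas, weights)[0]   # theta_j^(t-1) picked w.p. omega_j
            prop = random.gauss(base, tau)              # move through K_t
            z = [random.gauss(prop, 1) for _ in y]
            if abs(prop) < 5 and abs(statistics.mean(z) - eta_y) <= eps:
                break
        # omega_i^(t) proportional to pi(theta) / sum_j omega_j K_t(theta|theta_j);
        # constant factors cancel in the final normalisation
        mix = sum(w * math.exp(-0.5 * ((prop - t) / tau) ** 2)
                  for t, w in zip(thetas, weights))
        new_t.append(prop)
        new_w.append(0.1 / mix)                         # U(-5,5) prior density = 1/10
    s = sum(new_w)
    return new_t, [w / s for w in new_w]
```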

Page 52: von Mises lecture, Berlin


Sequential Monte Carlo

SMC is a simulation technique to approximate a sequence of related probability distributions πn, with π0 “easy” and πT the target. Iterated IS as in PMC: particles are moved from time n−1 to time n via a kernel Kn, with a sequence of extended targets π̃n

$$\tilde\pi_n(z_{0:n}) = \pi_n(z_n) \prod_{j=0}^{n-1} L_j(z_{j+1}, z_j)$$

where the Lj's are backward Markov kernels [check that πn(zn) is a marginal]

[Del Moral, Doucet & Jasra, Series B, 2006]

Page 53: von Mises lecture, Berlin


ABC-SMC

True derivation of an SMC-ABC algorithm: use of a kernel Kn associated with the target πεn and derivation of the backward kernel

$$L_{n-1}(z, z') = \frac{\pi_{\varepsilon_n}(z')\, K_n(z', z)}{\pi_{\varepsilon_n}(z)}$$

Update of the weights

$$\omega_{i,n} \propto \omega_{i,n-1}\; \frac{\sum_{m=1}^M \mathbb{I}_{A_{\varepsilon_n}}(x^m_{i,n})}{\sum_{m=1}^M \mathbb{I}_{A_{\varepsilon_{n-1}}}(x^m_{i,n-1})}$$

when $x^m_{i,n} \sim K(x_{i,n-1}, \cdot)$
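A small numeric sketch of this weight update: with M pseudo-datasets per particle, the weight is multiplied by the ratio of survival proportions at the new and old tolerances. Function and argument names are illustrative.

```python
# ABC-SMC weight update as a ratio of surviving-replicate counts.
def smc_weight(w_prev, dists_new, dists_old, eps_new, eps_old):
    num = sum(d <= eps_new for d in dists_new)   # sum_m I_{A_eps_n}(x^m_{i,n})
    den = sum(d <= eps_old for d in dists_old)   # sum_m I_{A_eps_{n-1}}(x^m_{i,n-1})
    return w_prev * num / den if den else 0.0

print(smc_weight(1.0, [0.3, 0.9, 0.2], [0.4, 1.1, 0.6], eps_new=0.5, eps_old=0.8))
```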

Page 54: von Mises lecture, Berlin


Properties of ABC-SMC

The ABC-SMC method properly uses a backward kernel L(z, z′) to simplify the importance weight and to remove the dependence on the unknown likelihood from this weight. The update of the importance weights reduces to the ratio of the proportions of surviving particles.

Major assumption: the forward kernel K is supposed to be invariant against the true target [a tempered version of the true posterior].

Adaptivity in the ABC-SMC algorithm is only found in the on-line construction of the thresholds εt, decreased slowly enough to keep a large number of accepted transitions

[Del Moral, Doucet & Jasra, 2009]

Page 56: von Mises lecture, Berlin


Which summary statistics?

Fundamental difficulty of the choice of the summary statistic when there is no non-trivial sufficient statistic [except when done by the experimenters in the field].

Starting from a large collection of summary statistics, Joyce and Marjoram (2008) consider their sequential inclusion into the ABC target, with a stopping rule based on a likelihood ratio test.

- Does not take into account the sequential nature of the tests
- Depends on the parameterisation
- Order of inclusion matters.

Page 59: von Mises lecture, Berlin


ABC for model choice

Introduction

Approximate Bayesian computation

ABC for model choice
  Model choice
  Gibbs random fields
  Generic ABC model choice

Page 60: von Mises lecture, Berlin


Bayesian model choice

Several models M1, M2, . . . are considered simultaneously for a dataset y, and the model index M is part of the inference. Use of a prior distribution π(M = m), plus a prior distribution on the parameter conditional on the value m of the model index, πm(θm). The goal is to derive the posterior distribution of M, a challenging computational target when models are complex.

Page 61: von Mises lecture, Berlin


Generic ABC for model choice

Algorithm 3 Likelihood-free model choice sampler (ABC-MC)

for t = 1 to T do
  repeat
    Generate m from the prior π(M = m)
    Generate θm from the prior πm(θm)
    Generate z from the model fm(z|θm)
  until ρ{η(z), η(y)} < ε
  Set m(t) = m and θ(t) = θm
end for

[Toni, Welch, Strelkowa, Ipsen & Stumpf, 2009]
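A sketch of Algorithm 3 on an assumed two-model toy problem (Poisson versus geometric, anticipating a later slide), with η(y) = Σ y; the priors λ ∼ Exp(1) and p ∼ U(0,1) and all numerical values are illustrative assumptions.

```python
# Likelihood-free model choice sampler (ABC-MC) on an assumed toy problem.
import math, random

def rpoisson(lam):
    k, p, L = 0, 1.0, math.exp(-lam)     # Knuth's Poisson sampler
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def rgeometric(p):
    k = 0                                # failures before first success
    while random.random() > p:
        k += 1
    return k

def abc_mc(y, eps, T=5000):
    s_obs, n = sum(y), len(y)
    visits = {1: 0, 2: 0}
    for _ in range(T):
        while True:
            m = random.choice((1, 2))                 # m ~ pi(M = m)
            if m == 1:
                lam = random.expovariate(1.0)         # theta_m ~ pi_m(theta_m)
                z = [rpoisson(lam) for _ in range(n)]
            else:
                p = random.random()
                z = [rgeometric(p) for _ in range(n)]
            if abs(sum(z) - s_obs) <= eps:            # rho{eta(z), eta(y)} < eps
                break
        visits[m] += 1                                # m^(t) = m
    return {m: c / T for m, c in visits.items()}      # frequencies estimate pi(M=m|y)

print(abc_mc([3, 1, 0, 2, 4, 1, 2, 0, 1, 3], eps=2))
```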

Page 62: von Mises lecture, Berlin


ABC estimates

Posterior probability π(M = m|y) approximated by the frequency of acceptances from model m

$$\frac{1}{T} \sum_{t=1}^T \mathbb{I}_{m^{(t)} = m}\,.$$

Early issues with implementation:

- should tolerances ε be the same for all models?
- should summary statistics vary across models?
- should the distance measure ρ vary as well?

Extension to a weighted polychotomous logistic regression estimate of π(M = m|y), with non-parametric kernel weights

[Cornuet et al., DIYABC, 2009]

Page 65: von Mises lecture, Berlin


The Great ABC controversy [# 1?]

On-going controversy in phylogeographic genetics about the validity of using ABC for testing.

Against: Templeton, 2008, 2009, 2010a, 2010b, 2010c argues that nested hypotheses cannot have higher probabilities than nesting hypotheses (!)

Replies: Fagundes et al., 2008, Beaumont et al., 2010, Berger et al., 2010, Csilléry et al., 2010 point out that the criticisms are addressed at [Bayesian] model-based inference and have nothing to do with ABC...

Page 66: von Mises lecture, Berlin


Gibbs random fields

Gibbs distribution

The rv y = (y1, . . . , yn) is a Gibbs random field associated with the graph G if

$$f(y) = \frac{1}{Z} \exp\Big\{ -\sum_{c \in \mathcal{C}} V_c(y_c) \Big\}\,,$$

where Z is the normalising constant, C is the set of cliques of G, and Vc is any function, also called a potential; U(y) = Σ_{c∈C} Vc(yc) is the energy function

© Z is usually unavailable in closed form

Page 68: von Mises lecture, Berlin


Potts model

Vc(y) is of the form

$$V_c(y) = \theta S(y) = \theta \sum_{l \sim i} \delta_{y_l = y_i}$$

where l∼i denotes a neighbourhood structure

In most realistic settings, the summation

$$Z_\theta = \sum_{x \in \mathcal{X}} \exp\{\theta^{\mathrm{T}} S(x)\}$$

involves too many terms to be manageable and numerical approximations cannot always be trusted
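A brute-force illustration of why Z_θ explodes: summing over all k^n configurations is feasible only for a tiny grid. The 3×3 binary grid and θ = 0.8 are illustrative choices.

```python
# Exhaustive computation of the Potts normalising constant on a tiny grid.
import math
from itertools import product

def S(x, rows=3, cols=3):
    # number of horizontally/vertically neighbouring pairs l~i with x_l = x_i
    s = 0
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                s += x[r * cols + c] == x[r * cols + c + 1]
            if r + 1 < rows:
                s += x[r * cols + c] == x[(r + 1) * cols + c]
    return s

theta, k = 0.8, 2
Z = sum(math.exp(theta * S(x)) for x in product(range(k), repeat=9))
print(Z)   # 2^9 = 512 terms here; a 100x100 grid would need 2^10000
```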

Page 70: von Mises lecture, Berlin


Bayesian Model Choice

Comparing a model with energy S0 taking values in $\mathbb{R}^{p_0}$ versus a model with energy S1 taking values in $\mathbb{R}^{p_1}$ can be done through the Bayes factor corresponding to the priors π0 and π1 on each parameter space

$$B_{m_0/m_1}(x) = \frac{\int \exp\{\theta_0^{\mathrm{T}} S_0(x)\} \big/ Z_{\theta_0,0}\; \pi_0(d\theta_0)}{\int \exp\{\theta_1^{\mathrm{T}} S_1(x)\} \big/ Z_{\theta_1,1}\; \pi_1(d\theta_1)}$$

Page 71: von Mises lecture, Berlin


Neighbourhood relations

Choice to be made between M neighbourhood relations

$$i \overset{m}{\sim} i' \qquad (0 \le m \le M-1)$$

with

$$S_m(x) = \sum_{i \overset{m}{\sim} i'} \mathbb{I}_{\{x_i = x_{i'}\}}$$

driven by the posterior probabilities of the models.

Page 72: von Mises lecture, Berlin


Model index

Computational target:

$$\mathbb{P}(M = m \mid x) \propto \int_{\Theta_m} f_m(x \mid \theta_m)\, \pi_m(\theta_m)\, d\theta_m\; \pi(M = m)\,,$$

If S(x) is a sufficient statistic for the joint parameters (M, θ0, . . . , θM−1), then

$$\mathbb{P}(M = m \mid x) = \mathbb{P}(M = m \mid S(x))\,.$$

Page 74: von Mises lecture, Berlin


Sufficient statistics in Gibbs random fields

Each model m has its own sufficient statistic Sm(·), and S(·) = (S0(·), . . . , SM−1(·)) is also (model-)sufficient.

For Gibbs random fields,

$$x \mid M = m \sim f_m(x \mid \theta_m) = f^1_m(x \mid S(x))\, f^2_m(S(x) \mid \theta_m) = \frac{1}{n(S(x))}\, f^2_m(S(x) \mid \theta_m)$$

where

$$n(S(x)) = \sharp\, \{\tilde{x} \in \mathcal{X} : S(\tilde{x}) = S(x)\}$$

© S(x) is therefore also sufficient for the joint parameters

Page 77: von Mises lecture, Berlin


ABC model choice Algorithm

ABC-MC

- Generate m∗ from the prior π(M = m).
- Generate θ∗m∗ from the prior πm∗(·).
- Generate x∗ from the model fm∗(·|θ∗m∗).
- Compute the distance ρ(S(x0), S(x∗)).
- Accept (θ∗m∗, m∗) if ρ(S(x0), S(x∗)) < ε.

Note: when ε = 0 the algorithm is exact

Page 78: von Mises lecture, Berlin


Toy example

iid Bernoulli model versus two-state first-order Markov chain, i.e.

$$f_0(x \mid \theta_0) = \exp\Big( \theta_0 \sum_{i=1}^n \mathbb{I}_{\{x_i = 1\}} \Big) \Big/ \{1 + \exp(\theta_0)\}^n\,,$$

versus

$$f_1(x \mid \theta_1) = \frac{1}{2}\, \exp\Big( \theta_1 \sum_{i=2}^n \mathbb{I}_{\{x_i = x_{i-1}\}} \Big) \Big/ \{1 + \exp(\theta_1)\}^{n-1}\,,$$

with priors θ0 ∼ U(−5, 5) and θ1 ∼ U(0, 6) (inspired by “phase transition” boundaries).
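A sketch of the exact Bayes factor B01 for this toy example, integrating each likelihood above over its uniform prior by midpoint quadrature; the binary sequence below is an illustrative input.

```python
# Exact (numerically integrated) Bayes factor for Bernoulli vs Markov chain.
import math

def bf01(x, grid=2000):
    n = len(x)
    s0 = sum(xi == 1 for xi in x)                     # sum_i I{x_i = 1}
    s1 = sum(x[i] == x[i - 1] for i in range(1, n))   # sum_i I{x_i = x_{i-1}}
    f0 = lambda t: math.exp(t * s0) / (1 + math.exp(t)) ** n
    f1 = lambda t: 0.5 * math.exp(t * s1) / (1 + math.exp(t)) ** (n - 1)
    # averaging f over the prior support equals integrating against the
    # uniform prior density
    m0 = sum(f0(-5 + 10 * (j + 0.5) / grid) for j in range(grid)) / grid
    m1 = sum(f1(6 * (j + 0.5) / grid) for j in range(grid)) / grid
    return m0 / m1

print(bf01([1, 1, 0, 1, 1, 1, 0, 0, 1, 1]))
```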

Page 79: von Mises lecture, Berlin


Toy example (2)

[Figure: log Bayes factors, true versus ABC]

(left) Comparison of the true $BF_{m_0/m_1}(x_0)$ with $\widehat{BF}_{m_0/m_1}(x_0)$ (in logs) over 2,000 simulations and $4 \times 10^6$ proposals from the prior. (right) Same when using a tolerance ε corresponding to the 1% quantile on the distances.

Page 80: von Mises lecture, Berlin


Back to sufficiency

‘Sufficient statistics for individual models are unlikely to be very informative for the model probability.’

[Scott Sisson, Jan. 31, 2011, X.’Og]

If η1(x) is a sufficient statistic for model m = 1 and parameter θ1, and η2(x) is a sufficient statistic for model m = 2 and parameter θ2, then (η1(x), η2(x)) is not always sufficient for (m, θm)

© Potential loss of information at the testing level

Page 83: von Mises lecture, Berlin


Limiting behaviour of B12 (T →∞)

ABC approximation

$$\widehat{B}_{12}(y) = \frac{\sum_{t=1}^T \mathbb{I}_{m^t = 1}\; \mathbb{I}_{\rho\{\eta(z^t), \eta(y)\} \le \varepsilon}}{\sum_{t=1}^T \mathbb{I}_{m^t = 2}\; \mathbb{I}_{\rho\{\eta(z^t), \eta(y)\} \le \varepsilon}}\,,$$

where the (m^t, z^t)'s are simulated from the (joint) prior.

As T goes to infinity, the limit is

$$B^\varepsilon_{12}(y) = \frac{\int \mathbb{I}_{\rho\{\eta(z), \eta(y)\} \le \varepsilon}\; \pi_1(\theta_1)\, f_1(z \mid \theta_1)\, dz\, d\theta_1}{\int \mathbb{I}_{\rho\{\eta(z), \eta(y)\} \le \varepsilon}\; \pi_2(\theta_2)\, f_2(z \mid \theta_2)\, dz\, d\theta_2} = \frac{\int \mathbb{I}_{\rho\{\eta, \eta(y)\} \le \varepsilon}\; \pi_1(\theta_1)\, f_1^\eta(\eta \mid \theta_1)\, d\eta\, d\theta_1}{\int \mathbb{I}_{\rho\{\eta, \eta(y)\} \le \varepsilon}\; \pi_2(\theta_2)\, f_2^\eta(\eta \mid \theta_2)\, d\eta\, d\theta_2}\,,$$

where $f_1^\eta(\eta \mid \theta_1)$ and $f_2^\eta(\eta \mid \theta_2)$ are the distributions of η(z)

Page 85: von Mises lecture, Berlin


Limiting behaviour of B12 (ε→ 0)

When ε goes to zero,

$$B^\eta_{12}(y) = \frac{\int \pi_1(\theta_1)\, f_1^\eta(\eta(y) \mid \theta_1)\, d\theta_1}{\int \pi_2(\theta_2)\, f_2^\eta(\eta(y) \mid \theta_2)\, d\theta_2}\,,$$

© Bayes factor based on the sole observation of η(y)

Page 87: von Mises lecture, Berlin


Limiting behaviour of B12 (under sufficiency)

If η(y) is a sufficient statistic for both models,

$$f_i(y \mid \theta_i) = g_i(y)\, f_i^\eta(\eta(y) \mid \theta_i)$$

Thus

$$B_{12}(y) = \frac{\int_{\Theta_1} \pi_1(\theta_1)\, g_1(y)\, f_1^\eta(\eta(y) \mid \theta_1)\, d\theta_1}{\int_{\Theta_2} \pi_2(\theta_2)\, g_2(y)\, f_2^\eta(\eta(y) \mid \theta_2)\, d\theta_2} = \frac{g_1(y)\, \int \pi_1(\theta_1)\, f_1^\eta(\eta(y) \mid \theta_1)\, d\theta_1}{g_2(y)\, \int \pi_2(\theta_2)\, f_2^\eta(\eta(y) \mid \theta_2)\, d\theta_2} = \frac{g_1(y)}{g_2(y)}\, B^\eta_{12}(y)\,.$$

[Didelot, Everitt, Johansen & Lawson, 2011]

© No discrepancy only when there is cross-model sufficiency

Page 89: von Mises lecture, Berlin


Poisson/geometric example

Sample x = (x1, . . . , xn) from either a Poisson P(λ) or from a geometric G(p). Then

$$S = \sum_{i=1}^n x_i = \eta(x)$$

is a sufficient statistic for either model, but not simultaneously for both.

Discrepancy ratio

$$\frac{g_1(x)}{g_2(x)} = \frac{S!\, n^{-S} \big/ \prod_i x_i!}{1 \big/ \binom{n+S-1}{S}}$$
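A sketch evaluating this discrepancy ratio for given samples: two datasets with the same S = η(x) give different ratios, so the information discarded by conditioning on S differs across models. The samples are illustrative.

```python
# Discrepancy ratio g1(x)/g2(x) for the Poisson/geometric example.
from math import comb, factorial, prod

def g_ratio(x):
    n, S = len(x), sum(x)
    g1 = factorial(S) / (n ** S * prod(factorial(xi) for xi in x))  # Poisson g_1
    g2 = 1 / comb(n + S - 1, S)                                     # geometric g_2
    return g1 / g2

print(g_ratio([3, 0, 2, 1, 1]), g_ratio([4, 0, 0, 0, 3]))   # same S = 7, different ratios
```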

Page 90: von Mises lecture, Berlin


Poisson/geometric discrepancy

[Figure] Range of $B_{12}(x)$ versus $B^\eta_{12}(x)$: the values produced have nothing in common.

Page 91: von Mises lecture, Berlin


Formal recovery

Creating an encompassing exponential family

$$f(x \mid \theta_1, \theta_2, \alpha_1, \alpha_2) \propto \exp\{\theta_1^{\mathrm{T}} \eta_1(x) + \theta_2^{\mathrm{T}} \eta_2(x) + \alpha_1 t_1(x) + \alpha_2 t_2(x)\}$$

leads to a sufficient statistic (η1(x), η2(x), t1(x), t2(x))

[Didelot, Everitt, Johansen & Lawson, 2011]

In the Poisson/geometric case, if $\prod_i x_i!$ is added to S, there is no discrepancy.

Only applies in genuine sufficiency settings...

© Inability to evaluate the loss brought by the summary statistics
Page 94: von Mises lecture, Berlin


Meaning of the ABC-Bayes factor

‘This is also why focus on model discrimination typically (...) proceeds by (...) accepting that the Bayes Factor that one obtains is only derived from the summary statistics and may in no way correspond to that of the full model.’

[Scott Sisson, Jan. 31, 2011, X.’Og]

In the Poisson/geometric case, if E[xi] = θ0 > 0,

$$\lim_{n \to \infty} B^\eta_{12}(y) = \frac{(\theta_0 + 1)^2}{\theta_0}\, e^{-\theta_0}$$

Page 96: von Mises lecture, Berlin


MA(q) divergence

[Figure: frequencies of visits to models MA(1) and MA(2) across four tolerances]

Evolution [against ε] of the ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right), when ε is equal to the 10, 1, 0.1, 0.01% quantiles on insufficient autocovariance distances. Sample of 50 points from an MA(2) model with θ1 = 0.6, θ2 = 0.2. True Bayes factor equal to 17.71.

Page 97: von Mises lecture, Berlin


MA(q) divergence

[Figure: frequencies of visits to models MA(1) and MA(2) across four tolerances]

Evolution [against ε] of the ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right), when ε is equal to the 10, 1, 0.1, 0.01% quantiles on insufficient autocovariance distances. Sample of 50 points from an MA(1) model with θ1 = 0.6. True Bayes factor B21 equal to .004.

Page 98: von Mises lecture, Berlin


Further comments

‘There should be the possibility that for the same model, but different (non-minimal) [summary] statistics (so different η’s: η1 and η∗1) the ratio of evidences may no longer be equal to one.’

[Michael Stumpf, Jan. 28, 2011, ’Og]

Using different summary statistics [on different models] may indicate the loss of information brought by each set, but agreement does not lead to trustworthy approximations.

Page 99: von Mises lecture, Berlin


A population genetics evaluation

Population genetics example with

- 3 populations
- 2 scenarios
- 15 individuals
- 5 loci
- a single mutation parameter
- 24 summary statistics
- 2 million ABC proposals
- an importance [tree] sampling alternative

Page 101: von Mises lecture, Berlin


Stability of importance sampling

[Figure: stability of the importance sampling estimates across replications]

Page 102: von Mises lecture, Berlin


Comparison with ABC

Use of 24 summary statistics and DIY-ABC logistic correction

[Figure: ABC estimates of P(M = 1|y) against importance sampling estimates of P(M = 1|y)]

Page 103: von Mises lecture, Berlin


Comparison with ABC

Use of 15 summary statistics and DIY-ABC logistic correction

[Figure: ABC estimates of P(M = 1|y) against importance sampling estimates of P(M = 1|y)]

Page 105: von Mises lecture, Berlin


The only safe cases

Besides specific models like Gibbs random fields,

using distances over the data itself escapes the discrepancy...
[Toni & Stumpf, 2010; Sousa et al., 2009]

...and so does the use of more informal model fitting measures
[Ratmann, Andrieu, Richardson and Wiuf, 2009]

...but moment conditions on the summary statistics are actually available to discriminate between “good” and “bad” summary statistics

[Marin, Pillai, X. and Rousseau, 2011, in prep.]
