[A]BCel : a presentation at ABC in Roma

(Approximate) Bayesian Computation as a new empirical Bayes method. Christian P. Robert, Université Paris-Dauphine & CREST, Paris. Joint work with K.L. Mengersen and P. Pudlo. ABC in Roma, 30 May 2013.

description

slides (updated) of my talk on Bayesian computation using empirical likelihood

Transcript of [A]BCel : a presentation at ABC in Roma

Page 1: [A]BCel : a presentation at ABC in Roma

(Approximate) Bayesian Computation as a new empirical Bayes method

Christian P. Robert, Université Paris-Dauphine & CREST, Paris

Joint work with K.L. Mengersen and P. Pudlo. ABC in Roma, 30 May 2013

Page 2: [A]BCel : a presentation at ABC in Roma

meeting of potential interest?!

MCMSki IV to be held in Chamonix Mont-Blanc, France, from Monday, Jan. 6 to Wed., Jan. 8, 2014. All aspects of MCMC++ theory, methodology, and applications... and more!

Website: http://www.pages.drexel.edu/~mwl25/mcmski/

Page 3: [A]BCel : a presentation at ABC in Roma

Outline

Intractable likelihoods

ABC methods

ABC as an inference machine

[A]BCel

Conclusion and perspectives

Page 4: [A]BCel : a presentation at ABC in Roma

Intractable likelihood

Case of a well-defined statistical model where the likelihood function

ℓ(θ|y) = f(y1, . . . , yn|θ)

- is (really!) not available in closed form
- cannot (easily!) be either completed or demarginalised
- cannot be (at all!) estimated by an unbiased estimator

© Prohibits direct implementation of a generic MCMC algorithm like Metropolis–Hastings

Page 6: [A]BCel : a presentation at ABC in Roma

The abc alternative

Case of a well-defined statistical model where the likelihood function

ℓ(θ|y) = f(y1, . . . , yn|θ)

is out of reach

Empirical approximations to the original B problem:

- Degrading the data precision down to a tolerance ε
- Replacing the likelihood with a non-parametric approximation
- Summarising/replacing the data with insufficient statistics

Page 9: [A]BCel : a presentation at ABC in Roma

Different [levels of] worries about abc

Impact on B inference

- a mere computational issue (that will eventually end up being solved by more powerful computers, &tc, even if too costly in the short term, as for regular Monte Carlo methods) Not!

- an inferential issue (opening opportunities for a new inference machine on complex/Big Data models, with legitimacy differing from the classical B approach)

- a Bayesian conundrum (how closely related to the/a B approach? more than recycling B tools? true B with raw data? a new type of empirical Bayes inference?)

Page 12: [A]BCel : a presentation at ABC in Roma

Summary: reply to Lange et al. [ISR, 2013]

"Surprisingly, the confident prediction of the previous generation that Bayesian methods would ultimately supplant frequentist methods has given way to a realization that Markov chain Monte Carlo (MCMC) may be too slow to handle modern data sets. Size matters because large data sets stress computer storage and processing power to the breaking point. The most successful compromises between Bayesian and frequentist methods now rely on penalization and optimization."

- sad reality constraint that size does matter
- focus on much smaller dimensions and on sparse summaries
- many (fast) ways of producing those summaries
- Bayesian inference can kick in almost automatically at this stage

Page 14: [A]BCel : a presentation at ABC in Roma

Econom’ections

Similar exploration of simulation-based and approximation techniques in Econometrics:

- Simulated method of moments
- Method of simulated moments
- Simulated pseudo-maximum-likelihood
- Indirect inference

[Gourieroux & Monfort, 1996]

even though the motivation is partly-defined models rather than complex likelihoods

Page 16: [A]BCel : a presentation at ABC in Roma

Indirect inference

Minimise [in θ] a distance between estimators β based on a pseudo-model for genuine observations and for observations simulated under the true model and the parameter θ.

[Gourieroux, Monfort, & Renault, 1993; Smith, 1993; Gallant & Tauchen, 1996]

Page 17: [A]BCel : a presentation at ABC in Roma

Indirect inference (PML vs. PSE)

Example of the pseudo-maximum-likelihood (PML):

β(y) = arg max_β ∑_t log f⋆(y_t | β, y_{1:(t−1)})

leading to

arg min_θ ‖β(y^o) − β(y_1(θ), . . . , y_S(θ))‖²

where y_s(θ) ∼ f(y|θ), s = 1, . . . , S
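The mechanics above can be sketched in a few lines of code. This is an illustrative toy, not part of the slides: the true model is a trivially tractable Gaussian location model (so the answer can be checked), the auxiliary estimator β is the sample mean (the PML estimator under a Gaussian pseudo-model), and the minimisation over θ is a plain grid search rather than a proper optimiser; `beta_hat`, `S` and the grid are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def beta_hat(y):
    # auxiliary estimator from the pseudo-model: here simply the sample mean,
    # which is the PML estimator for a Gaussian pseudo-model
    return y.mean()

def indirect_inference(y_obs, simulate, theta_grid, S=10):
    # minimise ||beta(y_obs) - beta(y_1(theta), ..., y_S(theta))||^2 over a grid
    b_obs = beta_hat(y_obs)
    best_theta, best_val = None, np.inf
    for theta in theta_grid:
        sims = np.concatenate([simulate(theta) for _ in range(S)])
        val = (b_obs - beta_hat(sims)) ** 2
        if val < best_val:
            best_theta, best_val = theta, val
    return best_theta

# toy check: y_t ~ N(theta, 1) with true theta = 1
y_obs = rng.normal(1.0, 1.0, size=200)
theta_est = indirect_inference(
    y_obs, lambda t: rng.normal(t, 1.0, size=200), np.linspace(-2.0, 3.0, 101))
```

With a well-chosen auxiliary model the estimate lands close to the value used to simulate the data; in practice one would use common random numbers and a real optimiser rather than a grid.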

Page 18: [A]BCel : a presentation at ABC in Roma

Indirect inference (PML vs. PSE)

Example of the pseudo-score estimator (PSE):

β(y) = arg min_β { ∑_t ∂ log f⋆/∂β (y_t | β, y_{1:(t−1)}) }²

leading to

arg min_θ ‖β(y^o) − β(y_1(θ), . . . , y_S(θ))‖²

where y_s(θ) ∼ f(y|θ), s = 1, . . . , S

Page 19: [A]BCel : a presentation at ABC in Roma

Consistent indirect inference

"...in order to get a unique solution the dimension of the auxiliary parameter β must be larger than or equal to the dimension of the initial parameter θ. If the problem is just identified the different methods become easier..."

Consistency depends on the criterion and on the asymptotic identifiability of θ

[Gourieroux & Monfort, 1996, p. 66]

Which connection [if any] with the B perspective?

Page 22: [A]BCel : a presentation at ABC in Roma

Approximate Bayesian computation

Intractable likelihoods

ABC methods
  Genesis of ABC
  ABC basics
  Advances and interpretations
  ABC as knn

ABC as an inference machine

[A]BCel

Conclusion and perspectives

Page 23: [A]BCel : a presentation at ABC in Roma

Genetic background of ABC

skip genetics

ABC is a recent computational technique that only requires being able to sample from the likelihood f(·|θ)

This technique stemmed from population genetics models, about 15 years ago, and population geneticists still contribute significantly to methodological developments of ABC.

[Griffith & al., 1997; Tavaré & al., 1999]

Page 24: [A]BCel : a presentation at ABC in Roma

Demo-genetic inference

Each model is characterized by a set of parameters θ that cover historical (divergence time, admixture time, ...), demographic (population sizes, admixture rates, migration rates, ...) and genetic (mutation rate, ...) factors

The goal is to estimate these parameters from a dataset of polymorphism (DNA sample) y observed at the present time

Problem: most of the time, we cannot calculate the likelihood of the polymorphism data f(y|θ)...

Page 26: [A]BCel : a presentation at ABC in Roma

Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium

Sample of 8 genes

Mutations according to the Simple stepwise Mutation Model (SMM):
• dates of the mutations ∼ Poisson process with intensity θ/2 over the branches
• MRCA = 100
• independent mutations: ±1 with pr. 1/2
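A minimal simulation of this neutral model makes the ingredients concrete: coalescence times T(k) ∼ Exp(k(k − 1)/2) on the normalized time axis, mutations as a Poisson process of intensity θ/2 along each branch, and ±1 steps away from an MRCA allele of 100. The sketch below is mine (the slides only state the model); the bookkeeping with lists of leaf indices is just one possible implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_smm_sample(n=8, theta=2.0, mrca=100):
    offsets = np.zeros(n)                # net +/-1 mutation steps per sampled gene
    lineages = [[i] for i in range(n)]   # leaves below each active lineage
    born = [0.0] * n                     # time each active lineage started
    t = 0.0
    while len(lineages) > 1:
        k = len(lineages)
        # coalescence waiting time T(k) ~ Exp(k(k-1)/2)
        t += rng.exponential(2.0 / (k * (k - 1)))
        i, j = rng.choice(k, size=2, replace=False)
        for idx in (i, j):
            # mutations: Poisson process with intensity theta/2 along the branch
            m = rng.poisson(theta / 2 * (t - born[idx]))
            # each mutation moves the repeat number by +/-1 with prob. 1/2
            offsets[lineages[idx]] += rng.choice([-1, 1], size=m).sum()
        merged = lineages[i] + lineages[j]
        lineages = [l for r, l in enumerate(lineages) if r not in (i, j)]
        born = [b for r, b in enumerate(born) if r not in (i, j)]
        lineages.append(merged)
        born.append(t)
    return mrca + offsets                # observed allele values at the leaves

sample = simulate_smm_sample()
```

Setting θ = 0 recovers a monomorphic sample fixed at the MRCA value, a quick sanity check.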

Page 27: [A]BCel : a presentation at ABC in Roma

Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium

Kingman's genealogy: when the time axis is normalized, T(k) ∼ Exp(k(k − 1)/2)

Mutations according to the Simple stepwise Mutation Model (SMM):
• dates of the mutations ∼ Poisson process with intensity θ/2 over the branches
• MRCA = 100
• independent mutations: ±1 with pr. 1/2

Page 29: [A]BCel : a presentation at ABC in Roma

Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium

Observations: leaves of the tree; θ = ?

Kingman's genealogy: when the time axis is normalized, T(k) ∼ Exp(k(k − 1)/2)

Mutations according to the Simple stepwise Mutation Model (SMM):
• dates of the mutations ∼ Poisson process with intensity θ/2 over the branches
• MRCA = 100
• independent mutations: ±1 with pr. 1/2

Page 30: [A]BCel : a presentation at ABC in Roma

Much more interesting models...

- several independent loci: independent gene genealogies and mutations
- different populations, linked by an evolutionary scenario made of divergences, admixtures, migrations between populations, etc.
- larger sample size: usually between 50 and 100 genes

[Figure: a typical evolutionary scenario, a tree from the MRCA down to populations POP 0, POP 1 and POP 2, with divergence times τ1 and τ2]

Page 31: [A]BCel : a presentation at ABC in Roma

Intractable likelihood

Missing (too much missing!) data structure:

f(y|θ) = ∫_G f(y|G, θ) f(G|θ) dG

cannot be computed in a manageable way...
[Stephens & Donnelly, 2000]

The genealogies are considered as nuisance parameters

This modelling clearly differs from the phylogenetic perspective, where the tree is the parameter of interest.

Page 33: [A]BCel : a presentation at ABC in Roma

A?B?C?

- A stands for approximate [wrong likelihood / pic?]

- B stands for Bayesian

- C stands for computation [producing a parameter sample]

Page 35: [A]BCel : a presentation at ABC in Roma

A?B?C?

- A stands for approximate [wrong likelihood / pic?]

- B stands for Bayesian

- C stands for computation [producing a parameter sample]

[Figure: fifteen panels of density estimates of θ from ABC output, each labelled with its effective sample size (ESS ranging from about 76 to 156)]

Page 36: [A]BCel : a presentation at ABC in Roma

How Bayesian is aBc?

Could we turn the resolution into a Bayesian answer?

- ideally so (not meaningful: requires an ∞-ly powerful computer)

- approximation error unknown (w/o costly simulation)

- true Bayes for wrong model (formal and artificial)

- true Bayes for noisy model (much more convincing)

- true Bayes for estimated likelihood (back to econometrics?)

- illuminating the tension between information and precision

Page 37: [A]BCel : a presentation at ABC in Roma

ABC methodology

Bayesian setting: the target is π(θ)f(x|θ). When the likelihood f(x|θ) is not in closed form, likelihood-free rejection technique:

Foundation

For an observation y ∼ f(y|θ), under the prior π(θ), if one keeps jointly simulating

θ′ ∼ π(θ), z ∼ f(z|θ′),

until the auxiliary variable z is equal to the observed value, z = y, then the selected

θ′ ∼ π(θ|y)

[Rubin, 1984; Diggle & Gratton, 1984; Tavaré et al., 1997]

Page 40: [A]BCel : a presentation at ABC in Roma

A as A...pproximative

When y is a continuous random variable, strict equality z = y is replaced with a tolerance zone

ρ(y, z) ≤ ε

where ρ is a distance. The output is then distributed from

π(θ) Pθ{ρ(y, z) < ε} ∝ π(θ | ρ(y, z) < ε) [by definition]

[Pritchard et al., 1999]

Page 42: [A]BCel : a presentation at ABC in Roma

ABC algorithm

In most implementations, a further degree of A...pproximation:

Algorithm 1 Likelihood-free rejection sampler

  for i = 1 to N do
    repeat
      generate θ′ from the prior distribution π(·)
      generate z from the likelihood f(·|θ′)
    until ρ{η(z), η(y)} ≤ ε
    set θi = θ′
  end for

where η(y) defines a (not necessarily sufficient) statistic
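Algorithm 1 translates almost line for line into code. In the sketch below, the prior sampler, model simulator, summary statistic η and distance ρ are passed in as functions; the Gaussian toy at the bottom (and its ε and N values) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(y_obs, prior_sample, simulate, summary, dist, eps, N):
    # Likelihood-free rejection sampler: keep drawing theta' ~ pi and
    # z ~ f(.|theta') until rho{eta(z), eta(y)} <= eps, then record theta'
    eta_y = summary(y_obs)
    draws = []
    for _ in range(N):
        while True:
            theta = prior_sample()
            z = simulate(theta)
            if dist(summary(z), eta_y) <= eps:
                draws.append(theta)
                break
    return np.array(draws)

# toy: theta ~ N(0, 1) prior, y_i ~ N(theta, 1), eta = sample mean
y = rng.normal(0.5, 1.0, size=50)
post = abc_rejection(
    y_obs=y,
    prior_sample=lambda: rng.normal(0.0, 1.0),
    simulate=lambda t: rng.normal(t, 1.0, size=50),
    summary=np.mean,
    dist=lambda a, b: abs(a - b),
    eps=0.1,
    N=200,
)
```

Here the sample mean is sufficient, so for small ε the accepted draws are close to the exact posterior; with an insufficient η one only recovers π(θ|η(y)).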

Page 43: [A]BCel : a presentation at ABC in Roma

Output

The likelihood-free algorithm samples from the marginal in z of:

πε(θ, z|y) = π(θ) f(z|θ) I_{Aε,y}(z) / ∫_{Aε,y×Θ} π(θ) f(z|θ) dz dθ,

where Aε,y = {z ∈ D | ρ(η(z), η(y)) < ε}.

The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:

πε(θ|y) = ∫ πε(θ, z|y) dz ≈ π(θ|y).

...does it?!

Page 46: [A]BCel : a presentation at ABC in Roma

Output

The likelihood-free algorithm samples from the marginal in z of:

πε(θ, z|y) = π(θ) f(z|θ) I_{Aε,y}(z) / ∫_{Aε,y×Θ} π(θ) f(z|θ) dz dθ,

where Aε,y = {z ∈ D | ρ(η(z), η(y)) < ε}.

The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the restricted posterior distribution:

πε(θ|y) = ∫ πε(θ, z|y) dz ≈ π(θ|η(y)).

Not so good..! [skip convergence details! to non-parametrics]

Page 47: [A]BCel : a presentation at ABC in Roma

Convergence of ABC

What happens when ε → 0?

For B ⊂ Θ, we have

∫_B [∫_{Aε,y} f(z|θ) dz / ∫_{Aε,y×Θ} π(θ) f(z|θ) dz dθ] π(θ) dθ
  = ∫_{Aε,y} [∫_B f(z|θ) π(θ) dθ / ∫_{Aε,y×Θ} π(θ) f(z|θ) dz dθ] dz
  = ∫_{Aε,y} [∫_B f(z|θ) π(θ) dθ / m(z)] [m(z) / ∫_{Aε,y×Θ} π(θ) f(z|θ) dz dθ] dz
  = ∫_{Aε,y} π(B|z) [m(z) / ∫_{Aε,y×Θ} π(θ) f(z|θ) dz dθ] dz

which indicates convergence for a continuous π(B|z).

Page 49: [A]BCel : a presentation at ABC in Roma

Convergence (do not attempt!)

...and the above does not apply to insufficient statistics:

If η(y) is not a sufficient statistic, the best one can hope for is

π(θ|η(y)), not π(θ|y)

If η(y) is an ancillary statistic, the whole information contained in y is lost! The "best" one can "hope" for is

π(θ|η(y)) = π(θ)

Page 52: [A]BCel : a presentation at ABC in Roma

Comments

- Role of the distance is paramount (because ε ≠ 0)
- Scaling of the components of η(y) is also determinant
- ε matters little if "small enough"
- representative of the "curse of dimensionality"
- small is beautiful!
- the data as a whole may be paradoxically weakly informative for ABC

Page 53: [A]BCel : a presentation at ABC in Roma

ABC (simul’) advances

how approximative is ABC? ABC as knn

Simulating from the prior is often poor in efficiency. Either modify the proposal distribution on θ to increase the density of x's within the vicinity of y...

[Marjoram et al., 2003; Bortot et al., 2007; Sisson et al., 2007]

...or view the problem as conditional density estimation and develop techniques to allow for a larger ε

[Beaumont et al., 2002]

...or even include ε in the inferential framework [ABCµ]
[Ratmann et al., 2009]

Page 57: [A]BCel : a presentation at ABC in Roma

ABC-NP

Better usage of [prior] simulations by adjustment: instead of throwing away θ′ such that ρ(η(z), η(y)) > ε, replace the θ's with locally regressed transforms

θ* = θ − {η(z) − η(y)}ᵀ β   [Csilléry et al., TEE, 2010]

where β is obtained by [NP] weighted least squares regression on (η(z) − η(y)) with weights

Kδ{ρ(η(z), η(y))}

[Beaumont et al., 2002, Genetics]
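The adjustment amounts to a weighted least squares fit of θ on η(z) − η(y), followed by the correction θ* = θ − {η(z) − η(y)}ᵀβ̂. In the sketch below an Epanechnikov kernel is assumed for Kδ (the slide does not fix the kernel) and θ is scalar; both are simplifications of mine.

```python
import numpy as np

def regression_adjust(thetas, etas, eta_y, delta):
    # local-linear adjustment: WLS fit of theta on X = eta(z) - eta(y),
    # then theta* = theta - X beta (the intercept is the estimate at eta(y))
    X = etas - eta_y
    r = np.linalg.norm(X, axis=1)
    w = np.clip(1.0 - (r / delta) ** 2, 0.0, None)  # Epanechnikov weights (assumed)
    keep = w > 0
    Xk, tk, wk = X[keep], thetas[keep], w[keep]
    A = np.column_stack([np.ones(len(Xk)), Xk])     # intercept + slopes
    coef = np.linalg.solve(A.T @ (wk[:, None] * A), A.T @ (wk * tk))
    beta = coef[1:]
    return tk - Xk @ beta, wk

# sanity demo: if theta is exactly linear in eta, every adjusted draw
# collapses onto the fitted value at eta(y) (here 0)
rng = np.random.default_rng(0)
etas = rng.normal(size=(200, 2))
thetas = etas @ np.array([2.0, -1.0])
adjusted, weights = regression_adjust(thetas, etas, eta_y=np.zeros(2), delta=10.0)
```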

Page 58: [A]BCel : a presentation at ABC in Roma

ABC-NP (regression)

Also found in the subsequent literature, e.g. in Fearnhead–Prangle (2012): weight each simulation directly by

Kδ{ρ(η(z(θ)), η(y))}

or

(1/S) ∑_{s=1}^{S} Kδ{ρ(η(z_s(θ)), η(y))}

[consistent estimate of f(η|θ)]

Curse of dimensionality: poor estimate when d = dim(η) is large...

Page 60: [A]BCel : a presentation at ABC in Roma

ABC-NP (density estimation)

Use of the kernel weights

Kδ{ρ(η(z(θ)), η(y))}

leads to the NP estimate of the posterior expectation

∑_i θ_i Kδ{ρ(η(z(θ_i)), η(y))} / ∑_i Kδ{ρ(η(z(θ_i)), η(y))}

[Blum, JASA, 2010]
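This estimate is a one-line Nadaraya–Watson weighted mean over the simulated pairs. A Gaussian kernel stands in for Kδ below (an assumption; any kernel works).

```python
import numpy as np

def abc_posterior_mean(thetas, distances, delta):
    # sum_i theta_i K_delta(rho_i) / sum_i K_delta(rho_i), Gaussian K_delta
    w = np.exp(-0.5 * (np.asarray(distances) / delta) ** 2)
    return float(np.sum(w * np.asarray(thetas)) / np.sum(w))
```

Simulations far from η(y) get negligible weight, so the estimate is driven by the draws whose summaries nearly match the data.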

Page 61: [A]BCel : a presentation at ABC in Roma

ABC-NP (density estimation)

Use of the kernel weights

Kδ{ρ(η(z(θ)), η(y))}

leads to the NP estimate of the posterior conditional density

∑_i K_b(θ_i − θ) Kδ{ρ(η(z(θ_i)), η(y))} / ∑_i Kδ{ρ(η(z(θ_i)), η(y))}

[Blum, JASA, 2010]

Page 62: [A]BCel : a presentation at ABC in Roma

ABC-NP (density estimations)

Other versions incorporating regression adjustments:

∑_i K_b(θ*_i − θ) Kδ{ρ(η(z(θ_i)), η(y))} / ∑_i Kδ{ρ(η(z(θ_i)), η(y))}

In all cases, the error is

E[ĝ(θ|y)] − g(θ|y) = cb² + cδ² + O_P(b² + δ²) + O_P(1/nδ^d)

var(ĝ(θ|y)) = (c/(nbδ^d))(1 + o_P(1))

Page 65: [A]BCel : a presentation at ABC in Roma

ABC as knn

see and listen to Gerard’s talk

[Biau et al., 2013, Annales de l'IHP]

Practice of ABC: determine the tolerance ε as a quantile of the observed distances, say the 10% or 1% quantile,

ε = εN = qα(d1, . . . , dN)

- Interpretation of ε as a nonparametric bandwidth is only an approximation of the actual practice

[Blum & François, 2010]

- ABC is a k-nearest neighbour (knn) method with kN = NεN

[Loftsgaarden & Quesenberry, 1965]
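The quantile/knn correspondence is easy to make concrete: taking ε as the α-quantile of the N simulated distances is the same as keeping the k = ⌈αN⌉ nearest simulations. A sketch (the function name and interface are mine):

```python
import numpy as np

def abc_knn_select(thetas, distances, alpha):
    # eps_N = q_alpha(d_1, ..., d_N): keep the k = ceil(alpha * N) nearest draws
    distances = np.asarray(distances)
    k = int(np.ceil(alpha * len(distances)))
    order = np.argsort(distances)[:k]          # indices of the k nearest
    return np.asarray(thetas)[order], float(distances[order[-1]])
```

The returned tolerance is the distance of the k-th nearest simulation, i.e. the empirical α-quantile actually used in practice.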

Page 68: [A]BCel : a presentation at ABC in Roma

ABC consistency

Provided

kN / log log N → ∞ and kN / N → 0

as N → ∞, for almost all s0 (with respect to the distribution of S), with probability 1,

(1/kN) ∑_{j=1}^{kN} ϕ(θj) → E[ϕ(θj) | S = s0]

[Devroye, 1982]

Biau et al. (2012) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under the constraints

kN → ∞, kN/N → 0, hN → 0 and h_N^p kN → ∞

Page 70: [A]BCel : a presentation at ABC in Roma

Rates of convergence

Further assumptions (on the target and the kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like

- when m = 1, 2, 3, kN ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)}
- when m = 4, kN ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} log N
- when m > 4, kN ≈ N^{(p+4)/(m+p+4)} and rate N^{−4/(m+p+4)}

[Biau et al., 2012, arXiv:1207.6461]

Drag: only applies to sufficient summary statistics

Page 72: [A]BCel : a presentation at ABC in Roma

ABC inference machine

Intractable likelihoods

ABC methods

ABC as an inference machine
  Error inc.
  Exact BC and approximate targets
  summary statistic

[A]BCel

Conclusion and perspectives

Page 73: [A]BCel : a presentation at ABC in Roma

How Bayesian is aBc..?

- may be a convergent method of inference (meaningful? sufficient? foreign?)

- approximation error unknown (w/o massive simulation)

- pragmatic/empirical B (there is no other solution!)

- many calibration issues (tolerance, distance, statistics)

- the NP side should be incorporated into the whole B picture

- the approximation error should also be part of the B inference

to ABCel

Page 74: [A]BCel : a presentation at ABC in Roma

ABCµ

Idea: infer about the error as well as about the parameter, using a joint density

f(θ, ε|y) ∝ ξ(ε|y, θ) × πθ(θ) × πε(ε)

where y is the data, and ξ(ε|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f(z|θ)

Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel approximation.

[Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]

Page 77: [A]BCel : a presentation at ABC in Roma

ABCµ details

Multidimensional distances ρk (k = 1, . . . ,K ) and errors εk = ρk(ηk(z),ηk(y)), with

εk ∼ ξk(ε|y, θ) ≈ ξ̂k(ε|y, θ) = (1/Bhk) Σb K[{εk − ρk(ηk(zb),ηk(y))}/hk]

then used in replacing ξ(ε|y, θ) with mink ξ̂k(ε|y, θ)

ABCµ involves the acceptance probability

[π(θ′, ε′)/π(θ, ε)] × [q(θ′, θ)q(ε′, ε)/q(θ, θ′)q(ε, ε′)] × [mink ξ̂k(ε′|y, θ′)/mink ξ̂k(ε|y, θ)]

Page 79: [A]BCel : a presentation at ABC in Roma

Wilkinson’s exact BC (not exactly!)

ABC approximation error (i.e. non-zero tolerance) replaced with exact simulation from a controlled approximation to the target, the convolution of the true posterior with a kernel function

πε(θ, z|y) = π(θ)f (z|θ)Kε(y − z) / ∫ π(θ)f (z|θ)Kε(y − z) dz dθ ,

with Kε a kernel parameterised by bandwidth ε.
[Wilkinson, 2013]

Theorem
The ABC algorithm based on the assumption of a randomised observation ỹ = y + ξ, ξ ∼ Kε, and an acceptance probability of

Kε(y − z)/M

gives draws from the posterior distribution π(θ|ỹ).
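The theorem can be checked on a toy conjugate model. The sketch below is my own illustration, not code from the talk: it assumes a N(0,1) prior, a N(θ,1) model and a Gaussian kernel Kε bounded by M = Kε(0). Accepted values are then exact draws from the posterior of the convolved model y|θ ∼ N(θ, 1 + ε²).

```python
import numpy as np

rng = np.random.default_rng(0)
y_obs, eps = 1.0, 0.5

def K_eps(u):
    # Gaussian kernel with bandwidth eps (an assumption; any bounded kernel works)
    return np.exp(-0.5 * (u / eps) ** 2) / (eps * np.sqrt(2.0 * np.pi))

M = K_eps(0.0)  # upper bound on K_eps
draws = []
while len(draws) < 5000:
    theta = rng.normal(0.0, 1.0)   # prior N(0, 1)
    z = rng.normal(theta, 1.0)     # pseudo-data from the model
    if rng.uniform() < K_eps(y_obs - z) / M:
        draws.append(theta)

# target: posterior of the convolved model y | theta ~ N(theta, 1 + eps^2),
# whose mean under the N(0, 1) prior is y_obs / (2 + eps^2)
post_mean = np.mean(draws)
```

With ε = 0.5 and y = 1 the accepted sample should concentrate around y/(2 + ε²) ≈ 0.44, the posterior mean of the convolved target.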

Page 81: [A]BCel : a presentation at ABC in Roma

How exact a BC?

Pros

I Pseudo-data from true model vs. observed data from noisy model

I Outcome is completely controlled

I Link with ABCµ when assuming y observed with measurement error with density Kε

I Relates to the theory of model approximation [Kennedy & O'Hagan, 2001]

I leads to “noisy ABC”: just do it!, i.e. perturb the data down to precision ε and proceed

[Fearnhead & Prangle, 2012]

Cons

I Requires Kε to be bounded by M

I True approximation error never assessed

Page 83: [A]BCel : a presentation at ABC in Roma

Noisy ABC for HMMs

Specific case of a hidden Markov model with summaries

Xt+1 ∼ Qθ(Xt , ·)
Yt+1 ∼ gθ(·|xt+1)

where only y^0_{1:n} is observed.

[Dean, Singh, Jasra, & Peters, 2011]

Use of specific constraints, adapted to the Markov structure:

{y1 ∈ B(y^0_1, ε)} × · · · × {yn ∈ B(y^0_n, ε)}

Page 85: [A]BCel : a presentation at ABC in Roma

Noisy ABC-MLE

Idea: Modify instead the data from the start

(y^0_1 + εζ1, . . . , y^0_n + εζn)

[see Fearnhead-Prangle] and noisy ABC-MLE estimate

arg maxθ Pθ( Y1 ∈ B(y^0_1 + εζ1, ε), . . . , Yn ∈ B(y^0_n + εζn, ε) )

[Dean, Singh, Jasra, & Peters, 2011]

Page 86: [A]BCel : a presentation at ABC in Roma

Consistent noisy ABC-MLE

I Degrading the data improves the estimation performance:
    I Noisy ABC-MLE is asymptotically (in n) consistent
    I under further assumptions, the noisy ABC-MLE is asymptotically normal
    I increase in variance of order ε^−2

I likely degradation in precision or computing time due to the lack of summary statistic [curse of dimensionality]

Page 87: [A]BCel : a presentation at ABC in Roma

Which summary?

Fundamental difficulty of the choice of the summary statistic when there is no non-trivial sufficient statistic

I Loss of statistical information balanced against gain in data roughening

I Approximation error and information loss remain unknown

I Choice of statistics induces choice of distance function towards standardisation

I may be imposed for external/practical reasons

I may gather several non-B point estimates

I can learn about efficient combination

Page 90: [A]BCel : a presentation at ABC in Roma

Which summary for model choice?

Depending on the choice of η(·), the Bayes factor based on this insufficient statistic,

Bη12(y) = ∫ π1(θ1) f η1(η(y)|θ1) dθ1 / ∫ π2(θ2) f η2(η(y)|θ2) dθ2 ,

is either consistent or inconsistent
[X, Cornuet, Marin, & Pillai, 2012]

Consistency only depends on the range of

Ei [η(y)]

under both models
[Marin, Pillai, X, & Rousseau, 2012]

Page 92: [A]BCel : a presentation at ABC in Roma

Semi-automatic ABC

Fearnhead and Prangle (2012) study ABC and the selection of the summary statistic in close proximity to Wilkinson's proposal

I ABC considered as inferential method and calibrated as such

I randomised (or ‘noisy’) version of the summary statistics

η̃(y) = η(y) + τε

I derivation of a well-calibrated version of ABC, i.e. an algorithm that gives proper predictions for the distribution associated with this randomised summary statistic

Page 93: [A]BCel : a presentation at ABC in Roma

Summary [of F&P/statistics]

I optimality of the posterior expectation

E[θ|y]

of the parameter of interest as summary statistic η(y)! [requires iterative process]

I use of the standard quadratic loss function

(θ − θ0)^T A (θ − θ0)

I recent extension to model choice, optimality of Bayes factor

B12(y)

[F&P, ISBA 2012, Kyoto]

Page 95: [A]BCel : a presentation at ABC in Roma

Summary [about summaries]

I Choice of summary statistics is paramount for ABC validation/performance

I At best, ABC approximates π(· | η(y)) [unavoidable]

I Model selection feasible with ABC [with caution!]

I For estimation, consistency if {θ; µ(θ) = µ0} = {θ0} when Eθ[η(y)] = µ(θ)

I For testing, consistency if {µ1(θ1), θ1 ∈ Θ1} ∩ {µ2(θ2), θ2 ∈ Θ2} = ∅

[Marin et al., 2011]

Page 100: [A]BCel : a presentation at ABC in Roma

Empirical likelihood (EL)

Intractable likelihoods

ABC methods

ABC as an inference machine

[A]BCel

ABC and EL
Composite likelihood
Illustrations

Conclusion and perspectives

Page 101: [A]BCel : a presentation at ABC in Roma

Empirical likelihood (EL)

Dataset x made of n independent replicates x = (x1, . . . , xn) of a rv X ∼ F

Generalized moment condition pseudo-model

EF [h(X ,φ)] = 0,

where h is a known function and φ an unknown parameter

Induced empirical likelihood

Lel(φ|x) = max_p ∏_{i=1}^n pi

for all p such that 0 ≤ pi ≤ 1, Σi pi = 1, Σi pi h(xi ,φ) = 0.

[Owen, 1988, Bio'ka, & Empirical Likelihood, 2001]
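The inner optimisation has a known Lagrangian-dual solution: pi = 1/{n(1 + λ h(xi ,φ))}, with λ solving Σi h(xi ,φ)/(1 + λ h(xi ,φ)) = 0. A minimal sketch for a scalar constraint (my own illustration, not the talk's code):

```python
import numpy as np

def el_likelihood(x, phi, h):
    """Empirical likelihood L_el(phi | x) for a scalar moment condition
    E_F[h(X, phi)] = 0, via the Lagrangian dual: p_i = 1/(n (1 + lam h_i)),
    with lam solving sum_i h_i / (1 + lam h_i) = 0 (Newton iterations)."""
    hi = np.array([h(xi, phi) for xi in x], dtype=float)
    n = len(hi)
    if hi.min() > 0 or hi.max() < 0:
        return 0.0  # 0 outside the convex hull of the h_i: EL vanishes
    lam = 0.0
    for _ in range(100):  # Newton on the concave dual
        d = 1.0 + lam * hi
        lam -= np.sum(hi / d) / (-np.sum(hi ** 2 / d ** 2))
    p = 1.0 / (n * (1.0 + lam * hi))
    if not np.all(np.isfinite(p)) or np.any(p <= 0):
        return 0.0
    return float(np.prod(p))

# sanity check: with h(x, phi) = x - phi, L_el is maximal at the sample
# mean, where p_i = 1/n and hence L_el = n^{-n}
x = np.linspace(-1.0, 1.0, 11)
L_max = el_likelihood(x, x.mean(), lambda xi, phi: xi - phi)
```

At the sample mean the multiplier is λ = 0, all weights equal 1/n and the maximum value n^−n is recovered, which matches the normalisation used in Owen's Theorem 3.4 below.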

Page 103: [A]BCel : a presentation at ABC in Roma

EL motivation

Estimate of the data distribution under the moment condition

Σi pi h(xi ,φ) = 0

Example
Normal N(θ,σ2) sample x1, . . . , x25
Constraints Eθ,σ[(x − θ, (x − θ)2 − σ2)] = 0 with θ = −.45, σ = 1.12

[Figure: contour plot of the empirical likelihood surface over (θ,σ)]

Page 104: [A]BCel : a presentation at ABC in Roma

Convergence of EL [3.4]

Theorem 3.4 Let X ,Y1, . . . ,Yn be independent rv's with common distribution F0. For θ ∈ Θ and the function h(X , θ) ∈ Rs, let θ0 ∈ Θ be such that

Var(h(Yi , θ0))

is finite and has rank q > 0. If θ0 satisfies E(h(X , θ0)) = 0, then

−2 log( Lel(θ0|Y1, . . . ,Yn) / n^−n ) → χ2(q)

in distribution when n → ∞.
[Owen, 2001]

Page 105: [A]BCel : a presentation at ABC in Roma

Convergence of EL [3.4]

“...The interesting thing about Theorem 3.4 is what is not there. It includes no conditions to make θ̂ a good estimate of θ0, nor even conditions to ensure a unique value for θ0, nor even that any solution θ0 exists. Theorem 3.4 applies in the just determined, over-determined, and under-determined cases. When we can prove that our estimating equations uniquely define θ0, and provide a consistent estimator θ̂ of it, then confidence regions and tests follow almost automatically through Theorem 3.4.”

[Owen, 2001]

Page 106: [A]BCel : a presentation at ABC in Roma

Raw [A]BCel sampler

Naïve implementation: Act as if EL was an exact likelihood
[Lazar, 2003]

for i = 1 → N do
    generate φi from the prior distribution π(·)
    set the weight ωi = Lel(φi |xobs)
end for
return (φi ,ωi ), i = 1, . . . ,N

I Output weighted sample of size N

Page 107: [A]BCel : a presentation at ABC in Roma

Raw [A]BCel sampler

Naïve implementation: Act as if EL was an exact likelihood
[Lazar, 2003]

for i = 1 → N do
    generate φi from the prior distribution π(·)
    set the weight ωi = Lel(φi |xobs)
end for
return (φi ,ωi ), i = 1, . . . ,N

I Performance evaluated through the effective sample size

ESS = 1 / Σ_{i=1}^N ( ωi / Σ_{j=1}^N ωj )²
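The raw sampler is a few lines once an EL evaluator is available. The sketch below uses a single mean constraint h(x, φ) = x − φ as a stand-in (the talk's constraints are model-specific) and reports the ESS of the weighted sample:

```python
import numpy as np

rng = np.random.default_rng(1)
x_obs = rng.normal(0.2, 1.0, size=50)   # toy observed sample (an assumption)

def el_weight(phi, x):
    # stand-in for L_el(phi | x_obs): empirical likelihood under the single
    # mean constraint E[X - phi] = 0 (Newton on the Lagrangian dual)
    h = x - phi
    if h.min() > 0 or h.max() < 0:
        return 0.0
    lam = 0.0
    for _ in range(100):
        d = 1.0 + lam * h
        lam -= np.sum(h / d) / (-np.sum(h ** 2 / d ** 2))
    p = 1.0 / (len(x) * (1.0 + lam * h))
    if not np.all(np.isfinite(p)) or np.any(p <= 0):
        return 0.0
    return float(np.prod(p))

# raw [A]BCel sampler: simulate from the prior, weight by the EL
N = 500
phi = rng.normal(0.0, 1.0, size=N)                    # prior N(0, 1)
omega = np.array([el_weight(p, x_obs) for p in phi])

# effective sample size ESS = 1 / sum_i (omega_i / sum_j omega_j)^2
ess = 1.0 / np.sum((omega / omega.sum()) ** 2)
```

By construction 1 ≤ ESS ≤ N; a small ESS relative to N signals that most prior draws receive negligible EL weight.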

Page 108: [A]BCel : a presentation at ABC in Roma

AMIS implementation

More advanced algorithms can be adapted to EL: e.g., adaptive multiple importance sampling (AMIS) to speed up computations
[Cornuet et al., 2012]

for i = 1 to M do
    Generate θ1,i ∼ q1(·)
    Set ω1,i = Lel(θ1,i |y)
end for
for t = 2 to TM do
    Compute weighted mean mt and variance Σt
    for i = 1 to M do
        Generate θt,i ∼ qt(·) = t3(·|mt ,Σt)
        Set ωt,i = π(θt,i ) Lel(θt,i |y) / Σ_{s=1}^{t−1} qs(θt,i )
    end for
    for r = 1 to t − 1 and i = 1 to M do
        Update weights as ωr,i = π(θr,i ) Lel(θr,i |y) / Σ_{s=1}^{t−1} qs(θr,i )
    end for
end for
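The AMIS recycling step, recomputing all past weights under the current mixture of proposals, can be sketched with a generic unnormalised target standing in for π(θ)Lel(θ|y); the t3 proposals are moment-matched at each stage as in the pseudo-code (the Gaussian target and all constants below are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def target(th):
    # stand-in for the unnormalised pi(theta) * L_el(theta | y):
    # here an unnormalised N(3, 1) density, purely for illustration
    return np.exp(-0.5 * (th - 3.0) ** 2)

M, T = 500, 5
thetas, proposals = [], []

q1 = stats.t(df=3, loc=0.0, scale=3.0)          # initial t_3 proposal
thetas.append(q1.rvs(size=M, random_state=rng))
proposals.append(q1)

for t in range(1, T):
    # recompute deterministic-mixture weights over ALL past draws
    past = np.concatenate(thetas)
    mix = np.mean([q.pdf(past) for q in proposals], axis=0)
    w = target(past) / mix
    # moment-match the next t_3 proposal on the weighted sample
    m = np.average(past, weights=w)
    s = np.sqrt(np.average((past - m) ** 2, weights=w))
    qt = stats.t(df=3, loc=m, scale=s)
    thetas.append(qt.rvs(size=M, random_state=rng))
    proposals.append(qt)

# final weights on the pooled T*M draws
pooled = np.concatenate(thetas)
mix = np.mean([q.pdf(pooled) for q in proposals], axis=0)
w = target(pooled) / mix
post_mean = np.average(pooled, weights=w)
```

The key AMIS feature is that every draw, old or new, is weighted against the same mixture of all proposals used so far, so earlier simulations are never discarded.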

Page 109: [A]BCel : a presentation at ABC in Roma

Moment condition in population genetics?

EL does not require a fully defined and often complex (hence debatable) parametric model

Main difficulty
Derive a constraint

EF [h(X ,φ)] = 0,

on the parameters of interest φ when X is made of the genotypes of the sample of individuals at a given locus

E.g., in phylogeography, φ is composed of

I dates of divergence between populations,

I ratio of population sizes,

I mutation rates, etc.

None of them moments of the distribution of the allelic states of the sample

Page 110: [A]BCel : a presentation at ABC in Roma

Moment condition in population genetics?

EL does not require a fully defined and often complex (hence debatable) parametric model

Main difficulty
Derive a constraint

EF [h(X ,φ)] = 0,

on the parameters of interest φ when X is made of the genotypes of the sample of individuals at a given locus

c© take an h made of pairwise composite scores (whose zero is the pairwise maximum likelihood estimator)

Page 111: [A]BCel : a presentation at ABC in Roma

Pairwise composite likelihood

The intra-locus pairwise likelihood

ℓ2(xk|φ) = ∏_{i<j} ℓ2(x_k^i , x_k^j |φ)

with x_k^1, . . . , x_k^n the allelic states of the gene sample at the k-th locus

The pairwise score function

∇φ log ℓ2(xk|φ) = Σ_{i<j} ∇φ log ℓ2(x_k^i , x_k^j |φ)

Warning: composite likelihoods are often much narrower than the original likelihood of the model

Safe(r) with EL because we only use the position of its mode

Page 112: [A]BCel : a presentation at ABC in Roma

Pairwise likelihood: a simple case

Assumptions

I sample ⊂ closed, panmictic population at equilibrium

I marker: microsatellite

I mutation rate: θ/2

If x_k^i and x_k^j are two genes of the sample, ℓ2(x_k^i , x_k^j |θ) depends only on δ = x_k^i − x_k^j:

ℓ2(δ|θ) = (1/√(1 + 2θ)) ρ(θ)^|δ|

with

ρ(θ) = θ / (1 + θ + √(1 + 2θ))

Pairwise score function

∂θ log ℓ2(δ|θ) = −1/(1 + 2θ) + |δ|/(θ√(1 + 2θ))
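The closed form above lends itself to a direct check: ℓ2(·|θ) should sum to one over δ ∈ Z, and the analytic score should match a numerical derivative (a sketch under the formulas of this slide):

```python
import numpy as np

def rho(theta):
    return theta / (1.0 + theta + np.sqrt(1.0 + 2.0 * theta))

def pair_lik(delta, theta):
    # l2(delta | theta) = rho(theta)^{|delta|} / sqrt(1 + 2 theta)
    return rho(theta) ** abs(delta) / np.sqrt(1.0 + 2.0 * theta)

def pair_score(delta, theta):
    # d log l2 / d theta = -1/(1 + 2 theta) + |delta| / (theta sqrt(1 + 2 theta))
    return -1.0 / (1.0 + 2.0 * theta) + abs(delta) / (theta * np.sqrt(1.0 + 2.0 * theta))

theta = 0.7
# l2 is a probability mass function over delta in Z: the sum is 1
total = sum(pair_lik(d, theta) for d in range(-200, 201))
```

The geometric decay in |δ| makes the truncated sum accurate to machine precision.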

Page 115: [A]BCel : a presentation at ABC in Roma

Pairwise likelihood: 2 diverging populations

[Tree: populations a and b diverging from the MRCA at time τ]

Assumptions

I τ: divergence date of pop. a and b

I θ/2: mutation rate

Let x_k^i and x_k^j be two genes coming resp. from pop. a and b. Set δ = x_k^i − x_k^j.

Then

ℓ2(δ|θ, τ) = (e^−τθ/√(1 + 2θ)) Σ_{k=−∞}^{+∞} ρ(θ)^|k| I_{δ−k}(τθ),

where I_n(z) is the nth-order modified Bessel function of the first kind
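With scipy's modified Bessel function the series truncates quickly, and since Σ_n I_n(z) = e^z the truncated ℓ2(·|θ, τ) should again sum to one over δ, giving a cheap correctness check (a sketch, with illustrative parameter values):

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def rho(theta):
    return theta / (1.0 + theta + np.sqrt(1.0 + 2.0 * theta))

def pair_lik_div(delta, theta, tau, kmax=60):
    # l2(delta | theta, tau) = e^{-tau theta}/sqrt(1 + 2 theta)
    #   * sum_k rho(theta)^{|k|} I_{delta-k}(tau theta), truncated at |k| <= kmax
    k = np.arange(-kmax, kmax + 1)
    series = np.sum(rho(theta) ** np.abs(k) * iv(np.abs(delta - k), tau * theta))
    return np.exp(-tau * theta) / np.sqrt(1.0 + 2.0 * theta) * series

theta, tau = 0.5, 1.0
total = sum(pair_lik_div(d, theta, tau) for d in range(-40, 41))
```

The ρ(θ)^|k| factor and the rapid decay of I_n(τθ) in n make modest truncation bounds sufficient in practice.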

Page 116: [A]BCel : a presentation at ABC in Roma

Pairwise likelihood: 2 diverging populations

[Tree: populations a and b diverging from the MRCA at time τ]

Assumptions

I τ: divergence date of pop. a and b

I θ/2: mutation rate

Let x_k^i and x_k^j be two genes coming resp. from pop. a and b. Set δ = x_k^i − x_k^j.

A 2-dim score function

∂τ log ℓ2(δ|θ, τ) = −θ + (θ/2) [ℓ2(δ − 1|θ, τ) + ℓ2(δ + 1|θ, τ)] / ℓ2(δ|θ, τ)

∂θ log ℓ2(δ|θ, τ) = −τ − 1/(1 + 2θ) + q(δ|θ, τ)/ℓ2(δ|θ, τ) + (τ/2) [ℓ2(δ − 1|θ, τ) + ℓ2(δ + 1|θ, τ)] / ℓ2(δ|θ, τ)

where

q(δ|θ, τ) := (e^−τθ/√(1 + 2θ)) (ρ′(θ)/ρ(θ)) Σ_{k=−∞}^{∞} |k| ρ(θ)^|k| I_{δ−k}(τθ)

Page 117: [A]BCel : a presentation at ABC in Roma

Example: normal posterior

[A]BCel with two constraints

[Figure: posterior densities of θ over repeated simulations; ESS values range from about 76 to 156 across panels]

Sample sizes are of 25 (column 1), 50 (column 2) and 75 (column 3) observations

Page 118: [A]BCel : a presentation at ABC in Roma

Example: normal posterior

[A]BCel with three constraints

[Figure: posterior densities of θ over repeated simulations; ESS values range from about 135 to 332 across panels]

Sample sizes are of 25 (column 1), 50 (column 2) and 75 (column 3) observations

Page 119: [A]BCel : a presentation at ABC in Roma

Example: quantile g-and-k distributions

Distributions defined by a closed-form quantile function F^−1(p; θ) equal to

A + B (1 + c (1 − exp(−gz(r))) / (1 + exp(−gz(r)))) (1 + z(r)²)^k z(r)

where z(r) is the rth standard normal quantile

[Figure] True and fitted cdf functions with pointwise 95% credible (shaded) region centered on the fitted cdf, for a dataset of n = 100 observations from the g-and-k distribution, based on M = 10³ simulations of [A]BCel.
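Because the quantile function is available in closed form, simulation is immediate: apply it to uniforms. The sketch below fixes c = 0.8, the value customarily used for g-and-k distributions (an assumption, since the slide leaves c unspecified):

```python
import numpy as np
from scipy.stats import norm

def gk_quantile(p, A, B, g, k, c=0.8):
    # F^{-1}(p) = A + B (1 + c (1-e^{-gz})/(1+e^{-gz})) (1+z^2)^k z,  z = Phi^{-1}(p)
    z = norm.ppf(p)
    return A + B * (1.0 + c * (1.0 - np.exp(-g * z)) / (1.0 + np.exp(-g * z))) \
               * (1.0 + z ** 2) ** k * z

# simulate n = 100 draws by plugging uniforms into the quantile function
rng = np.random.default_rng(3)
sample = gk_quantile(rng.uniform(size=100), A=3.0, B=1.0, g=2.0, k=0.5)
```

Note that z = 0 at p = 0.5, so the median of any g-and-k distribution is exactly A, which is a handy sanity check on an implementation.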

Page 120: [A]BCel : a presentation at ABC in Roma

Example: quantile g-and-k distributions

Distributions defined by a closed-form quantile function F^−1(p; θ) equal to

A + B (1 + c (1 − exp(−gz(r))) / (1 + exp(−gz(r)))) (1 + z(r)²)^k z(r)

where z(r) is the rth standard normal quantile

[Figure] Variations of posterior means for n = 100 (1 to 5) and n = 500 observations (6 to 10). Moment conditions in the [A]BCel algorithm are quantiles of order (0.2, 0.5, 0.8) (1 and 6), (0.2, 0.4, 0.6, 0.8) (2 and 7), (0.1, 0.4, 0.6, 0.9) (3 and 8), (0.1, 0.25, 0.5, 0.75, 0.9) (4 and 9), and (0.1, 0.2, . . . , 0.9) (5 and 10).

Page 121: [A]BCel : a presentation at ABC in Roma

Example: ARCH and GARCH models

Simple dynamic model, namely the ARCH(1) model:

yt = σt εt , εt ∼ N(0, 1) , σ_t^2 = α0 + α1 y_{t−1}^2 ,

with uniform prior over α0,α1 ≥ 0, α0 + α1 ≤ 1.

Natural empirical likelihood representation based on the reconstituted εt's, defined as yt/σt when the σt's are derived recursively

[Figure] ABC evaluations of posterior expectations (top, with true values in dashed lines) and variances (bottom) of (α0,α1) with 100 observations. 1-2: two choices of summaries (least squares estimates and mean of the log yt's plus autocorrelations of order 2 and 3, respectively). 3-4: two sets of constraints (first 3 moments and second moment plus autocorrelation of order 1 plus correlation with previous observation for the reconstituted εt's).
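The reconstituted εt's are straightforward to derive recursively from the observed series. A sketch (with arbitrary illustrative true values α0 = 0.5, α1 = 0.4 and the stationary variance as starting value, both assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
a0, a1, n = 0.5, 0.4, 500

# simulate ARCH(1): y_t = sigma_t eps_t, sigma_t^2 = a0 + a1 y_{t-1}^2
y = np.empty(n)
s2 = a0 / (1.0 - a1)          # start at the stationary variance (an assumption)
for t in range(n):
    y[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = a0 + a1 * y[t] ** 2

def reconstituted(y, a0, a1):
    # eps_t = y_t / sigma_t, with the sigma_t^2 derived recursively from y
    s2 = np.empty(len(y))
    s2[0] = a0 / (1.0 - a1)
    s2[1:] = a0 + a1 * y[:-1] ** 2
    return y / np.sqrt(s2)

eps = reconstituted(y, a0, a1)   # ~ iid N(0, 1) at the true (a0, a1)
```

At the true parameter values the reconstituted residuals are iid N(0, 1), which is what makes them usable as the basis of EL moment constraints.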

Page 122: [A]BCel : a presentation at ABC in Roma

Example: ARCH and GARCH models

GARCH(1, 1) model formalized as the observation of yt ∼ N(0, σ_t^2) when

σ_t^2 = α0 + α1 y_{t−1}^2 + β1 σ_{t−1}^2

under the constraints α0,α1,β1 > 0 and α1 + β1 < 1, that is, yt = σt εt.

I Given the constraints, a natural prior is an exponential distribution on α0 and a Dirichlet D3(1, 1, 1) on (α1,β1, 1 − α1 − β1)

I For ABC, use of the maximum likelihood estimator as summary statistic, provided by the R function garch

I Chan and Ling (2006) derived natural score constraints for the empirical likelihood, used in the [A]BCel algorithm

Page 123: [A]BCel : a presentation at ABC in Roma

Example: ARCH and GARCH models

GARCH(1, 1) model formalized as the observation of yt ∼ N(0, σ_t^2) when

σ_t^2 = α0 + α1 y_{t−1}^2 + β1 σ_{t−1}^2

under the constraints α0,α1,β1 > 0 and α1 + β1 < 1, that is, yt = σt εt.

[Figure] Evaluations of posterior expectations (with true values in dashed lines) of (α0,α1,β1) with 250 observations. First row: optimal ABC algorithm (using the MLE as summary statistic and with the tolerance ε chosen as the 5% quantile of the distances), second row: [A]BCel algorithm based on the constraints derived in Chan and Ling (2006), and third row: MLE derived by R garch initialized at the true value.

Page 124: [A]BCel : a presentation at ABC in Roma

Example: Superposition of gamma processes

Example of superposition of N renewal processes with waiting times τij (i = 1, . . . ,N, j = 1, . . .) ∼ G(α,β), when N is unknown.

Renewal processes

ζi1 = τi1, ζi2 = ζi1 + τi2, . . .

with observations made of the first n values of the ζij's,

z1 = min{ζij }, z2 = min{ζij ; ζij > z1}, . . .

ending with

zn = min{ζij ; ζij > zn−1} .

[Cox & Kartsonaki, B'ka, 2012]
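A sketch of this data-generating mechanism (with β read as a rate parameter, hence scale 1/β in numpy, and all constants illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
N_proc, alpha, beta, n = 5, 2.0, 1.0, 100

# event times of each renewal process: cumulated Gamma(alpha, beta) waits
events = [np.cumsum(rng.gamma(alpha, 1.0 / beta, size=n)) for _ in range(N_proc)]

# observed superposition: the n smallest event times over all processes
# (each process contributes at most its first n events, so n waits per
# process suffice to determine z_1 < ... < z_n)
z = np.sort(np.concatenate(events))[:n]
```

The pooled sequence z is neither iid nor Markov, which is exactly what makes this an interesting test case for [A]BCel on the next slide.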

Page 125: [A]BCel : a presentation at ABC in Roma

Example: Superposition of gamma processes (ABC)

Interesting testing ground for [A]BCel since the data (zt) are neither iid nor Markov

Recovery of an iid structure by

1. simulating a pseudo-dataset, (z*_1, . . . , z*_n), as in regular ABC,

2. deriving the sequence of indicators (ν1, . . . ,νn), as z*_1 = ζ_{ν1 j1}, z*_2 = ζ_{ν2 j2}, . . .

3. exploiting that those indicators are distributed from the prior distribution on the νt's, leading to an iid sample of G(α,β) variables

Comparison of ABC and [A]BCel posteriors

[Figure] Posterior densities of α, β and N. Top: [A]BCel; bottom: regular ABC

Page 127: [A]BCel : a presentation at ABC in Roma

Pop’gen’: A first experiment

Evolutionary scenario:

[Tree] MRCA → POP 0, POP 1, divergence at time τ

Dataset:

I 50 genes per population,

I 100 microsat. loci

Assumptions:

I Ne identical over all populations

I φ = (log10 θ, log10 τ)

I uniform prior over (−1., 1.5)× (−1., 1.)

Comparison of the original ABC with [A]BCel (ESS=7034)

[Figure] Posterior densities of log(theta) and log(tau1): histogram = [A]BCel, curve = original ABC, vertical line = “true” parameter

Page 129: [A]BCel : a presentation at ABC in Roma

ABC vs. [A]BCel on 100 replicates of the 1st experiment

Accuracy:

        log10 θ          log10 τ
        ABC     BCel     ABC     BCel
(1)     0.097   0.094    0.315   0.117
(2)     0.071   0.059    0.272   0.077
(3)     0.68    0.81     1.0     0.80

(1) Root Mean Square Error of the posterior mean
(2) Median Absolute Deviation of the posterior median
(3) Coverage of the credibility interval of probability 0.8

Computation time: on a recent 6-core computer (C++/OpenMP)

I ABC ≈ 4 hours

I [A]BCel ≈ 2 minutes

Page 130: [A]BCel : a presentation at ABC in Roma

Pop’gen’: Second experiment

Evolutionary scenario:

[Tree] MRCA → POP 0, POP 1, POP 2, divergences at times τ1 and τ2

Dataset:

I 50 genes per population,

I 100 microsat. loci

Assumptions:

I Ne identical over all populations

I φ = (log10 θ, log10 τ1, log10 τ2)

I non-informative uniform prior

Comparison of the original ABC with [A]BCel

histogram = [A]BCel
curve = original ABC
vertical line = “true” parameter

Page 134: [A]BCel : a presentation at ABC in Roma

ABC vs. [A]BCel on 100 replicates of the 2nd experiment

Accuracy:

        log10 θ          log10 τ1         log10 τ2
        ABC     BCel     ABC     BCel     ABC     BCel
(1)     .0059   .0794    .472    .483     29.3    4.76
(2)     .048    .053     .32     .28      4.13    3.36
(3)     .79     .76      .88     .76      .89     .79

(1) Root Mean Square Error of the posterior mean
(2) Median Absolute Deviation of the posterior median
(3) Coverage of the credibility interval of probability 0.8

Computation time: on a recent 6-core computer (C++/OpenMP)

I ABC ≈ 6 hours

I [A]BCel ≈ 8 minutes

Page 135: [A]BCel : a presentation at ABC in Roma

Comparison

On large datasets, [A]BCel gives more accurate results than ABC

ABC simplifies the dataset through summary statistics. Due to the large dimension of x, the original ABC algorithm estimates

π(θ | η(xobs)),

where η(xobs) is some (non-linear) projection of the observed dataset on a space of smaller dimension
↪ Some information is lost

[A]BCel simplifies the model through a generalized moment condition model.
↪ Here, the moment condition model is based on pairwise composite likelihood

Page 136: [A]BCel : a presentation at ABC in Roma

Conclusion/perspectives

I abc part of a wider picture to handle complex/Big Data models

I many formats of empirical [likelihood] Bayes methods available

I lack of comparative tools and of an assessment of information loss