
An Introduction to Discrete-Event Simulation

Peter W. Glynn

Peter J. Haas

IMA Workshop, May 12, 2008

Markov Jump Processes

Goal: Compute $u^*(t) = (u^*(t, x) : x \in S)$, where $u^*(t, x) = E_x f(X(t))$.

Method: Solve

$$u'(t) = Q u(t), \qquad \text{s.t. } u(0) = f.$$
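For a finite state space this linear ODE system has the explicit solution $u^*(t) = e^{Qt} f$. A minimal numerical sketch in Python; the 2-state generator Q and terminal function f below are illustrative, not from the talk:

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

# Illustrative 2-state generator (rows sum to zero) and terminal function f
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
f = np.array([1.0, 0.0])

def u_star(t):
    """Solve u'(t) = Q u(t), u(0) = f: u*(t) = exp(Qt) f,
    so u_star(t)[x] = E_x f(X(t))."""
    return expm(Q * t) @ f

print(u_star(0.5))  # vector of E_x f(X(0.5)) over the two states
```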


Classical method for establishing existence/uniqueness: set

$$u_n^*(t, x) = E_x f(X(t))\, I(J(t) \le n),$$

where $J(t)$ counts the jumps of $X$ in $[0, t]$. Then

$$u_n^*(t, x) = f(x) e^{-\lambda(x) t} + \sum_{y \ne x} \int_0^t \lambda(x) e^{-\lambda(x) s} \cdot \frac{Q(x, y)}{\lambda(x)}\, u_{n-1}^*(t - s, y)\, ds.$$

Now, let $n \to \infty$.

Note: the jump time out of $x$ is determined by a single exponential r.v.


Classical method for simulating Markov jump processes:

1. $n \leftarrow 0$, $T \leftarrow 0$
2. Generate $\eta \sim \text{Exp}(\lambda(X(T)))$
3. Generate $Y$ from the p.m.f. $(Q(X(T), y)/\lambda(X(T)) : y \in S)$
4. $T \leftarrow T + \eta$
5. $X(T) \leftarrow Y$ and go to 2.

(aka the Gillespie algorithm... but it goes back to the early days of Markov jump processes)
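A minimal Python sketch of steps 1-5, assuming the caller supplies a total-rate function `lam(x)` and a jump p.m.f. `jump_probs(x)` returning `{y: Q(x, y)/lam(x)}`; these names are illustrative:

```python
import random

def simulate_mjp(x0, lam, jump_probs, t_end, rng=random):
    """Classical (Gillespie-style) simulation of a Markov jump process.

    lam(x):        total jump rate lambda(x) out of state x
    jump_probs(x): dict mapping y -> Q(x, y) / lambda(x)
    Returns X(t_end).
    """
    x, t = x0, 0.0
    while True:
        if lam(x) == 0:                     # absorbing state
            return x
        eta = rng.expovariate(lam(x))       # step 2: Exp(lambda(x)) holding time
        if t + eta > t_end:
            return x
        t += eta                            # step 4
        p = jump_probs(x)                   # step 3: draw the next state
        states, weights = zip(*p.items())
        x = rng.choices(states, weights=weights)[0]   # step 5
```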


Competing Exponentials

e.g. N independently evolving individuals

[Diagram: each individual switches between types A and B, with transition rates $\mu$ and $\lambda$]

$X(t)$ = # of type A individuals at time $t$. Then $X = (X(t) : t \ge 0)$ is a birth-death process with rates $\lambda(x) = \lambda x$ and $\mu(x) = (N - x)\mu$.

Simulation: one exponential vs. $N$ competing exponentials.


Competing exponentials can be more efficient when:

- one uses an efficient data structure to store the FES (future event set); see the sketch below
- the "classical method" is slow, because $(Q(X(T_n), y)/\lambda(X(T_n)) : y \in S)$ must be dynamically computed and/or is non-sparse

[Diagram: a single state with outgoing transition probabilities $p_1, p_2, \ldots, p_n$]
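As a sketch of the first point, a binary heap gives an $O(\log N)$ future event set; the per-individual constant rates here are an illustrative assumption:

```python
import heapq
import random

def competing_exponentials(rates, t_end, rng=random):
    """N competing exponential clocks kept in a heap (the FES).

    rates[i]: event rate of individual i (assumed constant here).
    Returns the list of (time, individual) events up to t_end.
    """
    # schedule each individual's first event
    fes = [(rng.expovariate(r), i) for i, r in enumerate(rates)]
    heapq.heapify(fes)
    events = []
    while fes:
        t, i = heapq.heappop(fes)        # earliest pending event
        if t > t_end:
            break
        events.append((t, i))
        # reschedule individual i; only its clock is redrawn
        heapq.heappush(fes, (t + rng.expovariate(rates[i]), i))
    return events
```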


Thinning

- $X = (X(t) : t \ge 0)$: Markov jump process with time-dependent rate matrix $Q(t)$
- Assume $\Lambda = \sup\{\lambda(t, x) : x \in S,\ t \ge 0\} < \infty$
- Generate points $S_1, S_2, \ldots$ according to a Poisson process with rate $\Lambda$
- Accept each $S_i$ with prob. $\lambda(S_i, X(T))/\Lambda$
- At a transition epoch, generate the next state according to the p.m.f. $(Q(S_i, X(T), y)/\lambda(S_i, X(T)) : y \in S)$
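A sketch of the thinning loop, assuming caller-supplied `lam(t, x)` (total rate) and `next_state(t, x)` (a draw from the jump p.m.f.); both names are illustrative:

```python
import random

def thinning(x0, Lam, lam, next_state, t_end, rng=random):
    """Thinning for a jump process with time-dependent rates.

    Lam: constant with lam(t, x) <= Lam for all t, x.
    Candidate points come from a rate-Lam Poisson process; each is
    accepted as a real transition with probability lam(t, x)/Lam.
    """
    x, t = x0, 0.0
    while True:
        t += rng.expovariate(Lam)            # next candidate point S_i
        if t > t_end:
            return x
        if rng.random() < lam(t, x) / Lam:   # accept with prob lam/Lam
            x = next_state(t, x)             # draw y ~ Q(t, x, .)/lam(t, x)
```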


Variance Reduction Methods for Markov Jump Processes

I. Discrete-Time Conversion

$Y = (Y_n : n \ge 0)$: the embedded DTMC.

To estimate $\alpha = E\, g(X(t) : t \ge 0)$, note that $\alpha = EW$, where $W = E[g(X(t) : t \ge 0) \mid Y]$.

e.g. $g(X(t) : t \ge 0) = \inf\{t \ge 0 : X(t) \in A\}$, for which

$$W = \sum_{i=0}^{\tau_A - 1} 1/\lambda(Y_i),$$

where $\tau_A = \inf\{n \ge 0 : Y_n \in A\}$ and $1/\lambda(Y_i)$ is the conditional mean holding time in $Y_i$. By the conditional variance formula, $\operatorname{var} W \le \operatorname{var}\, g(X)$.
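A sketch for the hitting-time example: simulate only the embedded chain $Y$ and sum the conditional mean holding times $1/\lambda(Y_i)$ instead of sampled exponentials (helper names are illustrative):

```python
import random

def dtc_hitting_time(y0, lam, jump_probs, in_A, rng=random):
    """Discrete-time conversion estimate of E_x inf{t : X(t) in A}.

    Returns W = sum_{i=0}^{tau_A - 1} 1/lam(Y_i), the conditional
    expectation of the hitting time given the embedded chain Y.
    """
    y, w = y0, 0.0
    while not in_A(y):
        w += 1.0 / lam(y)                # E[holding time in y | Y]
        p = jump_probs(y)                # embedded DTMC transition
        states, weights = zip(*p.items())
        y = rng.choices(states, weights=weights)[0]
    return w
```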


II. Antithetics

$X = (X(t) : t \ge 0)$: birth-death process. Write it via the random time-change representation, with $N^+$ and $N^-$ independent unit-rate Poisson processes:

$$X_1(t) = X(0) + N^+\!\left(\int_0^t \lambda(X_1(s))\, ds\right) - N^-\!\left(\int_0^t \mu(X_1(s))\, ds\right)$$

$$X_2(t) = X(0) + N^-\!\left(\int_0^t \lambda(X_2(s))\, ds\right) - N^+\!\left(\int_0^t \mu(X_2(s))\, ds\right)$$

($X_2$ swaps the roles of $N^+$ and $N^-$.)


If $g(X)$ ($:= g(X(t) : t \ge 0)$) is monotone in $N^+, N^-$, then

$$\operatorname{var}\!\left(\frac{g(X_1) + g(X_2)}{2}\right) < \operatorname{var}\, g(X).$$


III. Control Variates

Suppose we can compute $E\beta$. Then $\alpha = E g(X)$ can be estimated via

$$g(X) - \lambda(\beta - E\beta).$$

The optimal $\lambda^*$ (to reduce variance maximally) is

$$\lambda^* = \frac{\operatorname{cov}(g(X), \beta)}{\operatorname{var}\, \beta}.$$
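A sketch of the estimator, with the coefficient fitted from the same samples (a common, slightly biased shortcut):

```python
import numpy as np

def control_variate_estimate(g_samples, beta_samples, beta_mean):
    """Estimate alpha = E g(X) using a control beta with known mean E beta.

    Fits lambda* = cov(g, beta)/var(beta) from the samples, then
    averages g - lambda*(beta - E beta).
    """
    g = np.asarray(g_samples, dtype=float)
    b = np.asarray(beta_samples, dtype=float)
    lam_star = np.cov(g, b)[0, 1] / np.var(b, ddof=1)
    return float(np.mean(g - lam_star * (b - beta_mean)))
```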


For any Markov jump process and function $f : [0, \infty) \times S \to \mathbb{R}$,

$$M(t) := f(t, X(t)) - \int_0^t \left[\frac{\partial f(s, X(s))}{\partial s} + (Q(s) f_s)(X(s))\right] ds$$

(where $f_s = f(s, \cdot)$) is a martingale, so

$$E_x M(t) = f(0, x).$$


If we wish to compute $E_x g(X(t))$, note that if we choose $f$ as the solution to

$$\frac{\partial f}{\partial s} = Q f, \qquad \text{s.t. } f(0) = g,$$

then, applying the martingale identity to $f(t - s, X(s))$,

$$\frac{\partial f(t - s, X(s))}{\partial s} + (Q f_{t-s})(X(s)) = 0,$$

and we conclude that

$$E[f(0, X(t)) - f(t, x)] = 0,$$

i.e. $g(X(t)) - f(t, x)$ is a control (its mean is known to be zero).


Schlögl Model

Reactions (forward/backward rate constants on the arrows):

$$R_0 \underset{3}{\overset{1}{\rightleftharpoons}} R, \qquad R_2 + 2R \underset{1}{\overset{3}{\rightleftharpoons}} 3R$$

$X_n(t)$ = # molecules at time $t$ of a substance $R$ in a volume $n$.

$X_n$ is a birth-death process with birth-death rates:

$$\lambda_n(x) = n\big(1 + 3x(x - 1/n)\big)$$

$$\mu_n(x) = n\big(3x + x(x - 1/n)(x - 2/n)\big)$$


$$n^{-3/4} X_n(n^{1/2} t) - n^{1/4} \Rightarrow Z(t),$$

where

$$dZ(t) = -Z(t)^3\, dt + 2\sqrt{2}\, dB(t).$$

In place of $f(s, x) = E_x g(X(s))$, use

$$f_{\mathrm{approx}}(s, x) = E_{n^{-3/4} x - n^{1/4}}\, g\big(n^{1/4} + n^{3/4} Z(n^{-1/2} s)\big).$$


IV. Sensitivity Analysis

Goal: Compute $\alpha = E g(X_1) - E g(X_2)$.

$X_i$: birth-death with birth rates $(\lambda_i(x) : x \ge 0)$ and death rates $(\mu_i(x) : x \ge 1)$.

e.g. $\lambda_1(x) = \lambda(\theta, x)$, $\lambda_2(x) = \lambda(\theta + h, x)$.

$$X_i(t) = X_i(0) + N^+\!\left(\int_0^t \lambda_i(X_i(s))\, ds\right) - N^-\!\left(\int_0^t \mu_i(X_i(s))\, ds\right),$$

with the same $N^+, N^-$ for both processes (common random numbers).
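A sketch of this coupling via the random time-change representation: pre-draw the epochs of the two unit-rate Poisson processes once, then run both parameter settings against the same streams (all function and variable names are illustrative):

```python
import random
from itertools import accumulate

def unit_poisson_epochs(n, rng):
    """First n arrival epochs of a unit-rate Poisson process."""
    return list(accumulate(rng.expovariate(1.0) for _ in range(n)))

def time_change_path(x0, lam, mu, t_end, plus_epochs, minus_epochs):
    """X(t) = x0 + N+(int_0^t lam(X)) - N-(int_0^t mu(X)), driven by
    pre-drawn unit-rate epochs; assumes enough epochs were drawn."""
    x, t = x0, 0.0
    up, um = 0, 0                 # next unused epoch in each stream
    a_plus = a_minus = 0.0        # internal clocks: int lam, int mu
    while t < t_end:
        lx, mx = lam(x), mu(x)
        # time until each internal clock reaches its next epoch
        dt_plus = (plus_epochs[up] - a_plus) / lx if lx > 0 else float("inf")
        dt_minus = (minus_epochs[um] - a_minus) / mx if mx > 0 else float("inf")
        dt = min(dt_plus, dt_minus)
        if t + dt > t_end:
            break
        t += dt
        a_plus += lx * dt
        a_minus += mx * dt
        if dt_plus <= dt_minus:
            x += 1; up += 1       # birth: next N+ epoch reached
        else:
            x -= 1; um += 1       # death: next N- epoch reached
    return x

# Same driving noise for both parameter values (common random numbers):
plus = unit_poisson_epochs(100_000, random.Random(1))
minus = unit_poisson_epochs(100_000, random.Random(2))
# x1 = time_change_path(x0, lambda x: lam(theta, x),     mu, t, plus, minus)
# x2 = time_change_path(x0, lambda x: lam(theta + h, x), mu, t, plus, minus)
```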


Use of change of measure

$P_\lambda$: probability under which $N$ evolves as a Poisson process with rate $\lambda$. With $\tau_1, \ldots, \tau_n$ the interarrival times and $T_n = \tau_1 + \cdots + \tau_n$:

$$P_{\lambda_2}(\tau_1 \in dt_1, \ldots, \tau_n \in dt_n) = \prod_{i=1}^n \lambda_2 e^{-\lambda_2 t_i}\, dt_i = \prod_{i=1}^n \left(\frac{\lambda_2}{\lambda_1}\right) e^{-(\lambda_2 - \lambda_1) t_i}\, \lambda_1 e^{-\lambda_1 t_i}\, dt_i = E_{\lambda_1} I(\tau_1 \in dt_1, \ldots, \tau_n \in dt_n)\, L_n$$

$$L_n = \exp\!\left(-(\lambda_2 - \lambda_1) T_n + \int_{[0, T_n]} (\log \lambda_2 - \log \lambda_1)\, N(ds)\right)$$


More generally:

$P_i$: $X$ evolves according to

$$X(t) = X(0) + N^+\!\left(\int_0^t \lambda_i(X(s))\, ds\right) - N^-\!\left(\int_0^t \mu(X(s))\, ds\right)$$

$$P_2(A) = E_1 I(A) L(t)$$

$$L(t) = \exp\!\left(-\int_0^t \big(\lambda_2(X(s)) - \lambda_1(X(s))\big)\, ds + \int_{[0, t]} \big(\log \lambda_2(X(s-)) - \log \lambda_1(X(s-))\big)\, N^+(ds)\right)$$
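A sketch that evaluates $L(t)$ along a simulated path, assuming the path is stored as constant segments `(a, b, x)` plus the list of up-jump epochs with pre-jump states; this data layout is an assumption, not from the talk:

```python
import math

def likelihood_ratio(segments, up_jumps, lam1, lam2):
    """L(t) for reweighting P1-paths to P2.

    segments: list of (a, b, x) with X(s) = x on [a, b)
    up_jumps: list of (s, x_minus), the epochs of N+ with X(s-) = x_minus
    """
    log_l = -sum((lam2(x) - lam1(x)) * (b - a) for a, b, x in segments)
    log_l += sum(math.log(lam2(x) / lam1(x)) for _, x in up_jumps)
    return math.exp(log_l)
```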


For sensitivity analysis: take $\lambda_2(x) = \lambda(h, x)$ (and $\lambda_1(x) = \lambda(0, x)$). Then

$$\frac{d}{dh} E_h\, g(X) \Big|_{h=0} = E_0\!\left[ g(X)\, \frac{d}{dh} L(h, t) \Big|_{h=0} \right] = E_0\, g(X) \left[\int_{[0, t]} \frac{\lambda'(0, X(s-))}{\lambda(0, X(s-))}\, N^+(ds) - \int_0^t \lambda'(0, X(s))\, ds\right]$$
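The bracketed score term, evaluated on the same path representation as in the likelihood-ratio sketch above (again an assumed layout), with `dlam0(x)` standing for $\lambda'(0, x)$:

```python
def score_factor(segments, up_jumps, lam0, dlam0):
    """Multiplier of g(X) in the derivative estimator at h = 0:
    sum of dlam0/lam0 over up-jumps minus int_0^t dlam0(X(s)) ds."""
    jump_part = sum(dlam0(x) / lam0(x) for _, x in up_jumps)
    integral_part = sum(dlam0(x) * (b - a) for a, b, x in segments)
    return jump_part - integral_part
```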


For rare-event simulation,

$$P_2(A) = E_1 I(A) L(t).$$

Choose $\lambda_1$ so that rare reactions are simulated more often (importance sampling).


Massively Parallel Simulations

- Start the same computation independently on many processors

[Diagram: processors $1, 2, \ldots, i, \ldots, p$, each advancing a simulation along a time axis toward $t$]

- If one terminates the computation at time $t$, slowly running simulations are excluded from the final estimate

Bias!
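A toy illustration of this completion bias, under the assumption that a replication's running time is correlated with its output (all names illustrative):

```python
import random

def naive_parallel_mean(n_procs, budget, rng=random):
    """Each processor runs one replication; only those finishing within
    the wall-clock budget are averaged. Here the output equals the
    running time, so dropping slow runs biases the estimate downward."""
    outputs = [rng.expovariate(1.0) for _ in range(n_procs)]
    finished = [x for x in outputs if x <= budget]   # slow runs excluded
    return sum(finished) / len(finished) if finished else float("nan")

# true mean is 1.0; the truncated average is systematically smaller
print(naive_parallel_mean(10_000, budget=2.0))
```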


Conclusions

- Review of discrete-event simulation and modeling
- Simulation of Markov jump processes
- Basic variance reduction techniques
