Simulation from Distributions
Sampling
Distributions can be simulated from(!). Different distributions require special treatment, although there are general methods (rejection, composition, inversion). I will cover here the basic methods for common distributions.
Random number generation
All distribution simulation is based on random number generation. Fundamental is the generation of U(0,1) variates: a "random number" is a simulation from this distribution.
However, random number generators are not truly random: they are constructed mathematically from fixed sequences of numbers that essentially mimic randomness.
Linear congruential generator (LCG)
Recursive sequence:
    Z_i = (a * Z_{i-1} + c) mod m
where m is the modulus, a the multiplier, c the increment, and Z_0 the seed; all are non-negative integers. To get the new value, simply compute Z_i as the remainder of (a * Z_{i-1} + c) divided by m.
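As a sketch, the recursion above takes only a few lines of Python (the parameters a = 1664525, c = 1013904223, m = 2^32 are one well-known choice; the seed is arbitrary):

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: Z_i = (a*Z_{i-1} + c) mod m."""
    z = seed
    while True:
        z = (a * z + c) % m
        yield z

gen = lcg(seed=42)
draws = [next(gen) for _ in range(5)]
```

The sequence is entirely deterministic: re-running with the same seed reproduces the same draws.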
Random (uniform) numbers
To get the random number, set U_i = Z_i / m; then U_i ~ U(0,1). As you can see, the sequence is fixed, but if a, c, m are chosen well the resulting random numbers will look like a sample from a uniform.
• Scaling: for a uniform on a different range just multiply by the range: e.g. for U(0,10) just compute 10 * U_i.
• For a discrete uniform x_i ~ DU(a, b), set
    x_i = a + int((b - a + 1) * U_i)
where int(.) is the integer part.
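The scaling rules above, as a Python sketch (using Python's built-in U(0,1) generator; runif and rdunif are hypothetical helper names):

```python
import random

random.seed(0)

def runif(lo=0.0, hi=1.0):
    """U(lo, hi): scale a U(0,1) draw by the range."""
    return lo + (hi - lo) * random.random()

def rdunif(a, b):
    """Discrete uniform on {a, a+1, ..., b}: x = a + int((b - a + 1) * U)."""
    return a + int((b - a + 1) * random.random())

samples = [rdunif(1, 6) for _ in range(1000)]   # simulated die rolls
```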
Distributions
Sampling from any distribution is based on random numbers. Some basic methods:
• Inverse transform
• Composition
• Convolution
• Rejection
Inverse transform
Target density: f_X(x). CDF: F(u) = Pr(X <= u). Denote F^{-1} as the inverse of F. Then
    Pr(X <= x) = Pr(F^{-1}(U) <= x) = Pr(U <= F(x)) = F(x),  as U ~ U(0,1).
Algorithm:
1. Generate U_i ~ U(0,1)
2. Set X_i = F^{-1}(U_i)
Exponential example
Density and CDF:
    f(x) = (1/β) exp(-x/β)
    F(x) = 1 - exp(-x/β)
Set u = F(x) and solve:
    F^{-1}(u) = -β ln(1 - u),  and so  X = -β ln(U)
(since 1 - U ~ U(0,1) when U ~ U(0,1)).
Algorithm:
1) Generate U_i ~ U(0,1)
2) Set X_i = -β ln(U_i)
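The exponential inverse-transform algorithm can be sketched as follows (rexp is a hypothetical helper name; using 1 - U avoids log(0), since Python's random() can return exactly 0):

```python
import math
import random

random.seed(1)

def rexp(beta, n):
    """Exponential(mean beta) by inverse transform: X = -beta * ln(1 - U)."""
    return [-beta * math.log(1.0 - random.random()) for _ in range(n)]

x = rexp(beta=2.0, n=50_000)
mean = sum(x) / len(x)   # should be near beta = 2
```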
Some basic intuitive examples
• Bernoulli: generate U_i ~ U(0,1); if U_i <= p, set X = 1, otherwise X = 0.
• Weibull: F^{-1}(u) = β[-ln(1 - u)]^{1/α}; set X_i = β[-ln(U_i)]^{1/α}.
• Discrete uniform: x_i = a + int((b - a + 1) * U_i), where int(.) is the integer part.
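The Bernoulli and Weibull recipes above, as a Python sketch (rbern and rweibull are hypothetical names):

```python
import math
import random

random.seed(7)

def rbern(p):
    """Bernoulli(p): X = 1 if U <= p, otherwise X = 0."""
    return 1 if random.random() <= p else 0

def rweibull(alpha, beta):
    """Weibull by inverse transform: X = beta * (-ln(1 - U))**(1/alpha)."""
    return beta * (-math.log(1.0 - random.random())) ** (1.0 / alpha)

bern = [rbern(0.3) for _ in range(10_000)]
weib = [rweibull(alpha=2.0, beta=1.0) for _ in range(10_000)]
phat = sum(bern) / len(bern)     # should be near p = 0.3
wmean = sum(weib) / len(weib)    # near beta * Gamma(1 + 1/alpha) ~ 0.886
```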
Composition
Ideal for distribution mixtures:
    f_X(x) = Σ_{j=1}^∞ p_j f_j(x),  where p_j >= 0 and Σ_j p_j = 1.
Algorithm:
1) Generate a positive random integer J such that Pr(J = j) = p_j for j = 1, 2, 3, ...
2) Given J = j, generate X with distribution F_j
Examples include the Laplace (double exponential).
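The Laplace example can be sketched as a two-component composition: with p_1 = p_2 = 0.5, draw either an exponential or its reflection about zero (rlaplace is a hypothetical name):

```python
import math
import random

random.seed(3)

def rlaplace(beta):
    """Laplace(0, beta) by composition of two components."""
    j = 1 if random.random() < 0.5 else 2          # step 1: Pr(J = j) = p_j
    x = -beta * math.log(1.0 - random.random())    # step 2: Exponential(beta)
    return x if j == 1 else -x                     # j = 2: reflected component

draws = [rlaplace(1.0) for _ in range(20_000)]
mean = sum(draws) / len(draws)                     # should be near 0
frac_pos = sum(d > 0 for d in draws) / len(draws)  # should be near 0.5
```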
Convolution
If a random variable can be expressed as a sum of IID variables then it can be generated from a sum.
Algorithm:
1. Generate Y_1, Y_2, ..., Y_m
2. Set X = Y_1 + Y_2 + ... + Y_m
Examples:
• Binomial (convolution of IID Bernoullis)
• Negative binomial (convolution of IID geometrics)
• Chi-squared with K df (convolution of K IID chi-squared df=1)
• Ga(a,b): b * convolution of a IID Exponential(1)s (a integer)
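As a sketch of the convolution approach, the binomial example above: a sum of n IID Bernoulli(p) draws (rbinom is a hypothetical name):

```python
import random

random.seed(11)

def rbinom(n, p):
    """Binomial(n, p) by convolution: sum of n IID Bernoulli(p) draws."""
    return sum(1 if random.random() <= p else 0 for _ in range(n))

x = [rbinom(10, 0.4) for _ in range(20_000)]
mean = sum(x) / len(x)   # should be near n*p = 4
```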
Rejection sampling
Basically we specify a function t(x) that majorises the density, i.e.
    t(x) >= f(x)  for all x.
Define
    c = ∫ t(x) dx >= ∫ f(x) dx = 1,
but r(x) = t(x)/c is a density. We must be able to easily generate a variate from r(x).
Rejection algorithm:
1. Generate Y from the density r
2. Generate U_i ~ U(0,1)
3. If U_i <= f(Y)/t(Y) then set X = Y; otherwise return to step 1
The algorithm loops until we get a valid X.
Beta example
Rejection for Beta(4,3), with density
    f(x) = 60 x^3 (1 - x)^2,  0 <= x <= 1.
The mode of the density is at 0.6 and f(0.6) = 2.0736, hence set t(x) = 2.0736 for 0 <= x <= 1; then t(x) majorises f(x), and r(x) is just the uniform density on (0,1).
Algorithm:
1. Generate Y ~ U(0,1) and U ~ U(0,1)
2. If U <= f(Y)/2.0736 set X = Y; otherwise reject Y
The Beta can also be generated from gamma distributions.
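The Beta(4,3) rejection algorithm, as a Python sketch (rbeta43 is a hypothetical name; f(0.6) = 60 * 0.6^3 * 0.4^2 = 2.0736):

```python
import random

random.seed(5)

def rbeta43():
    """Beta(4,3) by rejection: f(x) = 60 x^3 (1-x)^2 on [0,1], constant
    envelope t(x) = 2.0736 = f(0.6), so r(x) is the U(0,1) density."""
    while True:
        y = random.random()                        # candidate from r = U(0,1)
        u = random.random()
        if u <= 60 * y**3 * (1 - y) ** 2 / 2.0736:
            return y                               # accept; else loop (reject)

x = [rbeta43() for _ in range(20_000)]
mean = sum(x) / len(x)   # Beta(4,3) mean = 4/7 ~ 0.571
```

With a constant envelope the acceptance rate is 1/c = 1/2.0736, i.e. roughly half the candidates are rejected.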
[Figure: Beta(4,3) density with the constant envelope t(x) = 2.0736]
Special distributions
• Gaussian: Box-Muller or polar Marsaglia method
• Lognormal: transform of the normal distribution
• Gamma: complex rejection sampling
• Student's t: from Gaussian and chi-squared distributions
• F: from chi-squared distributions
Correlated random variables
Can exploit conditioning, i.e. generate
    X_1 from the marginal distribution F_X
    X_2 from F(. | X_1)
    X_3 from F(. | X_1, X_2)
    ...
    X_n from F(. | X_1, X_2, ..., X_{n-1})
Not very useful in general.
Multivariate normal generation
The MVN is relatively easy to simulate from, given the covariance matrix is known. Define
    X = (X_1, X_2, ..., X_n)^T ~ N(μ, Σ),
and the covariance matrix can be decomposed by Cholesky: Σ = CC^T, where c_ij is the (i,j)-th element of C.
Algorithm:
1) Generate IID N(0,1) variates Z_1, Z_2, ..., Z_n
2) For i = 1, 2, ..., n let
    X_i = μ_i + Σ_{j=1}^{i} c_ij Z_j
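A sketch of the algorithm, with a hand-rolled Cholesky factorisation for illustration (in practice a library routine would be used; cholesky and rmvn are hypothetical names):

```python
import math
import random

random.seed(2)

def cholesky(sigma):
    """Lower-triangular C with sigma = C C^T (sigma positive definite)."""
    n = len(sigma)
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(c[i][k] * c[j][k] for k in range(j))
            c[i][j] = (math.sqrt(sigma[i][i] - s) if i == j
                       else (sigma[i][j] - s) / c[j][j])
    return c

def rmvn(mu, sigma):
    """One MVN(mu, sigma) draw: X_i = mu_i + sum_{j<=i} c_ij Z_j."""
    c = cholesky(sigma)
    z = [random.gauss(0.0, 1.0) for _ in mu]       # IID N(0,1) variates
    return [mu[i] + sum(c[i][j] * z[j] for j in range(i + 1))
            for i in range(len(mu))]

mu = [1.0, -1.0]
sigma = [[2.0, 0.6], [0.6, 1.0]]
x = [rmvn(mu, sigma) for _ in range(20_000)]
m0 = sum(v[0] for v in x) / len(x)   # should be near mu_1 = 1
m1 = sum(v[1] for v in x) / len(x)   # should be near mu_2 = -1
```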
Adaptive rejection sampling (ARS)
Rejection sampling is simple but can be inefficient. The Beta example used a uniform constant envelope, which yields a lot of rejections. It would be better to use functions t(x) that wrap closer to the target f(x), but they must be easy to simulate from.
Adaptive rejection sampling
The basic idea is that fitting a good envelope can be adaptive: as you sample, the sampler adjusts to the form of the function. It is designed for log-concave functions; most likelihoods are log-concave (those of the exponential family are).
ARS proceeds by majorising the log density
    g(x) = log(f(x))
with piecewise linear segments instead of a uniform density; a linear segment on the log scale corresponds to an exponential density segment, which is easy to sample from. The segments are adjusted as the sampler proceeds. An under (squeezing) function can also be used to help acceptances.
ARMS
If the density is not log-concave it is still possible to use ARS, but the extension for non-concavity includes a Metropolis step: adaptive rejection Metropolis sampling (ARMS) is an automatic method that can tune the sampler to any concave or non-concave function.
On software platforms such as BUGS, ARS and ARMS are implemented and represent the 'adaptive phase'.
Slice sampling
Radford Neal introduced this idea:
Neal, Radford M. (2003). "Slice Sampling". Annals of Statistics 31(3): 705–767.
Basically, slices are cut across the distribution and sampling is based on the slice width. It can be more efficient than MH and can be incorporated within an MCMC algorithm.
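A minimal slice-sampler sketch for a density with known bounded support (the naive "resample until on the slice" step used here is valid but inefficient; a practical sampler would use Neal's stepping-out and shrinkage procedures):

```python
import random

random.seed(9)

def slice_sample(f, x0, n, lo=0.0, hi=1.0):
    """Minimal slice sampler for a density f supported on [lo, hi].
    Each step: draw u ~ U(0, f(x)) (the vertical slice), then draw the
    next x uniformly from the horizontal slice {x : f(x) >= u}."""
    x, out = x0, []
    for _ in range(n):
        u = random.uniform(0.0, f(x))
        while True:
            cand = random.uniform(lo, hi)
            if f(cand) >= u:          # candidate lies on the slice: accept
                x = cand
                break
        out.append(x)
    return out

f = lambda v: 60 * v**3 * (1 - v) ** 2    # Beta(4,3) density again
draws = slice_sample(f, x0=0.5, n=20_000)
mean = sum(draws) / len(draws)            # Beta(4,3) mean = 4/7 ~ 0.571
```

Note there is no envelope to design: the slice adapts automatically to the height of the density at the current point.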