Algorithms for Uncertainty Quantification
Tobias Neckel, Ionuț-Gabriel Farcaș
Lehrstuhl Informatik V
Summer Semester 2017
Lecture 4: More advanced sampling techniques
Repetition from previous lecture
• Sampling methods → a popular technique for uncertainty propagation
• Most widely used sampling approach → Monte Carlo sampling
• Monte Carlo sampling → simple, robust, independent of the probability distribution, of the number of random parameters, ...
• ... but slow convergence rate
• Model problem → damped linear oscillator
• Uncertainty in some input parameters → higher impact on the output uncertainty
Monte Carlo sampling error analysis
Short error analysis for standard Monte Carlo

Remember
• for $N$ samples, the MCS error is $\mathcal{O}(1/\sqrt{N})$

How did we get that?
• Monte Carlo sampling ↔ averaging
• let $f : [0,1] \to \mathbb{R}$ and $I := \int_0^1 f(x)\,dx$
• $I \approx I_f = \frac{1}{N} \sum_{i=1}^{N} f(U_i)$, $U_i \sim \mathcal{U}(0,1)$
• if $\sigma_f^2 = \mathrm{Var}[f(U)]$, then $\mathbb{E}[(I - I_f)^2] = \sigma_f^2 / N$
• $\sigma_f^2 \approx \hat{\sigma}_f^2 = \frac{1}{N-1} \sum_{i=1}^{N} \left(f(U_i) - I_f\right)^2$
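To make the $\mathcal{O}(1/\sqrt{N})$ rate concrete, here is a minimal sketch (not from the lecture; the integrand $f(x) = e^x$ and the seed are my choices): it estimates $I = \int_0^1 f(x)\,dx$ by averaging and reports both the true error and the estimated root-mean-square error $\sqrt{\hat{\sigma}_f^2 / N}$.

```python
import numpy as np

rng = np.random.default_rng(42)
f = lambda x: np.exp(x)        # assumed test integrand with a known integral
I_exact = np.e - 1.0           # int_0^1 exp(x) dx = e - 1

for N in [100, 1_000, 10_000, 100_000]:
    U = rng.uniform(0.0, 1.0, size=N)      # U_i ~ U(0,1)
    I_f = f(U).mean()                      # Monte Carlo estimate I_f
    sigma2_hat = f(U).var(ddof=1)          # unbiased sample variance of f
    print(f"N={N:>6}: |I - I_f| = {abs(I_exact - I_f):.2e}, "
          f"estimated RMSE = {np.sqrt(sigma2_hat / N):.2e}")
```

Each tenfold increase in $N$ shrinks the error by roughly $\sqrt{10}$, as the rate predicts.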
Improving standard Monte Carlo sampling
Towards more advanced sampling techniques
• How to increase the accuracy of standard Monte Carlo? ($\mathbb{E}[(I - I_f)^2] = \sigma_f^2 / N$)
− improve your code (see the sketch after this list)
  • parallelize
  • vectorize
  • remove if statements
  • use memory efficiently
  • etc.
− increase $N$
  • not desirable
− decrease $\sigma_f^2$
  • desirable
− improve the random number generation
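As an illustration of the "vectorize / remove if statements" advice (a sketch with made-up numbers, not the lecture's code), both variants below estimate $P(U > 0.5)$ for $U \sim \mathcal{U}(0,1)$:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.uniform(size=100_000)

# naive: an interpreted Python loop with a branch per sample
count = 0
for u in U:
    if u > 0.5:
        count += 1
p_loop = count / U.size

# vectorized: the if statement becomes a boolean mask evaluated in compiled code
p_vec = np.mean(U > 0.5)

assert abs(p_loop - p_vec) < 1e-12   # identical result, far less Python overhead
```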
Improving standard Monte Carlo sampling

Variance-minimization techniques
• antithetic sampling
• importance sampling
• stratified sampling
• control variates

Alternative random number generation techniques
• Fibonacci generators
• Latin hypercube sampling
• Sobol' sequences
• Halton sequences
Variance reduction techniques
Antithetic sampling
• let $X$ denote a (continuous) random variable with pdf $f_X(x)$, $\mathrm{supp}(X) \subset \mathbb{R}$
• assume $f_X(x)$ is symmetric, $c$ being the center of symmetry
• let $t(X_1, \ldots, X_n)$ denote an estimator
• let $x \in \mathrm{supp}(X)$; the reflection of $x$ w.r.t. $c$ is $\tilde{x} = 2c - x$
• symmetry implies $f_X(x) = f_X(\tilde{x})$
• draw $n/2$ samples $X_1, \ldots, X_{n/2}$ from $f_X(x)$
• the remaining $n/2$ samples $\tilde{X}_1, \ldots, \tilde{X}_{n/2}$ are obtained via reflection
• then, for a sample-mean estimator, $t(X_1, \ldots, X_n) = \frac{1}{2}\left( t(X_1, \ldots, X_{n/2}) + t(\tilde{X}_1, \ldots, \tilde{X}_{n/2}) \right)$
• example: if $U \sim \mathcal{U}(0,1)$, the antithetic samples are $\tilde{U} = 1 - U$
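A minimal sketch of antithetic sampling for $I = \int_0^1 f(x)\,dx$, assuming the sample-mean estimator and $\mathcal{U}(0,1)$ inputs, so the reflected sample is $1 - U$ (the integrand is my choice):

```python
import numpy as np

def antithetic_estimate(f, n, rng):
    """Average f over n/2 uniforms and their reflections 1 - U (n evaluations)."""
    U = rng.uniform(size=n // 2)
    return 0.5 * (f(U).mean() + f(1.0 - U).mean())

rng = np.random.default_rng(1)
f = lambda x: np.exp(x)   # monotone f -> f(U) and f(1 - U) negatively correlated

plain = f(rng.uniform(size=10_000)).mean()     # standard MC, same sample budget
anti = antithetic_estimate(f, 10_000, rng)
print(f"plain: {plain:.5f}, antithetic: {anti:.5f}, exact: {np.e - 1:.5f}")
```

For monotone integrands the pairs are negatively correlated, which is where the variance reduction comes from.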
Stratified sampling
• let $X$ denote a (continuous) random variable with pdf $f_X(x)$, $\mathrm{supp}(X) \subset \mathbb{R}$
• assume, without loss of generality, that $\mathrm{supp}(X) = [0,1]$
• let $t(X_1, \ldots, X_n)$ denote an estimator
• idea: prevent samples from clustering in a particular region of the interval
• select $\lambda \in (0,1)$
• then draw $n_1 = \lambda n$ samples in $[0, \lambda]$ and $n_2 = n - n_1 = (1-\lambda) n$ samples in $[\lambda, 1]$
• combine with the strata weighted by their probabilities: $t(X_1, \ldots, X_n) = \lambda\, t(X_1, \ldots, X_{n_1}) + (1-\lambda)\, t(X_{n_1+1}, \ldots, X_n)$ (for a sample-mean estimator)
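A sketch of the two-strata case above for $\int_0^1 f(x)\,dx$ ($\lambda = 0.5$ and the integrand are my choices):

```python
import numpy as np

def stratified_estimate(f, n, lam, rng):
    n1 = int(lam * n)                        # proportional allocation n1 = lam*n
    X1 = rng.uniform(0.0, lam, size=n1)      # samples forced into [0, lam]
    X2 = rng.uniform(lam, 1.0, size=n - n1)  # samples forced into [lam, 1]
    # combine the per-stratum means, weighted by the stratum probabilities
    return lam * f(X1).mean() + (1.0 - lam) * f(X2).mean()

rng = np.random.default_rng(2)
f = lambda x: np.exp(x)
print(f"stratified: {stratified_estimate(f, 10_000, 0.5, rng):.5f}, "
      f"exact: {np.e - 1:.5f}")
```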
Control variates
• remember Monte Carlo integration: estimate $\int_0^1 f(x)\,dx$ via sampling
• assume there exists $\phi : [0,1] \to \mathbb{R}$ that can be easily integrated
• therefore $\int_0^1 f(x)\,dx = \int_0^1 \left( f(x) + \phi(x) - \phi(x) \right) dx = \int_0^1 \phi(x)\,dx + \int_0^1 \left( f(x) - \phi(x) \right) dx$
• $\mathrm{Var}(f - \phi) = \mathrm{Var}(f) + \mathrm{Var}(\phi) - 2\,\mathrm{Cov}(f, \phi)$
• if $\mathrm{Cov}(f, \phi)$ is high, i.e. $f$ and $\phi$ are "similar", then $\mathrm{Var}(f - \phi) < \mathrm{Var}(f)$
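A sketch of a control variate: on $[0,1]$, $\phi(x) = 1 + x$ is "similar" to $f(x) = e^x$ and integrates exactly to $3/2$ (both functions are my choices, not the lecture's):

```python
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: np.exp(x)
phi = lambda x: 1.0 + x
I_phi = 1.5                      # int_0^1 (1 + x) dx, known in closed form

U = rng.uniform(size=10_000)
plain = f(U).mean()                             # standard MC
controlled = I_phi + (f(U) - phi(U)).mean()     # exact part + MC of (f - phi)

print(f"plain: {plain:.5f}, with control variate: {controlled:.5f}, "
      f"exact: {np.e - 1:.5f}")
print(f"Var(f) = {f(U).var():.4f}  vs  Var(f - phi) = {(f(U) - phi(U)).var():.4f}")
```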
Importance sampling
• remember Monte Carlo integration: estimate $\int_0^1 f(x)\,dx$ via sampling
• standard Monte Carlo solution: $\int_{[0,1]} f(x)\,dx \approx I = \frac{1}{N} \sum_{i=1}^{N} f(U_i)$, $U_i \sim \mathcal{U}(0,1)$
• however, the $U_i$ are spread all over the domain
• idea: sample from another distribution $g_X$ that better captures the structure of $f$
• $\int_0^1 f(x)\,dx = \int_0^1 \frac{f(x)}{g_X(x)}\, g_X(x)\,dx = \int_0^1 h(x)\, g_X(x)\,dx$
• therefore, instead of sampling from the uniform distribution, sample according to $g_X$
• the variance is reduced if $f$ and $g_X$ have similar shapes
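A sketch of importance sampling where the proposal can be sampled by inverting its CDF (integrand $f(x) = x^2$ and proposal $g(x) = 3x^2$ are my choices; $g \propto f$ is the ideal case, so the weights are constant):

```python
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: x**2
g = lambda x: 3.0 * x**2       # a density on [0,1] with the same shape as f

V = rng.uniform(size=10_000)
X = V ** (1.0 / 3.0)           # inverse CDF: G(x) = x^3  =>  X = V^(1/3) ~ g
h = f(X) / g(X)                # importance weights h = f/g (here exactly 1/3)

print(f"estimate: {h.mean():.5f} (exact 1/3), sample variance: {h.var():.2e}")
```

Because $g$ is proportional to $f$, this estimator has zero variance; in practice $g$ only approximates the shape of $f$ and the variance is merely reduced.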
Quasi-Monte Carlo: alternative sampling techniques
Quasi-Monte Carlo sampling
• standard Monte Carlo: pseudo-random samples
• quasi-Monte Carlo (QMC): deterministic samples
• in this lecture: QMC based on low-discrepancy sequences
• note: QMC methods are defined on $[0,1]^d$; for any other domain, we need transformations
Upper bound of integration error
Koksma-Hlawka inequality

Remember
• $I := \int_0^1 f(x)\,dx$
• $I_f = \frac{1}{N} \sum_{i=1}^{N} f(x_i)$

Theorem (Koksma-Hlawka inequality)
$|I - I_f| \le V(f)\, D_N$

• $V(f)$ → variation of $f$
• $D_N = \sup_{A \subset [0,1]} \left| \frac{\mathrm{card}(A)}{N} - \mathrm{vol}(A) \right|$ → discrepancy of $\{x_i\}_{i=1}^{N}$, where $\mathrm{card}(A)$ counts the samples falling in the subinterval $A$
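The discrepancy is computable in 1D: for sorted points, the star discrepancy (the sup taken over anchored intervals $[0, a)$, a common computable variant of the $D_N$ above) has a closed form, used in this sketch (the point sets are my choices):

```python
import numpy as np

def star_discrepancy_1d(x):
    """Closed form max_i max(i/N - x_(i), x_(i) - (i-1)/N) for sorted 1D points."""
    x = np.sort(np.asarray(x))
    N = x.size
    i = np.arange(1, N + 1)
    return np.maximum(i / N - x, x - (i - 1) / N).max()

rng = np.random.default_rng(5)
N = 128
print(f"pseudo-random points: {star_discrepancy_1d(rng.uniform(size=N)):.4f}")
print(f"regular midpoints   : {star_discrepancy_1d((np.arange(N) + 0.5) / N):.4f}")
```

The "well"-spaced midpoints achieve $1/(2N)$, while pseudo-random points land an order of magnitude higher.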
Low discrepancy sequences
Low discrepancy sequences

Basic idea
• in $|I - I_f| \le V(f)\, D_N$, assume that $V(f)$ is constant
• idea: minimize the error by reducing $D_N$, i.e.
• produce $\{x_i\}_{i=1}^{N}$ that are "well" spaced
• in this way: $\mathcal{O}(1/\sqrt{N}) \to \mathcal{O}(\log(N)^d / N)$, where $d$ is the dimension
Low discrepancy sequences example: Halton sequences
Halton sequences
• start with a prime number $p$
• construct a sequence based on finer and finer $p$-based divisions of sub-intervals of $[0,1]$
• e.g. let $p = 3$
− break $[0,1]$ into 3 equal subintervals
− break each sub-interval into 3 equal subintervals
− now, the sequence is 1/3, 2/3, 1/9, 4/9, 7/9, 2/9, 5/9, 8/9
− repeat until the desired length is reached
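A sketch of this construction via the radical inverse (digit-reverse the index in base $p$ and mirror it into $[0,1]$); for $p = 3$ it reproduces the sequence above:

```python
def radical_inverse(i, p):
    """Van der Corput radical inverse of index i in base p."""
    x, scale = 0.0, 1.0 / p
    while i > 0:
        i, digit = divmod(i, p)     # peel off the next base-p digit
        x += digit * scale
        scale /= p
    return x

# base 3: 1/3, 2/3, 1/9, 4/9, 7/9, 2/9, 5/9, 8/9, ...
print([round(radical_inverse(i, 3), 4) for i in range(1, 9)])

# a d-dimensional Halton point uses one prime per coordinate, e.g. (2, 3) in 2D,
# which generates point sets like the 100-point example on the next slide
halton_2d = [(radical_inverse(i, 2), radical_inverse(i, 3)) for i in range(1, 101)]
```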
Halton sequences example
• 2D Halton grid with 100 elements (figure)
Quasi-Monte Carlo sampling: example
Model problem – damped linear oscillator

$$\frac{d^2 y}{dt^2}(t) + c\,\frac{dy}{dt}(t) + k\,y(t) = f \cos(\omega t), \qquad y(0) = y_0, \quad \frac{dy}{dt}(0) = y_1$$

• $t \in [0, 30]$
• $k = 0.035$
• $f = 0.100$
• $\omega = 1.000$
• $y_0 = 0.500$
• $y_1 = 0.000$
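A minimal solver sketch for this model problem, written as a first-order system for scipy (the nominal damping $c = 0.1$, the midpoint of the range used on the next slide, is my assumption; the slide does not list $c$):

```python
import numpy as np
from scipy.integrate import solve_ivp

k, f_amp, omega, y0, y1 = 0.035, 0.100, 1.000, 0.500, 0.000

def oscillator(t, state, c):
    y, dy = state
    return [dy, f_amp * np.cos(omega * t) - c * dy - k * y]

def y_at(t0, c):
    """Integrate on [0, 30] and evaluate y at t0."""
    sol = solve_ivp(oscillator, (0.0, 30.0), [y0, y1], args=(c,),
                    dense_output=True, rtol=1e-8, atol=1e-8)
    return sol.sol(t0)[0]

print(f"y(15) = {y_at(15.0, 0.1):.4f}")   # the slides report y(15) = -1.51e-01
```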
Quasi-Monte Carlo – example
• $t_0 = 15$

Deterministic result
• $y(t_0) = -1.51 \cdot 10^{-1}$

Stochastic results
• assume $c \sim \mathcal{U}(0.08, 0.12)$
• 100 samples, standard Monte Carlo → $\mathbb{E}[y(t_0)] = -1.61 \cdot 10^{-1}$, $\mathrm{Var}[y(t_0)] = 6.51 \cdot 10^{-4}$
• 100 samples, QMC, Halton sequences → $\mathbb{E}[y(t_0)] = -1.53 \cdot 10^{-1}$, $\mathrm{Var}[y(t_0)] = 7.78 \cdot 10^{-4}$
• 1000 samples, standard Monte Carlo → $\mathbb{E}[y(t_0)] = -1.52 \cdot 10^{-1}$, $\mathrm{Var}[y(t_0)] = 7.30 \cdot 10^{-4}$
• 1000 samples, QMC, Halton sequences → $\mathbb{E}[y(t_0)] = -1.52 \cdot 10^{-1}$, $\mathrm{Var}[y(t_0)] = 7.81 \cdot 10^{-4}$
• 10000 samples, standard Monte Carlo → $\mathbb{E}[y(t_0)] = -1.52 \cdot 10^{-1}$, $\mathrm{Var}[y(t_0)] = 7.84 \cdot 10^{-4}$
• 10000 samples, QMC, Halton sequences → $\mathbb{E}[y(t_0)] = -1.52 \cdot 10^{-1}$, $\mathrm{Var}[y(t_0)] = 7.80 \cdot 10^{-4}$
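A sketch of the experiment behind this table, reusing the hypothetical `y_at` and `radical_inverse` helpers from the sketches above: the Halton points live on $[0,1]$ and are mapped affinely to $[0.08, 0.12]$, illustrating the transformation remark from the QMC slide.

```python
import numpy as np

rng = np.random.default_rng(6)
a, b, t0, N = 0.08, 0.12, 15.0, 100

c_mc = rng.uniform(a, b, size=N)                       # pseudo-random damping values
c_qmc = np.array([a + (b - a) * radical_inverse(i, 2)  # Halton points mapped
                  for i in range(1, N + 1)])           # from [0,1] to [a,b]

for name, cs in [("standard MC", c_mc), ("QMC, Halton", c_qmc)]:
    ys = np.array([y_at(t0, c) for c in cs])
    print(f"{name}: E[y(t0)] = {ys.mean():.2e}, Var[y(t0)] = {ys.var(ddof=1):.2e}")
```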
Summary
• the accuracy of standard Monte Carlo can be improved via
− optimizing your code
− increasing the number of samples
− decreasing the variance of the estimators
− changing the sampling technique
• variance reduction techniques
− antithetic sampling
− importance sampling
− stratified sampling
− control variates
• alternative random number generation techniques
− Fibonacci generators
− Latin hypercube sampling
− Sobol' sequences
− Halton sequences
• example of low-discrepancy sequences: Halton sequences