Advanced Computer Graphics (Spring 2005) COMS 4162, Lectures 18, 19: Monte Carlo Integration Ravi...


Advanced Computer Graphics (Spring 2005)

COMS 4162, Lectures 18, 19: Monte Carlo Integration

Ravi Ramamoorthi

http://www.cs.columbia.edu/~cs4162

Acknowledgements and many slides courtesy: Thomas Funkhouser, Szymon Rusinkiewicz and Pat Hanrahan

To Do

Continue working on the raytracer assignment (assignment 3)

Start thinking about the final project (assignment 4)

Motivation

Rendering = integration
• Reflectance equation: integrate over incident illumination
• Rendering equation: an integral equation

Many sophisticated shading effects involve integrals:
• Antialiasing
• Soft shadows
• Indirect illumination
• Caustics

Example: Soft Shadows

Monte Carlo

Algorithms based on statistical sampling and random numbers

Coined in the early 1940s; originally used for neutron transport and nuclear simulations (von Neumann, Ulam, Metropolis, …)

Canonical example: a 1D integral done numerically. Choose a set of random points at which to evaluate the function, then average (expectation or statistical average).
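As a sketch of this canonical example (my own illustration, not code from the lecture), the following Python estimates a 1D integral over [0, 1] by averaging the integrand at uniformly random sample points:

```python
import random

def mc_integrate(f, n):
    """Estimate the integral of f over [0, 1] by averaging
    f at n uniformly random sample points."""
    return sum(f(random.uniform(0.0, 1.0)) for _ in range(n)) / n

# Example: the integral of x^2 over [0, 1] is exactly 1/3,
# so the estimate should be close to 0.333 for large n.
estimate = mc_integrate(lambda x: x * x, 100_000)
```

The estimate fluctuates from run to run; how fast that noise shrinks with n is exactly the variance question discussed later.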

Monte Carlo Algorithms

Advantages:
• Robust for complex integrals in computer graphics (irregular domains, shadow discontinuities, and so on)
• Efficient for high-dimensional integrals (common in graphics: time, light source directions, and so on)
• Quite simple to implement
• Work for general scenes, surfaces
• Easy to reason about (though care must be taken regarding statistical bias)

Disadvantages:
• Noisy
• Slow (many samples needed for convergence)
• Not used if alternative analytic approaches exist (but those are rare)

Outline

Motivation

Overview, 1D integration

Basic probability and sampling

Monte Carlo estimation of integrals

Integration in 1D

[Figure: f(x) on the domain x ∈ [0, 1]]

$\int_0^1 f(x)\,dx = ?$

Slide courtesy of Peter Shirley

We can approximate

[Figure: f(x) approximated by a simpler function g(x) on [0, 1]]

$\int_0^1 f(x)\,dx \approx \int_0^1 g(x)\,dx$

Slide courtesy of Peter Shirley

Standard integration methods like the trapezoidal rule and Simpson's rule

Advantages:
• Converges fast for smooth integrands
• Deterministic

Disadvantages:
• Exponential complexity in many dimensions
• Slow convergence for discontinuities

Or we can average

[Figure: f(x) and its expected value E(f(x)) on [0, 1]]

$\int_0^1 f(x)\,dx = E(f(x))$

Slide courtesy of Peter Shirley

Estimating the average

[Figure: samples x₁ … x_N of f(x), scattered around E(f(x))]

$\int_0^1 f(x)\,dx \approx \frac{1}{N}\sum_{i=1}^{N} f(x_i)$

Slide courtesy of Peter Shirley

Monte Carlo methods (randomly chosen samples)

Advantages:
• Robust for discontinuities
• Converges reasonably in high dimensions
• Can handle complex geometry and integrals
• Relatively simple to implement and reason about

Other Domains

[Figure: f(x) and its average ⟨f⟩ over the domain [a, b]]

$\int_a^b f(x)\,dx \approx \frac{b-a}{N}\sum_{i=1}^{N} f(x_i)$

Slide courtesy of Peter Shirley

Multidimensional Domains

Same ideas apply for integration over …
• Pixel areas
• Surfaces
• Projected areas
• Directions
• Camera apertures
• Time
• Paths

$\int_{\text{UGLY}} f(x)\,dx \approx \frac{1}{N}\sum_{i=1}^{N} f(x_i)$
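The same estimator carries over unchanged to higher dimensions; only the sample points gain coordinates. A minimal sketch (my own example over the unit square, not from the lecture):

```python
import random

def mc_integrate_2d(f, n):
    """Estimate the integral of f over the unit square [0,1]^2:
    the same 1/N average of samples, just with 2D sample points."""
    total = 0.0
    for _ in range(n):
        x, y = random.random(), random.random()
        total += f(x, y)
    return total / n

# Example: the integral of x*y over the unit square is exactly 1/4.
estimate = mc_integrate_2d(lambda x, y: x * y, 100_000)
```

Nothing about the method is exponential in dimension, which is why it wins over quadrature rules for pixel areas, apertures, time, and paths.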

[Figure: eye ray through a pixel to a point x on a surface]

Outline

Motivation

Overview, 1D integration

Basic probability and sampling

Monte Carlo estimation of integrals

Random Variables

Describes the possible outcomes of an experiment

In the discrete case, e.g. the value of a die roll (x = 1, …, 6)

A probability p is associated with each x (1/6 for a die)

The continuous case is the obvious extension (a probability density over x)

Expected Value

Expectation:

Discrete: $E(x) = \sum_{i=1}^{n} p_i\,x_i$

Continuous: $E(f(x)) = \int p(x)\,f(x)\,dx$

For the dice example: $E(x) = \sum_{i=1}^{6} \tfrac{1}{6}\,x_i = \tfrac{1}{6}(1+2+3+4+5+6) = 3.5$
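The discrete expectation formula can be checked directly in a few lines of Python (my own illustration, using exact rational arithmetic):

```python
from fractions import Fraction

# Discrete expectation E(x) = sum_i p_i * x_i for a fair die:
# each face 1..6 has probability 1/6.
faces = range(1, 7)
p = Fraction(1, 6)
expected = sum(p * x for x in faces)  # 7/2 = 3.5
```

Using `Fraction` keeps the result exact (7/2) rather than a float.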

Sampling Techniques

Problem: how do we generate random points/directions during path tracing?
• Non-rectilinear domains
• Importance (BRDF)
• Stratified


Generating Random Points

Uniform distribution: use a random number generator

[Figure: constant probability density on [0, 1]]

Generating Random Points

Specific probability distribution:
• Function inversion
• Rejection
• Metropolis

[Figure: nonuniform probability density on [0, 1]]

Common Operations

Want to sample probability distributions
• Draw samples distributed according to the probability
• Useful for integration, picking important regions, etc.

Common distributions:
• Disk or circle
• Uniform
• Upper hemisphere for visibility
• Area luminaire
• Complex lighting like an environment map
• Complex reflectance like a BRDF

Generating Random Points

[Figure: cumulative probability (CDF) on [0, 1]]
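A minimal sketch of the function-inversion method, using p(x) = 2x on [0, 1] as my own example density (not one from the lecture): its CDF is P(x) = x², so inverting the CDF at a uniform random u gives x = √u.

```python
import math
import random

def sample_linear_density():
    """Function inversion: to sample p(x) = 2x on [0, 1],
    invert its CDF P(x) = x^2 at a uniform random u,
    giving x = sqrt(u)."""
    u = random.random()
    return math.sqrt(u)

# Samples drawn this way should have mean
# E(x) = integral of x * 2x dx over [0, 1] = 2/3.
```

This works for any density whose CDF can be inverted in closed form, which is why it is the method of choice when available.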

Rejection Sampling

[Figure: candidate samples (×) scattered uniformly over the density's bounding box; those under the curve are accepted, the rest rejected]
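The figure's accept/reject loop can be sketched as follows (my own example; the density p(x) = 2x with bound 2 is an assumption for illustration):

```python
import random

def rejection_sample(pdf, pdf_max):
    """Rejection sampling on [0, 1]: draw (x, y) uniformly in the
    bounding box [0, 1] x [0, pdf_max]; accept x when y falls
    under the density curve, otherwise try again."""
    while True:
        x = random.random()
        y = random.uniform(0.0, pdf_max)
        if y < pdf(x):
            return x

# Example: sampling p(x) = 2x (maximum value 2); accepted
# samples have mean 2/3, matching the inversion method.
```

Rejection needs no invertible CDF, but wastes work: the acceptance rate is the area under the curve divided by the box area.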

Outline

Motivation

Overview, 1D integration

Basic probability and sampling

Monte Carlo estimation of integrals

Monte Carlo Path Tracing

Big diffuse light source, 20 minutes

Image: Jensen

Motivation for rendering in graphics: covered in detail in the next lecture

Monte Carlo Path Tracing

1000 paths/pixel

Image: Jensen


More formally

Variance

[Figure: samples x₁ … x_N scattered around E(f(x))]

$\mathrm{Var}(f(x)) = \frac{1}{N}\sum_{i=1}^{N}\left[f(x_i) - E(f(x))\right]^2$

Variance for Dice Example?

Work out on the board (variance for a single die roll): $\mathrm{Var}(x) = E(x^2) - E(x)^2 = \tfrac{91}{6} - 3.5^2 = \tfrac{35}{12} \approx 2.92$

Variance

[Figure: samples x₁ … x_N scattered around E(f(x))]

$\mathrm{Var}\!\left(\frac{1}{N}\sum_{i=1}^{N} f(x_i)\right) = \frac{1}{N}\,\mathrm{Var}(f(x))$

Variance decreases as 1/N; error decreases as 1/√N.
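This 1/√N behavior is easy to observe empirically. A sketch (my own experiment, not from the lecture) that measures the RMS error of the estimator over repeated trials:

```python
import random
import statistics

def mc_estimate(f, n):
    """One N-sample Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(random.random()) for _ in range(n)) / n

def rms_error(f, exact, n, trials=200):
    """Root-mean-square error of the N-sample estimator over
    repeated trials; should shrink roughly as 1/sqrt(N)."""
    errs = [(mc_estimate(f, n) - exact) ** 2 for _ in range(trials)]
    return statistics.mean(errs) ** 0.5

# For f(x) = x^2 (exact integral 1/3), multiplying N by 16
# should cut the RMS error by roughly a factor of 4.
```

This is the "slow convergence" disadvantage in concrete form: each extra digit of accuracy costs 100× more samples.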

Variance

Problem: variance decreases only as 1/N, so increasing the number of samples removes noise slowly.


Variance Reduction Techniques

• Importance sampling

• Stratified sampling

$\int_0^1 f(x)\,dx \approx \frac{1}{N}\sum_{i=1}^{N} f(x_i)$

Importance Sampling

Put more samples where f(x) is bigger

$\int f(x)\,dx \approx \frac{1}{N}\sum_{i=1}^{N} Y_i, \qquad Y_i = \frac{f(x_i)}{p(x_i)}$

[Figure: samples x₁ … x_N concentrated where f(x) is large, around E(f(x))]
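A sketch of this estimator (my own example, reusing f(x) = x² and the density p(x) = 2x, which roughly follows f and is sampled by CDF inversion):

```python
import math
import random

def importance_estimate(n):
    """Estimate the integral of f(x) = x^2 over [0, 1] with samples
    drawn from p(x) = 2x (sampled by inverting its CDF: x = sqrt(u)).
    Each sample contributes Y_i = f(x_i) / p(x_i)."""
    total = 0.0
    for _ in range(n):
        x = math.sqrt(random.random())  # x ~ p(x) = 2x
        total += (x * x) / (2.0 * x)    # Y_i = f(x)/p(x) = x/2
    return total / n

# Still unbiased: E(Y) = integral of f = 1/3, but with lower
# variance than uniform sampling because p follows the shape of f.
```

With this p, Var(Y) = 1/72 versus 4/45 for uniform sampling, about a 6× variance reduction for the same sample count.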

Importance Sampling

This is still unbiased

xx11 xxNN

E(f(x))E(f(x))

dxxf

dxxpxp

xf

dxxpxYYE i

)(

)()(

)(

)()(

dxxf

dxxpxp

xf

dxxpxYYE i

)(

)()(

)(

)()(

for all Nfor all N

Importance Sampling

Zero variance if p(x) ∝ f(x)

[Figure: less variance with better importance sampling]

$p(x) = c\,f(x) \;\Rightarrow\; Y_i = \frac{f(x_i)}{p(x_i)} = \frac{1}{c}, \qquad \mathrm{Var}(Y) = 0$

Stratified Sampling

Estimate subdomains separately

[Figure: domain split into strata, each with its own mean E_k(f(x))] (Arvo)

Stratified Sampling

This is still unbiased

$F_N = \frac{1}{N}\sum_{i=1}^{N} f(x_i) = \sum_{k=1}^{M} \frac{N_k}{N}\,F_{N_k}$

[Figure: per-stratum estimates E_k(f(x))]

Stratified Sampling

Less overall variance if less variance in subdomains

$\mathrm{Var}(F_N) = \frac{1}{N^2}\sum_{k=1}^{M} N_k\,\mathrm{Var}(F_{N_k})$

[Figure: per-stratum estimates E_k(f(x))]
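A sketch of stratification on [0, 1] with M equal strata and equal per-stratum sample counts (my own example, not code from the lecture):

```python
import random

def stratified_estimate(f, n_strata, per_stratum):
    """Stratified sampling on [0, 1]: split the domain into M equal
    strata, estimate each stratum separately with uniform samples,
    and combine the per-stratum means with weights N_k / N
    (all equal to 1/M here)."""
    total = 0.0
    for k in range(n_strata):
        lo = k / n_strata
        hi = (k + 1) / n_strata
        s = sum(f(random.uniform(lo, hi)) for _ in range(per_stratum))
        total += (s / per_stratum) / n_strata  # weight N_k/N = 1/M
    return total

# Still unbiased; variance is lower when f varies less within each
# stratum than it does over the whole domain (true for smooth f).
```

For a smooth integrand the within-stratum variance shrinks with the stratum width, so stratification beats plain uniform sampling at equal total sample count.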

More Information

Book chapter on Monte Carlo from Advanced Global Illumination (handed out)

Veach PhD thesis chapter (linked to from website)

Course notes (links from website):
• Mathematical Models for Computer Graphics, Stanford, Fall 1997
• State of the Art in Monte Carlo Methods for Realistic Image Synthesis, Course 29, SIGGRAPH 2001