Perception-Based Global Illumination, Rendering and Animation Techniques

Page 1

Perception-Based Global Illumination, Rendering and Animation Techniques

Karol Myszkowski, Max-Planck-Institut für Informatik


Page 2

Outline

• Perceptually based Animation Quality Metric
• AQM applications
  – IBR techniques in walkthrough animation
  – Global illumination for dynamic environments
• Visual attention driven interactive rendering
• The atrium model

Page 3

Animation Quality Metric

Page 4

Questions of Appearance Preservation

The concern is not whether images are the same; rather, the concern is whether images appear the same.

How much computation is enough? How much reduction is too much?

An objective metric of image quality which takes into account basic characteristics of human perception could help to answer these questions without human assistance.

Page 5

Motivation

• In the traditional approach to rendering high-quality animation sequences, every frame is considered separately. This precludes accounting for visual sensitivity to temporal detail.
• Our goal is to improve the performance of walkthrough sequence rendering by considering both the spatial and temporal aspects of human perception.
• We want to focus computational effort on those image details that can be readily perceived in the animated sequence.

Page 6

Modeling important characteristics of the Human Visual System

• Contrast Sensitivity Function (CSF), which specifies the detection threshold for a stimulus as a function of its spatial and temporal frequencies.
• Temporal and spatial mechanisms (channels), which represent visual information at various scales and orientations, as the primary visual cortex is believed to do.
• Visual masking, which affects the detection threshold of a stimulus as a function of an interfering background stimulus closely coupled in space and time.

Page 7

Spatiovelocity Contrast Sensitivity Function

• Contrast sensitivity data for traveling gratings of various spatial frequencies were derived in Kelly's psychophysical experiments (1960).
• Daly (1998) extended Kelly's model to account for target tracking by eye movements.

[Figure: log visual sensitivity plotted over log spatial frequency (cycles/deg) and log velocity (deg/sec); the corresponding temporal frequency (Hz) is also indicated.]
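The shape of this surface can be sketched in code. The closed form and constants below follow commonly quoted versions of Daly's 1998 spatiovelocity CSF, including his smooth-pursuit eye-movement compensation; treat the exact values as assumptions, not necessarily the ones used in this work.

```python
import math

def retinal_velocity(v_image, v_min=0.15, v_max=80.0):
    # Daly-style eye-movement compensation: the eye tracks targets
    # imperfectly (~82% efficiency), drifts at ~0.15 deg/s, and cannot
    # pursue faster than ~80 deg/s; the residual is the retinal velocity.
    v_eye = min(0.82 * v_image + v_min, v_max)
    return abs(v_image - v_eye)

def sv_csf(rho, v, c0=1.14, c1=0.67, c2=1.7):
    """Spatiovelocity contrast sensitivity for spatial frequency
    rho [cycles/deg] and retinal velocity v [deg/s]."""
    if v <= 0.0:
        v = 0.15  # the eye always drifts a little, so v never hits zero
    k = 6.1 + 7.3 * abs(math.log10(c2 * v / 3.0)) ** 3
    rho_max = 45.9 / (c2 * v + 2.0)  # peak frequency falls with velocity
    return (k * c0 * c2 * v * (c1 * 2.0 * math.pi * rho) ** 2
            * math.exp(-c1 * 4.0 * math.pi * rho / rho_max))
```

Sensitivity peaks at mid spatial frequencies and falls off steeply at high ones, and the fall-off shifts toward lower frequencies as velocity grows, which is exactly the behavior a metric exploits when relaxing quality for fast-moving patterns.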

Page 8

Spatial and orientation mechanisms

The following filter banks are commonly used:

• Gabor functions (Marcelja80),
• Steerable pyramid transform (Simoncelli92),
• Discrete Cosine Transform (DCT),
• Difference of Gaussians (Laplacian) pyramids (Burt83, Wilson91),
• Cortex transform (Watson87, Daly93).
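To make the Difference-of-Gaussians idea concrete, here is a minimal Laplacian-pyramid decomposition. It uses a simple 1-2-1 binomial blur as an illustrative stand-in for the band-pass filters of any particular metric.

```python
import numpy as np

def blur(img):
    """Separable 1-2-1 binomial blur with reflect padding."""
    k = np.array([0.25, 0.5, 0.25])
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):   # blur rows
        out[i] = np.convolve(np.pad(img[i], 1, mode="reflect"), k, mode="valid")
    for j in range(img.shape[1]):   # blur columns
        out[:, j] = np.convolve(np.pad(out[:, j], 1, mode="reflect"), k, mode="valid")
    return out

def laplacian_pyramid(img, levels=3):
    """Decompose img into band-pass detail images plus a low-pass residual."""
    bands, cur = [], np.asarray(img, dtype=float)
    for _ in range(levels):
        low = blur(cur)
        bands.append(cur - low)    # detail removed by blurring = one DoG band
        cur = low[::2, ::2]        # drop to the next octave
    bands.append(cur)              # low-pass residual
    return bands
```

Each band isolates structure at one scale, which is what lets a vision model apply scale-dependent sensitivity and masking per channel.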

Page 9

Cortex transform: organization of the filter bank

Page 10

Cortex transform: orientation bands

Input image

Page 11

Visual masking

• Masking is strongest between stimuli located in the same perceptual channel, and many vision models are limited to this intra-channel masking.
• The following threshold elevation model is commonly used:
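The equation itself did not survive transcription. A commonly used functional form for threshold elevation looks like the sketch below; the constants k1, k2, s, and b are illustrative placeholders, not the values of any particular metric.

```python
def threshold_elevation(m, k1=1.0, k2=1.0, s=0.7, b=4.0):
    """Elevation of the detection threshold as a function of normalized
    masker contrast m. Approaches 1 for weak maskers (no elevation) and
    follows a log-log slope of s for strong maskers; b controls how
    sharply the two regimes blend."""
    return (1.0 + (k1 * (k2 * m) ** s) ** b) ** (1.0 / b)
```

In a metric, the per-channel detection threshold is multiplied by this factor, so strong background structure in a channel hides small differences in the same channel.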

Page 12

Experimental findings on the human perception of animated sequences

• The requirements imposed on the quality of still images must be higher than for images used in an animated sequence. The quality requirements can usually be relaxed as the velocity of the visual pattern increases.
• The perceived sharpness of blurred visual patterns increases with their motion velocity, which is attributed to higher-level processing in the visual system.
• The human eye is less sensitive to higher spatial frequencies than to lower frequencies.

Page 13

Video quality metrics

• Virtually all state-of-the-art perception-based video quality metrics account for the discussed HVS characteristics.
• A majority of the existing video quality metrics have been developed to evaluate the performance of digital video coding and compression techniques, e.g., Lambrecht (1996), Lubin (1997), and Watson (1998).
• The lack of comparative studies makes it unclear which metric performs best.
• We use our own metric that takes advantage of data readily available for synthetic images.

Page 14

Deriving pixel flow using Image-Based Rendering techniques

Page 15

Animation Quality Metric (AQM)

• A perception-based visible differences predictor for still images (Eriksson et al., 1998) was extended.
• Pixel flow derived via 3D warping provides velocity data as required by Kelly's SV-CSF model.

[Block diagram of the AQM: Image 1 and Image 2 each pass through Amplitude Compression, the Spatio-velocity CSF, Global Contrast, Cortex Filtering, and Visual Masking; the two channel responses are combined and error pooling produces the perceptual difference. Camera and range data drive a 3D Warp whose Pixel Flow feeds the Spatio-velocity CSF.]
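The pixel-flow stage can be sketched for a pinhole camera: each pixel of frame 1 is back-projected through the range data into 3D and reprojected into frame 2, and the displacement (divided by the frame time) gives the velocity field the SV-CSF needs. The world-from-camera pose convention below is an assumption for illustration.

```python
import numpy as np

def pixel_flow(depth, K, pose1, pose2):
    """Image-space displacement of every pixel between two camera poses.

    depth        : (H, W) range data of frame 1
    K            : (3, 3) pinhole intrinsics
    pose1, pose2 : (4, 4) world-from-camera matrices (assumed convention)
    returns      : (H, W, 2) pixel displacements; divide by the frame
                   time to obtain the velocity field for the SV-CSF.
    """
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(float)
    # Back-project frame-1 pixels through the depth map into world space.
    cam1 = (np.linalg.inv(K) @ pix.reshape(-1, 3).T) * depth.reshape(1, -1)
    world = pose1 @ np.vstack([cam1, np.ones((1, cam1.shape[1]))])
    # Reproject the 3D points into frame 2 (the "3D warp").
    cam2 = np.linalg.inv(pose2) @ world
    proj = K @ cam2[:3]
    uv2 = (proj[:2] / proj[2]).T.reshape(H, W, 2)
    return uv2 - pix[..., :2]
```

With identical poses the flow is zero everywhere; translating the camera sideways produces a uniform flow inversely proportional to depth, which is the parallax the 3D warp exploits.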

Page 16

Using IBR techniques to improve the performance of animation rendering

Assumptions:

- static environments
- predefined animation path

Joint work with P. Rokita and T. Tawara

Page 17

Animation rendering - objectives

• Use ray tracing to compute all key frames and selected glossy and transparent objects.
• For in-between frames, derive as many pixels as possible using computationally inexpensive Image-Based Rendering techniques.
• The animation quality as perceived by the human observer must not be affected.

Page 18

Keyframe placement - our approach

• Our goal is to find an inexpensive and automatic solution which reduces animation artifacts that can be perceived by the human observer.
• Our solution consists of two stages:
  – initial keyframe placement, which reduces the number of pixels that cannot be properly derived using IBR techniques due to occlusion problems,
  – further refinement of keyframe placement, which takes perceptual considerations into account and is guided by AQM predictions.
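The AQM-guided refinement stage can be sketched as a recursive bisection over the animation segment. Here `diff(a, b)` is a hypothetical hook returning the AQM-predicted difference for a derived frame midway between keyframes a and b.

```python
def place_keyframes(n_frames, diff, max_diff=1.0):
    """Recursive keyframe refinement: if the AQM-predicted difference for
    the frame halfway between two keyframes exceeds max_diff, promote
    that frame to a keyframe and recurse on both halves."""
    def refine(a, b):
        if b - a < 2:
            return []          # no in-between frames left to check
        if diff(a, b) <= max_diff:
            return []          # IBR-derived frames are good enough here
        mid = (a + b) // 2
        return refine(a, mid) + [mid] + refine(mid, b)
    return [0] + refine(0, n_frames - 1) + [n_frames - 1]
```

Longer segments tend to warp worse, so the recursion naturally places keyframes densely where occlusions or perceptual error are high and sparsely elsewhere.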

Page 19

In-between frame generation

Page 20

Examples of final frames: a supersampled frame used in traditional animations, and the corresponding frame derived using our approach.

In both cases the perceived quality of animation appears to be similar!

Page 21

Exploiting spatial and temporal coherence of indirect lighting in high-quality animation rendering

Assumptions:

- dynamic environments
- predefined animation path

Joint work with T. Tawara, H. Akamine and H.-P. Seidel

Page 22

Indirect lighting in animated sequences

Basic characteristics:

• Usually quite costly to compute.
• Usually changes slowly and smoothly in both the temporal and spatial domains.

Practical approaches: compute indirect lighting for every n-th frame and assume that it does not change for in-between frames. In some global illumination frameworks interpolation between a pair of keyframes is possible.

Page 23

Possible problems

• Popping effects.
• Improper image appearance in regions illuminated mostly by indirect lighting.

Page 24

Our framework

• Density Estimation Particle Tracing algorithm.
• Illumination reconstruction using the histogram density estimation method (photon bucketing into a dense mesh).
• Photon storage for possible re-use in neighboring frames.
• Photons are computed for each frame and "attached" to mesh elements, even for moving objects.
• Direct lighting computed from scratch for each frame using ray tracing.

Page 25

Temporal photon processing

Contradictory requirements:

• maximize the number of photons collected in the temporal domain, to reduce the noise which is inherent in stochastic solutions;
• minimize the number of neighboring frames for which those photons were traced, to avoid collecting invalid photons.

Page 26

Temporal photon processing: our solution

• An energy-based stochastic error metric is used to guide photon collection in the temporal domain.
  – The metric is applied to each mesh element and to every frame.
  – Thus, it must be inexpensive.
• The perception-based AQM is used for finding the minimal number of photons per frame that reduces the noise level below the visibility threshold.

Page 27

Algorithm

1. Initialization: determine the initial number of photons per frame.
2. Adjust the animation segment length depending on temporal variations of indirect lighting, which are measured using energy-based criteria.
3. Adjust the number of photons per frame based on the AQM response to limit the perceivable noise.
4. Spatiotemporal reconstruction of indirect lighting.
5. Spatial filtering step.
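Step 3 can be sketched as a simple doubling search over the photon budget. `render` and `aqm_diff_percent` are hypothetical hooks standing in for the particle-tracing pass and the AQM; the doubling schedule is an illustrative choice, not necessarily the one used in this work.

```python
def tune_photons_per_frame(render, aqm_diff_percent, n_init=10_000,
                           max_diff=1.0, n_max=1_000_000):
    """Double the photon budget until the AQM-predicted percentage of
    pixels with perceivable differences drops below max_diff."""
    n = n_init
    while n <= n_max:
        frame = render(n)                      # trace n photons, reconstruct
        if aqm_diff_percent(frame) <= max_diff:
            return n                           # noise is below visibility
        n *= 2                                 # otherwise spend more photons
    return n_max
```

Because the AQM call is far cheaper than the photon tracing it controls, a coarse geometric schedule like this keeps the tuning overhead small.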

Page 28

[Figure: the reference solution, the adaptive photon collection result, and the AQM-predicted differences.]

Page 29

The AQM predictions

Page 30

Distribution of mesh elements for frame K as a function of the number of preceding (negative values) and following frames for which temporal photon processing was possible. The maximum assumed temporal expansion is specified in the legend.

Page 31

The AQM-predicted percentage of pixels with perceivable differences as a function of the number of photons per frame.

Page 32

The number of photons per frame: 10,000 vs. 25,000, each with temporal processing ON and OFF.

Page 33

Visual Attention Driven Progressive Rendering for Interactive Walkthroughs

Joint work with J. Haber, H. Yamauchi and H.-P. Seidel

Page 34

Rendering glossy sufaces in interactive applications

• The Lambertian lighting component is stored in illumination maps:

– High-quality radiosity solutions.

– Mesh and textures used to reconstruct illumination.

• The quality of graphics-hardware-supported solutions is too low.

• Ray tracing is too costly to perform for all pixels representing glossy surfaces.
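The split described above, a cheap stored Lambertian base plus costly ray-traced glossy corrections only where they are needed, can be sketched as follows. This is a minimal illustration, not the authors' implementation; `trace_glossy` is a hypothetical stand-in for a real ray tracer.

```python
import numpy as np

def shade_frame(illum_map, glossy_mask, trace_glossy):
    """Combine a precomputed diffuse illumination map with ray-traced
    glossy corrections, tracing rays only where glossy_mask is set."""
    frame = illum_map.copy()              # cheap Lambertian base from the map
    ys, xs = np.nonzero(glossy_mask)      # pixels flagged as glossy
    for y, x in zip(ys, xs):
        frame[y, x] += trace_glossy(y, x) # costly per-pixel correction
    return frame

# Toy usage: a 4x4 frame with one glossy pixel.
illum = np.full((4, 4), 0.2)
mask = np.zeros((4, 4), dtype=bool)
mask[1, 2] = True
out = shade_frame(illum, mask, lambda y, x: 0.5)
```

The point of the mask is that the expensive call runs for a small fraction of the frame, while every other pixel is a plain map lookup.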

Page 35:

How to obtain the best image quality as perceived by human observers?

• Use visual attention models to drive corrective computations for glossy objects that are likely to be “attended”:

– Consider both saliency- and task-driven selection of those objects.

– Shading artifacts of “unattended” objects are likely to remain unnoticed.

• Use a progressive rendering approach:

– Hierarchical sample splatting in image space.

– Cache samples and re-use them for similar views.

• Use multiple processors to increase the number of samples.
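The attention-driven budgeting above can be sketched as follows: a combined attention map decides how many corrective ray-tracing samples each pixel receives per frame. The multiplicative combination of the saliency and task maps and the flooring rule are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np

def allocate_samples(saliency, task_weight, budget):
    """Distribute a fixed per-frame ray-tracing budget across pixels
    in proportion to combined bottom-up/top-down attention."""
    attention = saliency * task_weight      # combine the two cues
    total = attention.sum()
    if total == 0:                          # nothing salient: spread evenly
        return np.full(saliency.shape, budget // saliency.size)
    return np.floor(budget * attention / total).astype(int)

# Toy usage: 400 samples over a 2x2 "image" with uniform task weight.
sal = np.array([[0.0, 1.0], [1.0, 2.0]])
counts = allocate_samples(sal, np.ones((2, 2)), 400)
```

Unattended pixels receive no corrective samples at all, which is exactly where shading artifacts are expected to go unnoticed.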

Page 36:

Visual attention model

• Bottom-up processing is purely saliency-driven and follows the attention model developed by Itti et al. (1998).

• Top-down processing is added to account for volition-controlled and task-dependent attention.
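Bottom-up saliency in Itti et al.'s model is built from center-surround differences of feature maps at multiple scales. A minimal single-channel, single-scale-pair sketch is given below; box filters stand in for the Gaussian pyramid of the original model, and the radii `c` and `s` are illustrative choices.

```python
import numpy as np

def center_surround(intensity, c=1, s=3):
    """One Itti-style center-surround contrast map: absolute difference
    between a fine (center) and a coarse (surround) blurred image."""
    def box_blur(img, r):
        pad = np.pad(img, r, mode="edge")
        out = np.zeros_like(img, dtype=float)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += pad[r + dy : r + dy + img.shape[0],
                           r + dx : r + dx + img.shape[1]]
        return out / (2 * r + 1) ** 2
    return np.abs(box_blur(intensity, c) - box_blur(intensity, s))

# Toy usage: a flat image yields zero contrast, a bright spot does not.
flat = np.ones((8, 8))
spot = np.zeros((9, 9))
spot[4, 4] = 1.0
```

In the full model, several such maps (intensity, color, orientation, over many scale pairs) are normalized and summed into the saliency map; the top-down term then reweights that map by task relevance.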

Page 37:

System overview

Page 38:

Visual attention processing

OpenGL rendering / Corrective splatting / Converged solution: 5 s

Saliency map

Page 39:

√5 sampling scheme by Stamminger and Drettakis

Splat levels: 1-2, 1-3, 1-6

Page 40:

Warped old samples

Adaptive splatting

Zoom in

New samples: levels 1-2 (58 samples), levels 1-3 (188 samples)
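The hierarchical splatting idea, coarse samples drawn with large footprints first, finer levels overwriting them with smaller splats, can be sketched as follows. The radius schedule (halving per level from a base of 8 pixels) is an assumption for illustration, not the scheme used in the system.

```python
import numpy as np

def splat_hierarchy(shape, samples):
    """Hierarchical splatting: each sample (level, y, x, value) covers a
    square footprint that shrinks as the level increases, so fine-level
    samples progressively refine the coarse early image."""
    img = np.zeros(shape)
    for level, y, x, value in sorted(samples):   # coarse levels splat first
        r = max(1, 8 >> level)                   # radius halves per level
        img[max(0, y - r) : y + r + 1,
            max(0, x - r) : x + r + 1] = value
    return img

# Toy usage: a coarse level-1 splat refined by a level-3 splat at the center.
img = splat_hierarchy((16, 16), [(1, 8, 8, 0.3), (3, 8, 8, 0.9)])
```

Because coarse splats are drawn first, a usable image exists after very few samples, and every additional fine-level sample only improves it, which is what makes the approach progressive.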

Page 41:

Dynamic load balance

Onyx3 with 8 processors:

– 6 processors for corrective ray tracing.

– Colors indicate pixels computed by different processors.
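Dynamic load balancing of this kind is commonly implemented as a shared work queue: each processor pulls the next tile of pixels as soon as it finishes the previous one, so cheap and expensive tiles even out automatically. The sketch below is a generic illustration of that pattern, not the authors' code; `trace_tile` is a hypothetical per-tile ray-tracing function.

```python
import queue
import threading

def render_parallel(tiles, trace_tile, n_workers=6):
    """Dynamic load balancing: workers pull tiles from a shared queue,
    so a processor that finishes a cheap tile immediately grabs more work."""
    jobs = queue.Queue()
    for t in tiles:
        jobs.put(t)
    results = {}
    lock = threading.Lock()

    def worker(wid):
        while True:
            try:
                tile = jobs.get_nowait()
            except queue.Empty:
                return                       # queue drained: worker exits
            value = trace_tile(tile)
            with lock:
                results[tile] = (wid, value) # record which worker did it

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Toy usage: 12 tiles shared by 3 workers.
res = render_parallel(range(12), lambda t: t * t, n_workers=3)
```

Recording the worker id per tile corresponds to the color-coded visualization on the slide.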

Page 42:

Timings measured on Onyx3

• Computing the saliency map: less than 0.3 s.

• Aging of samples: 0.01 s; warping and splatting cached samples: 0.02 s (for 50,000 samples).

• OpenGL rendering: 0.05 s for 90,000 triangles.

• Ray-traced samples: 0.05 s for 2,500 samples using 6 processors.

• This makes a frame rate of 8-10 fps possible for scenes composed of fewer than 100,000 triangles.
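A quick budget check on the timings above: summing all steps strictly sequentially gives about 7.7 fps, while assuming the ray-traced corrections overlap with OpenGL rendering on separate processors (an assumption; the slide does not state the schedule, and the saliency map is presumably recomputed only every few frames) gives about 12.5 fps, which brackets the reported 8-10 fps.

```python
# Per-frame costs from the timings above, in seconds.
aging, warp_splat, opengl, rays = 0.01, 0.02, 0.05, 0.05

sequential = aging + warp_splat + opengl + rays      # all steps back to back
overlapped = aging + warp_splat + max(opengl, rays)  # rays overlap OpenGL

fps_seq = 1.0 / sequential   # ~7.7 fps
fps_par = 1.0 / overlapped   # ~12.5 fps
```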

Page 43:

Summary

• We proposed an Animation Quality Metric suitable for estimating the quality of sequences of synthetic images.

• We developed a system for animation rendering featuring perception-based guidance of in-between frame computation, which reduces rendering costs.

• We proposed an animation rendering technique with spatio-temporal photon processing, which makes efficient computation of global illumination for dynamic environments possible.

• We used a visual attention model to drive corrective computations during walkthrough animation in environments with arbitrary reflectance functions.