A Fuzzy Logic Based Approach for Enhancing Depth Perception in Computer Graphics

M.Sc. Thesis by Zeynep Çipiloğlu
Advisor: Asst. Prof. Dr. Tolga Çapın

OUTLINE
Overview
Background
Depth cues
Cue combination
Rendering methods
Approach
Results
Conclusion

Overview
Problem:
Better visualization of 3D
Extra work for content creators

Goal: Original 3D scene → Depth perception enhancement framework → Enhanced scene

Challenges:
Which depth cues?
Which rendering methods?
2 domains: Human Visual Perception & Computer Graphics

Background: Pictorial Depth Cues

Background: Binocular and Oculomotor Depth Cues
Binocular disparity: the two eyes view the scene from slightly different viewpoints

Oculomotor
Accommodation: the amount of distortion in the eye lens to fixate on an object
Convergence: the fixation of the eyes towards a single location to maintain binocular vision

Background: Motion-based Depth Cues
Motion parallax: closer objects move more in the visual field
Motion perspective: similar to motion parallax, but the viewer is stationary
Kinetic depth: rotation around the local axis; shape perception

Background: Cue Combination
Questions:
How does the human visual system unify different depth cues?
How do the depth cues interact with each other?
What affects the strength of depth cues?

Combination models:
Cue averaging [Oruc03]
Cue dominance [Howard08]
Cue specialization [Ware04]
Range extension [Cutting95]
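As a rough illustration of the cue-averaging family of models (a generic weighted sum, not the exact formulation of [Oruc03]), the combined depth estimate can be written as:

```latex
% Illustrative weighted cue combination:
% d_i : depth estimate suggested by cue i, w_i : reliability/priority weight of cue i
\hat{d} \;=\; \sum_{i} w_i\, d_i , \qquad \sum_{i} w_i = 1
```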

Background: Rendering Methods
Occlusion, size gradient, relative height:
Matrix transformations [Swain09]
Perspective projection
Ground plane, room
Reference objects
Lines to ground

Relative brightness, aerial perspective:
Fog
Proximity luminance [Ware04]
Gooch shading [Gooch98]

Background: Rendering Methods
Texture gradient:
Bump mapping
Relief mapping [Oliveira00]

Depth-of-focus:
Depth-of-field [Haeberli90]

(Figure: normal, parallax, and relief mapping compared)

Background: Rendering Methods
Shading, shadow:
Shadow map
Ambient occlusion [Bunnell04]
Gooch shading [Gooch98]
Boundary enhancement [Luft06, Nienhaus05]
Halo effect [Bruckner07]

Background: Rendering Methods
Motion-related cues:
Mouse/keyboard controlled motion
Face tracking [Bulbul10]
Magnetic trackers

Oculomotor and binocular cues:
Multi-view rendering [Dodgson05, Halle05]
Parallax barrier, lenticular, holographic displays
Anaglyph glasses, shutter glasses
HMDs

Background: Rendering Methods
Tarini et al.: molecular visualization; ambient occlusion, edge cueing
Weiskopf and Ertl: color transformations
Swain: shading, brightness, occlusion, depth-of-focus; background, foreground, object of interest

Current solutions are limited:
Domain specific
Not comprehensive

Approach
A system that enhances the depth perception of a given scene
Considers the user's tasks, the spatial layout of the scene, and the costs of the methods
Hybrid system:
Cue specialization
Range extension
Weighted linear combination

Stage 1: Cue Prioritization
Aim: Determining proper depth cues for the given scene
Input: Task, distance, and scene layout
Output: Cue priority values
Methodology: Fuzzy logic-based decision

Why fuzzy logic?
Terms are fuzzy (e.g., strong, weak, effective)
Multi-input (task, scene layout, etc.)
Binary logic is not sufficient
Used to model human control logic and perception [Brackstone00, Russel97]

Fuzzy Logic

1. Fuzzification
2. Inference
3. Defuzzification

Cue Prioritization Architecture

Cue Prioritization: Fuzzification
Task: cue specialization, classification by Ware [Ware04]

Membership function

Cue Prioritization: Fuzzification
Distance: range extension, based on [Cutting95]

Membership function
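A minimal sketch of how a crisp distance could be fuzzified into the three ranges described in [Cutting95] (personal, action, and vista space); the trapezoids and breakpoints below are illustrative assumptions, not the thesis's actual membership functions:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def fuzzify_distance(meters):
    """Membership degrees of a crisp distance in the three spatial ranges."""
    return {
        "personal": trapezoid(meters, -1.0, 0.0, 1.5, 3.0),       # roughly 0-2 m
        "action":   trapezoid(meters, 1.5, 3.0, 20.0, 40.0),      # roughly 2-30 m
        "vista":    trapezoid(meters, 20.0, 40.0, 1e6, 1e6 + 1),  # beyond ~30 m
    }

print(fuzzify_distance(2.0))  # partially "personal", partially "action"
```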

Cue Prioritization: Fuzzification
Scene layout

Cast shadows give the best results when the objects are slightly above the ground plane [Ware04]

Membership function

Cue Prioritization: Inference
Sample rules:
IF scene is suitable AND (minDistance is close OR maxDistance is close) AND patterns_of_points_in_3d is high_priority THEN binocular_disparity is strong
IF scene is suitable AND maxDistance is far THEN aerial_perspective is strong

Fuzzy operators
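A minimal sketch of how rules like these could be evaluated with common fuzzy operators (AND as min, OR as max); the membership values and variable names are illustrative, not drawn from the actual rule base:

```python
# Illustrative fuzzified inputs (membership degrees in [0, 1]).
scene_suitable = 0.8
min_distance_close = 0.6
max_distance_close = 0.2
max_distance_far = 0.9
patterns_of_points_high = 0.7

AND, OR = min, max  # common Mamdani-style fuzzy operators

# IF scene is suitable AND (minDistance is close OR maxDistance is close)
#    AND patterns_of_points_in_3d is high_priority THEN binocular_disparity is strong
binocular_disparity_strong = AND(scene_suitable,
                                 AND(OR(min_distance_close, max_distance_close),
                                     patterns_of_points_high))

# IF scene is suitable AND maxDistance is far THEN aerial_perspective is strong
aerial_perspective_strong = AND(scene_suitable, max_distance_far)

print(binocular_disparity_strong, aerial_perspective_strong)  # 0.6 0.8
```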

Cue Prioritization: Defuzzification
Defuzzification algorithm: COG (center of gravity)
Membership function
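A minimal sketch of center-of-gravity defuzzification over a sampled output membership function; the output variable and the sample values are illustrative assumptions:

```python
def defuzzify_cog(xs, mus):
    """Center of gravity of a sampled membership function mu(x)."""
    num = sum(x * mu for x, mu in zip(xs, mus))
    den = sum(mus)
    return num / den if den > 0 else 0.0

# Example: aggregated output membership for a "cue priority" in [0, 1],
# sampled at 11 points (values are illustrative).
xs  = [i / 10 for i in range(11)]
mus = [0.0, 0.0, 0.1, 0.3, 0.6, 0.8, 0.8, 0.6, 0.3, 0.1, 0.0]
print(defuzzify_cog(xs, mus))  # crisp priority value: 0.55
```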

Stage 2: Method Selection
Aim: Selecting the methods that provide the high-priority cues with minimum cost
Input: Cue priority vector, method costs, cost limit
Output: Selected methods list
Methodology: Cost-profit analysis using the knapsack model

Stage 2: Method Selection
Model: budget allocation as a knapsack problem

M: the set of all depth enhancement methods
Profit_i: the profit of method i
Cost_i: the cost of method i (in terms of frame rate)
maxCost: budget limit
x_i: solution variable in {0, 1}
Maximize Σ_i Profit_i · x_i subject to Σ_i Cost_i · x_i ≤ maxCost
Dynamic programming
Weighted linear combination
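A minimal sketch of the 0/1 knapsack selection solved by dynamic programming; the method names, profits, and integer costs below are illustrative assumptions (the thesis measures cost in terms of frame rate):

```python
def select_methods(methods, max_cost):
    """0/1 knapsack via dynamic programming.

    methods: list of (name, profit, cost) with integer costs.
    Returns the subset maximizing total profit with total cost <= max_cost.
    """
    best = [[0.0] * (max_cost + 1) for _ in range(len(methods) + 1)]
    for i, (_, profit, cost) in enumerate(methods, start=1):
        for budget in range(max_cost + 1):
            best[i][budget] = best[i - 1][budget]                 # skip method i
            if cost <= budget:                                    # or take it
                best[i][budget] = max(best[i][budget],
                                      best[i - 1][budget - cost] + profit)
    # Backtrack to recover the chosen methods.
    chosen, budget = [], max_cost
    for i in range(len(methods), 0, -1):
        if best[i][budget] != best[i - 1][budget]:
            name, _, cost = methods[i - 1]
            chosen.append(name)
            budget -= cost
    return chosen

# Illustrative methods: (name, profit derived from cue priorities, cost per frame).
methods = [("shadow map", 0.9, 3), ("fog", 0.4, 1),
           ("gooch shading", 0.6, 2), ("multi-view", 1.0, 8)]
print(select_methods(methods, max_cost=6))  # ['gooch shading', 'fog', 'shadow map']
```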

Stage 2: Method Selection
Elimination:
Same-purpose methods (mutually exclusive)
Helpers check:
Dependency
Multi-pass
Estimated FPS ≈ target FPS
Actual FPS < target FPS
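One plausible post-processing pass along these lines, shown only as a sketch; the same-purpose grouping and dependency tables below are hypothetical, not the thesis's actual data:

```python
# Hypothetical metadata: methods grouped by the cue they provide, and required helper passes.
SAME_PURPOSE = {"fog": "aerial perspective", "proximity luminance": "aerial perspective"}
REQUIRES = {"boundary enhancement": ["depth buffer pass"]}

def post_process(selected, profits):
    """Drop lower-profit duplicates within a same-purpose group, then add helpers."""
    best_in_group = {}
    for m in selected:
        group = SAME_PURPOSE.get(m, m)
        if group not in best_in_group or profits[m] > profits[best_in_group[group]]:
            best_in_group[group] = m
    kept = set(best_in_group.values())
    for m in list(kept):
        kept.update(REQUIRES.get(m, []))   # pull in helper passes (dependencies)
    return kept

profits = {"fog": 0.4, "proximity luminance": 0.7, "boundary enhancement": 0.6}
print(post_process(["fog", "proximity luminance", "boundary enhancement"], profits))
```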

Stage 3: Rendering Methods
Shadow map: shadow

Gooch shading: shading, aerial perspective
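A minimal sketch of the cool-to-warm tone interpolation behind Gooch shading [Gooch98]; the constants below are typical illustrative values, not necessarily those used in the thesis:

```python
def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def gooch_shade(normal, light_dir,
                k_cool=(0.0, 0.0, 0.55), k_warm=(0.6, 0.6, 0.0),
                alpha=0.25, beta=0.5, k_diffuse=(0.75, 0.75, 0.75)):
    """Cool-to-warm tone shading: surfaces facing the light get warm tones,
    surfaces facing away get cool tones, so shape reads even without shadows."""
    n, l = normalize(normal), normalize(light_dir)
    t = (1.0 + sum(a * b for a, b in zip(n, l))) / 2.0   # map n.l from [-1,1] to [0,1]
    cool = tuple(c + alpha * d for c, d in zip(k_cool, k_diffuse))
    warm = tuple(c + beta * d for c, d in zip(k_warm, k_diffuse))
    return tuple(t * w + (1.0 - t) * c for w, c in zip(warm, cool))

print(gooch_shade(normal=(0.0, 1.0, 0.0), light_dir=(0.0, 1.0, 0.0)))  # fully warm tone
```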

Stage 3: Rendering Methods
Proximity luminance: relative brightness, aerial perspective

Fog: aerial perspective, relative brightness
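A minimal sketch of distance-based exponential fog blending, which washes distant surfaces toward the fog color (aerial perspective, relative brightness); the density value is an illustrative assumption:

```python
import math

def apply_fog(surface_color, fog_color, eye_distance, density=0.05):
    """Exponential fog: farther fragments are blended more toward the fog color,
    which also lowers their contrast (relative brightness / aerial perspective)."""
    f = math.exp(-density * eye_distance)          # visibility factor in (0, 1]
    return tuple(f * s + (1.0 - f) * g for s, g in zip(surface_color, fog_color))

near = apply_fog((1.0, 0.2, 0.2), (0.7, 0.7, 0.75), eye_distance=2.0)
far  = apply_fog((1.0, 0.2, 0.2), (0.7, 0.7, 0.75), eye_distance=60.0)
print(near, far)  # the far sample is washed out toward the fog color
```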

Stage 3: Rendering Methods
Boundary enhancement: shading (Luft et al.'s method [Luft06])
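A minimal, simplified sketch of the idea in [Luft06]: blur the depth buffer, use the difference to the original depth as a spatial-importance measure, and modulate the colors with it so depth discontinuities stand out; the blur radius and strength are illustrative assumptions:

```python
import numpy as np

def boundary_enhance(color, depth, radius=5, strength=0.6):
    """Unsharp masking of the depth buffer (after [Luft06], simplified).
    color: HxWx3 float array in [0, 1], depth: HxW float array in [0, 1]."""
    # Separable box blur of the depth buffer as a cheap stand-in for a Gaussian.
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, depth)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
    delta = depth - blurred            # spatial importance: large magnitude at depth edges
    # Darken pixels on the near side of a depth discontinuity, brighten the far side.
    return np.clip(color + strength * delta[..., None], 0.0, 1.0)

# Tiny synthetic example: a near object (depth 0.2) in front of a far wall (depth 0.9).
depth = np.full((32, 32), 0.9); depth[8:24, 8:24] = 0.2
color = np.full((32, 32, 3), 0.5)
print(boundary_enhance(color, depth).shape)  # (32, 32, 3)
```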

Stage 3: Rendering Methods
Face tracking: motion parallax (Bulbul et al.'s method [Bulbul10])

Multi-view rendering: binocular disparity, accommodation, convergence, depth-of-focus (9-view lenticular display)

Experimental Results: Objective
Main task: Judging relative positions
Procedure: Estimate the z value of the given ball
5 techniques for selecting depth enhancement methods were compared.
The proposed algorithm (automatic selection) performed best, with 3.1% RMS error.
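For reference, a small sketch of the RMS error metric used to compare the selection techniques; the estimated and true z values below are made up for illustration:

```python
def rms_error(estimates, ground_truth):
    """Root-mean-square error between estimated and true depth values."""
    n = len(estimates)
    return (sum((e - t) ** 2 for e, t in zip(estimates, ground_truth)) / n) ** 0.5

# Illustrative values: user-estimated vs. true z positions (normalized to [0, 1]).
estimated = [0.32, 0.58, 0.71, 0.15]
true_z    = [0.30, 0.60, 0.75, 0.12]
print(rms_error(estimated, true_z))  # ~0.03, i.e. about 3% of the depth range
```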

Experimental Results: Subjective
Main tasks: Judging relative positions; surface target detection (shape perception)
Procedure: Grade the given scene between 0 and 100
Proposed method's grades: ~87 (task 1), ~73 (task 2)
A statistically significant difference is shown by a t-test.

Conclusion
A framework for enhancing the depth perception of a given scene
A fuzzy logic-based algorithm for automatically determining the proper depth cues
A knapsack model for selecting proper depth enhancement methods
A formal experimental study

Future work:
More tests
Improve the rule base
Add new rendering methods
Effects of animation
Different multi-view technologies

Demo

THANKS FOR YOUR ATTENTION

ANY QUESTIONS?

References
[Cutting95] Cutting J, Vishton P. Perceiving Layout and Knowing Distance: The Integration, Relative Potency and Contextual Use of Different Information about Depth. Perception of Space and Motion. New York: Academic Press; 1995.
[Ware04] Ware C. Space Perception and the Display of Data in Space. Information Visualization: Perception for Design. Morgan Kaufmann; 2004.
[Howard08] Howard IP, Rogers BJ. Seeing in Depth. Toronto; 2008.
[Bunnell04] Bunnell M. Dynamic Ambient Occlusion and Indirect Lighting. GPU Gems 2. Addison-Wesley; 2004.
[Luft06] Luft T, Colditz C, Deussen O. Image Enhancement by Unsharp Masking the Depth Buffer. ACM Transactions on Graphics (TOG). 2006; 25(3).
[Nienhaus05] Nienhaus M, Döllner J. Blueprint Rendering and "Sketchy Drawings". GPU Gems 2. New Jersey: Addison-Wesley; 2005.
[Gooch98] Gooch A, Gooch B, Shirley P, Cohen E. A Non-photorealistic Lighting Model for Automatic Technical Illustration. In: Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques; 1998.
[Bruckner07] Bruckner S, Gröller E. Enhancing Depth-Perception with Flexible Volumetric Halos. IEEE Transactions on Visualization and Computer Graphics. 2007; 13(6): 1344-1351.

[Bülthoff96] Bülthoff HH, Yuille A. A Bayesian Framework for the Integration of Visual Modules. Attention and Performance: Information Integration in Perception and Communication. 1996; XVI: 49-70.
[Pfautz00] Pfautz J. Depth Perception in Computer Graphics. Cambridge University; 2000.
[Brackstone00] Brackstone M. Examination of the Use of Fuzzy Sets to Describe Relative Speed Perception. Ergonomics. 2000; 43(4): 528-542.
[Bulbul10] Bulbul A, Cipiloglu Z, Capin T. A Color-based Face Tracking Algorithm for Enhancing Interaction with Mobile Devices. The Visual Computer. 2010; 26(5): 311-323.
[Dodgson05] Dodgson NA. Autostereoscopic 3D Displays. Computer. 2005; 38: 31-36.
[Haeberli90] Haeberli P, Akeley K. The Accumulation Buffer: Hardware Support for High-quality Rendering. SIGGRAPH Computer Graphics. 1990; 24(4): 309-318.
[Halle05] Halle M. Autostereoscopic Displays and Computer Graphics. In: SIGGRAPH '05: ACM SIGGRAPH 2005 Courses, page 104. New York, NY, USA: ACM; 2005.
[Oliveira00] Oliveira MM. Relief Texture Mapping. PhD thesis, University of North Carolina; 2000.
[Swain09] Swain CT. Integration of Monocular Cues to Improve Depth Perception. 2009.