Computer Graphics Research at Virginia. David Luebke, Department of Computer Science.
Computer Graphics Research at Virginia
David Luebke
Department of Computer Science
Outline
My current research
– Perceptually Driven Interactive Rendering
Perceptual level of detail control
Wacky new algorithms
– Scanning Monticello
Graphics resources
– Building an immersive display
– Building a rendering cluster?
Perceptual Rendering
Next few slides are from a recent talk. Apologies to the UVA vision group.
Perceptually Guided Interactive Rendering
David Luebke
University of Virginia
Motivation: Stating The Obvious
Interactive rendering of large-scale geometric datasets is important:
– Scientific and medical visualization
– Architectural and industrial CAD
– Training (military and otherwise)
– Entertainment
Motivation: Model Size
Incredibly, 3-D models are getting bigger as fast as hardware is getting faster…
Courtesy General Dynamics, Electric Boat Div.
Big Models: Submarine Torpedo Room
1994: 700,000 polygons
(Anonymous)
Big Models: Coal-fired Power Plant
1997:13 million polygons
1998:16.7 million polygons
Big Models: Plant Ecosystem Simulation
Deussen et al: Realistic Modeling of Plant Ecosystems
Big Models: Double Eagle Container Ship
2000:82 million polygons
Courtesy Newport News Shipbuilding
Big Models: The Digital Michelangelo Project
2000 (David):56 million polygons
2001 (St. Matthew):372 million polygons
Courtesy Digital Michelangelo Project
(Part Of) The Solution: Level of Detail
Clearly, much of this geometry is redundant for a given view
The idea: simplify complex models by reducing the level of detail used for small, distant, or unimportant regions
Traditional Level of Detail In A Nutshell…
249,924 polys 62,480 polys 7,809 polys 975 polys
Courtesy Jon Cohen
Create levels of detail (LODs) of objects:
Distant objects use coarser LODs:
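The discrete scheme above can be sketched in a few lines of Python (the thresholds and distances here are illustrative, not from the talk):

```python
def select_lod(distance, lods, thresholds):
    """lods ordered fine -> coarse; thresholds are switch-over distances."""
    for lod, limit in zip(lods, thresholds):
        if distance < limit:
            return lod
    return lods[-1]  # beyond every threshold: coarsest LOD

# The four bunny LODs from the slide, by polygon count:
lods = ["249,924 polys", "62,480 polys", "7,809 polys", "975 polys"]
print(select_lod(5.0, lods, [10, 50, 200]))    # nearby: finest LOD
print(select_lod(500.0, lods, [10, 50, 200]))  # far away: coarsest LOD
```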
Traditional Level of Detail In A Nutshell…
The Big Question
How should we evaluate and regulate the visual fidelity of our simplifications?
Measuring Fidelity
Fidelity of a simplification to the original model is often measured geometrically:
METRO by Visual Computing Group, CNR-Pisa
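A METRO-style geometric error can be illustrated with a crude sketch: the Hausdorff distance between two point sets. Real tools like METRO sample the two surfaces densely; here we just compare small vertex sets.

```python
import math

def one_sided_hausdorff(points_a, points_b):
    """Max over A of the distance to the nearest point in B."""
    return max(min(math.dist(a, b) for b in points_b) for a in points_a)

def symmetric_hausdorff(points_a, points_b):
    """Take the worse of the two one-sided distances."""
    return max(one_sided_hausdorff(points_a, points_b),
               one_sided_hausdorff(points_b, points_a))

# Dropping the vertex (1, 0) introduces a worst-case error of 1.0:
print(symmetric_hausdorff([(0, 0), (1, 0)], [(0, 0)]))  # 1.0
```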
Measuring Visual Fidelity
However…
– The most important measure of fidelity is usually not geometric but perceptual: does the simplification look like the original?
Therefore:
– We are developing a principled framework for LOD in interactive rendering, based on perceptual measures of visual fidelity
Perceptually Guided LOD: Questions And Issues
Several interesting offshoots:
– Imperceptible simplification: when can we claim simplification is undetectable?
– Best-effort simplification: how best to spend a limited time/polygon budget?
– Silhouette preservation: silhouettes are important. How important?
– Gaze-directed rendering: when can we exploit reduced visual acuity?
Related Work:Perceptually Guided Rendering
Lots of excellent research on perceptually guided rendering
But most work has focused on offline rendering algorithms (e.g., path tracing)
– Different time frame! Seconds or minutes vs. milliseconds
– Sophisticated metrics: visual masking, background adaptation, etc.
Perceptually Guided LOD: Our Approach
Approach: test folds (local simplification operations) against a perceptual model to determine if they would be perceptible
(Figure: folding vertex cluster A merges its vertices into a single representative; unfolding restores them)
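The test above might be sketched as a simple predicate (the names and the toy threshold are illustrative; the talk does not give the actual formulation):

```python
def fold_is_imperceptible(induced_contrast, spatial_freq, threshold_contrast):
    """threshold_contrast maps spatial frequency -> minimum detectable contrast."""
    return induced_contrast < threshold_contrast(spatial_freq)

# With a toy flat threshold of 1% contrast, a 0.5% change is allowed
# but a 2% change is not:
toy_threshold = lambda f: 0.01
print(fold_is_imperceptible(0.005, 4.0, toy_threshold))  # True
print(fold_is_imperceptible(0.020, 4.0, toy_threshold))  # False
```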
Perception 101: The Contrast Sensitivity Function
Perceptual scientists have long used contrast gratings to measure limits of vision:
– Bars of sinusoidally varying intensity
– Can vary: contrast, spatial frequency, eccentricity, velocity, etc.
Perception 101: The Contrast Sensitivity Function
Contrast grating tests produce a contrast sensitivity function
– Threshold contrast vs. spatial frequency
– The CSF predicts the minimum detectable static stimuli
Campbell-Robson Chart by Izumi Ohzawa
Your Personal CSF
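For concreteness, one published analytic fit to the CSF is the Mannos-Sakrison (1974) model; the talk does not say which model the framework uses, so this is purely illustrative:

```python
import math

def csf(f):
    """Normalized contrast sensitivity at spatial frequency f (cycles/degree),
    per the Mannos-Sakrison (1974) fit."""
    return 2.6 * (0.0192 + 0.114 * f) * math.exp(-(0.114 * f) ** 1.1)

def threshold_contrast(f):
    """Minimum detectable contrast is the reciprocal of sensitivity."""
    return 1.0 / csf(f)

# Sensitivity peaks at a few cycles/degree and falls off on both sides:
print(csf(1.0) < csf(8.0))   # True: low frequencies less visible than the peak
print(csf(30.0) < csf(8.0))  # True: high frequencies too
```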
Framework: View-Dependent Simplification
Next: need a framework for simplification
– We use view-dependent simplification for LOD management
Traditional LOD: create several discrete LODs in a preprocess, pick one at run time
View-dependent LOD: create a data structure in a preprocess, extract an LOD for the given view
View-Dependent LOD: Examples
Show nearby portions of object at higher resolution than distant portions
View from eyepoint. Bird's-eye view.
View-Dependent LOD: Examples
Show silhouette regions of object at higher resolution than interior regions
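A common silhouette test (a sketch, not necessarily what VDSlib does): a region lies near the silhouette when its surface normal is roughly perpendicular to the view direction. Both vectors are assumed unit-length.

```python
def on_silhouette(normal, view_dir, eps=0.1):
    """True when the (unit) normal is nearly perpendicular to the (unit) view direction."""
    dot = sum(n * v for n, v in zip(normal, view_dir))
    return abs(dot) < eps

print(on_silhouette((1, 0, 0), (0, 0, 1)))  # True: edge-on to the viewer
print(on_silhouette((0, 0, 1), (0, 0, 1)))  # False: facing the viewer
```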
View-Dependent LOD: Examples
Show more detail where the user is looking than in their peripheral vision:
34,321 triangles
View-Dependent LOD: Examples
Show more detail where the user is looking than in their peripheral vision:
11,726 triangles
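Gaze-directed detail exploits the fall-off of visual acuity with eccentricity (angular distance from the gaze point): peripheral regions can tolerate larger screen-space error. A hedged sketch with an illustrative linear falloff constant, not taken from the talk:

```python
def error_tolerance(eccentricity_deg, central_tolerance=1.0, k=0.5):
    """Allowed screen-space error grows linearly away from the gaze point."""
    return central_tolerance * (1.0 + k * eccentricity_deg)

print(error_tolerance(0.0))   # 1.0 at the gaze point
print(error_tolerance(20.0))  # 11.0 twenty degrees into the periphery
```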
View-Dependent LOD: Implementation
We use VDSlib, our public-domain library for view-dependent simplification
Briefly, VDSlib uses a big data structure called the vertex tree
– Hierarchical clustering of model vertices
– Updated each frame for current simplification
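The vertex tree can be caricatured in a few lines (VDSlib itself is a C library; these names are illustrative). Each node clusters child vertices under one representative; folding a node collapses its subtree to that proxy, and unfolding restores the children.

```python
class VertexNode:
    """One node of a toy vertex tree: a cluster of child vertices and a proxy."""
    def __init__(self, proxy, children=()):
        self.proxy = proxy              # representative (merged) vertex
        self.children = list(children)
        self.folded = True              # start fully simplified

    def fold(self):
        self.folded = True

    def unfold(self):
        self.folded = False

    def active_vertices(self):
        """Vertices in use at the current level of simplification."""
        if self.folded or not self.children:
            return [self.proxy]
        return [v for c in self.children for v in c.active_vertices()]

# Folding node A shows only its proxy; unfolding restores the cluster:
a = VertexNode("A", [VertexNode(1), VertexNode(2), VertexNode(3)])
print(a.active_vertices())  # ['A']
a.unfold()
print(a.active_vertices())  # [1, 2, 3]
```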
The Vertex Tree: Region Of Effect
(Figure: folding node A collapses the vertices in its region of effect; unfolding node A restores them)
Folding a node affects a limited region:
– Some triangles change shape upon folding
– Some triangles disappear completely
Wacky New Algorithms
I am interested in exploring new perceptually-driven rendering algorithms
– Don't necessarily fit constraints of today's hardware
Ex: frameless rendering
Ex: I/O differencing (time permitting)
– Give the demo, show the movie…
Non-Photorealistic Rendering (time permitting)
Fancy name, simple idea:
Make computer graphics that don’t look like computer graphics
NPRlib
NPRlib: flexible callback-driven NP rendering
Bunny: Traditional CG Rendering
Bunny: Pencil-Sketch Rendering
Bunny: Charcoal Smudge Rendering
Bunny: Two-Tone Rendering
Scanning Monticello
Fairly new technology: scanning the world
Scanning Monticello
Want a flagship project to showcase this
Idea: scan Thomas Jefferson's Monticello
– Historic preservation
– Virtual tours
– Archeological and architectural research, documentation, and dissemination
– Great driving problem for scanning & rendering research
Results from first pilot project:
– Show some data…
Graphics Resources
2 SGI Octanes
– Midrange graphics hardware
SGI InfiniteReality2
– 2 x 225 MHz R10K, 1 GB RAM, 4 MB cache
– High-end graphics hardware: 13 million triangles/sec, 64 MB texture memory
Hot new PC platforms (P3s and P4s)
– High-end cards built on nVidia's best chipsets
– Stereo glasses, digital video card, miniDV stuff
– Quad Xeon on loan
Software!
– Maya, Renderman, Lightscape, Multigen, etc.
Graphics Resources
Building an immersive display
– NSF grant to build a state-of-the-art immersive display:
6 projectors, 3 screens, passive stereo
High-end wide-area head tracker
8-channel spatial audio
PCs to drive it all
– Need some help building it…
Graphics Resources
Building a rendering cluster?
– Trying to get money to build a high-end rendering cluster for wacky algorithms
12 dual-Xeon PCs:
1 GB RAM
72 GB striped RAID
nVidia GeForce3
Gigabit interconnect
– Don't have the money yet, but do have 6 hot Athlon machines
More Information
I only take students who've worked with or impressed me somehow
– Summer work: best
– Semester work: fine, but harder
Interested in graphics?
– Graphics Lunch: Fridays @ noon, OLS 228E
– An informal seminar/look at cool graphics papers
– Everyone welcome, bring your own lunch