COMPUTER GRAPHICS Dr. Adam P. Anthony Lectures 22,23.


Overview

Tuesday:
  Significance of Computer Graphics
  Brief history of Computer Graphics
  Overview of 3D Graphics Concepts
  Modeling objects in a 3D environment

Thursday:
  Rendering images
  Lighting
  Animation

Definition of “Graphics”

A branch of computer science that applies computer technology to produce and/or manipulate visual representations

2D vs. 3D graphics:
  2D ~ photographs, paintings, signage
  3D ~ ‘simulation,’ or conversion of a 3-dimensional scene on a 2-dimensional platform (screen)
Step 1: design the scene (creative, artistic)
Step 2: convert it to a 2-dimensional ‘photograph’ (technical)
  This is where we work to make images more realistic!

What’s so Great about Graphics?

Humans are very visual creatures
We use graphics/vision to:
  Learn new things
  Make decisions
  Present/comprehend information
  Be entertained
Most people, when thinking of Graphics, think of:
  Games
  Movies

Graphics in Games

One Focus: ‘realism’ and detail

Graphics in Games

Another focus: creativity and fun

Graphics in Games

Yet another focus: Cinematics and Story

Graphics in Movies

Virtual + Live Action

Graphics in Movies

Fully Animated

Graphics for Learning, Decision Making

Do you learn more from something like this:

A dense table of raw numbers

Graphics for Learning, Decision Making

Or something like this?

Graphics for Presenting/Understanding Information

Using Shading to improve medical analysis:

Some Historical Perspective

Televisions: invented around 1923
First computers: 1940 – 1943
Computers with monitors: 1956
We knew how to produce images on a screen before computers were invented!
Why were people so interested in connecting the two?
  Foresight to anticipate applications previously mentioned
  TVs only copy/reproduce images from a camera, cannot create them
  Interactivity!

3D Graphics Creation

Modeling: What will be in the picture, what will it look like?
Rendering: If the model was real and we took a picture of it, what would the picture look like?
Displaying: Saving the rendering as a bit-map image
  On file: movies and pictures
  On screen: interactive applications (games, drafting tools, simulators, etc.)
Most games try to Model, Render and Display 30-60 times every second! (see the loop sketch below)
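As a rough illustration of that per-frame cycle, here is a minimal Python sketch of a fixed-rate loop. The functions passed in (model_scene, render_scene, display_image) are placeholders invented for this example, not part of any particular engine:

import time

TARGET_FPS = 60               # many games aim for 30-60 frames per second
FRAME_TIME = 1.0 / TARGET_FPS

def game_loop(model_scene, render_scene, display_image, n_frames=600):
    """Run the model -> render -> display cycle at a fixed target rate."""
    for _ in range(n_frames):
        start = time.time()
        scene = model_scene()         # decide what is in the picture this frame
        image = render_scene(scene)   # compute what a photo of that scene looks like
        display_image(image)          # put the finished bitmap on the screen
        elapsed = time.time() - start
        if elapsed < FRAME_TIME:      # wait out the rest of the frame to hold the rate
            time.sleep(FRAME_TIME - elapsed)

# Toy usage: stand-in callables just to show the shape of the loop.
game_loop(lambda: "scene", lambda s: "image", print, n_frames=3)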


The 3D graphics paradigm


Modeling Objects

Shape: Represented by a polygonal mesh obtained from
  Traditional mathematical equations
  Bezier curves and surfaces
  Procedural models
  Other methods being researched

Surface: Can be represented by a texture map


A polygonal mesh for a sphere

About Polygons

They are easy to model!
  2-dimensional
  Simple definition (connect the dots!)
They are easy to combine to make more complex shapes
  2D: Like combining a square and a triangle to get a house
  3D: Like a house of cards or a balsa-wood model
They don’t take up much memory either (see the mesh sketch below)
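One way to see why meshes are cheap to store is an indexed representation: every shared corner is stored once, and each polygon is just a short list of indices into the vertex list. A minimal sketch in Python (the layout is illustrative, not a specific file format):

# A tiny indexed triangle mesh: a square built from two triangles.
vertices = [
    (0.0, 0.0, 0.0),  # index 0
    (1.0, 0.0, 0.0),  # index 1
    (1.0, 1.0, 0.0),  # index 2
    (0.0, 1.0, 0.0),  # index 3
]
faces = [
    (0, 1, 2),  # first triangle
    (0, 2, 3),  # second triangle reuses two of the same corners
]
# Shared corners cost nothing extra: 4 vertices describe 2 polygons (6 corners).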

A Bit More on Modeling

Many of us are familiar with 2D systems:

To draw a square, we can give coordinates for each corner: Square = (3,3), (6,3), (6,0), (3,0)

3D Coordinate System

A 3D system is harder to visualize, even though we live in one!
  X,Y coordinates are the ‘floor’
  Add a Z coordinate that represents ‘height’
Shapes are drawn the same way, specifying vertices + lengths, but obviously will be more complex
And now we can talk about cubes, spheres, etc.!

The x, y, and z axes of a 3D coordinate system

2D Polygon Image

Find the shapes!

3D Polygon Image

Like a house of cards—Find the shapes!


A Bezier curve

Refining Hand-Designed Models

Where We’re Headed Next Time:
We’ve only seen a single (and arguably easiest) step in creating a 3D image on a computer screen
Where we’ll go next:
  Lighting (shadows, bright spots, reflection, refraction)
  Displaying (rasterizing)
  3D accelerator cards
Funny thing, though: 90% of all graphical design work is finished after the modeling phase!
  Programmers use graphics engines to take care of lighting, rasterizing, card compatibility
  But to be the best, you need to understand how it all works!

Drawing a 3D Object: Shape, Transformation, Rotation, Surfacing

1. Recall: 3D objects are just 2D polygons ‘glued’ together
  Draw the polygonal structure at the Origin (0,0,0) using 2D polygons
  A cube could be:
    bottom: (0,0,0),(1,0,0),(1,1,0),(0,1,0)
    top: (0,0,1),(1,0,1),(1,1,1),(0,1,1)
    front: (0,1,0),(1,1,0),(0,1,1),(1,1,1)
    back: (0,0,0),(1,0,0),(0,0,1),(1,0,1)
    Lside: (0,0,0),(0,1,0),(0,0,1),(0,1,1)
    Rside: (1,0,0),(1,1,0),(1,1,1),(1,0,1)

2. Transformation: Give the polygon a new position in the world

3. Rotation: Make the shape point in a new direction (steps 2 and 3 are sketched in code below)

4. Surfacing: ‘fill in’ each polygon with a color/image
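To make steps 2 and 3 concrete, here is a small Python sketch that moves and spins the cube's corner vertices. The helper names (translate, rotate_z) are invented for this example; real systems typically express both operations as 4x4 matrix multiplications:

import math

# Step 1: the cube's corners, modeled at the origin.
cube_vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),
]

def translate(v, dx, dy, dz):
    """Step 2 (transformation): slide a point to a new position in the world."""
    x, y, z = v
    return (x + dx, y + dy, z + dz)

def rotate_z(v, angle_degrees):
    """Step 3 (rotation): spin a point around the z ('height') axis."""
    x, y, z = v
    a = math.radians(angle_degrees)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

# Turn the cube 30 degrees, then place it at (5, 2, 0).
world_vertices = [translate(rotate_z(v, 30), 5, 2, 0) for v in cube_vertices]
print(world_vertices[0])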


Surfacing

Most systems have a ‘fill’ effect: Pick one uniform color for every pixel inside a polygon
May also use special tools to explicitly color every pixel
  Costly to produce (hiring artists)
  Difficult to render (typically not an option in games), takes lots of memory
Texture Mapping
  Provide an image that is applied to a polygon like wallpaper (common for brick walls, wood, grass, etc.); see the texture-lookup sketch below
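As a rough illustration of the 'wallpaper' idea, each vertex of a polygon is given a (u, v) coordinate into the texture image, and every pixel inside the polygon samples the image at its interpolated (u, v). The tiny 2x2 'image' and the nearest-neighbor lookup below are invented for this sketch:

# A 2x2 'texture image': rows of RGB colors.
texture = [
    [(200, 50, 50), (50, 200, 50)],
    [(50, 50, 200), (200, 200, 50)],
]
TEX_H = len(texture)
TEX_W = len(texture[0])

def sample_texture(u, v):
    """Nearest-neighbor lookup: (u, v) in [0, 1] picks one texel from the image."""
    col = min(int(u * TEX_W), TEX_W - 1)
    row = min(int(v * TEX_H), TEX_H - 1)
    return texture[row][col]

# A pixel halfway across the wall and a quarter of the way up gets this texel:
print(sample_texture(0.5, 0.25))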

Building a Scene

A scene in a 3D graphics environment consists of:
  Polygon models of each object:
    Shape
    Position/orientation
    Surface coloring
  A virtual ‘camera’: Includes position/orientation information
  1 or more virtual light sources:
    All have a position in the world
    Angles depend on the type of source:
      Some have full 360° coverage
      Others have ‘shades’
(A small scene-description sketch follows below.)
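A minimal sketch of what such a scene description might look like as plain Python data; the field names are purely illustrative and not any particular engine's format:

scene = {
    "objects": [
        {"mesh": "cube",           # shape (polygon model)
         "position": (5, 2, 0),    # where it sits in the world
         "rotation_deg": 30,       # which way it faces
         "surface": "brick.png"},  # surface coloring / texture
    ],
    "camera": {"position": (0, -10, 2), "look_at": (5, 2, 0)},
    "lights": [
        {"position": (0, 0, 10), "coverage_deg": 360},  # bare bulb, full coverage
        {"position": (8, 2, 3),  "coverage_deg": 45},   # 'shaded' spot light
    ],
}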


The 3D Scene: A Virtual Photo Studio

Rendering

Take a picture of the ‘virtual’ scene with the ‘virtual’ camera, to get a REAL photograph!

Lots of mathematics is used to determine:
  Which polygons are actually seen by the camera (and which are definitely not seen)
  What shape a polygon will really have in the final picture
    Depends on the angle from which it is viewed
  Which pixels in the image will represent the polygon
  The color/brightness of each pixel in that polygon (lighting model)
(A small perspective-projection sketch follows below.)
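For example, figuring out where a single 3D point lands in the 2D picture can be done with a perspective divide. A minimal sketch, assuming the camera sits at the origin looking down the +z axis and ignoring clipping and screen resolution:

def project_point(point, focal_length=1.0):
    """Pinhole-style perspective projection onto the image plane."""
    x, y, z = point
    if z <= 0:
        return None  # behind the camera: definitely not seen
    # Dividing by depth z is what makes distant objects look smaller.
    return (focal_length * x / z, focal_length * y / z)

# The same corner, twice as far away, lands half as far from the image center.
print(project_point((2.0, 1.0, 4.0)))  # (0.5, 0.25)
print(project_point((2.0, 1.0, 8.0)))  # (0.25, 0.125)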

Lighting

In the real world, when light hits a surface, different portions of that light will be:
  Absorbed
  Reflected
  Refracted

It is the physical properties of an object that determine how light behaves on it, and, ultimately, what it will look like

Simulating Light

Imagine a source of light as an infinite number of ‘rays’ that we’ll represent as straight lines

Given a single point on an object and a single light source:
  There is exactly one ray that reaches that point
  To simulate absorption, reflection, refraction, we only have to perform calculations for that single point and ray
  Ultimately done 1000s of times over for each and every pixel in the picture

Simulating Reflection

Angle of incidence: angle between ray of light and the polygon

Angle of reflection: angle between reflected ray and surface, always equal to the angle of incidence (see the sketch below)
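In vector form this mirror rule is usually written r = d - 2(d . n)n, where d is the incoming ray direction and n is the unit surface normal. A minimal sketch, assuming n is already normalized:

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(d, n):
    """Reflect incoming direction d about unit surface normal n: r = d - 2(d.n)n."""
    k = 2.0 * dot(d, n)
    return tuple(di - k * ni for di, ni in zip(d, n))

# A ray coming down onto a floor at 45 degrees bounces back up at 45 degrees:
# the angle of reflection equals the angle of incidence.
print(reflect((1.0, 0.0, -1.0), (0.0, 0.0, 1.0)))  # (1.0, 0.0, 1.0)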

Surface Properties

Specular Surfaces
  Smooth, shiny
  Angle of incidence is perfect with respect to polygon’s position
  Characterized by bright white reflection from light source
  A purely specular surface is a mirror
Diffuse Surfaces
  Surface covered in tiny, rough and random bumps
  Light is still reflected, but angle of incidence is based on which ‘bump’ it hits
  Characterized by a warm, uniform coloring across entire surface
  Many types of cloth are purely diffuse
Most surfaces have a mixture of the two

Light: A Viewer’s Perspective

Reflected light will be viewed only if it is reflected in the camera’s direction
A specular surface creates Specular Light
  Follows strict rules of reflection
  Only visible if the light is ‘aimed’ at the camera
A diffuse surface creates Diffuse Light
  ‘Random’ bumps ultimately guarantee that some of the light is reflected in every direction
  Much more likely to reach the camera lens
Light that is reflected so many times that it doesn’t technically have a source is called Ambient Light
  Like a tiny bit of light hitting a surface from all directions
  A diffuse surface looks the same under a bright light, or ambient light
  This is how the ‘back’ of an object can be slightly illuminated from a single light source
(A combined ambient + diffuse + specular sketch follows below.)
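A common way to combine these three contributions is a Phong-style sum: brightness = ambient + diffuse + specular, where the diffuse part depends on how directly the light hits the surface and the specular part on how well the mirror reflection lines up with the camera. The grayscale sketch below is a simplified illustration with invented parameter values, not necessarily the exact model used later in the course:

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def shade(normal, to_light, to_camera,
          ambient=0.1, diffuse_k=0.6, specular_k=0.3, shininess=32):
    """Brightness at one surface point: ambient + diffuse + specular."""
    n, l, v = normalize(normal), normalize(to_light), normalize(to_camera)
    diff = max(dot(n, l), 0.0)                                    # strongest when lit head-on
    r = tuple(2.0 * dot(n, l) * ni - li for ni, li in zip(n, l))  # mirror-reflected light direction
    spec = max(dot(r, v), 0.0) ** shininess if diff > 0 else 0.0  # bright highlight toward camera
    return ambient + diffuse_k * diff + specular_k * spec

# Surface facing up, light and camera both overhead: full bright highlight.
print(shade((0, 0, 1), (0, 0, 1), (0, 0, 1)))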


Specular versus diffuse light

Refraction

When a surface is semi- or totally transparent, light will pass through
When it passes, refraction will bend that light in a different direction
  Bends a different amount depending on the material (see the Snell's-law sketch below)
Modeling this phenomenon accurately is difficult
  Hence, most real-time, interactive applications favor opaque, reflective objects
  Less interactive applications can use special techniques to get impressive results
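How much the light bends is governed by Snell's law, n1 sin(theta1) = n2 sin(theta2), where n1 and n2 are the refractive indices of the two materials. A small illustrative sketch:

import math

def refraction_angle(theta1_deg, n1, n2):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2); angles measured from the normal."""
    s = (n1 / n2) * math.sin(math.radians(theta1_deg))
    if abs(s) > 1.0:
        return None  # total internal reflection: nothing passes through
    return math.degrees(math.asin(s))

# Light entering water (n ~ 1.33) from air (n ~ 1.0) at 45 degrees bends
# toward the normal, to roughly 32 degrees.
print(refraction_angle(45, 1.0, 1.33))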

More Complex Lighting Models Sub-Surface Scattering

Images provided by Penny Rheingans at UMBC

Dealing With Complex Scenes

To render an image we need to determine:
  What an object looks like when viewed from a certain camera angle
  How light reflects from that surface, at that angle, based on multiple light sources
Move the camera 1 millimeter to the left:
  Have to do all that calculating all over again!
Conclusion: Rule out as much unnecessary work as possible!

WARNING! The following material is pretty dense. Focus on the concepts, instead of the details and you’ll be fine. And please ask questions if something is unclear!!!

Clipping

Extend the view volume from the camera position using simple geometry

Anything outside the view volume will not be drawn or analyzed (see the sketch below)
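As a simplified illustration: a real view volume is a frustum (a pyramid with its tip cut off), but the idea of the test can be sketched with an axis-aligned box and made-up bounds:

# Hypothetical view-volume bounds: (min, max) along each axis.
VIEW_VOLUME = {"x": (-10, 10), "y": (0, 100), "z": (-5, 5)}

def inside_view_volume(point):
    """True if the point survives clipping and is worth analyzing further."""
    x, y, z = point
    return (VIEW_VOLUME["x"][0] <= x <= VIEW_VOLUME["x"][1] and
            VIEW_VOLUME["y"][0] <= y <= VIEW_VOLUME["y"][1] and
            VIEW_VOLUME["z"][0] <= z <= VIEW_VOLUME["z"][1])

print(inside_view_volume((0, 50, 0)))   # True: keep it
print(inside_view_volume((0, -20, 0)))  # False: never drawn or analyzed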

Scan Conversion

Draw a line through each pixel until you reach an object

Easy for one object

Trickier: what if objects overlap?

Hidden Surface Removal

When you take a picture of someone’s face:
  Can you see the back of their head?
  Can you see what is on the wall behind their chest?
  What about layered objects?
Hidden Surface Removal = determining which polygons are actually visible, throwing out the rest
  Saves lots of time!

Painter’s Algorithm

Sort all polygons from back to front, then draw the ones in the back first
  Those overlapping in front will ‘paint over’ them (a small sketch follows below)
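A minimal sketch of the idea, assuming each polygon carries a representative depth (distance from the camera) and an invented draw_polygon callable stands in for the actual drawing:

def painters_algorithm(polygons, draw_polygon):
    """Draw the farthest polygons first so nearer ones paint over them."""
    for poly in sorted(polygons, key=lambda p: p["depth"], reverse=True):
        draw_polygon(poly)

# The distant mountain is drawn before the tree standing in front of it.
scene = [{"name": "tree", "depth": 3.0}, {"name": "mountain", "depth": 50.0}]
painters_algorithm(scene, lambda p: print("drawing", p["name"]))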

Z-Buffer Algorithm

Similar to the Painter’s algorithm, but instead of drawing whole objects, focus on determining what is drawn in each pixel
Start back to front again
For each object:
  Check to see if it intersects with that pixel
  Check to see if anything in front of it also intersects with that pixel
  If not, then that polygon determines the color of that pixel
Intuition: shoot a bow and arrow, draw the first thing it hits! But we don’t program it like this because it’s less efficient (see the z-buffer sketch below)

Images below provided by Penny Rheingans at UMBC
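A minimal sketch of a z-buffer for a single row of pixels: every pixel remembers the depth of the closest thing drawn there so far, and a new fragment only wins if it is closer. The fragment format (pixel index, depth, color name) is invented for this example:

WIDTH = 8
FAR = float("inf")

depth_buffer = [FAR] * WIDTH            # closest depth seen so far, per pixel
color_buffer = ["background"] * WIDTH   # color currently shown, per pixel

def plot(pixel, depth, color):
    """Keep this fragment only if nothing closer has already been drawn here."""
    if depth < depth_buffer[pixel]:
        depth_buffer[pixel] = depth
        color_buffer[pixel] = color

# Two overlapping objects: the closer tree ends up in front of the wall.
for px in range(2, 7):
    plot(px, 40.0, "wall")
for px in range(4, 6):
    plot(px, 10.0, "tree")
print(color_buffer)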

Shading

Flat Shading: Add coloring effects to give depth to each individual polygon
  Creates faceted appearance
Gouraud and Phong Shading: Use mathematics to estimate the original shape
  Creates smooth, rounded appearance
Bump Mapping: Creates bumpy, rounded appearance
(A brief flat-versus-smooth sketch follows below.)
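The essential difference can be caricatured in one dimension: flat shading gives a whole facet a single brightness, while Gouraud-style smooth shading lights the corners and blends between them. This toy sketch is only meant to show that contrast, not the real algorithms:

def flat_shade(face_brightness, steps=3):
    """Flat shading: one brightness for the whole facet -> faceted look."""
    return [face_brightness] * steps

def gouraud_shade(corner_brightness, steps=3):
    """Gouraud shading: light the corners, then blend smoothly across the facet."""
    b0, b1 = corner_brightness
    return [b0 + (b1 - b0) * i / (steps - 1) for i in range(steps)]

# A facet between a bright corner (1.0) and a dim corner (0.2):
print(flat_shade(0.6))             # [0.6, 0.6, 0.6]  (uniform)
print(gouraud_shade((1.0, 0.2)))   # [1.0, 0.6, 0.2]  (smooth gradient)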


A sphere as it might appear when rendered by flat shading


A sphere as it might appear when rendered by Phong shading


A sphere as it might appear when rendered using bump mapping

Graphics Accelerators

Most computer graphics calculations involve a great deal of multiplication and addition
  Transformation/rotation
  Lighting/shading effects
  Z-buffer computation
A standard processor, fast as it is, can only do 1-2 simple operations per clock cycle
  But it’s also general-purpose
Graphics cards have 100s or 1000s of tiny parallel processors
  But all those processors can do is multiply and add!
Result: instead of doing several pixels every second, we can do several screens every second!

Advanced Lighting Models

Local Lighting Model: Does not account for light interactions among objects

Global Lighting Model: Accounts for light interactions among objects
  Ray Tracing (see the sketch below)
  Radiosity
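The core operation in ray tracing is firing a ray from the camera through each pixel and asking what it hits first. A minimal sketch of that intersection test against a single sphere; it ignores lighting, shadows, and the recursive bounces that make global illumination expensive:

import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along a unit-length ray to the sphere, or None if it misses."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c          # quadratic discriminant (a = 1 for unit rays)
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# Cast one ray per pixel of a tiny one-row 'image' and record what it hits.
center, radius = (0.0, 0.0, 5.0), 1.0
row = []
for px in range(-2, 3):
    d = (px * 0.15, 0.0, 1.0)
    length = math.sqrt(sum(x * x for x in d))
    d = tuple(x / length for x in d)
    row.append("sphere" if ray_sphere_hit((0.0, 0.0, 0.0), d, center, radius) else "background")
print(row)  # ['background', 'sphere', 'sphere', 'sphere', 'background']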


Ray tracing

Ray Tracing and Radiosity Examples

http://www.oyonale.com/modeles.php?lang=en&page=40

http://blogs.intel.com/research/2007/10/real_time_raytracing_the_end_o.php


Animation: Simulating Motion

Dynamics: Applies laws of physics to determine position of objects (a small sketch follows below)
Kinematics: Applies characteristics of joints and appendages to determine position of objects
Avars (animation variables)
Motion Capture
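For the dynamics side, a standard trick is to step the laws of motion forward a tiny amount each frame. A minimal sketch of a thrown ball under gravity using simple Euler integration; the constants are illustrative:

GRAVITY = -9.8    # meters per second squared, pulling along -z
DT = 1.0 / 60.0   # one animation frame at 60 frames per second

def step(position, velocity):
    """Advance the ball one frame: physics (dynamics) decides where it goes next."""
    x, y, z = position
    vx, vy, vz = velocity
    vz += GRAVITY * DT                                  # gravity changes the velocity
    return (x + vx * DT, y + vy * DT, z + vz * DT), (vx, vy, vz)

# Throw the ball and let the simulation place it for the next few frames.
pos, vel = (0.0, 0.0, 2.0), (1.0, 0.0, 3.0)
for frame in range(3):
    pos, vel = step(pos, vel)
    print(frame, pos)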