CS361 Week 2 - Friday

Transcript of Week 2 - Friday.

Last time

What did we talk about last time? Graphics rendering pipeline

Geometry Stage

Questions?

Project 1

Assignment 1

Let's see those matrices in SharpDX again

Backface culling

I did not properly describe an important optimization done in the Geometry Stage: backface culling

Backface culling removes all polygons that are not facing toward the screen

A simple dot product is all that is needed. This step is done in hardware in SharpDX and OpenGL; you just have to turn it on. Beware: if you screw up your normals, polygons could vanish.
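As a rough sketch of that dot-product test, here is a minimal version in plain C# using System.Numerics (the method and parameter names are illustrative assumptions, not SharpDX API):

```csharp
using System.Numerics;

static class BackfaceCulling
{
    // Returns true if triangle (a, b, c) faces away from the viewer at 'eye'.
    // Assumes counter-clockwise winding for front-facing triangles.
    public static bool IsBackface(Vector3 a, Vector3 b, Vector3 c, Vector3 eye)
    {
        Vector3 normal = Vector3.Cross(b - a, c - a); // face normal from winding
        Vector3 toEye = eye - a;                      // triangle-to-viewer vector
        // Back-facing when the normal points away from the eye
        return Vector3.Dot(normal, toEye) <= 0;
    }
}
```

In practice you never write this yourself: you enable it in the rasterizer state (a cull-mode setting in SharpDX, glEnable(GL_CULL_FACE) in OpenGL). The sketch just shows why a flipped normal or reversed winding makes a polygon vanish.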

Graphics rendering pipeline

For API design, practical top-down problem solving, hardware design, and efficiency, rendering is described as a pipeline

This pipeline contains three conceptual stages:

Application → Geometry → Rasterizer

Application: produces material to be rendered

Geometry: decides what, how, and where to render

Rasterizer: renders the final image

Student Lecture: Rasterizer Stage

Rasterizer Stage

The goal of the Rasterizer Stage is to take all the transformed geometric data and set colors for all the pixels in the screen space

Doing so is called rasterization or scan conversion

Note that the word pixel is actually a portmanteau of "picture element"

More pipelines

As you should expect, the Rasterizer Stage is also divided into a pipeline of several functional stages:

Triangle Setup → Triangle Traversal → Pixel Shading → Merging

Triangle Setup

Data for each triangle is computed. This could include normals. This is boring anyway because fixed-operation (non-customizable) hardware does all the work.

Triangle Traversal

Each pixel whose center is overlapped by a triangle must have a fragment generated for the part of the triangle that overlaps the pixel

The properties of this fragment are created by interpolating data from the vertices

Again, boring, fixed-operation hardware does this
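To make the idea concrete, here is a minimal C# sketch of generating a fragment at a pixel center using barycentric weights, a hand-written stand-in for what the fixed-operation hardware does (all names are hypothetical):

```csharp
using System.Numerics;

static class TriangleTraversal
{
    // Edge function: positive when p is to the left of edge a->b
    // (for counter-clockwise triangles); equals twice the signed area.
    static float Edge(Vector2 a, Vector2 b, Vector2 p)
        => (b.X - a.X) * (p.Y - a.Y) - (b.Y - a.Y) * (p.X - a.X);

    // If pixel center p is inside screen-space triangle (a, b, c), return a
    // color interpolated from the vertex colors; otherwise return null.
    public static Vector3? FragmentColor(
        Vector2 a, Vector2 b, Vector2 c, Vector2 p,
        Vector3 colorA, Vector3 colorB, Vector3 colorC)
    {
        float area = Edge(a, b, c);
        if (area == 0) return null;            // degenerate triangle
        float w0 = Edge(b, c, p) / area;       // barycentric weight of vertex a
        float w1 = Edge(c, a, p) / area;       // weight of vertex b
        float w2 = Edge(a, b, p) / area;       // weight of vertex c
        if (w0 < 0 || w1 < 0 || w2 < 0) return null; // pixel outside triangle
        return w0 * colorA + w1 * colorB + w2 * colorC;
    }
}
```

Dividing by the triangle's area makes the three weights sum to 1, which is what makes the interpolated value exact at each vertex.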

Pixel Shading

This is where the magic happens. Given the data from the other stages, per-pixel shading (coloring) happens here.

This stage is programmable, allowing for many different shading effects to be applied

Perhaps the most important effect is texturing or texture mapping
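As a sketch of the kind of computation a programmable pixel shader performs, here is a per-fragment Lambertian diffuse shade written in plain C# rather than a shader language (the lighting model and names are illustrative assumptions):

```csharp
using System;
using System.Numerics;

static class PixelShading
{
    // Lambertian diffuse: scale the surface color by the cosine of the
    // angle between the surface normal and the direction to the light.
    public static Vector3 ShadeDiffuse(Vector3 surfaceColor,
                                       Vector3 normal, Vector3 lightDir)
    {
        float nDotL = Vector3.Dot(Vector3.Normalize(normal),
                                  Vector3.Normalize(lightDir));
        return Math.Max(0f, nDotL) * surfaceColor; // clamp light from behind
    }
}
```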

Texturing

Texturing is gluing a (usually) 2D image onto a polygon. To do so, we map texture coordinates onto polygon coordinates. Pixels in a texture are called texels. This is fully supported in hardware. Multiple textures can be applied in some cases.
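A rough sketch of the coordinate mapping, using nearest-neighbor texel lookup (the array layout and names are assumptions for illustration; real hardware also filters between texels):

```csharp
using System;

static class Texturing
{
    // Nearest-neighbor lookup: map (u, v) in [0, 1] to a texel index.
    // 'texture' is a width x height image of packed colors.
    public static uint SampleNearest(uint[,] texture, float u, float v)
    {
        int width = texture.GetLength(0);
        int height = texture.GetLength(1);
        int x = Math.Clamp((int)(u * width), 0, width - 1);
        int y = Math.Clamp((int)(v * height), 0, height - 1);
        return texture[x, y];
    }
}
```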

Merging

The final screen data containing the colors for each pixel is stored in the color buffer

The merging stage is responsible for merging the colors from each of the fragments from the pixel shading stage into a final color for a pixel

Deeply linked with merging is visibility: The final color of the pixel should be the one corresponding to a visible polygon (and not one behind it)

Z-buffer

To deal with the question of visibility, most modern systems use a Z-buffer or depth buffer

The Z-buffer keeps track of the z-values for each pixel on the screen

As a fragment is rendered, its color is put into the color buffer only if its z value is closer than the current value in the z-buffer (which is then updated)

This is called a depth test
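A minimal sketch of the depth test in plain C#, assuming parallel color and depth buffers indexed per pixel and the convention that smaller z means closer (names hypothetical):

```csharp
static class DepthTest
{
    // Write a fragment only if it is closer than what is already stored.
    // Convention: smaller z is closer; zBuffer starts each frame filled
    // with float.MaxValue.
    public static void WriteFragment(
        float[] zBuffer, uint[] colorBuffer, int pixelIndex,
        float fragmentZ, uint fragmentColor)
    {
        if (fragmentZ < zBuffer[pixelIndex]) // the depth test
        {
            zBuffer[pixelIndex] = fragmentZ;         // update stored depth
            colorBuffer[pixelIndex] = fragmentColor; // update stored color
        }
    }
}
```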

Pros and cons of the Z-buffer

Pros: Polygons can usually be rendered in any order. Universal hardware support is available.

Cons: Partially transparent objects must be rendered in back-to-front order (painter's algorithm). Completely transparent values can mess up the z-buffer unless they are checked. Z-fighting can occur when two polygons have the same (or nearly the same) z values.

More buffers

A stencil buffer can be used to record a rendered polygon. It stores the part of the screen covered by the polygon and can be used for special effects.

Frame buffer is a general term for the set of all buffers.

Different images can be rendered to an accumulation buffer and then averaged together to achieve special effects like blurring or antialiasing.

A back buffer allows us to render off screen to avoid popping and tearing
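As a sketch of the back-buffer idea, a minimal double-buffering swap in plain C# (resolution and names are illustrative assumptions):

```csharp
static class DoubleBuffering
{
    const int Width = 640, Height = 480;                   // illustrative size
    static uint[] frontBuffer = new uint[Width * Height];  // shown on screen
    static uint[] backBuffer = new uint[Width * Height];   // drawn off screen

    public static void PresentFrame()
    {
        // ... rendering writes pixels into backBuffer here ...

        // Swap once the frame is complete, so the display never shows a
        // half-drawn image (no popping or tearing).
        (frontBuffer, backBuffer) = (backBuffer, frontBuffer);
    }
}
```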

Final words on the pipeline

This pipeline is focused on interactive graphics

Micropolygon pipelines are usually used for film production

Predictive rendering applications usually use ray tracing renderers

The old model was the fixed-function pipeline which gave little control over the application of shading functions

The book focuses on programmable GPUs which allow all kinds of tricks to be done in hardware

Upcoming

Next time…

GPU architecture

Programmable shading

Reminders

Read Chapter 3

Start on Assignment 1, due next Friday, January 30 by 11:59

Keep working on Project 1, due Friday, February 6 by 11:59