The Graphics Pipeline - Texas Tech University

The Graphics Pipeline: Geometric Operations 3D Computer Graphics by Alan Watt Third Edition, Pearson Education Limited, 2000


Page 1: The Graphics Pipeline - Texas Tech University

The Graphics Pipeline:

Geometric Operations

3D Computer Graphics by Alan Watt, Third Edition, Pearson Education Limited, 2000

Page 2: The Graphics Pipeline - Texas Tech University


Graphics Pipeline

The graphics pipeline is a progression through various three-dimensional spaces.
Convert from 3D to 2D: the 3D scene is mapped into a two-dimensional space - a raster grid or bitmap.
Find the color of the pixel.

3D Space

2D Space

Local Space

World Space

View Space

3D Screen Space

Display or Image Space

Modeling Transformation

View Transformation

28-Sep-07 3D Computer Graphics, Alan Watt

Presenter
Presentation Notes
The Graphics Pipeline is the general operational procedure graphics applications use to render an image on your screen or to a file. This pipeline is a progression through a series of operational spaces: Local Space, World Space, View Space, Screen Space, and Display or Image Space. (Click01) The purpose of this progression is to take the three-dimensional description of an object - a cylinder in this image - and map it into a two-dimensional representation, such as a raster grid or bitmap. (Click02) The ultimate objective is to find the color and intensity of a pixel, either for projection onto your screen or as an instance in a bitmap.
Page 3: The Graphics Pipeline - Texas Tech University


Graphics Pipeline Processes
Geometric and Algorithmic processes

Geometric processes
Operations on vertices.
Transforming the vertices from one coordinate space to another (multiplying matrices).
Discarding polygons that cannot be seen from the viewpoint.
Geometric processes are done by the CPU.

Algorithmic processes
Shading and texture mapping.
More costly than geometric processes.
Supported by 3D cards and GPUs.


Presenter
Presentation Notes
Watt has identified two general graphics pipeline processes: Geometric and Algorithmic. Geometric processes involve operations on vertices. These processes transform the vertices of an object from one coordinate space to another, primarily by multiplying matrices. These geometric procedures are done by the CPU and RAM of your computer. Algorithmic processes involve the rendering of the three-dimensional objects and include operations such as shading and texture mapping. These processes are much more costly than geometric processes and are often supported by 3D graphics cards. These operations are done by your CPU, but are extensively influenced by the amount of RAM you have and your graphics card. Many of these operations are now being moved to Graphics Processing Units, or GPUs, located on board the graphics card in order to speed up the rendering process.
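The matrix multiplication the notes describe can be sketched in a few lines. This is a minimal illustration, not code from Watt; the translation matrix and sample vertex are invented for the example.

```python
# A sketch of the geometric process: moving a vertex between coordinate
# spaces by multiplying a 4x4 transformation matrix with a homogeneous
# vertex. Matrix and vertex values here are illustrative only.

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a homogeneous 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translate(tx, ty, tz):
    """Build a 4x4 translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

# Move a vertex given in local coordinates into another space:
local_vertex = [1.0, 2.0, 3.0, 1.0]        # homogeneous (x, y, z, 1)
moved = mat_vec(translate(10, 0, -5), local_vertex)
```

Rotation, scaling, and the view transformation all take this same matrix-times-vertex form, which is why the pipeline's geometric stages compose so cleanly.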
Page 4: The Graphics Pipeline - Texas Tech University


Graphics Pipeline

Local Space

World Space

View Space

3D Screen Space

Display or Image Space

Modeling Transformation

View Transformation


Local Space
Local Coordinates - geometric
Object Definition - geometric

World Space
World Coordinates - geometric
Compose Scene - geometric
Define View - geometric
Define Lighting - geometric
Animation - geometric
Radiosity Rendering - algorithmic

View Space
View Coordinates
View Volume - geometric
Culling - geometric
Clip to 3D - geometric/algorithmic
Back-Face Elimination - geometric

3D Screen Space
Screen Space Coordinates
View Transformation - geometric
Clipping - algorithmic
Shading - algorithmic
Rasterization - algorithmic
Hidden Surface Removal - algorithmic

Presenter
Presentation Notes
The figure on the right shows this progression of operational spaces through the graphics pipeline. On the left are the key elements of each operational space. Notice in the list on the left that each space has a different coordinate system and operations that are specific to it. A 'space' as used here refers to a unique coordinate system and the attributes specific to a set of operations done in that 'space'. For instance, World Space refers to a 'space' defined by the world coordinate system and the composition of the objects in relation to this coordinate system, as well as those attributes needed to define a view, lighting, and object motion (animation). 3D Computer Graphics, Alan Watt, Addison-Wesley.
Page 5: The Graphics Pipeline - Texas Tech University


Local Space

Local Coordinate System
Object Origin or Pivot Point
Polygon x,y,z & normals
Vertex x,y,z & normals


Presenter
Presentation Notes
The graphics pipeline begins in Local Space. Local Space refers to the objects in your scene, their local coordinates, and attributes specific to each object. Local Space is defined by a Local Coordinate System. As we discussed in previous lessons, there are different coordinate systems used in computer graphics for different purposes. Local coordinate systems are specific to each object and have their own origin. This point is called the pivot point. In Local Space, geometric information about the object is stored in the local coordinates of the object, such as its pivot point, the x,y,z coordinates of the polygons and vertices, and their normals. This is the first step of the rendering pipeline; geometric information is calculated here for each object in the scene that will be needed later in the process.
Page 6: The Graphics Pipeline - Texas Tech University


Local Space

Local Coordinates
Ease of modeling
Object Definition

Local Space

World Space

View Space

3D Screen Space

Display or Image Space

Modeling Transformation


Presenter
Presentation Notes
Local Space also uses this Local Coordinate System for ease of modeling and manipulation of an object. It only "makes sense to store the vertices of a polygon mesh object with respect to some point located in or near the object." When we translate, rotate, or scale the vertices of a polygon mesh we often do this in relation to the object, not to the World Coordinate System. Watt, p. 143
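The point the notes make - transforming vertices in relation to the object rather than the world origin - can be sketched as a rotation about the pivot point. This is an invented illustration, not code from the source; the pivot and vertex values are arbitrary.

```python
import math

# A sketch of manipulating a vertex relative to an object's pivot point:
# translate into pivot-local coordinates, rotate, translate back.
# Pivot and vertex values are invented for illustration.

def rotate_z_about_pivot(vertex, pivot, angle):
    """Rotate (x, y, z) about the z-axis passing through the pivot point."""
    x, y, z = (vertex[i] - pivot[i] for i in range(3))   # to pivot-local
    c, s = math.cos(angle), math.sin(angle)
    xr, yr = x * c - y * s, x * s + y * c                # rotate in xy-plane
    return (xr + pivot[0], yr + pivot[1], z + pivot[2])  # back to original frame

# A 90-degree turn about a pivot at (1, 1, 0):
p = rotate_z_about_pivot((2.0, 1.0, 0.0), (1.0, 1.0, 0.0), math.pi / 2)
```

Rotating about the world origin instead would also move the object away from its position, which is exactly why the pivot-relative form is preferred for modeling.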
Page 7: The Graphics Pipeline - Texas Tech University


World Space

World Coordinate System
Compose Scene
Camera Location
Define Lighting
Animation
Surface Attributes

Local Space

World Space

View Space

3D Screen Space

Display or Image Space

Modeling Transformation


Presenter
Presentation Notes
World Space is where the objects' relative spatial relationships are defined. This is useful in animating, lighting, and texturing the objects in the scene. The act of placing an object in a scene defines the transformation required to take the object from Local Space to World Space. The camera location or view is defined in World Space. Lights and their relation to objects are established here, and their lighting normals are adjusted. The animation of objects and their collisions with each other are determined in World Space. In addition, surface attributes of the object, such as texture, color, etc., are specified and tuned. Watt, p. 143
Page 8: The Graphics Pipeline - Texas Tech University


World Space

World Coordinate System
Scene's x,y,z
Global origin
Local Coordinate Transformations


Presenter
Presentation Notes
As the objects are modeled they are composed into a scene. The coordinate system of the scene is known as the 'world coordinate system'. The controlling point of the scene is referred to as the global origin. All objects' local coordinates are transformed into world coordinates and become a part of World Space. This transformation of all the objects into a common coordinate system is critical to the role of World Space in the graphics pipeline.
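Scene composition can be sketched very simply: each object's local vertices are carried into the shared world coordinate system by that object's placement. A minimal sketch with invented object data; a real modeling transformation would be a full 4x4 matrix, not just a translation.

```python
# A sketch of composing a scene: two copies of the same locally-modeled
# object placed at different world positions. Object data is invented.

def to_world(local_vertices, position):
    """Translate an object's local vertices by its world-space position."""
    return [tuple(v[i] + position[i] for i in range(3)) for v in local_vertices]

# A unit quad modeled at its own local origin, placed twice in one scene:
quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
scene = to_world(quad, (5, 0, 0)) + to_world(quad, (-5, 0, 0))
# All eight vertices now share the world coordinate system.
```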
Page 9: The Graphics Pipeline - Texas Tech University


View Space

Beginning of the conventional rendering process
Viewpoint
Viewing direction
View volume

Object Space

World Space

View Space

3D Screen Space

Display or Image Space

View Transformation


C

Presenter
Presentation Notes
View Space is the beginning of the conventional rendering process, because it is here that what we render is determined. View Space contains the view coordinate system and the viewing parameters of the scene. The eye or camera location is the origin of this view coordinate system, noted as C in this image. The view coordinate system is used to establish viewing parameters such as the viewpoint, viewing direction, view plane, and view volume.
Page 10: The Graphics Pipeline - Texas Tech University

View Space

Viewpoint 'C'
Viewing Direction 'N'
View Plane
Normal to 'N'
'd' distance from 'C'
View Coordinate System
Origin 'C'
U,V axes
Third axis N

[Figure: the view plane U,V at distance d along N from viewpoint C, shown relative to the World Coordinate System (WCS), with the resulting view volume]


Presenter
Presentation Notes
The viewing coordinate system for any given view space begins at point C, the location of the camera in relationship to World Space. The direction the camera is pointing is defined by a vector 'N'. (Click01) A view plane with the coordinates U,V is established perpendicular, or normal, to the vector 'N'. The view plane's dimensions may be fixed or may vary with the plane's distance from the point 'C'. (Click02) The vertical and horizontal axes of the view coordinate system originate at C and are parallel to the view plane. The third axis is the normal 'N'. If we expressed this as a left-hand coordinate system, our thumb would point in the direction U, our index finger would point in the direction V, and our third finger would point in the direction of the normal N. (Click03) The viewing system for a View Space consists of a camera location 'C', an associated view plane, and the resulting view volume defined by the projection of vectors from 'C' through each of the corners of the view plane.
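The geometry described above - a viewpoint C, a unit view direction N, and a view plane a distance d along N whose U and V axes are perpendicular to N - can be checked with a few vector operations. This is a hedged sketch with invented values; the cross product is one common way to obtain the second in-plane axis.

```python
# A sketch of the view setup: viewpoint C, unit view direction N, and a
# view plane at distance d along N. All concrete values are invented.

def add(a, b):    return tuple(a[i] + b[i] for i in range(3))
def scale(v, s):  return tuple(v[i] * s for i in range(3))
def dot(a, b):    return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

C = (0.0, 0.0, 0.0)                  # viewpoint
N = (0.0, 0.0, 1.0)                  # unit view direction
d = 2.0
plane_center = add(C, scale(N, d))   # center of the view plane

# U and V span the view plane; both must be normal to N.
U = (1.0, 0.0, 0.0)
V = cross(N, U)                      # perpendicular to both N and U
assert dot(U, N) == 0 and dot(V, N) == 0
```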
Page 11: The Graphics Pipeline - Texas Tech University


View Space

View Coordinate System
Camera, Eye or View
'first person'
'third person'

View coordinates
Viewpoint = C
Viewing direction = N
View Plane = U,V
VCS = Xu, Yv, Zn

[Figure: the view coordinate system with origin C, axes Xu, Yv, Zn (Zn along the normal N), the view plane U,V, and the frustum defining the view volume]

Presenter
Presentation Notes
A View Space therefore is defined by: a viewpoint C, establishing the viewer's position in World Space and the origin of the view coordinate system; a view direction vector N; a view plane U,V that establishes the two-dimensional projection plane representing your screen or a bitmap image; a view coordinate system whose origin is C, whose horizontal and vertical axes are Xu and Yv, and whose third axis Zn is the normal N to the view plane; a view frustum created by extending vectors from C through the corner points of the view plane; and a view volume defined by the frustum and the view plane.
Page 12: The Graphics Pipeline - Texas Tech University

View Space

View volume
View normal (N)
View Plane (U,V)
Frustum
Far Clipping Plane
Near Clipping Plane
View Volume

[Figure: three stages of the view volume - the frustum extending from the view plane along N, then bounded by a far clipping plane, then also by a near clipping plane]

Presenter
Presentation Notes
The View Volume defines the area in which objects in a scene can be seen by the viewer. It is sometimes referred to as the 'field of view'. The view normal N and the view plane U,V establish a four-sided pyramid of vision called the frustum. (Click01) The View Volume may also have a Far Clipping Plane. The frustum is technically infinite, but often it is advantageous to set a limit on its extents. The far clipping plane sets the farthest limit from the viewpoint C at which objects may be seen. It may also be useful to set a near limit. (Click02) Therefore, a Near Clipping Plane may be used to establish another limit on the extents of the frustum. Often this near clipping plane defaults to the view plane. (Click03) The View Volume is then defined by the frustum and its near and far clipping planes. Any object that falls inside or crosses this View Volume is seen by the viewer.
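The effect of the near and far clipping planes can be sketched as a simple depth test. This is an invented illustration (the distances are arbitrary), and it deliberately checks only depth along the view normal; the four side planes of the frustum would need their own tests.

```python
# A sketch of near/far clipping: a point is kept only if its depth along
# the view normal lies between the two clipping planes. Values invented.

def inside_depth_range(z_depth, near, far):
    """Keep a point only if its depth lies between the clipping planes."""
    return near <= z_depth <= far

near, far = 1.0, 100.0
assert inside_depth_range(50.0, near, far)       # visible
assert not inside_depth_range(500.0, near, far)  # beyond the far plane
assert not inside_depth_range(0.5, near, far)    # in front of the near plane
```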
Page 13: The Graphics Pipeline - Texas Tech University

View Space

View Coordinate System
Camera or Views
'first person'
'third person'
orthographic


Presenter
Presentation Notes
The view coordinate system may be used to represent a number of different cameras or views. Among the most common are a first person perspective view, (Click01) a third person perspective view, and (Click02) orthographic views, such as a Plan View, Front View, and Right View.
Page 14: The Graphics Pipeline - Texas Tech University


View Space

View Space Operations: Culling
Remove objects that need not be rendered
View Volume
Bounding Box
In, Out, or Clipping
Far-Clipping/Fog
Clipping against the View Volume
Discard as many polygons as possible


Presenter
Presentation Notes
(a) There are three very significant and important operations done in View Space, using the view coordinate system and the view volume. The first of these operations is called culling. Culling is a test to determine which objects are seen and which are not. This simple test is carried out using polygon/plane intersection calculations. The View Volume represents the area of the scene that the camera or eye can see, so if we determine an object is outside of the view volume it cannot be seen and therefore need not be rendered. This culling test is done using the bounding box of an object. If the object's bounding box is outside of the view volume, the object is not considered for rendering, such as the teapot in this image. If the object's bounding box is within the View Volume, then we know it should be rendered, such as the cylinder. This simple test discards a number of polygons from the rendering process and therefore can significantly decrease the time to render a scene. The efficiency of this operation can be further enhanced by introducing a far clipping plane. (b) Objects within the view frustum but very far away may not be significant enough to render. By introducing a far clipping plane we can further reduce the extents of the view volume and exclude those objects. This is a common technique used in early first person video games. Sometimes in a video game an object in the background will 'pop' into the scene. This is because the viewpoint finally moved the view volume to include that object and it became renderable. To prevent this popping, video games would often introduce a fog in the background so that objects would just appear out of the fog rather than pop into the scene. An example of an object excluded by the far clipping plane of the view volume is the cone object in this image. (c) During the culling process, if an object is neither in nor out of the View Volume - like the cube in this image - it is set aside to be clipped.
The procedure is called 'clipping against the view volume', but the calculations involved are more efficiently carried out in three-dimensional screen space, so the objects are just flagged to be clipped later. The whole purpose of culling and clipping is to discard as many polygons as possible so we do not have to waste rendering time on objects that will never be seen.
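The in/out/clip classification described above can be sketched as a bounding-box test. This is a hedged illustration with invented distances, and for brevity the "view volume" here is only the depth range between the near and far planes; a full culling test would also check the four side planes of the frustum.

```python
# A sketch of bounding-box culling: classify an object's depth extent as
# fully in, fully out, or straddling the view volume. Values invented.

def classify(bbox_zmin, bbox_zmax, near, far):
    """Return 'in', 'out', or 'clip' for a bounding box's depth extent."""
    if bbox_zmax < near or bbox_zmin > far:
        return 'out'                       # discard entirely (the teapot)
    if near <= bbox_zmin and bbox_zmax <= far:
        return 'in'                        # render as-is (the cylinder)
    return 'clip'                          # flag for clipping later (the cube)

assert classify(10, 20, 1, 100) == 'in'
assert classify(150, 200, 1, 100) == 'out'
assert classify(90, 120, 1, 100) == 'clip'
```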
Page 15: The Graphics Pipeline - Texas Tech University


View Space

View Space Operations: Back-Face Elimination
The second step in rendering
Polygon normals facing away from the View Plane

Image 1    Image 2

Presenter
Presentation Notes
(a) Back-face elimination is another operation used to discard polygons that need not be rendered. It is sometimes also called back-face culling, but for this discussion we will refer to it as Back-Face Elimination. The previous culling and clipping operations determined what objects were in the scene; back-face elimination determines whether the camera or eye will see a polygon even though it is within the view volume. Back-face elimination is a simple test to determine if a polygon is turned away from the camera. This is done by comparing, via the dot product, the view direction normal N to the normal of every polygon. If the angle between them is less than 90 degrees, the surface is renderable. If it is greater than 90 degrees, then the surface's 'back' is turned towards us and it should not be rendered. Let's look at how this works. (b) In Image 1 you can see a simple scene that includes five objects. Focus your attention on the two cubes, the view plane, the frustum, and the viewpoint shown in this image. Image 2 is the view from the viewpoint shown in Image 1; therefore the view's direction normal N is pointing in and out of the image, perpendicular to the surface of the slide. The normal of the red surface on the cube to the right points out from the surface, to our right and towards us as viewers. The angle between N and its normal would be less than 90 degrees, therefore the computer is told "this is a renderable surface". Whereas the two surfaces on the back side of this cube have normals facing away from the viewer. The angle between N and these surface normals would be greater than 90 degrees, and therefore these surfaces are not rendered. (c) The two cubes in this view actually have the same number of surfaces. The difference is that the cube on the left has all of its surface normals facing into the center of the cube rather than out from the center. For instance, the normal of that cube's top surface is facing down and away from the viewer.
The angle between N and this surface normal is greater than 90 degrees, therefore it is eliminated from being rendered because its back face is towards the viewer. This is the result of Back-Face Elimination. "Half of the polygons in a polyhedron are back-facing and the advantage of this process is that a simple test removes the polygons from consideration instead of having to use the more expensive hidden surface removal algorithm." [Watt, 3D Computer Graphics, p. ] There are times when this operation may become visually confusing. There are methods to force the computer to render the back face of a polygon, but one should be very careful about implementing these procedures. If not done with discretion, the rendering time of a scene can be significantly increased.
Page 16: The Graphics Pipeline - Texas Tech University

[Figure: a 3D model mapped to a 2D raster grid]


3D Screen Space
Bulk of the rendering work
What color is this pixel?
Geometry Operations

View Transformation
Calculate Z-depth
Z-Buffer analysis

Object Space

World Space

View Space

3D Screen Space

Display or Image Space


Presenter
Presentation Notes
Remember, the whole purpose of the Graphics Pipeline is to convert a three-dimensional model of an object into a two-dimensional raster grid representation, (Click01) whether the raster grid is the dots on your screen or the pixels in a bitmap. 3D Screen Space is where the majority of this rendering process begins - converting geometry into pixels. (Click02) 3D Screen Space consists of the two-dimensional coordinates of the view plane and a third coordinate along the normal of the view plane. (Click03) There are two important geometric operations done in 3D Screen Space: the View Transformation and the Z-depth calculation of vertices, edges, and polygons.
Page 17: The Graphics Pipeline - Texas Tech University


3D Screen Space

Projection into Screen Space


[Figure 3.0 - Parallel Projection: projectors from vertices V1 and V2 are parallel and normal to the view plane (screen space)]

[Figure 1.0 - Perspective Projection: projectors from vertices V1 and V2 converge at the center of projection through the view plane (screen space)]

[Figure 2.0 - Perspective Projection: equal-length lines L1 and L2 at different depths project to different lengths on the view plane]

Presenter
Presentation Notes
The first geometric operation in 3D Screen Space is the View Transformation. The View Transformation is the process of converting the geometry in the scene from a perspective projection to a parallel projection. Most views of a scene are set up to be perspective views, which use perspective projection to determine what the viewer sees. Perspective projection is characterized by a point known as the center of projection, as seen in Figure 1. All projectors of vertices project to the center of projection. The projection of a three-dimensional line onto the view plane is determined by the intersection of the projectors from each of the line's vertices with the view plane, as seen in Figure 1. (Click01) Because of the perspective projection, a line parallel to the view plane will appear to be smaller than its true size. Lines L1 and L2 in Figure 2 are the same length, but L1 is farther from the view plane than L2, so it is projected onto the view plane as shorter. (Click02) The difference in parallel projection is that the projectors are normal, or perpendicular, to the view plane instead of projecting to a center of projection. The first step in 3D Screen Space is to convert the geometry in the scene from a perspective projection to a parallel projection. This operation is called the View Transformation.
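The foreshortening described above follows from similar triangles: a point at depth z projects onto a view plane at distance d from the center of projection, scaled by d/z. A minimal sketch with invented coordinates; it assumes the center of projection is at the origin and the view plane is perpendicular to the z-axis.

```python
# A sketch of perspective projection by similar triangles: a point (x, y, z)
# projects onto the plane z = d as (x*d/z, y*d/z). Values invented.

def project(point, d):
    """Project (x, y, z) onto the view plane z = d through the origin."""
    x, y, z = point
    return (x * d / z, y * d / z)

d = 1.0
near_end = project((2.0, 0.0, 2.0), d)   # same x-offset, depth 2
far_end  = project((2.0, 0.0, 4.0), d)   # same x-offset, twice as deep
# The deeper point projects closer to the center: equal offsets from the
# view axis appear smaller the farther they are from the view plane.
```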
Page 18: The Graphics Pipeline - Texas Tech University

View Transformation of a box from view space into 3D screen space.

Allows parallel projection rays for vertices.


View Volume and Z-Depth

View Space


3D Screen Space

[Figure: a cube in View Space and the transformed cube after the View Transformation into 3D Screen Space]

Presenter
Presentation Notes
To make the rendering procedure simpler to calculate and manage, the vertices and edges of polygons must be transformed from View Space into 3D Screen Space. Each object's geometry is transformed so that the projection of its vertices into 3D Screen Space is a parallel projection rather than a perspective projection. By converting the projection of all vertices to a parallel projection, hidden surface calculations need only be done on those vertices that have the same x,y coordinates.
Page 19: The Graphics Pipeline - Texas Tech University


3D Screen Space

Calculation of Z-depths
Perspective distortion of Z-depths
Z-depths stored in the Z-buffer


[Figure: the 3D Screen Space coordinate system (Xs, Ys, Zs) with the view plane, the viewpoint C, and vertices V1 and V2 whose Z-depths are measured along the Zs axis]

Presenter
Presentation Notes
In order to perform hidden surface calculations, Z-depth information for the vertices, edges, and polygons must be generated. It is in 3D Screen Space that these geometric calculations are done. The 3D Screen Space coordinate system consists of the view plane axes Xs and Ys, as seen in the figure. The Zs coordinate goes back into the scene towards the objects. 3D Screen Space differs from the view plane because of this third, Z coordinate. The depth of each vertex, edge, and polygon is calculated along the Zs axis. This information is stored in the Z-buffer. The Z-depths in the Z-buffer are then used to calculate the intersection and overlap of objects to determine hidden surface removal. (Click01) These two images (Wikipedia) provide a good visual example of this concept. The top image is of a simple three-dimensional scene. The bottom image is a black-and-white map of the Z-buffer of this scene. Each pixel in this map represents the depth of an object in the scene at the location of that specific pixel. The darkest pixels represent objects closest to the view plane and the lightest pixels are farthest from the view plane.
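The Z-buffer comparison described above can be sketched as a per-pixel depth test: a fragment is written only if it is nearer than whatever the buffer already holds at that pixel. The buffer size, colors, and depths here are invented for illustration.

```python
# A sketch of Z-buffer hidden surface removal. Each pixel keeps the depth
# of the nearest surface drawn so far; farther fragments are discarded.
# Buffer size and fragment data are invented for illustration.

WIDTH, HEIGHT = 4, 4
z_buffer = [[float('inf')] * WIDTH for _ in range(HEIGHT)]  # start "infinitely far"
frame    = [[None] * WIDTH for _ in range(HEIGHT)]          # output colors

def plot(x, y, z_depth, color):
    """Write the fragment only if it is nearer than the stored depth."""
    if z_depth < z_buffer[y][x]:
        z_buffer[y][x] = z_depth
        frame[y][x] = color

plot(1, 1, 10.0, 'red')    # first surface at this pixel
plot(1, 1, 25.0, 'blue')   # farther surface: hidden, discarded
plot(1, 1, 5.0, 'green')   # nearer surface: overwrites
# frame[1][1] is now 'green'
```

Read as a grayscale image, this buffer is exactly the black-and-white depth map the notes describe: small depths (near surfaces) dark, large depths light.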
Page 20: The Graphics Pipeline - Texas Tech University


Graphics Pipeline

Local Space
Local Coordinates - geometric
Object Definition - geometric

World Space
World Coordinates - geometric
Compose Scene - geometric
Define View - geometric
Define Lighting - geometric
Animation - geometric
Radiosity Rendering - algorithmic

View Space
View Coordinates
View Volume - geometric
Culling - geometric
Clip to 3D - geometric/algorithmic
Back-Face Elimination - geometric

3D Screen Space
Screen Space Coordinates
View Transformation - geometric
Clipping - algorithmic
Shading - algorithmic
Rasterization - algorithmic
Hidden Surface Removal - algorithmic

Local Space

World Space

View Space

3D Screen Space

Display or Image Space

Modeling Transformation

View Transformation


Presenter
Presentation Notes
This slide is a summary of the different spaces within the Graphics Pipeline and the operations that take place in each of these spaces. The operations we have discussed in this lecture, such as view volume, culling, back-face elimination, and view transformation, are all geometric operations having to do with the manipulation of the geometry in the scene. The real rendering operations of the pipeline are the algorithmic operations. The next lecture will discuss those. Also, notice there was no discussion of the last space - Display or Image Space. We will discuss this when we get to output devices.